Building the Base Environment for a k8s v1.15.0 HA Cluster

Overview: Setting up the base environment for a highly available Kubernetes cluster. Hands-on practice is essential when learning Kubernetes, but the very first step of standing up an environment is often the barrier. And minikube is of little help for studying the cluster level itself, so you might as well build the real thing in one go.

This is my first blog post on the Yunqi community, so I'm testing the waters with the setup procedure I put together for a highly available Kubernetes cluster base environment. :)
A quick summary of the configuration: kube-proxy runs in ipvs mode, the network plugin is Calico, etcd runs as a stacked cluster, and the apiserver is load-balanced with haproxy + keepalived.

My GitHub repo: https://github.com/JackWBC/k8s_v1.15.0_HA_cluster

Preparing the VM Environment

OS: CentOS 7 x86_64 minimal (at least 2 CPU cores and 2 GB RAM recommended)
NIC: ens33 (the name varies between machines; wherever it appears in this guide, substitute your own)
3 masters and 3 nodes, with domains and IPs as follows:

Role   Domain       IP
master master1.k8s 192.168.250.141
master master2.k8s 192.168.250.142
master master3.k8s 192.168.250.143
node node1.k8s 192.168.250.144
node node2.k8s 192.168.250.145
node node3.k8s 192.168.250.146
Virtual IP (VIP) -- 192.168.250.99
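
The rest of this guide addresses every machine by its domain name, but nothing above makes those names resolvable. A minimal sketch, assuming you have no DNS serving these names: set each machine's hostname and list all six hosts in /etc/hosts on every machine.

# Run on each machine with its own name, e.g. on the first master:
hostnamectl set-hostname master1.k8s

# Run on every machine:
cat >> /etc/hosts <<EOF
192.168.250.141 master1.k8s
192.168.250.142 master2.k8s
192.168.250.143 master3.k8s
192.168.250.144 node1.k8s
192.168.250.145 node2.k8s
192.168.250.146 node3.k8s
EOF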

Base VM Configuration

Run on all masters and nodes.

Fix the setLocale warning

cat <<EOF >  /etc/environment
LANG=en_US.UTF-8
LC_ALL=C
EOF

Stop and disable firewalld

systemctl stop firewalld.service && systemctl disable firewalld.service

Set SELinux to disabled mode

setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config

Disable swap

swapoff -a && sed -i '/ swap / s/^/#/' /etc/fstab
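
It is worth a quick check that swap really is off, since kubelet refuses to start with swap enabled (unless explicitly overridden):

free -m      # the Swap line should show 0 total
swapon -s    # should print nothing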

Configure sysctl

cat <<EOF > /etc/sysctl.conf
fs.file-max=1000000
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0
net.ipv4.tcp_max_tw_buckets = 6000
net.ipv4.tcp_sack = 1
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_rmem = 4096 87380 4194304
net.ipv4.tcp_wmem = 4096 16384 4194304
net.ipv4.tcp_max_syn_backlog = 16384
net.core.netdev_max_backlog = 32768
net.core.somaxconn = 32768
net.core.wmem_default = 8388608
net.core.rmem_default = 8388608
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_timestamps = 1
net.ipv4.tcp_fin_timeout = 20
net.ipv4.tcp_synack_retries = 2
net.ipv4.tcp_syn_retries = 2
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_mem = 94500000 915000000 927000000
net.ipv4.tcp_max_orphans = 3276800
net.ipv4.ip_local_port_range = 1024 65000
net.nf_conntrack_max = 6553500
net.netfilter.nf_conntrack_max = 6553500
net.netfilter.nf_conntrack_tcp_timeout_close_wait = 60
net.netfilter.nf_conntrack_tcp_timeout_fin_wait = 120
net.netfilter.nf_conntrack_tcp_timeout_time_wait = 120
net.netfilter.nf_conntrack_tcp_timeout_established = 3600
EOF
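
One caveat not shown above: the two bridge-nf-call keys only exist once the br_netfilter module is loaded, and the new file is not applied until sysctl re-reads it. A small follow-up that is commonly needed on a minimal CentOS 7 install:

modprobe br_netfilter
# load it on boot as well
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
# apply /etc/sysctl.conf now
sysctl -p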

Load the ipvs kernel modules

cat << EOF | tee /etc/sysconfig/modules/ipvs.modules
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4

Switch the yum repos to faster mirrors

mkdir /etc/yum.repos.d/bak
mv /etc/yum.repos.d/*.repo /etc/yum.repos.d/bak/
CentOS-Base.repo
vi /etc/yum.repos.d/CentOS-Base.repo

# CentOS-Base.repo
#
# The mirror system uses the connecting IP address of the client and the
# update status of each mirror to pick mirrors that are updated to and
# geographically close to the client.  You should use this for CentOS updates
# unless you are manually picking other mirrors.
#
# If the mirrorlist= does not work for you, as a fall back you can try the
# remarked out baseurl= line instead.
#
#
[os]
name=Qcloud centos os - $basearch
baseurl=http://mirrors.cloud.tencent.com/centos/$releasever/os/$basearch/
enabled=1
gpgcheck=1
gpgkey=http://mirrors.cloud.tencent.com/centos/RPM-GPG-KEY-CentOS-7

[updates]
name=Qcloud centos updates - $basearch
baseurl=http://mirrors.cloud.tencent.com/centos/$releasever/updates/$basearch/
enabled=1
gpgcheck=1
gpgkey=http://mirrors.cloud.tencent.com/centos/RPM-GPG-KEY-CentOS-7

[centosplus]
name=Qcloud centosplus - $basearch
baseurl=http://mirrors.cloud.tencent.com/centos/$releasever/centosplus/$basearch/
enabled=0
gpgcheck=1
gpgkey=http://mirrors.cloud.tencent.com/centos/RPM-GPG-KEY-CentOS-7

[cloud]
name=Qcloud centos contrib - $basearch
baseurl=http://mirrors.cloud.tencent.com/centos/$releasever/cloud/$basearch/openstack-kilo/
enabled=0
gpgcheck=1
gpgkey=http://mirrors.cloud.tencent.com/centos/RPM-GPG-KEY-CentOS-7

[cr]
name=Qcloud centos cr - $basearch
baseurl=http://mirrors.cloud.tencent.com/centos/$releasever/cr/$basearch/
enabled=0
gpgcheck=1
gpgkey=http://mirrors.cloud.tencent.com/centos/RPM-GPG-KEY-CentOS-7

[extras]
name=Qcloud centos extras - $basearch
baseurl=http://mirrors.cloud.tencent.com/centos/$releasever/extras/$basearch/
enabled=1
gpgcheck=1
gpgkey=http://mirrors.cloud.tencent.com/centos/RPM-GPG-KEY-CentOS-7

[fasttrack]
name=Qcloud centos fasttrack - $basearch
baseurl=http://mirrors.cloud.tencent.com/centos/$releasever/fasttrack/$basearch/
enabled=0
gpgcheck=1
gpgkey=http://mirrors.cloud.tencent.com/centos/RPM-GPG-KEY-CentOS-7
kubernetes.repo
vi /etc/yum.repos.d/kubernetes.repo

[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
docker-ce.repo
vi /etc/yum.repos.d/docker-ce.repo

[docker-ce-stable]
name=Docker CE Stable - $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/$basearch/stable
enabled=1
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-stable-debuginfo]
name=Docker CE Stable - Debuginfo $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/debug-$basearch/stable
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-stable-source]
name=Docker CE Stable - Sources
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/source/stable
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-edge]
name=Docker CE Edge - $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/$basearch/edge
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-edge-debuginfo]
name=Docker CE Edge - Debuginfo $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/debug-$basearch/edge
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-edge-source]
name=Docker CE Edge - Sources
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/source/edge
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-test]
name=Docker CE Test - $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/$basearch/test
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-test-debuginfo]
name=Docker CE Test - Debuginfo $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/debug-$basearch/test
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-test-source]
name=Docker CE Test - Sources
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/source/test
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-nightly]
name=Docker CE Nightly - $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/$basearch/nightly
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-nightly-debuginfo]
name=Docker CE Nightly - Debuginfo $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/debug-$basearch/nightly
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-nightly-source]
name=Docker CE Nightly - Sources
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/source/nightly
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg
Clear the yum cache and rebuild it
yum clean all
yum makecache

Install the Required Packages

  • Run on all nodes
yum install ipset -y

yum install ipvsadm -y

yum install -y docker-ce-18.09.7-3.el7
mkdir /etc/docker
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF
mkdir -p /etc/systemd/system/docker.service.d
systemctl daemon-reload
systemctl enable docker && systemctl restart docker

yum install -y kubelet kubeadm kubectl
# kubelet will crash-loop until kubeadm init/join supplies its configuration; this is expected
systemctl enable kubelet && systemctl start kubelet
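
Note that the unpinned install above pulls the newest packages in the repo, which may be newer than the v1.15.0 this guide targets. If you want the versions to match, a pinned install (package naming assumed to follow the usual kubernetes-el7 convention) would be:

yum install -y kubelet-1.15.0 kubeadm-1.15.0 kubectl-1.15.0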

Install haproxy + keepalived for HA

  • Configure haproxy and keepalived on all master nodes
mkdir /etc/haproxy
cat >/etc/haproxy/haproxy.cfg<<EOF
global
  log 127.0.0.1 local0 err
  maxconn 4096
  uid 99
  gid 99
  #daemon
  nbproc 1
  pidfile haproxy.pid

defaults
  mode http
  log 127.0.0.1 local0 err
  maxconn 4096
  retries 3
  timeout connect 5s
  timeout client 30s
  timeout server 30s
  timeout check 2s

listen admin_stats
  mode http
  bind 0.0.0.0:1080
  log 127.0.0.1 local0 err
  stats refresh 30s
  stats uri     /haproxy-status
  stats realm   Haproxy\ Statistics
  stats auth    baicheng:baicheng
  stats hide-version
  stats admin if TRUE

frontend k8s-https
  bind 0.0.0.0:8443
  mode tcp
  #maxconn 4096
  default_backend k8s-https

backend k8s-https
  mode tcp
  balance roundrobin
  server master1.k8s 192.168.250.141:6443 weight 1 maxconn 1000 check inter 2000 rise 2 fall 3
  server master2.k8s 192.168.250.142:6443 weight 1 maxconn 1000 check inter 2000 rise 2 fall 3
  server master3.k8s 192.168.250.143:6443 weight 1 maxconn 1000 check inter 2000 rise 2 fall 3
EOF
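
Before starting the proxy it can be worth validating the config file; haproxy's -c flag performs a syntax check only. A sketch using the same image as the run command below (assuming the mirrored image, like the official haproxy image, lets you override the command):

docker run --rm -v /etc/haproxy:/usr/local/etc/haproxy:ro \
registry.cn-shanghai.aliyuncs.com/baicheng_dev/haproxy:2.0.0 \
haproxy -c -f /usr/local/etc/haproxy/haproxy.cfg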

docker run -d --name my-haproxy \
-v /etc/haproxy:/usr/local/etc/haproxy:ro \
-p 8443:8443 \
-p 1080:1080 \
--restart always \
registry.cn-shanghai.aliyuncs.com/baicheng_dev/haproxy:2.0.0

# Note the NIC setting: change KEEPALIVED_INTERFACE to your own interface name
docker run --net=host --cap-add=NET_ADMIN -d \
-e KEEPALIVED_INTERFACE=ens33 \
-e KEEPALIVED_VIRTUAL_IPS="#PYTHON2BASH:['192.168.250.99']" \
-e KEEPALIVED_UNICAST_PEERS="#PYTHON2BASH:['192.168.250.141','192.168.250.142','192.168.250.143']" \
-e KEEPALIVED_PASSWORD=baicheng \
--name k8s-keepalived \
--restart always \
registry.cn-shanghai.aliyuncs.com/baicheng_dev/keepalived:2.0.16
  • Verify the haproxy and keepalived installation
# Check the container logs
docker logs my-haproxy
docker logs k8s-keepalived

# Ping the virtual IP
ping -c4 192.168.250.99

# View the haproxy status page (username baicheng, password baicheng)
http://master1.k8s:1080/haproxy-status
http://master2.k8s:1080/haproxy-status
http://master3.k8s:1080/haproxy-status
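
If you prefer checking from a shell instead of a browser, a quick probe (same credentials as in haproxy.cfg above), plus a look at which master currently holds the VIP:

# Fetch the stats page over HTTP
curl -u baicheng:baicheng http://master1.k8s:1080/haproxy-status

# Run on each master; exactly one should show the VIP on its NIC
ip addr show ens33 | grep 192.168.250.99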

Building the k8s Cluster Base Environment

  • Configure environment variables on all master nodes
vi .bash_profile
export CP0_IP="192.168.250.99"
export CP1_IP="192.168.250.141"
export CP1_HOSTNAME="master1.k8s"
export CP2_IP="192.168.250.142"
export CP2_HOSTNAME="master2.k8s"
export CP3_IP="192.168.250.143"
export CP3_HOSTNAME="master3.k8s"

source .bash_profile

# Verify it took effect
echo $CP0_IP
  • Run on master1
cd /etc/kubernetes

# Note: in the v1beta2 schema, controller-manager flags live under
# controllerManager.extraArgs, and the kube-proxy mode is set via a separate
# KubeProxyConfiguration document rather than a kubeProxy field.
cat >kubeadm-config.yaml<<EOF
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.15.0
controlPlaneEndpoint: "$CP0_IP:8443"
controllerManager:
  extraArgs:
    node-monitor-grace-period: 10s
    pod-eviction-timeout: 10s
networking:
  podSubnet: 10.244.0.0/16
imageRepository: registry.cn-shanghai.aliyuncs.com/baicheng_dev
clusterName: baicheng-k8s-cluster
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
EOF

# --experimental-upload-certs was renamed --upload-certs in v1.15
sudo kubeadm init --config=kubeadm-config.yaml --upload-certs
  • Using the init output, join the remaining masters and all worker nodes to the cluster (see the sketch after this list)
  • Also from the init output, configure kubectl so you can talk to the cluster
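
For reference, the commands printed by kubeadm init look roughly like the sketch below; the token, discovery hash, and certificate key here are placeholders and must be taken from your own init output:

# On the remaining masters (note --control-plane and --certificate-key):
kubeadm join 192.168.250.99:8443 --token <token> \
--discovery-token-ca-cert-hash sha256:<hash> \
--control-plane --certificate-key <cert-key>

# On the worker nodes:
kubeadm join 192.168.250.99:8443 --token <token> \
--discovery-token-ca-cert-hash sha256:<hash>

# To configure kubectl (also shown in the init output):
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config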
Verifying the installation
  • Run kubectl get nodes and confirm that every node has joined the cluster; all of them will show NotReady, which is expected until the network plugin is installed
  • Check the etcd cluster health
docker run --rm -it \
--net host \
-v /etc/kubernetes:/etc/kubernetes registry.cn-shanghai.aliyuncs.com/baicheng_dev/etcd:3.3.10 etcdctl \
--cert-file /etc/kubernetes/pki/etcd/peer.crt \
--key-file /etc/kubernetes/pki/etcd/peer.key \
--ca-file /etc/kubernetes/pki/etcd/ca.crt \
--endpoints https://${CP1_IP}:2379 cluster-health
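
The command above goes through the etcdctl v2 API, which is the default in etcd 3.3. An equivalent health check through the v3 API, should cluster-health be unavailable, is a sketch like:

docker run --rm -it \
--net host \
-e ETCDCTL_API=3 \
-v /etc/kubernetes:/etc/kubernetes registry.cn-shanghai.aliyuncs.com/baicheng_dev/etcd:3.3.10 etcdctl \
--cert /etc/kubernetes/pki/etcd/peer.crt \
--key /etc/kubernetes/pki/etcd/peer.key \
--cacert /etc/kubernetes/pki/etcd/ca.crt \
--endpoints https://${CP1_IP}:2379 endpoint health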

Configure the Calico Network Plugin

  • Run on any master node
cd /etc/kubernetes
mkdir calico && cd calico

vi kube-calico.yaml
# see kube-calico.yaml in my GitHub repo linked above

kubectl apply -f kube-calico.yaml
  • Run kubectl get po --all-namespaces and wait until every pod is Running and Ready
  • Then run kubectl get nodes again; every node should now be Ready (one more ipvs check follows below)
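
Since kube-proxy was configured for ipvs mode, it is also worth confirming that ipvs virtual servers were actually created (ipvsadm was installed earlier for exactly this). A quick check on any node:

# Should list virtual services, e.g. the kubernetes service at 10.96.0.1:443
ipvsadm -Ln

# kube-proxy also logs which proxier it chose
kubectl -n kube-system logs -l k8s-app=kube-proxy | grep -i ipvs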
With that, the base environment for the highly available k8s cluster is complete.
Thanks for reading!