Installing Kubernetes 1.9 with kubeadm

Summary:

Installing Kubernetes 1.9.0 with kubeadm

1. Install the RPM packages

yum localinstall -y kubeadm-1.9.0-0.x86_64.rpm kubectl-1.9.0-0.x86_64.rpm kubelet-1.9.0-0.x86_64.rpm kubernetes-cni-0.6.0-0.x86_64.rpm
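The original post does not show it, but with these packages the kubelet service is normally enabled before running kubeadm (it will crash-loop until kubeadm init writes its configuration, which is expected at this stage):

systemctl enable kubelet
systemctl start kubelet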

2. Adjust kernel parameters

Edit /etc/sysctl.conf and add the following:

net.ipv4.ip_forward=1
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1

Apply the changes immediately:

sysctl -p
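If sysctl -p complains that the net.bridge.* keys do not exist, the br_netfilter kernel module is not loaded yet; load it first and re-apply:

# load the bridge netfilter module, then re-read /etc/sysctl.conf
modprobe br_netfilter
sysctl -p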

3. Modify the kubelet configuration

kubelet and Docker each support two cgroup drivers, cgroupfs and systemd. Make sure both applications use the same driver.

3.1 If Docker uses cgroupfs, modify kubelet

vim /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

# change the cgroup driver from systemd to cgroupfs
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs"

# add a new line
Environment="KUBELET_EXTRA_ARGS=--v=2 --fail-swap-on=false --pod-infra-container-image=foxchan/google_containers/pause-amd64:3.0"

After making the changes, reload systemd:

systemctl daemon-reload

3.2 If Docker uses systemd, kubelet needs no changes

Alternatively, adding the following option to the Docker startup command switches Docker's cgroup driver to systemd:

--exec-opt native.cgroupdriver=systemd
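One hedged way to wire this in under systemd, assuming the stock unit's ExecStart is /usr/bin/dockerd (adjust to your unit file):

mkdir -p /etc/systemd/system/docker.service.d
cat > /etc/systemd/system/docker.service.d/cgroup-driver.conf <<'EOF'
[Service]
# clear the packaged ExecStart, then start dockerd with the systemd cgroup driver
ExecStart=
ExecStart=/usr/bin/dockerd --exec-opt native.cgroupdriver=systemd
EOF
systemctl daemon-reload
systemctl restart docker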

4. Install Kubernetes

kubeadm init --kubernetes-version=1.9.0 --token-ttl 0

Notes on the parameters

--token-ttl 0 keeps the bootstrap token from expiring (the default TTL is 24 hours). Because the images are hosted behind the Great Firewall, you can pull them in advance from a mirror and retag them.

Image list

  • gcr.io/google_containers/kube-apiserver-amd64:v1.9.0
  • gcr.io/google_containers/kube-controller-manager-amd64:v1.9.0
  • gcr.io/google_containers/kube-scheduler-amd64:v1.9.0
  • gcr.io/google_containers/pause-amd64:3.0
  • gcr.io/google_containers/etcd-amd64:3.1.10
  • gcr.io/google_containers/kube-proxy-amd64:v1.9.0
  • gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.7
  • gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.7
  • gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.7
  • quay.io/calico/node:v3.0.1
  • quay.io/coreos/etcd:v3.2.4
  • quay.io/calico/cni:v2.0.0
  • quay.io/calico/kube-controllers:v1.0.1

If you pulled the images from your own registry or from Docker Hub, you can use a script to batch-rewrite the image names:

docker images | sed 's/foxchan/gcr.io\/google_containers/'| awk '{print "docker tag "$3" "$1":"$2}'
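The script above only prints the docker tag commands. A hedged one-liner that filters and executes them, assuming the mirrored images sit directly under the foxchan/ prefix so the rewrite yields the gcr.io/google_containers names kubeadm expects:

docker images | grep '^foxchan/' | sed 's/foxchan/gcr.io\/google_containers/' | awk '{print "docker tag "$3" "$1":"$2}' | sh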

Installation output

[root@kvm-gs242024 ~]# kubeadm init --kubernetes-version=1.9.0 --token-ttl 0 --ignore-preflight-errors=all
[init] Using Kubernetes version: v1.9.0
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
        [WARNING FileExisting-crictl]: crictl not found in system path
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [kvm-gs024 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.0.24]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz' failed with error: Get http://localhost:10255/healthz: dial tcp 127.0.0.1:10255: getsockopt: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz' failed with error: Get http://localhost:10255/healthz: dial tcp 127.0.0.1:10255: getsockopt: connection refused.
[apiclient] All control plane components are healthy after 78.502690 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node kvm-gs242024 as master by adding a label and a taint
[markmaster] Master kvm-gs242024 tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: 1ac970.704ce2d03cc45382
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join --token 1ac970.704ce2d03cc45382 192.168.0.24:6443 --discovery-token-ca-cert-hash sha256:f70f07be83a7b2af2c41752b00def4389e3019006b3be643fe1ccf1c53368043

The output above shows how to start managing the cluster and how to join nodes.

Remember to save the token: in this version it cannot be recovered with a command, and without it you cannot join new nodes.

Verification on the master node

[root@kvm-master ~]# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health": "true"}

Node verification

[root@kvm-master ~]# kubectl get nodes
NAME         STATUS     ROLES     AGE   VERSION
kvm-node1    Ready      <none>    1h    v1.9.0
kvm-master   NotReady   master    18h   v1.9.0
kvm-node2    Ready      <none>    6m    v1.9.0

Command-line verification

curl --cacert /etc/kubernetes/pki/ca.crt --cert /etc/kubernetes/pki/apiserver-kubelet-client.crt --key /etc/kubernetes/pki/apiserver-kubelet-client.key https://k8smaster:6443

kubectl errors when managing the cluster

Unable to connect to the server: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

https://github.com/kubernetes/kubernetes/issues/48378

Workaround: export KUBECONFIG=/etc/kubernetes/kubelet.conf uses the node's own identity, which lacks permissions, so you will still see errors such as:

Error from server (Forbidden): daemonsets.extensions is forbidden: User "system:node:kvm-master" cannot list daemonsets.extensions in the namespace "default"

Use the admin credentials instead: export KUBECONFIG=/etc/kubernetes/admin.conf

For security reasons, pods are not scheduled onto the master node by default. The following command removes that restriction:

kubectl taint nodes --all node-role.kubernetes.io/master-
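To restore the default behavior later, re-apply the taint; a sketch using the master name from the output above:

kubectl taint nodes kvm-gs242024 node-role.kubernetes.io/master=:NoSchedule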

Installing the Calico network

Install the RBAC manifest first:
kubectl apply -f https://docs.projectcalico.org/v2.6/getting-started/kubernetes/installation/rbac.yaml

Download calico.yaml:
https://docs.projectcalico.org/v2.6/getting-started/kubernetes/installation/hosted/calico.yaml

If you use the etcd bundled with Calico, make sure that etcd stays stable.
If you use your own etcd cluster, modify etcd_endpoints in the YAML.
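The edit lands in the calico-config ConfigMap near the top of calico.yaml. A minimal sketch; the endpoint URLs below are placeholders, not values from the original post:

# point Calico at an external etcd cluster, then install
sed -i 's#etcd_endpoints: ".*"#etcd_endpoints: "http://192.168.0.21:2379,http://192.168.0.22:2379"#' calico.yaml
kubectl apply -f calico.yaml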

With the default install, nodes with multiple NICs can fail: Calico binds to an arbitrary interface, node registration fails, and networking never comes up.

Calico error log:

Skipping datastore connection test
IPv4 address 10.96.0.1 discovered on interface kube-ipvs0
No AS number configured on node resource, using global value

Fix this by modifying calico.yaml; note that the order of the entries matters:

        - name: IP
          value: "autodetect"
        - name: IP_AUTODETECTION_METHOD
          value: "can-reach=192.168.1.1"

IP_AUTODETECTION_METHOD options

Use the interface that can reach a given IP:
can-reach=192.168.1.1

Use the interface that can reach a given domain name:
can-reach=www.baidu.com

Use a named interface:
interface=ethx

Installing the dashboard

Download the YAML:

wget https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml

After downloading, modify the image name:

name: kubernetes-dashboard
        image: foxchan/google_containers/kubernetes-dashboard-amd64:v1.8.0
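The post skips the apply step; presumably the edited manifest is applied first:

kubectl apply -f kubernetes-dashboard.yaml

Then expose the dashboard through the API server proxy: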
kubectl proxy --address=masterip --accept-hosts='^*$'

Access the UI at:

http://masterip:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy
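The post does not cover logging in, but dashboard v1.8 greets you with a login screen. One common lab-only sketch (the dashboard-admin account name is mine, not from the post) is a cluster-admin service account whose token you paste into the login form:

# create an all-powerful service account and print its token secret
kubectl -n kube-system create serviceaccount dashboard-admin
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep dashboard-admin-token | awk '{print $1}')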

Installing Heapster

Download the YAML files:

wget https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/influxdb/grafana.yaml
wget https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/rbac/heapster-rbac.yaml
wget https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/influxdb/heapster.yaml
wget https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/influxdb/influxdb.yaml

After downloading, change the image references to your own. The relevant lines:

#grafana.yaml
      - name: grafana
        image: gcr.io/google_containers/heapster-grafana-amd64:v4.4.3

#heapster.yaml
      - name: heapster
        image: gcr.io/google_containers/heapster-amd64:v1.4.2

#influxdb.yaml
      - name: influxdb
        image: gcr.io/google_containers/heapster-influxdb-amd64:v1.3.3

Pull the images:

docker pull foxchan/heapster-grafana-amd64:v4.4.3
docker pull foxchan/heapster-amd64:v1.4.2
docker pull foxchan/heapster-influxdb-amd64:v1.3.3
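If you keep the gcr.io names inside the YAML files, retag the mirrored pulls to match and then apply everything; the apply step is implied but not shown in the post:

docker tag foxchan/heapster-grafana-amd64:v4.4.3 gcr.io/google_containers/heapster-grafana-amd64:v4.4.3
docker tag foxchan/heapster-amd64:v1.4.2 gcr.io/google_containers/heapster-amd64:v1.4.2
docker tag foxchan/heapster-influxdb-amd64:v1.3.3 gcr.io/google_containers/heapster-influxdb-amd64:v1.3.3
kubectl apply -f grafana.yaml -f heapster-rbac.yaml -f heapster.yaml -f influxdb.yaml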

If everything works, the dashboard pages will show graphs.

Replacing skydns with CoreDNS

Download the scripts:

wget https://raw.githubusercontent.com/coredns/deployment/master/kubernetes/deploy.sh
wget https://raw.githubusercontent.com/coredns/deployment/master/kubernetes/coredns.yaml.sed

Use the script to render the coredns.yaml.sed template and apply the result:

./deploy.sh <cluster-ip-range> | kubectl apply -f -

My service cluster IP range is 10.96.0.0/12:

./deploy.sh 10.96.0.0/12 | kubectl apply -f -

Make sure CoreDNS is up and answering before removing the old DNS deployment.
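A quick hedged check; the dns-test pod name is my own:

kubectl -n kube-system get pods | grep coredns
kubectl run -it --rm dns-test --image=busybox --restart=Never -- nslookup kubernetes.default

Once lookups succeed, delete the old kube-dns deployment: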

kubectl delete --namespace=kube-system deployment kube-dns

kubeadm itself also gained the ability to install CoreDNS in place of kube-dns; this requires kubeadm 1.9. The flag:

kubeadm init --feature-gates=CoreDNS=true

Summary

Overall, 1.9 does not differ much from 1.8. These are the changes I care about:

  • Bootstrap Tokens moved from alpha to beta.
  • etcd was bumped from 3.0 to 3.1; etcd data is not compatible across versions, so back up your data before upgrading etcd.
  • kube-proxy IPVS mode moved from alpha to beta.
  • CoreDNS looks promising and is worth following.
This article was reposted from the Silver Fox blog on 51CTO; original link: http://blog.51cto.com/foxhound/2057395. Please contact the original author for reprint permission.
