03 - Deploying a highly available Kubernetes cluster from binaries

For a single-node kubeadm deployment, see: Deploying Kubernetes on a single node with kubeadm

This deployment uses the open-source project https://github.com/easzlab/kubeasz to install Kubernetes from binaries. Supported systems: CentOS/RedHat 7, Debian 9/10, Ubuntu 16.04/18.04/20.04.

  • Note 1: make sure all nodes share the same time zone and have synchronized clocks. If your environment provides no NTP time source, consider the optional chrony role that kubeasz ships.
  • Note 2: start from clean systems; do not reuse machines that ever ran kubeadm or another k8s distribution.
  • Note 3: upgrading the OS to a recent stable kernel is recommended; see the kernel upgrade documentation.
  • Note 4: set up passwordless SSH between all nodes.

1. Cluster system environment

root@ubuntu2004:~# cat /etc/issue
Ubuntu 20.04.4 LTS \n \l

- docker: 19.03.15
- k8s: v1.23.1

2. IP and role planning

Below is the IP and role plan for this virtual machine cluster. Because resources are limited, some nodes serve multiple roles. With sufficient resources, it is best to run an odd number of master nodes (three or more) and to consider a dedicated VM per service.

IP            HostName                      Role           VIP
172.31.7.101  k8s-master1-etcd1.host.com    master1/etcd1  172.31.7.188
172.31.7.102  k8s-master2-etcd2.host.com    master2/etcd2  172.31.7.188
172.31.7.103  k8s-ha1-etcd3.host.com        HA1/etcd3
172.31.7.104  k8s-ha2-harbor.host.com       HA2/harbor
172.31.7.111  k8s-node1.host.com            worker node
172.31.7.112  k8s-node2.host.com            worker node

3. System initialization and global variables

3.1 Set the hostnames (remaining nodes omitted)

root@ubuntu2004:~# hostnamectl set-hostname k8s-master1-etcd1.host.com  # repeat on the other nodes with their own names

3.2 Configure the IP address (one node as an example)

root@k8s-master1-etcd1:~# cat /etc/netplan/00-installer-config.yaml
# This is the network config written by 'subiquity'
network:
  version: 2
  renderer: networkd
  ethernets:
    ens33:
      dhcp4: no
      addresses: [172.31.7.101/24]  # ip
      gateway4: 172.31.7.254
      nameservers:
        addresses: [114.114.114.114]  # dns

# apply the network changes after editing
netplan apply
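
A quick check that the static address is active (adjust the IP and NIC name per node):

ip addr show dev ens33 | grep 172.31.7.101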

3.3 Set the system time zone and clock synchronization

timedatectl set-timezone Asia/Shanghai
root@k8s-master2-etcd2:/etc/default# cat /etc/default/locale
LANG=en_US.UTF-8
LC_TIME=en_DK.UTF-8

# time sync cron job
root@k8s-master1-etcd1:~# cat /var/spool/cron/crontabs/root
*/5 * * * * ntpdate time1.aliyun.com &> /dev/null && hwclock -w
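
To confirm the settings took effect on a node:

timedatectl | grep 'Time zone'  # should report Asia/Shanghai
date                            # should print local time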

3.4 Kernel parameter tuning

cat > /etc/sysctl.d/kubernetes.conf <<EOF
net.ipv4.ip_forward=1
vm.max_map_count=262144
kernel.pid_max=4194303
fs.file-max=1000000
net.ipv4.tcp_max_tw_buckets=6000
net.netfilter.nf_conntrack_max=2097152
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
vm.swappiness=0
EOF

modprobe ip_conntrack
modprobe br_netfilter
sysctl -p /etc/sysctl.d/kubernetes.conf

reboot
# take a snapshot of each node
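
Note that modprobe does not survive a reboot on its own; a minimal sketch to persist the modules via systemd's modules-load.d, plus a post-reboot check:

# persist the modules so they load on every boot
cat > /etc/modules-load.d/kubernetes.conf <<EOF
ip_conntrack
br_netfilter
EOF

# after the reboot, verify the modules and key parameters are active
lsmod | grep -E 'br_netfilter|nf_conntrack'
sysctl net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables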

3.5 Passwordless SSH

root@k8s-master1-etcd1:~# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa
Your public key has been saved in /root/.ssh/id_rsa.pub
The key fingerprint is:
SHA256:MQhTEGzxTr9rnP412bdROTZVgW6ZfnUiU4b6b6VIzok root@k8s-master1-etcd1.host.com
The key's randomart image is:
+---[RSA 3072]----+
|   .*=.      ...o|
|    o+ .    ..o .|
|   .  + o  ..oo .|
|     o . o. o=. =|
|      . S  .oo *+|
|         .  o+..=|
|       ... =++o+.|
|        +.E.=.+.o|
|       oo..  . . |
+----[SHA256]-----+

root@k8s-master1-etcd1:~# ssh-copy-id $IPs  # $IPs stands for every node address, including this host; answer yes and enter the root password when prompted
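
For example, a minimal loop over the node IPs planned above (adjust the list to your environment):

for ip in 172.31.7.101 172.31.7.102 172.31.7.103 172.31.7.104 172.31.7.111 172.31.7.112; do
  ssh-copy-id root@${ip}
done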

4. High-availability load balancing

Perform the following on k8s-ha1-etcd3.host.com and k8s-ha2-harbor.host.com.

apt install keepalived haproxy -y
cp /usr/share/doc/keepalived/samples/keepalived.conf.vrrp /etc/keepalived/keepalived.conf

# keepalived config file on the MASTER node
# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   notification_email {
     acassen
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 192.168.200.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
}

vrrp_instance VI_1 {
    state MASTER  # use BACKUP on the other node
    interface ens33  # must match this host's NIC name
    garp_master_delay 10
    smtp_alert
    virtual_router_id 51  # must be unique per virtual router; identical on all keepalived nodes of the same virtual router
    priority 100   # 80 on the other node
    advert_int 1
    authentication {
        auth_type PASS  # pre-shared key auth; identical on all keepalived nodes of the same virtual router
        auth_pass 1111
    }
    virtual_ipaddress {
        172.31.7.188 dev ens33 label ens33:0
        172.31.7.189 dev ens33 label ens33:1
        172.31.7.190 dev ens33 label ens33:2

    }
}

# on the other node, start from the sample config as well, then edit it as noted above
cp /usr/share/doc/keepalived/samples/keepalived.conf.vrrp /etc/keepalived/keepalived.conf
# restart and enable keepalived
systemctl restart keepalived.service && systemctl enable keepalived
# edit the haproxy config file
# append a listen section
# cat /etc/haproxy/haproxy.cfg
listen k8s-cluster1-6443
        bind 172.31.7.188:6443   # listen on the VIP, port 6443
        mode tcp                 # tcp mode
        server k8s-master1-etcd1.host.com 172.31.7.101:6443 check inter 3s fall 3 rise 1
        server k8s-master2-etcd2.host.com 172.31.7.102:6443 check inter 3s fall 3 rise 1

# restart haproxy
root@k8s-ha1-etcd3:~# systemctl restart haproxy.service
root@k8s-ha1-etcd3:~# systemctl enable haproxy.service

# copy the haproxy config to the other HA node
scp  /etc/haproxy/haproxy.cfg  172.31.7.104:/etc/haproxy/

# add the following to the kernel parameters (e.g. /etc/sysctl.conf), then apply with sysctl -p
net.ipv4.ip_nonlocal_bind = 1  # lets haproxy bind the VIP even when this node does not currently hold it
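
To verify that keepalived and haproxy came up correctly (run on the current MASTER node; assumes nc is installed):

ip addr show dev ens33 | grep 172.31.7.188   # the VIP should be present as label ens33:0
nc -zv 172.31.7.188 6443                     # the VIP should accept connections on 6443 once haproxy is running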

5. Deploy Harbor

5.1 Install Docker

Docker is required on every master, node, and harbor host. This section follows the Tsinghua (TUNA) mirror guide for installing docker-ce.

# if docker was installed before, remove it first
sudo apt-get remove docker docker-engine docker.io
# install prerequisites
sudo apt-get install -y apt-transport-https ca-certificates curl gnupg2 software-properties-common
# the following differs per distribution; trust docker's GPG key
# ubuntu is used here
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
# for amd64 machines, add the repository:
sudo add-apt-repository \
   "deb [arch=amd64] https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/ubuntu \
   $(lsb_release -cs) \
   stable"
# install
sudo apt-get update
sudo apt-get install -y docker-ce=5:19.03.15~3-0~ubuntu-focal  # pin the version; list candidates with 'apt-cache madison docker-ce'; omitting the pin installs the latest

# install docker-compose
wget https://github.com/docker/compose/releases/download/v2.4.1/docker-compose-linux-x86_64
mv docker-compose-linux-x86_64 /usr/local/bin/docker-compose
# make it executable
chmod a+x /usr/local/bin/docker-compose

5.2 Deploy an HTTPS Harbor server

GitHub: https://github.com/goharbor/harbor

By default Harbor ships without TLS and serves plain HTTP (HTTPS disabled in the config). That is easy to set up and run, and fine for development and test environments, but HTTPS is recommended in production.


root@k8s-ha2-harbor:/apps# pwd
/apps
================================== download the offline installer =============================================
#wget https://github.com/goharbor/harbor/releases/download/v2.3.2/harbor-offline-installer-v2.3.2.tgz
#tar xf harbor-offline-installer-v2.3.2.tgz
mkdir ./harbor/certs
#cd harbor
#cp harbor.yml.tmpl harbor.yml

================================== issue a self-signed certificate =============================================
openssl genrsa -out /apps/harbor/certs/harbor-ca.key  # generate the private key
openssl req -x509 -new -nodes -key /apps/harbor/certs/harbor-ca.key -subj "/CN=harbor.host.com" -days 7120 -out /apps/harbor/certs/harbor-ca.crt  # self-signed cert; the CN must match the hostname in harbor.yml

root@k8s-ha2-harbor:/apps/harbor# tree ./
./
├── LICENSE
├── certs
│   ├── harbor-ca.crt
│   └── harbor-ca.key
├── common.sh
├── harbor.v2.3.2.tar.gz
├── harbor.yml
├── harbor.yml.tmpl
├── install.sh
└── prepare

1 directory, 9 files
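
To double-check the certificate before referencing it in harbor.yml (standard openssl inspection):

openssl x509 -in /apps/harbor/certs/harbor-ca.crt -noout -subject -dates  # subject must show CN = harbor.host.com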

================================== edit the config file =============================================
#vim harbor.yml  # edit the config file
hostname: harbor.host.com  # your harbor domain; must match the certificate CN

# comment out the https section below if you serve plain http
# https related config
https:
  # https port for harbor, default is 443
  port: 443
  # The path of cert and key files for nginx
  certificate: /apps/harbor/certs/harbor-ca.crt  # public cert path
  private_key: /apps/harbor/certs/harbor-ca.key  # private key path

harbor_admin_password: 123456  # web UI login password

# The default data volume
data_volume: /data/harbor

================================== install harbor =============================================
root@k8s-ha2-harbor:/apps/harbor# ./install.sh --help

Note: Please set hostname and other necessary attributes in harbor.yml first. DO NOT use localhost or 127.0.0.1 for hostname, because Harbor needs to be accessed by external clients.
Please set --with-notary if needs enable Notary in Harbor, and set ui_url_protocol/ssl_cert/ssl_cert_key in harbor.yml bacause notary must run under https.
Please set --with-trivy if needs enable Trivy in Harbor  # security and vulnerability scanning
Please set --with-chartmuseum if needs enable Chartmuseum in Harbor  # Helm Charts

root@k8s-ha2-harbor:/apps/harbor# ./install.sh --with-trivy --with-chartmuseum
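
Once install.sh completes, the Harbor containers should all report healthy; a quick check from /apps/harbor, where the installer generates docker-compose.yml:

cd /apps/harbor && docker-compose ps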

5.3 Distribute the Harbor CA certificate

Until harbor-ca.crt is distributed, docker login fails like this:

root@k8s-ha2-harbor:~# docker login harbor.host.com
Username: admin
Password:
Error response from daemon: Get https://harbor.host.com/v2/: x509: certificate signed by unknown authority

Distribute harbor-ca.crt to every node that will use the Harbor registry:

================================== create the directory =============================================
root@k8s-ha2-harbor:~# mkdir /etc/docker/certs.d/harbor.host.com -p  # the directory name must match the CN the cert was issued for

================================== copy the harbor public cert =============================================
root@k8s-ha2-harbor:~# cp /apps/harbor/certs/harbor-ca.crt /etc/docker/certs.d/harbor.host.com

~# vim /etc/hosts  # add a hosts entry; if you run DNS, create the record there instead
172.31.7.104 harbor.host.com

# restart docker
systemctl restart docker
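
A sketch for pushing the cert and hosts entry to all nodes at once (assumes the passwordless SSH from section 3.5; adjust the IP list):

for ip in 172.31.7.101 172.31.7.102 172.31.7.111 172.31.7.112; do
  ssh root@${ip} "mkdir -p /etc/docker/certs.d/harbor.host.com"
  scp /apps/harbor/certs/harbor-ca.crt root@${ip}:/etc/docker/certs.d/harbor.host.com/
  ssh root@${ip} "grep -q harbor.host.com /etc/hosts || echo '172.31.7.104 harbor.host.com' >> /etc/hosts"
done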

5.4 Test Harbor

# log in
root@k8s-ha2-harbor:/apps/harbor# docker login harbor.host.com
Username: admin
Password:
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded

# pull an image
root@k8s-ha2-harbor:/apps/harbor# docker pull hello-world
Using default tag: latest
latest: Pulling from library/hello-world
2db29710123e: Pull complete
Digest: sha256:10d7d58d5ebd2a652f4d93fdd86da8f265f5318c6a73cc5b6a9798ff6d2b2e67
Status: Downloaded newer image for hello-world:latest
docker.io/library/hello-world:latest
# list images
root@k8s-ha2-harbor:/apps/harbor# docker images
REPOSITORY                      TAG                 IMAGE ID            CREATED             SIZE
hello-world                     latest              feb5d9fea6a5        6 months ago        13.3kB

# tag the image
root@k8s-ha2-harbor:/apps/harbor# docker tag hello-world:latest harbor.host.com/test/hello-world:latest

# push to harbor; the 'test' project must be created in the Harbor UI beforehand
root@k8s-ha2-harbor:/apps/harbor# docker push harbor.host.com/test/hello-world:latest
The push refers to repository [harbor.host.com/test/hello-world]
e07ee1baac5f: Pushed
latest: digest: sha256:f54a58bc1aac5ea1a25d796ae155dc228b3f0e11d046ae276b39c4bf2f13d8c4 size: 525
(Screenshot: the pushed image shown in the Harbor web UI)
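
As a final check, any node with the CA cert and hosts entry in place should be able to pull the image back:

docker pull harbor.host.com/test/hello-world:latest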

6. Deploy Kubernetes

6.1 Install ansible on the deploy node (installed directly on master1 here)

apt install ansible -y

# create a python symlink on every node
root@k8s-master1-etcd1:~# ln -s /usr/bin/python3.8 /usr/bin/python

6.2 Download the project source, binaries, and offline images

# download the ezdown tool script; kubeasz 3.2.0 is used as the example
export release=3.2.0
wget https://github.com/easzlab/kubeasz/releases/download/${release}/ezdown
chmod +x ./ezdown
# download everything with the tool script
root@k8s-master1-etcd1:~# ./ezdown --help
./ezdown: illegal option -- -
Usage: ezdown [options] [args]
  option: -{DdekSz}
    -C         stop&clean all local containers
    -D         download all into "/etc/kubeasz"
    -P         download system packages for offline installing
    -R         download Registry(harbor) offline installer
    -S         start kubeasz in a container
    -d <ver>   set docker-ce version, default "19.03.15"
    -e <ver>   set kubeasz-ext-bin version, default "1.0.0"
    -k <ver>   set kubeasz-k8s-bin version, default "v1.23.1"
    -m <str>   set docker registry mirrors, default "CN"(used in Mainland,China)
    -p <ver>   set kubeasz-sys-pkg version, default "0.4.2"
    -z <ver>   set kubeasz version, default "3.2.0"

# ./ezdown -D downloads everything into /etc/kubeasz
root@k8s-master1-etcd1:~# ./ezdown -D
......
......
60775238382e: Pull complete
528677575c0b: Pull complete
Digest: sha256:f741e403b3ca161e784163de3ebde9190905fdbf7dfaa463620ab8f16c0f6423
Status: Downloaded newer image for easzlab/nfs-subdir-external-provisioner:v4.0.2
docker.io/easzlab/nfs-subdir-external-provisioner:v4.0.2
3.2.0: Pulling from easzlab/kubeasz
Digest: sha256:55910c9a401c32792fa4392347697b5768fcc1fd5a346ee099e48f5ec056a135
Status: Image is up to date for easzlab/kubeasz:3.2.0
docker.io/easzlab/kubeasz:3.2.0
2022-04-14 14:14:57 INFO Action successed: download_all

# after the script succeeds, all files (kubeasz code, binaries, offline images) are in place under /etc/kubeasz
root@k8s-master1-etcd1:~# ll /etc/kubeasz/
total 120
drwxrwxr-x  11 root root  4096 Apr 14 13:32 ./
drwxr-xr-x 101 root root  4096 Apr 14 13:08 ../
-rw-rw-r--   1 root root   301 Jan  5 20:19 .gitignore
-rw-rw-r--   1 root root  6137 Jan  5 20:19 README.md
-rw-rw-r--   1 root root 20304 Jan  5 20:19 ansible.cfg
drwxr-xr-x   3 root root  4096 Apr 14 13:32 bin/
drwxrwxr-x   8 root root  4096 Jan  5 20:28 docs/
drwxr-xr-x   2 root root  4096 Apr 14 14:14 down/
drwxrwxr-x   2 root root  4096 Jan  5 20:28 example/
-rwxrwxr-x   1 root root 24716 Jan  5 20:19 ezctl*
-rwxrwxr-x   1 root root 15350 Jan  5 20:19 ezdown*
drwxrwxr-x  10 root root  4096 Jan  5 20:28 manifests/
drwxrwxr-x   2 root root  4096 Jan  5 20:28 pics/
drwxrwxr-x   2 root root  4096 Jan  5 20:28 playbooks/
drwxrwxr-x  22 root root  4096 Jan  5 20:28 roles/
drwxrwxr-x   2 root root  4096 Jan  5 20:28 tools/

6.3 Create a cluster configuration instance

root@k8s-master1-etcd1:/etc/kubeasz# ./ezctl new k8s-cluster-01
2022-04-14 14:34:07 DEBUG generate custom cluster files in /etc/kubeasz/clusters/k8s-cluster-01
2022-04-14 14:34:07 DEBUG set versions
2022-04-14 14:34:07 DEBUG cluster k8s-cluster-01: files successfully created.
2022-04-14 14:34:07 INFO next steps 1: to config '/etc/kubeasz/clusters/k8s-cluster-01/hosts'
2022-04-14 14:34:07 INFO next steps 2: to config '/etc/kubeasz/clusters/k8s-cluster-01/config.yml'

root@k8s-master1-etcd1:/etc/kubeasz/clusters/k8s-cluster-01# vim hosts
root@k8s-master1-etcd1:/etc/kubeasz/clusters/k8s-cluster-01# vim config.yml

Then, following the prompts, edit /etc/kubeasz/clusters/k8s-cluster-01/hosts and /etc/kubeasz/clusters/k8s-cluster-01/config.yml: adjust the hosts file and the main cluster-level options to match the node plan above; the remaining component-level options can be changed in config.yml.

6.4 Edit the ansible hosts file

# 'etcd' cluster should have odd member(s) (1,3,5,...)
[etcd]
172.31.7.101
172.31.7.102
172.31.7.103

# master node(s)
[kube_master]
172.31.7.101
172.31.7.102

# work node(s)
[kube_node]
172.31.7.111
172.31.7.112

# [optional] harbor server, a private docker registry
# 'NEW_INSTALL': 'true' to install a harbor server; 'false' to integrate with existed one
[harbor]
#172.31.7.8 NEW_INSTALL=false

# [optional] loadbalance for accessing k8s from outside
[ex_lb]
172.31.7.6 LB_ROLE=backup EX_APISERVER_VIP=172.31.7.188 EX_APISERVER_PORT=6443
172.31.7.7 LB_ROLE=master EX_APISERVER_VIP=172.31.7.188 EX_APISERVER_PORT=6443

# [optional] ntp server for the cluster
[chrony]
#172.31.7.1

[all:vars]
# --------- Main Variables ---------------
# Secure port for apiservers
SECURE_PORT="6443"

# Cluster container-runtime supported: docker, containerd
CONTAINER_RUNTIME="docker"

# Network plugins supported: calico, flannel, kube-router, cilium, kube-ovn
CLUSTER_NETWORK="calico"

# Service proxy mode of kube-proxy: 'iptables' or 'ipvs'
PROXY_MODE="ipvs"

# K8S Service CIDR, not overlap with node(host) networking
SERVICE_CIDR="10.100.0.0/16"

# Cluster CIDR (Pod CIDR), not overlap with node(host) networking
CLUSTER_CIDR="10.200.0.0/16"

# NodePort Range
NODE_PORT_RANGE="30000-32767"

# Cluster DNS Domain
CLUSTER_DNS_DOMAIN="cluster.local"

# -------- Additional Variables (don't change the default value right now) ---
# Binaries Directory
bin_dir="/usr/local/bin"

# Deploy Directory (kubeasz workspace)
base_dir="/etc/kubeasz"

# Directory for a specific cluster
cluster_dir="{{ base_dir }}/clusters/k8s-cluster-01"

# CA and other components cert/key Directory
ca_dir="/etc/kubernetes/ssl"
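
Before running any playbooks, it is worth confirming ansible can reach every host in the inventory (a quick sanity check; the group names come from the hosts file above):

ansible -i /etc/kubeasz/clusters/k8s-cluster-01/hosts 'etcd:kube_master:kube_node' -m ping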

6.5 Edit the ansible config file

############################
# prepare
############################
# optionally install system packages from offline media (offline|online)
INSTALL_SOURCE: "online"

# optional OS security hardening, see github.com/dev-sec/ansible-collection-hardening
OS_HARDEN: false

# time source servers (important: clocks must be synchronized across the cluster)
ntp_servers:
  - "ntp1.aliyun.com"
  - "time1.cloud.tencent.com"
  - "0.cn.pool.ntp.org"

# network segments allowed to sync time internally, e.g. "10.0.0.0/8"; the default allows all
local_network: "0.0.0.0/0"


############################
# role:deploy
############################
# default: ca will expire in 100 years
# default: certs issued by the ca will expire in 50 years
CA_EXPIRY: "876000h"
CERT_EXPIRY: "438000h"

# kubeconfig parameters
CLUSTER_NAME: "cluster1"
CONTEXT_NAME: "context-{{ CLUSTER_NAME }}"

# k8s version
K8S_VER: "1.23.1"

############################
# role:etcd
############################
# a separate wal directory avoids disk I/O contention and improves performance
ETCD_DATA_DIR: "/var/lib/etcd"
ETCD_WAL_DIR: ""


############################
# role:runtime [containerd,docker]
############################
# ------------------------------------------- containerd
# [.] enable container registry mirrors
ENABLE_MIRROR_REGISTRY: true

# [containerd] sandbox (pause) base image
SANDBOX_IMAGE: "harbor.host.com/base/pause:3.6"

# [containerd] persistent storage directory
CONTAINERD_STORAGE_DIR: "/var/lib/containerd"

# ------------------------------------------- docker
# [docker] container storage directory
DOCKER_STORAGE_DIR: "/var/lib/docker"

# [docker] enable the remote REST API
ENABLE_REMOTE_API: false

# [docker] trusted HTTP (insecure) registries
INSECURE_REG: '["127.0.0.1/8"]'


############################
# role:kube-master
############################
# cert hosts for the k8s master nodes; multiple ips and domains can be added (e.g. a public ip and domain)
MASTER_CERT_HOSTS:
  - "172.31.7.188"
  - "k8s.test.io"
  #- "www.test.com"

# pod subnet mask length on each node (determines the max pod ips a node can allocate)
# if flannel runs with --kube-subnet-mgr, it reads this value to assign each node's pod subnet
# https://github.com/coreos/flannel/issues/847
NODE_CIDR_LEN: 24


############################
# role:kube-node
############################
# kubelet root directory
KUBELET_ROOT_DIR: "/var/lib/kubelet"

# max pods per node
MAX_PODS: 200

# resources reserved for kube components (kubelet, kube-proxy, dockerd, etc.)
# see templates/kubelet-config.yaml.j2 for the values
KUBE_RESERVED_ENABLED: "no"

# upstream k8s advises against enabling system-reserved casually, unless long-term monitoring
# tells you the actual system usage; the reservation should also grow with system uptime,
# see templates/kubelet-config.yaml.j2. The defaults assume a 4c/8g VM with a minimal OS install;
# raise them on high-end physical machines. Also reserve at least 1g of memory, since apiserver
# and friends spike briefly during cluster installation.
SYS_RESERVED_ENABLED: "no"

# haproxy balance mode
BALANCE_ALG: "roundrobin"


############################
# role:network [flannel,calico,cilium,kube-ovn,kube-router]
############################
# ------------------------------------------- flannel
# [flannel] backend, e.g. "host-gw", "vxlan"
FLANNEL_BACKEND: "vxlan"
DIRECT_ROUTING: false

# [flannel] flanneld_image: "quay.io/coreos/flannel:v0.10.0-amd64"
flannelVer: "v0.15.1"
flanneld_image: "easzlab/flannel:{{ flannelVer }}"

# [flannel] offline image tarball
flannel_offline: "flannel_{{ flannelVer }}.tar"

# ------------------------------------------- calico
# [calico] setting CALICO_IPV4POOL_IPIP="off" improves network performance; see docs/setup/calico.md for the preconditions
CALICO_IPV4POOL_IPIP: "Always"

# [calico] host IP used by calico-node; bgp peers connect via this address; set manually or auto-detect
IP_AUTODETECTION_METHOD: "can-reach={{ groups['kube_master'][0] }}"

# [calico] network backend: brid, vxlan, none
CALICO_NETWORKING_BACKEND: "brid"

# [calico] supported calico versions: [v3.3.x] [v3.4.x] [v3.8.x] [v3.15.x]
calico_ver: "v3.19.3"

# [calico] calico major.minor version
calico_ver_main: "{{ calico_ver.split('.')[0] }}.{{ calico_ver.split('.')[1] }}"

# [calico] offline image tarball
calico_offline: "calico_{{ calico_ver }}.tar"

# ------------------------------------------- cilium
# [cilium] number of etcd nodes created by CILIUM_ETCD_OPERATOR: 1,3,5,7...
ETCD_CLUSTER_SIZE: 1

# [cilium] image version
cilium_ver: "v1.4.1"

# [cilium] offline image tarball
cilium_offline: "cilium_{{ cilium_ver }}.tar"

# ------------------------------------------- kube-ovn
# [kube-ovn] node for the OVN DB and OVN Control Plane; defaults to the first master node
OVN_DB_NODE: "{{ groups['kube_master'][0] }}"

# [kube-ovn] offline image tarball
kube_ovn_ver: "v1.5.3"
kube_ovn_offline: "kube_ovn_{{ kube_ovn_ver }}.tar"

# ------------------------------------------- kube-router
# [kube-router] public clouds have restrictions and generally need ipinip always on; in your own environment "subnet" can be used
OVERLAY_TYPE: "full"

# [kube-router] NetworkPolicy support toggle
FIREWALL_ENABLE: "true"

# [kube-router] image version
kube_router_ver: "v0.3.1"
busybox_ver: "1.28.4"

# [kube-router] offline image tarballs
kuberouter_offline: "kube-router_{{ kube_router_ver }}.tar"
busybox_offline: "busybox_{{ busybox_ver }}.tar"


############################
# role:cluster-addon
############################
# install coredns automatically
dns_install: "no"  # whether to install the DNS component automatically
corednsVer: "1.8.6"
ENABLE_LOCAL_DNS_CACHE: false  # whether to enable the node-local DNS cache
dnsNodeCacheVer: "1.21.1"
# local dns cache address
LOCAL_DNS_CACHE: "169.254.20.10"

# install metrics-server automatically
metricsserver_install: "yes"
metricsVer: "v0.5.2"

# install dashboard automatically
dashboard_install: "no"
dashboardVer: "v2.4.0"
dashboardMetricsScraperVer: "v1.0.7"

# install ingress automatically
ingress_install: "no"
ingress_backend: "traefik"
traefik_chart_ver: "10.3.0"

# install prometheus automatically
prom_install: "no"
prom_namespace: "monitor"
prom_chart_ver: "12.10.6"

# install nfs-provisioner automatically
nfs_provisioner_install: "no"
nfs_provisioner_namespace: "kube-system"
nfs_provisioner_ver: "v4.0.2"
nfs_storage_class: "managed-nfs-storage"
nfs_server: "192.168.1.10"
nfs_path: "/data/nfs"

############################
# role:harbor
############################
# harbor version, full version string
HARBOR_VER: "v2.1.3"
HARBOR_DOMAIN: "harbor.yourdomain.com"
HARBOR_TLS_PORT: 8443

# if set 'false', you need to put certs named harbor.pem and harbor-key.pem in directory 'down'
HARBOR_SELF_SIGNED_CERT: true

# install extra component
HARBOR_WITH_NOTARY: false
HARBOR_WITH_TRIVY: false
HARBOR_WITH_CLAIR: false
HARBOR_WITH_CHARTMUSEUM: true

6.6 Deploy the kubernetes cluster

6.6.1 Environment initialization

root@k8s-master1-etcd1:/etc/kubeasz# ./ezctl setup --help
Usage: ezctl setup <cluster> <step>
available steps:
    01  prepare            to prepare CA/certs & kubeconfig & other system settings
    02  etcd               to setup the etcd cluster
    03  container-runtime  to setup the container runtime(docker or containerd)
    04  kube-master        to setup the master nodes
    05  kube-node          to setup the worker nodes
    06  network            to setup the network plugin
    07  cluster-addon      to setup other useful plugins
    90  all                to run 01~07 all at once
    10  ex-lb              to install external loadbalance for accessing k8s from outside
    11  harbor             to install a new harbor server or to integrate with an existed one

examples: ./ezctl setup test-k8s 01  (or ./ezctl setup test-k8s prepare)
          ./ezctl setup test-k8s 02  (or ./ezctl setup test-k8s etcd)
          ./ezctl setup test-k8s all
          ./ezctl setup test-k8s 04 -t restart_master
          
#to prepare CA/certs & kubeconfig & other system settings
root@k8s-master1-etcd1:/etc/kubeasz# ./ezctl setup k8s-cluster-01 01

6.6.2 Deploy the etcd cluster

root@k8s-master1-etcd1:/etc/kubeasz# ./ezctl setup k8s-cluster-01 02

#====================== verify ==========================
root@etcd1:~# export NODE_IPS="172.31.7.101 172.31.7.102 172.31.7.103"
root@etcd1:~# for ip in ${NODE_IPS}; do ETCDCTL_API=3 /usr/local/bin/etcdctl --endpoints=https://${ip}:2379 --cacert=/etc/kubernetes/ssl/ca.pem --cert=/etc/kubernetes/ssl/etcd.pem --key=/etc/kubernetes/ssl/etcd-key.pem endpoint health; done

# expected result:
https://172.31.7.101:2379 is healthy: successfully committed proposal: took = 8.382006ms
https://172.31.7.102:2379 is healthy: successfully committed proposal: took = 9.229917ms
https://172.31.7.103:2379 is healthy: successfully committed proposal: took = 9.351794ms

6.6.3 Deploy Docker

For manual installation, see section 5.1.

root@k8s-master1-etcd1:/etc/kubeasz# ./ezctl setup k8s-cluster-01 03

#=========================================docker info==================================
root@k8s-master1-etcd1:/etc/kubeasz# docker info
Client:
 Debug Mode: false

Server:
 Containers: 0
  Running: 0
  Paused: 0
  Stopped: 0
 Images: 15
 Server Version: 19.03.15
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Native Overlay Diff: true
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: ea765aba0d05254012b0b9e595e995c09186427f
 runc version: dc9208a3303feef5b3839f4323d9beb36df0a9dd
 init version: fec3683
 Security Options:
  apparmor
  seccomp
   Profile: default
 Kernel Version: 5.4.0-107-generic
 Operating System: Ubuntu 20.04.4 LTS
 OSType: linux
 Architecture: x86_64
 CPUs: 2
 Total Memory: 1.907GiB
 Name: k8s-master1-etcd1.host.com
 ID: MTDT:7OEU:TYOT:E5QT:OISZ:BXHC:EM4P:IHM5:SKXI:V6RA:5QFI:R676
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Registry Mirrors:
  https://docker.mirrors.ustc.edu.cn/
  http://hub-mirror.c.163.com/
 Live Restore Enabled: false
 Product License: Community Engine

WARNING: No swap limit support

6.6.4 Deploy the master nodes

root@k8s-master1-etcd1:/etc/kubeasz# ./ezctl setup k8s-cluster-01 04

#============================= verify ===============================
root@k8s-master1-etcd1:/etc/kubeasz# kubectl get node
NAME           STATUS                     ROLES    AGE     VERSION
172.31.7.101   Ready,SchedulingDisabled   master   6m31s   v1.23.1
172.31.7.102   Ready,SchedulingDisabled   master   6m32s   v1.23.1

6.6.5 Deploy the worker nodes

root@k8s-master1-etcd1:/etc/kubeasz# ./ezctl setup k8s-cluster-01 05

#============================= verify ===============================
root@k8s-master1-etcd1:/etc/kubeasz# kubectl get node
NAME           STATUS                     ROLES    AGE   VERSION
172.31.7.101   Ready,SchedulingDisabled   master   12m   v1.23.1
172.31.7.102   Ready,SchedulingDisabled   master   12m   v1.23.1
172.31.7.111   Ready                      node     48s   v1.23.1
172.31.7.112   Ready                      node     48s   v1.23.1

6.6.6 Deploy the calico network

root@k8s-master1-etcd1:/etc/kubeasz# ./ezctl setup k8s-cluster-01 06

#============================= verify ===============================

root@k8s-master1-etcd1:/etc/kubeasz# kubectl get pods -A
NAMESPACE     NAME                                       READY   STATUS              RESTARTS   AGE
kube-system   calico-kube-controllers-754966f84c-qp66w   0/1     ContainerCreating   0          4m37s
kube-system   calico-node-dzdnf                          0/1     Init:0/2            0          4m37s
kube-system   calico-node-g87zf                          0/1     Init:0/2            0          4m37s
kube-system   calico-node-gq7hc                          0/1     Init:0/2            0          4m37s
kube-system   calico-node-tz4bb                          0/1     Init:0/2            0          4m37s

(Error screenshot omitted: the calico pods are stuck in Init/ContainerCreating because the sandbox image harbor.host.com/base/pause:3.6, set via SANDBOX_IMAGE in config.yml, does not yet exist in the Harbor registry.)

Fix the error as follows:

root@k8s-master1-etcd1:/etc/kubeasz# docker tag 6270bb605e12 harbor.host.com/base/pause:3.6  # tag the local pause image (ID from 'docker images')
root@k8s-master1-etcd1:/etc/kubeasz# docker push harbor.host.com/base/pause:3.6  # push the pause image to the harbor registry

root@k8s-master1-etcd1:/etc/kubeasz# ansible -i /etc/kubeasz/clusters/k8s-cluster-01/hosts kube_node -m shell -a "docker pull harbor.host.com/base/pause:3.6"  # pull the image on the worker nodes

root@k8s-master1-etcd1:/etc/kubeasz# ansible -i /etc/kubeasz/clusters/k8s-cluster-01/hosts kube_master -m shell -a "docker pull harbor.host.com/base/pause:3.6"  # pull the image on the master nodes

#======================================== verify ======================================
root@k8s-master1-etcd1:/etc/kubeasz# kubectl get pods -A
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-754966f84c-qp66w   1/1     Running   0          27m
kube-system   calico-node-dzdnf                          1/1     Running   0          27m
kube-system   calico-node-g87zf                          1/1     Running   0          27m
kube-system   calico-node-gq7hc                          1/1     Running   0          27m
kube-system   calico-node-tz4bb                          1/1     Running   0          27m


root@k8s-master1-etcd1:~# calicoctl node status
Calico process is running.

IPv4 BGP status
+--------------+-------------------+-------+----------+-------------+
| PEER ADDRESS |     PEER TYPE     | STATE |  SINCE   |    INFO     |
+--------------+-------------------+-------+----------+-------------+
| 172.31.7.111 | node-to-node mesh | up    | 08:55:34 | Established |
| 172.31.7.112 | node-to-node mesh | up    | 08:56:07 | Established |
| 172.31.7.102 | node-to-node mesh | up    | 08:56:53 | Established |
+--------------+-------------------+-------+----------+-------------+

Verify the network

# create several test pods
kubectl run net-test1 --image=centos:7.6.1810 sleep 360000
kubectl run net-test2 --image=centos:7.6.1810 sleep 360000
kubectl run net-test3 --image=centos:7.6.1810 sleep 360000
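
A minimal cross-node connectivity check (a sketch; pod IPs will differ in your cluster):

kubectl get pods -o wide                                                 # note which node each pod landed on
NET_TEST2_IP=$(kubectl get pod net-test2 -o jsonpath='{.status.podIP}')
kubectl exec net-test1 -- ping -c 3 ${NET_TEST2_IP}                      # pod-to-pod traffic across nodes
kubectl exec net-test1 -- ping -c 3 223.6.6.6                            # external connectivity from inside a pod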