Prerequisites
The official documentation points out that kubeadm requires the iptables tool to use the iptables backend, but CentOS 8 already uses nftables and cannot switch back, so it has compatibility problems! CentOS 7 is used here instead.
This deployment installs everything over the network, which requires:
- A proxy, since the Docker and Kubernetes packages basically all need one to download (you could of course pre-download the packages or switch to the Aliyun mirrors instead, but that would add deployment complexity and is not done here)
- Three virtual machines running CentOS 7, all with working network connections
Make sure the iptables tool does not use the nftables backend
In Linux, nftables is now available as a replacement for the kernel's iptables subsystem. The iptables tooling can act as a compatibility layer, behaving like iptables while actually configuring nftables. This nftables backend is not compatible with the current kubeadm packages: it causes duplicated firewall rules and breaks kube-proxy.
If your system's iptables tooling uses the nftables backend, you will need to switch it to "legacy" mode to avoid these problems. This is the case by default on at least Debian 10 (Buster), Ubuntu 19.04, Fedora 29 and newer releases of those distributions. RHEL 8 does not support switching to legacy mode, and is therefore incompatible with the current kubeadm packages.
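On distributions that do support switching (not needed on the CentOS 7 hosts used here, and not possible on RHEL 8), the change goes through the alternatives mechanism, roughly:
# Switch the iptables tooling to the legacy (non-nftables) backend
sudo update-alternatives --set iptables /usr/sbin/iptables-legacy
sudo update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy
sudo update-alternatives --set arptables /usr/sbin/arptables-legacy
sudo update-alternatives --set ebtables /usr/sbin/ebtables-legacy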
Contents:
- 1. Install Docker [run on every host]
- 1.1 Uninstall old versions (these are fresh hosts, so nothing to uninstall)
- 1.2 Set up the repository (before installing Docker CE on a new host for the first time, the Docker repository must be set up; Docker can then be installed and updated from it)
- 1.3 Install Docker CE
- 1.4 Enable Docker at boot and start it
- 1.5 Change the Docker cgroup driver to systemd, matching Kubernetes
- 2. Install kubelet, kubeadm and kubectl [install on every host]
- 2.1 Preparation
- 2.2 Install kubectl, kubelet and kubeadm
- 3. Create the cluster with kubeadm [run on the master host only]
- 3.1 Initialize the master
- 3.2 Configure kubectl
- 3.3 Install a Pod network
- 4. Add the other nodes to the cluster
- 4.1 Register the other nodes with the cluster [run on the non-master hosts]
- 4.2 Verify
1. Install Docker [run on every host]
1.1 Uninstall old versions (these are fresh hosts, so nothing to uninstall)
Older versions of Docker were called docker or docker-engine. If these are installed, uninstall them along with their associated dependencies:
sudo yum remove docker \
docker-client \
docker-client-latest \
docker-common \
docker-latest \
docker-latest-logrotate \
docker-logrotate \
docker-selinux \
docker-engine-selinux \
docker-engine
1.2 Set up the repository (before installing Docker CE on a new host for the first time, the Docker repository must be set up; Docker can then be installed and updated from it)
1.2.0 Make sure the yum packages are up to date
sudo yum update
1.2.1 Install the required packages. yum-utils provides the yum-config-manager utility, and device-mapper-persistent-data and lvm2 are required by the devicemapper storage driver.
sudo yum install -y yum-utils \
device-mapper-persistent-data \
lvm2
[woods@k8s-host1 ~]$ sudo yum install -y yum-utils \
> device-mapper-persistent-data \
> lvm2
[sudo] password for woods:
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
* base: mirrors.aliyun.com
* extras: mirrors.aliyun.com
* updates: mirrors.bfsu.edu.cn
Package yum-utils-1.1.31-54.el7_8.noarch already installed and latest version
Package device-mapper-persistent-data-0.8.5-2.el7.x86_64 already installed and latest version
Package 7:lvm2-2.02.186-7.el7_8.2.x86_64 already installed and latest version
Nothing to do
1.2.2 Set up the stable repository with the following command
sudo yum-config-manager \
--add-repo \
https://download.docker.com/linux/centos/docker-ce.repo
[woods@k8s-host1 ~]$ sudo yum-config-manager \
> --add-repo \
> https://download.docker.com/linux/centos/docker-ce.repo
Loaded plugins: fastestmirror, langpacks
adding repo from: https://download.docker.com/linux/centos/docker-ce.repo
grabbing file https://download.docker.com/linux/centos/docker-ce.repo to /etc/yum.repos.d/docker-ce.repo
repo saved to /etc/yum.repos.d/docker-ce.repo
1.2.3 List the available Docker CE versions (the output is piped through sort -r, so version numbers are sorted from highest to lowest and long lines are truncated)
sudo yum list docker-ce --showduplicates | sort -r
[woods@k8s-host1 ~]$ sudo yum list docker-ce --showduplicates | sort -r
Loaded plugins: fastestmirror, langpacks
Available Packages
* updates: mirrors.bfsu.edu.cn
Loading mirror speeds from cached hostfile
* extras: mirrors.aliyun.com
docker-ce.x86_64 3:19.03.9-3.el7 docker-ce-stable
docker-ce.x86_64 3:19.03.8-3.el7 docker-ce-stable
docker-ce.x86_64 3:19.03.7-3.el7 docker-ce-stable
docker-ce.x86_64 3:19.03.6-3.el7 docker-ce-stable
docker-ce.x86_64 3:19.03.5-3.el7 docker-ce-stable
docker-ce.x86_64 3:19.03.4-3.el7 docker-ce-stable
docker-ce.x86_64 3:19.03.3-3.el7 docker-ce-stable
docker-ce.x86_64 3:19.03.2-3.el7 docker-ce-stable
docker-ce.x86_64 3:19.03.1-3.el7 docker-ce-stable
docker-ce.x86_64 3:19.03.10-3.el7 docker-ce-stable
docker-ce.x86_64 3:19.03.0-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.9-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.8-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.7-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.6-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.5-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.4-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.3-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.2-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.1-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.0-3.el7 docker-ce-stable
docker-ce.x86_64 18.06.3.ce-3.el7 docker-ce-stable
docker-ce.x86_64 18.06.2.ce-3.el7 docker-ce-stable
docker-ce.x86_64 18.06.1.ce-3.el7 docker-ce-stable
docker-ce.x86_64 18.06.0.ce-3.el7 docker-ce-stable
docker-ce.x86_64 18.03.1.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 18.03.0.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.12.1.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.12.0.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.09.1.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.09.0.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.06.2.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.06.1.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.06.0.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.03.3.ce-1.el7 docker-ce-stable
docker-ce.x86_64 17.03.2.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.03.1.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.03.0.ce-1.el7.centos docker-ce-stable
* base: mirrors.aliyun.com
1.3 Install Docker CE
1.3.1 Install the latest version of Docker CE, or a specific version.
# Latest version
sudo yum install docker-ce
# Specific version: use the fully qualified package name, e.g. docker-ce-18.06.3.ce
sudo yum install docker-ce-18.06.3.ce
[woods@k8s-host1 ~]$ sudo yum install docker-ce
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
* base: mirrors.aliyun.com
* extras: mirrors.aliyun.com
* updates: mirrors.bfsu.edu.cn
Resolving Dependencies
--> Running transaction check
---> Package docker-ce.x86_64 3:19.03.10-3.el7 will be installed
--> Processing Dependency: container-selinux >= 2:2.74 for package: 3:docker-ce-19.03.10-3.el7.x86_64
--> Processing Dependency: containerd.io >= 1.2.2-3 for package: 3:docker-ce-19.03.10-3.el7.x86_64
--> Processing Dependency: docker-ce-cli for package: 3:docker-ce-19.03.10-3.el7.x86_64
--> Running transaction check
---> Package container-selinux.noarch 2:2.119.1-1.c57a6f9.el7 will be installed
---> Package containerd.io.x86_64 0:1.2.13-3.2.el7 will be installed
---> Package docker-ce-cli.x86_64 1:19.03.10-3.el7 will be installed
--> Finished Dependency Resolution
Dependencies Resolved
============================================================================================================================================
Package Arch Version Repository Size
============================================================================================================================================
Installing:
docker-ce x86_64 3:19.03.10-3.el7 docker-ce-stable 24 M
Installing for dependencies:
container-selinux noarch 2:2.119.1-1.c57a6f9.el7 extras 40 k
containerd.io x86_64 1.2.13-3.2.el7 docker-ce-stable 25 M
docker-ce-cli x86_64 1:19.03.10-3.el7 docker-ce-stable 38 M
Transaction Summary
============================================================================================================================================
Install 1 Package (+3 Dependent packages)
Total download size: 88 M
Installed size: 360 M
Is this ok [y/d/N]: y
Downloading packages:
(1/4): container-selinux-2.119.1-1.c57a6f9.el7.noarch.rpm | 40 kB 00:00:00
containerd.io-1.2.13-3.2.el7.x FAILED ] 56 B/s | 1.8 MB 443:32:37 ETA
https://download.docker.com/linux/centos/7/x86_64/stable/Packages/containerd.io-1.2.13-3.2.el7.x86_64.rpm: [Errno 12] Timeout on https://download.docker.com/linux/centos/7/x86_64/stable/Packages/containerd.io-1.2.13-3.2.el7.x86_64.rpm: (28, 'Operation too slow. Less than 1000 bytes/sec transferred the last 30 seconds')
Trying other mirror.
docker-ce-19.03.10-3.el7.x86_6 FAILED ] 212 kB/s | 15 MB 00:05:51 ETA
https://download.docker.com/linux/centos/7/x86_64/stable/Packages/docker-ce-19.03.10-3.el7.x86_64.rpm: [Errno 12] Timeout on https://download.docker.com/linux/centos/7/x86_64/stable/Packages/docker-ce-19.03.10-3.el7.x86_64.rpm: (28, 'Operation too slow. Less than 1000 bytes/sec transferred the last 30 seconds')
Trying other mirror.
warning: /var/cache/yum/x86_64/7/docker-ce-stable/packages/docker-ce-cli-19.03.10-3.el7.x86_64.rpm: Header V4 RSA/SHA512 Signature, key ID 621e9f35: NOKEY
Public key for docker-ce-cli-19.03.10-3.el7.x86_64.rpm is not installed
(2/4): docker-ce-cli-19.03.10-3.el7.x86_64.rpm | 38 MB 00:04:56
(3/4): containerd.io-1.2.13-3.2.el7.x86_64.rpm | 25 MB 00:02:13
(4/4): docker-ce-19.03.10-3.el7.x86_64.rpm | 24 MB 00:03:35
--------------------------------------------------------------------------------------------------------------------------------------------
Total 118 kB/s | 88 MB 00:12:43
Retrieving key from https://download.docker.com/linux/centos/gpg
Importing GPG key 0x621E9F35:
Userid : "Docker Release (CE rpm) <docker@docker.com>"
Fingerprint: 060a 61c5 1b55 8a7f 742b 77aa c52f eb6b 621e 9f35
From : https://download.docker.com/linux/centos/gpg
Is this ok [y/N]: y
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Installing : 2:container-selinux-2.119.1-1.c57a6f9.el7.noarch 1/4
Installing : containerd.io-1.2.13-3.2.el7.x86_64 2/4
Installing : 1:docker-ce-cli-19.03.10-3.el7.x86_64 3/4
Installing : 3:docker-ce-19.03.10-3.el7.x86_64 4/4
Verifying : 3:docker-ce-19.03.10-3.el7.x86_64 1/4
Verifying : 2:container-selinux-2.119.1-1.c57a6f9.el7.noarch 2/4
Verifying : 1:docker-ce-cli-19.03.10-3.el7.x86_64 3/4
Verifying : containerd.io-1.2.13-3.2.el7.x86_64 4/4
Installed:
docker-ce.x86_64 3:19.03.10-3.el7
Dependency Installed:
container-selinux.noarch 2:2.119.1-1.c57a6f9.el7 containerd.io.x86_64 0:1.2.13-3.2.el7 docker-ce-cli.x86_64 1:19.03.10-3.el7
Complete!
1.4 Enable Docker at boot and start it
# Start docker
systemctl start docker
# Enable docker at boot
systemctl enable docker
# Check whether docker is running
systemctl status docker
[woods@k8s-host1 ~]$ systemctl start docker
[woods@k8s-host1 ~]$ systemctl enable docker
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
[woods@k8s-host1 ~]$ systemctl status docker
● docker.service - Docker Application Container Engine
Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
Active: active (running) since Sat 2020-05-30 14:55:08 CST; 28s ago
Docs: https://docs.docker.com
Main PID: 85200 (dockerd)
CGroup: /system.slice/docker.service
└─85200 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
May 30 14:55:07 k8s-host1 dockerd[85200]: time="2020-05-30T14:55:07.881068173+08:00" level=info msg="scheme \"unix\" not register...le=grpc
May 30 14:55:07 k8s-host1 dockerd[85200]: time="2020-05-30T14:55:07.881083114+08:00" level=info msg="ccResolverWrapper: sending u...le=grpc
May 30 14:55:07 k8s-host1 dockerd[85200]: time="2020-05-30T14:55:07.881090513+08:00" level=info msg="ClientConn switching balance...le=grpc
May 30 14:55:07 k8s-host1 dockerd[85200]: time="2020-05-30T14:55:07.895660619+08:00" level=info msg="Loading containers: start."
May 30 14:55:08 k8s-host1 dockerd[85200]: time="2020-05-30T14:55:08.088192255+08:00" level=info msg="Default bridge (docker0) is ...ddress"
May 30 14:55:08 k8s-host1 dockerd[85200]: time="2020-05-30T14:55:08.216881984+08:00" level=info msg="Loading containers: done."
May 30 14:55:08 k8s-host1 dockerd[85200]: time="2020-05-30T14:55:08.231973209+08:00" level=info msg="Docker daemon" commit=9424ae...9.03.10
May 30 14:55:08 k8s-host1 dockerd[85200]: time="2020-05-30T14:55:08.232260324+08:00" level=info msg="Daemon has completed initialization"
May 30 14:55:08 k8s-host1 dockerd[85200]: time="2020-05-30T14:55:08.245414500+08:00" level=info msg="API listen on /var/run/docker.sock"
5月 30 14:55:08 k8s-host1 systemd[1]: Started Docker Application Container Engine.
Hint: Some lines were ellipsized, use -l to show in full.
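Optionally, as a quick sanity check (an extra step, not part of the original run), the standard hello-world image confirms the daemon can pull and run containers:
# Pull and run the hello-world test image
sudo docker run hello-world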
1.5 Change the Docker cgroup driver to systemd, matching Kubernetes
cat > /etc/docker/daemon.json <<EOF
{
"exec-opts": ["native.cgroupdriver=systemd"],
"log-driver": "json-file",
"log-opts": {
"max-size": "100m"
},
"storage-driver": "overlay2",
"storage-opts": [
"overlay2.override_kernel_check=true"
]
}
EOF
# Restart Docker
systemctl daemon-reload
systemctl restart docker
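After the restart, it is worth confirming that the driver actually changed; one quick check:
# Should print: Cgroup Driver: systemd
sudo docker info | grep -i "cgroup driver"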
2. Install kubelet, kubeadm and kubectl [install on every host]
- kubelet runs on every node in the cluster and is responsible for starting Pods and containers
- kubeadm is used to initialize the cluster
- kubectl is the Kubernetes command-line tool; with it you can deploy and manage applications, inspect resources, and create, delete and update components
2.1 Preparation
2.1.1 Setting SELinux to permissive mode by running setenforce 0 and the sed command below effectively disables it. This is required to allow containers to access the host filesystem, which pod networks need. You have to do this until SELinux support is improved in the kubelet.
[woods@k8s-host1 ~]$ sudo setenforce 0
[woods@k8s-host1 ~]$ sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
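A quick check (an extra step, not in the original run) confirms the change took effect:
# Should print "Permissive"
getenforce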
2.1.2 Some RHEL/CentOS 7 users have reported issues with traffic being routed incorrectly because iptables was bypassed. Make sure net.bridge.bridge-nf-call-iptables is set to 1 in your sysctl config:
# requires root
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
[root@k8s-host1 woods]# cat <<EOF > /etc/sysctl.d/k8s.conf
> net.bridge.bridge-nf-call-ip6tables = 1
> net.bridge.bridge-nf-call-iptables = 1
> EOF
[root@k8s-host1 woods]# sysctl --system
* Applying /usr/lib/sysctl.d/00-system.conf ...
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0
* Applying /usr/lib/sysctl.d/10-default-yama-scope.conf ...
kernel.yama.ptrace_scope = 0
* Applying /usr/lib/sysctl.d/50-default.conf ...
kernel.sysrq = 16
kernel.core_uses_pid = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.default.promote_secondaries = 1
net.ipv4.conf.all.promote_secondaries = 1
fs.protected_hardlinks = 1
fs.protected_symlinks = 1
* Applying /usr/lib/sysctl.d/60-libvirtd.conf ...
fs.aio-max-nr = 1048576
* Applying /etc/sysctl.d/99-sysctl.conf ...
* Applying /etc/sysctl.d/k8s.conf ...
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
* Applying /etc/sysctl.conf ...
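These net.bridge.* keys only exist while the br_netfilter kernel module is loaded; if sysctl --system reports them as unknown keys, load the module first (a hedged extra step, not needed in the run above):
# Load the bridge netfilter module and confirm it is present
sudo modprobe br_netfilter
lsmod | grep br_netfilter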
2.1.3 Turn off swap and comment out the swap partition
# If a swap partition is active, the kubelet will fail to start (this check can be skipped by setting --fail-swap-on to false), so swap must be turned off on every machine
sudo swapoff -a
# To keep the swap partition from being mounted again at boot, comment out the corresponding entry in /etc/fstab:
sudo vi /etc/fstab
[woods@k8s-host1 ~]$ sudo swapoff -a
[woods@k8s-host1 ~]$ sudo vi /etc/fstab
[woods@k8s-host1 ~]$ cat /etc/fstab
#
# /etc/fstab
# Created by anaconda on Sat May 30 12:42:41 2020
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/centos-root / xfs defaults 0 0
UUID=154bcd19-085c-4052-9c9e-1a24b8665f8b /boot xfs defaults 0 0
#/dev/mapper/centos-swap swap swap defaults 0 0
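Editing /etc/fstab by hand works fine; if you prefer a one-liner, something like the sed below should comment out any swap entry (a sketch; double-check the file afterwards), and free confirms swap is off:
# Comment out every fstab line that mounts swap, then verify
sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab
# The Swap row should now show all zeros
free -h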
2.2 Install kubectl, kubelet and kubeadm
# Best run as root
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
systemctl enable kubelet
[root@k8s-host1 woods]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
> [kubernetes]
> name=Kubernetes
> baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
> enabled=1
> gpgcheck=1
> repo_gpgcheck=1
> gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
> EOF
[root@k8s-host1 woods]# yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
* base: mirrors.aliyun.com
* extras: mirrors.aliyun.com
* updates: mirrors.aliyun.com
base | 3.6 kB 00:00:00
docker-ce-stable | 3.5 kB 00:00:00
extras | 2.9 kB 00:00:00
kubernetes/signature | 454 B 00:00:00
Retrieving key from https://packages.cloud.google.com/yum/doc/yum-key.gpg
Importing GPG key 0xA7317B0F:
Userid : "Google Cloud Packages Automatic Signing Key <gc-team@google.com>"
Fingerprint: d0bc 747f d8ca f711 7500 d6fa 3746 c208 a731 7b0f
From : https://packages.cloud.google.com/yum/doc/yum-key.gpg
Retrieving key from https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
kubernetes/signature | 1.4 kB 00:00:00 !!!
updates | 2.9 kB 00:00:00
kubernetes/primary | 69 kB 00:00:00
kubernetes 505/505
Resolving Dependencies
--> Running transaction check
---> Package kubeadm.x86_64 0:1.18.3-0 will be installed
--> Processing Dependency: kubernetes-cni >= 0.7.5 for package: kubeadm-1.18.3-0.x86_64
--> Processing Dependency: cri-tools >= 1.13.0 for package: kubeadm-1.18.3-0.x86_64
---> Package kubectl.x86_64 0:1.18.3-0 will be installed
---> Package kubelet.x86_64 0:1.18.3-0 will be installed
--> Processing Dependency: socat for package: kubelet-1.18.3-0.x86_64
--> Processing Dependency: conntrack for package: kubelet-1.18.3-0.x86_64
--> Running transaction check
---> Package conntrack-tools.x86_64 0:1.4.4-7.el7 will be installed
--> Processing Dependency: libnetfilter_cttimeout.so.1(LIBNETFILTER_CTTIMEOUT_1.1)(64bit) for package: conntrack-tools-1.4.4-7.el7.x86_64
--> Processing Dependency: libnetfilter_cttimeout.so.1(LIBNETFILTER_CTTIMEOUT_1.0)(64bit) for package: conntrack-tools-1.4.4-7.el7.x86_64
--> Processing Dependency: libnetfilter_cthelper.so.0(LIBNETFILTER_CTHELPER_1.0)(64bit) for package: conntrack-tools-1.4.4-7.el7.x86_64
--> Processing Dependency: libnetfilter_queue.so.1()(64bit) for package: conntrack-tools-1.4.4-7.el7.x86_64
--> Processing Dependency: libnetfilter_cttimeout.so.1()(64bit) for package: conntrack-tools-1.4.4-7.el7.x86_64
--> Processing Dependency: libnetfilter_cthelper.so.0()(64bit) for package: conntrack-tools-1.4.4-7.el7.x86_64
---> Package cri-tools.x86_64 0:1.13.0-0 will be installed
---> Package kubernetes-cni.x86_64 0:0.7.5-0 will be installed
---> Package socat.x86_64 0:1.7.3.2-2.el7 will be installed
--> Running transaction check
---> Package libnetfilter_cthelper.x86_64 0:1.0.0-11.el7 will be installed
---> Package libnetfilter_cttimeout.x86_64 0:1.0.0-7.el7 will be installed
---> Package libnetfilter_queue.x86_64 0:1.0.2-2.el7_2 will be installed
--> Finished Dependency Resolution
Dependencies Resolved
============================================================================================================================================
Package Arch Version Repository Size
============================================================================================================================================
Installing:
kubeadm x86_64 1.18.3-0 kubernetes 8.8 M
kubectl x86_64 1.18.3-0 kubernetes 9.5 M
kubelet x86_64 1.18.3-0 kubernetes 21 M
Installing for dependencies:
conntrack-tools x86_64 1.4.4-7.el7 base 187 k
cri-tools x86_64 1.13.0-0 kubernetes 5.1 M
kubernetes-cni x86_64 0.7.5-0 kubernetes 10 M
libnetfilter_cthelper x86_64 1.0.0-11.el7 base 18 k
libnetfilter_cttimeout x86_64 1.0.0-7.el7 base 18 k
libnetfilter_queue x86_64 1.0.2-2.el7_2 base 23 k
socat x86_64 1.7.3.2-2.el7 base 290 k
Transaction Summary
============================================================================================================================================
Install 3 Packages (+7 Dependent packages)
Total download size: 55 M
Installed size: 246 M
Downloading packages:
(1/10): conntrack-tools-1.4.4-7.el7.x86_64.rpm | 187 kB 00:00:00
warning: /var/cache/yum/x86_64/7/kubernetes/packages/a23839a743e789babb0ce912fa440f6e6ceb15bc5db42dd91aa0838c994b3452-kubeadm-1.18.3-0.x86_64.rpm: Header V4 RSA/SHA512 Signature, key ID 3e1ba8d5: NOKEY
Public key for a23839a743e789babb0ce912fa440f6e6ceb15bc5db42dd91aa0838c994b3452-kubeadm-1.18.3-0.x86_64.rpm is not installed
(2/10): a23839a743e789babb0ce912fa440f6e6ceb15bc5db42dd91aa0838c994b3452-kubeadm-1.18.3-0.x86_64.rpm | 8.8 MB 00:00:13
(3/10): 14bfe6e75a9efc8eca3f638eb22c7e2ce759c67f95b43b16fae4ebabde1549f3-cri-tools-1.13.0-0.x86_64.rpm | 5.1 MB 00:00:16
(4/10): cd5d6980c3e1b15de222db08729eff40f7031b7fa56c71ae3e28e420ba9678cd-kubectl-1.18.3-0.x86_64.rpm | 9.5 MB 00:00:15
(5/10): libnetfilter_cthelper-1.0.0-11.el7.x86_64.rpm | 18 kB 00:00:00
(6/10): libnetfilter_cttimeout-1.0.0-7.el7.x86_64.rpm | 18 kB 00:00:01
(7/10): socat-1.7.3.2-2.el7.x86_64.rpm | 290 kB 00:00:01
(8/10): libnetfilter_queue-1.0.2-2.el7_2.x86_64.rpm | 23 kB 00:00:03
(9/10): 548a0dcd865c16a50980420ddfa5fbccb8b59621179798e6dc905c9bf8af3b34-kubernetes-cni-0.7.5-0.x86_64.rpm | 10 MB 00:00:20
(10/10): d1a0216cfab2fb28e82be531327ebde9a554bb6d33e3c8313acc9bc728ba59d1-kubelet-1.18.3-0.x86_64.rpm | 21 MB 00:00:42
--------------------------------------------------------------------------------------------------------------------------------------------
Total 963 kB/s | 55 MB 00:00:58
Retrieving key from https://packages.cloud.google.com/yum/doc/yum-key.gpg
Importing GPG key 0xA7317B0F:
Userid : "Google Cloud Packages Automatic Signing Key <gc-team@google.com>"
Fingerprint: d0bc 747f d8ca f711 7500 d6fa 3746 c208 a731 7b0f
From : https://packages.cloud.google.com/yum/doc/yum-key.gpg
Retrieving key from https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
Importing GPG key 0x3E1BA8D5:
Userid : "Google Cloud Packages RPM Signing Key <gc-team@google.com>"
Fingerprint: 3749 e1ba 95a8 6ce0 5454 6ed2 f09c 394c 3e1b a8d5
From : https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Installing : libnetfilter_cttimeout-1.0.0-7.el7.x86_64 1/10
Installing : socat-1.7.3.2-2.el7.x86_64 2/10
Installing : cri-tools-1.13.0-0.x86_64 3/10
Installing : kubectl-1.18.3-0.x86_64 4/10
Installing : libnetfilter_queue-1.0.2-2.el7_2.x86_64 5/10
Installing : libnetfilter_cthelper-1.0.0-11.el7.x86_64 6/10
Installing : conntrack-tools-1.4.4-7.el7.x86_64 7/10
Installing : kubernetes-cni-0.7.5-0.x86_64 8/10
Installing : kubelet-1.18.3-0.x86_64 9/10
Installing : kubeadm-1.18.3-0.x86_64 10/10
Verifying : kubelet-1.18.3-0.x86_64 1/10
Verifying : libnetfilter_cthelper-1.0.0-11.el7.x86_64 2/10
Verifying : conntrack-tools-1.4.4-7.el7.x86_64 3/10
Verifying : libnetfilter_queue-1.0.2-2.el7_2.x86_64 4/10
Verifying : kubeadm-1.18.3-0.x86_64 5/10
Verifying : kubectl-1.18.3-0.x86_64 6/10
Verifying : cri-tools-1.13.0-0.x86_64 7/10
Verifying : kubernetes-cni-0.7.5-0.x86_64 8/10
Verifying : socat-1.7.3.2-2.el7.x86_64 9/10
Verifying : libnetfilter_cttimeout-1.0.0-7.el7.x86_64 10/10
Installed:
kubeadm.x86_64 0:1.18.3-0 kubectl.x86_64 0:1.18.3-0 kubelet.x86_64 0:1.18.3-0
Dependency Installed:
conntrack-tools.x86_64 0:1.4.4-7.el7 cri-tools.x86_64 0:1.13.0-0 kubernetes-cni.x86_64 0:0.7.5-0
libnetfilter_cthelper.x86_64 0:1.0.0-11.el7 libnetfilter_cttimeout.x86_64 0:1.0.0-7.el7 libnetfilter_queue.x86_64 0:1.0.2-2.el7_2
socat.x86_64 0:1.7.3.2-2.el7
Complete!
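With the packages installed and kubelet enabled, a quick version check on each host confirms everything is in place (all three should report v1.18.3 here):
# Verify the installed versions
kubeadm version -o short
kubelet --version
kubectl version --client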
3. Create the cluster with kubeadm [run on the master host only]
3.1 Initialize the master
kubeadm init --apiserver-advertise-address 192.168.137.21 --pod-network-cidr=10.244.0.0/16
- --apiserver-advertise-address specifies which of the master's interfaces to use for communicating with the other nodes in the cluster. If the master has multiple interfaces it is best to set this explicitly; if omitted, kubeadm picks the interface with the default gateway
- --pod-network-cidr specifies the Pod network range. Kubernetes supports several network add-ons, each with its own requirements for --pod-network-cidr; it is set to 10.244.0.0/16 here because we will use the flannel network add-on, which requires exactly this CIDR (the control-plane images can also be pre-pulled first, as shown below)
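kubeadm init pulls several control-plane images, which can be slow over a proxy; they can be pre-pulled beforehand (kubeadm itself suggests this in the preflight output below):
# Optionally pre-pull the control-plane images before kubeadm init
kubeadm config images pull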
[root@k8s-host1 woods]# kubeadm init --apiserver-advertise-address 192.168.137.21 --pod-network-cidr=10.244.0.0/16
W0530 16:15:05.487513 87490 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.3
[preflight] Running pre-flight checks
[WARNING Hostname]: hostname "k8s-host1" could not be reached
[WARNING Hostname]: hostname "k8s-host1": lookup k8s-host1 on 192.168.137.1:53: no such host
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-host1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.137.21]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-host1 localhost] and IPs [192.168.137.21 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-host1 localhost] and IPs [192.168.137.21 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0530 16:22:07.804392 87490 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0530 16:22:07.805056 87490 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 20.502804 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-host1 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-host1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: jj3yma.mhpv44juycfelre7
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.137.21:6443 --token jj3yma.mhpv44juycfelre7 \
--discovery-token-ca-cert-hash sha256:73c888c25066386dc233d68ad7f424e792ab02340b54fba5250a3ffa1b92e28b
3.2 Configure kubectl
It is recommended to run kubectl as a regular Linux user
[woods@k8s-host1 ~]$ su - woods
Password:
Last login: Sat May 30 12:51:28 CST 2020 on :0
[woods@k8s-host1 ~]$ mkdir -p $HOME/.kube
[woods@k8s-host1 ~]$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[sudo] password for woods:
[woods@k8s-host1 ~]$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
[woods@k8s-host1 ~]$ echo "source <(kubectl completion bash)" >> ~/.bashrc
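At this point kubectl should be able to reach the API server; a quick check:
# Confirm kubectl can talk to the cluster
kubectl cluster-info
kubectl get nodes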
3.3 Install a Pod network
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
[woods@k8s-host1 ~]$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created
4. Add the other nodes to the cluster
4.1 Register the other nodes with the cluster [run on the non-master hosts]
You can view tokens with kubeadm token list (on the master). The token generated by kubeadm init is only valid for 24 hours; after it expires a new one must be generated (see 4.1.1 below).
# requires root
kubeadm join 192.168.137.21:6443 --token jj3yma.mhpv44juycfelre7 \
--discovery-token-ca-cert-hash sha256:73c888c25066386dc233d68ad7f424e792ab02340b54fba5250a3ffa1b92e28b
[root@k8s-host2 woods]# kubeadm join 192.168.137.21:6443 --token jj3yma.mhpv44juycfelre7 --discovery-token-ca-cert-hash sha256:73c888c25066386dc233d68ad7f424e792ab02340b54fba5250a3ffa1b92e28b
W0530 18:14:58.284790 68181 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
[WARNING Hostname]: hostname "k8s-host2" could not be reached
[WARNING Hostname]: hostname "k8s-host2": lookup k8s-host2 on 192.168.137.1:53: no such host
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
4.1.1 View the token and the --discovery-token-ca-cert-hash [run on the master]
# Compute the --discovery-token-ca-cert-hash
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
# View tokens
kubeadm token list
[woods@host1 ~]$ openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
acdf23472a1650791d3297b4670a428b5dc035900e7c7a7bbfd2e333f8080fd1
[woods@host1 ~]$ kubeadm token list
TOKEN TTL EXPIRES USAGES DESCRIPTION EXTRA GROUPS
ukjrrv.mli0o16jpj0sgzed 23h 2020-05-31T23:00:21+08:00 authentication,signing The default bootstrap token generated by 'kubeadm init'. system:bootstrappers:kubeadm:default-node-token
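If the original token has expired, rather than recomputing the hash by hand you can have kubeadm mint a fresh token and print the complete join command (run on the master):
# Generate a new token and print the matching kubeadm join command
kubeadm token create --print-join-command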
4.2 Verify
Check the nodes
[woods@k8s-host1 ~]$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-host1 Ready master 117m v1.18.3
k8s-host2 NotReady <none> 5m v1.18.3
k8s-host3 Ready <none> 9m21s v1.18.3
Check the pods
[woods@k8s-host1 ~]$ kubectl get pod --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-66bff467f8-8vf4r 1/1 Running 0 117m
kube-system coredns-66bff467f8-jv5nr 1/1 Running 0 117m
kube-system etcd-k8s-host1 1/1 Running 0 117m
kube-system kube-apiserver-k8s-host1 1/1 Running 0 117m
kube-system kube-controller-manager-k8s-host1 1/1 Running 0 117m
kube-system kube-flannel-ds-amd64-j2454 0/1 Init:0/1 0 4m50s
kube-system kube-flannel-ds-amd64-jht6k 1/1 Running 0 82m
kube-system kube-flannel-ds-amd64-tnlcf 1/1 Running 0 9m11s
kube-system kube-proxy-b2jrk 1/1 Running 0 4m50s
kube-system kube-proxy-cz8ss 1/1 Running 0 9m11s
kube-system kube-proxy-dsmrn 1/1 Running 0 117m
kube-system kube-scheduler-k8s-host1 1/1 Running 0 117m
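k8s-host2 shows NotReady because its flannel pod is still initializing (Init:0/1 above); it turns Ready once the image finishes pulling. If a node stays NotReady, describing it usually reveals the cause:
# Inspect a node that stays NotReady (node name taken from the output above)
kubectl describe node k8s-host2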