I. Overview
Kubernetes 1.13 has been released, the fourth and final release of 2018. It is one of the shortest release cycles to date (ten weeks after the previous version) and focuses on the stability and extensibility of Kubernetes; three major features around storage and cluster lifecycle have graduated to general availability.
The headline features of Kubernetes 1.13 are: simplified cluster management with kubeadm, the Container Storage Interface (CSI), and CoreDNS as the default DNS server.
Simplified cluster management with kubeadm
Most people who work with Kubernetes regularly have used kubeadm at some point. It is the key tool for managing the cluster lifecycle, covering everything from creation through configuration to upgrades. With the 1.13 release, kubeadm has graduated to GA and is officially generally available. kubeadm handles bootstrapping production clusters on existing hardware and configures the core Kubernetes components following best practices, providing a secure and simple join flow for new nodes and supporting easy upgrades.
The most notable part of this GA release is the graduated advanced features, in particular pluggability and configurability. kubeadm aims to provide a toolbox for both administrators and higher-level automation systems, and this release is a major step in that direction.
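To give a sense of the GA workflow, here is a minimal sketch of bootstrapping a cluster with kubeadm; the pod CIDR and the join parameters are placeholders, not values used elsewhere in this article:
# On the master: initialize the control plane
kubeadm init --pod-network-cidr=10.244.0.0/16
# On each worker: join using the token and CA hash printed by kubeadm init
kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>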
Container Storage Interface (CSI)
The Container Storage Interface was first introduced as an alpha feature in 1.9 and moved to beta in 1.10; it now reaches GA. With CSI, the Kubernetes volume layer becomes truly extensible: third-party storage vendors can write drivers that interoperate with Kubernetes without touching any Kubernetes core code. The CSI specification itself has also reached 1.0.
With CSI stable, plugin authors can develop out-of-tree storage plugins at their own pace; see the CSI documentation for details.
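For illustration, consuming a CSI driver from the cluster side looks roughly like the following; the provisioner name csi.example.com is a placeholder for whatever driver name the vendor's plugin registers:
cat << EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-example
# placeholder: the driver name registered by the vendor's CSI plugin
provisioner: csi.example.com
reclaimPolicy: Delete
EOF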
CoreDNS becomes the default DNS server for Kubernetes
In 1.11, the team announced that CoreDNS had reached general availability for DNS-based service discovery. In the latest 1.13 release, CoreDNS officially replaces kube-dns as the default DNS server in Kubernetes. CoreDNS is a general-purpose, authoritative DNS server that provides a backwards-compatible and extensible integration with Kubernetes. Because CoreDNS is a single executable running as a single process, it has fewer moving parts than the previous DNS server, and it supports flexible use cases through custom DNS entries. In addition, since CoreDNS is written in Go, it benefits from memory safety.
CoreDNS is now the recommended DNS solution for Kubernetes 1.13 and later. The project has switched its common test infrastructure to use CoreDNS by default, and the team recommends that users switch as well. kube-dns will remain supported for at least one more release, but now is the time to start planning the migration. Many OSS installer tools, including kubeadm since 1.11, have already made the switch.
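To check which DNS server a cluster is actually running, and to see the Corefile that configures CoreDNS's plugin chain (assuming the default kube-system object names):
kubectl -n kube-system get deployment coredns
kubectl -n kube-system get configmap coredns -o yaml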
1. Preparing the installation environment
2. Kubernetes architecture diagram
3. Kubernetes workflow
What each module in the cluster does:
Master node:
The master node consists mainly of four modules: APIServer, scheduler, controller-manager, and etcd.
APIServer: the APIServer exposes the RESTful Kubernetes API and is the unified entry point for management commands; every create, delete, update, or get on a resource is handled by the APIServer before being persisted to etcd. As the diagram shows, kubectl (the client tool shipped with Kubernetes, which internally just calls the Kubernetes API) talks directly to the APIServer; an example follows at the end of this section.
scheduler: the scheduler assigns Pods to suitable Nodes. Viewed as a black box, its input is a Pod plus a list of candidate Nodes, and its output is a binding of that Pod to one Node. Kubernetes ships a default scheduling algorithm and also leaves the interface open, so users can define scheduling algorithms of their own.
controller-manager: if the APIServer does the front-office work, the controller-manager handles the back office. Each resource has a corresponding controller, and the controller-manager runs them all. For example, when we create a Pod through the APIServer, the APIServer's job is done once the object is created; from then on it is the controllers that keep each resource's actual state converging on its desired state.
etcd: etcd is a highly available key-value store that Kubernetes uses to persist the state of every resource exposed through the REST API.
Node:
Each Node runs three main modules: the container runtime (Docker), kubelet, and kube-proxy.
kube-proxy: this module implements service discovery and reverse proxying in Kubernetes. kube-proxy forwards TCP and UDP connections and by default distributes client traffic across a service's backend Pods using a round-robin algorithm. For service discovery, kube-proxy uses etcd's watch mechanism to track changes to the cluster's service and endpoint objects and maintains a service-to-endpoints mapping, so that changes to backend Pod IPs are invisible to callers. kube-proxy also supports session affinity.
kubelet: the kubelet is the master's agent on each Node and the most important module there. It maintains and manages all containers on that Node, except containers that were not created through Kubernetes. In essence, it is responsible for making each Pod's running state match its desired state.
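To make the APIServer's role concrete: kubectl is only a convenience wrapper over the REST API. Both of the commands below list the Pods in the default namespace; the --raw form goes through the same authentication and authorization as any other request:
kubectl get pods -n default
kubectl get --raw /api/v1/namespaces/default/pods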
II. Installing and configuring Kubernetes
1. Initializing the environment
1.1 Disable the firewall and SELinux
systemctl stop firewalld && systemctl disable firewalld
setenforce 0
vi /etc/selinux/config
SELINUX=disabled
1.2 Disable swap
swapoff -a && sysctl -w vm.swappiness=0
vi /etc/fstab
#UUID=7bff6243-324c-4587-b550-55dc34018ebf swap swap defaults 0 0
1.3 Set the kernel parameters required by Docker
cat << EOF | tee /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl -p /etc/sysctl.d/k8s.conf
1.4 Install Docker
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum install docker-ce -y
systemctl start docker && systemctl enable docker
1.5 Create the installation directories
mkdir /k8s/etcd/{bin,cfg,ssl} -p
mkdir /k8s/kubernetes/{bin,cfg,ssl} -p
1.6 Install and configure CFSSL
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo
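A quick sanity check that the binaries are on the PATH and executable:
cfssl version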
2. Installing etcd
2.1 Create the certificates
Create the etcd CA signing configuration
cat << EOF | tee ca-config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "www": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF
Create the etcd CA CSR
cat << EOF | tee ca-csr.json
{
  "CN": "etcd CA",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Shenzhen",
      "ST": "Shenzhen"
    }
  ]
}
EOF
Create the etcd server CSR
cat << EOF | tee server-csr.json
{
  "CN": "etcd",
  "hosts": [
    "192.168.187.131",
    "192.168.187.132",
    "192.168.187.133"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Shenzhen",
      "ST": "Shenzhen"
    }
  ]
}
EOF
Generate the etcd CA certificate and private key, then sign the server certificate
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server
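Before distributing the certificates it is worth decoding what was actually signed; cfssl-certinfo prints the subject, the hosts (SANs), and the expiry:
cfssl-certinfo -cert server.pem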
2.2 Deploying etcd
Unpack the release
tar -xvf etcd-v3.3.10-linux-amd64.tar.gz
cd etcd-v3.3.10-linux-amd64/
cp etcd etcdctl /k8s/etcd/bin/
vi /k8s/etcd/cfg/etcd
#[Member]
ETCD_NAME="etcd01"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.187.131:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.187.131:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.187.131:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.187.131:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.187.131:2380,etcd02=https://192.168.187.132:2380,etcd03=https://192.168.187.133:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
Create the etcd systemd unit file
vi /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
[Service]
Type=notify
EnvironmentFile=/k8s/etcd/cfg/etcd
ExecStart=/k8s/etcd/bin/etcd \
--name=${ETCD_NAME} \
--data-dir=${ETCD_DATA_DIR} \
--listen-peer-urls=${ETCD_LISTEN_PEER_URLS} \
--listen-client-urls=${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
--advertise-client-urls=${ETCD_ADVERTISE_CLIENT_URLS} \
--initial-advertise-peer-urls=${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
--initial-cluster=${ETCD_INITIAL_CLUSTER} \
--initial-cluster-token=${ETCD_INITIAL_CLUSTER_TOKEN} \
--initial-cluster-state=new \
--cert-file=/k8s/etcd/ssl/server.pem \
--key-file=/k8s/etcd/ssl/server-key.pem \
--peer-cert-file=/k8s/etcd/ssl/server.pem \
--peer-key-file=/k8s/etcd/ssl/server-key.pem \
--trusted-ca-file=/k8s/etcd/ssl/ca.pem \
--peer-trusted-ca-file=/k8s/etcd/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
Copy the certificate files
cp ca*pem server*pem /k8s/etcd/ssl
Copy the binaries, configuration, and unit file to the other two nodes
cd /k8s/
scp -r etcd 192.168.187.132:/k8s/
scp -r etcd 192.168.187.133:/k8s/
scp /usr/lib/systemd/system/etcd.service 192.168.187.132:/usr/lib/systemd/system/etcd.service
scp /usr/lib/systemd/system/etcd.service 192.168.187.133:/usr/lib/systemd/system/etcd.service
On 192.168.187.132, adjust the member-specific settings:
vi /k8s/etcd/cfg/etcd
#[Member]
ETCD_NAME="etcd02"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.187.132:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.187.132:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.187.132:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.187.132:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.187.131:2380,etcd02=https://192.168.187.132:2380,etcd03=https://192.168.187.133:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
On 192.168.187.133, adjust the member-specific settings:
vi /k8s/etcd/cfg/etcd
#[Member]
ETCD_NAME="etcd03"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.187.133:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.187.133:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.187.133:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.187.133:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.187.131:2380,etcd02=https://192.168.187.132:2380,etcd03=https://192.168.187.133:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
Start the etcd service on all three nodes (the first member will wait until its peers come up)
systemctl daemon-reload
systemctl enable etcd
systemctl start etcd
Create a symlink for the etcdctl command
ln -s /k8s/etcd/bin/etcdctl /usr/bin/etcdctl
Verify that the cluster is running correctly
[root@master ~]# etcdctl \
--ca-file=/k8s/etcd/ssl/ca.pem \
--cert-file=/k8s/etcd/ssl/server.pem \
--key-file=/k8s/etcd/ssl/server-key.pem \
--endpoints="https://192.168.187.131:2379,\
https://192.168.187.132:2379,\
https://192.168.187.133:2379" cluster-health
member 88594f44ddf134a9 is healthy: got healthy result from https://192.168.187.131:2379
member a1380f268e8526c6 is healthy: got healthy result from https://192.168.187.133:2379
member ba9a4cade1b1efa7 is healthy: got healthy result from https://192.168.187.132:2379
cluster is healthy
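cluster-health only proves the members are alive; a quick write/read round trip (v2 API, which etcd 3.3's etcdctl speaks by default) confirms the cluster is actually serving requests:
etcdctl --ca-file=/k8s/etcd/ssl/ca.pem --cert-file=/k8s/etcd/ssl/server.pem --key-file=/k8s/etcd/ssl/server-key.pem --endpoints="https://192.168.187.131:2379" set /test/hello world
etcdctl --ca-file=/k8s/etcd/ssl/ca.pem --cert-file=/k8s/etcd/ssl/server.pem --key-file=/k8s/etcd/ssl/server-key.pem --endpoints="https://192.168.187.131:2379" get /test/hello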
3. Deploying Harbor
3.1 About Harbor
Developing and running Docker container applications depends on reliable image management. Docker does provide a public registry, but for security and efficiency it is often necessary to deploy a private registry inside our own environment. Harbor is an enterprise-grade Docker registry project open-sourced by VMware; it adds role-based access control (RBAC), LDAP, audit logging, a management UI, self-registration, image replication, Chinese localization, and more.
3.2 Install docker-compose
curl -L https://github.com/docker/compose/releases/download/1.18.0/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
# check the version
[root@harbor ~]# docker-compose version
docker-compose version 1.25.0-rc2, build 661ac20e
docker-py version: 4.0.1
CPython version: 3.7.4
OpenSSL version: OpenSSL 1.1.0k 28 May 2019
3.3 Install the Harbor private registry
- Download the Harbor installer
Download the offline installer for the desired version from the Harbor releases page on GitHub.
wget https://github.com/vmware/harbor/releases/download/v1.5.0/harbor-offline-installer-v1.5.0.tgz
tar xvf harbor-offline-installer-v1.5.0.tgz
- Configure Harbor
After unpacking, the directory contains harbor.cfg, Harbor's configuration file.
## Configuration file of Harbor
# hostname is the access address; an IP or domain name may be used, but not 127.0.0.1 or localhost
hostname = 192.168.187.134
# access protocol; the default is http. https may also be used, in which case nginx ssl must be enabled
ui_url_protocol = http
# default password of the MySQL root user is root123; change it for real deployments
db_password = root123
max_job_workers = 3
customize_crt = on
ssl_cert = /data/cert/server.crt
ssl_cert_key = /data/cert/server.key
secretkey_path = /data
admiral_url = NA
# mail settings, used for password-reset mail
email_identity =
email_server = smtp.mydomain.com
email_server_port = 25
email_username = sample_admin@mydomain.com
email_password = abc
email_from = admin <sample_admin@mydomain.com>
email_ssl = false
# password for the admin UI login after Harbor starts; the default is Harbor12345
harbor_admin_password = Harbor12345
# authentication mode; several are supported, such as LDAP, local storage, and database. The default is db_auth (MySQL database authentication)
auth_mode = db_auth
# settings for LDAP authentication
#ldap_url = ldaps://ldap.mydomain.com
#ldap_searchdn = uid=searchuser,ou=people,dc=mydomain,dc=com
#ldap_search_pwd = password
#ldap_basedn = ou=people,dc=mydomain,dc=com
#ldap_filter = (objectClass=person)
#ldap_uid = uid
#ldap_scope = 3
#ldap_timeout = 5
# whether self-registration is enabled
self_registration = on
# token lifetime; the default is 30 minutes
token_expiration = 30
# who may create projects; the default is everyone, adminonly restricts it to administrators
project_creation_restriction = everyone
verify_remote_cert = on
- Start Harbor
After editing the configuration, run ./install.sh in the current directory. Harbor will pull the images it depends on according to the docker-compose.yml in that directory, then check and start each service in order.
[root@localhost harbor]# ./install.sh
[Step 0]: checking installation environment ...
Note: docker version: 19.03.2
Note: docker-compose version: 1.25.0
[Step 1]: loading Harbor images ...
- Harbor services after startup
[root@harbor harbor]# docker-compose ps
Name Command State Ports
-------------------------------------------------------------------------------------------------------------------------------------
harbor-adminserver /harbor/start.sh Up (healthy)
harbor-db /usr/local/bin/docker-entr ... Up (healthy) 3306/tcp
harbor-jobservice /harbor/start.sh Up
harbor-log /bin/sh -c /usr/local/bin/ ... Up (healthy) 127.0.0.1:1514->10514/tcp
harbor-ui /harbor/start.sh Up (healthy)
nginx nginx -g daemon off; Up (healthy) 0.0.0.0:443->443/tcp, 0.0.0.0:4443->4443/tcp, 0.0.0.0:80->80/tcp
redis docker-entrypoint.sh redis ... Up 6379/tcp
registry /entrypoint.sh serve /etc/ ... Up (healthy) 5000/tcp
- Access the Harbor web UI
Browse to http://192.168.187.134 to reach Harbor.
- Log in to the Harbor web UI
Log in with the admin user and its password.
Log in to Harbor from the 192.168.187.134 host:
[root@harbor ~]# docker login 192.168.187.134
Username: admin
Password:
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
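Before the push below can succeed, a k8s project must exist in the Harbor UI and the pause image must be present locally, tagged for the private registry. A sketch, where the source image name is an assumption (use whichever mirror of gcr.io/google_containers/pause-amd64 is reachable):
docker pull kubeimage/pause-amd64:3.0
docker tag kubeimage/pause-amd64:3.0 192.168.187.134/k8s/pause-amd64:3.0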
Push the pause image to the Harbor registry
[root@harbor ~]# docker push 192.168.187.134/k8s/pause-amd64:3.0
The push refers to repository [192.168.187.134/k8s/pause-amd64]
5f70bf18a086: Pushed
41ff149e94f2: Pushed
3.0: digest: sha256:f04288efc7e65a84be74d4fc63e235ac3c6c603cf832e442e0bd3f240b10a91b size: 939
4. Deploying Kubernetes
4.1 Create the certificates
Create the Kubernetes CA certificate
cat << EOF | tee ca-config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF
cat << EOF | tee ca-csr.json
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Shenzhen",
      "ST": "Shenzhen",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
Generate the API server certificate
cat << EOF | tee server-csr.json
{
  "CN": "kubernetes",
  "hosts": [
    "10.0.0.1",
    "20.0.0.1",
    "127.0.0.1",
    "192.168.187.131",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Shenzhen",
      "ST": "Shenzhen",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
Create the Kubernetes kube-proxy certificate (note that the hosts list above includes 20.0.0.1, the first address of the service cluster IP range used below, so in-cluster clients can verify the apiserver certificate)
cat << EOF | tee kube-proxy-csr.json
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Shenzhen",
      "ST": "Shenzhen",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
Set up SSH key authentication
[root@master ~]# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:RpxtEMGvYrQN5yA5tawDLd3jBRAgx33+xYUOAfH6mto root@master
The key's randomart image is:
+---[RSA 2048]----+
|..ooooo+=+ . |
| o. . +oo+. . |
| o B o==o. |
| o * Xoo.= |
| o *.@So |
| o =o= |
| o .. |
| . o |
| ..E |
+----[SHA256]-----+
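The key pair is used by the scp steps later in this document; copy the public key to the two nodes so the transfers are passwordless:
ssh-copy-id 192.168.187.132
ssh-copy-id 192.168.187.133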
4.2 Deploying the master node
The Kubernetes master runs the following components:
- kube-apiserver
- kube-scheduler
- kube-controller-manager
kube-scheduler and kube-controller-manager can run in cluster mode: leader election picks one working process while the other instances block on standby.
Download the kubernetes-server tarball
wget https://storage.googleapis.com/kubernetes-release/release/v1.13.11/kubernetes-server-linux-amd64.tar.gz
Unpack the binaries and copy them to the master node
[root@master ~]# tar -zxvf kubernetes-server-linux-amd64.tar.gz
[root@master ~]# cd kubernetes/server/bin/
[root@master bin]# cp kube-scheduler kube-apiserver kube-controller-manager kubectl /k8s/kubernetes/bin/
Copy the certificates
[root@master ~]# cp *pem /k8s/kubernetes/ssl/
Deploy the kube-apiserver component
Create the TLS bootstrapping token (token.csv format: token,user,uid,group)
[root@master bin]# head -c 16 /dev/urandom | od -An -t x | tr -d ' '
9f5cb09c4e8d625501b4bfd6df0e56a3
[root@master bin]# vi /k8s/kubernetes/cfg/token.csv
9f5cb09c4e8d625501b4bfd6df0e56a3,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
Create the kube-apiserver configuration file
[root@master bin]# vi /k8s/kubernetes/cfg/kube-apiserver
KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://192.168.187.131:2379,https://192.168.187.132:2379,https://192.168.187.133:2379 \
--bind-address=192.168.187.131 \
--secure-port=6443 \
--advertise-address=192.168.187.131 \
--allow-privileged=true \
--service-cluster-ip-range=20.0.0.0/24 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--enable-bootstrap-token-auth \
--token-auth-file=/k8s/kubernetes/cfg/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/k8s/kubernetes/ssl/server.pem \
--tls-private-key-file=/k8s/kubernetes/ssl/server-key.pem \
--client-ca-file=/k8s/kubernetes/ssl/ca.pem \
--service-account-key-file=/k8s/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/k8s/etcd/ssl/ca.pem \
--etcd-certfile=/k8s/etcd/ssl/server.pem \
--etcd-keyfile=/k8s/etcd/ssl/server-key.pem"
Create the kube-apiserver systemd unit file
vi /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=-/k8s/kubernetes/cfg/kube-apiserver
ExecStart=/k8s/kubernetes/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
Start the service
systemctl daemon-reload
systemctl enable kube-apiserver
systemctl restart kube-apiserver
Check that the apiserver is running
[root@master ~]# systemctl status kube-apiserver
● kube-apiserver.service - Kubernetes API Server
Loaded: loaded (/usr/lib/systemd/system/kube-apiserver.service; enabled; vendor preset: disabled)
Active: active (running) since 五 2019-10-04 12:47:16 CST; 1min 19s ago
Docs: https://github.com/kubernetes/kubernetes
Main PID: 6597 (kube-apiserver)
Tasks: 8
Memory: 320.1M
CGroup: /system.slice/kube-apiserver.service
└─6597 /k8s/kubernetes/bin/kube-apiserver --logtostderr=true --v=4 --etcd-servers=https://192.168.187.131:2379,https://192.168.18...
10月 04 12:48:32 master kube-apiserver[6597]: I1004 12:48:32.392464 6597 available_controller.go:434] Updating v1.storage.k8s.io
10月 04 12:48:32 master kube-apiserver[6597]: I1004 12:48:32.409206 6597 available_controller.go:434] Updating v1beta1.admissionr....k8s.io
10月 04 12:48:32 master kube-apiserver[6597]: I1004 12:48:32.409462 6597 available_controller.go:434] Updating v1beta1.certificates.k8s.io
10月 04 12:48:32 master kube-apiserver[6597]: I1004 12:48:32.409582 6597 available_controller.go:434] Updating v1beta1.scheduling.k8s.io
10月 04 12:48:33 master kube-apiserver[6597]: I1004 12:48:33.467725 6597 httplog.go:90] GET /api/v1/namespaces/kube-system: (11.6...:42400]
10月 04 12:48:33 master kube-apiserver[6597]: I1004 12:48:33.471771 6597 httplog.go:90] GET /api/v1/namespaces/kube-public: (3.56...:42400]
10月 04 12:48:33 master kube-apiserver[6597]: I1004 12:48:33.475319 6597 httplog.go:90] GET /api/v1/namespaces/kube-node-lease: (...:42400]
10月 04 12:48:35 master kube-apiserver[6597]: I1004 12:48:35.693829 6597 httplog.go:90] GET /api/v1/namespaces/default: (13.43223...:42400]
10月 04 12:48:35 master kube-apiserver[6597]: I1004 12:48:35.708441 6597 httplog.go:90] GET /api/v1/namespaces/default/services/k...:42400]
10月 04 12:48:35 master kube-apiserver[6597]: I1004 12:48:35.776261 6597 httplog.go:90] GET /api/v1/namespaces/default/endpoints/...:42400]
Hint: Some lines were ellipsized, use -l to show in full.
Deploying kube-scheduler
Create the kube-scheduler configuration file
[root@master ~]# vi /k8s/kubernetes/cfg/kube-scheduler
KUBE_SCHEDULER_OPTS="--logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect"
- --address: listen on 127.0.0.1:10251 for http /metrics requests; kube-scheduler does not yet support serving https;
- --kubeconfig: path to the kubeconfig file kube-scheduler uses to connect to and authenticate against the kube-apiserver (this setup points at the insecure port via --master instead);
- --leader-elect=true: cluster mode; enables leader election, where the elected leader does the work and the other instances block;
Create the kube-scheduler systemd unit file
[root@master ~]# vi /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=-/k8s/kubernetes/cfg/kube-scheduler
ExecStart=/k8s/kubernetes/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
Start the service
systemctl daemon-reload
systemctl enable kube-scheduler
systemctl restart kube-scheduler
Check that kube-scheduler is running
[root@master ~]# systemctl status kube-scheduler
● kube-scheduler.service - Kubernetes Scheduler
Loaded: loaded (/usr/lib/systemd/system/kube-scheduler.service; enabled; vendor preset: disabled)
Active: active (running) since 五 2019-10-04 13:05:14 CST; 10s ago
Docs: https://github.com/kubernetes/kubernetes
Main PID: 6737 (kube-scheduler)
Tasks: 7
Memory: 48.1M
CGroup: /system.slice/kube-scheduler.service
└─6737 /k8s/kubernetes/bin/kube-scheduler --logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect
10月 04 13:05:17 master kube-scheduler[6737]: I1004 13:05:17.907502 6737 shared_informer.go:227] caches populated
10月 04 13:05:17 master kube-scheduler[6737]: I1004 13:05:17.907516 6737 shared_informer.go:227] caches populated
10月 04 13:05:17 master kube-scheduler[6737]: I1004 13:05:17.907530 6737 shared_informer.go:227] caches populated
10月 04 13:05:17 master kube-scheduler[6737]: I1004 13:05:17.907544 6737 shared_informer.go:227] caches populated
10月 04 13:05:18 master kube-scheduler[6737]: I1004 13:05:18.008490 6737 shared_informer.go:227] caches populated
10月 04 13:05:18 master kube-scheduler[6737]: I1004 13:05:18.008556 6737 shared_informer.go:227] caches populated
10月 04 13:05:18 master kube-scheduler[6737]: I1004 13:05:18.008585 6737 shared_informer.go:227] caches populated
10月 04 13:05:18 master kube-scheduler[6737]: I1004 13:05:18.008866 6737 leaderelection.go:241] attempting to acquire leader leas...uler...
10月 04 13:05:18 master kube-scheduler[6737]: I1004 13:05:18.066076 6737 leaderelection.go:251] successfully acquired lease kube-...heduler
10月 04 13:05:18 master kube-scheduler[6737]: I1004 13:05:18.067450 6737 shared_informer.go:227] caches populated
Hint: Some lines were ellipsized, use -l to show in full.
Deploying kube-controller-manager
Create the kube-controller-manager configuration file
[root@master ~]# vi /k8s/kubernetes/cfg/kube-controller-manager
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \
--v=4 \
--master=127.0.0.1:8080 \
--leader-elect=true \
--address=127.0.0.1 \
--service-cluster-ip-range=20.0.0.0/24 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/k8s/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/k8s/kubernetes/ssl/ca-key.pem \
--root-ca-file=/k8s/kubernetes/ssl/ca.pem \
--service-account-private-key-file=/k8s/kubernetes/ssl/ca-key.pem"
Create the kube-controller-manager systemd unit file
[root@master ~]# vi /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=-/k8s/kubernetes/cfg/kube-controller-manager
ExecStart=/k8s/kubernetes/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
Start the service
systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl restart kube-controller-manager
Check that kube-controller-manager is running
[root@master ~]# systemctl status kube-controller-manager
● kube-controller-manager.service - Kubernetes Controller Manager
Loaded: loaded (/usr/lib/systemd/system/kube-controller-manager.service; enabled; vendor preset: disabled)
Active: active (running) since 五 2019-10-04 13:11:14 CST; 5s ago
Docs: https://github.com/kubernetes/kubernetes
Main PID: 6792 (kube-controller)
Tasks: 5
Memory: 124.6M
CGroup: /system.slice/kube-controller-manager.service
└─6792 /k8s/kubernetes/bin/kube-controller-manager --logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect=true --address...
10月 04 13:11:19 master kube-controller-manager[6792]: I1004 13:11:19.217119 6792 ttl_controller.go:116] Starting TTL controller
10月 04 13:11:19 master kube-controller-manager[6792]: I1004 13:11:19.217503 6792 shared_informer.go:197] Waiting for caches to sy...or TTL
10月 04 13:11:19 master kube-controller-manager[6792]: I1004 13:11:19.217654 6792 request.go:538] Throttling request took 414.6079...ut=32s
10月 04 13:11:19 master kube-controller-manager[6792]: I1004 13:11:19.218381 6792 garbagecollector.go:130] Starting garbage collec...roller
10月 04 13:11:19 master kube-controller-manager[6792]: I1004 13:11:19.218545 6792 shared_informer.go:197] Waiting for caches to sy...lector
10月 04 13:11:19 master kube-controller-manager[6792]: I1004 13:11:19.218597 6792 graph_builder.go:272] garbage controller monitor...isions
10月 04 13:11:19 master kube-controller-manager[6792]: I1004 13:11:19.221343 6792 graph_builder.go:282] GraphBuilder running
10月 04 13:11:19 master kube-controller-manager[6792]: I1004 13:11:19.221862 6792 controllermanager.go:534] Started "pv-protection"
10月 04 13:11:19 master kube-controller-manager[6792]: I1004 13:11:19.221912 6792 controllermanager.go:519] Starting "replicationc...oller"
10月 04 13:11:19 master kube-controller-manager[6792]: I1004 13:11:19.223370 6792 controllermanager.go:534] Started "replicationcontroller"
Hint: Some lines were ellipsized, use -l to show in full.
Add the binaries directory /k8s/kubernetes/bin to the PATH variable
[root@master ~]# vi .bash_profile
PATH=$PATH:$HOME/bin:/k8s/kubernetes/bin
[root@master ~]# . .bash_profile
Check the status of the master components
[root@master bin]# kubectl get cs
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-2 Healthy {"health":"true"}
etcd-0 Healthy {"health":"true"}
etcd-1 Healthy {"health":"true"}
4.3 Deploying the worker nodes
A Kubernetes worker node runs the following components:
- docker (already deployed above)
- kubelet
- kube-proxy
Deploy the kubelet component
- the kubelet runs on every worker node: it receives requests from the kube-apiserver, manages Pod containers, and executes interactive commands such as exec, run, and logs;
- at startup the kubelet automatically registers node information with the kube-apiserver, and its built-in cAdvisor collects and reports the node's resource usage;
- for security, this document only opens the authenticated and authorized https port and rejects unauthorized access (for example from apiserver or heapster).
Copy the kubelet and kube-proxy binaries to the nodes
cp kubelet kube-proxy /k8s/kubernetes/bin/
scp kubelet kube-proxy 192.168.187.132:/k8s/kubernetes/bin/
scp kubelet kube-proxy 192.168.187.133:/k8s/kubernetes/bin/
Create the kubelet bootstrap kubeconfig file
vi environment.sh
# Create the kubelet bootstrapping kubeconfig
# NOTE: this token must match the one in /k8s/kubernetes/cfg/token.csv on the master
BOOTSTRAP_TOKEN=b772932145d9062ea2f2e9adf0ac87dc
KUBE_APISERVER="https://192.168.187.131:6443"
# Set cluster parameters
kubectl config set-cluster kubernetes \
--certificate-authority=./ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=bootstrap.kubeconfig
# Set client authentication parameters
kubectl config set-credentials kubelet-bootstrap \
--token=${BOOTSTRAP_TOKEN} \
--kubeconfig=bootstrap.kubeconfig
# Set context parameters
kubectl config set-context default \
--cluster=kubernetes \
--user=kubelet-bootstrap \
--kubeconfig=bootstrap.kubeconfig
# Set the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
#----------------------
# Create the kube-proxy kubeconfig
kubectl config set-cluster kubernetes \
--certificate-authority=./ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=kube-proxy.kubeconfig
kubectl config set-credentials kube-proxy \
--client-certificate=./kube-proxy.pem \
--client-key=./kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=kube-proxy.kubeconfig
kubectl config set-context default \
--cluster=kubernetes \
--user=kube-proxy \
--kubeconfig=kube-proxy.kubeconfig
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
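Run the script once on the master to actually generate the two kubeconfig files; it assumes ca.pem, kube-proxy.pem, and kube-proxy-key.pem are in the current directory:
sh environment.sh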
Copy the bootstrap.kubeconfig and kube-proxy.kubeconfig files to all nodes
cp bootstrap.kubeconfig kube-proxy.kubeconfig /k8s/kubernetes/cfg/
scp bootstrap.kubeconfig kube-proxy.kubeconfig 192.168.187.132:/k8s/kubernetes/cfg/
scp bootstrap.kubeconfig kube-proxy.kubeconfig 192.168.187.133:/k8s/kubernetes/cfg/
Create the kubelet parameter configuration file and copy it to all nodes
Create the kubelet parameter configuration template (on the other nodes, change address to that node's IP):
vi /k8s/kubernetes/cfg/kubelet.config
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 192.168.187.131
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS: ["20.0.0.2"]
clusterDomain: cluster.local.
failSwapOn: false
authentication:
  anonymous:
    enabled: true
Create the kubelet configuration file (on the other nodes, change hostname-override to that node's IP)
[root@node1 ~]# vi /k8s/kubernetes/cfg/kubelet
KUBELET_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.187.131 \
--kubeconfig=/k8s/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/k8s/kubernetes/cfg/bootstrap.kubeconfig \
--config=/k8s/kubernetes/cfg/kubelet.config \
--cert-dir=/k8s/kubernetes/ssl \
--pod-infra-container-image=192.168.187.134/k8s/pause-amd64:3.0"
Create the kubelet systemd unit file
[root@node1 ~]# vi /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service
[Service]
EnvironmentFile=/k8s/kubernetes/cfg/kubelet
ExecStart=/k8s/kubernetes/bin/kubelet $KUBELET_OPTS
Restart=on-failure
KillMode=process
[Install]
WantedBy=multi-user.target
Bind the kubelet-bootstrap user to the system cluster role
kubectl create clusterrolebinding kubelet-bootstrap \
--clusterrole=system:node-bootstrapper \
--user=kubelet-bootstrap
Start the service
systemctl daemon-reload
systemctl enable kubelet
systemctl restart kubelet
Approve the kubelet CSR requests
CSR requests can be approved manually or automatically. The automatic approach is recommended, because from v1.8 onward the certificates issued on CSR approval can be rotated automatically; a sketch of the automatic setup follows, before the manual steps.
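The automatic path is purely an RBAC matter: bind the CSR-approving cluster roles that ship with Kubernetes to the bootstrap user and the node group (verify the role names in your cluster with kubectl get clusterroles | grep certificates):
# auto-approve the initial client CSR from kubelet-bootstrap
kubectl create clusterrolebinding node-client-auto-approve-csr \
  --clusterrole=system:certificates.k8s.io:certificatesigningrequests:nodeclient \
  --user=kubelet-bootstrap
# auto-approve renewal CSRs from already-registered nodes
kubectl create clusterrolebinding node-client-auto-renew-crt \
  --clusterrole=system:certificates.k8s.io:certificatesigningrequests:selfnodeclient \
  --group=system:nodes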
Manually approve a CSR request
List the CSRs:
[root@master ~]# kubectl get csr
NAME AGE REQUESTOR CONDITION
node-csr-P7FPCm4aKQguhmejD8v0mTTHBKBwMDTHkVnf7bE04uk 12s kubelet-bootstrap Pending
[root@master ~]# kubectl certificate approve node-csr-P7FPCm4aKQguhmejD8v0mTTHBKBwMDTHkVnf7bE04uk
certificatesigningrequest.certificates.k8s.io/node-csr-P7FPCm4aKQguhmejD8v0mTTHBKBwMDTHkVnf7bE04uk approved
[root@master ~]# kubectl get csr
NAME AGE REQUESTOR CONDITION
node-csr-P7FPCm4aKQguhmejD8v0mTTHBKBwMDTHkVnf7bE04uk 74s kubelet-bootstrap Approved,Issued
- Requesting User: the user that submitted the CSR, which the kube-apiserver authenticates and authorizes;
- Subject: the certificate information being requested;
- the certificate CN is system:node:<node name> and the Organization is system:nodes; the kube-apiserver's Node authorization mode grants that certificate the corresponding permissions. These fields can be inspected as shown below.
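For example, on the CSR approved above:
kubectl describe csr node-csr-P7FPCm4aKQguhmejD8v0mTTHBKBwMDTHkVnf7bE04uk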
Check the cluster status
[root@master ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
192.168.187.131 Ready <none> 22h v1.13.11
192.168.187.132 Ready <none> 9m9s v1.13.11
192.168.187.133 Ready <none> 6m50s v1.13.11
Deploying the kube-proxy component
kube-proxy runs on every node; it watches the apiserver for changes to services and Endpoints and creates forwarding rules to load-balance traffic to services.
Create the kube-proxy configuration file (on the other nodes, change hostname-override to that node's IP)
vi /k8s/kubernetes/cfg/kube-proxy
KUBE_PROXY_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.187.131 \
--cluster-cidr=20.0.0.0/24 \
--kubeconfig=/k8s/kubernetes/cfg/kube-proxy.kubeconfig"
- bindAddress: the listen address;
- clientConnection.kubeconfig: the kubeconfig file used to connect to the apiserver;
- clusterCIDR: kube-proxy uses --cluster-cidr to distinguish traffic inside the cluster from traffic outside it; kube-proxy only SNATs requests to Service IPs when --cluster-cidr or --masquerade-all is specified;
- hostnameOverride: must match the kubelet's value, otherwise kube-proxy will not find this Node after it starts and will not create any forwarding rules;
- mode: use ipvs mode (see the note after this list);
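Note that the flags shown above actually leave kube-proxy in its default iptables mode. To run in ipvs mode as these notes describe, the kernel modules must be loaded and the mode set explicitly; a sketch, assuming the ip_vs modules are available on the node:
# load the ipvs kernel modules
for m in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack_ipv4; do modprobe $m; done
# then append to KUBE_PROXY_OPTS in /k8s/kubernetes/cfg/kube-proxy:
#   --proxy-mode=ipvs --masquerade-all=true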
Create the kube-proxy systemd unit file
vi /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Proxy
After=network.target
[Service]
EnvironmentFile=-/k8s/kubernetes/cfg/kube-proxy
ExecStart=/k8s/kubernetes/bin/kube-proxy $KUBE_PROXY_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
Start the service
systemctl daemon-reload
systemctl enable kube-proxy
systemctl restart kube-proxy
[root@master ~]# systemctl status kube-proxy
● kube-proxy.service - Kubernetes Proxy
Loaded: loaded (/usr/lib/systemd/system/kube-proxy.service; enabled; vendor preset: disabled)
Active: active (running) since 三 2019-10-09 23:09:14 CST; 2min 3s ago
Main PID: 5525 (kube-proxy)
Tasks: 0
Memory: 33.0M
CGroup: /system.slice/kube-proxy.service
‣ 5525 /k8s/kubernetes/bin/kube-proxy --logtostderr=true --v=4 --hostname-override=192.168.187.131 --cluster-cidr=20.0.0.0/24 --k...
Configuring a Kubernetes secret to pull from the private registry
1. Log in to the Harbor registry
[root@master ~]# docker login 192.168.187.134
Username: admin
Password:
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
After entering the password the login succeeds; the stored credentials can then be inspected in Docker's config file:
[root@master ~]# cat ./.docker/config.json |base64
ewoJImF1dGhzIjogewoJCSIxOTIuMTY4LjE4Ny4xMzQiOiB7CgkJCSJhdXRoIjogIllXUnRhVzQ2
WVdKalFERXlNMEU9IgoJCX0KCX0sCgkiSHR0cEhlYWRlcnMiOiB7CgkJIlVzZXItQWdlbnQiOiAi
RG9ja2VyLUNsaWVudC8xOS4wMy4yIChsaW51eCkiCgl9Cn0=
- Generate the secret
Create secret.yaml
vi secret.yaml
apiVersion: v1
kind: Secret
metadata:
  namespace: default
  name: harbor
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: ewoJImF1dGhzIjogewoJCSIxOTIuMTY4LjE4Ny4xMzQiOiB7CgkJCSJhdXRoIjogIllXUnRhVzQ2WVdKalFERXlNMEU9IgoJCX0KCX0sCgkiSHR0cEhlYWRlcnMiOiB7CgkJIlVzZXItQWdlbnQiOiAiRG9ja2VyLUNsaWVudC8xOS4wMy4yIChsaW51eCkiCgl9Cn0=
Publish the secret to the Kubernetes cluster
[root@master ~]# kubectl create -f secret.yaml
secret/harbor created
Create the nginx application
Create nginx.yaml
vi nginx.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: 192.168.187.134/public/nginx:latest
        ports:
        - containerPort: 80
      imagePullSecrets:
      - name: harbor
Publish nginx.yaml to the Kubernetes cluster
[root@master ~]# kubectl create -f nginx.yaml
deployment.extensions/nginx-deployment created
[root@master ~]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-deployment-84bb5596c-bnb6x 1/1 Running 0 2m23s 172.17.0.2 192.168.187.133 <none> <none>
nginx-deployment-84bb5596c-jkt4h 1/1 Running 0 2m23s 172.17.0.2 192.168.187.132 <none> <none>
nginx-deployment-84bb5596c-zvmhq 1/1 Running 0 2m24s 172.17.0.3 192.168.187.133 <none> <none>
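As a final check, the deployment can be exposed through a NodePort and reached from outside; the port number is assigned by Kubernetes, so substitute it into the curl:
kubectl expose deployment nginx-deployment --port=80 --type=NodePort
kubectl get svc nginx-deployment
curl http://192.168.187.132:<nodeport>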