2022-03-24 day110 Kubernetes binary installation

01 Node Initialization

Configure the virtual machine network interfaces

Create 5 virtual machines

Chapter 1 System Initialization (run on every host)
1. Install common tools
Run on all nodes
yum install -y tree vim wget bash-completion bash-completion-extras lrzsz net-tools sysstat iotop iftop unzip telnet ntpdate git

2. Disable the firewall and SELinux
Run on all nodes
sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
setenforce 0
systemctl stop firewalld NetworkManager
systemctl disable firewalld NetworkManager

3. Set the time zone
Run on all nodes
\cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime -rf

4. Disable swap
Run on all nodes
swapoff -a
sed -i '/swap/d' /etc/fstab

5. Configure time synchronization
Run on all nodes

echo "*/5 * * * * ntpdate time1.aliyun.com >/dev/null 2>&1" >> /etc/crontab
service crond restart
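To confirm the sync actually works before relying on the cron entry, a quick manual check can be run on any node (assuming time1.aliyun.com is reachable over NTP):

ntpdate time1.aliyun.com
date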

6. Configure hostname resolution (/etc/hosts)
Run on all nodes

cat > /etc/hosts <<EOF
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.91.18 master-1
192.168.91.19 master-2
192.168.91.20 master-3
192.168.91.21 node-1
192.168.91.22 node-2
EOF

7. Tune kernel parameters
Run on all nodes

cat >/etc/sysctl.d/kubernetes.conf <<EOF
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
vm.swappiness=0
fs.file-max=52706963
fs.nr_open=52706963
EOF
sysctl --system
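Note (an extra step, not in the original notes): on a fresh CentOS host the net.bridge.bridge-nf-call-* keys only exist once the br_netfilter kernel module is loaded, so sysctl may warn about them until it is loaded. A minimal sketch:

modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf   # load automatically on boot
sysctl --system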

8. Set up passwordless SSH login
Run on all master nodes

yum install -y sshpass
ssh-keygen -t rsa -P "" -f /root/.ssh/id_rsa
for ip in {master-1,master-2,master-3,node-1,node-2};do sshpass -p123 ssh-copy-id -p 22 ${ip} -o StrictHostKeyChecking=no;done
for ip in {master-1,master-2,master-3,node-1,node-2};do ssh ${ip} hostname;done   

9. kubectl command completion

Run on all master nodes

yum install bash-completion -y
source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)
kubectl completion bash >/etc/bash_completion.d/kubectl

02 Keepalived Installation and Configuration

Deploy keepalived
1. Configuration steps on master-1

yum install -y keepalived
cat >/etc/keepalived/keepalived.conf <<EOF
global_defs {
    router_id master-1
}
vrrp_script CheckMaster {
    script "curl -k https://192.168.91.254:6443"
    interval 3
    timeout 9
    fall 2
    rise 2
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 50
    priority 150
    advert_int 1
    nopreempt
    authentication {
        auth_type PASS
        auth_pass 111111
    }

    virtual_ipaddress {
        192.168.91.254/24 dev eth0
    }

    track_script {
        CheckMaster
    }
}
EOF
systemctl enable keepalived && systemctl restart keepalived
service keepalived status

2. Configuration steps on master-2

yum install -y keepalived
cat >/etc/keepalived/keepalived.conf <<EOF
global_defs {
    router_id master-2
}
vrrp_script CheckMaster {
    script "curl -k https://192.168.91.254:6443"
    interval 3
    timeout 9
    fall 2
    rise 2
}

vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 50
    priority 100
    advert_int 1
    nopreempt
    authentication {
        auth_type PASS
        auth_pass 111111
    }

    virtual_ipaddress {
        192.168.91.254/24 dev eth0
    }

    track_script {
        CheckMaster
    }
}
EOF
systemctl enable keepalived && systemctl restart keepalived
service keepalived status

3. Configuration steps on master-3

yum install -y keepalived
cat >/etc/keepalived/keepalived.conf <<EOF
global_defs {
    router_id master-3
}
vrrp_script CheckMaster {
    script "curl -k https://192.168.91.254:6443" 
    interval 3
    timeout 9
    fall 2
    rise 2
}

vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 50
    priority 50
    advert_int 1
    nopreempt
    authentication {
        auth_type PASS
        auth_pass 111111
    }

    virtual_ipaddress {
        192.168.91.254/24 dev eth0
    }

    track_script {
        CheckMaster
    }
}
EOF
systemctl enable keepalived && systemctl restart keepalived
service keepalived status

Check the keepalived virtual IP

vrrp_script CheckMaster {
    script "curl -k https://192.168.91.254:6443"

This is the health-check script: if the curl check against the apiserver VIP fails, the node is considered unhealthy and gives up the VIP.
To test VIP failover:
systemctl stop keepalived.service
Whichever node currently holds the VIP is the one answering for the apiserver.
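A quick way to see which master currently owns the VIP (a minimal check, using eth0 and the VIP 192.168.91.254 from the configuration above); run it on each master before and after stopping keepalived to watch the address move:

ip addr show eth0 | grep 192.168.91.254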

03 Generate the etcd Certificates

Generating certificates -- important
This only needs to be done on master-1; after the certificates are generated, copy them to the other nodes.

Chapter 1 Install the certificate generation tools

mkdir /soft && cd /soft
#wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
#wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
#wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
Download the certificate generation tools
chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
Make them executable
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo
Move them into place and rename them

If you cannot download them, they are also shared in the course QQ group.

Certificates are generated from configuration files, so write the configuration first.

1. Create the directory
mkdir /root/etcd
cd /root/etcd
This directory is dedicated to the etcd certificate configuration files.

2. CA certificate configuration file

cat >/root/etcd/ca-config.json<<'EOF'
{
  "signing": {
    "default": {
      "expiry": "87600h" #证书过期时间
    },
    "profiles": {
      "www": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF

Note: "expiry": "87600h" sets the certificate lifetime to 10 years. (JSON does not allow comments, so keep notes like this out of the file itself.)

3. Create the CA certificate signing request (CSR) file

cat > /root/etcd/ca-csr.json << 'EOF'
{
  "CN": "etcd CA",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing"
    }
  ]
}
EOF

4. Create the etcd server certificate request (CSR) file
Add all of the master IPs (and the VIP) to the hosts list of the CSR file.

cat > /root/etcd/server-csr.json << 'EOF'
{
  "CN": "etcd",
  "hosts": [
    "master-1",
    "master-2",
    "master-3",
    "192.168.91.18",
    "192.168.91.19",
    "192.168.91.20",
    "192.168.91.254"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing"
    }
  ]
}
EOF

The hosts list acts as a whitelist: only the names and IPs written here are valid identities for this certificate.

5. Generate the CA certificate

cd /root/etcd/
cfssl gencert -initca ca-csr.json | cfssljson -bare ca

Check

[root@master-1 ~/etcd]# ll
total 24
-rw-r--r-- 1 root root  277 Jul 23 17:47 ca-config.json
-rw-r--r-- 1 root root  956 Jul 23 17:51 ca.csr
-rw-r--r-- 1 root root  165 Jul 23 17:48 ca-csr.json
-rw------- 1 root root 1675 Jul 23 17:51 ca-key.pem
-rw-r--r-- 1 root root 1265 Jul 23 17:51 ca.pem
-rw-r--r-- 1 root root  290 Jul 23 17:50 server-csr.json

6. Generate the etcd server certificate

cd /root/etcd/
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server

Check

[root@master-1 ~/etcd]# ll
total 36
-rw-r--r-- 1 root root  277 Jul 23 17:47 ca-config.json     # CA signing configuration
-rw-r--r-- 1 root root  165 Jul 23 17:48 ca-csr.json        # CA certificate signing request
-rw------- 1 root root 1675 Jul 23 17:51 ca-key.pem         # CA private key
-rw-r--r-- 1 root root  956 Jul 23 17:51 ca.csr             # generated CA CSR
-rw-r--r-- 1 root root 1265 Jul 23 17:51 ca.pem             # CA certificate

-rw-r--r-- 1 root root 1054 Jul 23 17:52 server.csr
-rw-r--r-- 1 root root  290 Jul 23 17:50 server-csr.json    # defines which hosts may connect to etcd
-rw------- 1 root root 1675 Jul 23 17:52 server-key.pem     # key for clients that need to connect to etcd
-rw-r--r-- 1 root root 1379 Jul 23 17:52 server.pem         # certificate for clients that need to connect to etcd
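To confirm which hosts actually made it into the signed certificate's whitelist, the certificate can be inspected with the cfssl-certinfo tool installed earlier (a minimal check):

cfssl-certinfo -cert /root/etcd/server.pem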

The CA certificate is like an official seal: every certificate defined here must be signed by it.

Certificates and configuration involved in the k8s components (diagram omitted).

04 Generate the API Server Certificates

Chapter 1 Create the CA certificate
1. Create the certificate directory

mkdir /root/kubernetes/
cd /root/kubernetes/

2. Create the CA configuration file

cat > /root/kubernetes/ca-config.json << 'EOF'
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF

3. Create the CA certificate signing request file

cat > /root/kubernetes/ca-csr.json <<'EOF'
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

4. Generate the CA certificate and key pair

cfssl gencert -initca ca-csr.json | cfssljson -bare ca

Chapter 2 Create the API Server certificate
1. Create the API Server certificate request (CSR) file

cat > /root/kubernetes/server-csr.json << 'EOF'
{
  "CN": "kubernetes",
  "hosts": [
    "10.0.0.1",        
    "127.0.0.1",
    "10.0.0.2",
    "192.168.91.18",
    "192.168.91.19",
    "192.168.91.20",
    "192.168.91.21",
    "192.168.91.22",
    "192.168.91.254",
    "master-1",
    "master-2",
    "master-3",
    "node-1",
    "node-2",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

2. Generate the API Server certificate

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server

3. Check

[root@master-1 ~/kubernetes]# ll
total 36
-rw-r--r-- 1 root root  284 Mar 23 11:31 ca-config.json
-rw-r--r-- 1 root root 1001 Mar 23 11:31 ca.csr
-rw-r--r-- 1 root root  208 Mar 23 11:31 ca-csr.json
-rw------- 1 root root 1675 Mar 23 11:31 ca-key.pem
-rw-r--r-- 1 root root 1359 Mar 23 11:31 ca.pem
-rw-r--r-- 1 root root 1358 Mar 23 11:35 server.csr
-rw-r--r-- 1 root root  633 Mar 23 11:35 server-csr.json
-rw------- 1 root root 1679 Mar 23 11:35 server-key.pem
-rw-r--r-- 1 root root 1724 Mar 23 11:35 server.pem

05 Generate the kube-proxy Certificate

Create the kube-proxy certificate
Note: it is signed with the same CA used for the API Server certificate.

1. Create the kube-proxy certificate request (CSR) file

cd /root/kubernetes/
cat > kube-proxy-csr.json  << 'EOF' 
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

2. Generate the kube-proxy certificate

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

3. Check

[root@master-1 ~/kubernetes]# ll
total 52
-rw-r--r-- 1 root root  284 Mar 23 11:31 ca-config.json
-rw-r--r-- 1 root root 1001 Mar 23 11:31 ca.csr
-rw-r--r-- 1 root root  208 Mar 23 11:31 ca-csr.json
-rw------- 1 root root 1675 Mar 23 11:31 ca-key.pem
-rw-r--r-- 1 root root 1359 Mar 23 11:31 ca.pem
-rw-r--r-- 1 root root 1009 Mar 23 11:42 kube-proxy.csr
-rw-r--r-- 1 root root  230 Mar 23 11:41 kube-proxy-csr.json
-rw------- 1 root root 1675 Mar 23 11:42 kube-proxy-key.pem    # the kube-proxy startup config requires this certificate
-rw-r--r-- 1 root root 1403 Mar 23 11:42 kube-proxy.pem
-rw-r--r-- 1 root root 1358 Mar 23 11:35 server.csr
-rw-r--r-- 1 root root  633 Mar 23 11:35 server-csr.json
-rw------- 1 root root 1679 Mar 23 11:35 server-key.pem
-rw-r--r-- 1 root root 1724 Mar 23 11:35 server.pem

06 Install and Deploy etcd

Chapter 1 Deploy etcd
1. Download the installation files
Run on master-1 only; after installation, copy the binaries to the other nodes

cd /soft
#wget https://github.com/etcd-io/etcd/releases/download/v3.3.10/etcd-v3.3.10-linux-amd64.tar.gz
tar -xvf etcd-v3.3.10-linux-amd64.tar.gz
cd etcd-v3.3.10-linux-amd64/
cp etcd etcdctl /usr/local/bin/
etcdctl -v
scp /usr/local/bin/etcd* master-2:/usr/local/bin/
scp /usr/local/bin/etcd* master-3:/usr/local/bin/
ssh master-2 etcdctl -v
ssh master-3 etcdctl -v

2. Edit the etcd configuration file

Explanation of the configuration options:

ETCD_NAME                         node name; if there are multiple nodes, each node must use its own name
ETCD_DATA_DIR                     data directory
ETCD_LISTEN_PEER_URLS             cluster (peer) listen address
ETCD_LISTEN_CLIENT_URLS           client listen address
ETCD_INITIAL_ADVERTISE_PEER_URLS  peer advertise address
ETCD_ADVERTISE_CLIENT_URLS        client advertise address
ETCD_INITIAL_CLUSTER              cluster member addresses, comma-separated when there are multiple nodes
ETCD_INITIAL_CLUSTER_TOKEN        cluster token
ETCD_INITIAL_CLUSTER_STATE        state when joining the cluster: "new" for a new cluster, "existing" to join an existing one

Run on master-1

mkdir -p /etc/etcd/{cfg,ssl}
cat >/etc/etcd/cfg/etcd.conf<<'EOF'
#[Member]
ETCD_NAME="master-1"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.91.18:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.91.18:2379,http://192.168.91.18:2390"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.91.18:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.91.18:2379"
ETCD_INITIAL_CLUSTER="master-1=https://192.168.91.18:2380,master-2=https://192.168.91.19:2380,master-3=https://192.168.91.20:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF

Port 2380 is used for etcd member-to-member (peer) traffic; clients connect on port 2379.

Run on master-2

mkdir -p /etc/etcd/{cfg,ssl}
cat >/etc/etcd/cfg/etcd.conf<<'EOF'
#[Member]
ETCD_NAME="master-2"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.91.19:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.91.19:2379,http://192.168.91.19:2390"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.91.19:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.91.19:2379"
ETCD_INITIAL_CLUSTER="master-1=https://192.168.91.18:2380,master-2=https://192.168.91.19:2380,master-3=https://192.168.91.20:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF

Run on master-3

mkdir -p /etc/etcd/{cfg,ssl}
cat >/etc/etcd/cfg/etcd.conf<<'EOF'
#[Member]
ETCD_NAME="master-3"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.91.20:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.91.20:2379,http://192.168.91.20:2390"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.91.20:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.91.20:2379"
ETCD_INITIAL_CLUSTER="master-1=https://192.168.91.18:2380,master-2=https://192.168.91.19:2380,master-3=https://192.168.91.20:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF

3. Copy the certificates into place

Run on master-1

mkdir -p /etc/etcd/ssl/
\cp /root/etcd/*pem /etc/etcd/ssl/ -rf
ll /etc/etcd/ssl/
# copy the etcd certificates to every node
for i in master-2 master-3 node-1 node-2;do ssh $i mkdir -p /etc/etcd/{cfg,ssl};done
for i in master-2 master-3 node-1 node-2;do scp /etc/etcd/ssl/* $i:/etc/etcd/ssl/;done
for i in master-2 master-3 node-1 node-2;do ssh $i ls /etc/etcd/ssl;done

4. Systemd unit file (run on all three masters)

cat > /usr/lib/systemd/system/etcd.service << 'EOF'
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
[Service]
Type=notify
EnvironmentFile=/etc/etcd/cfg/etcd.conf
ExecStart=/usr/local/bin/etcd \
--name=${ETCD_NAME} \
--data-dir=${ETCD_DATA_DIR} \
--listen-peer-urls=${ETCD_LISTEN_PEER_URLS} \
--listen-client-urls=${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
--advertise-client-urls=${ETCD_ADVERTISE_CLIENT_URLS} \
--initial-advertise-peer-urls=${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
--initial-cluster=${ETCD_INITIAL_CLUSTER} \
--initial-cluster-token=${ETCD_INITIAL_CLUSTER_TOKEN} \
--initial-cluster-state=${ETCD_INITIAL_CLUSTER_STATE} \
--cert-file=/etc/etcd/ssl/server.pem \
--key-file=/etc/etcd/ssl/server-key.pem \
--peer-cert-file=/etc/etcd/ssl/server.pem \
--peer-key-file=/etc/etcd/ssl/server-key.pem \
--trusted-ca-file=/etc/etcd/ssl/ca.pem \
--peer-trusted-ca-file=/etc/etcd/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF

Note: EnvironmentFile=/etc/etcd/cfg/etcd.conf points at the configuration file created above.

5. Start etcd (run on all three masters; the first member blocks while waiting for the others, so start them in quick succession)

systemctl daemon-reload
systemctl start etcd
systemctl status etcd
systemctl enable etcd

6. Check etcd cluster health

etcdctl \
--ca-file=/etc/etcd/ssl/ca.pem \
--cert-file=/etc/etcd/ssl/server.pem \
--key-file=/etc/etcd/ssl/server-key.pem \
--endpoints="https://192.168.91.18:2379" \
cluster-health

The certificates used to start etcd must also be supplied when querying it; run this on master-1.
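As a supplement, you can confirm the port split described earlier (2380 for peer traffic, 2379 for clients) and list the cluster members; a minimal sketch using the same certificates:

netstat -lntup | egrep "2379|2380"
etcdctl \
--ca-file=/etc/etcd/ssl/ca.pem \
--cert-file=/etc/etcd/ssl/server.pem \
--key-file=/etc/etcd/ssl/server-key.pem \
--endpoints="https://192.168.91.18:2379" \
member list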

07 Install and Deploy the Flannel Plugin

Chapter 1 Install and configure Flannel
Install and configure on master-1, then copy to the other nodes
1. Download the Flannel binaries

cd /soft
wget https://github.com/coreos/flannel/releases/download/v0.11.0/flannel-v0.11.0-linux-amd64.tar.gz
tar xvf flannel-v0.11.0-linux-amd64.tar.gz
mv flanneld mk-docker-opts.sh /usr/local/bin/
for i in master-2 master-3 node-1 node-2;do scp /usr/local/bin/flanneld $i:/usr/local/bin/;done
for i in master-2 master-3 node-1 node-2;do scp /usr/local/bin/mk-docker-opts.sh $i:/usr/local/bin/;done

Note: strictly speaking the master nodes do not need the flannel binaries or configuration, only the node nodes do, but copying them everywhere does no harm.

2. Create the Flannel configuration file

mkdir -p /etc/flannel
cat > /etc/flannel/flannel.cfg <<'EOF'
FLANNEL_OPTIONS="-etcd-endpoints=https://192.168.91.18:2379,https://192.168.91.19:2379,https://192.168.91.20:2379 \
-etcd-cafile=/etc/etcd/ssl/ca.pem \
-etcd-certfile=/etc/etcd/ssl/server.pem \
-etcd-keyfile=/etc/etcd/ssl/server-key.pem"
EOF
for i in master-2 master-3 node-1 node-2;do ssh $i mkdir -p /etc/flannel;done
for i in master-2 master-3 node-1 node-2;do scp /etc/flannel/flannel.cfg $i:/etc/flannel/flannel.cfg;done
Note: the -etcd-endpoints option above is how flannel connects to the etcd cluster.

3. Create the Flannel systemd unit

cat > /usr/lib/systemd/system/flanneld.service << 'EOF'
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service
[Service]
Type=notify
EnvironmentFile=/etc/flannel/flannel.cfg
ExecStart=/usr/local/bin/flanneld --ip-masq $FLANNEL_OPTIONS
ExecStartPost=/usr/local/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
for i in master-2 master-3 node-1 node-2;do scp  /usr/lib/systemd/system/flanneld.service $i:/usr/lib/systemd/system/flanneld.service;done
Note: EnvironmentFile=/etc/flannel/flannel.cfg points at the configuration file created above.

Overall flow (see diagram): mk-docker-opts.sh writes /run/flannel/subnet.env, which docker reads when it starts.

Note: flannel cannot start yet because etcd does not contain the network configuration; it has to be written in manually.
4. Write the Pod network configuration into etcd
Explanation:
172.17.0.0/16 is the IP range for Kubernetes Pods.
This range must match the --cluster-cidr parameter of kube-controller-manager.

Write it:

etcdctl \
--ca-file=/etc/etcd/ssl/ca.pem \
--cert-file=/etc/etcd/ssl/server.pem \
--key-file=/etc/etcd/ssl/server-key.pem \
--endpoints="https://192.168.91.18:2379,https://192.168.91.19:2379,https://192.168.91.20:2379" \
set /coreos.com/network/config \
'{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'

Read it back:

etcdctl \
--ca-file=/etc/etcd/ssl/ca.pem \
--cert-file=/etc/etcd/ssl/server.pem \
--key-file=/etc/etcd/ssl/server-key.pem \
--endpoints="https://192.168.91.18:2379,https://192.168.91.19:2379,https://192.168.91.20:2379" \
get /coreos.com/network/config 

Note: run the write from a master node; the key is stored in the shared etcd cluster, so writing it once is enough.

5. Start Flannel
Run on all nodes

systemctl daemon-reload
systemctl start flanneld.service 
systemctl status flanneld.service 
systemctl enable flanneld.service 

6. Check

Each node (including the masters) is assigned a different subnet; check the flannel device:

ip a | grep flannel

Check that the subnet.env file was generated and matches the flannel subnet; docker reads this file when it starts:

cat /run/flannel/subnet.env

08 Install and Deploy Docker

Chapter 1 Install and configure Docker on the Node nodes
Note: all of the following steps are performed on the node nodes (node-1 and node-2) only.

1. Install docker-ce

cd /etc/yum.repos.d/
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum makecache fast
yum -y install docker-ce-19.03.15 docker-ce-cli-19.03.15

2. Create the registry mirror configuration

mkdir /etc/docker -p
cat > /etc/docker/daemon.json <<EOF
    {
      "registry-mirrors": ["https://ig2l319y.mirror.aliyuncs.com"],
      "exec-opts": ["native.cgroupdriver=systemd"]
    }
EOF

3. Modify the systemd unit file

cat >/usr/lib/systemd/system/docker.service << 'EOF'
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target
[Service]
Type=notify
EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s
[Install]
WantedBy=multi-user.target
EOF

4. Start docker

systemctl daemon-reload
systemctl start docker.service 
systemctl status docker.service 
systemctl enable docker.service 

5. Check that the docker0 and flannel.1 devices are on the same subnet
ip a

Expected result: docker0 should sit inside the subnet that flannel assigned to this node.
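A slightly more targeted check than plain "ip a" (a sketch): both addresses should fall inside the per-node subnet recorded in /run/flannel/subnet.env.

ip -4 addr show flannel.1 | grep inet
ip -4 addr show docker0 | grep inet
cat /run/flannel/subnet.env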

09 Install and Deploy kube-apiserver

Chapter 1 Deploy the API Server
Run on master-1; when done, copy the results to the other nodes

1. Download the k8s server binary package

cd /soft
tar xvf kubernetes-server-linux-amd64.tar.gz
cd kubernetes/server/bin/
cp kube-scheduler kube-apiserver kube-controller-manager kubectl /usr/local/bin/

2. Copy the binaries to the other master nodes

for i in master-2 master-3;do scp /usr/local/bin/kube* $i:/usr/local/bin/;done

3. Create the certificate directories and copy the certificates into place

mkdir -p /etc/kubernetes/{cfg,ssl}
cp /root/kubernetes/*.pem /etc/kubernetes/ssl/

cfg is the configuration directory
ssl is the certificate directory

Note:
every node gets the certificates
only the master nodes get the control-plane binaries

4. Send the certificates to all other nodes

for i in master-2 master-3 node-1 node-2;do scp -r /etc/kubernetes $i:/etc/kubernetes;done
for i in master-2 master-3 node-1 node-2;do echo $i "---------->"; ssh $i ls /etc/kubernetes/ssl;done

5. Create the TLS Bootstrapping Token
Purpose:
TLS bootstrapping lets the kubelet first connect to the apiserver as a predefined low-privilege user
and then request a certificate from the apiserver; the kubelet's certificate is signed dynamically by the apiserver.
The token can be any string containing 128 bits of entropy, generated with a secure random number generator.

Generate a random token:

[root@master-1 ~]# cd /soft/
[root@master-1 /soft]# head -c 16 /dev/urandom | od -An -t x | tr -d ' '
4729a56d65dd08659ed0b2c0c948b26a

Write the token file:

cat > /etc/kubernetes/cfg/token.csv << 'EOF'
4729a56d65dd08659ed0b2c0c948b26a,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF

Copy it to the other master nodes:

for i in master-2 master-3;do scp /etc/kubernetes/cfg/token.csv $i:/etc/kubernetes/cfg/token.csv;done

6. Create the apiserver configuration file

### Explanation of the options
#--logtostderr                  log to stderr
#--v                            log level
#--etcd-servers                 etcd cluster addresses
#--bind-address                 listen address
#--secure-port                  https secure port
#--advertise-address            address advertised to the cluster
#--allow-privileged             allow privileged containers
#--service-cluster-ip-range     Service virtual IP range
#--enable-admission-plugins     admission control plugins
#--authorization-mode           authorization mode, enables RBAC
#--enable-bootstrap-token-auth  enable the TLS bootstrap feature
#--token-auth-file              token file
#--service-node-port-range      default NodePort range for Services
###

Create it:

cat >/etc/kubernetes/cfg/kube-apiserver.cfg <<EOF
KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--insecure-bind-address=0.0.0.0 \
--insecure-port=8080 \
--etcd-servers=https://192.168.91.18:2379,https://192.168.91.19:2379,https://192.168.91.20:2379 \
--bind-address=0.0.0.0 \
--secure-port=6443 \
--advertise-address=0.0.0.0 \
--allow-privileged=true \
--service-cluster-ip-range=10.0.0.0/24 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--enable-bootstrap-token-auth \
--token-auth-file=/etc/kubernetes/cfg/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/etc/kubernetes/ssl/server.pem \
--tls-private-key-file=/etc/kubernetes/ssl/server-key.pem \
--client-ca-file=/etc/kubernetes/ssl/ca.pem \
--service-account-key-file=/etc/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/etc/etcd/ssl/ca.pem \
--etcd-certfile=/etc/etcd/ssl/server.pem \
--etcd-keyfile=/etc/etcd/ssl/server-key.pem"
EOF

Copy it to the other master nodes:

for i in master-2 master-3;do scp /etc/kubernetes/cfg/kube-apiserver.cfg $i:/etc/kubernetes/cfg/kube-apiserver.cfg;done

7. Create the kube-apiserver systemd unit

cat >/usr/lib/systemd/system/kube-apiserver.service <<'EOF'
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/etc/kubernetes/cfg/kube-apiserver.cfg
ExecStart=/usr/local/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
for i in master-2 master-3;do scp /usr/lib/systemd/system/kube-apiserver.service $i:/usr/lib/systemd/system/kube-apiserver.service;done

8. Start the kube-apiserver service
Run on all masters

systemctl daemon-reload
systemctl start kube-apiserver.service
systemctl status kube-apiserver.service
systemctl enable kube-apiserver.service

9. Check the ports

[root@master-1 ~]# netstat -lntup|egrep "8080|6443"
tcp6       0      0 :::8080                 :::*                    LISTEN      8295/kube-apiserver 
tcp6       0      0 :::6443                 :::*                    LISTEN      8295/kube-apiserver 

--insecure-port=8080 is the insecure (plain HTTP) port
--secure-port=6443 is the secure (TLS) port
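A quick sanity check of both ports from a master node (a sketch; /version is normally served without authentication, though the secure port may return 403 depending on anonymous-access settings, and -k skips TLS verification):

curl http://127.0.0.1:8080/version
curl -k https://127.0.0.1:6443/version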

10. From a node, test that the VIP is reachable

[root@node-1 ~]# telnet 192.168.91.254 6443
Trying 192.168.91.254...
Connected to 192.168.91.254.
Escape character is '^]'.

192.168.91.254 is the VIP address

10 Install and Deploy kube-scheduler

Chapter 1 Install and deploy kube-scheduler
Configure on master-1, then copy to the other masters
1. Create the kube-scheduler configuration file

Option explanation:

--bind-address=0.0.0.0  bind address
--master                connect to the local apiserver (insecure port)
--leader-elect=true     cluster mode with leader election; the elected leader does the work while the other nodes stand by

Create it:

cat >/etc/kubernetes/cfg/kube-scheduler.cfg<<'EOF'
KUBE_SCHEDULER_OPTS="--logtostderr=true \
--v=4 \
--bind-address=0.0.0.0 \
--master=127.0.0.1:8080 \
--leader-elect"
EOF

2. Create the kube-scheduler systemd unit

cat >/usr/lib/systemd/system/kube-scheduler.service<<'EOF'
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=/etc/kubernetes/cfg/kube-scheduler.cfg
ExecStart=/usr/local/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

3. Copy to the other master nodes

for i in master-2 master-3;do scp /etc/kubernetes/cfg/kube-scheduler.cfg $i:/etc/kubernetes/cfg/kube-scheduler.cfg;done
for i in master-2 master-3;do scp /usr/lib/systemd/system/kube-scheduler.service $i:/usr/lib/systemd/system/kube-scheduler.service;done

4. Start kube-scheduler

systemctl daemon-reload
systemctl start kube-scheduler.service
systemctl status kube-scheduler.service
systemctl enable kube-scheduler.service

11 Install and Deploy kube-controller-manager

Chapter 1 Deploy kube-controller-manager
Deployment order:
etcd --> api server --> the rest
etcd --> flannel --> docker --> the rest
Configure on master-1, then copy to the other masters

Network plan:
Pod IP:     172.17.0.0/16
Cluster IP: 10.0.0.0/24
NodePort:   192.168.91.0/24, ports 30000-50000

1. Create the kube-controller-manager configuration file

Parameter explanation:

--master=127.0.0.1:8080     # address of the Master (local apiserver, insecure port)
--leader-elect              # leader election; the elected leader does the work while the other nodes stand by
--service-cluster-ip-range  # IP range assigned to kubernetes Services

Create it:

cat >/etc/kubernetes/cfg/kube-controller-manager.cfg<<'EOF'
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \
--v=4 \
--master=127.0.0.1:8080 \
--leader-elect=true \
--address=0.0.0.0 \
--service-cluster-ip-range=10.0.0.0/24 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
--root-ca-file=/etc/kubernetes/ssl/ca.pem \
--service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem"
EOF

2. Create the kube-controller-manager systemd unit

cat >/usr/lib/systemd/system/kube-controller-manager.service<<'EOF'
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/etc/kubernetes/cfg/kube-controller-manager.cfg
ExecStart=/usr/local/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF

3. Copy to the other master nodes

for i in master-2 master-3;do scp /etc/kubernetes/cfg/kube-controller-manager.cfg $i:/etc/kubernetes/cfg/kube-controller-manager.cfg;done
for i in master-2 master-3;do scp /usr/lib/systemd/system/kube-controller-manager.service $i:/usr/lib/systemd/system/kube-controller-manager.service;done

4. Start controller-manager

systemctl daemon-reload
systemctl start kube-controller-manager.service
systemctl status kube-controller-manager.service
systemctl enable kube-controller-manager.service

5. Check the component status (control-plane health)

[root@master-1 ~]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok                  
scheduler            Healthy   ok                  
etcd-0               Healthy   {"health":"true"}   
etcd-1               Healthy   {"health":"true"}   
etcd-2               Healthy   {"health":"true"}   
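Optionally, you can also see which master currently holds the leader lock for the controller-manager and scheduler; in this 1.15-era setup the lock lives in an annotation on an Endpoints object (a sketch):

kubectl -n kube-system get endpoints kube-controller-manager -o yaml | grep holderIdentity
kubectl -n kube-system get endpoints kube-scheduler -o yaml | grep holderIdentity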

12 Install and Deploy kubelet

Chapter 1 Deploy the kubelet service on the Node nodes
----------------- the following steps run on master-1 -----------------
1. Send the binaries the Node nodes need from master-1

cd /soft
scp kubernetes/server/bin/kubelet kubernetes/server/bin/kube-proxy node-1:/usr/local/bin/
scp kubernetes/server/bin/kubelet kubernetes/server/bin/kube-proxy node-2:/usr/local/bin/

2. Create the kubelet bootstrap.kubeconfig file on master-1
Explanation:
In Kubernetes, a kubeconfig file holds the information needed to access the cluster; in a TLS-enabled cluster every interaction with the cluster must be authenticated.
In production, authentication is usually certificate based, and the credentials it needs are stored in the kubeconfig file.

Script to create it:

mkdir /root/config
cd /root/config
cat > environment.sh <<'EOF'
# create the kubelet bootstrapping kubeconfig; this token must match /etc/kubernetes/cfg/token.csv
BOOTSTRAP_TOKEN=4729a56d65dd08659ed0b2c0c948b26a
KUBE_APISERVER="https://192.168.91.254:6443"

# set the cluster parameters
kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=bootstrap.kubeconfig

# set the client authentication parameters
kubectl config set-credentials kubelet-bootstrap \
--token=${BOOTSTRAP_TOKEN} \
--kubeconfig=bootstrap.kubeconfig

# set the context parameters
kubectl config set-context default \
--cluster=kubernetes \
--user=kubelet-bootstrap \
--kubeconfig=bootstrap.kubeconfig

# use the default context
kubectl config use-context default \
--kubeconfig=bootstrap.kubeconfig
EOF

3. Run the script and check

[root@master-1 ~/config]# bash environment.sh 
Cluster "kubernetes" set.
User "kubelet-bootstrap" set.
Context "default" created.
Switched to context "default".
[root@master-1 ~/config]# ll
total 8
-rw------- 1 root root 2168 Mar 23 15:50 bootstrap.kubeconfig
-rw-r--r-- 1 root root  778 Mar 23 15:50 environment.sh
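To sanity-check the result (the CA certificate is embedded and the bootstrap token is stored as the user credential), the generated kubeconfig can be inspected (a minimal check):

kubectl config view --kubeconfig=bootstrap.kubeconfig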

4. Send the generated bootstrap.kubeconfig to the node nodes

for i in node-1 node-2;do scp -r /root/config/bootstrap.kubeconfig $i:/etc/kubernetes/cfg/bootstrap.kubeconfig;done

5. Bind the kubelet-bootstrap user to the system cluster role (run on master-1)

kubectl create clusterrolebinding kubelet-bootstrap \
  --clusterrole=system:node-bootstrapper \
  --user=kubelet-bootstrap

-------------------------------- the following steps run on the node nodes --------------------------------
6. Create the kubelet parameter configuration file

# run on both node nodes
cat > /etc/kubernetes/cfg/kubelet.config << EOF
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: $(ifconfig eth0|awk 'NR==2{print $2}')
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS: ["10.0.0.2"]
clusterDomain: cluster.local.
failSwapOn: false
authentication:
  anonymous:
    enabled: true
EOF

kubelet workflow diagram (omitted).

7. Create the kubelet configuration file

cat >/etc/kubernetes/cfg/kubelet<< EOF
KUBELET_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=$(ifconfig eth0|awk 'NR==2{print $2}') \
--kubeconfig=/etc/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/etc/kubernetes/cfg/bootstrap.kubeconfig \
--config=/etc/kubernetes/cfg/kubelet.config \
--cert-dir=/etc/kubernetes/ssl \
--pod-infra-container-image=docker.io/kubernetes/pause:latest"
EOF

8. Create the kubelet systemd unit

cat >/usr/lib/systemd/system/kubelet.service<<'EOF'
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service
[Service]
EnvironmentFile=/etc/kubernetes/cfg/kubelet
ExecStart=/usr/local/bin/kubelet $KUBELET_OPTS
Restart=on-failure
KillMode=process
[Install]
WantedBy=multi-user.target
EOF

9. Start the kubelet service

systemctl daemon-reload
systemctl start kubelet.service
systemctl status kubelet.service
systemctl enable kubelet.service

----------- run on master-1 -----------
10. View and approve the CSR requests on the server side

[root@master-1 ~]# kubectl get csr
NAME                                                   AGE   REQUESTOR           CONDITION
node-csr-3gH30n_DdXfBcAQUmm682C6WLVOd4Kxv5glJ2Qtlykw   52s   kubelet-bootstrap   Pending
node-csr-rUzayzlIr4IkUM2vyzQDXqvXNMhGRYoeWLYvQK_XUYk   54s   kubelet-bootstrap   Pending

11. Approve the requests

kubectl certificate approve node-csr-3gH30n_DdXfBcAQUmm682C6WLVOd4Kxv5glJ2Qtlykw
kubectl certificate approve node-csr-rUzayzlIr4IkUM2vyzQDXqvXNMhGRYoeWLYvQK_XUYk

12. Check the requests again

[root@master-1 ~]# kubectl get csr
NAME                                                   AGE     REQUESTOR           CONDITION
node-csr-3gH30n_DdXfBcAQUmm682C6WLVOd4Kxv5glJ2Qtlykw   3m53s   kubelet-bootstrap   Approved,Issued
node-csr-rUzayzlIr4IkUM2vyzQDXqvXNMhGRYoeWLYvQK_XUYk   3m55s   kubelet-bootstrap   Approved,Issue
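If there are many pending requests, they can also be approved in one go rather than one at a time (a sketch, not part of the original steps):

kubectl get csr -o name | xargs kubectl certificate approve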

Note: the docker cgroup driver configuration must be adjusted,
otherwise kubelet will fail to start.

On the node nodes, change the docker daemon.json created earlier: docker's cgroup driver (systemd) does not match the one kubelet is configured with (cgroupfs), and the two must agree.
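A minimal sketch of the adjustment on each node node, assuming kubelet keeps cgroupDriver: cgroupfs from the kubelet.config above:

# /etc/docker/daemon.json on node-1 and node-2: switch the cgroup driver to cgroupfs
cat > /etc/docker/daemon.json <<EOF
{
  "registry-mirrors": ["https://ig2l319y.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=cgroupfs"]
}
EOF
systemctl restart docker
systemctl restart kubelet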

13. View the node nodes

[root@master-1 ~]# kubectl get nodes
NAME            STATUS   ROLES    AGE   VERSION
192.168.91.21   Ready    <none>   16s   v1.15.1
192.168.91.22   Ready    <none>   10s   v1.15.1

13 Install and Deploy kube-proxy

Chapter 1 Deploy the kube-proxy component
########## run on the master node ###############
1. Create the kube-proxy kubeconfig file

cd /root/config/
cat >env_proxy.sh<< 'EOF'
BOOTSTRAP_TOKEN=4729a56d65dd08659ed0b2c0c948b26a
KUBE_APISERVER="https://192.168.91.254:6443"
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config set-credentials kube-proxy \
  --client-certificate=/etc/kubernetes/ssl/kube-proxy.pem \
  --client-key=/etc/kubernetes/ssl/kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
EOF

2. Run the creation script

[root@master-1 ~/config]# bash env_proxy.sh 
Cluster "kubernetes" set.
User "kube-proxy" set.
Context "default" created.
Switched to context "default".
[root@master-1 ~/config]# ll
total 20
-rw------- 1 root root 2168 Mar 23 15:50 bootstrap.kubeconfig
-rw-r--r-- 1 root root  778 Mar 23 15:50 environment.sh
-rw-r--r-- 1 root root  691 Mar 23 16:43 env_proxy.sh
-rw------- 1 root root 6270 Mar 23 16:44 kube-proxy.kubeconfig

3. Send the generated kube-proxy.kubeconfig to the node nodes

for i in node-1 node-2;do scp -r /root/config/kube-proxy.kubeconfig $i:/etc/kubernetes/cfg/kube-proxy.kubeconfig;done

########## run on the node nodes ###############
4. Create the kube-proxy configuration file

cat >/etc/kubernetes/cfg/kube-proxy << EOF
KUBE_PROXY_OPTS="--logtostderr=true \
--v=4 \
--metrics-bind-address=0.0.0.0 \
--hostname-override=$(ifconfig eth0|awk 'NR==2{print $2}') \
--cluster-cidr=10.0.0.0/24 \
--kubeconfig=/etc/kubernetes/cfg/kube-proxy.kubeconfig"
EOF

5. Create the systemd unit

cat >/usr/lib/systemd/system/kube-proxy.service<<'EOF'
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=/etc/kubernetes/cfg/kube-proxy
ExecStart=/usr/local/bin/kube-proxy $KUBE_PROXY_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

6. Start the service

systemctl daemon-reload
systemctl start kube-proxy.service
systemctl status kube-proxy.service
systemctl enable kube-proxy.service

Check:
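A rough check on a node node (assuming the default kube-proxy ports: metrics on 10249, healthz on 10256):

systemctl is-active kube-proxy.service
netstat -lntup | grep kube-proxy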

14 Install and Deploy CoreDNS, Command Completion, and Node Labels

yum install bash-completion -y
source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)
kubectl completion bash >/etc/bash_completion.d/kubectl
kubectl apply -f coredns.yaml
kubectl label nodes 192.168.91.21  node-role.kubernetes.io/node=
kubectl label nodes 192.168.91.22  node-role.kubernetes.io/node=
kubectl get nodes
kubectl -n kube-system get pod
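Once the coredns pod is Running, in-cluster DNS resolution can be verified with a throwaway pod (a sketch; assumes the nodes can pull busybox:1.28):

kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- nslookup kubernetes.default.svc.cluster.local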

The coredns.yaml file:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
  labels:
      kubernetes.io/cluster-service: "true"
      addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: Reconcile
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: EnsureExists
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
  labels:
      addonmanager.kubernetes.io/mode: EnsureExists
data:
  Corefile: |
    .:53 {
        errors
        health
        kubernetes cluster.local. in-addr.arpa ip6.arpa {
            pods insecure
            upstream
            fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        #proxy . /etc/resolv.conf
        forward . 8.8.8.8 8.8.4.4
        cache 30
    }
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: coredns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  replicas: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: coredns
  template:
    metadata:
      labels:
        k8s-app: coredns
    spec:
      serviceAccountName: coredns
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
        - key: "CriticalAddonsOnly"
          operator: "Exists"
      containers:
      - name: coredns
        image: coredns/coredns:1.0.6
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: 1024Mi
          requests:
            cpu: 100m
            memory: 512Mi
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
      dnsPolicy: Default
      volumes:
        - name: config-volume
          configMap:
            name: coredns
            items:
            - key: Corefile
              path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: coredns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: coredns
  clusterIP: 10.0.0.2
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
  - name: metrics
    port: 9153
    protocol: TCP