Setting up a Kubernetes Cluster on CentOS 7.5

For easy identification, all commands to be executed are marked in bold italics.
References:
https://www.kubernetes.org.cn/4948.html
https://www.kubernetes.org.cn/5025.html

Environment

  • Servers
    Virtual IP: 192.168.3.88
    k8s-master: 192.168.3.80
    k8s-node1: 192.168.3.81
    k8s-node2: 192.168.3.82
    k8s-node3: 192.168.3.83
    k8s-storage1: 192.168.3.86
    docker-registry: 192.168.3.89

  • Base system
    Minimal install of CentOS-7-x86_64-Minimal-1810

The preparation work includes:

  • Updating the system
  • Disabling SELinux
  • Disabling the swap partition
  • Setting the time zone and synchronizing the clock
  • Upgrading the kernel

After the OS install completes, run the following commands to configure the base environment:

yum update -y
yum install wget net-tools yum-utils vim -y
Switch the yum repository to the Aliyun mirror:
-- back up the original repo file first
mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup
-- download the Aliyun repo file
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
-- disable the c7-media repository
yum-config-manager --disable c7-media
-- or edit /etc/yum.repos.d/CentOS-Media.repo with vim and set enabled to 0

  • Clock synchronization
    rm -rf /etc/localtime
    vim /etc/sysconfig/clock
    -- add the line Zone=Asia/Shanghai to the file
    ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
    reboot
    Run
    date -R
    and confirm the offset is +0800; if it is not, repeat the steps above.
    -- install the ntp service
    yum install ntp -y
    -- set the China time zone and enable synchronization
    timedatectl set-timezone Asia/Shanghai
    timedatectl set-ntp yes
    -- check the time to confirm it is in sync
    timedatectl
    Alternatively, the following commands also synchronize the clock:
    yum install -y ntpdate
    ntpdate -u ntp.api.bz
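The date -R check above can be scripted; a minimal sketch that forces the zone so the expected offset is unambiguous (China observes no DST, so Asia/Shanghai is always +0800):

```shell
# Force Asia/Shanghai for this one command and extract the UTC offset;
# on a correctly configured host the same grep works without the TZ override.
TZ=Asia/Shanghai date -R | grep -o '+0800'   # prints +0800
```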

  • Disable SELinux
    vim /etc/sysconfig/selinux
    Change the SELINUX= line (enforcing or permissive) to SELINUX=disabled

  • Disable SELinux and firewalld non-interactively
    systemctl stop firewalld
    systemctl disable firewalld
    setenforce 0
    sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
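The sed substitution can be rehearsed safely first; a minimal sketch run against a temp copy instead of the real /etc/selinux/config:

```shell
# Exercise the substitution on a throwaway copy so the result can be
# inspected before touching the real config file.
cfg=$(mktemp)
echo 'SELINUX=enforcing' > "$cfg"
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' "$cfg"
cat "$cfg"   # prints: SELINUX=disabled
rm -f "$cfg"
```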

  • Passwordless SSH from the master node to the other nodes
    First, on every machine, run
    ssh <the machine's own IP>
    exit

    Then on the master node run the following commands:
    ssh-keygen -t rsa
    ssh-copy-id 192.168.3.81
    ssh-copy-id 192.168.3.82
    ssh-copy-id 192.168.3.83
    Note: every machine must also be able to SSH to itself without a password.
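The per-node ssh-copy-id calls can be wrapped in a loop; a sketch, with the node list as an assumption to adjust for your cluster (the echo makes it a dry run; remove it to actually copy keys):

```shell
# Hypothetical node list; edit to match your cluster.
NODES="192.168.3.81 192.168.3.82 192.168.3.83"
for ip in $NODES; do
    echo ssh-copy-id "$ip"   # remove the echo to run for real
done
```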

  • Disable the swap partition
    swapoff -a
    yes | cp /etc/fstab /etc/fstab_bak
    cat /etc/fstab_bak |grep -v swap > /etc/fstab
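An alternative to stripping the swap line from /etc/fstab is to comment it out, which keeps it recoverable; a sketch exercised on a temp copy rather than the real file:

```shell
# Comment out (rather than delete) any fstab entry whose type field is swap.
fstab=$(mktemp)
cat > "$fstab" <<'EOF'
/dev/mapper/centos-root /    xfs  defaults 0 0
/dev/mapper/centos-swap swap swap defaults 0 0
EOF
# Prefix lines containing " swap " with # unless already commented
sed -i '/\sswap\s/s/^[^#]/#&/' "$fstab"
cat "$fstab"
rm -f "$fstab"
```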

  • Make bridged packets traverse iptables
    echo """
    vm.swappiness = 0
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    """ > /etc/sysctl.conf

    sysctl -p
    Note that the echo above overwrites /etc/sysctl.conf; back the file up first if it contains local settings.

  • Check the OS release and kernel version
    lsb_release -a
    uname -r
    If lsb_release is not found, install it:
    yum install -y redhat-lsb

  • Every machine in the cluster must have a unique MAC address, product UUID, and hostname; check with:
    cat /sys/class/dmi/id/product_uuid
    ip link
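To compare MAC addresses across hosts it helps to extract just the addresses from the ip link output; a sketch, using an inlined sample of that output for illustration:

```shell
# Pull MAC addresses out of `ip link`-style output with awk; on a real host
# pipe `ip link` directly into the same awk program.
sample='2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500
    link/ether 08:00:27:ab:cd:ef brd ff:ff:ff:ff:ff:ff'
echo "$sample" | awk '/link\/ether/ {print $2}'   # prints 08:00:27:ab:cd:ef
```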

  • Fixing “cannot stat /proc/sys/net/bridge/bridge-nf-call-ip6tables: No such file or directory”
    First install the related packages:
    yum install -y epel-release
    yum install -y conntrack ipvsadm ipset jq sysstat curl iptables
    Then run the configuration commands.
    Load the kernel modules:
    modprobe br_netfilter
    modprobe ip_vs
    Persist the settings as a sysctl drop-in:
    cat > /etc/sysctl.d/k8s.conf <<EOF
    net.bridge.bridge-nf-call-iptables=1
    net.bridge.bridge-nf-call-ip6tables=1
    net.ipv4.ip_forward=1
    vm.swappiness=0
    vm.overcommit_memory=1
    vm.panic_on_oom=0
    EOF

    sysctl --system
    Done!
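Note that modprobe only loads the modules for the current boot; to have them reloaded on every boot they can be listed in a modules-load.d file. A sketch, written to a temp file here (on a real host the path would be /etc/modules-load.d/k8s.conf, an assumption to adjust as needed):

```shell
# One module name per line; systemd-modules-load reads this at boot.
conf=$(mktemp)
cat > "$conf" <<'EOF'
br_netfilter
ip_vs
EOF
cat "$conf"
rm -f "$conf"
```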



Install Docker

  • Docker must be installed on every host:
    yum install -y yum-utils device-mapper-persistent-data lvm2
    yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo

    yum makecache fast
    yum install -y docker-ce
    Edit Docker's systemd unit file:
    sed -i "13i ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT" /usr/lib/systemd/system/docker.service
    Start Docker:
    systemctl daemon-reload
    systemctl enable docker
    systemctl start docker
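The "13i" address in the sed above assumes ExecStartPost must land exactly on line 13 of docker.service, which breaks if the unit file layout differs. A sketch that anchors on the ExecStart= line instead, exercised on a minimal copy of a unit file:

```shell
unit=$(mktemp)
cat > "$unit" <<'EOF'
[Service]
ExecStart=/usr/bin/dockerd
EOF
# Append the iptables rule right after whatever line starts with ExecStart=
sed -i '/^ExecStart=/a ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT' "$unit"
cat "$unit"
rm -f "$unit"
```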

Setting up a private docker-registry

Configure the Docker registry mirror (on the registry machine only; here 192.168.3.89):
mkdir -p /etc/docker
tee /etc/docker/daemon.json <<-'EOF'
{"registry-mirrors": ["https://7471d7b2.mirror.aliyuncs.com"]}
EOF
systemctl daemon-reload
systemctl restart docker
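A malformed daemon.json prevents the Docker daemon from starting, so it is worth validating it before the restart; a sketch run against a temp copy (python3 is assumed here; on CentOS 7, python -m json.tool works the same way):

```shell
cfg=$(mktemp)
echo '{"registry-mirrors": ["https://7471d7b2.mirror.aliyuncs.com"]}' > "$cfg"
# json.tool exits non-zero on invalid JSON, so && only fires when it parses
python3 -m json.tool < "$cfg" > /dev/null && echo "daemon.json: valid JSON"
rm -f "$cfg"
```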
Install the registry:
docker pull registry:latest
Download link: https://pan.baidu.com/s/1ZdmgnrYGVobc22FX__vYwg (extraction code: 69gq), then copy the file to the registry machine and load and start the image there (assuming the image file sits under /var/lib/docker):
docker load -i /var/lib/docker/k8s-repo-1.13.0
Run docker images to list the loaded images, then start the registry with:
docker run --restart=always -d -p 80:5000 --name repo harbor.io:1180/system/k8s-repo:v1.13.0
or
docker run --restart=always -d -p 80:5000 --privileged=true --log-driver=none --name registry -v /home/registrydata:/tmp/registry harbor.io:1180/system/k8s-repo:v1.13.0

Open http://192.168.3.89/v2/_catalog in a browser.

If the browser returns the repository catalog (a JSON list of image names), the service is working.

  • Configure the private registry on all non-registry hosts:
    mkdir -p /etc/docker
    echo -e '{\n"insecure-registries":["k8s.gcr.io", "gcr.io", "quay.io"]\n}' > /etc/docker/daemon.json
    systemctl restart docker
    Set REGISTRY_HOST to the IP of the machine running the registry:
    REGISTRY_HOST="192.168.3.89"
    Update /etc/hosts:
    yes | cp /etc/hosts /etc/hosts_bak
    cat /etc/hosts_bak|grep -vE '(gcr.io|harbor.io|quay.io)' > /etc/hosts
    echo """ $REGISTRY_HOST gcr.io harbor.io k8s.gcr.io quay.io """ >> /etc/hosts
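The /etc/hosts rewrite above can be rehearsed on a temp copy first so its effect is visible without touching the real file; a minimal sketch:

```shell
REGISTRY_HOST="192.168.3.89"
hosts=$(mktemp)
printf '127.0.0.1 localhost\n1.2.3.4 gcr.io\n' > "$hosts"
# Drop any stale registry mappings, then point all registry names at our host
grep -vE '(gcr.io|harbor.io|quay.io)' "$hosts" > "$hosts.new"
echo "$REGISTRY_HOST gcr.io harbor.io k8s.gcr.io quay.io" >> "$hosts.new"
cat "$hosts.new"
rm -f "$hosts" "$hosts.new"
```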

Install and configure Kubernetes (master & worker)

First download the package: https://pan.baidu.com/s/1t3EWAt4AET7JaIVIbz-zHQ (extraction code: djnf) and place it on every master and worker host; here it goes under /home:
yum install -y socat keepalived ipvsadm
cd /home/
scp k8s-v1.13.0-rpms.tgz 192.168.3.81:/home
scp k8s-v1.13.0-rpms.tgz 192.168.3.82:/home
scp k8s-v1.13.0-rpms.tgz 192.168.3.83:/home

Then run the following commands on each machine in turn:
cd /home
tar -xzvf k8s-v1.13.0-rpms.tgz
cd k8s-v1.13.0
rpm -Uvh * --force
systemctl enable kubelet
kubeadm version -o short

  • Deploy the HA masters
    First check the network interface name with ifconfig -a; here it is enp0s3.
    On 192.168.3.80 run:
    cd ~/
    echo """
    CP0_IP=192.168.3.80
    CP1_IP=192.168.3.81
    CP2_IP=192.168.3.82
    VIP=192.168.3.88
    NET_IF=enp0s3
    CIDR=10.244.0.0/16
    """ > ./cluster-info

    bash -c "$(curl -fsSL https://raw.githubusercontent.com/Lentil1016/kubeadm-ha/1.13.0/kubeha-gen.sh)"
    This step can take 2 to 10 minutes; before the script starts installing, it pauses once so you can review and confirm the settings.
    When it finishes, note down the join command it prints:
    join command:
    kubeadm join 192.168.3.88:6443 --token 29ch4f.0zshhxkh0ii4q9ej --discovery-token-ca-cert-hash sha256:a3cb2e754064b1ea4871a12f2a31dd2a776cc32a6dde57e5009cabca520cb56f
  • Install helm
    If you need helm, first download the offline package: https://pan.baidu.com/s/1B7WHuomXOmZKhHai4tV5MA (extraction code: kgzi)
    cd /home/
    tar -xzvf helm-v2.12.0-linux-amd64.tar
    cd linux-amd64
    cp helm /usr/local/bin
    helm init --service-account=kubernetes-dashboard-admin --skip-refresh --upgrade
    helm version
  • Join the worker nodes
    On each node to be added to the cluster, run:
    kubeadm join 192.168.3.88:6443 --token 29ch4f.0zshhxkh0ii4q9ej --discovery-token-ca-cert-hash sha256:a3cb2e754064b1ea4871a12f2a31dd2a776cc32a6dde57e5009cabca520cb56f

Mounting extended storage


  • The content of kubeha-gen.sh is listed below (it is recommended to download the script locally and edit details such as the email address before running it):
    #!/bin/bash

function check_parm()
{
if [ "${2}" == "" ]; then
echo -n "${1}"
return 1
else
return 0
fi
}

if [ -f ./cluster-info ]; then
source ./cluster-info
fi

check_parm "Enter the IP address of master-01: " ${CP0_IP}
if [ $? -eq 1 ]; then
read CP0_IP
fi
check_parm "Enter the IP address of master-02: " ${CP1_IP}
if [ $? -eq 1 ]; then
read CP1_IP
fi
check_parm "Enter the IP address of master-03: " ${CP2_IP}
if [ $? -eq 1 ]; then
read CP2_IP
fi
check_parm "Enter the VIP: " ${VIP}
if [ $? -eq 1 ]; then
read VIP
fi
check_parm "Enter the Net Interface: " ${NET_IF}
if [ $? -eq 1 ]; then
read NET_IF
fi
check_parm "Enter the cluster CIDR: " ${CIDR}
if [ $? -eq 1 ]; then
read CIDR
fi

echo """
cluster-info:
master-01: ${CP0_IP}
master-02: ${CP1_IP}
master-03: ${CP2_IP}
VIP: ${VIP}
Net Interface: ${NET_IF}
CIDR: ${CIDR}
"""
echo -n 'Please print "yes" to continue or "no" to cancel: '
read AGREE
while [ "${AGREE}" != "yes" ]; do
if [ "${AGREE}" == "no" ]; then
exit 0;
else
echo -n 'Please print "yes" to continue or "no" to cancel: '
read AGREE
fi
done

mkdir -p ~/ikube/tls

IPS=(${CP0_IP} ${CP1_IP} ${CP2_IP})

PRIORITY=(100 50 30)
STATE=("MASTER" "BACKUP" "BACKUP")
HEALTH_CHECK=""
for index in 0 1 2; do
HEALTH_CHECK=${HEALTH_CHECK}"""
real_server ${IPS[$index]} 6443 {
weight 1
SSL_GET {
url {
path /healthz
status_code 200
}
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
}
}
"""
done

for index in 0 1 2; do
ip=${IPS[${index}]}
echo """
global_defs {
router_id LVS_DEVEL
}

vrrp_instance VI_1 {
state ${STATE[${index}]}
interface ${NET_IF}
virtual_router_id 80
priority ${PRIORITY[${index}]}
advert_int 1
authentication {
auth_type PASS
auth_pass just0kk
}
virtual_ipaddress {
${VIP}
}
}

virtual_server ${VIP} 6443 {
delay_loop 6
lb_algo loadbalance
lb_kind DR
nat_mask 255.255.255.0
persistence_timeout 0
protocol TCP

${HEALTH_CHECK}
}
""" > ~/ikube/keepalived-${index}.conf
scp ~/ikube/keepalived-${index}.conf ${ip}:/etc/keepalived/keepalived.conf

ssh ${ip} "
systemctl stop keepalived
systemctl enable keepalived
systemctl start keepalived
kubeadm reset -f
rm -rf /etc/kubernetes/pki/"
done

echo """
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.13.0
controlPlaneEndpoint: "${VIP}:6443"
apiServer:
certSANs:
- ${CP0_IP}
- ${CP1_IP}
- ${CP2_IP}
- ${VIP}
networking:
# This CIDR is a Calico default. Substitute or remove for your CNI provider.
podSubnet: ${CIDR}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
""" > /etc/kubernetes/kubeadm-config.yaml

kubeadm init --config /etc/kubernetes/kubeadm-config.yaml
mkdir -p $HOME/.kube
cp -f /etc/kubernetes/admin.conf ${HOME}/.kube/config

kubectl apply -f https://raw.githubusercontent.com/Lentil1016/kubeadm-ha/1.13.0/calico/rbac.yaml
curl -fsSL https://raw.githubusercontent.com/Lentil1016/kubeadm-ha/1.13.0/calico/calico.yaml | sed "s!8.8.8.8!${CP0_IP}!g" | sed "s!10.244.0.0/16!${CIDR}!g" | kubectl apply -f -

JOIN_CMD=$(kubeadm token create --print-join-command)

for index in 1 2; do
ip=${IPS[${index}]}
ssh $ip "mkdir -p /etc/kubernetes/pki/etcd; mkdir -p ~/.kube/"
scp /etc/kubernetes/pki/ca.crt $ip:/etc/kubernetes/pki/ca.crt
scp /etc/kubernetes/pki/ca.key $ip:/etc/kubernetes/pki/ca.key
scp /etc/kubernetes/pki/sa.key $ip:/etc/kubernetes/pki/sa.key
scp /etc/kubernetes/pki/sa.pub $ip:/etc/kubernetes/pki/sa.pub
scp /etc/kubernetes/pki/front-proxy-ca.crt $ip:/etc/kubernetes/pki/front-proxy-ca.crt
scp /etc/kubernetes/pki/front-proxy-ca.key $ip:/etc/kubernetes/pki/front-proxy-ca.key
scp /etc/kubernetes/pki/etcd/ca.crt $ip:/etc/kubernetes/pki/etcd/ca.crt
scp /etc/kubernetes/pki/etcd/ca.key $ip:/etc/kubernetes/pki/etcd/ca.key
scp /etc/kubernetes/admin.conf $ip:/etc/kubernetes/admin.conf
scp /etc/kubernetes/admin.conf $ip:~/.kube/config

ssh ${ip} "${JOIN_CMD} --experimental-control-plane"
done

echo "Cluster create finished."

echo """
[req]
distinguished_name = req_distinguished_name
prompt = yes

[ req_distinguished_name ]
countryName = Country Name (2 letter code)
countryName_value = CN

stateOrProvinceName = State or Province Name (full name)
stateOrProvinceName_value = Dalian

localityName = Locality Name (eg, city)
localityName_value = Haidian

organizationName = Organization Name (eg, company)
organizationName_value = Channelsoft

organizationalUnitName = Organizational Unit Name (eg, section)
organizationalUnitName_value = R & D Department

commonName = Common Name (eg, your name or your server's hostname)
commonName_value = *.multi.io

emailAddress = Email Address
emailAddress_value = lentil1016@gmail.com
""" > ~/ikube/tls/openssl.cnf
openssl req -newkey rsa:4096 -nodes -config ~/ikube/tls/openssl.cnf -days 3650 -x509 -out ~/ikube/tls/tls.crt -keyout ~/ikube/tls/tls.key
kubectl create -n kube-system secret tls ssl --cert ~/ikube/tls/tls.crt --key ~/ikube/tls/tls.key
kubectl apply -f https://raw.githubusercontent.com/Lentil1016/kubeadm-ha/1.13.0/plugin/traefik.yaml
kubectl apply -f https://raw.githubusercontent.com/Lentil1016/kubeadm-ha/1.13.0/plugin/metrics.yaml
kubectl apply -f https://raw.githubusercontent.com/Lentil1016/kubeadm-ha/1.13.0/plugin/kubernetes-dashboard.yaml

echo "Plugin install finished."
echo "Waiting for all pods into 'Running' status. You can press 'Ctrl + c' to terminate this waiting any time you like."
POD_UNREADY=`kubectl get pods -n kube-system 2>&1|awk '{print $3}'|grep -vE 'Running|STATUS'`
NODE_UNREADY=`kubectl get nodes 2>&1|awk '{print $2}'|grep 'NotReady'`
while [ "${POD_UNREADY}" != "" -o "${NODE_UNREADY}" != "" ]; do
sleep 1
POD_UNREADY=`kubectl get pods -n kube-system 2>&1|awk '{print $3}'|grep -vE 'Running|STATUS'`
NODE_UNREADY=`kubectl get nodes 2>&1|awk '{print $2}'|grep 'NotReady'`
done

echo

kubectl get cs
kubectl get nodes
kubectl get pods -n kube-system

echo """
join command:
`kubeadm token create --print-join-command`"""

