Kubernetes, Part 3: Building a k8s Cluster (Offline Deployment)

There are two ways to deploy Kubernetes:

  1. Online, installing with yum
  2. Offline, installing from packages

Since not every environment has internet access, this article takes the offline route. Deploying Kubernetes is fairly involved, especially the networking: plan the address ranges up front and mind the details of each step to improve your odds of success. The installation steps follow; the article is long, and the details are filled in as completely as possible so that nothing is left out.

Preparation

1). Version information

Component    Version                                     Notes
docker       18.03.0-ce
kubernetes   1.18.12
etcd         3.4.7                                       API VERSION 3.4
linux        CentOS (kernel 3.10.0-1127.8.2.el7.x86_64)
2). Choosing the nodes

Resources being limited, three machines are used here; besides the Kubernetes components, the etcd cluster shares the same machines.

IP address        Role     Components deployed
173.119.126.200   master   kube-proxy, kubelet, etcd, flanneld, kube-apiserver, kube-controller-manager, kube-scheduler
173.119.126.199   node     kube-proxy, kubelet, etcd, flanneld
173.119.126.198   node     kube-proxy, kubelet, etcd, flanneld
3). Set the hostname and the /etc/hosts entries on all three machines
# on the 200 machine (use the matching name on each host)
hostnamectl set-hostname k8s-master-216-200
# then add the entries to /etc/hosts on every machine
vim /etc/hosts
173.119.126.200 k8s-master-216-200
173.119.126.199 k8s-worker-216-199
173.119.126.198 k8s-worker-216-198
4). Verify that the MAC address and product_uuid are unique on every node
ifconfig -a
cat /sys/class/dmi/id/product_uuid
5). Disable the firewall
systemctl stop firewalld      # stop the service
systemctl disable firewalld   # keep it from starting at boot
6). Disable SELinux
sestatus    # check SELinux status
vi /etc/sysconfig/selinux
SELINUX=disabled
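
The change in /etc/sysconfig/selinux only takes effect after a reboot; to stop enforcement immediately as well:

setenforce 0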
7). Disable swap
vim /etc/fstab
# comment out the following line
/dev/mapper/rhel-swap   swap    swap    defaults        0 0
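
Commenting out the fstab entry only disables swap from the next boot; to turn it off right away:

swapoff -a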
8). Install etcd

etcd cluster setup is out of scope here; refer to other documentation for the details.
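
For the binaries themselves, a minimal download sketch (the release URL follows etcd's GitHub naming convention, and /tools/etcd/bin matches the paths assumed by the configs below):

wget https://github.com/etcd-io/etcd/releases/download/v3.4.7/etcd-v3.4.7-linux-amd64.tar.gz
tar -xvzf etcd-v3.4.7-linux-amd64.tar.gz
mkdir -p /tools/etcd/{bin,ssl}
cp etcd-v3.4.7-linux-amd64/etcd etcd-v3.4.7-linux-amd64/etcdctl /tools/etcd/bin/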

Installing docker

The Kubernetes components run on top of containers, so install docker first.
Download version docker-18.03.0-ce:

wget https://download.docker.com/linux/static/stable/x86_64/docker-18.03.0-ce.tgz

Extract:

tar -xvzf docker-18.03.0-ce.tgz -C ./
cp docker/* /usr/bin/

Configure the service to start with the system, managing docker with systemd:

cat > /usr/lib/systemd/system/docker.service << EOF
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target
[Service]
Type=notify
ExecStart=/usr/bin/dockerd
ExecReload=/bin/kill -s HUP \$MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s
[Install]
WantedBy=multi-user.target
EOF

Check that it installed correctly:

docker -v

Enable at boot and start:

systemctl daemon-reload
systemctl start docker
systemctl enable docker

Installing Kubernetes

We've finally reached the Kubernetes install itself; hang in there.
First download the Kubernetes release matching your server platform:

wget https://dl.k8s.io/v1.18.12/kubernetes-server-linux-amd64.tar.gz

Extract:

mkdir -p /tools/kubernetes/{bin,cfg,ssl,logs}
tar -xvzf kubernetes-server-linux-amd64.tar.gz -C ./
cd kubernetes/server/bin
cp kube-apiserver kube-scheduler kube-controller-manager /tools/kubernetes/bin
cp kubectl /usr/bin/
1). Generate certificates
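The steps below issue certificates with cfssl and cfssljson; if they are not already installed, a minimal sketch (the R1.2 binaries from pkg.cfssl.org are what offline installs commonly use):

wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -O /usr/bin/cfssl
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -O /usr/bin/cfssljson
chmod +x /usr/bin/cfssl /usr/bin/cfssljson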
mkdir -p /tools/k8s/k8s-cert  && cd /tools/k8s/k8s-cert
cat > server-csr.json<<EOF
{
    "CN": "kubernetes",
    "hosts": [
      "127.0.0.1",
      "173.10.0.1",
      "173.119.126.200",
      "173.119.126.199",
      "173.119.126.198",
      "localhost",
      "kubernetes",
      "kubernetes.default",
      "kubernetes.default.svc",
      "kubernetes.default.svc.cluster",
      "kubernetes.default.svc.cluster.local"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "ShangHai",
            "ST": "ShangHai",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF
cat > ca-config.json<<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}
EOF
cat > ca-csr.json<<EOF
{
    "CN": "kubernetes",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "ShangHai",
            "ST": "ShangHai",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF

Generate the CA certificate:

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

Generate the server-side certificate signed by the CA:

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
2). Deploy kube-apiserver on the master node
cat >/tools/kubernetes/cfg/kube-apiserver.conf <<EOF
KUBE_APISERVER_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/tools/kubernetes/logs \
--etcd-servers=https://173.119.126.200:2379,https://173.119.126.199:2379,https://173.119.126.198:2379 \
--bind-address=173.119.126.200 \
--secure-port=6443 \
--advertise-address=173.119.126.200 \
--allow-privileged=true \
--service-cluster-ip-range=173.10.0.0/24 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--enable-bootstrap-token-auth=true \
--token-auth-file=/tools/kubernetes/cfg/token.csv \
--service-node-port-range=30000-32767 \
--kubelet-client-certificate=/tools/kubernetes/ssl/server.pem \
--kubelet-client-key=/tools/kubernetes/ssl/server-key.pem \
--tls-cert-file=/tools/kubernetes/ssl/server.pem \
--tls-private-key-file=/tools/kubernetes/ssl/server-key.pem \
--client-ca-file=/tools/kubernetes/ssl/ca.pem \
--service-account-key-file=/tools/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/tools/etcd/ssl/ca.pem \
--etcd-certfile=/tools/etcd/ssl/server.pem \
--etcd-keyfile=/tools/etcd/ssl/server-key.pem \
--audit-log-maxage=30 \
--audit-log-maxbackup=3 \
--audit-log-maxsize=100 \
--audit-log-path=/tools/kubernetes/logs/k8s-audit.log"
EOF

Copy the certificates:

cp /tools/k8s/k8s-cert/*pem /tools/kubernetes/ssl/ 
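
The apiserver options above reference /tools/kubernetes/cfg/token.csv, which is never created in this walkthrough. A minimal sketch, assuming the standard static-token-file format token,user,uid,"group" (the uid 10001 and the group name are illustrative):

# generate a random bootstrap token and write the token file
TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
cat > /tools/kubernetes/cfg/token.csv << EOF
${TOKEN},kubelet-bootstrap,10001,"system:node-bootstrapper"
EOF

Keep the token value at hand: it is the TOKEN used when generating bootstrap.kubeconfig later.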

Configure the service to start with the system, managing kube-apiserver with systemd:

cat > /usr/lib/systemd/system/kube-apiserver.service << EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/tools/kubernetes/cfg/kube-apiserver.conf
ExecStart=/tools/kubernetes/bin/kube-apiserver \$KUBE_APISERVER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF

Enable at boot and start:

systemctl daemon-reload
systemctl enable kube-apiserver
systemctl restart kube-apiserver
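
A quick sanity check that the apiserver is up (in 1.18 the deprecated insecure port 8080 is still served on localhost by default, and it is what kube-controller-manager and kube-scheduler below point at):

curl http://127.0.0.1:8080/healthz
# expected output: ok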
3). Deploy kube-controller-manager on the master node
cat > /tools/kubernetes/cfg/kube-controller-manager.conf << EOF
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/tools/kubernetes/logs \
--leader-elect=true \
--master=127.0.0.1:8080 \
--bind-address=127.0.0.1 \
--allocate-node-cidrs=true \
--cluster-cidr=173.10.0.0/16 \
--service-cluster-ip-range=173.10.0.0/24 \
--cluster-signing-cert-file=/tools/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/tools/kubernetes/ssl/ca-key.pem  \
--root-ca-file=/tools/kubernetes/ssl/ca.pem \
--service-account-private-key-file=/tools/kubernetes/ssl/ca-key.pem \
--experimental-cluster-signing-duration=87600h0m0s"
EOF

Configure the service to start with the system, managing kube-controller-manager with systemd:

cat > /usr/lib/systemd/system/kube-controller-manager.service << EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/tools/kubernetes/cfg/kube-controller-manager.conf
ExecStart=/tools/kubernetes/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF

Enable at boot and start:

systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl restart kube-controller-manager

Authorize the kubelet-bootstrap user to request certificates:

kubectl create clusterrolebinding kubelet-bootstrap \
--clusterrole=system:node-bootstrapper \
--user=kubelet-bootstrap
4). Deploy kube-scheduler on the master node
cat > /tools/kubernetes/cfg/kube-scheduler.conf << EOF
KUBE_SCHEDULER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/tools/kubernetes/logs \\
--leader-elect \\
--master=127.0.0.1:8080 \\
--bind-address=127.0.0.1"
EOF

Configure the service to start with the system, managing kube-scheduler with systemd:

cat > /usr/lib/systemd/system/kube-scheduler.service << EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/tools/kubernetes/cfg/kube-scheduler.conf
ExecStart=/tools/kubernetes/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF

Enable at boot and start:

systemctl daemon-reload
systemctl enable kube-scheduler
systemctl restart kube-scheduler
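
With all three control-plane components running, their health can be checked from the master:

kubectl get cs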
5). Deploy kubelet

Create the working directories on the node machines:

mkdir -p /tools/kubernetes/{bin,cfg,ssl,logs} 

Run on every node:

cp kubectl /usr/bin/
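
The cp above only covers kubectl; the kubelet and kube-proxy binaries are needed on the master and both workers as well. A sketch, assuming it is run from kubernetes/server/bin on the master:

cp kubelet kube-proxy /tools/kubernetes/bin/
scp kubelet kube-proxy 173.119.126.199:/tools/kubernetes/bin/
scp kubelet kube-proxy 173.119.126.198:/tools/kubernetes/bin/

On each worker, the kubelet and kube-proxy configs created below must use that node's own name in --hostname-override / hostnameOverride.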

Run on the master node:

cat > /tools/kubernetes/cfg/kubelet.conf << EOF
KUBELET_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/tools/kubernetes/logs \
--hostname-override=k8s-master-216-200 \
--network-plugin=cni \
--kubeconfig=/tools/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/tools/kubernetes/cfg/bootstrap.kubeconfig \
--config=/tools/kubernetes/cfg/kubelet-config.yml \
--cert-dir=/tools/kubernetes/ssl \
--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
EOF

Create the parameters file:

cat > /tools/kubernetes/cfg/kubelet-config.yml << EOF
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS:
- 173.10.0.2
clusterDomain: cluster.local 
failSwapOn: false
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /tools/kubernetes/ssl/ca.pem 
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
maxOpenFiles: 1000000
maxPods: 110
EOF

Generate the bootstrap.kubeconfig file.
Note: the TOKEN here must match the entry in /tools/kubernetes/cfg/token.csv.

# apiserver IP:PORT
KUBE_APISERVER="https://173.119.126.200:6443"
TOKEN=""    # fill in the token from token.csv

kubectl config set-cluster kubernetes \
  --certificate-authority=/tools/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig

kubectl config set-credentials "kubelet-bootstrap" \
  --token=${TOKEN} \
  --kubeconfig=bootstrap.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes \
  --user="kubelet-bootstrap" \
  --kubeconfig=bootstrap.kubeconfig

kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
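
kubelet.conf above points --bootstrap-kubeconfig at /tools/kubernetes/cfg/bootstrap.kubeconfig, while the file was just generated in the current directory, so copy it into place:

cp bootstrap.kubeconfig /tools/kubernetes/cfg/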

kubectl create clusterrolebinding system:anonymous   --clusterrole=cluster-admin   --user=system:anonymous

Configure the service to start with the system, managing kubelet with systemd:

cat > /usr/lib/systemd/system/kubelet.service << EOF
[Unit]
Description=Kubernetes Kubelet
After=docker.service
[Service]
EnvironmentFile=/tools/kubernetes/cfg/kubelet.conf
ExecStart=/tools/kubernetes/bin/kubelet \$KUBELET_OPTS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF

Enable at boot and start:

systemctl daemon-reload
systemctl enable kubelet
systemctl restart kubelet

On the master, check for kubelet certificate signing requests:

kubectl get csr
{the pending node-csr-... request appears here}

Approve the request. Note: do not copy this command verbatim; replace node-csr-* with the NAME value returned by kubectl get csr.

kubectl certificate approve node-csr-{replace with the generated CSR name}

Note: since the network plugin is not deployed yet, the node will report NotReady.

6). Deploy kube-proxy

Create the config file:

cat > /tools/kubernetes/cfg/kube-proxy.conf << EOF
KUBE_PROXY_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/tools/kubernetes/logs \
--config=/tools/kubernetes/cfg/kube-proxy-config.yml"
EOF

Create the parameters file:

cat > /tools/kubernetes/cfg/kube-proxy-config.yml << EOF
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
metricsBindAddress: 0.0.0.0:10249
clientConnection:
  kubeconfig: /tools/kubernetes/cfg/kube-proxy.kubeconfig
hostnameOverride: k8s-master-216-200
clusterCIDR: 173.10.0.0/24
EOF

Change into the certificate directory:

cd /tools/k8s/k8s-cert/

Create the certificate signing request file:

cat > kube-proxy-csr.json << EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "ShangHai",
      "ST": "ShangHai",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

Generate the certificate:

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

Check the certificates:

ls kube-proxy*pem

Run the following commands:

KUBE_APISERVER="https://173.119.126.200:6443"

kubectl config set-cluster kubernetes \
  --certificate-authority=/tools/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-credentials kube-proxy \
  --client-certificate=/tools/k8s/k8s-cert/kube-proxy.pem \
  --client-key=/tools/k8s/k8s-cert/kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig


kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig
  
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

Copy the generated kube-proxy.kubeconfig to its target path:

cp kube-proxy.kubeconfig /tools/kubernetes/cfg/

Configure the service to start with the system, managing kube-proxy with systemd:

cat > /usr/lib/systemd/system/kube-proxy.service << EOF
[Unit]
Description=Kubernetes Proxy
After=network.target
[Service]
EnvironmentFile=/tools/kubernetes/cfg/kube-proxy.conf
ExecStart=/tools/kubernetes/bin/kube-proxy \$KUBE_PROXY_OPTS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF

Enable at boot and start:

systemctl daemon-reload
systemctl enable kube-proxy
systemctl restart kube-proxy

Sync the certificates to the other two worker machines:

scp -r /tools/k8s/k8s-cert/kube-proxy*pem 173.119.126.199:/tools/kubernetes/ssl/
scp -r /tools/k8s/k8s-cert/kube-proxy*pem 173.119.126.198:/tools/kubernetes/ssl/
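
The workers also need the kube-proxy config files generated above; a sketch (remember to change hostnameOverride in kube-proxy-config.yml to each worker's own hostname):

scp /tools/kubernetes/cfg/kube-proxy.conf /tools/kubernetes/cfg/kube-proxy-config.yml /tools/kubernetes/cfg/kube-proxy.kubeconfig 173.119.126.199:/tools/kubernetes/cfg/
scp /tools/kubernetes/cfg/kube-proxy.conf /tools/kubernetes/cfg/kube-proxy-config.yml /tools/kubernetes/cfg/kube-proxy.kubeconfig 173.119.126.198:/tools/kubernetes/cfg/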

The core components are now deployed; next comes the CNI network for pod-to-pod communication.

7). Deploy the CNI network
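
Neither the kube-flannel.yml manifest nor the CNI plugin binaries are fetched anywhere above, so first a hedged sketch; the URLs follow the upstream GitHub conventions for flannel v0.12.0 and CNI plugins v0.8.6, so verify them before relying on this. The plugin binaries must exist under /opt/cni/bin on every node, because kubelet was started with --network-plugin=cni:

wget https://raw.githubusercontent.com/coreos/flannel/v0.12.0/Documentation/kube-flannel.yml
wget https://github.com/containernetworking/plugins/releases/download/v0.8.6/cni-plugins-linux-amd64-v0.8.6.tgz
mkdir -p /opt/cni/bin
tar -xvzf cni-plugins-linux-amd64-v0.8.6.tgz -C /opt/cni/bin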

Replace the image address. Here flannel:v0.12.0-amd64 was pushed to a private harbor registry; if the nodes can reach the internet, skip this step and go straight to the next one.
sed -i -r "s#quay.io#dummy.net#g" kube-flannel.yml
Create the config file:

cat >/tools/kubernetes/cfg/flanneld<<EOF
FLANNEL_OPTIONS="--etcd-endpoints=https://173.119.126.200:2379,https://173.119.126.199:2379,https://173.119.126.198:2379,https://127.0.0.1:2379 -etcd-cafile=/tools/etcd/ssl/ca.pem -etcd-certfile=/tools/etcd/ssl/server.pem -etcd-keyfile=/tools/etcd/ssl/server-key.pem -etcd-prefix=/dummy.net/network"
EOF
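
The unit file below expects flanneld and mk-docker-opts.sh under /tools/kubernetes/bin on every machine that runs flannel; both ship in the flannel release tarball. A sketch, with the URL assumed from the flannel GitHub releases page:

wget https://github.com/coreos/flannel/releases/download/v0.12.0/flannel-v0.12.0-linux-amd64.tar.gz
tar -xvzf flannel-v0.12.0-linux-amd64.tar.gz
cp flanneld mk-docker-opts.sh /tools/kubernetes/bin/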

Set the pod network range and store it in etcd:

ETCDCTL_API=2 /tools/etcd/bin/etcdctl --ca-file=/tools/etcd/ssl/ca.pem --cert-file=/tools/etcd/ssl/server.pem --key-file=/tools/etcd/ssl/server-key.pem --endpoints="https://173.119.126.200:2379,https://173.119.126.199:2379,https://173.119.126.198:2379" set /dummy.net/network/config '{ "Network": "173.10.0.0/16", "Backend": {"Type": "vxlan"}}'

Configure the service to start with the system, managing flanneld with systemd:

cat >/usr/lib/systemd/system/flanneld.service<<EOF
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service
 
[Service]
Type=notify
#EnvironmentFile=/etc/sysconfig/flanneld
#EnvironmentFile=/etc/sysconfig/docker-network
EnvironmentFile=/tools/kubernetes/cfg/flanneld
ExecStart=/tools/kubernetes/bin/flanneld --ip-masq \$FLANNEL_OPTIONS
ExecStartPost=/tools/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure
 
[Install]
WantedBy=multi-user.target
EOF

Apply flannel to the cluster:

kubectl apply -f kube-flannel.yml
# check that flannel is running
kubectl get pods -n kube-system

flannel also has to run on the other node machines.
Configure the flanneld network:

cat >/tools/kubernetes/cfg/flanneld<<EOF
FLANNEL_OPTIONS="--etcd-endpoints=https://173.119.126.200:2379,https://173.119.126.199:2379,https://173.119.126.198:2379 -etcd-cafile=/tools/etcd/ssl/ca.pem -etcd-certfile=/tools/etcd/ssl/server.pem -etcd-keyfile=/tools/etcd/ssl/server-key.pem -etcd-prefix=/dummy.net/network"
EOF

cat >/usr/lib/systemd/system/flanneld.service<<EOF
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service
 
[Service]
Type=notify
#EnvironmentFile=/etc/sysconfig/flanneld
#EnvironmentFile=/etc/sysconfig/docker-network
EnvironmentFile=/tools/kubernetes/cfg/flanneld
ExecStart=/tools/kubernetes/bin/flanneld --ip-masq \$FLANNEL_OPTIONS
ExecStartPost=/tools/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure
 
[Install]
WantedBy=multi-user.target
EOF

Enable at boot and start:

systemctl daemon-reload
systemctl enable flanneld
systemctl restart flanneld

Once networking is configured, the generated subnet file can be inspected:
cat /run/flannel/subnet.env
DOCKER_OPT_BIP="--bip=173.10.1.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=173.10.1.1/24 --ip-masq=false --mtu=1450"

Set the network model in etcd (skip if it was already set on the master earlier):

ETCDCTL_API=2 /tools/etcd/bin/etcdctl --ca-file=/tools/etcd/ssl/ca.pem --cert-file=/tools/etcd/ssl/server.pem --key-file=/tools/etcd/ssl/server-key.pem --endpoints="https://173.119.126.200:2379,https://173.119.126.199:2379,https://173.119.126.198:2379" set /dummy.net/network/config  '{ "Network": "173.10.0.0/16", "Backend": {"Type": "vxlan"}}' 

Authorize apiserver access to kubelet:

cat > apiserver-to-kubelet-rbac.yaml << EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups:
      - ""
    resources:
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/spec
      - nodes/metrics
      - pods/log
    verbs:
      - "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kubernetes
EOF

Apply it:

kubectl apply -f apiserver-to-kubelet-rbac.yaml

Cleanup
On the node machines, the following files are generated automatically after the certificate request is approved; they differ per node, so they must be deleted (and regenerated) rather than copied between machines.

rm -f /tools/kubernetes/cfg/kubelet.kubeconfig
rm -f /tools/kubernetes/ssl/kubelet*

8). Deploy CoreDNS

Fetch the yaml file:

wget https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons/dns/coredns/coredns.yaml.base

Rename the downloaded file:

mv coredns.yaml.base coredns.yaml

Modify the image address (skip this if the nodes can reach the internet): point spec.containers.image at a registry the nodes can pull from; here the image was pushed to a private harbor:
dummy.net/coredns/coredns:1.3.1
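
The .base file also contains template placeholders that must be filled in before it can be applied. A sketch, assuming the placeholder names __DNS__DOMAIN__, __DNS__MEMORY__LIMIT__ and __DNS__SERVER__ used by the upstream template at the time (check your downloaded copy), with values matching this cluster's clusterDomain/clusterDNS kubelet settings and the upstream default memory limit:

sed -i "s/__DNS__DOMAIN__/cluster.local/" coredns.yaml
sed -i "s/__DNS__MEMORY__LIMIT__/170Mi/" coredns.yaml
sed -i "s/__DNS__SERVER__/173.10.0.2/" coredns.yaml
kubectl apply -f coredns.yaml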

9). Deploy the Dashboard

Fetch the yaml config file:

wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta8/aio/deploy/recommended.yaml --no-check-certificate

Pin the deployment to a particular worker node (optional) by adding nodeName: k8s-worker-216-198. The image used here was pushed to a private harbor; if the nodes can reach the internet, the image line can stay unchanged.

spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      nodeName: k8s-worker-216-198  # optional
      containers:
        - name: kubernetes-dashboard
          image: dummy.net/kubernetesui/dashboard:v2.0.0-beta8  # leave unchanged if the nodes can reach the internet
          imagePullPolicy: Always
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard

Apply it to start the service:

kubectl apply -f recommended.yaml
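
The final status output below shows the dashboard on NodePort 30001, which implies the Service in recommended.yaml was additionally changed to type: NodePort (not shown above). Logging in also requires a token; a common sketch, where the dashboard-admin account name is illustrative:

kubectl create serviceaccount dashboard-admin -n kube-system
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
# print the login token
kubectl describe secrets -n kube-system $(kubectl get secret -n kube-system | awk '/dashboard-admin/{print $1}')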

With that, all the components are installed. Here is the final running state:

NAMESPACE              NAME                                           READY   STATUS    RESTARTS   AGE
default                pod/busybox                                    1/1     Running   0          30h
kube-system            pod/coredns-79b975988-69r5p                    1/1     Running   0          30h
kube-system            pod/coredns-79b975988-bn4mc                    1/1     Running   0          30h
kube-system            pod/kube-flannel-ds-amd64-jnnwg                1/1     Running   0          5d23h
kube-system            pod/kube-flannel-ds-amd64-s9hmz                1/1     Running   6          5d23h
kube-system            pod/kube-flannel-ds-amd64-vt8cl                1/1     Running   3          5d23h
kubernetes-dashboard   pod/dashboard-metrics-scraper-cd77fc8d-k5pm4   1/1     Running   0          5d22h
kubernetes-dashboard   pod/kubernetes-dashboard-9d8dc486-675wl        1/1     Running   0          5d22h

NAMESPACE              NAME                                TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                  AGE
default                service/kubernetes                  ClusterIP   173.10.0.1     <none>        443/TCP                  8d
kube-system            service/coredns                     ClusterIP   173.10.0.11    <none>        53/UDP,53/TCP,9153/TCP   30h
kube-system            service/kube-dns                    ClusterIP   173.10.0.2     <none>        53/UDP,53/TCP,9153/TCP   2d
kubernetes-dashboard   service/dashboard-metrics-scraper   ClusterIP   173.10.0.168   <none>        8000/TCP                 5d22h
kubernetes-dashboard   service/kubernetes-dashboard        NodePort    173.10.0.34    <none>        443:30001/TCP            5d22h

Other useful commands
Check overall cluster state:

kubectl get pods,svc --all-namespaces -o wide

Check logs when troubleshooting:

kubectl logs pod/dashboard-metrics-scraper-cd77fc8d-k5pm4 -n kubernetes-dashboard

Tear down the flannel network interface (with the vxlan backend used here, the device is flannel.1):

kubectl delete -f kube-flannel.yml
ifconfig flannel.1 down
ip link delete flannel.1

Restart operations on a node:

systemctl restart flanneld.service 
systemctl restart kube-proxy && systemctl status kube-proxy.service
systemctl restart kubelet && systemctl status kubelet.service

Restart operations on the master:

systemctl restart kube-apiserver && systemctl status kube-apiserver.service 
systemctl restart kube-controller-manager && systemctl status kube-controller-manager
systemctl restart kube-scheduler && systemctl status kube-scheduler

Listing and finding resources:

$ kubectl get services                      # list all services in the current namespace
$ kubectl get pods --all-namespaces         # list all pods in all namespaces
$ kubectl get pods -o wide                  # list all pods with extra detail
$ kubectl get deployment my-dep             # show a particular deployment
$ kubectl get pods --include-uninitialized  # list all pods in the namespace, including uninitialized ones (flag removed in kubectl 1.14; unavailable with 1.18)