In the previous article we deployed a Kubernetes cluster; in this one we will deploy KubeSphere on top of it. If you are not familiar with KubeSphere, its official website is a good introduction.
First, two screenshots of the KubeSphere console after deployment.
By following this article you will end up with an environment of your own, reachable at http://IP:port. Let's get started.
Installation requirements:
The official requirements are listed below. The official installation guide is quite terse, and for what it's worth, I did not manage a successful install by following it alone.
- The Kubernetes version must be 1.15.x, 1.16.x, 1.17.x, or 1.18.x;
- Make sure your machines meet the minimum hardware requirements: CPU > 1 core, memory > 2 GB;
- A default StorageClass must be configured in the Kubernetes cluster before installation;
- The CSR signing feature must be active in kube-apiserver, i.e. it is started with the --cluster-signing-cert-file and --cluster-signing-key-file flags. See the RKE installation notes;
- For details on the prerequisites for installing KubeSphere on Kubernetes, see the official Prerequisites documentation.
Install Helm
Helm can be thought of as the package manager for Kubernetes: it makes it easy to find, share, and use applications built for Kubernetes. It has a few core concepts:
- Chart: a Helm package, containing the image references, dependencies, and resource definitions needed to run an application, and possibly service definitions for the Kubernetes cluster.
- Release: an instance of a Chart running on a Kubernetes cluster. The same Chart can be installed many times on one cluster, and each installation creates a new Release. For example, with a MySQL Chart, if you want two databases on your servers you can install the Chart twice; each installation gets its own Release with its own release name.
- Repository: a place where Charts are published and stored.
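To make the Chart concept concrete, here is a minimal sketch of a chart's on-disk layout. The chart name `demo-chart` and its contents are illustrative, not taken from any real repository:

```shell
# A Helm chart is just a directory with a well-known layout.
mkdir -p demo-chart/templates

# Chart.yaml holds the package metadata (apiVersion v1 matches Helm 2).
cat > demo-chart/Chart.yaml <<'EOF'
apiVersion: v1
name: demo-chart
version: 0.1.0
description: A minimal illustrative chart
EOF

# values.yaml holds the default configuration values.
cat > demo-chart/values.yaml <<'EOF'
replicaCount: 1
image:
  repository: nginx
  tag: stable
EOF

# templates/ holds the Kubernetes manifests, rendered with the values above.
ls demo-chart
```

Each `helm install` of such a chart creates a separate Release, so installing it twice yields two independent deployments, each under its own release name.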
Install the Helm client
# Download
wget https://get.helm.sh/helm-v2.16.3-linux-amd64.tar.gz
# Extract
tar xf helm-v2.16.3-linux-amd64.tar.gz
# Install the binary
cp linux-amd64/helm /usr/local/bin
# Check the version
helm version
Install Tiller (the Helm server)
Before installing, install socat on every node in the cluster, otherwise you will hit the error Error: cannot connect to Tiller
yum install -y socat
Initialize Helm and deploy Tiller
Tiller is deployed into the Kubernetes cluster as a Deployment, and a plain helm init is enough to install it. By default, however, Helm pulls its images and chart index from storage.googleapis.com, which is often unreachable, so here we use Alibaba Cloud's mirrors instead.
# Add Alibaba Cloud's chart repositories
helm init --client-only --stable-repo-url https://aliacs-app-catalog.oss-cn-hangzhou.aliyuncs.com/charts/
helm repo add incubator https://aliacs-app-catalog.oss-cn-hangzhou.aliyuncs.com/charts-incubator/
helm repo update
# Install the server side; -i points at the Tiller image on Alibaba Cloud's registry
helm init --service-account tiller --upgrade -i registry.cn-hangzhou.aliyuncs.com/google_containers/tiller:v2.16.3 --stable-repo-url https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
# Alternatively, install the server with TLS enabled; see https://github.com/gjmzj/kubeasz/blob/master/docs/guide/helm.md
helm init --service-account tiller --upgrade -i registry.cn-hangzhou.aliyuncs.com/google_containers/tiller:v2.16.3 --tiller-tls-cert /etc/kubernetes/ssl/tiller001.pem --tiller-tls-key /etc/kubernetes/ssl/tiller001-key.pem --tls-ca-cert /etc/kubernetes/ssl/ca.pem --tiller-namespace kube-system --stable-repo-url https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
Tiller, Helm's server side, is a Deployment in the kube-system namespace that talks to kube-apiserver to create and delete applications in the cluster.
Since Kubernetes 1.6, the API server has RBAC authorization enabled. The default Tiller deployment does not specify an authorized ServiceAccount, so its requests to the API server are rejected. We therefore need to grant Tiller one explicitly.
# Create a Kubernetes service account and bind a role to it
# Create the ServiceAccount
kubectl create serviceaccount --namespace kube-system tiller
# Bind it to the cluster-admin role
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
# Point the Tiller deployment at the new service account
# Use kubectl patch to update the API object in place
kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
# Verify that the authorization took effect
kubectl get deploy --namespace kube-system tiller-deploy --output yaml|grep serviceAccount
serviceAccount: tiller
serviceAccountName: tiller
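The two `kubectl create` commands above can also be written declaratively. A sketch of the equivalent manifest follows; the filename `tiller-rbac.yaml` is illustrative:

```shell
# ServiceAccount plus ClusterRoleBinding, equivalent to the imperative commands.
cat > tiller-rbac.yaml <<'EOF'
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller-cluster-rule
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system
EOF
# Apply with: kubectl apply -f tiller-rbac.yaml
```

The declarative form is easier to keep in version control and to re-apply idempotently.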
Verify that Tiller installed successfully
kubectl -n kube-system get pods|grep tiller
tiller-deploy-6d8dfbb696-4cbcz 1/1 Running 0 88s
Run helm version; output like the following means the installation succeeded:
Client: &version.Version{SemVer:"v2.16.3", GitCommit:"1ee0254c86d4ed6887327dabed7aa7da29d7eb0d", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.16.3", GitCommit:"1ee0254c86d4ed6887327dabed7aa7da29d7eb0d", GitTreeState:"clean"}
If you have installed Helm before, or something went wrong and you want to reinstall, the following commands may help:
# Uninstall the Helm server (Tiller)
helm reset
# Force removal
helm reset -f
Install NFS storage
We install NFS on the master node; resources are limited, so there is no spare machine to host it separately.
# On node01 and node02:
yum -y install nfs-utils
# On the master node, run all of the following:
yum -y install nfs-utils rpcbind
# The * in the exports file allows all networks; restrict it to your own subnet, e.g. mine is 10.211.55.0/24
cat >/etc/exports <<EOF
/data *(insecure,rw,async,no_root_squash)
EOF
# Create the shared directory and open up its permissions
mkdir -p /data/k8s && chmod 777 /data/k8s
# Enable and start the services
systemctl enable nfs-server rpcbind && systemctl start nfs-server rpcbind
Configure the StorageClass; remember to change the NFS server IP and the shared directory to match your environment:
cat >storageclass.yaml <<EOF
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-provisioner
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services", "endpoints"]
    verbs: ["get", "create", "list", "watch", "update"]
  - apiGroups: ["extensions"]
    resources: ["podsecuritypolicies"]
    resourceNames: ["nfs-provisioner"]
    verbs: ["use"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-client-provisioner
spec:
  selector:
    matchLabels:
      app: nfs-client-provisioner
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - name: nfs-client
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: 10.211.55.5   # change to your NFS server IP
            - name: NFS_PATH
              value: /data/k8s     # change to your NFS shared directory
      volumes:
        - name: nfs-client
          nfs:
            server: 10.211.55.5   # change to your NFS server IP
            path: /data/k8s       # change to your NFS shared directory
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-storage   # name of the StorageClass
provisioner: fuseim.pri/ifs
reclaimPolicy: Retain
EOF
Create the StorageClass:
kubectl apply -f storageclass.yaml
Set it as the default StorageClass:
kubectl patch storageclass nfs-storage -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
Check the status of the nfs-client-provisioner pod:
kubectl get pods
NAME READY STATUS RESTARTS AGE
nfs-client-provisioner-7b9746695c-nrz4n 1/1 Running 0 2m38s
Check the default storage class:
kubectl get sc
NAME PROVISIONER AGE
nfs-storage (default) fuseim.pri/ifs 7m22s
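Before installing KubeSphere, you can confirm that dynamic provisioning actually works by creating a small test PVC. The name `test-pvc` and the 1Mi size are illustrative; because nfs-storage is the default class, no storageClassName is needed:

```shell
# Minimal PVC to exercise the default StorageClass.
cat > test-pvc.yaml <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
EOF
# Apply it; with a working provisioner, the PVC should reach Bound shortly:
#   kubectl apply -f test-pvc.yaml && kubectl get pvc test-pvc
#   kubectl delete -f test-pvc.yaml   # clean up afterwards
```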
Deploy KubeSphere
Minimal KubeSphere installation
# Install the KubeSphere installer
kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.0.0/kubesphere-installer.yaml
# Apply the cluster configuration
kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.0.0/cluster-configuration.yaml
Open another SSH session to follow the logs:
# Watch the installation log
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
# Result
When the log ends with the following, the installation is complete; you still need to wait for some pods to be fully up before everything is usable.
Start installing monitoring
**************************************************
task monitoring status is successful
total: 1 completed:1
**************************************************
#####################################################
### Welcome to KubeSphere! ###
#####################################################
Console: http://10.211.55.5:30880
Account: admin
Password: P@88w0rd
NOTES:
1. After logging into the console, please check the
monitoring status of service components in
the "Cluster Status". If the service is not
ready, please wait patiently. You can start
to use when all components are ready.
2. Please modify the default password after login.
#####################################################
Check that all pods are in the Running state, which means the installation succeeded:
kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
default nfs-client-provisioner-7b9746695c-nrz4n 1/1 Running 0 18m
kube-system calico-kube-controllers-bc44d789c-ksgnt 1/1 Running 0 39h
kube-system calico-node-2t4gr 1/1 Running 0 39h
kube-system calico-node-5bzjl 1/1 Running 0 39h
kube-system calico-node-fjdll 1/1 Running 0 39h
kube-system coredns-58cc8c89f4-8jrlt 1/1 Running 0 39h
kube-system coredns-58cc8c89f4-nt5z5 1/1 Running 0 39h
kube-system etcd-k8s-master1 1/1 Running 0 39h
kube-system kube-apiserver-k8s-master1 1/1 Running 0 39h
kube-system kube-controller-manager-k8s-master1 1/1 Running 0 39h
kube-system kube-proxy-b7vj4 1/1 Running 0 39h
kube-system kube-proxy-bghx7 1/1 Running 0 39h
kube-system kube-proxy-ntrxx 1/1 Running 0 39h
kube-system kube-scheduler-k8s-master1 1/1 Running 0 39h
kube-system kuboard-756d46c4d4-dwzwt 1/1 Running 0 39h
kube-system metrics-server-78cff478b7-lwcfl 1/1 Running 0 39h
kube-system tiller-deploy-6d8dfbb696-ldpjd 1/1 Running 0 40m
......
Visit http://10.211.55.5:30880 and log in with admin / P@88w0rd to enter the system; you will see the console shown in the screenshots at the top of this article.