1. Introduction
We used to build clusters with an nginx reverse proxy, but now there is a better option: K8S (Kubernetes). I am not going to dive straight into K8S concepts, since there are a lot of them. Instead I will start with building a K8S cluster; it was while setting one up that I became familiar with many of the concepts myself, and I hope the same works for you. Building a K8S cluster is of moderate difficulty, and there are plenty of tutorials online, but I still ran into a few problems along the way, so here is a write-up of the process I ended up with. This guide covers installing K8S from the binary packages.
2. Cluster components

| Node | IP | Components |
| --- | --- | --- |
| master | 192.168.8.201 | etcd: stores cluster state; kubectl: the CLI used to control the cluster; kube-controller-manager: monitors node health and repairs the cluster back to a healthy state; kube-scheduler: picks a suitable node for each pod created by the controller manager and writes the binding into etcd |
| node | 192.168.8.202 | kube-proxy: routes service traffic to pods; kubelet: once kube-scheduler has written the assignment into etcd, kubelet picks it up and creates the pod according to the spec; docker: the container runtime |
3. Install etcd
yum install etcd -y
vi /etc/etcd/etcd.conf
Edit etcd.conf and change:
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
Start etcd and enable it at boot:
systemctl start etcd
systemctl enable etcd
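Before moving on, it is worth confirming that etcd is actually listening on the client port. A quick check, assuming etcd was started as above and is reachable at 192.168.8.201 (the `/health` endpoint and `etcdctl cluster-health` are part of the etcd v2 tooling shipped by the yum package):

```shell
# The health endpoint should return {"health":"true"} when etcd is up
curl http://192.168.8.201:2379/health

# etcdctl can report the health of each cluster member as well
etcdctl cluster-health
```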
4.下載k8s安裝包
打開github中k8s地址,選擇一個版本的安裝包
點擊CHANGELOG-1.13.md,在master節點上安裝server包,node節點上安裝node包
5. Install the server package on the master
tar zxvf kubernetes-server-linux-amd64.tar.gz   # unpack
mkdir -p /opt/kubernetes/{bin,cfg}              # create the directories
mv kubernetes/server/bin/{kube-apiserver,kube-scheduler,kube-controller-manager,kubectl} /opt/kubernetes/bin   # move the binaries into the directory created above
chmod +x /opt/kubernetes/bin/*
5.1 Configure kube-apiserver
cat <<EOF >/opt/kubernetes/cfg/kube-apiserver
KUBE_APISERVER_OPTS="--logtostderr=true \\
--v=4 \\
--etcd-servers=http://192.168.8.201:2379 \\
--insecure-bind-address=0.0.0.0 \\
--insecure-port=8080 \\
--advertise-address=192.168.8.201 \\
--allow-privileged=true \\
--service-cluster-ip-range=10.10.10.0/24 \\
--service-node-port-range=30000-50000 \\
--admission-control=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ResourceQuota"
EOF
cat <<EOF >/usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-apiserver
ExecStart=/opt/kubernetes/bin/kube-apiserver \$KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
5.2 Configure kube-controller-manager
cat <<EOF >/opt/kubernetes/cfg/kube-controller-manager
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \\
--v=4 \\
--master=127.0.0.1:8080 \\
--leader-elect=true \\
--address=127.0.0.1"
EOF
cat <<EOF >/usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-controller-manager
ExecStart=/opt/kubernetes/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
5.3 Configure kube-scheduler
cat <<EOF >/opt/kubernetes/cfg/kube-scheduler
KUBE_SCHEDULER_OPTS="--logtostderr=true \\
--v=4 \\
--master=127.0.0.1:8080 \\
--leader-elect"
EOF
cat <<EOF >/usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-scheduler
ExecStart=/opt/kubernetes/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
5.4 Run kube-apiserver, kube-controller-manager, and kube-scheduler
vim ku.sh   # create a script with the following content
#!/bin/bash
systemctl daemon-reload
systemctl enable kube-apiserver
systemctl restart kube-apiserver
systemctl enable kube-controller-manager
systemctl restart kube-controller-manager
systemctl enable kube-scheduler
systemctl restart kube-scheduler
Run the script:
chmod +x ku.sh   # make it executable
./ku.sh          # run it
5.5 Add kubectl to the PATH for convenience
echo 'export PATH=$PATH:/opt/kubernetes/bin' >> /etc/profile
source /etc/profile
(Single quotes keep $PATH from being expanded when the line is written, so it is evaluated each time the profile is sourced.)
At this point the server side is installed. Check that the processes started:
ps -ef | grep kube
If a service failed to start, inspect its logs with:
journalctl -u kube-apiserver
6. Install the node
6.1 Install docker
sudo yum install -y yum-utils device-mapper-persistent-data lvm2
sudo yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
sudo yum makecache fast
sudo yum -y install docker-ce
sudo systemctl start docker
6.2 Unpack the node package
tar zxvf kubernetes-node-linux-amd64.tar.gz
mkdir -p /opt/kubernetes/{bin,cfg}
mv kubernetes/node/bin/{kubelet,kube-proxy} /opt/kubernetes/bin/
chmod +x /opt/kubernetes/bin/*
6.3 Create the configuration files
vim /opt/kubernetes/cfg/kubelet.kubeconfig
apiVersion: v1
kind: Config
clusters:
- cluster:
    server: http://192.168.8.201:8080
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
  name: default-context
current-context: default-context
vim /opt/kubernetes/cfg/kube-proxy.kubeconfig
apiVersion: v1
kind: Config
clusters:
- cluster:
    server: http://192.168.8.201:8080
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
  name: default-context
current-context: default-context
cat <<EOF >/opt/kubernetes/cfg/kubelet
KUBELET_OPTS="--logtostderr=true \\
--v=4 \\
--address=192.168.8.202 \\
--hostname-override=192.168.8.202 \\
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \\
--allow-privileged=true \\
--cluster-dns=10.10.10.2 \\
--cluster-domain=cluster.local \\
--fail-swap-on=false \\
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"
EOF
cat <<EOF >/usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kubelet
ExecStart=/opt/kubernetes/bin/kubelet \$KUBELET_OPTS
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target
EOF
cat <<EOF >/opt/kubernetes/cfg/kube-proxy
KUBE_PROXY_OPTS="--logtostderr=true \\
--v=4 \\
--hostname-override=192.168.8.202 \\
--kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig"
EOF
cat <<EOF >/usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-proxy
ExecStart=/opt/kubernetes/bin/kube-proxy \$KUBE_PROXY_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
6.4 Start kubelet and kube-proxy
vim ku.sh
#!/bin/bash
systemctl daemon-reload
systemctl enable kubelet
systemctl restart kubelet
systemctl enable kube-proxy
systemctl restart kube-proxy
Make it executable and run it, as before:
chmod +x ku.sh
./ku.sh
At this point the node is installed. Check that the processes started (ps -ef | grep kube, as on the master).
If something failed, inspect the logs:
journalctl -u kubelet
7. Verify on the master that the node has joined
Check the cluster health status.
At this point the master and node are installed successfully.
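The checks above can be done with kubectl on the master (assuming kubectl is on the PATH as set up in section 5.5):

```shell
# List registered nodes; 192.168.8.202 should appear with STATUS Ready
kubectl get nodes

# Check the health of the scheduler, controller-manager, and etcd
kubectl get componentstatuses   # or the short form: kubectl get cs
```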
8. Launch an nginx example
kubectl run nginx --image=nginx --replicas=3
kubectl expose deployment nginx --port=88 --target-port=80 --type=NodePort
Verify that the pods are running, then access the service from a browser using the node's IP and the assigned NodePort.
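The NodePort assigned by `kubectl expose` falls in the 30000-50000 range configured in the apiserver earlier, and `kubectl get svc nginx` shows it in the PORT(S) column as `88:<nodeport>/TCP`. A minimal sketch of extracting it in order to build the browser URL (the sample line below is illustrative, not real output from this cluster):

```shell
# Hypothetical output line from `kubectl get svc nginx` (values are illustrative)
svc_line="nginx   NodePort   10.10.10.100   <none>   88:30001/TCP   1m"

# The NodePort is the number between ':' and '/' in the PORT(S) column (5th field)
node_port=$(echo "$svc_line" | awk '{print $5}' | cut -d: -f2 | cut -d/ -f1)

# Browse to any node's IP on that port, e.g.:
echo "http://192.168.8.202:${node_port}"
```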
9. Install the dashboard
vim kube.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kubernetes-dashboard
  template:
    metadata:
      labels:
        app: kubernetes-dashboard
      annotations:
        scheduler.alpha.kubernetes.io/tolerations: |
          [
            {
              "key": "dedicated",
              "operator": "Equal",
              "value": "master",
              "effect": "NoSchedule"
            }
          ]
    spec:
      containers:
      - name: kubernetes-dashboard
        image: registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.7.0
        imagePullPolicy: Always
        ports:
        - containerPort: 9090
          protocol: TCP
        args:
        - --apiserver-host=http://192.168.8.201:8080
        livenessProbe:
          httpGet:
            path: /
            port: 9090
          initialDelaySeconds: 30
          timeoutSeconds: 30
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 9090
  selector:
    app: kubernetes-dashboard
Create it:
kubectl create -f kube.yaml
Check the pod, find the service's NodePort, and open the dashboard in a browser.
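The same NodePort pattern applies here, this time in the kube-system namespace (assuming the yaml above was applied as-is):

```shell
# Check that the dashboard pod is running
kubectl get pods -n kube-system

# Find the NodePort auto-assigned to the dashboard service (shown as 80:<nodeport>/TCP)
kubectl get svc kubernetes-dashboard -n kube-system

# Then browse to http://<node-ip>:<nodeport>
```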
At this point the cluster setup is complete.