For image downloads, domain name resolution, and time synchronization, see the Alibaba Cloud open-source mirror site.
1. Install Docker on both machines
// 1. Add the Docker yum repo
yum install -y wget && wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
// 2. Install Docker
yum -y install docker-ce-18.06.1.ce-3.el7
// 3. Enable Docker at boot and start it
systemctl enable docker && systemctl start docker
// 4. Check the version
docker --version
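// Optionally confirm the daemon is actually up and see which cgroup driver it uses, since that matters later (a quick check, not in the original post):
docker info --format '{{.ServerVersion}} (cgroup driver: {{.CgroupDriver}})'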
2. Install the latest k8s
// Find the latest stable version
[root@master ~]# curl -sSL https://dl.k8s.io/release/stable.txt
v1.23.5
// 下載安裝
[root@master tmp]# wget -q https://dl.k8s.io/v1.23.5/kubernetes-server-linux-amd64.tar.gz
[root@master tmp]# tar -zxf kubernetes-server-linux-amd64.tar.gz
[root@master tmp]# ls kubernetes
addons kubernetes-src.tar.gz LICENSES server
[root@master tmp]# ls kubernetes/server/bin/ | grep -E 'kubeadm|kubelet|kubectl'
kubeadm
kubectl
kubelet
// Everything we need is in the server/bin/ directory; move kubeadm, kubectl, and kubelet into /usr/bin/.
[root@master tmp]# mv kubernetes/server/bin/kube{adm,ctl,let} /usr/bin/
[root@master tmp]# ls /usr/bin/kube*
/usr/bin/kubeadm /usr/bin/kubectl /usr/bin/kubelet
[root@master tmp]# kubeadm version
[root@master tmp]# kubectl version --client
[root@master tmp]# kubelet --version
// To keep each component running stably in production, and to make management easier, add systemd configuration for kubelet so that systemd supervises the service.
[root@master tmp]# cat <<'EOF' > /etc/systemd/system/kubelet.service
[Unit]
Description=kubelet: The Kubernetes Agent
Documentation=http://kubernetes.io/docs/
[Service]
ExecStart=/usr/bin/kubelet
Restart=always
StartLimitInterval=0
RestartSec=10
[Install]
WantedBy=multi-user.target
EOF
[root@master tmp]# mkdir -p /etc/systemd/system/kubelet.service.d
[root@master tmp]# cat <<'EOF' > /etc/systemd/system/kubelet.service.d/kubeadm.conf
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
EnvironmentFile=-/etc/default/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS
EOF
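// The unit file and drop-in above were just written by hand, so reload systemd so it picks them up (a standard step, not shown in the original post):
systemctl daemon-reload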
// Enable kubelet at boot
[root@master tmp]# systemctl enable kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /etc/systemd/system/kubelet.service.
// At this point the preparation is essentially done and we could create the cluster with kubeadm. Not so fast, though: we still need to install two tools, crictl and socat.
// Kubernetes v1.23.5 pairs with crictl v1.23.0
[root@master ~]# wget https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.23.0/crictl-v1.23.0-linux-amd64.tar.gz
[root@master ~]# tar zxvf crictl-v1.23.0-linux-amd64.tar.gz
[root@master ~]# mv crictl /usr/bin/
sudo yum install -y socat
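// kubeadm's preflight checks also expect swap to be off and the bridge-netfilter sysctls to be set; the original post skips this, so here is a minimal sketch of the usual host preparation:
swapoff -a                              # turn swap off for the current boot
sed -ri 's/.*swap.*/#&/' /etc/fstab     # comment out swap entries so it stays off after reboot
modprobe br_netfilter                   # load the bridge netfilter module
cat <<'EOF' > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF
sysctl --system                         # apply the new sysctl settings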
// Initialize the master
[root@master ~]# kubeadm init --image-repository registry.aliyuncs.com/google_containers --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.23.5
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR FileExisting-conntrack]: conntrack not found in system path
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
// That failed: conntrack-tools needs to be installed
yum -y install socat conntrack-tools
// Another error
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
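// To see why kubelet is unhealthy, its logs are the first place to look (a generic diagnostic step, not from the original post):
journalctl -xeu kubelet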
// Docker was installed with yum, and its cgroup driver defaults to cgroupfs, while kubelet defaults to the systemd cgroup driver. The two must match, so change Docker's cgroup driver to systemd.
# Add the following content
vim /etc/docker/daemon.json
{
"exec-opts": ["native.cgroupdriver=systemd"]
}
# Restart Docker
systemctl restart docker
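// Verify the driver actually changed; this should now print "systemd" (a quick sanity check, not in the original post):
docker info --format '{{.CgroupDriver}}'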
# Re-initialize with kubeadm
kubeadm reset # reset first
kubeadm init \
--apiserver-advertise-address=192.168.42.122 \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.23.5 \
--service-cidr=10.96.0.0/12 \
--pod-network-cidr=10.244.0.0/16 \
--ignore-preflight-errors=all
kubeadm reset
// Or initialize with just the essentials
kubeadm init --image-repository registry.aliyuncs.com/google_containers --pod-network-cidr=10.244.0.0/16
Your Kubernetes control-plane has initialized successfully!
/var/lib/kubelet/config.yaml # kubelet config file generated by kubeadm
/etc/kubernetes/pki # certificate directory
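// With the control plane up, the expiry of the certificates under /etc/kubernetes/pki can be checked with kubeadm (an optional check, not part of the original walkthrough):
kubeadm certs check-expiration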
[root@master ~]# kubeadm config images list --kubernetes-version v1.23.5
k8s.gcr.io/kube-apiserver:v1.23.5
k8s.gcr.io/kube-controller-manager:v1.23.5
k8s.gcr.io/kube-scheduler:v1.23.5
k8s.gcr.io/kube-proxy:v1.23.5
k8s.gcr.io/pause:3.6
k8s.gcr.io/etcd:3.5.1-0
k8s.gcr.io/coredns/coredns:v1.8.6
[root@master ~]# kubeadm config images pull --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.23.5
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.23.5
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.23.5
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.23.5
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.23.5
[config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.6
[config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.5.1-0
[config/images] Pulled registry.aliyuncs.com/google_containers/coredns:v1.8.6
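// Confirm the images landed in the local Docker cache (an optional check, not in the original post):
docker images | grep registry.aliyuncs.com/google_containers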
// Configure the environment variables. This has to be redone after every reboot, which deserves a closer look (see the note after the export line below).
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
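// The "redo after every reboot" issue above is just shell scoping: export only lasts for the current session. Persisting it in the shell profile is one common fix (a sketch, assuming a root login shell):
echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' >> ~/.bash_profile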
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
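// The join command itself is elided above; if it gets lost, it can be regenerated on the master at any time:
kubeadm token create --print-join-command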
// Install the networking component: flannel or calico
mkdir ~/kubernetes-flannel && cd ~/kubernetes-flannel
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f kube-flannel.yml
kubectl get nodes
[root@master ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
[root@master ~]# kubectl get pod -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-6d8c4cb4d-7jfb8 0/1 Pending 0 11m
coredns-6d8c4cb4d-m8hfd 0/1 Pending 0 11m
etcd-master 1/1 Running 4 11m
kube-apiserver-master 1/1 Running 3 11m
kube-controller-manager-master 1/1 Running 4 11m
kube-flannel-ds-m65q6 1/1 Running 0 17s
kube-proxy-qlrmp 1/1 Running 0 11m
kube-scheduler-master 1/1 Running 4 11m
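// A standard way to dig into a Pending pod is to describe it and read the Events section (a generic diagnostic, not from the original post; the pod name is taken from the listing above):
kubectl -n kube-system describe pod coredns-6d8c4cb4d-7jfb8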
// coredns stayed Pending the whole time and I couldn't find the cause
// So I decided to switch to calico and try that
First, delete kube-flannel
[root@master ~]# kubectl delete -f kube-flannel.yml
Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
podsecuritypolicy.policy "psp.flannel.unprivileged" deleted
clusterrole.rbac.authorization.k8s.io "flannel" deleted
clusterrolebinding.rbac.authorization.k8s.io "flannel" deleted
serviceaccount "flannel" deleted
configmap "kube-flannel-cfg" deleted
daemonset.apps "kube-flannel-ds" deleted
[root@master ~]# ifconfig cni0 down
cni0: ERROR while getting interface flags: No such device
[root@master ~]# ip link delete cni0
Cannot find device "cni0"
[root@master ~]# rm -rf /var/lib/cni/
[root@master ~]# ifconfig flannel.1 down
[root@master ~]# ip link delete flannel.1
[root@master ~]# rm -f /etc/cni/net.d/*
[root@master ~]# restart kubelet
-bash: restart: command not found
[root@master ~]# systemctl restart kubelet
// Install calico
[root@master ~]# curl https://projectcalico.docs.tigera.io/manifests/calico.yaml -O
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  212k  100  212k    0     0  68018      0  0:00:03  0:00:03 --:--:-- 68039
[root@master ~]# ls
calico.yaml kube-flannel.yml kubernetes-flannel
[root@master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master NotReady control-plane,master 16h v1.23.5
node1 NotReady <none> 12h v1.23.5
[root@master ~]# kubectl apply -f calico.yaml
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
Warning: policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget
poddisruptionbudget.policy/calico-kube-controllers created
// List the pods
[root@master ~]# kubectl get -w pod -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-kube-controllers-56fcbf9d6b-28w9g 1/1 Running 0 21m
kube-system calico-node-btgnl 1/1 Running 0 21m
kube-system calico-node-z64mb 1/1 Running 0 21m
kube-system coredns-6d8c4cb4d-8pnxx 1/1 Running 0 12h
kube-system coredns-6d8c4cb4d-jdbj2 1/1 Running 0 12h
kube-system etcd-master 1/1 Running 4 17h
kube-system kube-apiserver-master 1/1 Running 3 17h
kube-system kube-controller-manager-master 1/1 Running 4 17h
kube-system kube-proxy-68qrn 1/1 Running 0 12h
kube-system kube-proxy-qlrmp 1/1 Running 0 17h
kube-system kube-scheduler-master 1/1 Running 4 17h
Everything is running normally now.
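// As a final check, both nodes should now report Ready instead of NotReady (expected result; this output was not captured in the original post):
kubectl get nodes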
Original post: https://blog.csdn.net/qq_36002737/article/details/123678418