
Deploying a highly available Kubernetes 1.26.0 cluster on CentOS 7 with keepalived + nginx

Updated: 2025-01-11 16:46:55   Author: 南北二斗
Kubernetes is an open-source container orchestration platform for automatically deploying, scaling, and managing containerized applications. In production, keeping the cluster highly available requires multiple master nodes for redundancy and failover; this guide builds such a cluster with kubeadm behind a keepalived + nginx load balancer.

Cluster role    IP address         Hostname

master          192.168.209.116    k8s-master1
master          192.168.209.117    k8s-master2
master          192.168.209.118    k8s-master3
node            192.168.209.119    k8s-node1

echo "Setting hostnames"

echo "Run on 192.168.209.116:"
hostnamectl set-hostname k8s-master1 && bash

echo "Run on 192.168.209.117:"
hostnamectl set-hostname k8s-master2 && bash

echo "Run on 192.168.209.118:"
hostnamectl set-hostname k8s-master3 && bash

echo "Run on 192.168.209.119:"
hostnamectl set-hostname k8s-node1 && bash
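Later steps scp files between masters by hostname, so every node needs to resolve the others. A minimal sketch that mirrors the table above (appending to /etc/hosts is left commented out since it needs root):

```shell
# Cluster host entries matching the node table above.
HOSTS_ENTRIES='192.168.209.116 k8s-master1
192.168.209.117 k8s-master2
192.168.209.118 k8s-master3
192.168.209.119 k8s-node1'

# On a real node (requires root):
#   printf '%s\n' "$HOSTS_ENTRIES" >> /etc/hosts
printf '%s\n' "$HOSTS_ENTRIES"
```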

1. Initialization (run on all nodes)

echo "Configuring the Aliyun yum mirror..."
mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.bak
curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
yum clean all
yum makecache


echo "Updating the system and installing prerequisites..."
yum update -y
yum install -y yum-utils device-mapper-persistent-data lvm2 bash-completion


echo "Disabling SELinux and firewalld..."
setenforce 0
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
systemctl disable --now firewalld

echo "Disabling swap..."
swapoff -a
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
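The sed above comments out every fstab line containing " swap " so swap stays off after reboot. A dry run of the same expression against a sample fstab (the sample device names are hypothetical) shows the effect without touching the real file:

```shell
# Dry run of the swap-disabling sed on sample fstab content.
fstab_sample='/dev/mapper/centos-root /    xfs  defaults 0 0
/dev/mapper/centos-swap swap  swap defaults 0 0'
patched=$(printf '%s\n' "$fstab_sample" | sed '/ swap / s/^\(.*\)$/#\1/g')
# Only the swap line gains a leading '#'; the root line is untouched.
printf '%s\n' "$patched"
```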

echo "Tuning kernel parameters (IP forwarding, bridge netfilter, swap behaviour)..."
cat <<EOF | tee /etc/sysctl.d/k8s.conf
vm.swappiness = 0
vm.panic_on_oom = 0
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_tw_recycle = 0
net.ipv4.tcp_fin_timeout = 30
net.ipv4.tcp_syncookies = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-arptables = 1
net.ipv4.ip_forward = 1
net.ipv6.conf.all.disable_ipv6 = 1
net.netfilter.nf_conntrack_max = 2310720
fs.inotify.max_user_instances = 8192
fs.inotify.max_user_watches = 1048576
fs.file-max = 52706963
fs.nr_open = 52706963
EOF

sysctl -p /etc/sysctl.d/k8s.conf

echo "Loading the br_netfilter module..."
modprobe br_netfilter
lsmod | grep br_netfilter


echo "Installing ipset and ipvsadm..."
yum -y install ipset ipvsadm


echo "Configuring IPVS kernel module loading..."
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
EOF

chmod 755 /etc/sysconfig/modules/ipvs.modules
bash /etc/sysconfig/modules/ipvs.modules
lsmod | grep -e ip_vs -e nf_conntrack

2. Install containerd (run on all nodes)

echo "Installing containerd..."
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum install -y containerd.io
containerd config default > /etc/containerd/config.toml

#Edit /etc/containerd/config.toml as follows:
#1. Change SystemdCgroup = false to SystemdCgroup = true

#2. Change sandbox_image = "k8s.gcr.io/pause:3.6" to sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.7"

#3. Add the following 4 lines under [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."swr.cn-north-4.myhuaweicloud.com"]
          endpoint = ["https://swr.cn-north-4.myhuaweicloud.com"]

        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
          endpoint = ["https://swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io"]
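The first two manual edits above can also be scripted with sed. A sketch of the substitutions, demonstrated here against a sample snippet rather than the live /etc/containerd/config.toml:

```shell
# Sample lines standing in for the relevant parts of config.toml.
sample='SystemdCgroup = false
sandbox_image = "k8s.gcr.io/pause:3.6"'
patched=$(printf '%s\n' "$sample" | sed \
  -e 's/SystemdCgroup = false/SystemdCgroup = true/' \
  -e 's#k8s.gcr.io/pause:3.6#registry.aliyuncs.com/google_containers/pause:3.7#')
printf '%s\n' "$patched"
# On a real node, apply the same two -e expressions with sed -i to
# /etc/containerd/config.toml before starting containerd.
```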
systemctl enable --now containerd

3. Install docker-ce (run on all nodes)

echo "Stopping any old docker version"
sudo systemctl stop docker

echo "Removing old docker versions"
yum remove -y docker \
    docker-client \
    docker-client-latest \
    docker-common \
    docker-latest \
    docker-latest-logrotate \
    docker-logrotate \
    docker-selinux \
    docker-engine-selinux \
    docker-engine

sudo rm -rf /var/lib/docker
sudo rm -rf /run/docker
sudo rm -rf /var/run/docker
sudo rm -rf /etc/docker

echo "Installing docker-ce"

yum install -y yum-utils device-mapper-persistent-data lvm2 git
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

yum install docker-ce -y
cat > /etc/docker/daemon.json <<'EOF'
{
    "exec-opts": ["native.cgroupdriver=systemd"],
    "registry-mirrors": [
    "https://2a6bf1988cb6428c877f723ec7530dbc.mirror.swr.myhuaweicloud.com",
    "https://docker.m.daocloud.io",
    "https://hub-mirror.c.163.com",
    "https://mirror.baidubce.com",
    "https://your_preferred_mirror",
    "https://dockerhub.icu",
    "https://docker.registry.cyou",
    "https://docker-cf.registry.cyou",
    "https://dockercf.jsdelivr.fyi",
    "https://docker.jsdelivr.fyi",
    "https://dockertest.jsdelivr.fyi",
    "https://mirror.aliyuncs.com",
    "https://dockerproxy.com",
    "https://docker.nju.edu.cn",
    "https://docker.mirrors.sjtug.sjtu.edu.cn",
    "https://docker.mirrors.ustc.edu.cn",
    "https://mirror.iscas.ac.cn",
    "https://docker.rainbond.cc"
    ]
}
EOF
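A malformed daemon.json prevents docker from starting at all, so validating the JSON before restarting is cheap insurance. A sketch using python3's built-in json.tool on a trimmed copy (the mirror list here is a subset of the one above, and the scratch path is arbitrary):

```shell
# Write a trimmed daemon.json to a scratch path and validate it as JSON.
cat > /tmp/daemon-check.json <<'EOF'
{
    "exec-opts": ["native.cgroupdriver=systemd"],
    "registry-mirrors": ["https://docker.m.daocloud.io"]
}
EOF
python3 -m json.tool /tmp/daemon-check.json > /dev/null && echo "valid JSON"
# Run the same validation against /etc/docker/daemon.json before restarting docker.
```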
systemctl enable --now docker
systemctl restart docker

4. Install kubelet, kubeadm and kubectl (run on all nodes)

echo "Installing the Kubernetes tools..."
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

yum install -y kubelet-1.26.0 kubeadm-1.26.0 kubectl-1.26.0

systemctl enable kubelet
systemctl restart kubelet

5. Install keepalived and nginx (run on the master nodes only)

sudo yum install -y epel-release
sudo yum install -y nginx keepalived

echo "Configuring nginx"
vim /etc/nginx/nginx.conf

#Add a stream block above the existing http block
...
stream {
    log_format  main  '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
    access_log  /var/log/nginx/k8s-access.log  main;
    upstream k8s-apiserver {
            server 192.168.209.118:6443 weight=5 max_fails=3 fail_timeout=30s;
            server 192.168.209.117:6443 weight=5 max_fails=3 fail_timeout=30s;
            server 192.168.209.116:6443 weight=5 max_fails=3 fail_timeout=30s;
    }
    server {
       listen 16443; # nginx shares these hosts with the apiserver, so it must not listen on 6443 or the ports would clash
       proxy_pass k8s-apiserver;
    }
}

http {
    ...   # the rest of the stock http block is unchanged
}

echo "Configuring keepalived"
echo "-------- k8s-master1 --------"


cat /etc/keepalived/keepalived.conf
global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id NGINX_MASTER
}
vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
}
vrrp_instance VI_1 {
    state MASTER
    interface ens33  # change to your actual NIC name
    virtual_router_id 51 # VRRP router ID; unique per VRRP instance
    priority 100    # priority; the backups use 90 and 80
    advert_int 1    # VRRP advertisement interval, default 1s
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    # virtual IP
    virtual_ipaddress {
        192.168.209.111/24
    }
    track_script {
        check_nginx
    }
}


echo "Configuring keepalived"
echo "-------- k8s-master2 --------"


cat /etc/keepalived/keepalived.conf
global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id NGINX_BACKUP
}
vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens33  # change to your actual NIC name
    virtual_router_id 51 # VRRP router ID; unique per VRRP instance
    priority 90    # priority; lower than the 100 on the MASTER
    advert_int 1    # VRRP advertisement interval, default 1s
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    # virtual IP
    virtual_ipaddress {
        192.168.209.111/24
    }
    track_script {
        check_nginx
    }
}


echo "Configuring keepalived"
echo "-------- k8s-master3 --------"

cat /etc/keepalived/keepalived.conf
global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id NGINX_BACKUP
}
vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens33  # change to your actual NIC name
    virtual_router_id 51 # VRRP router ID; unique per VRRP instance
    priority 80    # priority; the lowest of the three masters
    advert_int 1    # VRRP advertisement interval, default 1s
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    # virtual IP
    virtual_ipaddress {
        192.168.209.111/24
    }
    track_script {
        check_nginx
    }
}


cat /etc/keepalived/check_nginx.sh
#!/bin/bash
#1. Check whether nginx is alive
counter=$(ps -ef |grep nginx | grep sbin | egrep -cv "grep|$$" )
if [ $counter -eq 0 ]; then
    #2. If it is not, try to start it
    service nginx start
    sleep 2
    #3. Wait 2 seconds, then check the state again
    counter=$(ps -ef |grep nginx | grep sbin | egrep -cv "grep|$$" )
    #4. If nginx is still dead, stop keepalived so the VIP fails over
    if [ $counter -eq 0 ]; then
        service  keepalived stop
    fi
fi
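The counter in the script pipes ps output through filters so that only the nginx master process line (the one mentioning /usr/sbin/nginx) is counted. The counting step can be exercised on canned ps lines (the sample processes are hypothetical, and the `$$` self-exclusion is simplified to excluding "grep" for the demo):

```shell
# Canned ps -ef output: an nginx master, a worker, and the grep itself.
ps_sample='root   1001    1  0 10:00 ?  00:00:00 nginx: master process /usr/sbin/nginx
nginx  1002 1001  0 10:00 ?  00:00:00 nginx: worker process
root   2001 1900  0 10:01 ?  00:00:00 grep nginx'
counter=$(printf '%s\n' "$ps_sample" | grep nginx | grep sbin | grep -cv grep)
echo "$counter"   # 1 while nginx is up, 0 when it is down
```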
echo "Starting the nginx and keepalived services"

chmod +x /etc/keepalived/check_nginx.sh
systemctl daemon-reload && systemctl restart nginx
systemctl restart keepalived && systemctl enable nginx keepalived

6. Initialize the cluster with kubeadm (run on k8s-master1 only)

Change the lines marked with comments below so that they look like this:

kubeadm config print init-defaults > kubeadm.yaml

cat kubeadm.yaml

apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
#localAPIEndpoint:            # comment out
#  advertiseAddress: 1.2.3.4  # comment out
#  bindPort: 6443             # comment out
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock   # point at the containerd socket
  imagePullPolicy: IfNotPresent
#  name: node
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers     # use the Aliyun image mirror
kind: ClusterConfiguration
kubernetesVersion: 1.26.0
controlPlaneEndpoint: 192.168.209.111:16443                              # VIP + the nginx proxy port
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.244.0.0/16                                    # pod CIDR
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd


kubeadm init --config=kubeadm.yaml --ignore-preflight-errors=SystemVerification


mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl get nodes

The first control-plane node is now up.

-----------------------------

Note: create the target directory on k8s-master2 and k8s-master3 first:

mkdir -p /etc/kubernetes/pki/etcd/

echo "Copying the certificates from k8s-master1 to k8s-master2 and k8s-master3"

cd  /etc/kubernetes/pki/
scp ca.* k8s-master2:/etc/kubernetes/pki/
scp sa.* k8s-master2:/etc/kubernetes/pki/
scp front-proxy-ca.*  k8s-master2:/etc/kubernetes/pki/
scp etcd/ca.*  k8s-master2:/etc/kubernetes/pki/etcd/

scp ca.* k8s-master3:/etc/kubernetes/pki/
scp sa.* k8s-master3:/etc/kubernetes/pki/
scp front-proxy-ca.*  k8s-master3:/etc/kubernetes/pki/
scp etcd/ca.*  k8s-master3:/etc/kubernetes/pki/etcd/
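The per-host scp commands above can be collected into a small helper. A hypothetical sketch (copy_pki is not part of the original guide, and it assumes passwordless ssh between the masters):

```shell
# Hypothetical helper: push the shared CA/sa/front-proxy material to one host.
copy_pki() {
  local host=$1
  ssh "$host" mkdir -p /etc/kubernetes/pki/etcd
  scp /etc/kubernetes/pki/ca.* /etc/kubernetes/pki/sa.* \
      /etc/kubernetes/pki/front-proxy-ca.* "$host":/etc/kubernetes/pki/
  scp /etc/kubernetes/pki/etcd/ca.* "$host":/etc/kubernetes/pki/etcd/
}
# Usage: copy_pki k8s-master2 && copy_pki k8s-master3
```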

7. Join the other masters to the cluster (run on k8s-master2 and k8s-master3)

Append --control-plane --ignore-preflight-errors=SystemVerification to the join command that kubeadm init printed:

kubeadm join 192.168.209.111:16443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:ec363a06d7681d941e7969fb6e994f4a4c1c4ef0d154c7290131c1e830b4bec5 \
        --control-plane --ignore-preflight-errors=SystemVerification


8. Join the worker node to the cluster (run on k8s-node1 only)

kubeadm join 192.168.209.111:16443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:ec363a06d7681d941e7969fb6e994f4a4c1c4ef0d154c7290131c1e830b4bec5   --ignore-preflight-errors=SystemVerification

9. Deploy the calico network plugin (run on k8s-master1 only)

# The manifest version must match the image tags rewritten below (v3.25.0)
curl -O https://raw.githubusercontent.com/projectcalico/calico/v3.25.0/manifests/calico.yaml

sed -i 's|docker.io/calico/cni:v3.25.0|swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/calico/cni:v3.25.0|g' calico.yaml
sed -i 's|docker.io/calico/node:v3.25.0|swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/calico/node:v3.25.0|g' calico.yaml
sed -i 's|docker.io/calico/kube-controllers:v3.25.0|swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/calico/kube-controllers:v3.25.0|g' calico.yaml
cat calico.yaml
#在下面對(duì)應(yīng)位置加上2行內(nèi)容
            - name: IP_AUTODETECTION_METHOD
              value: "interface=ens33"
              
#在下面對(duì)應(yīng)位置加上2行內(nèi)容              
            - name: CALICO_IPV4POOL_CIDR
              value: "10.244.0.0/16"


kubectl apply -f calico.yaml

When the calico pods are all Running, the deployment succeeded.


Test that DNS resolution and pod networking work:

kubectl run busybox --image=swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/library/busybox:1.28 --image-pull-policy=IfNotPresent --restart=Never --rm -it -- sh


10. Configure etcd for high availability

vim /etc/kubernetes/manifests/etcd.yaml

#Change the line - --initial-cluster=k8s-master1=https://192.168.209.116:2380
#(the name and IP vary per master) to:
- --initial-cluster=k8s-master1=https://192.168.209.116:2380,k8s-master2=https://192.168.209.117:2380,k8s-master3=https://192.168.209.118:2380
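The same one-line change has to be made in /etc/kubernetes/manifests/etcd.yaml on every master. A sed sketch of the substitution, demonstrated against a sample line rather than the live manifest:

```shell
# Sample manifest line (stand-in for the real etcd.yaml entry).
line='    - --initial-cluster=k8s-master1=https://192.168.209.116:2380'
new_cluster='k8s-master1=https://192.168.209.116:2380,k8s-master2=https://192.168.209.117:2380,k8s-master3=https://192.168.209.118:2380'
patched=$(printf '%s\n' "$line" | sed "s#--initial-cluster=.*#--initial-cluster=$new_cluster#")
printf '%s\n' "$patched"
```

Because etcd is a static pod, kubelet restarts it automatically once the manifest file changes.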


Verify that the etcd cluster is configured correctly:

docker run --rm -it --net host -v /etc/kubernetes:/etc/kubernetes  registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.4-0 etcdctl --cert /etc/kubernetes/pki/etcd/peer.crt --key /etc/kubernetes/pki/etcd/peer.key --cacert /etc/kubernetes/pki/etcd/ca.crt member list


docker run --rm -it --net host -v /etc/kubernetes:/etc/kubernetes registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.4-0 etcdctl --cert /etc/kubernetes/pki/etcd/peer.crt --key /etc/kubernetes/pki/etcd/peer.key --cacert /etc/kubernetes/pki/etcd/ca.crt --endpoints=https://192.168.209.116:2379,https://192.168.209.117:2379,https://192.168.209.118:2379 endpoint health --cluster

Every endpoint reporting "successfully committed proposal" means etcd is healthy (the author had powered off k8s-master3 to save resources, so only two endpoints answer here).


docker run --rm -it --net host -v /etc/kubernetes:/etc/kubernetes registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.4-0 etcdctl -w table --cert /etc/kubernetes/pki/etcd/peer.crt --key /etc/kubernetes/pki/etcd/peer.key --cacert /etc/kubernetes/pki/etcd/ca.crt --endpoints=https://192.168.209.116:2379,https://192.168.209.117:2379,https://192.168.209.118:2379  endpoint status --cluster

Seeing all three endpoints in the table means the cluster is normal (again, k8s-master3 was powered off in this run).


11. Summary

This concludes the walkthrough of deploying a highly available Kubernetes 1.26.0 cluster on CentOS 7 with keepalived and nginx: three stacked control-plane nodes behind an nginx stream proxy, with keepalived floating the 192.168.209.111 VIP between them.
