Pod Taints and Tolerations Explained
1. System Environment
| Server OS version | Docker version | Kubernetes (k8s) cluster version | CPU architecture |
|---|---|---|---|
| CentOS Linux release 7.4.1708 (Core) | Docker version 20.10.12 | v1.21.9 | x86_64 |
Kubernetes cluster architecture: k8scloude1 is the master node; k8scloude2 and k8scloude3 are worker nodes.
| Server | OS version | CPU architecture | Processes | Role |
|---|---|---|---|---|
| k8scloude1/192.168.110.130 | CentOS Linux release 7.4.1708 (Core) | x86_64 | docker,kube-apiserver,etcd,kube-scheduler,kube-controller-manager,kubelet,kube-proxy,coredns,calico | k8s master node |
| k8scloude2/192.168.110.129 | CentOS Linux release 7.4.1708 (Core) | x86_64 | docker,kubelet,kube-proxy,calico | k8s worker node |
| k8scloude3/192.168.110.128 | CentOS Linux release 7.4.1708 (Core) | x86_64 | docker,kubelet,kube-proxy,calico | k8s worker node |
2. Preface
This article introduces taints and tolerations, which influence how Pods are scheduled.
Using taints and tolerations presupposes a working Kubernetes cluster. For installing and deploying a Kubernetes (k8s) cluster, see the blog post "Installing and Deploying a Kubernetes (k8s) Cluster on CentOS 7".
3. Taints
3.1 Taint overview
Node affinity is a property of Pods that attracts them to a set of nodes, either as a preference or as a hard requirement. Taints are the opposite: they allow a node to repel a set of Pods.
3.2 Adding a taint to a node
The syntax for adding a taint to a node is shown below. The example adds a taint to node node1 with key key1, value value1, and effect NoSchedule, which means that only Pods with a matching toleration can be scheduled onto node1.
#Taint format: key=value:NoSchedule
kubectl taint nodes node1 key1=value1:NoSchedule
#If there is only a key and no value, the format is: key:NoSchedule
kubectl taint nodes node1 key1:NoSchedule
The syntax for removing a taint is:
kubectl taint nodes node1 key1=value1:NoSchedule-
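The taint spec strings above follow the pattern KEY[=VALUE]:EFFECT, with a trailing "-" meaning "remove this taint". As a rough sketch of that format (parse_taint_spec is a hypothetical helper written for this article, not part of kubectl or any Kubernetes library):

```python
# Hypothetical helper, for illustration only: parse the KEY[=VALUE]:EFFECT[-]
# taint spec strings accepted by `kubectl taint nodes ...`.
def parse_taint_spec(spec: str) -> dict:
    remove = spec.endswith("-")          # a trailing "-" removes the taint
    if remove:
        spec = spec[:-1]
    body, effect = spec.rsplit(":", 1)   # effect: NoSchedule / PreferNoSchedule / NoExecute
    key, _, value = body.partition("=")  # value is empty when only a key is given
    return {"key": key, "value": value, "effect": effect, "remove": remove}

print(parse_taint_spec("key1=value1:NoSchedule"))
print(parse_taint_spec("key1=value1:NoSchedule-"))
```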
A node's description contains a Taints field, which shows whether the node carries any taints.
[root@k8scloude1 deploy]# kubectl get nodes -o wide
NAME         STATUS   ROLES                  AGE   VERSION   INTERNAL-IP       EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION          CONTAINER-RUNTIME
k8scloude1   Ready    control-plane,master   8d    v1.21.0   192.168.110.130   <none>        CentOS Linux 7 (Core)   3.10.0-693.el7.x86_64   docker://20.10.12
k8scloude2   Ready    <none>                 8d    v1.21.0   192.168.110.129   <none>        CentOS Linux 7 (Core)   3.10.0-693.el7.x86_64   docker://20.10.12
k8scloude3   Ready    <none>                 8d    v1.21.0   192.168.110.128   <none>        CentOS Linux 7 (Core)   3.10.0-693.el7.x86_64   docker://20.10.12
[root@k8scloude1 deploy]# kubectl describe nodes k8scloude1
Name:               k8scloude1
Roles:              control-plane,master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=k8scloude1
                    kubernetes.io/os=linux
                    node-role.kubernetes.io/control-plane=
                    node-role.kubernetes.io/master=
                    node.kubernetes.io/exclude-from-external-load-balancers=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    projectcalico.org/IPv4Address: 192.168.110.130/24
                    projectcalico.org/IPv4IPIPTunnelAddr: 10.244.158.64
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Sun, 09 Jan 2022 16:19:06 +0800
Taints:             node-role.kubernetes.io/master:NoSchedule
Unschedulable:      false
......
Check whether each node has taints. Taints: node-role.kubernetes.io/master:NoSchedule shows that the cluster's master node is tainted. This taint exists by default, and it is why application Pods do not normally run on the master node.
[root@k8scloude1 deploy]# kubectl describe nodes k8scloude2 | grep -i Taints
Taints:             <none>
[root@k8scloude1 deploy]# kubectl describe nodes k8scloude1 | grep -i Taints
Taints:             node-role.kubernetes.io/master:NoSchedule
[root@k8scloude1 deploy]# kubectl describe nodes k8scloude3 | grep -i Taints
Taints:             <none>
Create a Pod. nodeSelector: kubernetes.io/hostname: k8scloude1 means the Pod must run on the node labeled kubernetes.io/hostname=k8scloude1.
For details on Pod scheduling, see the blog post "Pod (8): Pod Scheduling (Assigning Pods to Nodes)".
[root@k8scloude1 pod]# vim schedulepod4.yaml
[root@k8scloude1 pod]# cat schedulepod4.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod1
  name: pod1
  namespace: pod
spec:
  nodeSelector:
    kubernetes.io/hostname: k8scloude1
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: pod1
    resources: {}
    ports:
    - name: http
      containerPort: 80
      protocol: TCP
      hostPort: 80
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
The node labeled kubernetes.io/hostname=k8scloude1 is k8scloude1:
[root@k8scloude1 pod]# kubectl get nodes -l kubernetes.io/hostname=k8scloude1
NAME         STATUS   ROLES                  AGE   VERSION
k8scloude1   Ready    control-plane,master   8d    v1.21.0
創(chuàng)建pod,因?yàn)閗8scloude1上有污點(diǎn),pod1不能運(yùn)行在k8scloude1上,所以pod1狀態(tài)為Pending
[root@k8scloude1 pod]# kubectl apply -f schedulepod4.yaml pod/pod1 created #因?yàn)閗8scloude1上有污點(diǎn),pod1不能運(yùn)行在k8scloude1上,所以pod1狀態(tài)為Pending [root@k8scloude1 pod]# kubectl get pod -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES pod1 0/1 Pending 0 9s <none> <none> <none> <none> [root@k8scloude1 pod]# kubectl delete pod pod1 --force warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. pod "pod1" force deleted [root@k8scloude1 pod]# kubectl get pod -o wide No resources found in pod namespace.
4. Tolerations
4.1 Toleration overview
Tolerations are applied to Pods. A toleration allows the scheduler to schedule a Pod onto a node with matching taints. Tolerations allow scheduling but do not guarantee it: the scheduler also evaluates other parameters as part of its function.
Taints and tolerations work together to keep Pods off inappropriate nodes. One or more taints can be applied to a node; a node will not accept any Pod that does not tolerate its taints.
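The matching rule can be sketched in Python. This is a simplified, illustrative model of the check described above, not the scheduler's actual code; toleration_matches and the sample dictionaries are invented for this article:

```python
def toleration_matches(toleration: dict, taint: dict) -> bool:
    """Simplified sketch: does one toleration tolerate one taint?"""
    # An empty key with operator Exists tolerates every taint.
    if not toleration.get("key") and toleration.get("operator") == "Exists":
        return True
    if toleration.get("key") != taint["key"]:
        return False
    # An empty effect on the toleration matches any taint effect.
    if toleration.get("effect") and toleration["effect"] != taint["effect"]:
        return False
    if toleration.get("operator", "Equal") == "Exists":
        return True                      # Exists ignores the value entirely
    return toleration.get("value", "") == taint.get("value", "")

# The default master taint and the toleration used later in this article:
master = {"key": "node-role.kubernetes.io/master", "value": "", "effect": "NoSchedule"}
tol = {"key": "node-role.kubernetes.io/master", "operator": "Equal",
       "value": "", "effect": "NoSchedule"}
print(toleration_matches(tol, master))   # True: key, value, and effect all match
```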
4.2 Setting tolerations
Only Pods with a toleration matching a node's taint can be scheduled onto that node.
Check the taints on node k8scloude1:
[root@k8scloude1 pod]# kubectl describe nodes k8scloude1 | grep -i taint
Taints:             node-role.kubernetes.io/master:NoSchedule
You can set tolerations for a Pod in its spec. Create a Pod whose tolerations field tolerates the taint node-role.kubernetes.io/master:NoSchedule, and whose nodeSelector: kubernetes.io/hostname: k8scloude1 places it on the node labeled kubernetes.io/hostname=k8scloude1.
[root@k8scloude1 pod]# vim schedulepod4.yaml
[root@k8scloude1 pod]# cat schedulepod4.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod1
  name: pod1
  namespace: pod
spec:
  tolerations:
  - key: "node-role.kubernetes.io/master"
    operator: "Equal"
    value: ""
    effect: "NoSchedule"
  nodeSelector:
    kubernetes.io/hostname: k8scloude1
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: pod1
    resources: {}
    ports:
    - name: http
      containerPort: 80
      protocol: TCP
      hostPort: 80
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
[root@k8scloude1 pod]# kubectl get pods -o wide
No resources found in pod namespace.
[root@k8scloude1 pod]# kubectl apply -f schedulepod4.yaml
pod/pod1 created
Check the Pod: even though node k8scloude1 is tainted, the Pod runs normally.
Difference between a taint and cordon/drain: a tainted node can still run Pods that declare a matching toleration, while a cordoned or drained node cannot be assigned any new Pods at all.
For details on cordon and drain, see the blog post "Cordoning Nodes, Draining Nodes, Deleting Nodes".
[root@k8scloude1 pod]# kubectl get pods -o wide
NAME   READY   STATUS    RESTARTS   AGE   IP              NODE         NOMINATED NODE   READINESS GATES
pod1   1/1     Running   0          4s    10.244.158.84   k8scloude1   <none>           <none>
[root@k8scloude1 pod]# kubectl delete pod pod1 --force
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "pod1" force deleted
[root@k8scloude1 pod]# kubectl get pods -o wide
No resources found in pod namespace.
Note: tolerations can be written in two ways; either one works:
tolerations:
- key: "key1"
  operator: "Equal"
  value: "value1"
  effect: "NoSchedule"

tolerations:
- key: "key1"
  operator: "Exists"
  effect: "NoSchedule"
Label node k8scloude2:
[root@k8scloude1 pod]# kubectl label nodes k8scloude2 taint=T
node/k8scloude2 labeled
[root@k8scloude1 pod]# kubectl get node --show-labels
NAME         STATUS   ROLES                  AGE   VERSION   LABELS
k8scloude1   Ready    control-plane,master   8d    v1.21.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8scloude1,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.kubernetes.io/exclude-from-external-load-balancers=
k8scloude2   Ready    <none>                 8d    v1.21.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8scloude2,kubernetes.io/os=linux,taint=T
k8scloude3   Ready    <none>                 8d    v1.21.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8scloude3,kubernetes.io/os=linux
Add a taint to k8scloude2:
#Taint format: key=value:NoSchedule
[root@k8scloude1 pod]# kubectl taint node k8scloude2 wudian=true:NoSchedule
node/k8scloude2 tainted
[root@k8scloude1 pod]# kubectl describe nodes k8scloude2 | grep -i Taints
Taints:             wudian=true:NoSchedule
Create a Pod. The tolerations field tolerates the taint wudian=true:NoSchedule, and nodeSelector: taint: T places the Pod on the node labeled taint=T.
[root@k8scloude1 pod]# vim schedulepod4.yaml
[root@k8scloude1 pod]# cat schedulepod4.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod1
  name: pod1
  namespace: pod
spec:
  tolerations:
  - key: "wudian"
    operator: "Equal"
    value: "true"
    effect: "NoSchedule"
  nodeSelector:
    taint: T
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: pod1
    resources: {}
    ports:
    - name: http
      containerPort: 80
      protocol: TCP
      hostPort: 80
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
[root@k8scloude1 pod]# kubectl get pod -o wide
No resources found in pod namespace.
[root@k8scloude1 pod]# kubectl apply -f schedulepod4.yaml
pod/pod1 created
Check the Pod: it runs on k8scloude2 even though the node is tainted.
[root@k8scloude1 pod]# kubectl get pods -o wide
NAME   READY   STATUS    RESTARTS   AGE   IP               NODE         NOMINATED NODE   READINESS GATES
pod1   1/1     Running   0          8s    10.244.112.177   k8scloude2   <none>           <none>
[root@k8scloude1 pod]# kubectl delete pod pod1 --force
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "pod1" force deleted
[root@k8scloude1 pod]# kubectl get pods -o wide
No resources found in pod namespace.
The other way to write a toleration uses operator: "Exists" and omits the value field.
[root@k8scloude1 pod]# vim schedulepod4.yaml
[root@k8scloude1 pod]# cat schedulepod4.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod1
  name: pod1
  namespace: pod
spec:
  tolerations:
  - key: "wudian"
    operator: "Exists"
    effect: "NoSchedule"
  nodeSelector:
    taint: T
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: pod1
    resources: {}
    ports:
    - name: http
      containerPort: 80
      protocol: TCP
      hostPort: 80
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
[root@k8scloude1 pod]# kubectl apply -f schedulepod4.yaml
pod/pod1 created
Check the Pod: it runs on k8scloude2 even though the node is tainted.
[root@k8scloude1 pod]# kubectl get pods -o wide
NAME   READY   STATUS    RESTARTS   AGE   IP               NODE         NOMINATED NODE   READINESS GATES
pod1   1/1     Running   0          10s   10.244.112.178   k8scloude2   <none>           <none>
[root@k8scloude1 pod]# kubectl delete pod pod1 --force
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "pod1" force deleted
[root@k8scloude1 pod]# kubectl get pods -o wide
No resources found in pod namespace.
Add a second taint to node k8scloude2 (note that plain grep shows only the first taint; use grep -A1 or -A2 to see both):
[root@k8scloude1 pod]# kubectl describe nodes k8scloude2 | grep Taints
Taints: wudian=true:NoSchedule
[root@k8scloude1 pod]# kubectl taint node k8scloude2 zang=shide:NoSchedule
node/k8scloude2 tainted
[root@k8scloude1 pod]# kubectl describe nodes k8scloude2 | grep Taints
Taints: wudian=true:NoSchedule
[root@k8scloude1 pod]# kubectl describe nodes k8scloude2 | grep -A2 Taints
Taints: wudian=true:NoSchedule
zang=shide:NoSchedule
Unschedulable: false
[root@k8scloude1 pod]# kubectl describe nodes k8scloude2 | grep -A1 Taints
Taints: wudian=true:NoSchedule
zang=shide:NoSchedule
Create a Pod whose tolerations field tolerates both taints, wudian=true:NoSchedule and zang=shide:NoSchedule; nodeSelector: taint: T places the Pod on the node labeled taint=T.
[root@k8scloude1 pod]# vim schedulepod4.yaml
[root@k8scloude1 pod]# cat schedulepod4.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod1
  name: pod1
  namespace: pod
spec:
  tolerations:
  - key: "wudian"
    operator: "Equal"
    value: "true"
    effect: "NoSchedule"
  - key: "zang"
    operator: "Equal"
    value: "shide"
    effect: "NoSchedule"
  nodeSelector:
    taint: T
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: pod1
    resources: {}
    ports:
    - name: http
      containerPort: 80
      protocol: TCP
      hostPort: 80
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
[root@k8scloude1 pod]# kubectl apply -f schedulepod4.yaml
pod/pod1 created
Check the Pod: it runs on k8scloude2 even though the node carries two taints.
[root@k8scloude1 pod]# kubectl get pods -o wide
NAME   READY   STATUS    RESTARTS   AGE   IP               NODE         NOMINATED NODE   READINESS GATES
pod1   1/1     Running   0          6s    10.244.112.179   k8scloude2   <none>           <none>
[root@k8scloude1 pod]# kubectl delete pod pod1 --force
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "pod1" force deleted
Create a Pod whose tolerations field tolerates only the taint wudian=true:NoSchedule; nodeSelector: taint: T places the Pod on the node labeled taint=T.
[root@k8scloude1 pod]# vim schedulepod4.yaml
[root@k8scloude1 pod]# cat schedulepod4.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod1
  name: pod1
  namespace: pod
spec:
  tolerations:
  - key: "wudian"
    operator: "Equal"
    value: "true"
    effect: "NoSchedule"
  nodeSelector:
    taint: T
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: pod1
    resources: {}
    ports:
    - name: http
      containerPort: 80
      protocol: TCP
      hostPort: 80
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
[root@k8scloude1 pod]# kubectl apply -f schedulepod4.yaml
pod/pod1 created
Check the Pod: the node carries two taints but the YAML tolerates only one of them, so the Pod cannot be scheduled and stays Pending.
[root@k8scloude1 pod]# kubectl get pods -o wide
NAME   READY   STATUS    RESTARTS   AGE   IP       NODE     NOMINATED NODE   READINESS GATES
pod1   0/1     Pending   0          8s    <none>   <none>   <none>           <none>
[root@k8scloude1 pod]# kubectl delete pod pod1 --force
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "pod1" force deleted
[root@k8scloude1 pod]# kubectl get pods -o wide
No resources found in pod namespace.
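The Pending result follows from the aggregate rule: a Pod fits a node only when every NoSchedule taint on the node is matched by some toleration. A minimal sketch of that rule (pod_fits_node is an illustrative helper written for this article, not actual scheduler code):

```python
def pod_fits_node(tolerations: list, taints: list) -> bool:
    """A Pod fits only if every NoSchedule taint is matched by some toleration."""
    def matches(tol, taint):
        if tol.get("key") != taint["key"]:
            return False
        if tol.get("effect") and tol["effect"] != taint["effect"]:
            return False
        return tol.get("operator") == "Exists" or tol.get("value") == taint["value"]
    return all(any(matches(t, taint) for t in tolerations)
               for taint in taints if taint["effect"] == "NoSchedule")

# The two taints on k8scloude2 from the transcript above:
taints = [{"key": "wudian", "value": "true", "effect": "NoSchedule"},
          {"key": "zang", "value": "shide", "effect": "NoSchedule"}]
one_tol = [{"key": "wudian", "operator": "Equal", "value": "true", "effect": "NoSchedule"}]
both = one_tol + [{"key": "zang", "operator": "Equal", "value": "shide", "effect": "NoSchedule"}]

print(pod_fits_node(one_tol, taints))   # False: the zang taint is not tolerated, so Pending
print(pod_fits_node(both, taints))      # True: both taints are tolerated
```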
Remove the taints from k8scloude2:
[root@k8scloude1 pod]# kubectl describe nodes k8scloude2 | grep -A2 Taints
Taints: wudian=true:NoSchedule
zang=shide:NoSchedule
Unschedulable: false
#Remove the taints
[root@k8scloude1 pod]# kubectl taint node k8scloude2 zang-
node/k8scloude2 untainted
[root@k8scloude1 pod]# kubectl taint node k8scloude2 wudian-
node/k8scloude2 untainted
[root@k8scloude1 pod]# kubectl describe nodes k8scloude1 | grep -A2 Taints
Taints: node-role.kubernetes.io/master:NoSchedule
Unschedulable: false
Lease:
[root@k8scloude1 pod]# kubectl describe nodes k8scloude2 | grep -A2 Taints
Taints: <none>
Unschedulable: false
Lease:
[root@k8scloude1 pod]# kubectl describe nodes k8scloude3 | grep -A2 Taints
Taints: <none>
Unschedulable: false
Lease:
Tip: if you only have a single machine, you can remove the taint from the master node so that Pods can run on the master.
This concludes the detailed walkthrough of Pod taints and tolerations.