I. Installation
Install the following services on the master machine:
Docker
etcd
kubelet
kube-proxy
kube-apiserver
kube-controller-manager
kube-scheduler
1.1 Change the hostname
# vim /etc/hostname
master01
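On systemd-based systems the same change can also be made in one step with hostnamectl:
# hostnamectl set-hostname master01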
1.2 Update the hosts file
[root@master01 ~]# cat >> /etc/hosts << EOF
192.168.11.129 master01
192.168.11.130 node01
EOF
1.3 Disable swap
# swapoff -a
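swapoff -a only disables swap until the next reboot. To make the change permanent, also comment out the swap entry in /etc/fstab, for example:
# sed -ri 's/.*swap.*/#&/' /etc/fstab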
1.4 Adjust kernel parameters
This guide uses flannel for the k8s network, which requires the kernel parameter bridge-nf-call-iptables=1; setting that parameter requires the br_netfilter module.
1) Load the br_netfilter module
Check whether the module is loaded:
[root@master01 ~]# lsmod |grep br_netfilter
If the module is not loaded, run the commands below; otherwise skip this step.
2) Load br_netfilter temporarily:
[root@master01 ~]# modprobe br_netfilter
Load br_netfilter permanently:
[root@master01 ~]# cat > /etc/rc.sysinit << EOF
#!/bin/bash
for file in /etc/sysconfig/modules/*.modules ; do
[ -x $file ] && $file
done
EOF
[root@master01 ~]# cat > /etc/sysconfig/modules/br_netfilter.modules << EOF
modprobe br_netfilter
EOF
[root@master01 ~]# chmod 755 /etc/sysconfig/modules/br_netfilter.modules
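On systemd-based distributions such as CentOS 8 (dnf is used below), the modules-load.d mechanism is a simpler way to load the module at boot:
# echo br_netfilter > /etc/modules-load.d/br_netfilter.conf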
3) Make the kernel parameters permanent
cat > /etc/sysctl.conf <<EOF
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
[root@master01 ~]# sysctl -p /etc/sysctl.conf
1.5 Configure the Kubernetes repo
Add the Kubernetes repo:
[root@master01 ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
1.6 Configure the Docker repo
[root@master01 ~]# wget https://download.docker.com/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
1.7 Install dependencies
[root@master01 ~]# dnf install -y yum-utils device-mapper-persistent-data lvm2 libcgroup iproute-tc
1.8 Install Docker CE
[root@master01 ~]# dnf list docker-ce --showduplicates | sort -r
[root@master01 ~]# dnf install containerd.io
[root@master01 ~]# dnf install -y docker-ce
[root@master01 ~]# systemctl enable docker
[root@master01 ~]# systemctl start docker
Note: the containerd.io package installed via dnf can be too old to satisfy docker-ce's dependency; a newer containerd.io can be installed as follows:
# dnf install https://mirrors.aliyun.com/docker-ce/linux/centos/7/x86_64/stable/Packages/containerd.io-1.2.13-3.2.el7.x86_64.rpm
1.9 Install kubectl, kubelet, and kubeadm
[root@master01 ~]# dnf install kubectl kubelet kubeadm
[root@master01 ~]# systemctl enable kubelet
[root@master01 ~]# systemctl start kubelet
1.10 Configure Docker registry mirrors (domestic) and the cgroup driver
Edit /etc/docker/daemon.json:
{
  "registry-mirrors": [
    "http://hub-mirror.c.163.com",
    "https://registry.docker-cn.com",
    "https://docker.mirrors.ustc.edu.cn/"
  ],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
Restart the docker service:
[root@master01 ~]# systemctl restart docker
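To confirm that Docker picked up the systemd cgroup driver, check docker info; the output should include "Cgroup Driver: systemd":
[root@master01 ~]# docker info | grep -i cgroup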
1.11 Install k8s
Method 1:
[root@master01 ~]# kubeadm init --kubernetes-version=1.18.2 \
--apiserver-advertise-address=192.168.11.129 \
--image-repository registry.cn-hangzhou.aliyuncs.com/google_containers \
--service-cidr=10.10.0.0/16 --pod-network-cidr=10.122.0.0/16
Method 2:
Configure kubeadm-config.yaml.
kubeadm-config.yaml consists of the following sections:
- InitConfiguration: initialization settings, such as the bootstrap token and the apiserver advertise address
- ClusterConfiguration: settings for the master components: apiserver, etcd, network, scheduler, controller-manager, and so on
- KubeletConfiguration: settings for the kubelet component
- KubeProxyConfiguration: settings for the kube-proxy component
Install on the master node (the master is 192.168.11.129). Generate a default kubeadm-config.yaml with:
# kubeadm config print init-defaults > kubeadm-config.yaml
or
# kubeadm config print init-defaults --component-configs KubeletConfiguration > kubeadm-init.yaml
Edit kubeadm-init.yaml:
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.11.129
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: master01
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
# imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.18.2
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.10.0.0/16
  podSubnet: 10.122.0.0/16
Install Kubernetes:
# kubeadm init --config kubeadm-init.yaml
Notes:
1. If you lose the join token, generate a new one with:
kubeadm token create --print-join-command
Retrieve the SHA256 discovery hash with:
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
2. By default, kubeadm init brings up a single-node etcd. To avoid a single point of failure, you can build a multi-node etcd cluster by hand; kubeadm init then connects to the external etcd cluster via a kubeadm-config.yaml passed with --config, for example:
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.18.2
apiServer:
  certSANs:
  - 192.168.11.129
controlPlaneEndpoint: 192.168.11.129:6443
networking:
  serviceSubnet: 10.10.0.0/16
  podSubnet: 10.122.0.0/16
etcd:
  external:
    endpoints:
    - https://192.168.11.129:2379
    caFile: /etc/etcd/ssl/etcd-ca.pem
    certFile: /etc/etcd/ssl/etcd.pem
    keyFile: /etc/etcd/ssl/etcd-key.pem
3. For a single-node (all-in-one) cluster, also run:
kubectl taint nodes --all node-role.kubernetes.io/master-
1.12 Configure environment variables
# mkdir -p $HOME/.kube
# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
# chown $(id -u):$(id -g) $HOME/.kube/config
kubeadm init also prints the join commands; the one with --control-plane is used later to add another control-plane node:
# kubeadm join 192.168.11.129:6443 --token kvwa0b.a22em0rx2r9e4th7 \
    --discovery-token-ca-cert-hash sha256:a9cbed8896b8dd38d05072d9c6c9e8844054c93120deb755c68fd559b216c28d \
    --control-plane
Verify that the cluster pods are coming up:
# kubectl get pod --all-namespaces
1.13 Open firewall ports
[root@master01 ~]# firewall-cmd --add-port=6443/tcp --permanent
[root@master01 ~]# firewall-cmd --add-port=10250/tcp --permanent
[root@master01 ~]# firewall-cmd --add-port=30000/tcp --permanent
[root@master01 ~]# firewall-cmd --add-port=2379/tcp --permanent
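Rules added with --permanent are only written to the on-disk configuration; reload firewalld for them to take effect in the running configuration:
[root@master01 ~]# firewall-cmd --reload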
II. Installing the network
1. The flannel network
Flannel, a project developed by CoreOS, is perhaps the most straightforward CNI plugin option. It is one of the most mature examples of a networking fabric for container orchestration systems, built to provide better inter-container and inter-host networking, and as the CNI concept took off, the Flannel CNI plugin was an early entry point.
Compared with other options, Flannel is relatively easy to install and configure. It ships as a single binary, flanneld, and many common Kubernetes cluster deployment tools and distributions can install Flannel by default. Flannel can store its state through the Kubernetes API in the cluster's existing etcd, so it does not need a dedicated data store.
Flannel sets up a layer-3 IPv4 overlay network: one large internal network spanning every node in the cluster. Within this overlay, each node is assigned a subnet from which it allocates IP addresses internally. As pods are scheduled, the Docker bridge interface on each node assigns an address to each new container. Pods on the same host communicate over the Docker bridge, while pods on different hosts have their traffic encapsulated in UDP packets by flanneld and routed to the appropriate destination.
Flannel offers several backend types for encapsulation and routing. The default and recommended backend is VXLAN, which performs better and requires less manual intervention.
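The backend type and pod network are chosen in the net-conf.json section of the ConfigMap inside kube-flannel.yml. A minimal sketch of that section, with Network edited to match this guide's --pod-network-cidr (the stock file ships with 10.244.0.0/16):
net-conf.json: |
  {
    "Network": "10.122.0.0/16",
    "Backend": {
      "Type": "vxlan"
    }
  }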
Installation:
# wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Change the image address to:
registry.cn-hangzhou.aliyuncs.com/google-containers/flannel:v0.9.0
# kubectl apply -f kube-flannel.yml
2. The calico network
Calico is another popular networking choice in the Kubernetes ecosystem. While Flannel is widely regarded as the simplest option, Calico is known for its performance and flexibility. Calico is more full-featured: beyond host-to-pod and pod-to-pod connectivity, it also covers network security and policy management. The Calico CNI plugin wraps Calico's functionality in the CNI framework.
On a freshly provisioned Kubernetes cluster that meets the system requirements, Calico can be deployed quickly by applying a single manifest. If you are interested in Calico's optional network policy features, you can enable them by applying additional manifests to the cluster.
Although the steps to deploy Calico look fairly simple, the network it creates is simple in some respects and sophisticated in others. Unlike Flannel, Calico does not use an overlay network. Instead, it configures a layer-3 network that routes packets between hosts with the BGP routing protocol, so packets moving between hosts do not need to be wrapped in an extra layer of encapsulation: BGP steers them natively, without packing traffic into an additional transport layer.
Beyond the performance benefit, this makes troubleshooting more conventional when network problems arise. While encapsulation technologies such as VXLAN also work well, the way they process packets is often harder to trace. With Calico, standard debugging tools see the same information they would in a simpler environment, making the behavior easier for more developers and administrators to understand.
In addition to connectivity, Calico is known for its advanced network features, and network policy is one of its most sought-after capabilities. Calico also integrates with the Istio service mesh to interpret and enforce policy for in-cluster workloads at both the service-mesh layer and the network-infrastructure layer. This lets users configure powerful rules describing how pods may send and receive traffic, improving security and control over the networking environment.
If network policy support matters for your environment, and you also care about performance and other features, Calico is an ideal choice.
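As an illustration of what network policy makes possible, here is a minimal sketch of a rule that only lets pods labeled role=frontend reach tomcat pods on port 8080 (the name and labels are hypothetical):
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-tomcat   # hypothetical name
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: tomcat                  # the pods this policy protects
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: frontend           # hypothetical client label
    ports:
    - protocol: TCP
      port: 8080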
(1) Download the calico package
# wget https://github.com/projectcalico/calico/releases/download/v3.14.1/release-v3.14.1.tgz
(2) Import the images
Load the images into the local Docker store on both the master and node machines:
# tar -xvf release-v3.14.1.tgz
# cd release-v3.14.1/images/
# docker load -i calico-cni.tar
# docker load -i calico-node.tar
# docker load -i calico-pod2daemon-flexvol.tar
# docker load -i calico-kube-controllers.tar
(3) Apply calico.yaml
# kubectl apply -f release-v3.14.1/k8s-manifests/calico.yaml
(4) Check the calico pod status
# kubectl -n kube-system get pods -o wide
Result:
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
calico-kube-controllers-6f6f6477c6-wx67p 1/1 Running 0 40s 10.122.218.1 master01 <none> <none>
calico-node-lgz8z 0/1 Running 0 40s 192.168.11.128 master01 <none> <none>
calico-node-rsfnt 0/1 Running 0 40s 192.168.11.129 node01 <none> <none>
calico-node is unhealthy: READY shows 0/1, meaning calico-node ran into a problem after starting. Look at the pod details:
# kubectl -n kube-system describe pod calico-node-lgz8z
The events show this warning:
Warning  Unhealthy  3s  kubelet, master01  Readiness probe failed: calico/node is not ready:
BIRD is not ready: BGP not established with 192.168.11.129
2020-06-15 10:04:55.636 [INFO][297] health.go 156: Number of node(s) with BGP peering established = 0
When binding a NIC, calico did not pick the interface used for cluster communication (ens160) but bound some other NIC instead. Adjust the calico plugin's interface-discovery mechanism by setting the value of IP_AUTODETECTION_METHOD. In the official yaml the IP autodetection method is not configured, so it defaults to first-found, which can register the IP of a broken network as the node IP and disrupt the node-to-node mesh.
Edit calico.yaml:
kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: calico-node
  namespace: kube-system
  labels:
    k8s-app: calico-node
......
            - name: CLUSTER_TYPE
              value: "k8s,bgp"
            - name: IP_AUTODETECTION_METHOD   # added
              value: "interface=ens.*"
            - name: IP
              value: "autodetect"
......
            - name: CALICO_IPV4POOL_CIDR      # uncommented
              value: "10.122.0.0/16"
Apply calico.yaml again so the change takes effect.
(5) Check the node status with calicoctl:
[root@master01 ~]# calicoctl node status
Calico process is running.

IPv4 BGP status
+----------------+-------------------+-------+----------+---------+
|  PEER ADDRESS  |     PEER TYPE     | STATE |  SINCE   |  INFO   |
+----------------+-------------------+-------+----------+---------+
| 192.168.11.129 | node-to-node mesh | start | 10:04:17 | Passive |
+----------------+-------------------+-------+----------+---------+

IPv6 BGP status
No IPv6 peers found.
[root@node01 calico]# calicoctl node status
Calico process is running.

IPv4 BGP status
+----------------+-------------------+-------+----------+--------------------------------+
|  PEER ADDRESS  |     PEER TYPE     | STATE |  SINCE   |              INFO              |
+----------------+-------------------+-------+----------+--------------------------------+
| 192.168.11.128 | node-to-node mesh | start | 10:04:25 | Active Socket: Host is         |
|                |                   |       |          | unreachable                    |
+----------------+-------------------+-------+----------+--------------------------------+

IPv6 BGP status
No IPv6 peers found.
The peers sit in Passive state and the connection is unreachable: bird cannot communicate. Checking the firewall shows that port 179 is not open.
Open the firewall port:
# firewall-cmd --add-port=179/tcp --permanent
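As with the earlier ports, this is a --permanent rule, so reload firewalld afterwards; port 179 must be open on every node running calico-node:
# firewall-cmd --reload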
Check the status again:
[root@master01 ~]# calicoctl node status
Calico process is running.

IPv4 BGP status
+----------------+-------------------+-------+----------+-------------+
|  PEER ADDRESS  |     PEER TYPE     | STATE |  SINCE   |    INFO     |
+----------------+-------------------+-------+----------+-------------+
| 192.168.11.129 | node-to-node mesh | up    | 10:13:09 | Established |
+----------------+-------------------+-------+----------+-------------+

IPv6 BGP status
No IPv6 peers found.
The network is now connected.
Check the pod status:
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
calico-kube-controllers-6f6f6477c6-wx67p 1/1 Running 0 68s 10.122.218.1 master01 <none> <none>
calico-node-lgz8z 1/1 Running 0 68s 192.168.11.128 master01 <none> <none>
calico-node-rsfnt 1/1 Running 0 68s 192.168.11.129 node01 <none> <none>
The calico-node pods are now healthy.
III. Setting up the Dashboard
1. Download the yaml
[root@master01 ~]# wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta8/aio/deploy/recommended.yaml
2. Configure the yaml
2.1 As before, point the images at the Aliyun mirror, and expose the Dashboard Service as a NodePort
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort        # expose as NodePort; the Dashboard is then reachable at https://NodeIp:NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30000   # the NodePort opened in the firewall earlier
  selector:
    k8s-app: kubernetes-dashboard
.....
          image: registry.cn-hangzhou.aliyuncs.com/google_containers/dashboard:v2.0.1
          imagePullPolicy: Always
.....
        - name: dashboard-metrics-scraper
          image: registry.cn-hangzhou.aliyuncs.com/google_containers/metrics-scraper:v1.0.4
2.2 Add an administrator account
[root@master01 ~]# cat >> recommended.yaml << EOF
---
# ------------------- dashboard-admin ------------------- #
apiVersion: v1
kind: ServiceAccount
metadata:
  name: dashboard-admin
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: dashboard-admin
subjects:
- kind: ServiceAccount
  name: dashboard-admin
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
EOF
3. Deploy and access
3.1 Deploy the Dashboard
[root@master01 ~]# kubectl apply -f recommended.yaml
3.2 View the token
[root@master01 ~]# kubectl describe secrets -n kubernetes-dashboard dashboard-admin
3.3 Access
https://192.168.11.129:30000
Log in with a token, retrieved with the command below:
[root@master01 ~]# kubectl describe secrets -n kubernetes-dashboard dashboard-admin | grep token | awk 'NR==3{print $2}'
3.4 View the pod and service
[root@master01 ~]# kubectl get svc -n kubernetes-dashboard
IV. Join node01 to the cluster
Install the following services on the node01 machine:
Docker
kubelet
kube-proxy
Join node01 to the cluster:
[root@node01 ~]# kubeadm join 192.168.11.129:6443 --token kvwa0b.a22em0rx2r9e4th7 \
    --discovery-token-ca-cert-hash sha256:a9cbed8896b8dd38d05072d9c6c9e8844054c93120deb755c68fd559b216c28d
View the node info:
# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master01 Ready master 68m v1.18.2
node01 Ready <none> 52m v1.18.2
V. Create the ingress reverse proxy and internal load-balancer plugin
1. Download the yaml
[root@master01 ~]# wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/kind/deploy.yaml -O ingress-nginx.yaml
2. Configure the yaml
Change the image field to the Aliyun mirror:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  ...
    spec:
      dnsPolicy: ClusterFirst
      containers:
        - name: controller
          image: registry.cn-hangzhou.aliyuncs.com/google_containers/nginx-ingress-controller:0.32.0
          imagePullPolicy: IfNotPresent
  ...
      nodeSelector:
        ingress-ready: 'true'   # pin to a labeled node
3. Install
# kubectl create -f ingress-nginx.yaml
Note: the ingress controller pod is only scheduled onto nodes whose labels match the nodeSelector "ingress-ready=true".
Label the node:
# kubectl label node node01 ingress-ready=true
node/node01 labeled
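The label can be confirmed with a selector query:
# kubectl get nodes -l ingress-ready=true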
4. View the resources
List all resources in the ingress-nginx namespace:
# kubectl get all -n ingress-nginx
NAME                                            READY   STATUS      RESTARTS   AGE
pod/ingress-nginx-admission-create-s6986        0/1     Completed   0          33m
pod/ingress-nginx-admission-patch-49s5k         0/1     Completed   0          33m
pod/ingress-nginx-controller-75f84dfcd7-mndlw   1/1     Running     0          34m

NAME                                         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                      AGE
service/ingress-nginx-controller             NodePort    10.10.68.185   <none>        80:32134/TCP,443:30846/TCP   34m
service/ingress-nginx-controller-admission   ClusterIP   10.10.158.25   <none>        443/TCP                      34m

NAME                                       READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/ingress-nginx-controller   1/1     1            1           34m

NAME                                                  DESIRED   CURRENT   READY   AGE
replicaset.apps/ingress-nginx-controller-75f84dfcd7   1         1         1       34m

NAME                                       COMPLETIONS   DURATION   AGE
job.batch/ingress-nginx-admission-create   1/1           18s        34m
job.batch/ingress-nginx-admission-patch    1/1           25s        34m
Check which nodes the pods run on:
# kubectl get pod -n ingress-nginx -o wide
NAME                                        READY   STATUS      RESTARTS   AGE   IP           NODE     NOMINATED NODE   READINESS GATES
ingress-nginx-admission-create-s6986        0/1     Completed   0          34m   10.122.1.2   node01   <none>           <none>
ingress-nginx-admission-patch-49s5k         0/1     Completed   0          34m   10.122.1.4   node01   <none>           <none>
ingress-nginx-controller-75f84dfcd7-mndlw   1/1     Running     0          35m   10.122.1.6   node01   <none>           <none>
Get the ports:
# kubectl get svc -n ingress-nginx
NAME                                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller             NodePort    10.10.68.185   <none>        80:32134/TCP,443:30846/TCP   38m
ingress-nginx-controller-admission   ClusterIP   10.10.158.25   <none>        443/TCP                      38m
View the service details:
# kubectl describe svc -n ingress-nginx ingress-nginx-controller
Name:                     ingress-nginx-controller
Namespace:                ingress-nginx
Labels:                   app.kubernetes.io/component=controller
                          app.kubernetes.io/instance=ingress-nginx
                          app.kubernetes.io/managed-by=Helm
                          app.kubernetes.io/name=ingress-nginx
                          app.kubernetes.io/version=0.32.0
                          helm.sh/chart=ingress-nginx-2.10.0
Annotations:              <none>
Selector:                 app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
Type:                     NodePort
IP:                       10.10.68.185
Port:                     http  80/TCP
TargetPort:               http/TCP
NodePort:                 http  32134/TCP
Endpoints:                10.122.1.6:80
Port:                     https  443/TCP
TargetPort:               https/TCP
NodePort:                 https  30846/TCP
Endpoints:                10.122.1.6:443
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
Test:
# curl http://192.168.11.130:32134
<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>nginx/1.19.0</center>
</body>
</html>
5. Deploy a tomcat service
5.1 Deploy a tomcat instance to test ingress forwarding:
apiVersion: v1
kind: Service
metadata:
  name: tomcat
  namespace: default
spec:
  selector:
    app: tomcat
    release: canary
  ports:
  - name: http
    targetPort: 8080
    port: 8080
  - name: ajp
    targetPort: 8009
    port: 8009
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tomcat-deploy
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tomcat
      release: canary
  template:
    metadata:
      labels:
        app: tomcat
        release: canary
    spec:
      containers:
      - name: tomcat
        image: tomcat
        ports:
        - name: http
          containerPort: 8080
5.2 Define the ingress rule:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-tomcat
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: test.com
    http:
      paths:
      - path: /tomcat
        backend:
          serviceName: tomcat
          servicePort: 8080
Note:
Creating the tomcat ingress can produce the following error:
Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io":
Post https://ingress-nginx-controller-admission.ingress-nginx.svc:443/extensions/v1beta1/ingresses?timeout=30s
This is caused by the ValidatingWebhookConfiguration shipped with ingress-nginx; delete that section of the manifest:
# Source: ingress-nginx/templates/admission-webhooks/validating-webhook.yaml
# before changing this value, check the required kubernetes version
# https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#prerequisites
apiVersion: admissionregistration.k8s.io/v1beta1
kind: ValidatingWebhookConfiguration
metadata:
  labels:
    helm.sh/chart: ingress-nginx-2.10.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.33.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
  name: ingress-nginx-admission
  namespace: ingress-nginx
webhooks:
  - name: validate.nginx.ingress.kubernetes.io
    rules:
      - apiGroups:
          - extensions
          - networking.k8s.io
        apiVersions:
          - v1
          - v1beta1
        operations:
          - CREATE
          - UPDATE
        resources:
          - ingresses
    failurePolicy: Fail
    sideEffects: None
    admissionReviewVersions:
      - v1
      - v1beta1
    clientConfig:
      service:
        namespace: ingress-nginx
        name: ingress-nginx-controller-admission
        path: /extensions/v1beta1/ingresses
# kubectl delete -f validatingwebhook.yaml
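If the webhook yaml has not been split out into its own file, the same object can be deleted by name (ingress-nginx-admission, per the manifest above):
# kubectl delete validatingwebhookconfiguration ingress-nginx-admission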
5.3 Test
Add the test.com domain to the hosts file:
# echo "192.168.11.130 test.com" >> /etc/hosts
Access tomcat with curl:
# curl http://test.com:32134/tomcat
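If you would rather not edit /etc/hosts, the same request can be made with an explicit Host header:
# curl -H "Host: test.com" http://192.168.11.130:32134/tomcat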
5.4 View the nginx configuration inside the ingress-nginx controller container:
## start server test.k8s.local
server {
    server_name test.k8s.local ;
    listen 80 ;
    listen 443 ssl http2 ;
    set $proxy_upstream_name "-";
    ssl_certificate_by_lua_block {
        certificate.call()
    }
    location /tomcat {
        set $namespace      "default";
        set $ingress_name   "ingress-tomcat";
        set $service_name   "tomcat";
        set $service_port   "8080";
        set $location_path  "/tomcat";
VI. Add a control-plane node
1. Distribute the certificates
Distribute the certificates from master01: run the cert-main-master.sh script on master01 to copy them to master02.
[root@master01 ~]# cat cert-main-master.sh
USER=root # customizable
CONTROL_PLANE_IPS="192.168.11.130"
for host in ${CONTROL_PLANE_IPS}; do
    scp /etc/kubernetes/pki/ca.crt "${USER}"@$host:
    scp /etc/kubernetes/pki/ca.key "${USER}"@$host:
    scp /etc/kubernetes/pki/sa.key "${USER}"@$host:
    scp /etc/kubernetes/pki/sa.pub "${USER}"@$host:
    scp /etc/kubernetes/pki/front-proxy-ca.crt "${USER}"@$host:
    scp /etc/kubernetes/pki/front-proxy-ca.key "${USER}"@$host:
    scp /etc/kubernetes/pki/etcd/ca.crt "${USER}"@$host:etcd-ca.crt
    # Quote this line if you are using external etcd
    scp /etc/kubernetes/pki/etcd/ca.key "${USER}"@$host:etcd-ca.key
done
2. Join master02 to the cluster
[root@master02 ~]# kubeadm join 192.168.11.129:6443 --token kvwa0b.a22em0rx2r9e4th7 \
    --discovery-token-ca-cert-hash sha256:a9cbed8896b8dd38d05072d9c6c9e8844054c93120deb755c68fd559b216c28d \
    --control-plane
3. Load the environment variables
Load the environment variables on master02:
[root@master02 ~]# scp master01:/etc/kubernetes/admin.conf /etc/kubernetes/
[root@master02 ~]# echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
[root@master02 ~]# source .bash_profile
4. View the cluster nodes
[root@master01 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master01 Ready master 44m v1.18.2
master02 Ready master 33m v1.18.2
node01 Ready <none> 52m v1.18.2
Notes:
1. Pulling images hosted outside China
a. Search
curl -s https://zhangguanzhang.github.io/bash/pull.sh | bash -s search gcr.io/google_containers/kube-apiserver
b. Pull
curl -s https://zhangguanzhang.github.io/bash/pull.sh | bash -s -- gcr.io/google_containers/kube-apiserver:v1.18.6
pull.sh:
#!/bin/bash
[ -z "$set_e" ] && set -e
[ -z "$1" ] && { echo '$1 is not set'; exit 2; }

# imgFullName
sync_pull(){
    local targetName pullName
    targetName=$1
    pullName=${1//k8s.gcr.io/gcr.io\/google_containers}
    pullName=${pullName//google-containers/google_containers}
    # if [ $( tr -dc '/' <<< $pullName | wc -c) -gt 2 ];then # more than two slashes means an extra-long gcr image name
    #     pullName=$(echo $pullName | sed -r 's#io#azk8s.cn#')
    # else
    pullName=zhangguanzhang/${pullName//\//.}
    # fi
    docker pull $pullName
    docker tag $pullName $targetName
    docker rmi $pullName
}

if [ "$1" == search ];then
    shift
    which jq &> /dev/null || { echo 'search needs jq, please install the jq'; exit 2; }
    img=${1%/}
    [[ $img == *:* ]] && img_name=${img/://} || img_name=$img
    if [[ "$img" =~ ^gcr.io|^k8s.gcr.io ]];then
        if [[ "$img" =~ ^k8s.gcr.io ]];then
            img_name=${img_name/k8s.gcr.io\//gcr.io/google_containers/}
        elif [[ "$img" == *google-containers* ]]; then
            img_name=${img_name/google-containers/google_containers}
        fi
        repository=gcr.io
    elif [[ "$img" =~ ^quay.io ]];then
        repository=quay.io
    else
        echo 'not sync the namespaces!'; exit 0
    fi
    # the query goes through the GitHub API and may be slow behind the GFW; make sure github.com is reachable
    curl -sX GET https://api.github.com/repos/zhangguanzhang/${repository}/contents/$img_name?ref=develop |
        jq -r 'length as $len | if $len ==2 then .message elif $len ==12 then .name else .[].name end'
else
    img=$1
    if [[ "$img" =~ ^gcr.io|^quay.io|^k8s.gcr.io|^docker.elastic.co ]];then
        sync_pull $1
    else
        echo 'not sync the namespaces!'; exit 0
    fi
fi
c. Reference: https://github.com/anjia0532/gcr.io_mirror
2. DNS failures inside Docker containers
DNS resolution kept failing inside docker containers: the DNS server could be pinged, but domain names could not be resolved, even though DNS was configured in the container and the host network was fine. The symptoms:
# docker run --name 'busybox' --rm -it busybox ping www.baidu.com
ping: bad address 'www.baidu.com'

# docker run --name 'busybox' --rm -it busybox cat /etc/resolv.conf
search localdomain
nameserver 114.114.114.114

# docker run --name 'busybox' --rm -it busybox ping 114.114.114.114
PING 114.114.114.114 (114.114.114.114): 56 data bytes
64 bytes from 114.114.114.114: seq=0 ttl=127 time=33.502 ms
64 bytes from 114.114.114.114: seq=1 ttl=127 time=33.926 ms
The system had firewalld enabled, which kept container DNS from working. Adding the docker0 interface to the trusted zone brought DNS back:
# firewall-cmd --permanent --zone=trusted --change-interface=docker0
# firewall-cmd --reload
# systemctl restart docker
Start a busybox container to test the network:
# docker run --name 'busybox' --rm -it busybox ping www.baidu.com
PING www.baidu.com (182.61.200.7): 56 data bytes
64 bytes from 182.61.200.7: seq=0 ttl=127 time=26.818 ms
64 bytes from 182.61.200.7: seq=1 ttl=127 time=59.485 ms
DNS is working again.