Scaling Out the Kubernetes Master
1. Introduction
This is not a high-availability setup; true high availability requires keepalived to provide a virtual IP in front of the API servers (a minimal sketch follows).
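For reference, a minimal keepalived sketch for floating a virtual IP across two masters might look like the following. The interface name (`enp0s8`), the VIP (`192.168.56.200`), and the password are assumptions for this lab network, not values taken from the cluster described here.

```shell
# Hypothetical HA sketch only: run on each master, adjusting state/priority per node.
yum -y install keepalived

cat <<'EOF' > /etc/keepalived/keepalived.conf
vrrp_instance VI_1 {
    state MASTER             # use BACKUP with a lower priority on the second master
    interface enp0s8         # assumed NIC carrying the 192.168.56.0/24 network
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass k8s-vip
    }
    virtual_ipaddress {
        192.168.56.200       # assumed VIP; clients would target this instead of .105
    }
}
EOF

systemctl enable keepalived && systemctl start keepalived
```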
Our current cluster information:
Node | Type | IP |
---|---|---|
k8s-master01 | master | 192.168.56.105 |
k8s-slave02 | node | 192.168.56.106 |
k8s-slave03 | node | 192.168.56.107 |
There is only one master node, and k8s-master01 is running low on memory, which is what raises the question of scaling out the master (a quick way to check the current state of the cluster is sketched below).
Since the cluster was originally deployed with kubeadm, we will use kubeadm to add the new master as well.
First, prepare a new virtual machine, name it k8s-master02, and complete the prerequisite setup.
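Before touching anything, it can be worth confirming the current node list and the memory pressure on k8s-master01. A minimal check, assuming kubectl is already configured on k8s-master01 (`kubectl top nodes` additionally requires metrics-server, which may not be installed):

```shell
# On k8s-master01: list the existing nodes and their roles
kubectl get nodes -o wide

# Memory usage on the host itself
free -m

# Optional: per-node resource usage (only works if metrics-server is deployed)
kubectl top nodes
```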
2. Implementation
Prerequisites
Configure the new virtual machine; the known details are as follows:
Node | Type | IP |
---|---|---|
k8s-master02 | master | 192.168.56.108 |
```shell
# Install basic dependency packages
yum -y install wget vim net-tools ntpdate bash-completion
# Set the hostname of the new machine
hostnamectl set-hostname k8s-master02
# Add all cluster nodes to the hosts file
vim /etc/hosts
192.168.56.105 k8s-master01
192.168.56.106 k8s-slave02
192.168.56.107 k8s-slave03
192.168.56.108 k8s-master02
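# Optional sanity check (not in the original steps): confirm the new hostname
# took effect and that the other nodes resolve by name.
hostnamectl status
ping -c 1 k8s-master01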
# Sync the system clock and set the timezone to Asia/Shanghai
ntpdate time1.aliyun.com
rm -rf /etc/localtime
ln -s /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
date -R || date
# Disable the firewall and SELinux
systemctl stop firewalld
systemctl disable firewalld
sed -i 's/enforcing/disabled/' /etc/selinux/config
setenforce 0
# Disable swap
swapoff -a
sed -i '/swap/s/^/#/' /etc/fstab
free -m
# Bridge traffic filtering and IP forwarding (append to /etc/sysctl.conf)
vim /etc/sysctl.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-arptables = 1
net.ipv4.ip_forward=1
net.ipv4.ip_forward_use_pmtu = 0
# Apply the settings and verify
sysctl --system
sysctl -a|grep "ip_forward"
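# Additional note: the bridge-nf-call-* sysctls only exist once the br_netfilter
# kernel module is loaded. If `sysctl --system` reports missing keys, loading the
# module (and persisting it across reboots) should resolve it.
modprobe br_netfilter
cat <<EOF > /etc/modules-load.d/k8s.conf
br_netfilter
EOF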
# Install Docker
yum install -y yum-utils
yum-config-manager \
--add-repo \
https://download.docker.com/linux/centos/docker-ce.repo
yum -y install docker-ce docker-ce-cli containerd.io
systemctl enable docker && systemctl start docker
# Configure a Docker registry mirror and the systemd cgroup driver
mkdir -p /etc/docker
tee /etc/docker/daemon.json <<-'EOF'
{
"exec-opts": ["native.cgroupdriver=systemd"],
"registry-mirrors": ["https://fl791z1h.mirror.aliyuncs.com"]
}
EOF
systemctl daemon-reload && systemctl restart docker
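# Quick check: kubeadm expects the kubelet and the container runtime to use the
# same cgroup driver; with the daemon.json above this should print "Cgroup Driver: systemd".
docker info | grep -i "cgroup driver"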
# Install the Kubernetes components
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum -y install --nogpgcheck kubelet-1.23.8 kubeadm-1.23.8 kubectl-1.23.8
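# Optional check: the versions on the new node should match the existing
# control plane (1.23.8 in this cluster).
kubeadm version -o short
kubelet --version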
# Enable and start the kubelet
systemctl enable kubelet && systemctl start kubelet
```
Joining the new master to the cluster
```shell
# On k8s-master01: generate a new token and print the join command
kubeadm token create --print-join-command
# This prints a join command with a fresh token; we will call it tag1
#-----------------------------------------
kubeadm join 192.168.56.105:6443 --token elcp73.3z2us47hegdm51k6 --discovery-token-ca-cert-hash sha256:de2d35d81f3740e93aca8a461713ca4ab1fcb9e7e881dc0f8836dd06d8a40229
#-----------------------------------------
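# Note: tokens created with `kubeadm token create` expire after 24 hours by
# default; existing tokens and their TTLs can be listed with:
kubeadm token list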
# On k8s-master01: re-upload the control-plane certificates to get a certificate key
kubeadm init phase upload-certs --upload-certs
#-----------------------------------------
I0707 15:37:26.377015 4186 version.go:255] remote version is much newer: v1.24.2; falling back to: stable-1.23
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
# This key is the part we need; we will call it tag2
1c1bbeb3837ba082f088056d2357c9e30a4a57cd8084ebaf007e945cd8288eba
#-----------------------------------------
# Combining tag1 with `--control-plane --certificate-key` and tag2 gives the join command.
# `--control-plane` is what makes the node join as a control-plane member; without it
# the node joins as a plain worker (this is what the preflight warning below refers to).
kubeadm join 192.168.56.105:6443 --token elcp73.3z2us47hegdm51k6 --discovery-token-ca-cert-hash sha256:de2d35d81f3740e93aca8a461713ca4ab1fcb9e7e881dc0f8836dd06d8a40229 --control-plane --certificate-key 1c1bbeb3837ba082f088056d2357c9e30a4a57cd8084ebaf007e945cd8288eba
# Run the command above on k8s-master02. (The sample output below was captured from a run
# without `--control-plane`, hence the warning on its first line.) A successful join prints:
#-----------------------------------
W0707 15:47:27.534766 13860 join.go:392] [preflight] WARNING: --control-plane is also required when passing control-plane related flags such as [certificate-key, apiserver-advertise-address, apiserver-bind-port]
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
#-----------------------------------
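# The join can also be confirmed from k8s-master01 (output omitted here):
kubectl get nodes -o wide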
# Not quite done: to use kubectl on the new node, it needs a kubeconfig.
# On k8s-master01: copy admin.conf over to the new master
scp /etc/kubernetes/admin.conf root@192.168.56.108:/etc/kubernetes/
# On k8s-master02: point KUBECONFIG at the copied file
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
source ~/.bash_profile
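# Optional: bash-completion was installed earlier, so kubectl completion can be
# enabled too (assumes an interactive bash shell).
echo 'source <(kubectl completion bash)' >> ~/.bashrc
source ~/.bashrc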
# kubectl commands can now be run successfully on k8s-master02.
```
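As a final sanity check, not part of the original walkthrough, the new node and the kube-system pods running on it can be inspected once kubectl works on k8s-master02; the grep simply filters the pod list by the new node's name.

```shell
# All nodes should be listed, with k8s-master02 among them
kubectl get nodes -o wide

# kube-system pods scheduled on the new master (apiserver, etcd, controller-manager, scheduler, kube-proxy, CNI)
kubectl get pods -n kube-system -o wide | grep k8s-master02
```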