Environment
OS: CentOS Linux release 7.3.1611 (Core)
Kubernetes version: 1.5.2
Etcd version: 3.2.7
Docker version: 1.12.6
Flannel version: 0.7.1
Host information

Role     Services                                    Hostname              IP Address
Master   kube-apiserver, kubelet, kube-proxy, etcd   k8s-master.suzf.net   172.16.9.50
Node1    kubelet, kube-proxy, flannel, docker        k8s-node1.suzf.net    172.16.9.60
Node2    kubelet, kube-proxy, flannel, docker        k8s-node2.suzf.net    172.16.9.70
Preparation (run on all hosts)
Disable the firewall
systemctl stop firewalld
systemctl disable firewalld
Disable SELinux
sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
setenforce 0
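The sed command above only rewrites `SELINUX=enforcing`, so it is a no-op on a host that is already set to `permissive`. A slightly more robust variant, sketched here against a throwaway copy of the file so it is safe to try anywhere:

```shell
# Work on a temporary copy instead of the real /etc/selinux/config.
cfg=$(mktemp)
printf 'SELINUXTYPE=targeted\nSELINUX=permissive\n' > "$cfg"

# Anchor on the key: enforcing and permissive are both rewritten,
# while SELINUXTYPE= is left untouched.
sed -i 's/^SELINUX=.*/SELINUX=disabled/' "$cfg"

result=$(grep '^SELINUX=' "$cfg")
echo "$result"   # SELINUX=disabled
rm -f "$cfg"
```

On a real host, point the same sed at /etc/selinux/config.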
Master Installation and Configuration
Etcd installation and configuration
yum install etcd -y
Note: this guide does not build an etcd cluster; if you need one, see the official etcd clustering documentation.
etcd installed via yum reads its configuration from /etc/etcd/etcd.conf by default.
Edit the configuration file:
# grep -v ^# /etc/etcd/etcd.conf
ETCD_NAME=k8s
ETCD_DATA_DIR="/var/lib/etcd/k8s.etcd"
ETCD_LISTEN_PEER_URLS="http://172.16.9.50:2380"
ETCD_LISTEN_CLIENT_URLS="http://172.16.9.50:4001,http://localhost:4001"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://172.16.9.50:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://172.16.9.50:4001"
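The advertise URLs in etcd.conf must be URLs the daemon actually listens on, and a typo here is a common reason clients cannot connect. This sketch checks that consistency against a scratch copy of the values above (the temp file path is illustrative; on the Master you would read /etc/etcd/etcd.conf itself):

```shell
# Scratch copy of the settings from this guide.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
ETCD_NAME=k8s
ETCD_LISTEN_CLIENT_URLS="http://172.16.9.50:4001,http://localhost:4001"
ETCD_ADVERTISE_CLIENT_URLS="http://172.16.9.50:4001"
EOF

# Pull out both values and confirm the advertised URL is in the listen list.
adv=$(sed -n 's/^ETCD_ADVERTISE_CLIENT_URLS="\(.*\)"/\1/p' "$cfg")
listen=$(sed -n 's/^ETCD_LISTEN_CLIENT_URLS="\(.*\)"/\1/p' "$cfg")
case ",$listen," in
  *",$adv,"*) check=ok;  echo "client URLs consistent" ;;
  *)          check=bad; echo "WARNING: $adv not in listen list" ;;
esac
rm -f "$cfg"
```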
Start the etcd service
# systemctl start etcd.service
Check cluster health
# etcdctl cluster-health
member 9026e369ffe8e114 is healthy: got healthy result from http://172.16.9.50:4001
cluster is healthy
List the members
# etcdctl member list
9026e369ffe8e114: name=k8s peerURLs=http://172.16.9.50:2380 clientURLs=http://172.16.9.50:4001 isLeader=true
Kube Master installation and configuration
yum install kubernetes -y
Edit the global Kubernetes configuration file on the Master, /etc/kubernetes/config:
# grep -v '^#\|^$' /etc/kubernetes/config
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=2"
KUBE_ALLOW_PRIV="--allow-privileged=false"
KUBE_MASTER="--master=http://127.0.0.1:8080"
Edit the kube-apiserver configuration file on the Master, /etc/kubernetes/apiserver:
# grep -v '^#\|^$' /etc/kubernetes/apiserver
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
KUBE_API_PORT="--port=8080"
KUBELET_PORT="--kubelet-port=10250"
KUBE_ETCD_SERVERS="--etcd-servers=http://172.16.9.50:4001"
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=192.168.0.0/16"
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"
Start the Master services (etcd, kube-apiserver, kube-scheduler, kube-controller-manager) and enable them to start at boot:
for SRV in etcd kube-apiserver kube-scheduler kube-controller-manager; do
    sudo systemctl start ${SRV}
    sudo systemctl enable ${SRV}
    sudo systemctl status ${SRV}
done
On the Master, write the flannel network configuration into etcd, restricting the per-node subnets to the range 192.168.1.0 to 192.168.60.0 (each Node is assigned its own flannel subnet):
# etcdctl mk /suzf.net/network/config '{"Network":"192.168.0.0/16", "SubnetMin": "192.168.1.0", "SubnetMax": "192.168.60.0"}'
{"Network":"192.168.0.0/16", "SubnetMin": "192.168.1.0", "SubnetMax": "192.168.60.0"}
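With flannel's default SubnetLen of 24 (this guide does not override it), each Node leases one /24 out of 192.168.0.0/16, bounded inclusively by SubnetMin and SubnetMax, so the configuration above allows 60 Node subnets. Note also that the apiserver's --service-cluster-ip-range is the same 192.168.0.0/16; the cluster in this guide runs that way, but disjoint ranges for services and pods are generally safer. The subnet arithmetic:

```shell
# Third octet runs from 1 (SubnetMin 192.168.1.0) to 60 (SubnetMax 192.168.60.0),
# one /24 per Node, so the pool holds 60 subnets.
min=1; max=60
pool=$(( max - min + 1 ))
echo "$pool node subnets available"   # 60 node subnets available
```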
Node Installation and Configuration
yum install kubernetes-node docker flannel -y
Edit the flannel configuration on each Node, /etc/sysconfig/flanneld, pointing it at etcd on the Master; 172.16.9.50 is the Master's IP address.
Flannel network configuration
# grep -v '^#\|^$' /etc/sysconfig/flanneld
FLANNEL_ETCD_ENDPOINTS="http://172.16.9.50:4001"
FLANNEL_ETCD_PREFIX="/suzf.net/network"
Kube Node configuration
# grep -v '^#\|^$' /etc/kubernetes/config
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow-privileged=false"
KUBE_MASTER="--master=http://172.16.9.50:8080"
# grep -v '^#\|^$' /etc/kubernetes/kubelet
KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_PORT="--port=10250"
KUBELET_HOSTNAME=""
KUBELET_API_SERVER="--api-servers=http://172.16.9.50:8080"
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
KUBELET_ARGS=""
Start the Node services (flanneld, docker, kube-proxy, kubelet) and enable them to start at boot. Keep flanneld ahead of docker in the loop: docker must start after flanneld so the docker bridge comes up on the Node's flannel subnet.
for SRV in flanneld docker kube-proxy kubelet; do
    sudo systemctl start ${SRV}
    sudo systemctl enable ${SRV}
    sudo systemctl status ${SRV}
done
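Once flanneld is running it writes its lease to /run/flannel/subnet.env, which the packaged docker unit consumes so that docker0 lands on the Node's flannel subnet. A sketch that parses a sample of that file (the sample values are illustrative; on a real Node read /run/flannel/subnet.env itself):

```shell
# Illustrative copy of /run/flannel/subnet.env as flanneld writes it.
env_file=$(mktemp)
cat > "$env_file" <<'EOF'
FLANNEL_NETWORK=192.168.0.0/16
FLANNEL_SUBNET=192.168.53.1/24
FLANNEL_MTU=1472
FLANNEL_IPMASQ=false
EOF

# It is plain KEY=VALUE shell syntax, so it can simply be sourced.
. "$env_file"
echo "docker bridge subnet: $FLANNEL_SUBNET"
rm -f "$env_file"
```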
On the Master, check the Kubernetes cluster status
# kubectl get nodes -o wide
NAME             STATUS    AGE   EXTERNAL-IP
horse.suzf.net   Ready     6d    <none>
zebra            Ready     7m    <none>
Check flannel's subnet allocations
# etcdctl ls /suzf.net/network/subnets
/suzf.net/network/subnets/192.168.53.0-24
/suzf.net/network/subnets/192.168.42.0-24
On each Node, run ip a to inspect the flannel and docker bridge interfaces, and confirm they match the subnets recorded in etcd under /suzf.net/network/subnets.
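etcd stores each lease as a key like 192.168.53.0-24, while ip a reports CIDR notation, so the comparison is a one-character translation. A sketch using sample values taken from the outputs above (the bridge network shown is illustrative; on a real Node derive it from the docker0 address):

```shell
# Sample etcd key from `etcdctl ls /suzf.net/network/subnets` above.
key="192.168.53.0-24"
# Sample network of docker0 as derived from `ip a` output (illustrative).
bridge_net="192.168.53.0/24"

# etcd key format "A.B.C.D-LEN" -> CIDR "A.B.C.D/LEN"
cidr=$(echo "$key" | tr '-' '/')
if [ "$cidr" = "$bridge_net" ]; then
  echo "match: $cidr"
else
  echo "mismatch: etcd=$cidr bridge=$bridge_net"
fi
```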