Installing Kubernetes on CentOS 7.2 with yum

2024-04-18 00:32

This article walks through installing Kubernetes on CentOS 7.2 with yum; hopefully it is a useful reference for anyone setting up a cluster this way.

On September 1, 2015, CentOS added Kubernetes to its official repositories, so installing Kubernetes is now much more convenient.

The master runs four components: kube-apiserver, kube-scheduler, kube-controller-manager, and etcd.
Each node runs three components: kubelet, kube-proxy, and flannel.

  1. kube-apiserver: runs on the master and accepts user requests.
  2. kube-scheduler: runs on the master and handles resource scheduling, i.e. deciding which node each pod lands on.
  3. kube-controller-manager: runs on the master and hosts controllers such as the ReplicationManager, EndpointsController, NamespaceController, and NodeController.
  4. etcd: a distributed key-value store that holds the shared resource and object state of the whole cluster.
  5. kubelet: runs on each node and maintains the pods scheduled onto that host.
  6. kube-proxy: runs on each node and acts as a service proxy.

1. Preparation

Run the steps below on every server.
master: 192.168.52.130
node: 192.168.52.132

1. Disable the firewall

Disable firewalld on each machine to avoid conflicts with docker's own iptables rules:

#systemctl stop firewalld
#systemctl disable firewalld
#iptables -P FORWARD ACCEPT
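One caveat: the `iptables -P FORWARD ACCEPT` policy is lost on reboot. A minimal way to re-apply it at boot, assuming systemd, is a oneshot unit; this is a sketch of our own (the unit name `iptables-forward` is not a standard unit), not something the original setup requires:

```shell
# Create a oneshot unit that resets the FORWARD policy after docker starts.
# (Hypothetical unit name; adjust the iptables path to your distribution.)
cat > /etc/systemd/system/iptables-forward.service <<'EOF'
[Unit]
Description=Set iptables FORWARD policy to ACCEPT
After=docker.service

[Service]
Type=oneshot
ExecStart=/usr/sbin/iptables -P FORWARD ACCEPT

[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable iptables-forward.service
```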

2. Install NTP

To keep the servers' clocks in sync, install NTP on all of them:

#yum -y install ntp
#systemctl start ntpd
#systemctl enable ntpd

3. Disable SELinux

#vi /etc/selinux/config

#SELINUX=enforcing
SELINUX=disabled
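Editing /etc/selinux/config only takes effect after a reboot. To switch off enforcement for the current session as well, and to script the edit instead of using vi (the sed below is demonstrated against a throwaway copy so it can be tried safely):

```shell
# turn enforcement off right now (no-op if SELinux is already disabled)
setenforce 0 2>/dev/null || true
getenforce 2>/dev/null || true

# scripted version of the vi edit, shown on a scratch copy of the config
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > /tmp/selinux.conf.test
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /tmp/selinux.conf.test
grep '^SELINUX=' /tmp/selinux.conf.test   # -> SELINUX=disabled
```

Once satisfied, point the sed at /etc/selinux/config itself.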

2. Deploy the master

1. Install etcd and kubernetes-master (dependencies are pulled in automatically):

[root@localhost etc]#yum -y install etcd kubernetes-master

2. Edit etcd.conf

[root@localhost etc]# vi /etc/etcd/etcd.conf
ETCD_NAME=node1
# data directory
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
#ETCD_WAL_DIR=""
#ETCD_SNAPSHOT_COUNT="10000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
# address this member listens on for peer traffic from other etcd instances
ETCD_LISTEN_PEER_URLS="http://0.0.0.0:2380"
# address this member listens on for client traffic
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379,http://0.0.0.0:4001"
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
#ETCD_CORS=""
#[cluster]
# peer URL advertised to the other etcd instances
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.52.130:2380"
#if you use different ETCD_NAME (e.g. test), set ETCD_INITIAL_CLUSTER value for this name, i.e. "test=http://..."
# initial cluster membership
ETCD_INITIAL_CLUSTER="node1=http://192.168.52.130:2380,node2=http://192.168.52.132:2380"
# initial cluster state; "new" means a brand-new cluster
ETCD_INITIAL_CLUSTER_STATE="new"
# initial cluster token
ETCD_INITIAL_CLUSTER_TOKEN="mritd-etcd-cluster"
# client URLs advertised to clients
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.52.130:2379,http://192.168.52.130:4001"

3. Edit the master's Kubernetes config files

[root@localhost kubernetes]# vi /etc/kubernetes/apiserver
###
#kubernetes system config
#
#The following values are used to configure the kube-apiserver
#
#The address on the local server to listen to.
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
##The port on the local server to listen on.
#KUBE_API_PORT="--port=8080"
#Port minions listen on
KUBELET_PORT="--kubelet-port=10250"
#Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://127.0.0.1:2379"
#Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16 --service-node-port-range=1-65535"
#default admission control policies
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,ResourceQuota"
#ServiceAccount is omitted from the admission list above; this avoids the "No resources found." error from kubectl get pods
#Add your own!
KUBE_API_ARGS=""
[root@localhost /]# vi /etc/kubernetes/controller-manager
###
#The following values are used to configure the kubernetes controller-manager
#defaults from config and apiserver should be adequate
#Add your own!
#KUBE_CONTROLLER_MANAGER_ARGS=""
KUBE_CONTROLLER_MANAGER_ARGS="--node-monitor-grace-period=10s --pod-eviction-timeout=10s"
[root@localhost /]# vi /etc/kubernetes/config
###
#kubernetes system config
#
#The following values are used to configure various aspects of all
#kubernetes services, including
#
#kube-apiserver.service
#kube-controller-manager.service
#kube-scheduler.service
#kubelet.service
#kube-proxy.service

#logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"
#journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"
#Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"
#How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://192.168.52.130:8080"

If port 8080 is already in use, or you simply prefer another port, change it here (uncomment KUBE_API_PORT in /etc/kubernetes/apiserver and keep KUBE_MASTER on every machine in sync).

4. Start the services

Enable etcd, kube-apiserver, kube-scheduler, and kube-controller-manager to start on boot:

[root@localhost /]# systemctl enable etcd kube-apiserver kube-scheduler kube-controller-manager

Then start them:

[root@localhost /]# systemctl start etcd kube-apiserver kube-scheduler kube-controller-manager
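A few quick health checks (a sketch; the commands all ship with the packages installed above) confirm the master is up before going further:

```shell
# all four services should report "active"
for svc in etcd kube-apiserver kube-scheduler kube-controller-manager; do
    echo -n "$svc: "; systemctl is-active "$svc"
done

etcdctl cluster-health                         # each member should be "healthy"
curl -s http://127.0.0.1:8080/healthz; echo    # the apiserver answers "ok"
kubectl get componentstatuses                  # scheduler, controller-manager, etcd all Healthy
```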

5. Configure the overlay network in etcd

Define the network configuration in etcd; the flannel service on each node will pull this config:

[root@localhost /]# etcdctl mk /coreos.com/network/config '{"Network":"172.17.0.0/16"}'
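It is worth reading the key back to confirm what flannel will see; note that this key path must match the FLANNEL_ETCD_KEY configured on the nodes later:

```shell
# read the network config back from etcd
etcdctl get /coreos.com/network/config
# -> {"Network":"172.17.0.0/16"}
```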

3. Deploy the minions (nodes)

1. Install kubernetes-node and flannel (docker is installed automatically as a dependency):

[root@localhost ~]# yum -y install kubernetes-node flannel

2. Edit the node's Kubernetes config

[root@localhost ~]# vi /etc/kubernetes/config
#logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"
#journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"
#Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"
#How the controller-manager, scheduler, and proxy find the apiserver
#KUBE_MASTER="--master=http://127.0.0.1:8080"
KUBE_MASTER="--master=http://192.168.52.130:8080"

In the kubelet config, set the hostname override to the node's own IP or hostname:

[root@localhost ~]# vi /etc/kubernetes/kubelet

###
#kubernetes kubelet (minion) config

#The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=127.0.0.1"
#The port for the info server to serve on
#KUBELET_PORT="--port=10250"
#You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=192.168.52.132"
#location of the api-server
KUBELET_API_SERVER="--api-servers=http://192.168.52.130:8080"
#pod infrastructure container
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
#Add your own!
#KUBELET_ARGS=""
KUBELET_ARGS="--pod-infra-container-image=kubernetes/pause"

3. Edit the flannel config

Point flannel at the etcd service by editing /etc/sysconfig/flanneld:

[root@localhost ~]# vi /etc/sysconfig/flanneld
#etcd url location.  Point this to the server where etcd runs
#FLANNEL_ETCD="http://127.0.0.1:2379"
FLANNEL_ETCD="http://192.168.52.130:2379"
#etcd config key.  This is the configuration key that flannel queries
#For address range assignment
#FLANNEL_ETCD_KEY="/atomic.io/network"
FLANNEL_ETCD_KEY="/coreos.com/network"
#Any additional options that you want to pass
FLANNEL_OPTIONS="-iface=ens33"

Here ens33 is the NIC name (ifconfig shows it; on CentOS 7, if you have not renamed the NIC, it may look like enoXXXXXXXX).
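If you would rather not hard-code the interface name, a small helper of our own (not part of flannel) can read it from the default route; verify the result against ifconfig before putting it into FLANNEL_OPTIONS:

```shell
# print the device name that follows "dev" in an `ip route get` line
iface_from_route() {
    awk '{ for (i = 1; i <= NF; i++) if ($i == "dev") { print $(i + 1); exit } }'
}

# on a live host, e.g. prints ens33 (or eno..., depending on your NIC naming)
ip route get 8.8.8.8 2>/dev/null | iface_from_route
```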

4. Start the node services

[root@localhost ~]# systemctl restart flanneld docker
[root@localhost ~]# systemctl start kubelet kube-proxy
[root@localhost ~]# systemctl enable flanneld kubelet kube-proxy

Running ifconfig shows that each minion (node) now has both a docker0 and a flannel0 interface; their subnets differ from minion to minion.
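The node-side services and the two interfaces can be verified in the same way (a quick sketch):

```shell
# all four node services should report "active"
for svc in flanneld docker kubelet kube-proxy; do
    echo -n "$svc: "; systemctl is-active "$svc"
done

# docker0 should fall inside the subnet that flannel leased to this node
for dev in flannel0 docker0; do
    ip -4 -o addr show "$dev"
done
```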

4. Verification

1. Create a first pod

Create an Nginx deployment on the master:

[root@localhost ~]#  kubectl create deployment nginx --image=nginx
[root@localhost ~]#  kubectl describe deployment nginx

Expose it as a NodePort service:

[root@localhost ~]# kubectl create service nodeport nginx --tcp=80:80
[root@localhost ~]# kubectl describe service nginx
Name:			nginx
Namespace:		default
Labels:			app=nginx
Selector:		app=nginx
Type:			NodePort
IP:			10.254.4.244
Port:			80-80	80/TCP
NodePort:		80-80	30862/TCP
Endpoints:		172.17.48.2:80
Session Affinity:	None
No events.

2. On the node, check that the nginx container's IP matches the service Endpoints

[root@localhost ~]# docker inspect 423a3b8b26b2
[output trimmed -- most of the docker inspect JSON is omitted; the relevant
part is the container's network settings, whose address matches the
Endpoints shown above]
    "Gateway": "172.17.48.1",
    "IPAddress": "172.17.48.2",
    "IPPrefixLen": 24,
    "MacAddress": "02:42:ac:11:30:02"

3. On the master:

[root@localhost ~]# kubectl get nodes
NAME              STATUS    AGE
192.168.52.132   Ready     20m
[root@localhost ~]#kubectl get pods
NAME                                           READY     STATUS        RESTARTS   AGE
nginx-3121059884-4k8vd   1/1         Running       12         21h
[root@localhost /]# kubectl describe pods nginx-3121059884-4k8vd
Name:		nginx-3121059884-4k8vd
Namespace:	default
Node:		192.168.52.132/192.168.52.132
Start Time:	Mon, 25 Feb 2019 12:56:48 +0800
Labels:		app=nginx
		pod-template-hash=3121059884
Status:		Running
IP:		172.17.48.2
Controllers:	ReplicaSet/nginx-3121059884
Containers:
nginx:
    Container ID:	docker://b1f59f8025255f03c5f7f1a9c5c7847fc9e178d5d4bf5c51b6855db328894a70
    Image:		nginx
    Image ID:		docker-pullable://docker.io/nginx@sha256:dd2d0ac3fff2f007d99e033b64854be0941e19a2ad51f174d9240dda20d9f534
    Port:
    State:		Running
      Started:		Tue, 26 Feb 2019 09:02:30 +0800
    Last State:		Terminated
      Reason:		Completed
      Exit Code:	0
      Started:		Mon, 25 Feb 2019 17:03:29 +0800
      Finished:		Tue, 26 Feb 2019 09:02:08 +0800
    Ready:		True
    Restart Count:	12
    Volume Mounts:	<none>
    Environment Variables:	<none>
Conditions:
Type		Status
Initialized 	True 
Ready 	True 
PodScheduled 	True 
No volumes.
QoS Class:	BestEffort
Tolerations:	<none>
No events.
[root@localhost ~]# kubectl get svc
NAME         CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
kubernetes   10.254.0.1     <none>        443/TCP        4d
nginx        10.254.4.244   <nodes>       80:30862/TCP   4d
[root@localhost ~]#kubectl describe svc nginx
Name:			nginx
Namespace:		default
Labels:			app=nginx
Selector:		app=nginx
Type:			NodePort
IP:			10.254.4.244
Port:			80-80	80/TCP
NodePort:		80-80	30862/TCP
Endpoints:		172.17.48.2:80
Session Affinity:	None
No events.

4. Test from the node

[root@localhost ~]#docker container ls
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS               NAMES
b1f59f802525        nginx               "nginx -g 'daemon ..."   About an hour ago   Up About an hour                        k8s_nginx.9179dbd3_nginx-3121059884-4k8vd_default_baed9fe9-38b9-11e9-bd0c-000c291410a9_e251e27f
423a3b8b26b2        kubernetes/pause    "/pause"                 About an hour ago   Up About an hour                        k8s_POD.c73fd98d_nginx-3121059884-4k8vd_default_baed9fe9-38b9-11e9-bd0c-000c291410a9_bcb3406c

Check the ports kube-proxy is listening on:
[root@localhost ~]# netstat -lnpt|grep kube-proxy

tcp        0      0 127.0.0.1:10249         0.0.0.0:*               LISTEN      25180/kube-proxy    
tcp6       0      0 :::30862                :::*                    LISTEN      25180/kube-proxy    
[root@localhost ~]#curl http://192.168.52.132:30862/
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>body {width: 35em;margin: 0 auto;font-family: Tahoma, Verdana, Arial, sans-serif;}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p><p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p><p><em>Thank you for using nginx.</em></p>
</body>
</html>

2. Access it from a browser

http://192.168.52.132:30862/

With that, etcd + flannel + Kubernetes is up and running on CentOS 7.

That concludes this walkthrough of installing Kubernetes on CentOS 7.2 with yum; hopefully it proves useful.


