K3s Deployment

2024-06-15 08:20
Tags: deployment, k3s

This article walks through deploying K3s; hopefully it serves as a useful reference for developers working on the same problem.

k3s official site: https://k3s.io

K3s has a server side and an agent side; the operation of the Kubernetes control plane components is wrapped in a single binary and process.

Requirements

  • Minimum: 1 CPU and 512 MB of RAM. K3s performance depends on the performance of its database.
  • To ensure the best performance, we recommend using SSD-backed storage for K3s servers whenever possible.
  • Port 6443 must be reachable by all nodes; when running HA servers with embedded etcd, ports 2379 and 2380 must also be reachable between server nodes.
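If a host firewall is enabled, these ports need to be opened between the nodes. A minimal sketch, assuming firewalld (adjust for ufw or cloud security groups):

firewall-cmd --permanent --add-port=6443/tcp        # Kubernetes API server
firewall-cmd --permanent --add-port=2379-2380/tcp   # embedded etcd peer/client (HA servers only)
firewall-cmd --permanent --add-port=8472/udp        # Flannel VXLAN traffic between nodes
firewall-cmd --reload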

Install via script

curl -sfL https://rancher-mirror.rancher.cn/k3s/k3s-install.sh | INSTALL_K3S_MIRROR=cn sh -
  • The kubeconfig file is written to /etc/rancher/k3s/k3s.yaml
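After the script finishes, a quick sanity check can confirm the server came up. A minimal sketch, assuming the default systemd-based install above:

systemctl status k3s --no-pager                 # the installer registers a k3s systemd service
k3s kubectl get nodes                           # kubectl bundled with k3s already reads /etc/rancher/k3s/k3s.yaml
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml     # only needed for a separately installed kubectl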

Load images (required on every node)

Find the matching release on GitHub and download it:

mkdir -p /var/lib/rancher/k3s/agent/images/
wget https://github.com/k3s-io/k3s/releases/download/v1.30.0%2Bk3s1/k3s-airgap-images-amd64.tar.gz
gunzip k3s-airgap-images-amd64.tar.gz
ctr i import k3s-airgap-images-amd64.tar
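To confirm the import worked, the images can be listed through the ctr wrapper that the install script links into /usr/local/bin. A sketch; alternatively the unextracted .tar.gz can simply be left in /var/lib/rancher/k3s/agent/images/ and K3s imports it at startup:

ctr images ls | grep rancher    # the airgap bundle ships mostly docker.io/rancher/* images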

Join a node (optional)

The K3S_URL parameter causes the installer to configure K3s as an agent instead of a server. The K3s agent will register with the K3s server listening at that URL. Replace myserver with the server's IP address.
The value used for K3S_TOKEN is stored on the server node at /var/lib/rancher/k3s/server/node-token.

curl -sfL https://rancher-mirror.rancher.cn/k3s/k3s-install.sh | INSTALL_K3S_MIRROR=cn K3S_URL=https://myserver:6443 K3S_TOKEN=mynodetoken sh -
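Once the agent has registered, it should show up on the server. A minimal check, assuming the default service names created by the installer:

systemctl status k3s-agent --no-pager    # on the agent: the installer registers a k3s-agent service
k3s kubectl get nodes                    # on the server: the new node should appear and reach Ready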

=========================================

Uninstall the server (run on the server)

The installer places a corresponding uninstall script on each node.

/usr/local/bin/k3s-uninstall.sh

Uninstall the agent (run on the agent)

/usr/local/bin/k3s-agent-uninstall.sh
#/usr/local/bin/k3s-killall.sh	stop
#systemctl start k3s-agent	start
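For completeness, both roles are ordinary systemd units, so day-to-day start/stop/status follows the usual pattern. A sketch; k3s on servers, k3s-agent on agents:

systemctl status k3s            # server service
systemctl restart k3s
systemctl status k3s-agent      # agent service
systemctl restart k3s-agent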

=========================================

Set up an image registry

Setting up an image registry will be covered in a later post.
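As a preview, K3s reads per-node registry mirror configuration from /etc/rancher/k3s/registries.yaml. A minimal sketch only; the registry address and credentials below are placeholders, and k3s/k3s-agent must be restarted after the file changes:

cat > /etc/rancher/k3s/registries.yaml <<'EOF'
mirrors:
  docker.io:
    endpoint:
      - "https://registry.example.com:5000"
configs:
  "registry.example.com:5000":
    auth:
      username: admin
      password: changeme
EOF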

The rest is extra reference material; skip it if you don't need it.

k3s Installation

Introduction

In this lab we will be installing k3s.

Prerequisite

All of our labs are built upon one another. Please make sure all previous labs are completed before starting this lab.

Workflow

Hardware

For this lab we are going to need at least two virtual machines or physical machines to install k3s on. The machines need to be one of the following architectures:
  • x86_64
  • armhf
  • arm64/aarch64
  • s390x
Each of these machines needs to meet the minimum requirements:
  • CPU: 1 core
  • RAM: 512 MB
Recommended:
  • CPU: 2 cores
  • RAM: 1 GB
Operating System

K3s will work on most modern Linux systems. Note there are differences for CentOS/Red Hat and Raspberry Pi OS. For these labs we will be using openSUSE Leap.

k3s Version

In this course we are going to do an upgrade to a newer version of k3s. To do this we are going to need to find the latest version of k3s and take the stable release one minor version back. For example, if the latest release of k3s is v1.27.4+k3s1, we are going to want to use v1.26.7+k3s1. You can find the latest versions of k3s here. NOTE: don't try this lab with any release marked as Pre-release.
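One way to look the current versions up from the command line is sketched below; the k3s release channel server and the GitHub releases API are the assumed sources:

curl -sL -o /dev/null -w '%{url_effective}\n' https://update.k3s.io/v1-release/channels/stable   # resolves to the current stable release page
curl -s https://api.github.com/repos/k3s-io/k3s/releases/latest | grep '"tag_name"'              # latest GitHub release tag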
Server Script Install

One of the easiest ways to install k3s is with the installation script. You will need to ssh in to the server node. The install script lives at https://get.k3s.io/. To run it we are going to use a few flags:

curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION=v1.26.7+k3s1 INSTALL_K3S_EXEC="server --cluster-init --node-ip 192.168.3.2 --node-external-ip 192.168.3.2" sh -s -

NOTE: the k3s version might not be the one prior to the latest! Check here for the correct version.

This sets up the server installation on our server VM. To check that everything is running, k3s comes with kubectl, so while still logged in to our server instance we can do a quick check:

k3s kubectl get pods -A

And you should see the following output:

NAMESPACE     NAME                                     READY   STATUS      RESTARTS   AGE
kube-system   local-path-provisioner-957fdf8bc-sd297   1/1     Running     0          107s
kube-system   coredns-77ccd57875-sbrsl                 1/1     Running     0          107s
kube-system   helm-install-traefik-crd-7926x           0/1     Completed   0          107s
kube-system   helm-install-traefik-rswvl               0/1     Completed   1          107s
kube-system   metrics-server-648b5df564-5xbcw          1/1     Running     0          107s
kube-system   svclb-traefik-ea396d7d-4h9f6             2/2     Running     0          53s
kube-system   traefik-64f55bb67d-5cqxm                 1/1     Running     0          53s

Once you see that the system pods are either Running or Completed, we need to get the token from the server. To do so, run the following command:

sudo cat /var/lib/rancher/k3s/server/node-token

You will get the token needed for adding the agent (Note: it will not be this exact key):

K105b9c68df80752c8f9d498097b764a35f9ba2c0220b2ea8951cef3aca111d9f33::server:2855bde078f38f3964f3f36e6e37dfbb
Now log out of the server VM.

Agent Script Install

Now we need to install k3s on the agent VM. First let's remote in to our agent VM. Again, the install script lives at https://get.k3s.io/. To run it we are going to use a few flags (Note: this will take a few moments.):

curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION=v1.26.7+k3s1 INSTALL_K3S_EXEC="agent --server https://192.168.3.2:6443 --node-ip 192.168.3.3 --node-external-ip 192.168.3.3" K3S_TOKEN=K105b9c68df80752c8f9d498097b764a35f9ba2c0220b2ea8951cef3aca111d9f33::server:2855bde078f38f3964f3f36e6e37dfbb K3S_NODE_NAME=agent sh -

NOTE: the k3s version might not be the one prior to the latest! Check here for the correct version.
NOTE: The K3S_TOKEN is the token we pulled from the server earlier.

kubectl Remotely
Just so we don't have to remain logged in to the server VM, let's copy down the config:

ssh server "sudo cat /etc/rancher/k3s/k3s.yaml" > k3s.yaml

This copies the KUBECONFIG file down from the server. We are going to need to modify the k3s.yaml file:

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRU
    server: https://127.0.0.1:6443
  name: default
contexts:
- context:
    cluster: default
    user: default
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: default
  user:
    client-certificate-data: LS0tLS1CRU
    client-key-data: LS0tLS1CRU

(Note: the certificate data has been cut down.)

We are going to need to change the server field from:

server: https://127.0.0.1:6443

To the following:

server: https://192.168.3.2:6443
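The same edit can be scripted; a sketch, using the lab's server address 192.168.3.2:

sed -i 's#https://127.0.0.1:6443#https://192.168.3.2:6443#' k3s.yaml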
Now we are going to copy this k3s.yaml file to our .kube directory:

mkdir -p $HOME/.kube && cp k3s.yaml $HOME/.kube/

From there we are going to set the KUBECONFIG variable to the location of the k3s.yaml file:

export KUBECONFIG=$HOME/.kube/k3s.yaml

To test that this works we can run the following command:

kubectl get nodes

And you should see something like this:

NAME     STATUS   ROLES                       AGE    VERSION
agent    Ready    <none>                      21s    v1.26.7+k3s1
server   Ready    control-plane,etcd,master   3m9s   v1.26.7+k3s1
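As a final smoke test (optional; the deployment name and image below are arbitrary), schedule a workload and confirm it runs on one of the nodes:

kubectl create deployment hello --image=nginx
kubectl get pods -o wide          # the pod should reach Running on the server or the agent
kubectl delete deployment hello   # clean up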

That wraps up this article on deploying K3s; we hope it is helpful.


