Alerting with Python Based on Data Collected by Metricbeat and Heartbeat

2024-01-20 03:52

This article describes how to use Python to generate alerts from data collected by metricbeat and heartbeat. Hopefully it serves as a useful reference for developers facing a similar problem.

1. System Architecture

IP          Hostname    Role                        Notes
11.0.1.11   kafka1      Kafka and MySQL
11.0.1.12   kafka2      Kafka
11.0.1.13   kafka3      Kafka
11.0.1.14   demo1       Metricbeat and Heartbeat

2. Deploying Kafka
Omitted.

3. Deploying Metricbeat and Heartbeat
metricbeat configuration:

metricbeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false

fields:
  ip: 11.0.1.14

output.kafka:
  hosts: ["11.0.1.11:9092","11.0.1.12:9092","11.0.1.13:9092"]
  topic: "ELK-metricbeat"

heartbeat configuration:

heartbeat.config.monitors:
  path: ${path.config}/monitors.d/*.yml
  reload.enabled: false
  reload.period: 5s

# ---------------------------- Kafka Output ----------------------------
output.kafka:
  hosts: ["11.0.1.11:9092","11.0.1.12:9092","11.0.1.13:9092"]
  topic: "ELK-heartbeat"

heartbeat tcp.yml configuration:

- type: tcp
  id: my-tcp-monitor
  name: My TCP monitor
  enabled: true
  schedule: '@every 20s'
  hosts: ["11.0.1.14:80","11.0.1.13:80","11.0.1.12:80"]
  ipv4: true
  ipv6: true
  mode: all

4. MariaDB Table Structure
cmdb_app table (stores application/system information):

SET NAMES utf8mb4;
SET FOREIGN_KEY_CHECKS = 0;

-- ----------------------------
-- Table structure for cmdb_app
-- ----------------------------
DROP TABLE IF EXISTS `cmdb_app`;
CREATE TABLE `cmdb_app`  (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `app_name` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL,
  `ops_user` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL,
  `ops_tel` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL,
  `ops_dep` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL,
  PRIMARY KEY (`id`) USING BTREE
) ENGINE = InnoDB AUTO_INCREMENT = 2 CHARACTER SET = utf8mb4 COLLATE = utf8mb4_general_ci ROW_FORMAT = Compact;

SET FOREIGN_KEY_CHECKS = 1;

Explanation:
app_name: application/system name
ops_user: name of the responsible operations engineer
ops_tel: phone number of the responsible operations engineer
ops_dep: operations department responsible for the system

cmdb_os table (stores server information):

SET NAMES utf8mb4;
SET FOREIGN_KEY_CHECKS = 0;

-- ----------------------------
-- Table structure for cmdb_os
-- ----------------------------
DROP TABLE IF EXISTS `cmdb_os`;
CREATE TABLE `cmdb_os`  (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `app_name` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL,
  `eip` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL,
  `module` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL,
  PRIMARY KEY (`id`) USING BTREE
) ENGINE = InnoDB AUTO_INCREMENT = 5 CHARACTER SET = utf8mb4 COLLATE = utf8mb4_general_ci ROW_FORMAT = Compact;

SET FOREIGN_KEY_CHECKS = 1;

Explanation:
app_name: name of the application/system the server belongs to (matches cmdb_app.app_name)
eip: server IP
module: server purpose

alert_list table (stores alert records):

SET NAMES utf8mb4;
SET FOREIGN_KEY_CHECKS = 0;

-- ----------------------------
-- Table structure for alert_list
-- ----------------------------
DROP TABLE IF EXISTS `alert_list`;
CREATE TABLE `alert_list`  (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `timestamp` datetime NULL DEFAULT NULL,
  `url` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL,
  `status` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL,
  `app_name` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL,
  `ops_user` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL,
  `ops_tel` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL,
  `ops_dep` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL,
  `module` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL,
  PRIMARY KEY (`id`) USING BTREE
) ENGINE = InnoDB AUTO_INCREMENT = 1 CHARACTER SET = utf8mb4 COLLATE = utf8mb4_general_ci ROW_FORMAT = Compact;

SET FOREIGN_KEY_CHECKS = 1;

5. Using a Python program to read data from Kafka, match cmdb_os and cmdb_app records against the ip field in the Kafka messages, and write the enriched data to a new Kafka topic

Install the dependencies (mysql-connector-python is included because heartbeat_alarm.py below imports mysql.connector):

pip install kafka-python pymysql mysql-connector-python apscheduler pyyaml

First, metricbeat_replace.py:

import json
import logging
from logging.handlers import RotatingFileHandler
import pymysql
from kafka import KafkaConsumer, KafkaProducer
import yaml
from apscheduler.schedulers.background import BackgroundScheduler
from apscheduler.triggers.cron import CronTrigger


class DatabaseConnectionError(Exception):
    def __init__(self, message="Database connection failed"):
        self.message = message
        super().__init__(self.message)


class KafkaCMDBProcessor:
    def __init__(self, kafka_config, mysql_config):
        self.kafka_config = kafka_config
        self.mysql_config = mysql_config
        self.logger = self.setup_logger()
        self.cmdb_data = None
        # Initialize the scheduler
        self.scheduler = BackgroundScheduler()
        self.scheduler.start()
        # Reload the CMDB data at minutes 0, 10, 20, 30, 40 and 50 of every hour
        self.scheduler.add_job(self.load_cmdb_data, CronTrigger(minute='0,10,20,30,40,50'))

    @staticmethod
    def setup_logger():
        logger = logging.getLogger(__name__)
        logger.setLevel(logging.DEBUG)
        # Console handler at DEBUG level
        ch = logging.StreamHandler()
        ch.setLevel(logging.DEBUG)
        # File handler at DEBUG level, max file size about 1 MB, keep 3 backups
        fh = RotatingFileHandler('metricbeat_replace.log', maxBytes=1_000_000, backupCount=3)
        fh.setLevel(logging.DEBUG)
        # Formatter shared by both handlers
        formatter = logging.Formatter('%(asctime)s - %(levelname)s - %(message)s')
        ch.setFormatter(formatter)
        fh.setFormatter(formatter)
        logger.addHandler(ch)
        logger.addHandler(fh)
        return logger

    def start_processing(self):
        self.connect_to_database()  # first database connection at startup
        self.load_cmdb_data()       # load the CMDB data into memory at startup
        self.logger.info("Starting processing...")
        consumer = KafkaConsumer(
            self.kafka_config['input_topic'],
            group_id=self.kafka_config['consumer_group_id'],
            bootstrap_servers=self.kafka_config['bootstrap_servers'],
            auto_offset_reset='earliest'
        )
        self.logger.info("Kafka consumer created.")
        producer = KafkaProducer(bootstrap_servers=self.kafka_config['bootstrap_servers'])
        self.logger.info("Kafka producer created.")
        try:
            for msg in consumer:
                metricbeat_data = msg.value.decode('utf-8')
                ip = self.extract_ip(metricbeat_data)
                cmdb_data = self.get_cmdb_data(ip)
                self.process_and_send_message(producer, metricbeat_data, cmdb_data)
        except KeyboardInterrupt:
            self.logger.info("KeyboardInterrupt received. Shutting down gracefully.")
        except Exception as e:
            self.logger.error(f"An error occurred: {str(e)}")
            # Additional handling for processing failures could go here
        finally:
            consumer.close()
            producer.close()

    def connect_to_database(self):
        db = None  # keeps the finally block safe if the connection fails
        try:
            self.logger.info("Connecting to the database...")
            db = pymysql.connect(
                host=self.mysql_config['host'],
                port=self.mysql_config['port'],
                user=self.mysql_config['user'],
                password=self.mysql_config['password'],
                database=self.mysql_config['db']
            )
            self.logger.info("Database connection succeeded.")
            self.db_connection_error_logged = False  # reset the error flag after a successful connection
        except pymysql.Error as e:
            error_message = f"Error while connecting to the database: {str(e)}"
            self.logger.error(error_message.split('\n')[0])
            raise DatabaseConnectionError(error_message) from e
        finally:
            if db:
                db.close()

    def load_cmdb_data(self):
        db = None
        cursor = None
        try:
            self.logger.info("Loading CMDB data.")
            db = pymysql.connect(
                host=self.mysql_config['host'],
                port=self.mysql_config['port'],
                user=self.mysql_config['user'],
                password=self.mysql_config['password'],
                database=self.mysql_config['db']
            )
            cursor = db.cursor()
            # Query the cmdb_os table
            sql_cmdb_os = "SELECT app_name, eip, module FROM cmdb_os"
            cursor.execute(sql_cmdb_os)
            cmdb_os_result = cursor.fetchall()
            # Query the cmdb_app table
            sql_cmdb_app = "SELECT app_name, ops_user, ops_tel, ops_dep FROM cmdb_app"
            cursor.execute(sql_cmdb_app)
            cmdb_app_result = cursor.fetchall()
            # Keep the data in memory
            self.cmdb_data = {
                "cmdb_os": cmdb_os_result,
                "cmdb_app": cmdb_app_result
            }
            self.logger.info("CMDB data loaded.")
        except pymysql.Error as e:
            error_message = f"Database error while loading CMDB data: {str(e)}"
            self.logger.error(error_message.split('\n')[0])
            self.logger.warning("Database connection problem; keeping the previously loaded CMDB data.")
        finally:
            if cursor:
                cursor.close()
            if db:
                db.close()

    @staticmethod
    def extract_ip(metricbeat_data):
        data = json.loads(metricbeat_data)
        return data.get('fields', {}).get('ip', '')

    def get_cmdb_data(self, ip):
        if self.cmdb_data:
            # Look the IP up in the in-memory data
            cmdb_os_data = [row for row in self.cmdb_data["cmdb_os"] if row[1] == ip]
            if not cmdb_os_data:
                return None  # no server record for this IP
            cmdb_app_data = [row for row in self.cmdb_data["cmdb_app"] if row[0] == cmdb_os_data[0][0]]
            if not cmdb_app_data:
                return None  # no application record for this app_name
            return cmdb_os_data, cmdb_app_data
        else:
            return None

    def process_and_send_message(self, producer, original_data, cmdb_data):
        original_data_str = original_data.decode('utf-8') if isinstance(original_data, bytes) else original_data
        new_message = json.loads(original_data_str)
        if cmdb_data:
            cmdb_os_data, cmdb_app_data = cmdb_data
            new_message["cmdb_data"] = {
                "app_name": cmdb_os_data[0][0],
                "eip": cmdb_os_data[0][1],
                "module": cmdb_os_data[0][2],
                "ops_user": cmdb_app_data[0][1],
                "ops_tel": cmdb_app_data[0][2],
                "ops_dep": cmdb_app_data[0][3]
            }
        else:
            new_message["cmdb_data"] = None
        producer.send(self.kafka_config['output_topic'],
                      value=json.dumps(new_message, ensure_ascii=False).encode('utf-8'))
        producer.flush()


if __name__ == "__main__":
    try:
        with open('application.yml', 'r') as config_file:
            config_data = yaml.safe_load(config_file)
        kafka_config_data = config_data.get('kafka', {})
        mysql_config_data = config_data.get('mysql', {})
        processor = KafkaCMDBProcessor(kafka_config_data, mysql_config_data)
        processor.start_processing()
    except FileNotFoundError:
        print("Error: configuration file 'application.yml' not found.")
    except Exception as e:
        print(f"Unexpected error: {str(e)}")
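For intuition, here is a minimal sketch of what the enrichment produces. The incoming event and the CMDB rows below are hypothetical, made-up values; only the cmdb_data structure mirrors process_and_send_message() above:

import json

# A hypothetical, heavily trimmed metricbeat event
incoming = {
    "@timestamp": "2024-01-20T03:52:00.000Z",
    "fields": {"ip": "11.0.1.14"},  # injected by the `fields` section of the metricbeat config
    "system": {"cpu": {"total": {"pct": 0.42}}},
}

# Hypothetical rows in the shape load_cmdb_data() returns
cmdb_os_row = ("demo-app", "11.0.1.14", "web server")                     # app_name, eip, module
cmdb_app_row = ("demo-app", "zhangsan", "13800000000", "ops department")  # app_name, ops_user, ops_tel, ops_dep

enriched = dict(incoming)
enriched["cmdb_data"] = {
    "app_name": cmdb_os_row[0],
    "eip": cmdb_os_row[1],
    "module": cmdb_os_row[2],
    "ops_user": cmdb_app_row[1],
    "ops_tel": cmdb_app_row[2],
    "ops_dep": cmdb_app_row[3],
}
print(json.dumps(enriched, ensure_ascii=False, indent=2))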

application.yml configuration:

kafka:
  bootstrap_servers:
    - '11.0.1.11:9092'
    - '11.0.1.12:9092'
    - '11.0.1.13:9092'
  consumer_group_id: 'metricbeat_replace'
  input_topic: 'ELK-metricbeat'
  output_topic: 'ELK-system_metricbeat'

mysql:
  host: '11.0.1.11'
  port: 13306
  user: 'root'
  password: '123456'
  db: 'zll_python_test'

heartbeat_replace.py:

import json
import logging
from logging.handlers import RotatingFileHandler
import pymysql
from kafka import KafkaConsumer, KafkaProducer
import yaml
from apscheduler.schedulers.background import BackgroundScheduler
from apscheduler.triggers.cron import CronTrigger


class DatabaseConnectionError(Exception):
    def __init__(self, message="Database connection failed"):
        self.message = message
        super().__init__(self.message)


class KafkaCMDBProcessor:
    def __init__(self, kafka_config, mysql_config):
        self.kafka_config = kafka_config
        self.mysql_config = mysql_config
        self.logger = self.setup_logger()
        self.cmdb_data = None
        # Initialize the scheduler
        self.scheduler = BackgroundScheduler()
        self.scheduler.start()
        # Reload the CMDB data at minutes 0, 10, 20, 30, 40 and 50 of every hour
        self.scheduler.add_job(self.load_cmdb_data, CronTrigger(minute='0,10,20,30,40,50'))

    @staticmethod
    def setup_logger():
        logger = logging.getLogger(__name__)
        logger.setLevel(logging.DEBUG)
        # Console handler at DEBUG level
        ch = logging.StreamHandler()
        ch.setLevel(logging.DEBUG)
        # File handler at DEBUG level, max file size about 1 MB, keep 3 backups
        fh = RotatingFileHandler('heartbeat_replace.log', maxBytes=1_000_000, backupCount=3)
        fh.setLevel(logging.DEBUG)
        # Formatter shared by both handlers
        formatter = logging.Formatter('%(asctime)s - %(levelname)s - %(message)s')
        ch.setFormatter(formatter)
        fh.setFormatter(formatter)
        logger.addHandler(ch)
        logger.addHandler(fh)
        return logger

    def start_processing(self):
        self.connect_to_database()  # first database connection at startup
        self.load_cmdb_data()       # load the CMDB data into memory at startup
        self.logger.info("Starting processing...")
        consumer = KafkaConsumer(
            self.kafka_config['input_topic'],
            group_id=self.kafka_config['consumer_group_id'],
            bootstrap_servers=self.kafka_config['bootstrap_servers'],
            auto_offset_reset='earliest'
        )
        self.logger.info("Kafka consumer created.")
        producer = KafkaProducer(bootstrap_servers=self.kafka_config['bootstrap_servers'])
        self.logger.info("Kafka producer created.")
        try:
            for msg in consumer:
                heartbeat_data = msg.value.decode('utf-8')
                ip = self.extract_url_domain(heartbeat_data)
                cmdb_data = self.get_cmdb_data(ip)
                self.process_and_send_message(producer, heartbeat_data, cmdb_data)
        except KeyboardInterrupt:
            self.logger.info("KeyboardInterrupt received. Shutting down gracefully.")
        except Exception as e:
            self.logger.error(f"An error occurred: {str(e)}")
            # Additional handling for processing failures could go here
        finally:
            consumer.close()
            producer.close()

    def connect_to_database(self):
        db = None  # keeps the finally block safe if the connection fails
        try:
            self.logger.info("Connecting to the database...")
            db = pymysql.connect(
                host=self.mysql_config['host'],
                port=self.mysql_config['port'],
                user=self.mysql_config['user'],
                password=self.mysql_config['password'],
                database=self.mysql_config['db']
            )
            self.logger.info("Database connection succeeded.")
            self.db_connection_error_logged = False  # reset the error flag after a successful connection
        except pymysql.Error as e:
            error_message = f"Error while connecting to the database: {str(e)}"
            self.logger.error(error_message.split('\n')[0])
            raise DatabaseConnectionError(error_message) from e
        finally:
            if db:
                db.close()

    def load_cmdb_data(self):
        db = None
        cursor = None
        try:
            self.logger.info("Loading CMDB data.")
            db = pymysql.connect(
                host=self.mysql_config['host'],
                port=self.mysql_config['port'],
                user=self.mysql_config['user'],
                password=self.mysql_config['password'],
                database=self.mysql_config['db']
            )
            cursor = db.cursor()
            # Query the cmdb_os table
            sql_cmdb_os = "SELECT app_name, eip, module FROM cmdb_os"
            cursor.execute(sql_cmdb_os)
            cmdb_os_result = cursor.fetchall()
            # Query the cmdb_app table
            sql_cmdb_app = "SELECT app_name, ops_user, ops_tel, ops_dep FROM cmdb_app"
            cursor.execute(sql_cmdb_app)
            cmdb_app_result = cursor.fetchall()
            # Keep the data in memory
            self.cmdb_data = {
                "cmdb_os": cmdb_os_result,
                "cmdb_app": cmdb_app_result
            }
            self.logger.info("CMDB data loaded.")
        except pymysql.Error as e:
            error_message = f"Database error while loading CMDB data: {str(e)}"
            self.logger.error(error_message.split('\n')[0])
            self.logger.warning("Database connection problem; keeping the previously loaded CMDB data.")
        finally:
            if cursor:
                cursor.close()
            if db:
                db.close()

    @staticmethod
    def extract_url_domain(heartbeat_data):
        data = json.loads(heartbeat_data)
        return data.get('url', {}).get('domain', '')

    def get_cmdb_data(self, ip):
        if self.cmdb_data:
            # Look the IP up in the in-memory data
            cmdb_os_data = [row for row in self.cmdb_data["cmdb_os"] if row[1] == ip]
            if not cmdb_os_data:
                return None  # no server record for this IP
            cmdb_app_data = [row for row in self.cmdb_data["cmdb_app"] if row[0] == cmdb_os_data[0][0]]
            if not cmdb_app_data:
                return None  # no application record for this app_name
            return cmdb_os_data, cmdb_app_data
        else:
            return None

    def process_and_send_message(self, producer, original_data, cmdb_data):
        original_data_str = original_data.decode('utf-8') if isinstance(original_data, bytes) else original_data
        new_message = json.loads(original_data_str)
        if cmdb_data:
            cmdb_os_data, cmdb_app_data = cmdb_data
            new_message["cmdb_data"] = {
                "app_name": cmdb_os_data[0][0],
                "eip": cmdb_os_data[0][1],
                "module": cmdb_os_data[0][2],
                "ops_user": cmdb_app_data[0][1],
                "ops_tel": cmdb_app_data[0][2],
                "ops_dep": cmdb_app_data[0][3]
            }
        else:
            new_message["cmdb_data"] = None
        producer.send(self.kafka_config['output_topic'],
                      value=json.dumps(new_message, ensure_ascii=False).encode('utf-8'))
        producer.flush()


if __name__ == "__main__":
    try:
        with open('application.yml', 'r') as config_file:
            config_data = yaml.safe_load(config_file)
        kafka_config_data = config_data.get('kafka', {})
        mysql_config_data = config_data.get('mysql', {})
        processor = KafkaCMDBProcessor(kafka_config_data, mysql_config_data)
        processor.start_processing()
    except FileNotFoundError:
        print("Error: configuration file 'application.yml' not found.")
    except Exception as e:
        print(f"Unexpected error: {str(e)}")

application.yml configuration:

kafka:
  bootstrap_servers:
    - '11.0.1.11:9092'
    - '11.0.1.12:9092'
    - '11.0.1.13:9092'
  consumer_group_id: 'heartbeat_replace'
  input_topic: 'ELK-heartbeat'
  output_topic: 'ELK-system_heartbeat'

mysql:
  host: '11.0.1.11'
  port: 13306
  user: 'root'
  password: '123456'
  db: 'zll_python_test'

Overall, metricbeat_replace.py and heartbeat_replace.py are essentially identical; apart from a few places where heartbeat replaces metricbeat (log file name, topics), the only real difference is the extraction line: return data.get('fields', {}).get('ip', '') versus return data.get('url', {}).get('domain', ''), as illustrated below.
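A tiny sketch of that difference, using hypothetical, heavily trimmed documents (real beat events carry many more fields):

# Hypothetical, trimmed documents showing which field each script keys on
metricbeat_doc = {"fields": {"ip": "11.0.1.14"}}
heartbeat_doc = {"url": {"domain": "11.0.1.14", "full": "tcp://11.0.1.14:80"}}

print(metricbeat_doc.get('fields', {}).get('ip', ''))   # what extract_ip() returns
print(heartbeat_doc.get('url', {}).get('domain', ''))   # what extract_url_domain() returns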

6. Heartbeat Alerting
heartbeat_alarm.py:

# heartbeat_alarm.py
import json
import logging
import mysql.connector
from collections import defaultdict
from datetime import datetime, timedelta
from kafka import KafkaConsumer
import yaml

# Configure the logger
logging.basicConfig(
    level=logging.INFO,
    filename='heartbeat_checker.log',
    format='%(asctime)s [%(levelname)s] - %(message)s',
    datefmt='%Y-%m-%d %H:%M:%S'
)
logger = logging.getLogger(__name__)


class HeartbeatChecker:
    def __init__(self, config_path='application.yml'):
        # Initialize the HeartbeatChecker object
        self.config_path = config_path
        self.kafka_bootstrap_servers = None
        self.kafka_group_id = None
        self.kafka_topic = None
        self.mysql_host = None
        self.mysql_port = None
        self.mysql_user = None
        self.mysql_password = None
        self.mysql_database = None
        self.consecutive_down_threshold = None
        self.consecutive_up_threshold = None
        # Load configuration from the YAML file
        self.load_config()
        self.kafka_consumer = None

    def load_config(self):
        try:
            # Load configuration from the YAML file
            with open(self.config_path, 'r') as file:
                config = yaml.safe_load(file)
            # Extract the Kafka configuration
            self.kafka_bootstrap_servers = config['kafka']['bootstrap_servers']
            self.kafka_group_id = config['kafka']['group_id']
            self.kafka_topic = config['kafka']['topic']
            # Extract the MySQL configuration
            self.mysql_host = config['mysql']['host']
            self.mysql_port = config['mysql']['port']
            self.mysql_user = config['mysql']['user']
            self.mysql_password = config['mysql']['password']
            self.mysql_database = config['mysql']['database']
            # Extract the consecutive-down and consecutive-up thresholds
            self.consecutive_down_threshold = config['thresholds']['consecutive_down']
            self.consecutive_up_threshold = config['thresholds']['consecutive_up']
        except Exception as e:
            # Handle configuration loading errors
            logger.error(f"Error while loading configuration: {e}")
            raise

    def create_kafka_consumer(self):
        try:
            # Create the Kafka consumer instance
            self.kafka_consumer = KafkaConsumer(
                self.kafka_topic,
                bootstrap_servers=self.kafka_bootstrap_servers,
                group_id=self.kafka_group_id,
                auto_offset_reset='latest',
                enable_auto_commit=True,
                value_deserializer=lambda x: json.loads(x.decode('utf-8'))
            )
        except Exception as e:
            # Handle Kafka consumer creation errors
            logger.error(f"Error while creating the Kafka consumer: {e}")
            raise

    def check_heartbeat_alerts(self):
        # defaultdict holding the list of monitor statuses seen per URL
        url_groups = defaultdict(list)
        mysql_connection = None
        try:
            # Create the Kafka consumer and connect to the MySQL database
            self.create_kafka_consumer()
            mysql_connection = mysql.connector.connect(
                host=self.mysql_host,
                port=self.mysql_port,
                user=self.mysql_user,
                password=self.mysql_password,
                database=self.mysql_database
            )
            mysql_cursor = mysql_connection.cursor()
            # Iterate over the Kafka messages
            for message in self.kafka_consumer:
                json_data = message.value
                url = json_data.get('url', {}).get('full')
                monitor_status = json_data.get('monitor', {}).get('status')
                timestamp_str = json_data.get('@timestamp')
                cmdb_data = json_data.get('cmdb_data')
                if url and monitor_status and timestamp_str:
                    timestamp = self.convert_to_local_time(timestamp_str)
                    # Handle consecutive up
                    if monitor_status == 'up' and self.url_exists_down_in_mysql(mysql_cursor, url):
                        url_groups[url].append(monitor_status)
                        mysql_cursor.fetchall()
                        if len(url_groups[url]) >= self.consecutive_up_threshold and all(
                                status == 'up' for status in url_groups[url][-self.consecutive_up_threshold:]):
                            self.delete_from_mysql(mysql_cursor, url, mysql_connection)
                            logger.info(f"URL: {url} - removed from MySQL after consecutive ups")
                    else:
                        # Handle consecutive down
                        if monitor_status == 'down' and not self.url_exists_down_in_mysql(mysql_cursor, url):
                            url_groups[url].append(monitor_status)
                            mysql_cursor.fetchall()
                            if len(url_groups[url]) >= self.consecutive_down_threshold and all(
                                    status == 'down' for status in url_groups[url][-self.consecutive_down_threshold:]):
                                self.send_alert(url)
                                self.write_to_mysql(mysql_cursor, timestamp, url, monitor_status,
                                                    mysql_connection, cmdb_data)
                                url_groups[url] = []
                                logger.info(f"URL: {url} - added to MySQL")
                        elif monitor_status == 'up' and self.url_exists_down_in_mysql(mysql_cursor, url):
                            url_groups[url].append(monitor_status)
                            mysql_cursor.fetchall()
                            if len(url_groups[url]) >= self.consecutive_up_threshold and all(
                                    status == 'up' for status in url_groups[url][-self.consecutive_up_threshold:]):
                                self.delete_from_mysql(mysql_cursor, url, mysql_connection)
                                url_groups[url] = []
                                logger.info(f"URL: {url} - removed from MySQL after consecutive ups")
        except Exception as e:
            # Handle runtime errors
            logger.error(f"An error occurred: {e}")
        finally:
            # Close the Kafka consumer and the MySQL connection
            if self.kafka_consumer:
                self.kafka_consumer.close()
            if mysql_connection:
                mysql_connection.close()

    def send_alert(self, url):
        # Log the alert
        logger.info(f"Alert: URL {url} has been down {self.consecutive_down_threshold} consecutive times")

    @staticmethod
    def write_to_mysql(cursor, timestamp, url, status, connection, cmdb_data=None):
        try:
            # Insert a row into MySQL, including the cmdb_data fields
            insert_query = """
                INSERT INTO alert_list (timestamp, url, status, app_name, module, ops_user, ops_tel, ops_dep)
                VALUES (%s, %s, %s, %s, %s, %s, %s, %s)
            """
            cursor.execute(insert_query, (
                timestamp,
                url,
                status,
                cmdb_data.get('app_name', '') if cmdb_data else '',
                cmdb_data.get('module', '') if cmdb_data else '',
                cmdb_data.get('ops_user', '') if cmdb_data else '',
                cmdb_data.get('ops_tel', '') if cmdb_data else '',
                cmdb_data.get('ops_dep', '') if cmdb_data else ''
            ))
            connection.commit()
            logging.info(f"Inserted into MySQL: URL {url}, Status {status}, cmdb_data {cmdb_data}")
        except Exception as e:
            # Handle MySQL write errors
            logger.error(f"Error writing to MySQL: {e}")

    @staticmethod
    def delete_from_mysql(cursor, url, connection):
        try:
            # Delete the 'down' record for this URL from MySQL
            delete_query = "DELETE FROM alert_list WHERE url = %s AND status = 'down'"
            cursor.execute(delete_query, (url,))
            connection.commit()
            logging.info(f"Deleted from MySQL: URL {url}")
        except Exception as e:
            # Handle MySQL delete errors
            logger.error(f"Error deleting from MySQL: {e}")

    @staticmethod
    def url_exists_down_in_mysql(cursor, url):
        try:
            # Check whether a 'down' record for this URL already exists in MySQL
            query = "SELECT * FROM alert_list WHERE url = %s AND status = 'down'"
            cursor.execute(query, (url,))
            return bool(cursor.fetchone())
        except Exception as e:
            # Handle errors while checking for the URL
            logger.error(f"Error checking whether the URL exists in MySQL: {e}")
            return False

    @staticmethod
    def convert_to_local_time(timestamp_str):
        # Convert the UTC timestamp to local time (UTC+8)
        timestamp_utc = datetime.strptime(timestamp_str, "%Y-%m-%dT%H:%M:%S.%fZ")
        timestamp_local = timestamp_utc + timedelta(hours=8)
        return timestamp_local.strftime("%Y-%m-%d %H:%M:%S")


if __name__ == "__main__":
    try:
        # Run the main program
        heartbeat_checker = HeartbeatChecker()
        heartbeat_checker.check_heartbeat_alerts()
    except KeyboardInterrupt:
        print("Exiting...")

application.yml:

# application.yml
kafka:
  bootstrap_servers:
    - '11.0.1.11:9092'
    - '11.0.1.12:9092'
    - '11.0.1.13:9092'
  group_id: 'python_alert'
  topic: 'ELK-system_heartbeat'

mysql:
  host: '11.0.1.11'
  port: 13306
  user: 'root'
  password: '123456'
  database: 'zll_python_test'

thresholds:
  consecutive_down: 1
  consecutive_up: 1

Here consecutive_down is the number of consecutive 'down' results that trigger an alert, and consecutive_up is the number of consecutive 'up' results that clear it.
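As a minimal standalone sketch of how such a threshold behaves (the threshold and status sequence are made-up example values; the real logic in heartbeat_alarm.py also consults the alert_list table):

from collections import defaultdict

consecutive_down_threshold = 3  # example value; the config above uses 1
url_groups = defaultdict(list)

def should_alert(url, status):
    # Return True once `url` has reported 'down' consecutive_down_threshold times in a row
    url_groups[url].append(status)
    recent = url_groups[url][-consecutive_down_threshold:]
    if len(url_groups[url]) >= consecutive_down_threshold and all(s == 'down' for s in recent):
        url_groups[url] = []  # reset after alerting, mirroring heartbeat_alarm.py
        return True
    return False

for s in ['up', 'down', 'down', 'down']:
    print(s, should_alert("tcp://11.0.1.14:80", s))
# prints: up False / down False / down False / down True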

7. Metricbeat Alerting

Metricbeat offers many more metrics to alert on, such as CPU, memory, filesystem, network traffic, and process counts; that is the next step to explore.
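As a rough sketch of one possible direction (not part of this article's implementation): consume the enriched topic and flag high CPU usage. The field path system.cpu.total.pct comes from the Metricbeat system module and should be verified against real documents; the threshold is arbitrary.

import json
from kafka import KafkaConsumer

CPU_THRESHOLD = 0.9  # example threshold

# Consume the enriched topic produced by metricbeat_replace.py
consumer = KafkaConsumer(
    'ELK-system_metricbeat',
    bootstrap_servers=['11.0.1.11:9092', '11.0.1.12:9092', '11.0.1.13:9092'],
    group_id='python_metric_alert_demo',
    value_deserializer=lambda x: json.loads(x.decode('utf-8')),
)

for message in consumer:
    data = message.value
    cpu_pct = data.get('system', {}).get('cpu', {}).get('total', {}).get('pct')
    if cpu_pct is not None and cpu_pct > CPU_THRESHOLD:
        ip = data.get('fields', {}).get('ip', 'unknown')
        print(f"High CPU on {ip}: {cpu_pct:.0%}")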

That wraps up this article on alerting with Python based on data collected by Metricbeat and Heartbeat; hopefully it is a useful reference.


