flink source code analysis - loading the configuration file during JobManager startup in standalone mode


Flink version: flink-1.11.2

Code location: org.apache.flink.runtime.entrypoint.StandaloneSessionClusterEntrypoint#main

/*
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements.  See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership.  The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License.  You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package org.apache.flink.runtime.entrypoint;

import org.apache.flink.configuration.Configuration;
import org.apache.flink.runtime.entrypoint.component.DefaultDispatcherResourceManagerComponentFactory;
import org.apache.flink.runtime.entrypoint.parser.CommandLineParser;
import org.apache.flink.runtime.resourcemanager.StandaloneResourceManagerFactory;
import org.apache.flink.runtime.util.EnvironmentInformation;
import org.apache.flink.runtime.util.JvmShutdownSafeguard;
import org.apache.flink.runtime.util.SignalHandler;

/*************************************************
 * TODO_MA 马中华 https://blog.csdn.net/zhongqi2513
 * Note: Flink can execute applications in three modes: session mode, per-job mode, application mode.
 * The modes differ mainly in:
 * 1. cluster lifecycle and resource-isolation guarantees
 * 2. whether the application's main() method runs on the client or in the cluster
 */
/** Entry point for the standalone session cluster. */
public class StandaloneSessionClusterEntrypoint extends SessionClusterEntrypoint {

    public StandaloneSessionClusterEntrypoint(Configuration configuration) {
        super(configuration);
    }

    @Override
    protected DefaultDispatcherResourceManagerComponentFactory createDispatcherResourceManagerComponentFactory(
            Configuration configuration) {
        // Note:
        // 1. the argument is a StandaloneResourceManagerFactory instance
        // 2. the return value is a DefaultDispatcherResourceManagerComponentFactory instance
        return DefaultDispatcherResourceManagerComponentFactory.createSessionComponentFactory(
                StandaloneResourceManagerFactory.getInstance());
    }

    // Note: the entry point
    public static void main(String[] args) {
        // Note: utility class giving access to the JVM execution environment,
        // e.g. the executing user (getHadoopUser()), startup options, or the JVM version.
        // startup checks and logging
        EnvironmentInformation.logEnvironmentInfo(LOG, StandaloneSessionClusterEntrypoint.class.getSimpleName(), args);

        // Note: register some signal handlers
        SignalHandler.register(LOG);

        // Note: install a hook for safe shutdown. The cluster may receive a shutdown
        // command while it is still starting up, or later while it is running.
        JvmShutdownSafeguard.installAsShutdownHook(LOG);

        EntrypointClusterConfiguration entrypointClusterConfiguration = null;
        final CommandLineParser<EntrypointClusterConfiguration> commandLineParser =
                new CommandLineParser<>(new EntrypointClusterConfigurationParserFactory());

        try {
            // Note: parse the arguments passed in. Internally the
            // EntrypointClusterConfigurationParserFactory parses the options and returns an
            // EntrypointClusterConfiguration, a subclass of ClusterConfiguration.
            entrypointClusterConfiguration = commandLineParser.parse(args);
        } catch (FlinkParseException e) {
            LOG.error("Could not parse command line arguments {}.", args, e);
            commandLineParser.printHelp(StandaloneSessionClusterEntrypoint.class.getSimpleName());
            System.exit(1);
        }

        // Note: resolve the configuration, parsing Flink's config file: flink-conf.yaml
        Configuration configuration = loadConfiguration(entrypointClusterConfiguration);

        // Note: create the StandaloneSessionClusterEntrypoint
        StandaloneSessionClusterEntrypoint entrypoint = new StandaloneSessionClusterEntrypoint(configuration);

        // Note: start the cluster entrypoint. This method accepts the parent type
        // ClusterEntrypoint, so the other deployment modes also start through it.
        ClusterEntrypoint.runClusterEntrypoint(entrypoint);
    }
}

Loading the configuration happens in two main steps:

1. Parse the arguments passed on the command line. Core code:

entrypointClusterConfiguration = commandLineParser.parse(args);

For how the parsing works, see:

flink源码分析 - 命令行参数解析-CommandLineParser-CSDN博客
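The CommandLineParser delegates to a parser factory (here EntrypointClusterConfigurationParserFactory, built on commons-cli options) that turns arguments such as `--configDir` and dynamic `-D key=value` properties into an EntrypointClusterConfiguration. A dependency-free sketch of that contract, with illustrative names of my own (ParsedArgs and this simplified option set are not Flink's API):

```java
import java.util.Properties;

public class EntrypointArgsSketch {

    // Illustrative stand-in for EntrypointClusterConfiguration: just the
    // config directory plus any dynamic -D properties.
    static final class ParsedArgs {
        String configDir;
        final Properties dynamicProperties = new Properties();
    }

    // Minimal hand-rolled parser for "--configDir <dir>" (or "-c <dir>") and
    // "-Dkey=value" arguments, mimicking what the commons-cli-based factory produces.
    static ParsedArgs parse(String[] args) {
        ParsedArgs parsed = new ParsedArgs();
        for (int i = 0; i < args.length; i++) {
            String arg = args[i];
            if (("--configDir".equals(arg) || "-c".equals(arg)) && i + 1 < args.length) {
                parsed.configDir = args[++i];          // option value is the next token
            } else if (arg.startsWith("-D") && arg.contains("=")) {
                String[] kv = arg.substring(2).split("=", 2);
                parsed.dynamicProperties.setProperty(kv[0], kv[1]);
            }
        }
        return parsed;
    }

    public static void main(String[] args) {
        ParsedArgs p = parse(new String[] {"--configDir", "/opt/flink/conf", "-Drest.port=8082"});
        System.out.println(p.configDir + " " + p.dynamicProperties);
    }
}
```

The real factory additionally validates options and prints a usage string on FlinkParseException, which is what the printHelp branch in main() relies on.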

2. Load the flink-conf.yaml configuration:

Core code:

Configuration configuration = loadConfiguration(entrypointClusterConfiguration);

The other steps are skipped here; only the key YAML-file parsing is recorded. Note org.apache.flink.configuration.GlobalConfiguration#loadYAMLResource below:

/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements. (Apache License 2.0 header, identical to the one above.)
 */

package org.apache.flink.configuration;

import org.apache.flink.annotation.Internal;
import org.apache.flink.util.Preconditions;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import javax.annotation.Nullable;

import java.io.BufferedReader;
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStreamReader;

/**
 * Global configuration object for Flink. Similar to Java properties configuration
 * objects it includes key-value pairs which represent the framework's configuration.
 */
@Internal
public final class GlobalConfiguration {

    private static final Logger LOG = LoggerFactory.getLogger(GlobalConfiguration.class);

    public static final String FLINK_CONF_FILENAME = "flink-conf.yaml";

    // the keys whose values should be hidden
    private static final String[] SENSITIVE_KEYS = new String[] {"password", "secret", "fs.azure.account.key"};

    // the hidden content to be displayed
    public static final String HIDDEN_CONTENT = "******";

    // --------------------------------------------------------------------------------------------

    private GlobalConfiguration() {}

    // --------------------------------------------------------------------------------------------

    /**
     * Loads the global configuration from the environment. Fails if an error occurs during
     * loading. Returns an empty configuration object if the environment variable is not set.
     * In production this variable is set but tests and local execution/debugging don't have
     * this environment variable set. That's why we should fail if it is not set.
     *
     * @return Returns the Configuration
     */
    public static Configuration loadConfiguration() {
        return loadConfiguration(new Configuration());
    }

    /**
     * Loads the global configuration and adds the given dynamic properties configuration.
     *
     * @param dynamicProperties The given dynamic properties
     * @return Returns the loaded global configuration with dynamic properties
     */
    public static Configuration loadConfiguration(Configuration dynamicProperties) {
        final String configDir = System.getenv(ConfigConstants.ENV_FLINK_CONF_DIR);
        if (configDir == null) {
            return new Configuration(dynamicProperties);
        }
        return loadConfiguration(configDir, dynamicProperties);
    }

    /**
     * Loads the configuration files from the specified directory.
     *
     * <p>YAML files are supported as configuration files.
     *
     * @param configDir the directory which contains the configuration files
     */
    public static Configuration loadConfiguration(final String configDir) {
        return loadConfiguration(configDir, null);
    }

    /**
     * Loads the configuration files from the specified directory. If the dynamic properties
     * configuration is not null, then it is added to the loaded configuration.
     *
     * @param configDir directory to load the configuration from
     * @param dynamicProperties configuration file containing the dynamic properties. Null if none.
     * @return The configuration loaded from the given configuration directory
     */
    public static Configuration loadConfiguration(final String configDir, @Nullable final Configuration dynamicProperties) {
        if (configDir == null) {
            throw new IllegalArgumentException("Given configuration directory is null, cannot load configuration");
        }

        final File confDirFile = new File(configDir);
        if (!(confDirFile.exists())) {
            throw new IllegalConfigurationException("The given configuration directory name '" + configDir +
                "' (" + confDirFile.getAbsolutePath() + ") does not describe an existing directory.");
        }

        // Note: Flink's config file: flink-conf.yaml
        // get Flink yaml configuration file
        final File yamlConfigFile = new File(confDirFile, FLINK_CONF_FILENAME);
        if (!yamlConfigFile.exists()) {
            throw new IllegalConfigurationException("The Flink config file '" + yamlConfigFile +
                "' (" + confDirFile.getAbsolutePath() + ") does not exist.");
        }

        // Note: read the flink-conf.yaml configuration file
        Configuration configuration = loadYAMLResource(yamlConfigFile);

        if (dynamicProperties != null) {
            configuration.addAll(dynamicProperties);
        }

        return configuration;
    }

    /**
     * Loads a YAML-file of key-value pairs.
     *
     * <p>Colon and whitespace ": " separate key and value (one per line). The hash tag "#" starts a single-line comment.
     *
     * <p>Example:
     *
     * <pre>
     * jobmanager.rpc.address: localhost # network address for communication with the job manager
     * jobmanager.rpc.port   : 6123      # network port to connect to for communication with the job manager
     * taskmanager.rpc.port  : 6122      # network port the task manager expects incoming IPC connections
     * </pre>
     *
     * <p>This does not span the whole YAML specification, but only the *syntax* of simple YAML key-value pairs
     * (see issue #113 on GitHub). If at any point in time, there is a need to go beyond simple key-value pairs
     * syntax compatibility will allow to introduce a YAML parser library.
     *
     * @param file the YAML file to read from
     * @see <a href="http://www.yaml.org/spec/1.2/spec.html">YAML 1.2 specification</a>
     */
    private static Configuration loadYAMLResource(File file) {
        // Note: the container that stores the parsed configuration
        final Configuration config = new Configuration();

        // Note: read the flink-conf.yaml file
        try (BufferedReader reader = new BufferedReader(new InputStreamReader(new FileInputStream(file)))) {

            String line;
            int lineNo = 0;
            // Note: read line by line
            while ((line = reader.readLine()) != null) {
                lineNo++;

                // 1. check for comments
                /* zhouxianfu 2023-07-30: splitting off the comment first prevents wrong values
                   being stored for lines like:  key: value ## comment
                   Example:
                   high-availability.cluster-id: /flink-1.12.0_cluster_yarn   ## note: must not be set in YARN mode, where YARN generates it automatically */
                String[] comments = line.split("#", 2);
                String conf = comments[0].trim();

                // 2. get key and value
                if (conf.length() > 0) {
                    String[] kv = conf.split(": ", 2);

                    // skip line with no valid key-value pair
                    if (kv.length == 1) {
                        LOG.warn("Error while trying to split key and value in configuration file " + file + ":" + lineNo + ": \"" + line + "\"");
                        continue;
                    }

                    String key = kv[0].trim();
                    String value = kv[1].trim();

                    // sanity check
                    if (key.length() == 0 || value.length() == 0) {
                        LOG.warn("Error after splitting key and value in configuration file " + file + ":" + lineNo + ": \"" + line + "\"");
                        continue;
                    }

                    LOG.info("Loading configuration property: {}, {}", key, isSensitive(key) ? HIDDEN_CONTENT : value);
                    config.setString(key, value);
                }
            }
        } catch (IOException e) {
            throw new RuntimeException("Error parsing YAML configuration.", e);
        }

        // Note: return the Configuration
        return config;
    }

    /**
     * Check whether the key is a hidden key.
     *
     * @param key the config key
     */
    public static boolean isSensitive(String key) {
        Preconditions.checkNotNull(key, "key is null");
        final String keyInLower = key.toLowerCase();
        for (String hideKey : SENSITIVE_KEYS) {
            if (keyInLower.length() >= hideKey.length() && keyInLower.contains(hideKey)) {
                return true;
            }
        }
        return false;
    }
}
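To see why the order of operations in loadYAMLResource matters (strip the "#" comment first, then split on the first ": "), the per-line logic can be reduced to a standalone sketch. FlatYamlSketch and parseFlatYaml are illustrative names of my own; the real method writes into Flink's Configuration object and logs skipped lines instead of silently dropping them:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class FlatYamlSketch {

    // Mirrors GlobalConfiguration#loadYAMLResource's per-line logic:
    // drop the trailing comment first, then split "key: value" on the first ": ".
    static Map<String, String> parseFlatYaml(Iterable<String> lines) {
        Map<String, String> config = new LinkedHashMap<>();
        for (String line : lines) {
            String conf = line.split("#", 2)[0].trim();   // 1. strip "#" comment
            if (conf.isEmpty()) {
                continue;                                  // blank or comment-only line
            }
            String[] kv = conf.split(": ", 2);             // 2. first ": " separates key/value
            if (kv.length < 2) {
                continue;                                  // no valid key-value pair
            }
            String key = kv[0].trim();
            String value = kv[1].trim();
            if (!key.isEmpty() && !value.isEmpty()) {      // sanity check, as in the original
                config.put(key, value);
            }
        }
        return config;
    }

    public static void main(String[] args) {
        Map<String, String> c = parseFlatYaml(java.util.Arrays.asList(
                "jobmanager.rpc.address: localhost  # comment after value",
                "high-availability.cluster-id: /flink-1.12.0_cluster_yarn   ## comment is stripped, not stored",
                "# a full-line comment",
                "taskmanager.rpc.port: 6122"));
        System.out.println(c);
    }
}
```

If the comment were split off after the ": " split instead, a line like `key: value ## comment` would store `value ## comment` as the value, which is exactly the bug the zhouxianfu comment in the source warns about.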
