SecondaryNameNode fails to start when configuring Hadoop in pseudo-distributed mode

2024-02-13 11:18

This article describes a problem where the SecondaryNameNode fails to start when configuring Hadoop in pseudo-distributed mode. Hopefully it offers some reference value to developers who run into the same issue.

When configuring Hadoop in pseudo-distributed mode, the SecondaryNameNode will not start, yet its log file reports no errors. This problem has been bothering me for several days now; any pointers would be greatly appreciated!
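Before going through the log, it is worth confirming which HDFS daemons are actually running and where their log files live. Below is a minimal sketch of the first checks, assuming the default log directory $HADOOP_HOME/logs and the daemon log naming scheme hadoop-<user>-secondarynamenode-<hostname>.log (the user ubuntu and hostname localhost.localdomain are taken from the log further down; adjust both to your environment).

# list the running Hadoop JVMs; NameNode, DataNode and SecondaryNameNode should all show up
jps

# locate the SecondaryNameNode log files; each daemon writes a .log file (normal logging)
# and a .out file (the JVM's stdout/stderr)
ls -l $HADOOP_HOME/logs | grep -i secondarynamenode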

1. Screenshot of the run results

The strange thing is that the first time I ran jps, the SecondaryNameNode process was listed, but the second time it was gone.
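A process that appears in the first jps run and is gone in the second usually means the daemon did start and then exited on its own. In that case the reason is often not in the .log file at all but in the matching .out file, which captures the JVM's stdout and stderr (for example an OutOfMemoryError or a failed port bind). A hedged sketch of what to look at, with the file names assumed from the default naming scheme mentioned above:

# last lines of the normal log, plus the full stdout/stderr file of the daemon
tail -n 100 $HADOOP_HOME/logs/hadoop-ubuntu-secondarynamenode-localhost.localdomain.log
cat $HADOOP_HOME/logs/hadoop-ubuntu-secondarynamenode-localhost.localdomain.out

# a silent exit may leave nothing above INFO in the .log, but check anyway
grep -E "WARN|ERROR|FATAL|Exception" $HADOOP_HOME/logs/hadoop-ubuntu-secondarynamenode-localhost.localdomain.log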

2. The SecondaryNameNode log file (I could not see anything wrong in it)
2019-04-12 21:32:45,103 INFO org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: registered UNIX signal handlers for [TERM, HUP, INT]
2019-04-12 21:32:50,288 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2019-04-12 21:32:50,940 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled Metric snapshot period at 10 second(s).
2019-04-12 21:32:50,941 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: SecondaryNameNode metrics system started
2019-04-12 21:32:55,638 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Edit logging is async:true
2019-04-12 21:32:55,788 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /usr/local/hadoop/tmp/dfs/namesecondary/in_use.lock acquired by nodename 28426@localhost.localdomain
2019-04-12 21:32:55,896 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: KeyProvider: null
2019-04-12 21:32:55,897 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsLock is fair: true
2019-04-12 21:32:55,922 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Detailed lock hold time metrics enabled: false
2019-04-12 21:32:55,922 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner = ubuntu (auth:SIMPLE)
2019-04-12 21:32:55,922 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup = supergroup
2019-04-12 21:32:55,922 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled = true
2019-04-12 21:32:55,922 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: HA Enabled: false
2019-04-12 21:32:56,214 INFO org.apache.hadoop.hdfs.server.common.Util: dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
2019-04-12 21:32:56,285 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: dfs.block.invalidate.limit: configured=1000, counted=60, effected=1000
2019-04-12 21:32:56,285 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
2019-04-12 21:32:56,315 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
2019-04-12 21:32:56,321 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: The block deletion will start around 2019 Apr 12 21:32:56
2019-04-12 21:32:56,327 INFO org.apache.hadoop.util.GSet: Computing capacity for map BlocksMap
2019-04-12 21:32:56,327 INFO org.apache.hadoop.util.GSet: VM type = 64-bit
2019-04-12 21:32:56,329 INFO org.apache.hadoop.util.GSet: 2.0% max memory 454.4 MB = 9.1 MB
2019-04-12 21:32:56,329 INFO org.apache.hadoop.util.GSet: capacity = 2^20 = 1048576 entries
2019-04-12 21:32:56,376 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: dfs.block.access.token.enable = false
2019-04-12 21:32:56,457 INFO org.apache.hadoop.conf.Configuration.deprecation: No unit for dfs.namenode.safemode.extension(30000) assuming MILLISECONDS
2019-04-12 21:32:56,474 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
2019-04-12 21:32:56,474 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.min.datanodes = 0
2019-04-12 21:32:56,474 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.extension = 30000
2019-04-12 21:32:56,493 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: defaultReplication = 1
2019-04-12 21:32:56,493 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplication = 512
2019-04-12 21:32:56,493 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: minReplication = 1
2019-04-12 21:32:56,493 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplicationStreams = 2
2019-04-12 21:32:56,494 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: redundancyRecheckInterval = 3000ms
2019-04-12 21:32:56,494 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: encryptDataTransfer = false
2019-04-12 21:32:56,494 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxNumBlocksToLog = 1000
2019-04-12 21:32:57,340 INFO org.apache.hadoop.util.GSet: Computing capacity for map INodeMap
2019-04-12 21:32:57,340 INFO org.apache.hadoop.util.GSet: VM type = 64-bit
2019-04-12 21:32:57,340 INFO org.apache.hadoop.util.GSet: 1.0% max memory 454.4 MB = 4.5 MB
2019-04-12 21:32:57,340 INFO org.apache.hadoop.util.GSet: capacity = 2^19 = 524288 entries
2019-04-12 21:32:57,351 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: ACLs enabled? false
2019-04-12 21:32:57,351 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: POSIX ACL inheritance enabled? true
2019-04-12 21:32:57,351 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: XAttrs enabled? true
2019-04-12 21:32:57,351 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occurring more than 10 times
2019-04-12 21:32:57,406 INFO org.apache.hadoop.hdfs.server.namenode.snapshot.SnapshotManager: Loaded config captureOpenFiles: false, skipCaptureAccessTimeOnlyChange: false, snapshotDiffAllowSnapRootDescendant: true, maxSnapshotLimit: 65536
2019-04-12 21:32:57,436 INFO org.apache.hadoop.hdfs.server.namenode.snapshot.SnapshotManager: SkipList is disabled
2019-04-12 21:32:57,495 INFO org.apache.hadoop.util.GSet: Computing capacity for map cachedBlocks
2019-04-12 21:32:57,495 INFO org.apache.hadoop.util.GSet: VM type = 64-bit
2019-04-12 21:32:57,496 INFO org.apache.hadoop.util.GSet: 0.25% max memory 454.4 MB = 1.1 MB
2019-04-12 21:32:57,496 INFO org.apache.hadoop.util.GSet: capacity = 2^17 = 131072 entries
2019-04-12 21:32:57,611 INFO org.apache.hadoop.hdfs.server.namenode.top.metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
2019-04-12 21:32:57,611 INFO org.apache.hadoop.hdfs.server.namenode.top.metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
2019-04-12 21:32:57,611 INFO org.apache.hadoop.hdfs.server.namenode.top.metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
2019-04-12 21:32:57,711 INFO org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: Checkpoint Period :3600 secs (60 min)
2019-04-12 21:32:57,711 INFO org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: Log Size Trigger :1000000 txns
2019-04-12 21:32:57,742 INFO org.apache.hadoop.hdfs.DFSUtil: Starting Web-server for secondary at: http://0.0.0.0:9868
2019-04-12 21:32:58,037 INFO org.eclipse.jetty.util.log: Logging initialized @17365ms
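The log simply stops after the secondary web server is announced on http://0.0.0.0:9868 and Jetty initializes, with no WARN, ERROR or FATAL lines. One possible cause of a silent exit at exactly this point, offered only as a guess and not as a confirmed diagnosis for this case, is that the web UI port 9868 is already occupied by another process, so the bind fails and the daemon shuts down. A sketch of how to check and work around that:

# check whether anything already listens on the SecondaryNameNode web port (9868)
ss -lntp | grep 9868
# on older systems: netstat -lntp | grep 9868

# if the port is taken, either stop the conflicting process or move the web UI to a
# free port via dfs.namenode.secondary.http-address in hdfs-site.xml, then restart HDFS
stop-dfs.sh && start-dfs.sh

Another thing worth checking on a small pseudo-distributed VM is memory: the heap reported above is only about 454 MB, and an OutOfMemoryError during startup or checkpointing typically lands in the .out file rather than the .log.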

That concludes this article on the SecondaryNameNode failing to start when configuring Hadoop in pseudo-distributed mode. We hope it is helpful to other developers!



http://www.chinasem.cn/article/705397
