Problem: HBase RegionServer crashes frequently

2024-02-24 22:20

This article walks through a case of HBase RegionServers crashing frequently, and may serve as a reference for developers hitting the same problem.

Error log
2019-09-21 20:42:17,264 INFO org.apache.hadoop.hbase.ScheduledChore: Chore: CompactionChecker missed its start time
2019-09-21 20:42:17,273 WARN org.apache.hadoop.hbase.util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 156013ms
GC pool 'ParNew' had collection(s): count=1 time=156080ms
2019-09-21 20:42:17,264 WARN org.apache.hadoop.hbase.util.Sleeper: We slept 158843ms instead of 3000ms, this is likely due to a long garbage collecting pause and it's usually bad, see http://hbase.apache.org/book.html#trouble.rs.runtime.zkexpired
2019-09-21 20:42:17,281 WARN org.apache.hadoop.hbase.ipc.RpcServer: (responseTooSlow): {"call":"Scan(org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ScanRequest)","starttimems":1569069581136,"responsesize":2051,"method":"Scan","processingtimems":156145,"client":"10.97.202.19:58322","queuetimems":0,"class":"HRegionServer"}
2019-09-21 20:42:17,300 FATAL org.apache.hadoop.hbase.regionserver.HRegionServer: ABORTING region server hdh19,60020,1568940808648: org.apache.hadoop.hbase.YouAreDeadException: Server REPORT rejected; currently processing hdh19,60020,1568940808648 as dead server
    at org.apache.hadoop.hbase.master.ServerManager.checkIsDead(ServerManager.java:426)
    at org.apache.hadoop.hbase.master.ServerManager.regionServerReport(ServerManager.java:331)
    at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerReport(MasterRpcServices.java:345)
    at org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:8617)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2170)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:109)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:185)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:165)
org.apache.hadoop.hbase.YouAreDeadException: org.apache.hadoop.hbase.YouAreDeadException: Server REPORT rejected; currently processing hdh19,60020,1568940808648 as dead server
    at org.apache.hadoop.hbase.master.ServerManager.checkIsDead(ServerManager.java:426)
    at org.apache.hadoop.hbase.master.ServerManager.regionServerReport(ServerManager.java:331)
    at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerReport(MasterRpcServices.java:345)
    at org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:8617)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2170)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:109)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:185)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:165)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
    at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)
    at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRemoteException(ProtobufUtil.java:327)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.tryRegionServerReport(HRegionServer.java:1158)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:966)
    at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.YouAreDeadException): org.apache.hadoop.hbase.YouAreDeadException: Server REPORT rejected; currently processing hdh19,60020,1568940808648 as dead server
......
2019-09-21 20:42:17,621 INFO org.apache.zookeeper.ClientCnxn: Unable to reconnect to ZooKeeper service, session 0x86cf6a57553f9a7 has expired, closing socket connection
2019-09-21 20:42:17,621 FATAL org.apache.hadoop.hbase.regionserver.HRegionServer: ABORTING region server hdh19,60020,1568940808648: regionserver:60020-0x86cf6a57553f9a7, quorum=hdh12:2181,hdh53:2181,hdh1-07.p.xyidc:2181,hdh52:2181,hdh1-10.p.xyidc:2181, baseZNode=/hbase regionserver:60020-0x86cf6a57553f9a7 received expired from ZooKeeper, aborting
org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = Session expired
    at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.connectionEvent(ZooKeeperWatcher.java:700)
    at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.process(ZooKeeperWatcher.java:611)
    at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:522)
    at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:498)
2019-09-21 20:42:42,269 ERROR org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: ZooKeeper getChildren failed after 4 attempts
2019-09-21 20:42:42,269 WARN org.apache.hadoop.hbase.zookeeper.ZKUtil: regionserver:60020-0x86cf6a57553f9a7, quorum=hdh12:2181,hdh53:2181,hdh1-07.p.xyidc:2181,hdh52:2181,hdh1-10.p.xyidc:2181, baseZNode=/hbase Unable to list children of znode /hbase/replication/rs/hdh19,60020,1568940808648
org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = Session expired for /hbase/replication/rs/hdh19,60020,1568940808648
    at org.apache.zookeeper.KeeperException.create(KeeperException.java:127)
    at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
    at org.apache.zookeeper.ZooKeeper.getChildren(ZooKeeper.java:1468)
    at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.getChildren(RecoverableZooKeeper.java:295)
    at org.apache.hadoop.hbase.zookeeper.ZKUtil.listChildrenAndWatchForNewChildren(ZKUtil.java:456)
    at org.apache.hadoop.hbase.zookeeper.ZKUtil.listChildrenAndWatchThem(ZKUtil.java:484)
    at org.apache.hadoop.hbase.zookeeper.ZKUtil.listChildrenBFSAndWatchThem(ZKUtil.java:1476)
    at org.apache.hadoop.hbase.zookeeper.ZKUtil.deleteNodeRecursivelyMultiOrSequential(ZKUtil.java:1398)
    at org.apache.hadoop.hbase.zookeeper.ZKUtil.deleteNodeRecursively(ZKUtil.java:1280)
    at org.apache.hadoop.hbase.replication.ReplicationQueuesZKImpl.removeAllQueues(ReplicationQueuesZKImpl.java:187)
    at org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceManager.join(ReplicationSourceManager.java:310)
    at org.apache.hadoop.hbase.replication.regionserver.Replication.join(Replication.java:180)
    at org.apache.hadoop.hbase.replication.regionserver.Replication.stopReplicationService(Replication.java:172)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.stopServiceThreads(HRegionServer.java:2162)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1088)
    at java.lang.Thread.run(Thread.java:748)
2019-09-21 20:42:42,270 ERROR org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher: regionserver:60020-0x86cf6a57553f9a7, quorum=hdh12:2181,hdh53:2181,hdh1-07.p.xyidc:2181,hdh52:2181,hdh1-10.p.xyidc:2181, baseZNode=/hbase Received unexpected KeeperException, re-throwing exception
org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = Session expired for /hbase/replication/rs/hdh19,60020,1568940808648
    at org.apache.zookeeper.KeeperException.create(KeeperException.java:127)
    at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
    at org.apache.zookeeper.ZooKeeper.getChildren(ZooKeeper.java:1468)
    at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.getChildren(RecoverableZooKeeper.java:295)
    at org.apache.hadoop.hbase.zookeeper.ZKUtil.listChildrenAndWatchForNewChildren(ZKUtil.java:456)
    at org.apache.hadoop.hbase.zookeeper.ZKUtil.listChildrenAndWatchThem(ZKUtil.java:484)
    at org.apache.hadoop.hbase.zookeeper.ZKUtil.listChildrenBFSAndWatchThem(ZKUtil.java:1476)
    at org.apache.hadoop.hbase.zookeeper.ZKUtil.deleteNodeRecursivelyMultiOrSequential(ZKUtil.java:1398)
    at org.apache.hadoop.hbase.zookeeper.ZKUtil.deleteNodeRecursively(ZKUtil.java:1280)
    at org.apache.hadoop.hbase.replication.ReplicationQueuesZKImpl.removeAllQueues(ReplicationQueuesZKImpl.java:187)
    at org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceManager.join(ReplicationSourceManager.java:310)
    at org.apache.hadoop.hbase.replication.regionserver.Replication.join(Replication.java:180)

The production log tells the story: the RegionServer hit a very long GC pause (roughly 156 seconds, per the JvmPauseMonitor warning), so its ZooKeeper session expired and ZooKeeper closed the connection. Once the session expired, the Master began treating the node as dead, and the RegionServer's next heartbeat was rejected with YouAreDeadException. In this situation HBase deliberately aborts a RegionServer that has lost its ZooKeeper session, because requests destined for the timed-out node may already have been rerouted to other nodes.
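When diagnosing this kind of crash, it helps to pull all the JVM pauses out of the RegionServer log first, to see how often and how badly GC is stalling the process. A minimal sketch (the log path in the example comment is an assumption; adjust it for your installation):

```shell
# jvm_pauses LOGFILE [N]: print the N (default 5) longest JVM pause
# durations that JvmPauseMonitor logged in a RegionServer log file.
jvm_pauses() {
  grep -o 'pause of approximately [0-9]*ms' "$1" \
    | awk '{print $4}' \
    | sort -n \
    | tail -"${2:-5}"
}

# Example (hypothetical log path):
# jvm_pauses /var/log/hbase/hbase-regionserver-hdh19.log 5
```

Pauses approaching or exceeding the ZooKeeper session timeout are the ones that will get the server declared dead.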

Solution
Increase the ZooKeeper session timeout used by HBase.

We set the HBase session timeout to 5 minutes. Note that setting the timeout on the HBase side alone is not enough: ZooKeeper caps every client's session at its own configured maximum, so ZooKeeper's maximum session timeout must also be raised to 5 minutes.
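As a sketch, the two settings could look like the following. The property names are the standard ones for HBase 1.x (`zookeeper.session.timeout` in hbase-site.xml) and ZooKeeper 3.4 (`maxSessionTimeout` in zoo.cfg); verify them against the versions you run.

```xml
<!-- hbase-site.xml: session timeout the RegionServer requests from ZooKeeper -->
<property>
  <name>zookeeper.session.timeout</name>
  <value>300000</value> <!-- 5 minutes, in milliseconds -->
</property>
```

On the ZooKeeper side, add `maxSessionTimeout=300000` to zoo.cfg and restart the ensemble. Without this, ZooKeeper silently negotiates the session down to its default ceiling (20 × tickTime, i.e. 40 seconds with the default tickTime of 2000 ms), and the HBase setting has no effect.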

That concludes this look at the problem of HBase RegionServers crashing frequently; hopefully it is of some help.



