Posted by a user, 2022-04-22 05:11
2 answers
Answer from 懂视网, 2022-05-06 12:05
A Hadoop cluster job failed today. The error log is as follows:
2013-10-26 08:00:03,229 ERROR server.TThreadPoolServer (TThreadPoolServer.java:run(182)) - Error occurred during processing of message.
    at org.apache.hadoop.hive.service.HiveServer$ThriftHiveProcessorFactory.getProcessor(HiveServer.java:553)
    at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:169)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
    at java.lang.Thread.run(Thread.java:662)
    at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:277)
    at org.apache.hadoop.hive.service.HiveServer$HiveServerHandler.init(HiveServer.java:136)
    at org.apache.hadoop.hive.service.HiveServer$ThriftHiveProcessorFactory.getProcessor(HiveServer.java:550)
    ... 4 more
    at org.apache.hadoop.hive.ql.metadata.HiveUtils.getAuthorizeProviderManager(HiveUtils.java:199)
    at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:272)
    ... 6 more
Caused by: java.lang.RuntimeException: java.io.FileNotFoundException: /home/hadoop/hadoop-0.20.205.0/conf/mapred-site.xml (Too many open files)
    at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:1231)
    at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:1093)
    at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:1037)
    at org.apache.hadoop.conf.Configuration.set(Configuration.java:438)
    at org.apache.hadoop.hive.conf.HiveConf.setVar(HiveConf.java:762)
    at org.apache.hadoop.hive.conf.HiveConf.setVar(HiveConf.java:770)
    at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:169)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
    at java.lang.Thread.run(Thread.java:662)
Caused by: java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.RuntimeException: java.io.FileNotFoundException: /home/hadoop/hadoop-0.20.205.0/conf/core-site.xml (Too many open files)
    at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:277)
    at org.apache.hadoop.hive.service.HiveServer$HiveServerHandler.init(HiveServer.java:136)
    at org.apache.hadoop.hive.service.HiveServer$ThriftHiveProcessorFactory.getProcessor(HiveServer.java:550)
    ... 4 more
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.RuntimeException: java.io.FileNotFoundException: /home/hadoop/hadoop-0.20.205.0/conf/core-site.xml (Too many open files)
    at org.apache.hadoop.hive.ql.metadata.HiveUtils.getAuthorizeProviderManager(HiveUtils.java:199)
    at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:272)
    ... 6 more
The fix was to raise the open-file limit:
ulimit -HSn 32768
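The root cause in the trace is `java.io.FileNotFoundException: ... (Too many open files)`: the HiveServer process exhausted its file-descriptor limit, so even opening a config file failed. `ulimit -HSn 32768` raises both the hard (`-H`) and soft (`-S`) `nofile` limits, but only for the current shell and its children. A minimal sketch for inspecting the limits and the descriptor count of a process (Linux; the user name in the comment is an assumption, not from the original post):

```shell
# Show the current shell's descriptor limits: the soft limit is the one
# actually enforced; the hard limit caps how far the soft one can be raised.
soft=$(ulimit -Sn)
hard=$(ulimit -Hn)
echo "soft=$soft hard=$hard"

# Count the descriptors this shell currently holds open by listing
# /proc/<pid>/fd (substitute the HiveServer PID for $$ in practice).
nfd=$(ls /proc/$$/fd | wc -l)
echo "open fds: $nfd"

# ulimit changes die with the shell. On systems using pam_limits, a
# persistent limit goes in /etc/security/limits.conf (assumption: the
# service runs as user "hadoop"):
#   hadoop  soft  nofile  32768
#   hadoop  hard  nofile  32768
```

Note that the `nofile` limit is per-process and inherited at fork/exec time, so the HiveServer must be restarted from a shell (or service manager) that already carries the higher limit.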
Original article: "hadoop too many files 异常处理"; thanks to the original author for sharing.
Answer from a helpful user (热心网友), 2022-05-06 09:13
Too many levels of symbolic links:
[hadoop@hddcluster2 script]$ ls /etc/init.d/hadoop.sh
ls: cannot access /etc/init.d/hadoop.sh: Too many levels of symbolic links
[hadoop@hddcluster2 script]$ ls /home/hadoop/script/hadoop.sh /etc/init.d/hadoop.sh
ls: cannot access /etc/init.d/hadoop.sh: Too many levels of symbolic links
/home/hadoop/script/hadoop.sh
Fix: remove the link with sudo, then recreate it using the full path. When running ln, always write the target as a complete absolute path. Example:
[hadoop@hddcluster2 script]$ sudo rm /etc/init.d/hadoop.sh
[hadoop@hddcluster2 script]$ sudo ln -s /home/hadoop/script/hadoop.sh /etc/init.d/hadoop.sh
[hadoop@hddcluster2 script]$ /etc/init.d/hadoop.sh st
/etc/init.d/hadoop.sh {start|stop|restart|status}
[hadoop@hddcluster2 script]$ /etc/init.d/hadoop.sh status
11283 ResourceManager
12323 Jps
10836 DataNode
10694 NameNode
11033 SecondaryNameNode
11610 NodeManager
11756 JobHistoryServer
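The loop above comes from creating the symlink with a relative target: a link at /etc/init.d/hadoop.sh whose target is the bare name hadoop.sh resolves inside /etc/init.d, i.e. back to itself, and every lookup fails with ELOOP. A small self-contained sketch of the failure and the fix, run in a temporary directory whose layout mimics the paths above:

```shell
set -u
tmp=$(mktemp -d)
mkdir -p "$tmp/script" "$tmp/init.d"
echo 'echo ok' > "$tmp/script/hadoop.sh"

# Wrong: a relative target is resolved inside the link's own directory,
# so init.d/hadoop.sh -> hadoop.sh points back at itself (ELOOP).
ln -s hadoop.sh "$tmp/init.d/hadoop.sh"
loop_msg=$(cat "$tmp/init.d/hadoop.sh" 2>/dev/null || echo "loop detected")
echo "$loop_msg"

# Right: recreate the link with the target's absolute path, as in the fix.
rm "$tmp/init.d/hadoop.sh"
ln -s "$tmp/script/hadoop.sh" "$tmp/init.d/hadoop.sh"
resolved=$(readlink -f "$tmp/init.d/hadoop.sh")  # prints the real file's path
content=$(cat "$tmp/init.d/hadoop.sh")
echo "$content"
rm -rf "$tmp"
```

`readlink -f` is a quick way to diagnose this class of problem: on a healthy link it prints the fully resolved path, while on a self-referential link it fails.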