Phoenix error (3): java.io.IOException: Broken pipe

  • Solution

1. Add the environment variable in Cloudera Manager (CM):
CM → Spark → Gateway → Advanced

export HADOOP_CONF_DIR=/etc/hbase/conf:/etc/hadoop/conf:/etc/hive/conf
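A quick way to confirm that the gateway classpath now really contains the HBase configuration files is to create a configuration and print a key that only exists in hbase-site.xml, such as hbase.zookeeper.quorum. This is a minimal sketch, not from the original post (the class name is illustrative); if it prints localhost, the client is still falling back to the bundled defaults and cannot reach the real cluster.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class CheckHBaseConf {
    public static void main(String[] args) {
        // HBaseConfiguration.create() loads hbase-default.xml and then hbase-site.xml
        // from the classpath, which HADOOP_CONF_DIR now contributes to.
        Configuration conf = HBaseConfiguration.create();
        // Prints "localhost" when hbase-site.xml is NOT on the classpath,
        // i.e. the client would try to contact a non-existent local cluster.
        System.out.println("hbase.zookeeper.quorum = " + conf.get("hbase.zookeeper.quorum"));
    }
}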
2. When creating the HBase connection in the code, use HBaseConfiguration.create():

Configuration configuration = HBaseConfiguration.create();
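As a fuller illustration (a sketch under assumed names, not the original job's code; the table, row key, and column names are placeholders), the configuration created this way can be handed to ConnectionFactory so that the HBase connection is built from the cluster's hbase-site.xml instead of from defaults:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class HBaseWriteSketch {
    public static void main(String[] args) throws Exception {
        // Picks up hbase-site.xml from the classpath (see the HADOOP_CONF_DIR setting above).
        Configuration configuration = HBaseConfiguration.create();
        // try-with-resources closes the connection and table once the write finishes.
        try (Connection connection = ConnectionFactory.createConnection(configuration);
             Table table = connection.getTable(TableName.valueOf("demo_table"))) { // placeholder table name
            Put put = new Put(Bytes.toBytes("row-1"));
            put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("col"), Bytes.toBytes("value"));
            table.put(put);
        }
    }
}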

  • The error message is as follows:
Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=36, exceptions: Mon Jan 29 16:00:03 CST 2018, null, java.net.SocketTimeoutException: callTimeout=60000, callDuration=68171: row 'SYSTEM:CATALOG,,' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=redhat209.life.com,60020,1517191627572, seqNum=0
    at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.throwEnrichedException(RpcRetryingCallerWithReadReplicas.java:286)
    at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:231)
    at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:61)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:210)
    at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:327)
    at org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:302)
    at org.apache.hadoop.hbase.client.ClientScanner.initializeScannerInConstruction(ClientScanner.java:167)
    at org.apache.hadoop.hbase.client.ClientScanner.<init>(ClientScanner.java:162)
    at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:862)
    at org.apache.hadoop.hbase.MetaTableAccessor.fullScan(MetaTableAccessor.java:602)
    at org.apache.hadoop.hbase.MetaTableAccessor.tableExists(MetaTableAccessor.java:366)
    at org.apache.hadoop.hbase.client.HBaseAdmin.tableExists(HBaseAdmin.java:421)
    at org.apache.phoenix.query.ConnectionQueryServicesImpl$13.call(ConnectionQueryServicesImpl.java:2333)
    ... 21 more
Caused by: java.net.SocketTimeoutException: callTimeout=60000, callDuration=68171: row 'SYSTEM:CATALOG,,' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=redhat209.life.com,60020,1517191627572, seqNum=0
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:169)
    at org.apache.hadoop.hbase.client.ResultBoundedCompletionService$QueueingFuture.run(ResultBoundedCompletionService.java:80)
    ... 3 more
Caused by: java.io.IOException: Broken pipe
    at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
    at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47)
    at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93)
    at sun.nio.ch.IOUtil.write(IOUtil.java:65)
    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:487)
    at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:63)
    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
    at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159)
    at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117)
    at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
    at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
    at java.io.DataOutputStream.flush(DataOutputStream.java:123)
    at org.apache.hadoop.hbase.ipc.IPCUtil.write(IPCUtil.java:278)
    at org.apache.hadoop.hbase.ipc.IPCUtil.write(IPCUtil.java:266)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:921)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:874)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1243)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:227)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:336)
    at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:34094)
    at org.apache.hadoop.hbase.client.ScannerCallable.openScanner(ScannerCallable.java:400)
    at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:204)
    at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:65)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:210)
    at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:381)
    at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:355)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:136)
    ... 4 more

Driver stacktrace:
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1457)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1445)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1444)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
    at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1444)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:799)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:799)
    at scala.Option.foreach(Option.scala:236)
    at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:799)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1668)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1627)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1616)
    at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
    at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:620)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1862)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1875)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1888)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1959)
    at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1.apply(RDD.scala:920)
    at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1.apply(RDD.scala:918)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:316)
    at org.apache.spark.rdd.RDD.foreachPartition(RDD.scala:918)
    at scala.com.chinalife.StreamingToHbase$$anonfun$main$2.apply(StreamingToHbase.scala:99)
    at scala.com.chinalife.StreamingToHbase$$anonfun$main$2.apply(StreamingToHbase.scala:98)
    at org.apache.spark.streaming.dstream.DStream$$anonfun$foreachRDD$1$$anonfun$apply$mcV$sp$3.apply(DStream.scala:661)
    at org.apache.spark.streaming.dstream.DStream$$anonfun$foreachRDD$1$$anonfun$apply$mcV$sp$3.apply(DStream.scala:661)
    at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(ForEachDStream.scala:50)
    at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(ForEachDStream.scala:50)
    at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(ForEachDStream.scala:50)
    at org.apache.spark.streaming.dstream.DStream.createRDDWithLocalProperties(DStream.scala:426)
    at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply$mcV$sp(ForEachDStream.scala:49)
    at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:49)
    at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:49)
    at scala.util.Try$.apply(Try.scala:161)
    at org.apache.spark.streaming.scheduler.Job.run(Job.scala:39)
    at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply$mcV$sp(JobScheduler.scala:224)
    at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply(JobScheduler.scala:224)
    at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply(JobScheduler.scala:224)
    at scala.util.DynamicVariable.withValue(DynamicVariable.scala:57)
    at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler.run(JobScheduler.scala:223)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.sql.SQLException: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=36, exceptions: Mon Jan 29 16:00:03 CST 2018, null, java.net.SocketTimeoutException: callTimeout=60000, callDuration=68171: row 'SYSTEM:CATALOG,,' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=redhat209.life.com,60020,1517191627572, seqNum=0
