Spark Streaming error: Failed to send RPC 6254780973500208805 to /10.11.10.10:48838: java.nio.channels.ClosedChannelException

21/04/09 06:33:44 ERROR client.TransportClient: Failed to send RPC 6254780973500208805 to /10.11.10.10:48838: java.nio.channels.ClosedChannelException
java.nio.channels.ClosedChannelException
	at io.netty.channel.AbstractChannel$AbstractUnsafe.write(...)(Unknown Source)
21/04/09 06:33:44 INFO storage.BlockManagerMasterEndpoint: Removing block manager BlockManagerId(1, vf-uat-pdc-04, 36905, None)
21/04/09 06:33:44 INFO storage.BlockManagerMaster: Removed 1 successfully in removeExecutor
21/04/09 06:33:44 WARN cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: Attempted to get executor loss reason for executor id 1 at RPC address 10.11.10.7:55572, but got no response. Marking as slave lost.

Cause:
1. The Spark application was given too little memory, so YARN killed its containers.
2. The cluster was restarted…
Solution:
1. Add the configuration below to yarn-site.xml and then restart Hadoop (Option 1).
2. Request more memory when submitting the job (Option 2).

# Option 1: disable YARN's memory checks in yarn-site.xml
	<!-- Do not kill containers that exceed their physical memory allocation -->
	<property>
		<name>yarn.nodemanager.pmem-check-enabled</name>
		<value>false</value>
	</property>
	<!-- Do not kill containers that exceed their virtual memory allocation -->
	<property>
		<name>yarn.nodemanager.vmem-check-enabled</name>
		<value>false</value>
	</property>
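
After updating yarn-site.xml on every NodeManager host, YARN must be restarted for the new settings to take effect. A minimal sketch, assuming HADOOP_HOME points at a standard Hadoop installation with the usual sbin scripts (adjust for your distribution):

# Restart YARN so the NodeManagers pick up the changed yarn-site.xml
$HADOOP_HOME/sbin/stop-yarn.sh
$HADOOP_HOME/sbin/start-yarn.sh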
# Option 2: increase driver and executor memory when submitting the job
--driver-memory 2g \
--executor-memory 2g \
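
For context, a full spark-submit invocation with these flags might look like the sketch below; the master, deploy mode, main class, and jar name are placeholders, not taken from the original job:

# Placeholder values: replace the class and jar with your own
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --driver-memory 2g \
  --executor-memory 2g \
  --class com.example.StreamingJob \
  my-streaming-job.jar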