Hadoop: Unable to allocate more than 8 GB of memory to my nodes


My Hadoop setup is allocating only 8 GB of memory to each node, even though the machines have 126 GB of RAM and 32 CPUs.

I added the following properties to yarn-site.xml to allocate 24 GB of memory to each node:

<property>
    <name>yarn.scheduler.minimum-allocation-mb</name>
    <value>1024</value>
</property>
<property>
    <name>yarn.scheduler.maximum-allocation-mb</name>
    <value>2048</value>
</property>
<property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>24576</value>
</property>
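For context, my understanding (which may be wrong, and is part of why I'm asking) is that yarn.nodemanager.resource.memory-mb sets the total memory a node offers to YARN, while yarn.scheduler.maximum-allocation-mb caps any single container, so with the values above no one container can ever be granted more than 2 GB even though the node advertises 24 GB. If that reading is right, a consistent version would look something like the sketch below; the 24576 cap is just my guess at what was intended, not a verified fix:

<!-- Total memory this node offers to YARN containers -->
<property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>24576</value>
</property>
<!-- Largest single container the scheduler will grant;
     raised here to match the per-node budget (my assumption) -->
<property>
    <name>yarn.scheduler.maximum-allocation-mb</name>
    <value>24576</value>
</property>
<!-- Smallest container granted; requests are rounded up to a multiple of this -->
<property>
    <name>yarn.scheduler.minimum-allocation-mb</name>
    <value>1024</value>
</property>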

I added the following to mapred-site.xml:

<property>
    <name>mapreduce.tasktracker.map.tasks.maximum</name>
    <value>32</value>
</property>
<property>
    <name>mapreduce.tasktracker.reduce.tasks.maximum</name>
    <value>16</value>
</property>
<property>
    <name>mapreduce.job.reduce.slowstart.completedmaps</name>
    <value>0.95</value>
</property>
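One thing I'm unsure about: the mapreduce.tasktracker.* properties are, as far as I can tell, MRv1 (TaskTracker) settings, so a YARN cluster may ignore them entirely; under YARN, task concurrency falls out of container sizing instead. I also expected slowstart at 0.95 to hold reducers back until 95% of the maps finish, so the fact that reducers start early makes me wonder whether this file is being picked up at all. If per-task memory needs to be set explicitly, I believe it would look roughly like this (the values are placeholders I have not tested):

<!-- Memory the scheduler reserves for each map/reduce container (example values) -->
<property>
    <name>mapreduce.map.memory.mb</name>
    <value>2048</value>
</property>
<property>
    <name>mapreduce.reduce.memory.mb</name>
    <value>4096</value>
</property>
<!-- JVM heap should stay below the container size; ~80% is a common rule of thumb -->
<property>
    <name>mapreduce.map.java.opts</name>
    <value>-Xmx1638m</value>
</property>
<property>
    <name>mapreduce.reduce.java.opts</name>
    <value>-Xmx3276m</value>
</property>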

The nodes now each show 24 GB of allocated memory, but when I run a job, a lot of reduce tasks start before the mappers are done, and tasks keep failing. Many of them fail with this exception:

15/04/09 19:24:45 INFO mapreduce.Job: Task Id : attempt_1428621766709_0001_m_000031_2, Status : FAILED
Exception from container-launch: org.apache.hadoop.util.Shell$ExitCodeException:
org.apache.hadoop.util.Shell$ExitCodeException:
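The stack trace doesn't say much on its own; if more detail would help, I can pull the full container logs with yarn logs -applicationId application_1428621766709_0001 (the application ID taken from the attempt ID above) and post whatever they show.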

The same process used to run fine earlier with the default configuration of 8 GB per node. It took a couple of hours to complete, but it worked. Now I can't get the process to run at all.

Any pointers here would be helpful.

