
Error: SAS MapReduce job using CDH VM 5.8

Occasional Contributor
Posts: 17

Error: SAS MapReduce job using CDH VM 5.8

Hi,

 

I am submitting a MapReduce job (the word count example) with PROC HADOOP, but it fails with an error message. Can someone suggest a fix?


proc hadoop username='cloudera' password='cloudera' verbose;
   mapreduce
      input="/user/cloudera/hamlet.txt"
      output="/user/sas/outputs"
      jar="C:\Users\ajain59\Desktop\abc\WordCount\Hadoop.jar"
      map="org.apache.hadoop.examples.WordCount$TokenizerMapper"
      reduce="org.apache.hadoop.examples.WordCount$IntSumReducer"
      combine="org.apache.hadoop.examples.WordCount$IntSumReducer"
      outputvalue="org.apache.hadoop.io.IntWritable"
      outputkey="org.apache.hadoop.io.Text";
run;
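
(For context: PROC HADOOP expects the SAS session to be able to find the cluster's configuration files and the Hadoop client JAR files, typically along the lines below; the paths shown are placeholders, not my exact ones.)

/* Typical client-side prerequisites for PROC HADOOP: the *-site.xml files */
/* copied from the cluster, and the Hadoop client JARs for that release.   */
/* Placeholder paths only.                                                 */
options set=SAS_HADOOP_CONFIG_PATH='C:\hadoop\conf';
options set=SAS_HADOOP_JAR_PATH='C:\hadoop\jars';

When the job is submitted, the SAS log shows: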

 

NOTE: Hadoop Job (HDP_JOB_ID), job_1489673305643_0008, SAS Map/Reduce Job,
http://quickstart.cloudera:8088/proxy/application_1489673305643_0008/

ERROR: Job job_1489673305643_0008 has failed. Please, see job log for details. Job tracking URL :
http://quickstart.cloudera:8088/cluster/app/application_1489673305643_0008

 

Also, I checked the Hadoop log through the tracking URL (http://quickstart.cloudera:8088/cluster/app/application_1489673305643_0008) and found the following error:

 

 

------ Error Log----

Application application_1489673305643_0008 failed 2 times due to AM Container for appattempt_1489673305643_0008_000002 exited with exitCode: 1
For more detailed output, check application tracking page:http://quickstart.cloudera:8088/proxy/application_1489673305643_0008/Then, click on links to logs of each attempt.
Diagnostics: Exception from container-launch.
Container id: container_1489673305643_0008_02_000001
Exit code: 1
Exception message: /bin/bash: line 0: fg: no job control
 
Stack trace: ExitCodeException exitCode=1: /bin/bash: line 0: fg: no job control
 
at org.apache.hadoop.util.Shell.runCommand(Shell.java:578)
at org.apache.hadoop.util.Shell.run(Shell.java:481)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:763)
at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:213)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
 
 
Container exited with a non-zero exit code 1
Failing this attempt. Failing the application.

----------------------

 

Community Manager
Posts: 486

Re: Error: SAS MapReduce job using CDH VM 5.8

Hi ajain59,

 

Thanks for your question! I suggest opening a track with SAS Technical Support for this scenario. The Technical Support contact page has information on the other ways to reach them.

 

Best,

Anna

SAS Employee
Posts: 203

Re: Error: SAS MapReduce job using CDH VM 5.8

Hi @ajain59

 

@AnnaBrown's answer here is solid.

 

What I am about to say may not be your issue, but I have encountered it numerous times.

 

When using the CDH Quickstart VM, you have to ensure that it has enough resources (I give it 16GB of RAM and 2 processors; I have a fairly robust desktop PC). Pay special attention to Cloudera's recommendations regarding resources and exceed them if you can. On to a potential solution...

 

Once you have the resources, you need patience. I find that it takes a long time for the CDH services to start, and patience is something I have little of.
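
While you wait, one quick way to check from the SAS side whether HDFS is actually answering yet is a small PROC HADOOP HDFS call, run from the same session as the MapReduce job. Treat this as a sketch only: the scratch directory name is made up, and it assumes the same 'cloudera' credentials and the same client-side Hadoop configuration that the job above already uses.

proc hadoop username='cloudera' password='cloudera' verbose;
   /* Create and immediately remove a scratch directory (name is a placeholder). */
   /* If both steps succeed, HDFS is up and the client configuration is fine;    */
   /* if the services are still starting, this fails quickly instead of leaving  */
   /* you waiting on a YARN application.                                         */
   hdfs mkdir='/user/cloudera/sas_hdfs_check';
   hdfs delete='/user/cloudera/sas_hdfs_check';
run;

If that step errors out, give the services more time (or check them in Cloudera Manager) before worrying about the MapReduce job itself.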

 

Pro Tip: Go into Cloudera Manager and stop any services you do not need. This frees up resources and lets the Quickstart VM run better in a constrained environment.
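
If you would rather script that than click through the Cloudera Manager UI, the Cloudera Manager REST API has a stop command for services, and you can call it from SAS with PROC HTTP. The sketch below is full of assumptions about a default Quickstart setup: the host and port (quickstart.cloudera:7180), the admin account (often admin/admin, though the Quickstart VM may use cloudera/cloudera), the API version, and the CLUSTER-NAME and service name (Hue is just an example of something a MapReduce job does not need). List yours first with GET /api/v12/clusters and .../services, then substitute.

/* Sketch only: stop one service via the Cloudera Manager REST API.       */
/* Replace CLUSTER-NAME with your cluster's name (URL-encode any spaces), */
/* and adjust host, credentials, API version, and service name as needed. */
filename resp temp;

proc http
   url='http://quickstart.cloudera:7180/api/v12/clusters/CLUSTER-NAME/services/hue/commands/stop'
   method='POST'
   webusername='admin'
   webpassword='admin'
   out=resp;
run;

The JSON describing the stop command ends up in the RESP fileref if you want to confirm it was accepted.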

 

Best wishes,

Jeff 
