1.1 Case 1: Basic Oozie Functions
1.1.1 Development of Configuration Files
1.1.1.1 Description
Development Process
1. Configure the workflow configuration file workflow.xml (coordinator.xml schedules the workflow, and bundle.xml manages a set of Coordinators) and job.properties.
2. If you want to implement custom code, develop the relevant JAR files, for example, for a Java action. If you want to use Hive, develop the SQL files.
3. Upload the configuration files and JAR files (including dependent JAR files) to the HDFS. The upload path is the one specified by oozie.wf.application.path in job.properties.
4. The workflow can be implemented by using the following three methods. For details, see More Information.
− Shell command
− Java API
− Hue
5. The Oozie client provides examples for your reference, covering various actions and showing how to use Coordinator and Bundle. For example, if the installation directory of the Oozie client is /opt/client, the example directory is /opt/client/Oozie/oozie-client-4.2.0/examples/apps.
The following example shows how to configure the configuration files using a MapReduce workflow and invoke them by running shell commands.
Description
Suppose that a user needs to analyze website logs offline every day and collect statistics on the access frequency of each module of the website. The log files are stored in the HDFS.
Jobs are submitted through templates and configuration files in the client.
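The analysis described above is essentially a per-module counting job. As a minimal, language-neutral sketch of the statistic being collected (the log format and module names here are hypothetical, not part of the example project):

```python
from collections import Counter

# Hypothetical log lines in the form "<timestamp> <module> <url>".
log_lines = [
    "2018-08-01T00:01 news /news/1.html",
    "2018-08-01T00:02 sports /sports/2.html",
    "2018-08-01T00:03 news /news/3.html",
]

# Count how often each module of the website is accessed.
module_hits = Counter(line.split()[1] for line in log_lines)
print(module_hits.most_common())   # [('news', 2), ('sports', 1)]
```

In the actual example, the same counting is performed at scale by a MapReduce job, and the daily repetition is handled by the Coordinator.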
1.1.1.2 Development Procedure
Step 1 Analyze the service.
1. Analyze and process logs using MapReduce in the client example directory.
2. Move the MapReduce analysis results to the data analysis result directory, and set the data file access permission to 660.
3. To analyze data every day, perform Step 1.1 and Step 1.2 daily.
Step 2 Implement the service.
1. Use PuTTY to log in to the node where the Oozie client is located, and create the dataLoad directory, for example, /opt/FusionInsight_Client/Oozie/oozie-client-4.2.0/examples/apps/dataLoad/. This directory is used as a program running directory to store files that are edited subsequently.
NOTE:
You can directly copy the content in the map-reduce directory of the example directory to the dataLoad directory and edit the content.
2. Compile a workflow job property file job.properties.
For details, see job.properties.
3. Compile a workflow job using workflow.xml.
Table 1-1 Actions in a Workflow
| No. | Procedure | Description |
|-----|-----------|-------------|
| 1 | Define the start action. | For details, see Start Action. |
| 2 | Define the MapReduce action. | For details, see MapReduce Action. |
| 3 | Define the FS action. | For details, see FS Action. |
| 4 | Define the end action. | For details, see End Action. |
| 5 | Define the kill action. | For details, see Kill Action. |
NOTE:
Dependent or newly developed JAR files must be saved in dataLoad/lib.
The following provides an example workflow file:
<workflow-app
xmlns="uri:oozie:workflow:0.2"
name="data_load">
<start to="mr-dataLoad"/>
<action name="mr-dataLoad">
<map-reduce>
<job-tracker>${jobTracker}</job-tracker>
<name-node>${nameNode}</name-node>
<prepare>
<delete
path="${nameNode}/user/${wf:user()}/${dataLoadRoot}/output-data/map-reduce"/>
</prepare>
<configuration>
<property>
<name>mapred.job.queue.name</name>
<value>${queueName}</value>
</property>
<property>
<name>mapred.mapper.class</name>
<value>org.apache.oozie.example.SampleMapper</value>
</property>
<property>
<name>mapred.reducer.class</name>
<value>org.apache.oozie.example.SampleReducer</value>
</property>
<property>
<name>mapred.map.tasks</name>
<value>1</value>
</property>
<property>
<name>mapred.input.dir</name>
<value>/user/oozie/${dataLoadRoot}/input-data/text</value>
</property>
<property>
<name>mapred.output.dir</name>
<value>/user/${wf:user()}/${dataLoadRoot}/output-data/map-reduce</value>
</property>
</configuration>
</map-reduce>
<ok
to="copyData"/>
<error
to="fail"/>
</action>
<action name="copyData">
<fs>
<delete path='${nameNode}/user/oozie/${dataLoadRoot}/result'/>
<move
source='${nameNode}/user/${wf:user()}/${dataLoadRoot}/output-data/map-reduce'
target='${nameNode}/user/oozie/${dataLoadRoot}/result'/>
<chmod path='${nameNode}/user/oozie/${dataLoadRoot}/result'
permissions='-rwxrw-rw-' dir-files='true'></chmod>
</fs>
<ok
to="end"/>
<error
to="fail"/>
</action>
<kill name="fail">
<message>This workflow failed,
error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
</kill>
<end name="end"/>
</workflow-app>
4. Compile a Coordinator job using coordinator.xml.
The Coordinator job is used to analyze data every day. For details, see coordinator.xml.
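Before uploading, you can run a quick local sanity check on the workflow file: parse it and verify that every transition target (`to` attribute) names a node that is actually defined. This is only a sketch over a trimmed copy of the example workflow; Oozie's own `oozie validate` command remains the authoritative check.

```python
import xml.etree.ElementTree as ET

# Trimmed copy of the example workflow (action bodies shortened).
workflow_xml = """<workflow-app xmlns="uri:oozie:workflow:0.2" name="data_load">
  <start to="mr-dataLoad"/>
  <action name="mr-dataLoad">
    <map-reduce><job-tracker>x</job-tracker><name-node>y</name-node></map-reduce>
    <ok to="copyData"/><error to="fail"/>
  </action>
  <action name="copyData">
    <fs/>
    <ok to="end"/><error to="fail"/>
  </action>
  <kill name="fail"><message>failed</message></kill>
  <end name="end"/>
</workflow-app>"""

root = ET.fromstring(workflow_xml)
# Names of all nodes a transition may target.
defined = {e.get("name") for e in root if e.get("name")}
# Every 'to' attribute anywhere in the document.
targets = {e.get("to") for e in root.iter() if e.get("to")}
missing = targets - defined
print("undefined transition targets:", sorted(missing))   # []
```

If `missing` is non-empty, a typo in a node name or transition would otherwise only surface when Oozie rejects the job.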
Step 3 Upload the workflow file.
1. Use or switch to the user account that is granted with rights to upload files to the HDFS. For details about developer account preparation, see Development and Operating Environment.
2. Implement Kerberos authentication for the user account. For details, see Environment Preparation.
3. Run the HDFS upload command to upload the dataLoad folder to a specified directory on the HDFS (user developuser must have the read/write permission for the directory).
NOTE:
The specified directory must be the same as oozie.coord.application.path and workflowAppUri defined in job.properties.
Step 4 Execute the workflow file.
1. Implement Kerberos authentication for user developuser.
2. Run the following command to start the workflow:
Command:
Oozie client installation directory/bin/oozie job -oozie https://Oozie server hostname:port/oozie -config job.properties file path -run
Parameter list:
Table 1-2 Parameters
| Parameter | Description |
|-----------|-------------|
| job | Indicates that a job is to be executed. |
| -oozie | Indicates the address of the Oozie server (any instance). |
| -config | Indicates the path of the job.properties file. |
| -run | Starts the workflow. |
For example:
/opt/FusionInsight_Client/Oozie/oozie-client-4.2.0/bin/oozie job -oozie https://10-1-130-10:21003/oozie -config job.properties -run
----End
1.1.1.3 Running a Process
Scenario
Configure the installed Oozie client so that shell scripts can be used to execute Oozie-related operations.
NOTE:
You are advised to download and use the latest client.
Prerequisites
- The Oozie component and client have been installed on FusionInsight HD and are running properly.
- The human-machine user account and password for accessing the Oozie service have been created or obtained.
- The uniform resource locator (URL) of an Oozie server (any node) in the running state has been obtained, for example, https://10.1.130.10:21003/oozie.
- The IP address of the active Yarn ResourceManager has been obtained, for example, 10.1.130.11.
Procedure
Step 1 Use PuTTY to log in as user root to the node on which the Oozie client is located, and run the following command to obtain the installation environment information:
source /opt/FusionInsight_Client/bigdata_env
Step 2 Run the kinit command to perform user authentication.
NOTE:
To ensure that jobs can be executed properly, you must grant the user component permissions in advance, for example, HDFS and ZooKeeper read/write permission.
For example, for the admin account used to access the Oozie service:
kinit admin
Step 3 Run the following command to switch to the example directory:
cd /opt/FusionInsight_Client/Oozie/oozie-client-4.2.0/examples/apps/map-reduce/
NOTE:
- Access /opt/FusionInsight_Client/Oozie/oozie-client-4.2.0/examples/apps/hive/ when submitting a Hive task.
- Access /opt/FusionInsight_Client/Oozie/oozie-client-4.2.0/examples/apps/java-main/ when submitting a Java task.
- Access /opt/FusionInsight_Client/Oozie/oozie-client-4.2.0/examples/apps/shell/ when submitting a Shell task.
- Access /opt/FusionInsight_Client/Oozie/oozie-client-4.2.0/examples/apps/cron/ when submitting a Cron task.
The directory includes the following files and directory:
Table 1-3 Example files and directory on the Oozie client
| Name | Description |
|------|-------------|
| hive-site.xml | Hive task configuration file (in the hive directory). |
| job.properties | File that defines workflow attributes. |
| script.q | Hive task SQL script (in the hive directory). |
| workflow.xml | Workflow control file. |
| lib | Directory of JAR files on which workflow running depends (in the map-reduce and java-main directories). |
| coordinator.xml | Configuration file for scheduled tasks, used to configure scheduling policies. |
Step 4 Run the following command to edit the job.properties file.
vi job.properties
Modify the following parameters in the job.properties file:
Change the value of userName to the name of the human-machine user who submits the task, for example, userName=admin.
Step 5 Create a root directory for running examples.
hdfs dfs -mkdir /user/admin/
NOTE:
admin indicates the name of the human-machine user who submits the task.
Step 6 If a Hive task is submitted, perform this step; otherwise, perform Step 7.
Run the following command to upload /opt/FusionInsight_Client/Oozie/oozie-client-4.2.0/share/lib/hive to /user/oozie/share/lib/ on the active NameNode node:
hdfs dfs -put /opt/FusionInsight_Client/Oozie/oozie-client-4.2.0/share/lib/hive /user/oozie/share/lib/
Step 7 Run the following command to upload the examples:
cd /opt/FusionInsight_Client/Oozie/oozie-client-4.2.0/
hdfs dfs -put examples/ /user/admin/
hdfs dfs -put share/lib/oozie/ /user/oozie/share/lib/
Step 8 Run oozie job commands to execute the workflow file:
cd /opt/FusionInsight_Client/Oozie/oozie-client-4.2.0/examples/apps/map-reduce/
oozie job -oozie https://10-1-130-10:21003/oozie -config job.properties -run
NOTE:
- Command parameters are explained as follows:
  -oozie: Oozie URL
  -config: job configuration file
  -run: run a job
- If a job ID, for example, "job: 0000021-140222101051722-oozie-omm-W", is displayed after the workflow file is executed, the job is successfully submitted.
- You can view the execution results on the Oozie management page.
On the Oozie WebUI, query information about the submitted workflow based on the job ID.
----End
1.1.2 Example Codes
1.1.2.1 job.properties
Function
job.properties is a workflow property file that defines external parameters used for workflow execution.
Parameter Description
Table 1-4 describes parameters in job.properties.
Table 1-4 Parameters
| Parameter | Meaning |
|-----------|---------|
| nameNode | Indicates the Hadoop distributed file system (HDFS) NameNode cluster address. |
| jobTracker | Indicates the MapReduce ResourceManager address. |
| queueName | Identifies the MapReduce queue where a workflow job is executed. |
| dataLoadRoot | Identifies the folder where the workflow job resides. |
| oozie.coord.application.path | Indicates the storage path of a Coordinator job in the HDFS. |
| start | Indicates the time when a scheduled workflow job is started. |
| end | Indicates the time when a scheduled workflow job is stopped. |
| workflowAppUri | Indicates the storage path of a workflow job in the HDFS. |
NOTE:
You can define parameters in the key=value format based on service requirements.
Example Codes
nameNode=hdfs://hacluster
jobTracker=10.1.130.10:26004
queueName=QueueA
dataLoadRoot=examples
oozie.coord.application.path=${nameNode}/user/oozie_cli/${dataLoadRoot}/apps/dataLoad
start=2013-04-02T00:00Z
end=2014-04-02T00:00Z
workflowAppUri=${nameNode}/user/oozie_cli/${dataLoadRoot}/apps/dataLoad
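Values in job.properties may reference other properties with `${...}`; for example, workflowAppUri above resolves to hdfs://hacluster/user/oozie_cli/examples/apps/dataLoad. A minimal sketch of that substitution, using the example file's content (Oozie's actual parameter resolution is richer than this):

```python
import re

# The job.properties content from the example above.
props_text = """\
nameNode=hdfs://hacluster
jobTracker=10.1.130.10:26004
queueName=QueueA
dataLoadRoot=examples
oozie.coord.application.path=${nameNode}/user/oozie_cli/${dataLoadRoot}/apps/dataLoad
start=2013-04-02T00:00Z
end=2014-04-02T00:00Z
workflowAppUri=${nameNode}/user/oozie_cli/${dataLoadRoot}/apps/dataLoad
"""

# Parse key=value pairs.
props = dict(line.split("=", 1) for line in props_text.splitlines() if "=" in line)

def resolve(value, props, max_depth=10):
    """Expand ${name} references against other properties; unknown names are left as-is."""
    pattern = re.compile(r"\$\{([^}]+)\}")
    for _ in range(max_depth):
        expanded = pattern.sub(lambda m: props.get(m.group(1), m.group(0)), value)
        if expanded == value:
            return expanded
        value = expanded
    return value

print(resolve(props["workflowAppUri"], props))
# hdfs://hacluster/user/oozie_cli/examples/apps/dataLoad
```

This is why the HDFS directory you upload the dataLoad folder to must match the resolved oozie.coord.application.path and workflowAppUri values.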
1.1.2.2 workflow.xml
Function
workflow.xml describes a complete service workflow. A workflow consists of a start node, an end node, and multiple action nodes.
Parameter Description
Table 1-5 describes parameters in workflow.xml.
Table 1-5 Parameters
| Parameter | Meaning |
|-----------|---------|
| name | Identifies a workflow file. |
| start | Indicates the workflow start node. |
| end | Indicates the workflow end node. |
| action | Indicates nodes (one or multiple) that are used to implement a service. |
Example Codes
<workflow-app xmlns="uri:oozie:workflow:0.2" name="data_load">
<start to="copyData"/>
<action name="copyData">
</action>
...
<end name="end"/>
</workflow-app>
1.1.2.3 Start Action
Function
The Start Action node indicates the start point of a workflow job. Each workflow job has only one Start Action node.
Parameter Description
Table 1-6 describes the parameter used on the Start Action node.
Table 1-6 Parameters
| Parameter | Meaning |
|-----------|---------|
| to | Identifies a subsequent action node. |
Example Codes
<start to="mr-dataLoad"/>
1.1.2.4 End Action
Function
The End Action node indicates the end point of a workflow job. Each workflow job has only one End Action node.
Parameter Description
Table 1-7 describes the parameter used on the End Action node.
Table 1-7 Parameters
| Parameter | Meaning |
|-----------|---------|
| name | Identifies an end action. |
Example Codes
<end name="end"/>
1.1.2.5 Kill Action
Function
The Kill Action node indicates the end point of a workflow job when an error occurs.
Parameter Description
Table 1-8 describes parameters used on the Kill Action node.
Table 1-8 Parameters
| Parameter | Meaning |
|-----------|---------|
| name | Identifies a kill action. |
| message | Provides error messages. |
| ${wf:errorMessage(wf:lastErrorNode())} | Indicates the internal error message function in the Oozie system. |
Example Codes
<kill name="fail">
<message>
This workflow failed, error message[${wf:errorMessage(wf:lastErrorNode())}]
</message>
</kill>
1.1.2.6 FS Action
Function
The FS Action node is a Hadoop distributed file system (HDFS) operation node. You can create and delete HDFS files and folders and grant permissions for HDFS files and folders using this node.
Parameter Description
Table 1-9 describes parameters used on the FS Action node.
Table 1-9 Parameters
| Parameter | Meaning |
|-----------|---------|
| name | Identifies an FS action. |
| delete | Deletes a specified file or folder. |
| move | Moves a file from the source directory to the target directory. |
| chmod | Modifies file or folder access rights. |
| path | Indicates the current file path. |
| source | Indicates the source file path. |
| target | Indicates the target file path. |
| permissions | Indicates permissions. |
NOTE:
${variable name} indicates the value defined in job.properties.
For example, ${nameNode} indicates hdfs://hacluster. (See job.properties.)
Example Codes
<action name="copyData">
<fs>
<delete path='${nameNode}/user/oozie_cli/${dataLoadRoot}/result'/>
<move source='${nameNode}/user/${wf:user()}/${dataLoadRoot}/output-data/map-reduce' target='${nameNode}/user/oozie_cli/${dataLoadRoot}/result'/>
<chmod path='${nameNode}/user/oozie_cli/${dataLoadRoot}/result' permissions='-rwxrw-rw-' dir-files='true'></chmod>
</fs>
<ok to="end"/>
<error to="fail"/>
</action>
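The `permissions` value in the chmod element above is the symbolic form of an octal mode; '-rwxrw-rw-' corresponds to 766. A small helper to make that mapping explicit (the function name is mine for illustration, not an Oozie API):

```python
def symbolic_to_octal(sym):
    """Convert an rwx permission string (with or without the leading
    file-type character) to its octal digits, e.g. '-rwxrw-rw-' -> '766'."""
    bits = sym[1:] if len(sym) == 10 else sym
    digits = []
    for i in range(0, 9, 3):
        triple = bits[i:i + 3]
        # r=4, w=2, x=1; '-' contributes 0.
        digits.append(str(sum(v for c, v in zip(triple, (4, 2, 1)) if c != "-")))
    return "".join(digits)

print(symbolic_to_octal("-rwxrw-rw-"))   # '766'
print(symbolic_to_octal("-rw-rw----"))   # '660'
```

The second call shows the symbolic form of the 660 permission mentioned in the development procedure.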
1.1.2.7 MapReduce Action
Function
The MapReduce Action node is used to execute a map-reduce job.
Parameter Description
Table 1-10 describes parameters used on the MapReduce Action node.
Table 1-10 Parameters
| Parameter | Meaning |
|-----------|---------|
| name | Identifies a map-reduce action. |
| job-tracker | Indicates the MapReduce ResourceManager address. |
| name-node | Indicates the Hadoop distributed file system (HDFS) NameNode address. |
| queueName | Identifies the MapReduce queue where a job is executed. |
| mapred.mapper.class | Identifies the Mapper class. |
| mapred.reducer.class | Identifies the Reducer class. |
| mapred.input.dir | Indicates the input directory of MapReduce processed data. |
| mapred.output.dir | Indicates the output directory of MapReduce processing results. |
| mapred.map.tasks | Indicates the number of map tasks. |
NOTE:
${variable name} indicates the value defined in job.properties.
For example, ${nameNode} indicates hdfs://hacluster. (See job.properties.)
Example Codes
<action name="mr-dataLoad">
<map-reduce>
<job-tracker>${jobTracker}</job-tracker>
<name-node>${nameNode}</name-node>
<prepare>
<delete path="${nameNode}/user/${wf:user()}/${dataLoadRoot}/output-data/map-reduce"/>
</prepare>
<configuration>
<property>
<name>mapred.job.queue.name</name>
<value>${queueName}</value>
</property>
<property>
<name>mapred.mapper.class</name>
<value>org.apache.oozie.example.SampleMapper</value>
</property>
<property>
<name>mapred.reducer.class</name>
<value>org.apache.oozie.example.SampleReducer</value>
</property>
<property>
<name>mapred.map.tasks</name>
<value>1</value>
</property>
<property>
<name>mapred.input.dir</name>
<value>/user/oozie/${dataLoadRoot}/input-data/text</value>
</property>
<property>
<name>mapred.output.dir</name>
<value>/user/${wf:user()}/${dataLoadRoot}/output-data/map-reduce</value>
</property>
</configuration>
</map-reduce>
<ok to="copyData"/>
<error to="fail"/>
</action>
1.1.2.8 coordinator.xml
Function
coordinator.xml is used to periodically execute a workflow job.
Parameter Description
Table 1-11 describes parameters in coordinator.xml.
Table 1-11 Parameters
| Parameter | Meaning |
|-----------|---------|
| frequency | Indicates the workflow execution interval. |
| start | Indicates the time when a scheduled workflow job is started. |
| end | Indicates the time when a scheduled workflow job is stopped. |
| workflowAppUri | Indicates the storage path of a workflow job in the HDFS. |
| jobTracker | Indicates the MapReduce ResourceManager address. |
| queueName | Identifies the MapReduce queue where a job is executed. |
| nameNode | Indicates the Hadoop distributed file system (HDFS) NameNode address. |
NOTE:
${variable name} indicates the value defined in job.properties.
For example, ${nameNode} indicates hdfs://hacluster. (See job.properties.)
Example Codes
<coordinator-app name="cron-coord" frequency="${coord:days(1)}" start="${start}" end="${end}" timezone="UTC" xmlns="uri:oozie:coordinator:0.2">
<action>
<workflow>
<app-path>${workflowAppUri}</app-path>
<configuration>
<property>
<name>jobTracker</name>
<value>${jobTracker}</value>
</property>
<property>
<name>nameNode</name>
<value>${nameNode}</value>
</property>
<property>
<name>queueName</name>
<value>${queueName}</value>
</property>
</configuration>
</workflow>
</action>
</coordinator-app>
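With frequency="${coord:days(1)}" and the start/end values from job.properties, the coordinator materializes one workflow run per day. A simplified sketch of the nominal action times, assuming plain UTC day arithmetic (real Oozie applies the timezone attribute and daylight-saving rules):

```python
from datetime import datetime, timedelta

# start/end from the example job.properties; frequency ${coord:days(1)} = one day.
start = datetime.strptime("2013-04-02T00:00Z", "%Y-%m-%dT%H:%M%z")
end = datetime.strptime("2014-04-02T00:00Z", "%Y-%m-%dT%H:%M%z")

def materialization_times(start, end, step, limit=3):
    """First `limit` nominal action times the coordinator would create."""
    times, t = [], start
    while t < end and len(times) < limit:
        times.append(t)
        t += step
    return times

for t in materialization_times(start, end, timedelta(days=1)):
    print(t.strftime("%Y-%m-%dT%H:%MZ"))
# 2013-04-02T00:00Z
# 2013-04-03T00:00Z
# 2013-04-04T00:00Z
```

Each materialized action submits the workflow at workflowAppUri with the jobTracker, nameNode, and queueName properties shown in the configuration block above.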
1.1.3 Development of Java
1.1.3.1 Description
These typical scenarios help you quickly understand the development procedure of Oozie and learn key API functions.
This example shows you how to submit a MapReduce job and query the job status by using the Java API. The example code covers the MapReduce job only, but the API invocation code for other job types is the same; only the job-specific configuration in job.properties and workflow.xml differs.
1.1.3.2 Sample Code
Function
Oozie submits a job using the run method of org.apache.oozie.client.OozieClient and obtains job information using getJobInfo.
Sample Code
Change OOZIE_URL_DEFALUT in the example code to the actual URL of any Oozie node, for example, https://10-1-131-131:21003/oozie/.
public void test() throws Exception
{
try
{
System.out.println("cluset status is " + isSecury);
if (isSecury)
{
UserGroupInformation.getLoginUser().doAs(new PrivilegedExceptionAction<Void>()
{
public Void run() throws Exception
{
runMapReduceJob();
return null;
}
});
}
else
{
runMapReduceJob();
}
}
catch (Exception e)
{
e.printStackTrace();
}
}
private void runMapReduceJob() throws OozieClientException, InterruptedException
{
String mrJobFilePath = userConfDir + JOB_PROPERTIES_FILE;
Properties conf = getJobProperties(mrJobFilePath);
// submit and start the workflow job
String jobId = wc.run(conf);
System.out.println("Workflow job submitted: " + jobId);
// wait until the workflow job finishes, printing the status every 10 seconds
while (wc.getJobInfo(jobId).getStatus() == WorkflowJob.Status.RUNNING)
{
System.out.println("Workflow job running ..." + jobId);
Thread.sleep(10 * 1000);
}
// print the final status of the workflow job
System.out.println("Workflow job completed ..."+jobId);
System.out.println(wc.getJobInfo(jobId));
}
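The Java sample above follows a submit-then-poll pattern: query the job status every 10 seconds until it leaves RUNNING. The same pattern, sketched generically in Python with a stubbed status source standing in for wc.getJobInfo(jobId).getStatus():

```python
import itertools
import time

def wait_for_completion(get_status, job_id, poll_seconds=10, sleep=time.sleep):
    """Poll get_status(job_id) until the job is no longer 'RUNNING'."""
    while get_status(job_id) == "RUNNING":
        print("Workflow job running ...", job_id)
        sleep(poll_seconds)
    return get_status(job_id)

# Demonstration with a stubbed status source: RUNNING twice, then SUCCEEDED.
statuses = itertools.chain(["RUNNING", "RUNNING"], itertools.repeat("SUCCEEDED"))
final = wait_for_completion(lambda _job: next(statuses), "0000071-...-oozie-omm-W",
                            sleep=lambda _s: None)
print("final status:", final)   # final status: SUCCEEDED
```

Injecting the status and sleep functions keeps the loop testable; the Java sample hard-codes both, which is fine for a commissioning run.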
Precautions
Implement security authentication when you use the Java API to access Oozie. For details, see section "Preparing for the Development Environment". Upload the dependent configuration files (for details about how to develop the workflow.xml configuration file, see workflow.xml) and the JAR files to the HDFS, and ensure that the security-authenticated user has the rights to access the relevant directories on the HDFS (that is, the user owns the directories or belongs to the same user group as their owner).
1.1.4 Obtaining Example Codes
1.1.5 Commissioning the Application
1.1.5.1 Commissioning an Application in the Windows Environment
Compiling and Running an Application
Scenario
After the code development is complete, you can run the application in the Windows development environment.
Procedure
In the development environment (such as Eclipse), right-click OozieMain.java, and choose Run as > Java Application to run the application project.
Checking the Commissioning Result
Scenario
The results can be viewed on the console after the Oozie example project is completed.
Procedure
The following information is displayed if the example project is successful:
log4j:WARN No appenders could be found for logger (com.huawei.hadoop.security.LoginUtil).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/D:/temp/newClientSec/oozie-example/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/D:/temp/newClientSec/oozie-example/lib/slf4j-simple-1.7.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
current user is developuser@HADOOP.COM (auth:KERBEROS)
login user is developuser@HADOOP.COM (auth:KERBEROS)
cluset status is true
Warning: Could not get charToByteConverterClass!
Workflow job submitted: 0000071-160729120057089-oozie-omm-W
Workflow job running ...0000071-160729120057089-oozie-omm-W
Workflow job running ...0000071-160729120057089-oozie-omm-W
Workflow job running ...0000071-160729120057089-oozie-omm-W
Workflow job running ...0000071-160729120057089-oozie-omm-W
Workflow job running ...0000071-160729120057089-oozie-omm-W
Workflow job running ...0000071-160729120057089-oozie-omm-W
Workflow job running ...0000071-160729120057089-oozie-omm-W
Workflow job running ...0000071-160729120057089-oozie-omm-W
Workflow job running ...0000071-160729120057089-oozie-omm-W
Workflow job completed ...0000071-160729120057089-oozie-omm-W
Workflow id[0000071-160729120057089-oozie-omm-W] status[SUCCEEDED]
-----------finish Oozie -------------------
The directory /user/developuser/examples/output-data/map-reduce is generated on the HDFS. The directory contains the following two files:
- _SUCCESS
- part-00000
You can view the files by using the Hue file browser or by running the following command on the HDFS:
hdfs dfs -ls /user/developuser/examples/output-data/map-reduce
NOTE:
In the Windows environment, the following exception may occur but does not affect services.
java.io.IOException: Could not locate executable null\bin\winutils.exe in the Hadoop binaries.
1.1.5.2 Commissioning an Application in the Linux Environment
Compiling and Running an Application
Scenario
In a Linux environment where no Oozie client is installed, you can upload the JAR file to the Linux host and run the application after the code development is complete.
Prerequisites
If the Linux host is not a node in the cluster, set the mapping between the host name and the IP address in the hosts file on the node. The host name and IP address must be in one-to-one mapping.
Procedure
Step 1 Export a JAR file.
1. Right-click the example project and choose Export from the shortcut menu.
2. Select JAR file and click Next.
3. Select the src and conf directories, and export the JAR file to the specified location. Click Next twice.
4. Click Browse, select Main class, and click OK.
5. Click Finish.
Step 2 Prepare the required JAR files and configuration files.
1. In the Linux environment, create a directory, for example, /opt/test, and create subdirectories conf and lib. Upload the JAR files in the example project and the JAR files exported in Step 1 to the lib directory in Linux. Upload the conf configuration files in the example project to the conf directory in Linux.
2. In /opt/test, create the run.sh script, modify the following content, and save the file:
#!/bin/sh
BASEDIR=`cd $(dirname $0);pwd`
cd ${BASEDIR}
for file in ${BASEDIR}/lib/*.jar
do
i_cp=$i_cp:$file
echo "$file"
done
for file in ${BASEDIR}/conf/*
do
i_cp=$i_cp:$file
done
java -cp .${i_cp} com.huawei.hadoop.hbase.example.TestMain
Step 3 Go to /opt/test and run the following command to execute the JAR file:
sh run.sh
----End
Checking the Commissioning Result
Scenario
The results can be viewed by checking the execution status after the Oozie example project is completed.
Procedure
The following information is displayed if the example project is successful:
log4j:WARN No appenders could be found for logger (com.huawei.hadoop.security.LoginUtil).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/D:/temp/newClientSec/oozie-example/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/D:/temp/newClientSec/oozie-example/lib/slf4j-simple-1.7.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
current user is developuser@HADOOP.COM (auth:KERBEROS)
login user is developuser@HADOOP.COM (auth:KERBEROS)
cluset status is true
Warning: Could not get charToByteConverterClass!
Workflow job submitted: 0000071-160729120057089-oozie-omm-W
Workflow job running ...0000071-160729120057089-oozie-omm-W
Workflow job running ...0000071-160729120057089-oozie-omm-W
Workflow job running ...0000071-160729120057089-oozie-omm-W
Workflow job running ...0000071-160729120057089-oozie-omm-W
Workflow job running ...0000071-160729120057089-oozie-omm-W
Workflow job running ...0000071-160729120057089-oozie-omm-W
Workflow job running ...0000071-160729120057089-oozie-omm-W
Workflow job running ...0000071-160729120057089-oozie-omm-W
Workflow job running ...0000071-160729120057089-oozie-omm-W
Workflow job completed ...0000071-160729120057089-oozie-omm-W
Workflow id[0000071-160729120057089-oozie-omm-W] status[SUCCEEDED]
-----------finish Oozie -------------------
The directory /user/developuser/examples/output-data/map-reduce is generated on the HDFS.
The following files are generated:
- _SUCCESS
- part-00000
You can view the files by using the Hue file browser or by running the following command on the HDFS:
hdfs dfs -ls /user/developuser/examples/output-data/map-reduce