
Spark: Case 1: Spark Core Development Example


1.1.1 Case 1: Spark Core Development Example

1.1.1.1 Scenario

Applicable Versions

FusionInsight HD V100R002C70, FusionInsight HD V100R002C80

Scenario

Develop a Spark application that processes logs of the time netizens spend shopping online on a weekend.

l   Collect statistics on female netizens who spend more than 2 hours shopping online on the weekend.

l   The first column of the log file records the name, the second column records the gender, and the third column records the dwell duration in minutes. The three columns are separated by commas (,).

log1.txt: logs collected on Saturday

LiuYang,female,20  
YuanJing,male,10  
GuoYijun,male,5  
CaiXuyu,female,50  
Liyuan,male,20  
FangBo,female,50  
LiuYang,female,20  
YuanJing,male,10  
GuoYijun,male,50  
CaiXuyu,female,50  
FangBo,female,60

log2.txt: logs collected on Sunday

LiuYang,female,20  
YuanJing,male,10  
CaiXuyu,female,50  
FangBo,female,50  
GuoYijun,male,5  
CaiXuyu,female,50  
Liyuan,male,20  
CaiXuyu,female,50  
FangBo,female,50  
LiuYang,female,20  
YuanJing,male,10  
FangBo,female,50  
GuoYijun,male,50  
CaiXuyu,female,50  
FangBo,female,60

Data Planning

Save the original log files in the HDFS.

1.         Create two text files input_data1.txt and input_data2.txt on the local host, copy the content in log1.txt to input_data1.txt, and copy the content in log2.txt to input_data2.txt.

2.         Create the /tmp/input folder in the HDFS, and run the following commands to upload input_data1.txt and input_data2.txt to the /tmp/input directory:

a.         On the HDFS client, run the following commands for authentication:

cd /opt/hadoopclient

kinit <service user for authentication>

b.         On the HDFS client of the Linux OS, run the hadoop fs -mkdir /tmp/input command (the hdfs dfs command has the same function) to create a directory.

c.         On the HDFS client of the Linux OS, run the hadoop fs -put input_data1.txt /tmp/input and hadoop fs -put input_data2.txt /tmp/input commands to upload the data files.
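Before running the application, you can optionally verify the upload on the HDFS client with the standard listing and viewing commands:

hadoop fs -ls /tmp/input

hadoop fs -cat /tmp/input/input_data1.txt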

1.1.1.2 Development Guidelines

Collect statistics on female netizens who spend more than 2 hours shopping online on the weekend.

To achieve the objective, the process is as follows:

l   Read the source file data.

l   Filter the records of time spent online by female netizens.

l   Summarize the total time that each female netizen spends online.

l   Filter the information of female netizens who spend over 2 hours online.

1.1.1.3 Sample Code Description

1.1.1.3.1 Java Code Example

Function

Collect statistics on female netizens who spend more than 2 hours shopping online on the weekend.

Sample Code

The following code snippets are used as an example. For complete code, see the com.huawei.bigdata.spark.examples.FemaleInfoCollection class.

// Create a configuration class SparkConf and create a SparkContext.

 SparkConf conf = new SparkConf().setAppName("CollectFemaleInfo");

    JavaSparkContext jsc = new JavaSparkContext(conf);

 // Read the original file data and convert each record into an element in the RDD.
    JavaRDD<String> data = jsc.textFile(args[0]);

 // Split each record into columns and generate a Tuple3.

    JavaRDD<Tuple3<String,String,Integer>> person = data.map(new Function<String,Tuple3<String,String,Integer>>()

    {

        private static final long serialVersionUID = -2381522520231963249L;

        public Tuple3<String, String, Integer> call(String s) throws Exception

        {

// Separate a line of data by comma (,).

           String[] tokens = s.split(",");

 // Combine the three elements to form a triplet Tuple.

           Tuple3<String, String, Integer> person = new Tuple3<String, String, Integer>(tokens[0], tokens[1], Integer.parseInt(tokens[2]));

            return person;

        }

    });

// Use the filter function to filter the Internet access time data of female netizens.

    JavaRDD<Tuple3<String,String,Integer>> female = person.filter(new Function<Tuple3<String,String,Integer>, Boolean>()

    {

        private static final long serialVersionUID = -4210609503909770492L;

        public Boolean call(Tuple3<String, String, Integer> person) throws Exception

        {

// Filter female records based on the gender in the second column.

           Boolean isFemale = person._2().equals("female");

            return isFemale;

        }

    });

    // Summarize the total Internet access time of each female.

 JavaPairRDD<String, Integer> females = female.mapToPair(new PairFunction<Tuple3<String, String, Integer>, String, Integer>()

    {

        private static final long serialVersionUID = 8313245377656164868L;

        public Tuple2<String, Integer> call(Tuple3<String, String, Integer> female) throws Exception

        {

// Extract the name and dwell duration columns so that the durations can be summed by name.

           Tuple2<String, Integer> femaleAndTime = new  Tuple2<String, Integer>(female._1(), female._3());

            return femaleAndTime;

        }

    });

      JavaPairRDD<String, Integer> femaleTime = females.reduceByKey(new Function2<Integer, Integer, Integer>()

    {

        private static final long serialVersionUID = -3271456048413349559L;

        public Integer call(Integer integer, Integer integer2) throws Exception

        {

// Sum up the two stay durations of the same female.

            return (integer + integer2);

        }

    });

// Filter the records of female netizens whose total online time exceeds two hours.

 JavaPairRDD<String, Integer> rightFemales = femaleTime.filter(new Function<Tuple2<String, Integer>, Boolean>()

    {

        private static final long serialVersionUID = -3178168214712105171L;

        public Boolean call(Tuple2<String, Integer> s) throws Exception

      {

// Obtain the total stay time of female netizens and check whether the duration is longer than 2 hours.

           if(s._2() > (2 * 60))

            {

                return true;

            }

            return false;

        }

    });

 // Print and display the female information that meets the requirements.

 for(Tuple2<String, Integer> d: rightFemales.collect())

    {

        System.out.println(d._1() + "," + d._2());

    }

1.1.1.3.2 Scala Code Example

Function

Collect statistics on female netizens who spend more than 2 hours shopping online on the weekend.

Sample Code

The following code snippets are used as an example. For complete code, see the com.huawei.bigdata.spark.examples.FemaleInfoCollection class.


// Set the Spark application name.

val conf = new SparkConf().setAppName("CollectFemaleInfo")

// Submit Spark jobs.

val sc = new SparkContext(conf)

// Read data. The input parameter args(0) specifies the data path.

val text = sc.textFile(args(0))

// Filter female netizens' Internet access time data.

val data = text.filter(_.contains("female"))

// Summarize the Internet access time of each female.

val femaleData:RDD[(String,Int)] = data.map{line =>

    val t= line.split(',')

    (t(0),t(2).toInt)

}.reduceByKey(_ + _)

// Filter the records of female netizens whose online time exceeds two hours and generate the result.

val result = femaleData.filter(line => line._2 > 120)

result.collect().map(x => x._1 + ',' + x._2).foreach(println)

sc.stop()

1.1.1.3.3 Python Code Example

Function

Collect statistics on female netizens who spend more than 2 hours shopping online on the weekend.

Sample Code

The following code snippets are used as an example. For complete code, see collectFemaleInfo.py.

import sys

from pyspark import SparkContext

def contains(str, substr):

  if substr in str:

    return True

  return False

if __name__ == "__main__":

  if len(sys.argv) < 2:

    print "Usage: CollectFemaleInfo <file>"

    exit(-1)

# Create SparkContext and set AppName.

  sc = SparkContext(appName = "CollectFemaleInfo")

  """

The following steps are performed:

1. Read the data. The input parameter argv[1] specifies the data path - textFile

2. Filter the online time records of female netizens - filter

3. Summarize the online time of each female netizen - map/map/reduceByKey

4. Filter the records of female netizens whose total online time exceeds two hours - filter

  """

  inputPath = sys.argv[1]

  result = sc.textFile(name = inputPath, use_unicode = False) \

    .filter(lambda line: contains(line, "female")) \

    .map(lambda line: line.split(',')) \

    .map(lambda dataArr: (dataArr[0], int(dataArr[2]))) \

    .reduceByKey(lambda v1, v2: v1 + v2) \

    .filter(lambda tupleVal: tupleVal[1] > 120) \

    .collect()

  for (k, v) in result:

    print k + "," + str(v)

# Stop SparkContext.

  sc.stop()

1.1.1.4 Obtaining Sample Code

Using the FusionInsight Client

Obtain the sample projects from the sampleCode directory under the Spark directory of the FusionInsight_Services_ClientConfig package extracted from the client.

Security mode: SparkJavaExample and SparkScalaExample in the spark-examples-security directory

Non-security mode: SparkJavaExample and SparkScalaExample in the spark-examples-normal directory

Using the Maven Project

Log in to Huawei DevCloud (https://codehub-cn-south-1.devcloud.huaweicloud.com/codehub/7076065/home) and download the code to your local PC.

Security mode:

components/spark/spark-examples-security/SparkJavaExample

components/spark/spark-examples-security/SparkScalaExample

Non-security mode:

components/spark/spark-examples-normal/SparkJavaExample

components/spark/spark-examples-normal/SparkScalaExample

1.1.1.5 Application Commissioning

1.1.1.5.1 Compiling and Running the Application

Scenario

After the program code is developed, you can upload the code to the Linux client for running. The running procedures of applications developed in Scala or Java are the same.

note

l  Spark applications can run only in the Linux environment, not in the Windows environment.

l  A Spark application developed in Python does not need to be built into a JAR through Artifacts. You only need to copy the sample project to the Spark running environment.

l  Ensure that the Python version installed on the workers is the same as that on the driver; otherwise, the following error is reported: "Python in worker has different version %s than that in driver %s."
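One way to keep the versions consistent (an illustrative sketch, not part of the original guide, assuming the same interpreter path exists on the driver and on every worker node) is to export the PYSPARK_PYTHON environment variable on the client before submitting the application:

export PYSPARK_PYTHON=/usr/bin/python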

Procedure

                               Step 1      In the IntelliJ IDEA, configure the Artifacts information about the project before the jar is created.

1.         On the main page of the IDEA, choose File > Project Structure... to open the Project Structure page.

2.         On the Project Structure page, select Artifacts, click + and choose Jar > From modules with dependencies....

Figure 1-1 Adding the Artifacts


 

3.         Select the corresponding module. The module corresponding to the Java sample projects is CollectFemaleInfo. Click OK.

Figure 1-2 Create Jar from Modules


 

4.         Configure the name, type, and output directory of the JAR as required.

Figure 1-3 Configuring the basic information


 

5.         Right-click CollectFemaleInfo, choose Put into Output Root, and click Apply.

Figure 1-4 Put into Output Root


 

6.         Click OK to complete the configuration.

                               Step 2      Create the jar.

1.         On the main page of the IDEA, choose Build > Build Artifacts....

Figure 1-5 Build Artifacts


 

2.         On the displayed menu, choose CollectFemaleInfo > Build to create a jar.

Figure 1-6 Build


 

3.         If the following information is displayed in the event log, the jar is created successfully. You can obtain the jar from the directory configured in Step 1.4.

21:25:43 Compilation completed successfully in 36 sec

                               Step 3      Copy the JAR created in Step 2 to the Spark running environment (Spark client), for example, /opt/hadoopclient/Spark, and run the Spark application.

 

Notice

When a Spark task is running, it is prohibited to restart the HDFS service or restart all DataNode instances. Otherwise, the Spark task may fail, resulting in JobHistory data loss.

l   Run the sample projects of Spark Core (including Scala and Java).

Access the Spark client directory and use the bin/spark-submit script to run the code.

<inputPath> indicates the input directory in the HDFS.

bin/spark-submit --class com.huawei.bigdata.spark.examples.FemaleInfoCollection --master yarn-client /opt/female/FemaleInfoCollection.jar <inputPath>
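For example, with the log files uploaded to /tmp/input in Data Planning and the JAR at /opt/female/FemaleInfoCollection.jar as in the command above, the call becomes the following sketch. With the sample data, only CaiXuyu (300 minutes) and FangBo (320 minutes) exceed 2 hours, so the job should print CaiXuyu,300 and FangBo,320.

bin/spark-submit --class com.huawei.bigdata.spark.examples.FemaleInfoCollection --master yarn-client /opt/female/FemaleInfoCollection.jar /tmp/input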

l   Run the sample projects of Spark SQL (Java and Scala).

Access the Spark client directory and use the bin/spark-submit script to run the code.

<inputPath> indicates the input directory in the HDFS.

bin/spark-submit --class com.huawei.bigdata.spark.examples.FemaleInfoCollection --master yarn-client /opt/female/FemaleInfoCollection.jar <inputPath>

l   Run the sample projects of Spark Streaming (Java and Scala).

Access the Spark client directory and use the bin/spark-submit script to run the code.

note

The location of Spark Streaming Kafka dependency package on the client is different from the location of other dependency packages. For example, the path to the Spark Streaming Kafka dependency package is $SPARK_HOME/lib/streamingClient, whereas the path to other dependency packages is $SPARK_HOME/lib. When running an application, you must add the configuration option to the spark-submit command to specify the path of Spark Streaming Kafka dependency package. The following is an example path:

--jars $SPARK_HOME/lib/streamingClient/kafka-clients-0.8.2.1.jar,$SPARK_HOME/lib/streamingClient/kafka_2.10-0.8.2.1.jar,$SPARK_HOME/lib/streamingClient/spark-streaming-kafka_2.10-1.5.1.jar

Example code for the Spark Streaming Write To Print sample is as follows:

bin/spark-submit --master yarn-client --jars $SPARK_HOME/lib/streamingClient/kafka-clients-0.8.2.1.jar,$SPARK_HOME/lib/streamingClient/kafka_2.10-0.8.2.1.jar,$SPARK_HOME/lib/streamingClient/spark-streaming-kafka_2.10-1.5.1.jar --class com.huawei.bigdata.spark.examples.FemaleInfoCollectionPrint /opt/female/FemaleInfoCollectionPrint.jar <checkPointDir> <batchTime> <topics> <brokers>

Example code for the Spark Streaming Write To Kafka sample is as follows:

bin/spark-submit --master yarn-client --jars $SPARK_HOME/lib/streamingClient/kafka-clients-0.8.2.1.jar,$SPARK_HOME/lib/streamingClient/kafka_2.10-0.8.2.1.jar,$SPARK_HOME/lib/streamingClient/spark-streaming-kafka_2.10-1.5.1.jar --class com.huawei.bigdata.spark.examples.FemaleInfoCollectionKafka /opt/female/FemaleInfoCollectionKafka.jar <checkPointDir> <batchTime> <windowTime> <topics> <brokers>

l   Run the sample projects of Accessing the Spark SQL Through JDBC (Java and Scala).

Access the Spark client directory and use the java -cp command to run the code.

java -cp $SPARK_HOME/lib/*:$SPARK_HOME/conf:/opt/female/ThriftServerQueriesTest.jar com.huawei.bigdata.spark.examples.ThriftServerQueriesTest $SPARK_HOME/conf/hive-site.xml $SPARK_HOME/conf/spark-defaults.conf

l   Run the Spark on HBase sample application (Java and Scala).

a.         Verify that the configuration options in the Spark client configuration file spark-defaults.conf are correctly configured.

When running the Spark on HBase sample application, set the configuration option spark.hbase.obtainToken.enabled in the Spark client configuration file spark-defaults.conf to true (the default value is false; changing it to true does not affect existing services, but if you want to uninstall the HBase service, change it back to false first), and set the configuration option spark.inputFormat.cache.enabled to false. A sample spark-defaults.conf snippet is shown after the table below.

Table 1-1 Parameters

Parameter: spark.hbase.obtainToken.enabled

Description: Indicates whether to enable the function of obtaining the HBase token.

Default Value: false

Parameter: spark.inputFormat.cache.enabled

Description: Indicates whether to cache the InputFormat that maps to HadoopRDD. If this parameter is set to true, tasks of the same executor use the same InputFormat object. In this case, the InputFormat must be thread-safe. If caching the InputFormat is not required, set this parameter to false.

Default Value: true
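For reference, the corresponding entries in the spark-defaults.conf file would look like the following minimal sketch (property names and values are taken from Table 1-1 and the instructions in step a; spark-defaults.conf uses the standard whitespace-separated key-value format):

spark.hbase.obtainToken.enabled  true

spark.inputFormat.cache.enabled  false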

 

b.         Access the Spark client directory and use the bin/spark-submit script to run the code.

Run sample applications in the sequence: TableCreation > TableInputData > TableOutputData.

When running the TableInputData sample application, specify <inputPath>. <inputPath> indicates the input path in the HDFS.

bin/spark-submit --class com.huawei.bigdata.spark.examples.TableInputData --master yarn-client /opt/female/TableInputData.jar <inputPath>

l   Run the Spark HBase to HBase sample application (Scala and Java).

Access the Spark client directory and use the bin/spark-submit script to run the code.

bin/spark-submit --class com.huawei.bigdata.spark.examples.SparkHbasetoHbase --master yarn-client /opt/female/FemaleInfoCollection.jar

l   Run the Spark Hive to HBase sample application (Scala and Java).

Access the Spark client directory and use the bin/spark-submit script to run the code.

bin/spark-submit --class com.huawei.bigdata.spark.examples.SparkHivetoHbase --master yarn-client /opt/female/FemaleInfoCollection.jar

l   Run the Spark Streaming Kafka to HBase sample application (Scala and Java).

Access the Spark client directory and use the bin/spark-submit script to run the code.

When running the sample application, specify <checkPointDir>, <topic>, and <brokerList>. <checkPointDir> indicates the directory where the application result is backed up, <topic> indicates the topic read from Kafka, and <brokerList> indicates the IP address of the Kafka server.

note

On the client, the directory of Spark Streaming Kafka dependency package is different from the directory of other dependency packages. For example, the directory of another dependency package is $SPARK_HOME/lib and the directory of a Spark Streaming Kafka dependency package is $SPARK_HOME/lib/streamingClient. Therefore, when running the application, add the configuration option in the spark-submit command to specify the directory for the Spark Streaming Kafka dependency package, for example, --jars $SPARK_HOME/lib/streamingClient/kafka-clients-0.8.2.1.jar,$SPARK_HOME/lib/streamingClient/kafka_2.10-0.8.2.1.jar,$SPARK_HOME/lib/streamingClient/spark-streaming-kafka_2.10-1.5.1.jar.

Example code of Spark Streaming To HBase

bin/spark-submit --master yarn-client --jars $SPARK_HOME/lib/streamingClient/kafka-clients-0.8.2.1.jar,$SPARK_HOME/lib/streamingClient/kafka_2.10-0.8.2.1.jar,$SPARK_HOME/lib/streamingClient/spark-streaming-kafka_2.10-1.5.1.jar --class com.huawei.bigdata.spark.examples.streaming.SparkOnStreamingToHbase /opt/female/FemaleInfoCollectionPrint.jar <checkPointDir> <topic> <brokerList>

l   Submit the application developed in Python.

Access the Spark client directory and use the bin/spark-submit script to run the code.

<inputPath> indicates the input directory in the HDFS.

note

Because the sample code does not contain authentication information, specify the authentication information by configuring spark.yarn.keytab and spark.yarn.principal when running the application.

bin/spark-submit --master yarn-client --conf spark.yarn.keytab=/opt/FIclient/user.keytab --conf spark.yarn.principal=sparkuser /opt/female/SparkPythonExample/collectFemaleInfo.py <inputPath>

----End

References

The runtime dependency packages for the sample projects of Accessing the Spark SQL Through JDBC (Java and Scala) are as follows:

l   The sample projects of Accessing the Spark SQL Through JDBC (Scala):

           avro-1.7.7.jar

           commons-collections-3.2.2.jar

           commons-configuration-1.6.jar

           commons-io-2.4.jar

           commons-lang-2.6.jar

           commons-logging-1.1.3.jar

           guava-12.0.1.jar

           hadoop-auth-2.7.2.jar

           hadoop-common-2.7.2.jar

           hadoop-mapreduce-client-core-2.7.2.jar

           hive-exec-1.2.1.spark.jar

           hive-jdbc-1.2.1.spark.jar

           hive-metastore-1.2.1.spark.jar

           hive-service-1.2.1.spark.jar

           httpclient-4.5.2.jar

           httpcore-4.4.4.jar

           libthrift-0.9.3.jar

           log4j-1.2.17.jar

           slf4j-api-1.7.10.jar

           zookeeper-3.5.1.jar

           scala-library-2.10.4.jar

l   The sample projects of Accessing the Spark SQL Through JDBC (Java):

           commons-collections-3.2.2.jar

           commons-configuration-1.6.jar

           commons-io-2.4.jar

           commons-lang-2.6.jar

           commons-logging-1.1.3.jar

           guava-12.0.1.jar

           hadoop-auth-2.7.2.jar

           hadoop-common-2.7.2.jar

           hadoop-mapreduce-client-core-2.7.2.jar

           hive-exec-1.2.1.spark.jar

           hive-jdbc-1.2.1.spark.jar

           hive-metastore-1.2.1.spark.jar

           hive-service-1.2.1.spark.jar

           httpclient-4.5.2.jar

           httpcore-4.4.4.jar

           libthrift-0.9.3.jar

           log4j-1.2.17.jar

           slf4j-api-1.7.10.jar

           zookeeper-3.5.1.jar

1.1.1.5.2 Checking the Commissioning Result

Scenario

After a Spark application is run, you can check the running result through one of the following methods:

l   Viewing the command output.

l   Logging in to the Spark WebUI.

l   Viewing Spark logs.

Procedure

l   Check the operating result data of the Spark application.

The data storage directory and format are specified by users in the Spark application. You can obtain the data in the specified file.
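Note that the sample applications in this case print their result with collect() and println, so in yarn-client mode the output appears directly in the terminal where spark-submit was run. If your application instead writes its result to an HDFS directory, you can view it from the client, for example (an illustrative command; <output path> is whatever directory your application writes to):

hadoop fs -cat <output path>/part-*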

l   Check the status of the Spark application.

Spark provides the following two web UIs:

           The Spark UI displays the status of applications that are being executed.

The Spark UI contains the Spark Jobs, Spark Stages, Storage, Environment, and Executors parts. In addition, a Streaming tab is displayed for Spark Streaming applications.

To access the Spark UI, open the YARN web UI, find the corresponding Spark application, and click ApplicationMaster in the last column of the application information.

           The History Server UI displays the status of all Spark applications.

The History Server UI displays information such as the application ID, application name, start time, end time, execution time, and user to whom the application belongs. After the application ID is clicked, the Spark UI of the application is displayed.

l   View Spark logs to learn application running conditions.

Spark logs offer immediate visibility into how an application is running, and you can adjust the application based on the logs. For details about the logs, see the Spark information in the Log Description section of the Administrator Guide.
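If YARN log aggregation is enabled, you can also pull all container logs of a finished application from the command line (an illustrative command, not from the original guide; <application ID> is the ID shown on the YARN web UI or the History Server UI):

yarn logs -applicationId <application ID>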

 

