Solution: Real-Time Processing

1.1 Real-Time Processing

1.1.1 Preparing a Development Environment

1.1.1.1 Development and Operating Environment

The real-time processing scenario is developed based on the collaboration between Spark and other components. Java, Scala, and Python languages can be used to develop applications. Table 1-1 describes the development and operating environment to be prepared.

Table 1-1 Development environment

Item

Description

OS

• Development environment: Windows OS. Windows 7 or later is recommended.

• Operating environment: Linux OS

JDK installation

Basic configuration for the Java/Scala development and operating environment. The version requirements are as follows:

The server and client of a FusionInsight HD cluster support only the built-in Oracle JDK 1.8; replacing this JDK is not allowed.

Customer applications that reference the SDK JAR files and run in their own processes support both Oracle JDK and IBM JDK:

• Oracle JDK versions: 1.7 and 1.8

• Recommended IBM JDK versions: 1.7.8.10, 1.7.9.40, and 1.8.3.0

NOTE

FusionInsight servers support only TLSv1.1 and TLSv1.2 to meet security requirements. IBM JDK supports only TLSv1.0 by default. Therefore, if IBM JDK is used, set com.ibm.jsse2.overrideDefaultTLS to true so that IBM JDK supports TLSv1.0, TLSv1.1, and TLSv1.2.

For details, see https://www.ibm.com/support/knowledgecenter/zh/SSYKE2_8.0.0/com.ibm.java.security.component.80.doc/security-component/jsse2Docs/matchsslcontext_tls.html#matchsslcontext_tls.
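The property from the note above can also be set programmatically before any TLS connection is created; a minimal sketch (the property only has an effect on IBM JDK and is ignored elsewhere):

```java
public class IbmTlsOverride {
    /** Allow IBM JDK's JSSE2 provider to negotiate TLSv1.0/1.1/1.2 instead of TLSv1.0 only. */
    public static String enableIbmTls() {
        // On non-IBM JDKs this system property is simply ignored.
        System.setProperty("com.ibm.jsse2.overrideDefaultTLS", "true");
        return System.getProperty("com.ibm.jsse2.overrideDefaultTLS");
    }

    public static void main(String[] args) {
        System.out.println("overrideDefaultTLS = " + enableIbmTls());
    }
}
```

Set the property before the first TLS handshake; setting it after a connection has been established has no effect on that connection.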

Oracle JDK requires security hardening. The operations are as follows:

1.     Obtain the Oracle JCE package whose version matches the JDK version from the Oracle official website. The JCE package contains local_policy.jar and US_export_policy.jar. Copy both JAR files to the following directory:

Linux OS: JDK installation directory/jre/lib/security

Windows OS: JDK installation directory\jre\lib\security

2.     Copy SMS4JA.jar from the Client installation directory/JDK/jdk/jre/lib/ext/ directory to the following directory:

Linux OS: JDK installation directory/jre/lib/ext/

Windows OS: JDK installation directory\jre\lib\ext\
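After copying the JCE policy files, you can verify that unlimited-strength cryptography is active; a sketch using the standard javax.crypto API (AES is just a representative algorithm):

```java
import javax.crypto.Cipher;

public class JcePolicyCheck {
    /** Returns the maximum AES key length permitted by the installed JCE policy. */
    public static int maxAesKeyLength() {
        try {
            // Limited policy: 128; unlimited-strength policy files: Integer.MAX_VALUE.
            return Cipher.getMaxAllowedKeyLength("AES");
        } catch (java.security.NoSuchAlgorithmException e) {
            return -1;
        }
    }

    public static void main(String[] args) {
        int max = maxAesKeyLength();
        System.out.println("Max AES key length: " + max);
        if (max < 256) {
            System.out.println("JCE unlimited-strength policy files are NOT installed.");
        }
    }
}
```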

IDEA installation and configuration

Tool used to develop Spark applications. The required version is 13.1.7.

Scala installation

Basic configuration for the Scala development environment. The required version is 2.10.4.

Scala plug-in installation

Basic configuration for the Scala development environment. The required version is 0.35.683.

Notepad++ installation

Editor used in the Python development environment to write Python programs. You can also use other IDEs for Python programming.

Developer account preparation

For details, see Application Development Guide > Security Mode > Security Authentication > Preparing the Developer Account in the FusionInsight HD Product Documentation.

Client installation

l  Development environment: For details, see Application Development Guide > Security Mode > Security Authentication > Configuring Client Files in the FusionInsight HD Product Documentation.

l  Operating environment: Install the client by referring to Software Installation > Initial Configuration > Configuring Client > Installing a Client in the FusionInsight HD Product Documentation.

 

1.1.1.2 Preparing for Security Authentication

1.1.1.2.1 Security Authentication

1.1.1.2.1.1 Security Authentication Principle and Mechanism

Function

Kerberos is named after Cerberus, the ferocious three-headed guard dog of Hades in Greek mythology, and is now the name of a widely used security authentication protocol. Systems using Kerberos adopt the client/server structure and encryption technologies such as AES, and allow the client and server to authenticate each other. Kerberos prevents interception and replay attacks and protects data integrity. It is a system that manages keys by using a symmetric key mechanism.

Structure

Figure 1-1 shows the Kerberos architecture and Table 1-2 describes the Kerberos modules.

Figure 1-1 Kerberos architecture

20180810095317172001.png

 

Table 1-2 Kerberos modules

Module

Description

Application Client

An application client, which is usually an application that submits tasks or jobs.

Application Server

An application server, which is usually an application that an application client accesses.

Kerberos

A service that provides security authentication.

KerberosAdmin

A process that provides authentication user management.

KerberosServer

A process that provides authentication ticket distribution.

 

The process and principle are described as follows:

An application client can be a service in the cluster or a secondary development application of the customer. An application client can submit tasks or jobs to an application service.

1.         Before submitting a task or job, the application client needs to apply for a ticket granting ticket (TGT) from the Kerberos service to establish a secure session with the Kerberos server.

2.         After receiving the TGT request, the Kerberos service resolves parameters in the request to generate a TGT, and uses the key of the username specified by the client to encrypt the response.

3.         After receiving the TGT response, the application client (based on the underlying RPC) resolves the response and obtains the TGT, and then applies for a server ticket (ST) of the application server from the Kerberos service.

4.         After receiving the ST request, the Kerberos service verifies the TGT validity in the request and generates an ST of the application service, and then uses the application service key to encrypt the response.

5.         After receiving the ST response, the application client packages the ST into a request and sends the request to the application server.

6.         After receiving the request, the application server uses its local application service key to decrypt the ST. If the verification succeeds, the request is considered valid.

Basic Concepts

The following concepts help you quickly understand the Kerberos architecture and service. Security authentication for HDFS is used as an example:

TGT

A TGT is generated by the Kerberos service and used to establish a secure session between an application and the Kerberos server. A TGT is valid for 24 hours, after which it expires automatically.

The following describes how to apply for a TGT (HDFS is used as an example):

1.         You can obtain a TGT through an interface provided by HDFS.

/**
 * Login to Kerberos to get a TGT, if the cluster is in security mode.
 * @throws IOException if the login fails
 */
private void login() throws IOException {
    // Not security mode, just return.
    if (!"kerberos".equalsIgnoreCase(conf.get("hadoop.security.authentication"))) {
        return;
    }

    // Security mode.
    System.setProperty("java.security.krb5.conf", PATH_TO_KRB5_CONF);

    UserGroupInformation.setConfiguration(conf);
    UserGroupInformation.loginUserFromKeytab(PRNCIPAL_NAME, PATH_TO_KEYTAB);
}

2.         You can obtain a TGT by running shell commands of the client in kinit mode. For details, see the Shell O&M command description.
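For reference, the kinit invocation that the client shell wraps can also be assembled from Java; this is a hypothetical sketch (the keytab path and principal are placeholders), shown only to make the command shape explicit:

```java
import java.util.Arrays;
import java.util.List;

public class KinitCommand {
    /** Builds the kinit command line for keytab-based (non-interactive) TGT acquisition. */
    public static List<String> build(String keytabPath, String principal) {
        // kinit -kt <keytab> <principal> obtains a TGT without prompting for a password.
        return Arrays.asList("kinit", "-kt", keytabPath, principal);
    }

    public static void main(String[] args) {
        // Placeholder values; substitute the files downloaded from FusionInsight Manager.
        System.out.println(build("/opt/client/user.keytab", "developuser"));
    }
}
```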

ST

An ST is generated by the Kerberos service and used to establish a secure session between an application and an application service. An ST is valid only once.

In FusionInsight products, the generation of an ST is based on the Hadoop-RPC communication. The underlying RPC submits a request to the Kerberos server and the Kerberos server generates an ST.

Authentication Code Example

 
package com.huawei.bigdata.hdfs.examples;
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.security.UserGroupInformation;
public class KerberosTest {
    private static String PATH_TO_HDFS_SITE_XML = KerberosTest.class.getClassLoader().getResource("hdfs-site.xml")
            .getPath();
    private static String PATH_TO_CORE_SITE_XML = KerberosTest.class.getClassLoader().getResource("core-site.xml")
            .getPath();
    private static String PATH_TO_KEYTAB = KerberosTest.class.getClassLoader().getResource("user.keytab").getPath();
    private static String PATH_TO_KRB5_CONF = KerberosTest.class.getClassLoader().getResource("krb5.conf").getPath();
    private static String PRNCIPAL_NAME = "develop";
    private FileSystem fs;
    private Configuration conf;
    
    /**
     * initialize Configuration
     */
    private void initConf() {
        conf = new Configuration();
        
        // add configuration files
        conf.addResource(new Path(PATH_TO_HDFS_SITE_XML));
        conf.addResource(new Path(PATH_TO_CORE_SITE_XML));
    }
    
    /**
     * Login to Kerberos to get a TGT, if the cluster is in security mode.
     * @throws IOException if the login fails
     */
    private void login() throws IOException {
        // Not security mode, just return.
        if (!"kerberos".equalsIgnoreCase(conf.get("hadoop.security.authentication"))) {
            return;
        }

        // Security mode.
        System.setProperty("java.security.krb5.conf", PATH_TO_KRB5_CONF);

        UserGroupInformation.setConfiguration(conf);
        UserGroupInformation.loginUserFromKeytab(PRNCIPAL_NAME, PATH_TO_KEYTAB);
    }
    
    /**
     * initialize FileSystem, and get ST from Kerberos
     * @throws IOException
     */
    private void initFileSystem() throws IOException {
        fs = FileSystem.get(conf);
    }
    
    /**
     * An example to access the HDFS
     * @throws IOException
     */
    private void doSth() throws IOException {
        Path path = new Path("/tmp");
        FileStatus fStatus = fs.getFileStatus(path);
        System.out.println("Status of " + path + " is " + fStatus);
        //other thing
    }
    public static void main(String[] args) throws Exception {
        KerberosTest test = new KerberosTest();
        test.initConf();
        test.login();
        test.initFileSystem();
        test.doSth();       
    }
}

NOTE

1.     During Kerberos authentication, configure the parameters required for authentication, including the keytab file path, the Kerberos username, and the client's krb5.conf configuration file.

2.     The login() method invokes the Hadoop interface to perform Kerberos authentication and generate a TGT.

3.     The doSth() method invokes the Hadoop interface to access the file system. The underlying RPC automatically carries the TGT to Kerberos for verification, after which an ST is generated.

4.     The preceding code can be used to create KerberosTest.java in the HDFS secondary development sample project in security mode; run it to view the commissioning result. For details, see the HDFS Development Guide.

1.1.1.2.1.2 Preparing the Developer Account

Scenario

A developer account is used to run the sample project. During the development of different service components, different user rights must be granted. (For details about rights configuration, see the development guide of the corresponding services.)

Procedure

                               Step 1      Log in to FusionInsight Manager and choose System > Role Management > Add Role.

1.         Enter a role name, for example, developrole.

2.         Edit the role. Table 1-3 describes rights to be granted for different services.

Table 1-3 Rights list

Service

Rights to Be Granted

HDFS

In Rights, choose HDFS > File System, select Read, Write, and Execute for the hdfs://hacluster/, and click OK.

MapReduce/YARN

1.    In Rights, choose HDFS > File System > hdfs://hacluster/, select Read, Write, and Execute for the user, and click OK.

2.    Edit the role. In Rights, choose Yarn > Scheduler Queue > root, select the default option Submit, and click OK.

HBase

In Rights, choose Hbase > HBase Scope > global. Select the admin, create, read, write, and execute permissions, and click OK.

Spark/Spark2x

1.    In Rights, choose Hbase > HBase Scope > global. Select the default option create and click OK.

2.    In Rights, choose Hbase > HBase Scope > global > hbase. Select execute for hbase:meta and click OK.

3.    Edit the role. In Rights, choose HDFS > File System > hdfs://hacluster/ > user, select Execute for hive, and click OK.

4.    Edit the role. In Rights, choose HDFS > File System > hdfs://hacluster/ > user > hive, select Read, Write, and Execute for warehouse, and click OK.

5.    Edit the role. In Rights, choose Hive > Hive Read Write Privileges, select the default option Create, and click OK.

6.    Edit the role. In Rights, choose Yarn > Scheduler Queue > root, select the default option Submit, and click OK.

Hive

In Rights, choose Yarn > Scheduler Queue > root, select the default options Submit and Admin, and click OK.

NOTE

Extra operation permissions required for Hive application development must be obtained from the system administrator. For details about permission requirements, see the Required Permissions section in the Hive Development Guide.

 

Flink

1.    In the Rights table, choose HDFS > File System > hdfs://hacluster/ > flink, select Read, Write, and Execute, and click Service in the Rights table to return.

2.    In the Rights table, choose Yarn > Scheduler Queue > root, select the default option Submit, and click OK.

Solr

-

Kafka

-

Storm/CQL

-

Redis

In Rights, choose Redis > Redis Access Manage, select Read, Write, and Management, and click OK.

Oozie

1.    In Rights, choose Oozie > Common User Privileges, and click OK.

2.    Edit the role. In Rights, choose HDFS > File System, select Read, Write, and Execute for hdfs://hacluster, and click OK.

3.    Edit the role. In Rights, choose Yarn, select Cluster Admin Operations, and click OK.

Unified SQL (Fiber)

1.    Edit the role. In Rights, choose HDFS > File System > hdfs://hacluster/ > user > hive, select Execute, and click OK.

2.    Edit the role. In Rights, choose HDFS > File System > hdfs://hacluster/ > user > hive > warehouse, select Read, Write, and Execute, and click OK.

3.    Edit the role. In Rights, choose Yarn > Scheduler Queue > root, select the default option Submit, and click OK.

4.    Grant the following permissions if the Phoenix engine is to be used:

a.     In Rights, choose Hbase > HBase Scope > global, select the default options create, read, write, and execute, and click OK.

b.    Edit the role. In Rights, choose Hbase > HBase Scope > global > hbase. Select execute for hbase:meta, and click OK.

5.    Perform the following operations if the Hive and Spark engines are to be used:

a.     Perform 4 first if you need to access HBase data.

b.    Edit the role. In Rights, choose HDFS > File System > hdfs://hacluster/ > tmp > hive-scratch, select Read, Write, and Execute, and click OK.

c.     Edit the role. In Rights, choose Hive > Hive Read Write Privileges, select the default option Create, and click OK.

 

                               Step 2      Choose System > User Group Management > Add Group to create a user group for the sample project, for example, developgroup.

                               Step 3      Choose System > User Management > User > Add User to create a user for the sample project.

                               Step 4      Enter a user name, for example, developuser. Select the corresponding User Type and User Group to which the user is to be added according to Table 1-4, bind the role developrole to obtain rights, and click OK.

Table 1-4 User type and user group list

Service

User Type

User Group

HDFS

Machine-Machine

Joining the developgroup and supergroup groups.

Set the primary group to supergroup.

MR/Yarn

Machine-Machine

Joining the developgroup group.

HBase

Machine-Machine

Joining the hbase group.

Set the primary group to hbase.

Spark

Machine-Machine

Joining the developgroup group. If the user needs to interconnect with Kafka, also add the Kafka user group.

Hive

Machine-Machine/Human-Machine

Joining the hive group.

Solr

Machine-Machine

Joining the solr group.

Kafka

Machine-Machine

Joining the kafkaadmin group.

Storm/CQL

Human-Machine

Joining the storm group.

Redis

Machine-Machine

Joining the developgroup group.

Oozie

Human-Machine

Joining the hadoop, supergroup, and hive groups.

If the multi-instance function is enabled for Hive, the user must belong to a specific Hive instance group, for example, hive3.

Unified SQL (Fiber)

Machine-Machine

Joining the developgroup group.

 

                               Step 5      On the homepage of FusionInsight Manager, choose System > User Management. Select developuser from the user list and click the download button to download the authentication credentials. Save the downloaded package and decompress it to obtain the user.keytab and krb5.conf files. These files are used for security authentication during sample project development. For details, see the corresponding service development guide.

NOTE

If the user type is human-machine, change the initial password before downloading the authentication credential file. Otherwise, the error "Password has expired - change password to reset" is displayed when you use the credential file, and security authentication fails. For details about how to change the password, see "Changing an Operation User Password" in the Administrator Guide.

 

----End

1.1.1.2.1.3 Configuring Client Files

During application development, download the cluster client to the local PC.

                               Step 1      Confirm that the components required by the FusionInsight HD cluster have been installed and are running properly.

                               Step 2      Ensure that the time difference between the client and the FusionInsight HD cluster is less than 5 minutes.

Time of the FusionInsight HD cluster can be viewed in the upper-right corner on the FusionInsight Manager page.
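The 5-minute limit corresponds to the standard Kerberos clock-skew tolerance. A quick way to sanity-check skew from code, assuming you can obtain the cluster time as an Instant (here it is a hypothetical parameter; in practice read it from FusionInsight Manager):

```java
import java.time.Duration;
import java.time.Instant;

public class ClockSkewCheck {
    // Kerberos's default acceptable clock skew.
    private static final Duration MAX_SKEW = Duration.ofMinutes(5);

    /** Returns true if the two clocks differ by no more than 5 minutes. */
    public static boolean withinTolerance(Instant clusterTime, Instant localTime) {
        Duration skew = Duration.between(clusterTime, localTime).abs();
        return skew.compareTo(MAX_SKEW) <= 0;
    }

    public static void main(String[] args) {
        // Hypothetical cluster time, 90 seconds behind the local clock.
        Instant cluster = Instant.now().minusSeconds(90);
        System.out.println("within tolerance: " + withinTolerance(cluster, Instant.now()));
    }
}
```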

                               Step 3      Download the client to the local PC by following instructions in Software Installation > Initial Configuration > Configuring Client > Installing a Client and decompress the installation package. For example, decompress the package to D:\FusionInsight_Services_ClientConfig. The path cannot contain spaces.

                               Step 4      Go to the directory and double-click the install.bat file.

The project dependency packages are automatically imported to the lib directory, and configuration files are automatically imported to the configuration file directory of each service sample project.

Dependency packages and configuration files are required for running a sample project. Table 1-5 lists the path of the configuration file for each component sample project.

Table 1-5 Paths

Project

Path

CQL

src\main\resource

HBase

conf

HDFS

conf

Hive

conf

Kafka

src\main\resource

MapReduce

conf

Oozie

oozie-example\conf

Redis

src\config

Solr

conf

Storm

src\main\resource

 

                               Step 5      Configure network connections for the client.

Copy all entries from the hosts file in the decompression directory to the hosts file on the host where the client is installed. Ensure that network communication between the local PC and the hosts listed in that hosts file is normal.

NOTE

• If the host where the client is installed is not a node in the cluster, configure network connections for the client to prevent errors when you run commands on the client.

• The local hosts file in a Windows environment is stored in, for example, C:\WINDOWS\system32\drivers\etc\hosts.
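After updating the hosts file, you can verify that a cluster host name resolves correctly; a minimal sketch using the standard java.net API (the host name passed in main is a placeholder):

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

public class HostResolutionCheck {
    /** Returns true if the host name resolves via the local hosts file or DNS. */
    public static boolean resolves(String hostname) {
        try {
            InetAddress addr = InetAddress.getByName(hostname);
            System.out.println(hostname + " -> " + addr.getHostAddress());
            return true;
        } catch (UnknownHostException e) {
            System.out.println(hostname + " does not resolve: " + e.getMessage());
            return false;
        }
    }

    public static void main(String[] args) {
        // Replace with a FusionInsight node name copied into the hosts file.
        resolves("localhost");
    }
}
```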

----End

1.1.1.2.1.4 Handling an Authentication Failure

Symptom

An authentication failure occurs during the commissioning and running of an example project.

Procedure

Authentication failures can have many causes. The following troubleshooting steps are recommended for different scenarios:

                               Step 1      Check whether the network connection between the device where this application runs and the FusionInsight cluster is normal, and check whether the TCP and UDP ports required by Kerberos authentication can be accessed.

                               Step 2      Check whether each configuration file is correctly read and stored in a correct directory.

                               Step 3      Check whether the username and keytab file are obtained as instructed.

                               Step 4      Check whether the configuration information is properly set before initiating an authentication.

                               Step 5      Check whether multiple authentication requests are initiated in the same process. That is, check whether the login() method is invoked repeatedly.

                               Step 6      If the problem persists, contact Huawei engineers for further analysis.

----End

Authentication Failure Example

If "clock skew too great" is displayed, handle the problem using the following method:

                               Step 1      Check the FusionInsight cluster time.

                               Step 2      Check the time of the machine where the development environment is located. The difference between the machine time and the cluster time must be less than 5 minutes.

----End

If "(Receive time out) can not connect to kdc server" is displayed, handle the problem using the following method:

                               Step 1      Check whether the content of the krb5.conf file is correct, that is, whether it matches the service IP address configuration of KerberosServer in the cluster.

                               Step 2      Check whether the Kerberos service is running properly.

                               Step 3      Check whether the firewall is disabled.

----End
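Steps 1 to 3 above amount to checking that the KDC host and port configured in krb5.conf are reachable. A sketch using a plain TCP connect (Kerberos KDCs conventionally listen on port 88; the host name below is a placeholder):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class KdcReachability {
    /** Attempts a TCP connect to host:port within timeoutMs milliseconds. */
    public static boolean isReachable(String host, int port, int timeoutMs) {
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), timeoutMs);
            return true;
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // Replace with the kdc entry from krb5.conf; 88 is the conventional KDC port.
        System.out.println("KDC reachable: " + isReachable("kdc.example.com", 88, 3000));
    }
}
```

Note that Kerberos also uses UDP by default; a successful TCP probe confirms the host and firewall path but not the UDP port.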

1.1.1.2.2 Preparing Authentication Mechanism Code

Scenario

In a secure cluster environment, components must authenticate each other before communicating to ensure communication security. HBase application development requires both ZooKeeper and Kerberos security authentication. Contact the administrator to create and obtain the jaas.conf file used for ZooKeeper authentication and the keytab and principal files used for Kerberos authentication. For details about how to use these files, see the related description in the example code.

Security authentication uses the code authentication mode. This example project applies to the Oracle Java platform and the IBM Java platform.

The following code snippet belongs to the TestMain class of the com.huawei.bigdata.hbase.examples package.

• Code authentication

try {
    init();
    login();
} catch (IOException e) {
    LOG.error("Failed to login because ", e);
    return;
}

• Initial configuration

private static void init() throws IOException{ 
     // Default load from conf directory 
     conf = HBaseConfiguration.create(); 
     String userdir = System.getProperty("user.dir") + File.separator + "conf" + File.separator; 
     conf.addResource(new Path(userdir + "core-site.xml")); 
     conf.addResource(new Path(userdir + "hdfs-site.xml")); 
     conf.addResource(new Path(userdir + "hbase-site.xml")); 
}

• Secure login

Set userName to the actual user name based on the actual situation, for example, developuser.

private static void login() throws IOException { 
    if (User.isHBaseSecurityEnabled(conf)) { 
      String userdir = System.getProperty("user.dir") + File.separator + "conf" + File.separator; 
      userName = "developuser"; 
      userKeytabFile = userdir + "user.keytab"; 
      krb5File = userdir + "krb5.conf"; 
 
      /* 
       * If you need to connect to ZooKeeper, provide the JAAS configuration for it. 
       * You can do it as follows: 
       * System.setProperty("java.security.auth.login.config", confDirPath + "jaas.conf"); 
       * but the helper below is more convenient. Note: if this process connects 
       * to more than one ZooKeeper cluster, this approach may not be suitable; 
       * contact us for more help. 
       */ 
      LoginUtil.setJaasConf(ZOOKEEPER_DEFAULT_LOGIN_CONTEXT_NAME, userName, userKeytabFile); 
      LoginUtil.setZookeeperServerPrincipal(ZOOKEEPER_SERVER_PRINCIPAL_KEY, 
          ZOOKEEPER_DEFAULT_SERVER_PRINCIPAL); 
      LoginUtil.login(userName, userKeytabFile, krb5File, conf); 
    } 
  }

1.1.1.3 Using the Maven Repository

Procedure

                               Step 1      Download the code under solutions/offineProcessing/RerievalOperation from the Huawei DevCloud website to the local computer. Huawei DevCloud URL: https://codehub-cn-south-1.devcloud.huaweicloud.com/codehub/7076065/home

                               Step 2      After the IntelliJ IDEA and JDK are installed, configure the JDK in IntelliJ IDEA.

1.         Start IntelliJ IDEA and select Configure.

Figure 1-2 Quick Start

20180810095321009006.png

 

2.         On the Configure page, select Project Defaults.

Figure 1-3 Configure page

20180810095322553007.png

 

3.         On the Project Defaults page, select Project Structure.

Figure 1-4 Project Defaults page

20180810095323972008.png

 

4.         On the displayed Project Structure page, select SDKs and click the green plus sign to add the JDK.

Figure 1-5 Adding the JDK

20180810095324906009.png

 

5.         In the displayed Select Home Directory for JDK window, select a home directory for the JDK and click OK.

Figure 1-6 Selecting a home directory for the JDK

20180810095325960010.jpg

 

6.         After selecting the JDK, click OK to complete the configuration.

Figure 1-7 Completing the configuration

20180810095326340011.png

 

                               Step 3      (Optional) If the Scala development environment is used, install the Scala plug-in in IntelliJ IDEA.

1.         On the Configure page, select Plugins.

Figure 1-8 Selecting Plugins

20180810095326117012.png

 

2.         On the Plugins page, select Install plugin from disk.

Figure 1-9 Selecting Install plugin from disk

20180810095327878013.png

 

3.         On the Choose Plugin File page, select the Scala plug-in file of the corresponding version and click OK.

20180810095328545014.png

4.         On the Plugins page, click Apply to install the Scala plug-in.

5.         In the displayed Plugins Changed dialog box, click Restart to enable the configuration to take effect.

Figure 1-10 Plugins Changed dialog box

20180810095329523015.jpg

 

                               Step 4      Import the Java example project to the IDEA.

1.         Start the IntelliJ IDEA and select Import Project on the Quick Start page.

Alternatively, if IDEA is already open, import the project directly from the IDEA home page by choosing File > Import Project....

Figure 1-11 Selecting Import Project on the Quick Start page

20180810095330422016.png

 

2.         Select the directory to store the imported project and the pom file, and click OK.

Figure 1-12 Select File or Directory to Import page

20180810095331080017.png

 

3.         Confirm the import directory and project name, and click Next.

Figure 1-13 Import Project from Maven page

20180810095332969018.png

 

4.         Select the projects to import and click Next.

Figure 1-14 Selecting Maven projects to import

20180810095333789019.png

 

5.         Confirm the project JDK and click Next.

Figure 1-15 Selecting the project SDK

20180810095334899020.png

 

6.         Confirm the project name and project file location, and click Finish to complete the import.

Figure 1-16 Confirming the project name and file location

20180810095335930021.png

 

7.         After the import, the imported projects are displayed on the IDEA homepage.

Figure 1-17 Imported project

20180810095336307022.jpg

 

                               Step 5      (Optional) If a sample application developed in Scala is imported, configure the language for the project.

1.         On the main page of the IDEA, choose File > Project Structure... to access the Project Structure page.

2.         Choose Modules, right-click the project name, and choose Add > Scala.

Figure 1-18 Selecting the Scala language

20180810095337574023.png

 

3.         On the settings page, select the compiled dependency JAR library and click Apply to apply the setting.

Figure 1-19 Selecting the compiled dependent library

20180810095338718024.png

 

4.         Click OK to save the settings.

                               Step 6      Configure the Maven library information to automatically download the dependency JAR libraries.

1.         On the IDEA homepage, choose File > Settings....

Figure 1-20 Choosing Settings

20180810095339599025.png

 

2.         On the Settings page, select Maven. Select or enter the directory of the settings.xml file in the User settings file text box on the right. Then click Apply to enable the settings to take effect.

Figure 1-21 Configuring the Maven information on the Settings page

20180810095340846026.png
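The settings.xml referenced here typically declares the repository mirror the project resolves dependencies from. A minimal hypothetical sketch (the mirror id and URL are placeholders; substitute the repository address used at your site):

```xml
<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0">
  <mirrors>
    <mirror>
      <id>internal-mirror</id>
      <!-- Placeholder URL; point this at your site's Maven repository. -->
      <url>http://repo.example.com/maven2</url>
      <!-- Route all repository requests through this mirror. -->
      <mirrorOf>*</mirrorOf>
    </mirror>
  </mirrors>
</settings>
```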

 

                               Step 7      Configure the file encoding of IDEA to prevent garbled characters.

1.         On the IDEA homepage, choose File > Settings....

Figure 1-22 Choosing Settings

20180810095339599025.png

 

2.         On the Settings page, choose File Encodings. Select UTF-8 from the IDE Encoding drop-down list box on the right. Then click Apply to enable the settings to take effect.

20180810095341392027.png

3.         Click OK to complete the encoding settings.

----End

 


chz
Created Aug 10, 2018 01:57:50

welcomeSolution:Real-Time Processing-2719545-1
View more
  • x
  • convention:

nagu
Created Dec 31, 2021 04:05:17

Thanks for sharing
View more
  • x
  • convention:

Imnh
Created Dec 31, 2021 05:51:48

Thanks for sharing
View more
  • x
  • convention:

maithi
Created Dec 31, 2021 05:53:03

Good share
View more
  • x
  • convention:

Comment

You need to log in to comment to the post Login | Register
Comment

Notice: To protect the legitimate rights and interests of you, the community, and third parties, do not release content that may bring legal risks to all parties, including but are not limited to the following:
  • Politically sensitive content
  • Content concerning pornography, gambling, and drug abuse
  • Content that may disclose or infringe upon others ' commercial secrets, intellectual properties, including trade marks, copyrights, and patents, and personal privacy
Do not share your account and password with others. All operations performed using your account will be regarded as your own actions and all consequences arising therefrom will be borne by you. For details, see " User Agreement."

My Followers

Login and enjoy all the member benefits

Login

Block
Are you sure to block this user?
Users on your blacklist cannot comment on your post,cannot mention you, cannot send you private messages.
Reminder
Please bind your phone number to obtain invitation bonus.
Information Protection Guide
Thanks for using Huawei Enterprise Support Community! We will help you learn how we collect, use, store and share your personal information and the rights you have in accordance with Privacy Policy and User Agreement.