1. A brief introduction to Flow 2.0

1.1 The emergence of Flow 2.0

Azkaban currently supports both Flow 1.0 and Flow 2.0, but the official documentation recommends Flow 2.0, since Flow 1.0 will be removed in a future release. The main design idea of Flow 2.0 is to provide flow-level definitions that are not available in 1.0. Users can merge all job and properties files into a single flow definition file, whose content is written in YAML syntax. Flow 2.0 also supports defining a flow inside another flow, called an embedded flow or subflow.
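
To illustrate, here is a minimal sketch (job names invented for illustration) of a single .flow file holding two jobs that, under Flow 1.0, would each have required a separate .job file:

nodes:
  - name: stepA
    type: command
    config:
      command: echo "step A"
  - name: stepB
    type: command
    # under Flow 1.0 this dependency would be declared in a separate stepB.job file
    dependsOn:
      - stepA
    config:
      command: echo "step B"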

1.2 Basic structure

The project zip will contain multiple flow YAML files, a project YAML file, and optionally libraries and source code. The basic structure of a flow YAML file is as follows:

* Each flow is defined in a single YAML file;
* The flow file is named after the flow, e.g. my-flow-name.flow;
* It contains all the nodes in the DAG;
* Each node can be either a job or a flow;
* Each node can have properties such as name, type, config, dependsOn, and nodes sections;
* Node dependencies are specified by listing the parent nodes in the dependsOn list;
* It contains other flow-related configuration;
* All common properties of flows in the current properties file will be migrated into the config section of each flow YAML file.
The official documentation provides a fairly complete configuration example, as follows:
config:
  user.to.proxy: azktest
  param.hadoopOutData: /tmp/wordcounthadoopout
  param.inData: /tmp/wordcountpigin
  param.outData: /tmp/wordcountpigout

# This section defines the list of jobs
# A node can be a job or a flow
# In this example, all nodes are jobs
nodes:
  # Job definition
  # The job definition is like a YAMLified version of properties file
  # with one major difference. All custom properties are now clubbed together
  # in a config section in the definition.
  # The first line describes the name of the job
  - name: AZTest
    type: noop
    # The dependsOn section contains the list of parent nodes the current
    # node depends on
    dependsOn:
      - hadoopWC1
      - NoOpTest1
      - hive2
      - java1
      - jobCommand2
  - name: pigWordCount1
    type: pig
    # The config section contains custom arguments or parameters which are
    # required by the job
    config:
      pig.script: src/main/pig/wordCountText.pig
  - name: hadoopWC1
    type: hadoopJava
    dependsOn:
      - pigWordCount1
    config:
      classpath: ./*
      force.output.overwrite: true
      input.path: ${param.inData}
      job.class: com.linkedin.wordcount.WordCount
      main.args: ${param.inData} ${param.hadoopOutData}
      output.path: ${param.hadoopOutData}
  - name: hive1
    type: hive
    config:
      hive.script: src/main/hive/showdb.q
  - name: NoOpTest1
    type: noop
  - name: hive2
    type: hive
    dependsOn:
      - hive1
    config:
      hive.script: src/main/hive/showTables.sql
  - name: java1
    type: javaprocess
    config:
      Xms: 96M
      java.class: com.linkedin.foo.HelloJavaProcessJob
  - name: jobCommand1
    type: command
    config:
      command: echo "hello world from job_command_1"
  - name: jobCommand2
    type: command
    dependsOn:
      - jobCommand1
    config:
      command: echo "hello world from job_command_2"
2. YAML syntax

To configure workflows with Flow 2.0, you first need to understand YAML. YAML is a concise non-markup language with strict formatting requirements; if the formatting is wrong, Azkaban will throw a parsing exception when the file is uploaded.

2.1 Basic rules

* It is case sensitive;
* Indentation is used to represent hierarchy;
* The indentation width is not fixed; elements aligned at the same indentation belong to the same level;
* # marks a comment;
* Strings do not require quotation marks by default, but both single and double quotes may be used; single quotes turn special characters into plain literal text, while double quotes preserve their special meaning;
* YAML provides several scalar types, including: integer, floating point, string, null, date, boolean, and time, as illustrated in the sketch after this list.
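
A small illustration of these scalar types (values invented for illustration):

int_val: 123                     # integer
float_val: 3.14                  # floating point
str_val: hello                   # string, quotes optional
null_val: ~                      # null is written as ~
bool_val: true                   # boolean
date_val: 2018-09-21             # date (ISO 8601)
time_val: 2018-09-21T10:30:00Z   # timestamp
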
2.2 Writing objects
# there must be a space between the colon and the value
key: value
2.3 Writing maps
# style 1: all key-value pairs with the same indentation belong to one map
key:
  key1: value1
  key2: value2

# style 2: inline
{key1: value1, key2: value2}
2.4 Writing arrays
# style 1: a dash followed by a space denotes an array item
- a
- b
- c

# style 2: inline
[a, b, c]
2.5 Single and double quotes

Both single and double quotes are supported. The difference is that single quotes turn special characters into plain literal text, while double quotes keep their special meaning:

s1: 'content \n string'
s2: "content \n string"

After parsing:

{ s1: 'content \\n string', s2: 'content \n string' }
2.6 Special symbols

A single YAML file can contain multiple documents, separated by ---.
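
For instance, a minimal sketch with two documents in one file (contents invented for illustration):

# first document
name: flow-one
---
# second document
name: flow-two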

2.7 Configuration references

Flow 2.0 recommends defining common parameters under config and referencing them with the ${} syntax.
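
A minimal sketch of this pattern (the parameter name param.outDir is invented for illustration):

config:
  # common parameter shared by every node in the flow
  param.outDir: /tmp/out
nodes:
  - name: jobA
    type: command
    config:
      # the shared parameter is referenced with ${}
      command: echo "writing output to ${param.outDir}"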

3. Simple task scheduling

3.1 Task configuration

Create a new flow configuration file:
nodes:
  - name: jobA
    type: command
    config:
      command: echo "Hello Azkaban Flow 2.0."
In the current version, Azkaban supports both Flow 1.0 and Flow 2.0. To run in 2.0 mode, you also need to create a project file declaring that the project uses Flow 2.0:
azkaban-flow-version: 2.0
3.2 Packaging and uploading
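
In this step, the two files above are simply bundled into a zip and uploaded through the Azkaban Web UI. A hypothetical layout of the archive (file names invented for illustration):

my-project.zip
├── flow20.project   # declares azkaban-flow-version: 2.0
└── basic.flow       # the flow definition from section 3.1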

3.3 Execution results

Since use of the Web UI was already covered in the Flow 1.0 article, it is not repeated here. Versions 1.0 and 2.0 differ only in how flows are configured; packaging, uploading, and execution work the same way. The execution result is as follows:

4. Multi-task scheduling

As in the example given for 1.0, suppose we have five tasks (jobA to jobE). Task D can only be executed after tasks A, B, and C are completed, and task E can only be executed after task D is completed. The corresponding configuration file is shown below. Note that in 1.0 we had to define five separate configuration files, whereas in 2.0 a single configuration file is enough.
nodes:
  - name: jobE
    type: command
    config:
      command: echo "This is job E"
    # jobE depends on jobD
    dependsOn:
      - jobD
  - name: jobD
    type: command
    config:
      command: echo "This is job D"
    # jobD depends on jobA, jobB, jobC
    dependsOn:
      - jobA
      - jobB
      - jobC
  - name: jobA
    type: command
    config:
      command: echo "This is job A"
  - name: jobB
    type: command
    config:
      command: echo "This is job B"
  - name: jobC
    type: command
    config:
      command: echo "This is job C"
5. Embedded flows

Flow 2.0 supports defining one flow inside another, known as an embedded flow or subflow. Below is an example of an embedded flow, whose flow configuration is as follows:
nodes:
  - name: jobC
    type: command
    config:
      command: echo "This is job C"
    dependsOn:
      - embedded_flow
  - name: embedded_flow
    type: flow
    config:
      prop: value
    nodes:
      - name: jobB
        type: command
        config:
          command: echo "This is job B"
        dependsOn:
          - jobA
      - name: jobA
        type: command
        config:
          command: echo "This is job A"
The DAG of the embedded flow is as follows:

The execution result is as follows:
