How to filter the execution of your project on a single Job or specific Jobs - 6.5

Talend Software Development Life Cycle Best Practices Guide

author
Talend Documentation Team
EnrichVersion
6.5
EnrichProdName
Talend Big Data
Talend Data Fabric
Talend Data Integration
Talend Data Management Platform
Talend Data Services Platform
Talend ESB
Talend MDM Platform
task
Administration and Monitoring
Deployment
Design and Development
EnrichPlatform
Talend Administration Center
Talend Artifact Repository
Talend CommandLine
Talend JobServer
Talend Repository Manager
Talend Studio
You can filter the execution of your project items to build only the Job(s) you want. To do so, declare the filter in the Maven parameters entered when configuring the build project that generates your project sources on the Continuous Integration server.
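
For instance, a single Maven property is enough to restrict the build to one Standard Job. In the sketch below, job_MyJob is a hypothetical Job name used purely for illustration; the full filter syntax is detailed in the procedure that follows:

-DitemFilter=(type=process)and(label=job_MyJob)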

Before you begin

You have created the build project to generate your sources (GenerateSources) on your Continuous Integration server, Jenkins for example. See Creating the Jenkins projects.

Procedure

  1. Open the configuration page of the GenerateSources build project.
  2. In the MAVEN_OPTS field of the Advanced part of the Build area (which holds the plugin execution information as well as the parameters needed to generate the sources), define the itemFilter parameter value according to your needs:
    • Note: Do not put any space between the filters and the and and or operators; see the complete MAVEN_OPTS example at the end of this list.

    • filter on Job types:

      -DitemFilter=(type=process) to filter on all Standard Jobs of the project

      -DitemFilter=(type=process_mr) to filter on all Big Data Map/Reduce and Spark Batch Jobs of the project

      -DitemFilter=(type=process_storm) to filter on all Big Data Storm and Spark Streaming Jobs of the project

      -DitemFilter=(type=route) to filter on all Routes of the project

    • filter on Job labels:

      -DitemFilter=(type=process_mr)and(label=job_ProcessWeatherData) to filter on one specific Big Data Map/Reduce Job named job_ProcessWeatherData

      -DitemFilter=(type=process)and(label%job_dev*) to filter on Jobs whose names start with job_dev

    • filter on Job paths:

      -DitemFilter=(type=process)and(path=Integration) to filter on Jobs located in a subfolder called Integration

      -DitemFilter=(type=process)and(path%Integration*) to filter on Jobs located in subfolders with a name starting with Integration

    • filter on the person who created the Job:

      -DitemFilter=(type=process_storm)and(author=rbunch@talend.com) to filter on Big Data Storm and Spark Streaming Jobs whose author ID is rbunch@talend.com

    • exclusion filter:

      -DitemFilter=(!path=sandbox)and(type=process)and(label%job_Export*)or(label%job_Monitor*) to filter on Jobs with a name starting with job_Export or job_Monitor, but that are not in the sandbox folder.

      -DitemFilter=(!path%MainProcess/Import*)and(type=process)and(label%job_Export*) to filter on Jobs with a name starting with job_Export, but that are not in the subfolders with a name starting with Import under the MainProcess folder.

    Example of a filter applied to execute Big Data Spark Batch Jobs located in subfolders with a name starting with Export under the MainProcess folder, with the exception of the Job named job_batch_feature22:

    -DitemFilter=(type=process_mr)and(!label=job_batch_feature22)and(path%MainProcess/Export*)
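
    For reference, the complete MAVEN_OPTS value simply appends the filter to any JVM options already set for the build; the -Xmx1024m memory setting below is only an illustrative assumption and is not required by the filter. As stated in the note above, the filter expression must contain no spaces around the and and or operators:

    -Xmx1024m -DitemFilter=(type=process_mr)and(!label=job_batch_feature22)and(path%MainProcess/Export*)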
  3. Save your changes to the build project configuration.

Results

When you execute your project on your Continuous Integration server, the filter is applied and only the Jobs you have filtered are generated and executed.