
Online or onsite, instructor-led live Data Integration training courses demonstrate through interactive hands-on practice the fundamentals and advanced concepts of Data Integration.
Data Integration training is available as "online live training" or "onsite live training". Online live training (aka "remote live training") is carried out by way of an interactive, remote desktop. Onsite live Data Integration training can be carried out locally on customer premises in Qatar or in NobleProg corporate training centers in Qatar.
NobleProg -- Your Local Training Provider
Testimonials
It's a hands-on session.
Vorraluck Sarechuer - Total Access Communication Public Company Limited (dtac)
Course: Talend Open Studio for ESB
Very useful because it helps me understand what we can do with the data in our context.
Nicolas NEMORIN - Adecco Groupe France
Course: KNIME Analytics Platform for BI
The capabilities of the tool.
Gerardo Avila - Reckitt Benckiser
Course: KNIME Analytics Platform for BI
Learning a new tool
MARIA ELENA DOMINGUEZ ESCUDERO - Reckitt Benckiser
Course: KNIME Analytics Platform for BI
Data Integration Subcategories in Qatar
Data Integration Course Outlines in Qatar
By the end of this training, participants will be able to:
- Install and configure Oracle GoldenGate.
- Understand Oracle databases replication using the Oracle GoldenGate tool.
- Understand the Oracle GoldenGate architecture.
- Configure and perform a database replication and migration.
- Optimize Oracle GoldenGate performance and troubleshoot issues.
In this instructor-led, live training, participants will learn how to use Pentaho Data Integration's powerful ETL capabilities and rich GUI to manage an entire big data lifecycle and maximize the value of data within their organization.
By the end of this training, participants will be able to:
- Create, preview, and run basic data transformations containing steps and hops
- Configure and secure the Pentaho Enterprise Repository
- Harness disparate sources of data and generate a single, unified version of the truth in an analytics-ready format
- Provide results to third-party applications for further processing
Audience
- Data analysts
- ETL developers
Format of the course
- Part lecture, part discussion, exercises and heavy hands-on practice
In this instructor-led, live training, participants will learn how to maximize the features of Pentaho Open Source BI Suite Community Edition (CE).
By the end of this training, participants will be able to:
- Install and configure Pentaho Open Source BI Suite Community Edition (CE)
- Understand the fundamentals of Pentaho CE tools and their features
- Build reports using Pentaho CE
- Integrate third party data into Pentaho CE
- Work with big data and analytics in Pentaho CE
Audience
- Programmers
- BI Developers
Format of the course
- Part lecture, part discussion, exercises and heavy hands-on practice
Note
- To request a customized training for this course, please contact us to arrange.
This course for KNIME Analytics Platform is an ideal opportunity for beginners, advanced users and KNIME experts to be introduced to KNIME, to learn how to use it more effectively, and to create clear, comprehensive reports based on KNIME workflows.
Since 2006, KNIME has been used in pharmaceutical research; it is also used in other areas such as CRM customer data analysis, business intelligence and financial data analysis.
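As a taste of how a KNIME workflow can be extended beyond its built-in nodes, the sketch below shows what the body of a Python Script node might look like in a recent KNIME Analytics Platform release; it is an illustration rather than course material, and the input data and the "revenue" column are hypothetical.

# Minimal sketch for the body of a KNIME "Python Script" node.
# Assumes one table is connected to the node's first input port;
# the "revenue" column is a hypothetical example.
import knime.scripting.io as knio

df = knio.input_tables[0].to_pandas()               # node input -> pandas DataFrame
df["revenue_eur"] = df["revenue"] * 0.92            # hypothetical derived column
knio.output_tables[0] = knio.Table.from_pandas(df)  # DataFrame -> node output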
Objectives
After successfully completing this course, students should be able to:
- Mass ingest data to Hive and HDFS
- Perform incremental loads in Mass Ingestion
- Perform initial and incremental loads
- Integrate with relational databases using SQOOP
- Perform transformations across various engines
- Execute a mapping using JDBC in Spark mode (see the sketch after this list)
- Perform stateful computing and windowing
- Process complex files
- Parse hierarchical data on Spark engine
- Run profiles and choose sampling options on Spark engine
- Execute Dynamic Mappings
- Create Audits on Mappings
- Monitor logs using REST Operations Hub
- Monitor logs using Log Aggregation and troubleshoot
- Run mappings in Databricks environment
- Create mappings to access Delta Lake tables
- Tune the performance of Spark and Databricks jobs
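To make the JDBC-in-Spark-mode item above concrete, here is a minimal PySpark sketch of the same pattern: reading a relational table over JDBC and writing the transformed result out. It is an illustration only, not Informatica's API; the connection URL, table, columns, and credentials are placeholders, and the matching JDBC driver must be on Spark's classpath.

# Minimal PySpark sketch of a JDBC read followed by a simple transformation.
# Not Informatica's API; URL, table, columns, and credentials are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("jdbc-mapping-sketch").getOrCreate()

orders = (spark.read.format("jdbc")
          .option("url", "jdbc:postgresql://db-host:5432/sales")  # placeholder
          .option("dbtable", "public.orders")                     # placeholder
          .option("user", "etl_user")                             # placeholder
          .option("password", "***")
          .load())

# Filter and project, then persist as Parquet for downstream engines.
(orders.filter(orders.amount > 0)
       .select("order_id", "amount")
       .write.mode("overwrite")
       .parquet("/tmp/orders_clean"))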
Sensor Fusion implementations require algorithms to filter and integrate different data sources.
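For example, the simplest such algorithm is an inverse-variance weighted average, which fuses two noisy readings of the same quantity by trusting the less noisy sensor more; the short Python sketch below uses made-up noise figures purely for illustration.

# Minimal sensor-fusion sketch: inverse-variance weighted average of two
# noisy measurements of the same quantity. All numbers are made-up examples.
def fuse(z1, var1, z2, var2):
    """Weight each reading by the inverse of its noise variance."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    estimate = (w1 * z1 + w2 * z2) / (w1 + w2)
    fused_var = 1.0 / (w1 + w2)  # lower than either input variance
    return estimate, fused_var

# Example: ultrasonic (noisier) and infrared range readings of one distance.
print(fuse(2.04, 0.09, 1.96, 0.01))  # the infrared reading dominates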
Audience
This course is targeted at engineers, programmers and architects who deal with multi-sensor implementations.
By the end of this training, participants will be able to:
- Integrate, enhance and deliver ESB technologies as single packages in a variety of deployment environments.
- Understand and utilize Talend Open Studio's most used components.
- Integrate any application, database, API, or Web services.
- Seamlessly integrate heterogeneous systems and applications.
- Embed existing Java code libraries to extend projects.
- Leverage community components and code to extend projects.
- Rapidly integrate systems, applications and data sources within a drag-and-drop Eclipse environment.
- Reduce development time and maintenance costs by generating optimized, reusable code.
By the end of this training, participants will be able to:
- Install and configure Talend Open Studio for Big Data.
- Connect with Big Data systems such as Cloudera, Hortonworks, MapR, Amazon EMR and Apache Hadoop.
- Understand and set up Open Studio's big data components and connectors.
- Configure parameters to automatically generate MapReduce code.
- Use Open Studio's drag-and-drop interface to run Hadoop jobs.
- Prototype big data pipelines.
- Automate big data integration projects.
By the end of this training, participants will be able to:
- Install and configure Talend Administration Center.
- Understand and implement Talend management fundamentals.
- Build, deploy, and run business projects or tasks in Talend.
- Monitor the security of datasets and develop business routines based on the TAC framework.
- Obtain a broader comprehension of big data applications.
By the end of this training, participants will be able to:
- Navigate the Talend Management Console to manage users and roles in the platform.
- Evaluate data to find and understand relevant datasets.
- Create a pipeline to process and monitor data at rest or in action.
- Prepare data for analysis to generate insights relevant to the business.