
Online or onsite, instructor-led live Stream Processing training courses demonstrate through interactive discussion and hands-on practice the fundamentals and advanced topics of Stream Processing.
Stream Processing training is available as "online live training" or "onsite live training". Online live training (aka "remote live training") is carried out by way of an interactive, remote desktop. Onsite live Stream Processing training can be carried out locally on customer premises in Hong Kong or in NobleProg corporate training centers in Hong Kong.
NobleProg -- Your Local Training Provider
Testimonials
I enjoyed the good balance between theory and hands-on labs.
N. V. Nederlandse Spoorwegen
Course: Apache Ignite: Improve Speed, Scale and Availability with In-Memory Computing
I generally benefited from gaining a better understanding of Ignite.
N. V. Nederlandse Spoorwegen
Course: Apache Ignite: Improve Speed, Scale and Availability with In-Memory Computing
I mostly liked the good lectures.
N. V. Nederlandse Spoorwegen
Course: Apache Ignite: Improve Speed, Scale and Availability with In-Memory Computing
Recalling/reviewing the key points of the topics discussed.
Paolo Angelo Gaton - SMS Global Technologies Inc.
Course: Building Stream Processing Applications with Kafka Streams
The lab exercises. Applying the theory from the first day in subsequent days.
Dell
Course: A Practical Introduction to Stream Processing
The trainer was passionate and knew his subject well. I appreciated his help, his answers to all our questions, and the cases he suggested.
Course: A Practical Introduction to Stream Processing
I genuinely liked the hands-on exercises with the cluster, seeing the performance of the nodes across the cluster and the extended functionality.
CACI Ltd
Course: Apache NiFi for Developers
The trainer's in-depth knowledge of the subject
CACI Ltd
Course: Apache NiFi for Administrators
Ajay was a very experienced consultant and was able to answer all our questions and even made suggestions on best practices for the project we are currently engaged on.
CACI Ltd
Course: Apache NiFi for Administrators
That I had it in the first place.
Peter Scales - CACI Ltd
Course: Apache NiFi for Developers
The NiFi workflow exercises
Politiets Sikkerhetstjeneste
Course: Apache NiFi for Administrators
Answers to our specific questions
MOD BELGIUM
Course: Apache NiFi for Administrators
Exercises.
David Lehotak - NVision Czech Republic ICT a.s.
Course: Apache Ignite for Developers
Training topics and engagement of the trainer
Izba Administracji Skarbowej w Lublinie
Course: Apache NiFi for Administrators
Communication with people attending training.
Andrzej Szewczuk - Izba Administracji Skarbowej w Lublinie
Course: Apache NiFi for Administrators
The usefulness of the exercises
Algomine sp.z.o.o sp.k.
Course: Apache NiFi for Administrators
I really enjoyed the training. Anton has a lot of knowledge and laid out the necessary theory in a very accessible way. It was great that the training included a lot of interesting exercises, so we were hands-on with the technology from the very beginning.
Szymon Dybczak - Algomine sp.z.o.o sp.k.
Course: Apache NiFi for Administrators
Stream Processing Subcategories in Hong Kong
Stream Processing Course Outlines in Hong Kong
By the end of this training, participants will be able to:
- Use Ignite for in-memory and on-disk persistence, or as a purely distributed in-memory database.
- Achieve persistence without syncing data back to a relational database.
- Use Ignite to carry out SQL and distributed joins.
- Improve performance by moving data closer to the CPU, using RAM as storage.
- Spread data sets across a cluster to achieve horizontal scalability.
- Integrate Ignite with RDBMS, NoSQL, Hadoop and machine learning processors.
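To give a flavor of what these outcomes look like in practice, here is a minimal sketch against the Ignite Java API. The cache name, keys, and values are invented for illustration, and SQL access assumes the cache declares indexed types as shown; treat it as a rough example rather than course material.

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.SqlFieldsQuery;
import org.apache.ignite.configuration.CacheConfiguration;

import java.util.List;

public class IgniteQuickStart {
    public static void main(String[] args) {
        // Start an Ignite node in this JVM; it joins any other cluster nodes it discovers.
        try (Ignite ignite = Ignition.start()) {
            // Declare indexed types so the cache contents can also be queried with SQL.
            CacheConfiguration<Integer, String> cfg = new CacheConfiguration<>("cities");
            cfg.setIndexedTypes(Integer.class, String.class);

            // Entries are partitioned across the cluster and held in RAM.
            IgniteCache<Integer, String> cache = ignite.getOrCreateCache(cfg);
            cache.put(1, "Hong Kong");
            cache.put(2, "Singapore");

            // Key-value access against the in-memory store.
            System.out.println("Key 2 -> " + cache.get(2));

            // The same data can be read back with ANSI-style SQL.
            SqlFieldsQuery query = new SqlFieldsQuery("SELECT _key, _val FROM String ORDER BY _key");
            for (List<?> row : cache.query(query).getAll()) {
                System.out.println(row);
            }
        }
    }
}
```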
This instructor-led, live training introduces Apache Apex's unified stream processing architecture, and walks participants through the creation of a distributed application using Apex on Hadoop.
By the end of this training, participants will be able to:
- Understand data processing pipeline concepts such as connectors for sources and sinks, common data transformations, etc.
- Build, scale and optimize an Apex application
- Process real-time data streams reliably and with minimum latency
- Use Apex Core and the Apex Malhar library to enable rapid application development
- Use the Apex API to write and re-use existing Java code
- Integrate Apex into other applications as a processing engine
- Tune, test and scale Apex applications
Format of the Course
- Interactive lecture and discussion.
- Lots of exercises and practice.
- Hands-on implementation in a live-lab environment.
Course Customization Options
- To request a customized training for this course, please contact us to arrange.
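As a rough sketch of the Apex programming model covered in this course, an application wires operators into a DAG roughly as follows. The operator and stream names are invented, and base-class package names may differ slightly between Apex releases, so read this as an illustration of the shape of the API rather than a definitive example.

```java
import com.datatorrent.api.DAG;
import com.datatorrent.api.DefaultInputPort;
import com.datatorrent.api.DefaultOutputPort;
import com.datatorrent.api.InputOperator;
import com.datatorrent.api.StreamingApplication;
import com.datatorrent.common.util.BaseOperator;
import org.apache.hadoop.conf.Configuration;

// A toy source operator that emits an ever-increasing counter value.
class NumberSource extends BaseOperator implements InputOperator {
    public final transient DefaultOutputPort<Long> out = new DefaultOutputPort<>();
    private long counter;

    @Override
    public void emitTuples() {
        out.emit(counter++);
    }
}

// A sink operator that simply logs every tuple it receives.
class ConsoleSink extends BaseOperator {
    public final transient DefaultInputPort<Long> in = new DefaultInputPort<Long>() {
        @Override
        public void process(Long tuple) {
            System.out.println("received: " + tuple);
        }
    };
}

// The application wires source and sink into a DAG that Apex deploys on Hadoop/YARN.
public class SimpleApexApp implements StreamingApplication {
    @Override
    public void populateDAG(DAG dag, Configuration conf) {
        NumberSource source = dag.addOperator("source", new NumberSource());
        ConsoleSink sink = dag.addOperator("sink", new ConsoleSink());
        dag.addStream("numbers", source.out, sink.in);
    }
}
```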
In this instructor-led, live training (onsite or remote), participants will learn how to implement the Apache Beam SDKs in a Java or Python application that defines a data processing pipeline for decomposing a big data set into smaller chunks for independent, parallel processing.
By the end of this training, participants will be able to:
- Install and configure Apache Beam.
- Use a single programming model to carry out both batch and stream processing from within their Java or Python application.
- Execute pipelines across multiple environments.
Format of the Course
- Part lecture, part discussion, exercises and heavy hands-on practice
Note
- This course will be available in Scala in the future. Please contact us to arrange.
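To illustrate the single programming model mentioned above, here is a minimal word-count pipeline sketch using the Beam Java SDK. The input and output paths are placeholders, and the runner (local DirectRunner, Spark, Flink, Dataflow, etc.) is selected via command-line options.

```java
import java.util.Arrays;

import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.TextIO;
import org.apache.beam.sdk.options.PipelineOptions;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.Count;
import org.apache.beam.sdk.transforms.FlatMapElements;
import org.apache.beam.sdk.transforms.MapElements;
import org.apache.beam.sdk.values.KV;
import org.apache.beam.sdk.values.TypeDescriptors;

public class WordCountPipeline {
    public static void main(String[] args) {
        // The runner is chosen from the command line; it defaults to the local DirectRunner.
        PipelineOptions options = PipelineOptionsFactory.fromArgs(args).create();
        Pipeline pipeline = Pipeline.create(options);

        pipeline
            // Read every line of the input files as one element.
            .apply("ReadLines", TextIO.read().from("input/*.txt"))
            // Split lines into individual words.
            .apply("SplitWords", FlatMapElements
                .into(TypeDescriptors.strings())
                .via((String line) -> Arrays.asList(line.split("\\W+"))))
            // Count the occurrences of each word across the whole collection.
            .apply("CountWords", Count.perElement())
            // Format each word/count pair as a line of text.
            .apply("FormatResults", MapElements
                .into(TypeDescriptors.strings())
                .via((KV<String, Long> kv) -> kv.getKey() + ": " + kv.getValue()))
            // Write the results to sharded output files.
            .apply("WriteResults", TextIO.write().to("output/wordcounts"));

        pipeline.run().waitUntilFinish();
    }
}
```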
By the end of this training, participants will be able to:
- Install and configure Confluent Platform.
- Use Confluent's management tools and services to run Kafka more easily.
- Store and process incoming stream data.
- Optimize and manage Kafka clusters.
- Secure data streams.
Format of the Course
- Interactive lecture and discussion.
- Lots of exercises and practice.
- Hands-on implementation in a live-lab environment.
Course Customization Options
- This course is based on the open source version of Confluent: Confluent Open Source.
- To request a customized training for this course, please contact us to arrange.
In this instructor-led, live training (onsite or remote), participants will learn how to set up and integrate different Stream Processing frameworks with existing big data storage systems and related software applications and microservices.
By the end of this training, participants will be able to:
- Install and configure different Stream Processing frameworks, such as Spark Streaming and Kafka Streams.
- Understand and select the most appropriate framework for the job.
- Process data continuously, concurrently, and in a record-by-record fashion.
- Integrate Stream Processing solutions with existing databases, data warehouses, data lakes, etc.
- Integrate the most appropriate stream processing library with enterprise applications and microservices.
Audience
- Developers
- Software architects
Format of the Course
- Part lecture, part discussion, exercises and heavy hands-on practice
Notes
- To request a customized training for this course, please contact us to arrange.
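As one concrete illustration of the kind of integration this course covers, a Spark Structured Streaming job can subscribe to a Kafka topic and process its records continuously. This is a minimal sketch only; the broker address, topic name, and console sink are placeholders for whatever systems a real deployment would use.

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.streaming.StreamingQuery;

public class KafkaToConsole {
    public static void main(String[] args) throws Exception {
        SparkSession spark = SparkSession.builder()
            .appName("KafkaToConsole")
            .getOrCreate();

        // Subscribe to a Kafka topic; each record arrives with key, value, topic, and offset columns.
        Dataset<Row> events = spark.readStream()
            .format("kafka")
            .option("kafka.bootstrap.servers", "localhost:9092")
            .option("subscribe", "events")
            .load();

        // Kafka values are bytes; cast them to strings before any downstream processing.
        Dataset<Row> lines = events.selectExpr("CAST(value AS STRING) AS line");

        // Continuously print each micro-batch to the console (a real job would write to a proper sink).
        StreamingQuery query = lines.writeStream()
            .outputMode("append")
            .format("console")
            .start();

        query.awaitTermination();
    }
}
```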
In this instructor-led, live training, participants will learn how to integrate Kafka Streams into a set of sample Java applications that pass data to and from Apache Kafka for stream processing.
By the end of this training, participants will be able to:
- Understand Kafka Streams features and advantages over other stream processing frameworks
- Process stream data directly within a Kafka cluster
- Write a Java or Scala application or microservice that integrates with Kafka and Kafka Streams
- Write concise code that transforms input Kafka topics into output Kafka topics
- Build, package and deploy the application
Audience
- Developers
Format of the course
- Part lecture, part discussion, exercises and heavy hands-on practice
Notes
- To request a customized training for this course, please contact us to arrange
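For orientation, the kind of "topic in, topic out" application described in this course can be sketched in a few lines of the Kafka Streams Java DSL. The application id, broker address, topic names, and the uppercase transformation below are placeholders chosen for illustration.

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

import java.util.Properties;

public class UppercaseStreamApp {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "uppercase-demo");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        // Describe the topology: read a topic, transform each value, write to another topic.
        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> input = builder.stream("input-topic");
        input.mapValues(value -> value.toUpperCase())
             .to("output-topic");

        // The processing runs inside this application and scales via Kafka's consumer groups.
        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}
```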
By the end of this training, participants will be able to:
- Install and configure Confluent KSQL.
- Set up a stream processing pipeline using only SQL commands (no Java or Python coding).
- Carry out data filtering, transformations, aggregations, joins, windowing, and sessionization entirely in SQL.
- Design and deploy interactive, continuous queries for streaming ETL and real-time analytics.
By the end of this training, participants will be able to build producer and consumer applications for real-time stream data processing.
Audience
- Developers
- Administrators
Format of the course
- Part lecture, part discussion, exercises and heavy hands-on practice
Note
- To request a customized training for this course, please contact us to arrange.
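As a small taste of the producer side of this course, a minimal Kafka producer in Java might look like the following sketch; the broker address, topic name, and messages are placeholders.

```java
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

import java.util.Properties;

public class SimpleProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            for (int i = 0; i < 10; i++) {
                // send() is asynchronous; the callback reports the partition/offset or an error.
                producer.send(new ProducerRecord<>("events", "key-" + i, "message-" + i),
                    (metadata, exception) -> {
                        if (exception != null) {
                            exception.printStackTrace();
                        } else {
                            System.out.printf("wrote to %s-%d@%d%n",
                                metadata.topic(), metadata.partition(), metadata.offset());
                        }
                    });
            }
            producer.flush();
        }
    }
}
```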
By the end of this training, participants will be able to:
- Install and configure Apache NiFi.
- Source, transform and manage data from disparate, distributed data sources, including databases and big data lakes.
- Automate dataflows.
- Enable streaming analytics.
- Apply various approaches for data ingestion.
- Transform big data into business insights.
By the end of this training, participants will be able to:
- Understand NiFi's architecture and dataflow concepts.
- Develop extensions using NiFi and third-party APIs.
- Develop their own custom Apache NiFi processor.
- Ingest and process real-time data from disparate and uncommon file formats and data sources.
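To show roughly what developing your own processor involves, here is a minimal sketch against the NiFi processor API. The processor's behavior and the attribute it sets are invented for illustration; a real processor would also declare property descriptors and a failure relationship.

```java
import org.apache.nifi.flowfile.FlowFile;
import org.apache.nifi.processor.AbstractProcessor;
import org.apache.nifi.processor.ProcessContext;
import org.apache.nifi.processor.ProcessSession;
import org.apache.nifi.processor.Relationship;
import org.apache.nifi.processor.exception.ProcessException;

import java.util.Collections;
import java.util.Set;

// A minimal custom processor: it stamps every incoming FlowFile with an attribute
// and routes it to the "success" relationship.
public class StampAttributeProcessor extends AbstractProcessor {

    static final Relationship REL_SUCCESS = new Relationship.Builder()
        .name("success")
        .description("FlowFiles that were stamped successfully")
        .build();

    @Override
    public Set<Relationship> getRelationships() {
        return Collections.singleton(REL_SUCCESS);
    }

    @Override
    public void onTrigger(ProcessContext context, ProcessSession session) throws ProcessException {
        FlowFile flowFile = session.get();
        if (flowFile == null) {
            return; // nothing queued on the incoming connection
        }
        // Attribute changes are part of the session and are committed atomically.
        flowFile = session.putAttribute(flowFile, "processed.by", "StampAttributeProcessor");
        session.transfer(flowFile, REL_SUCCESS);
    }
}
```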
This instructor-led, live training introduces the principles behind messaging systems and distributed stream processing, while walking participants through the creation of a sample Samza-based project and job execution.
By the end of this training, participants will be able to:
- Use Samza to simplify the code needed to produce and consume messages.
- Decouple the handling of messages from an application.
- Use Samza to implement near-realtime asynchronous computation.
- Use stream processing to provide a higher level of abstraction over messaging systems.
Audience
- Developers
Format of the course
- Part lecture, part discussion, exercises and heavy hands-on practice
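The Samza outcomes above center on the StreamTask abstraction. As a hedged sketch, a task that filters a stream and forwards results to Kafka looks roughly like this; the system and stream names are placeholders, and the input stream that feeds the task is bound in the job's configuration rather than in the code.

```java
import org.apache.samza.system.IncomingMessageEnvelope;
import org.apache.samza.system.OutgoingMessageEnvelope;
import org.apache.samza.system.SystemStream;
import org.apache.samza.task.MessageCollector;
import org.apache.samza.task.StreamTask;
import org.apache.samza.task.TaskCoordinator;

// A task that consumes messages from its input stream, drops empty payloads,
// and forwards everything else to an output Kafka topic.
public class FilterTask implements StreamTask {

    private static final SystemStream OUTPUT = new SystemStream("kafka", "filtered-events");

    @Override
    public void process(IncomingMessageEnvelope envelope,
                        MessageCollector collector,
                        TaskCoordinator coordinator) {
        String message = (String) envelope.getMessage();
        if (message != null && !message.isEmpty()) {
            collector.send(new OutgoingMessageEnvelope(OUTPUT, message));
        }
    }
}
```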
"Storm is for real-time processing what Hadoop is for batch processing!"
In this instructor-led live training, participants will learn how to install and configure Apache Storm, then develop and deploy an Apache Storm application for processing big data in real-time.
Some of the topics included in this training include:
- Apache Storm in the context of Hadoop
- Working with unbounded data
- Continuous computation
- Real-time analytics
- Distributed RPC and ETL processing
Request this course now!
Audience
- Software and ETL developers
- Mainframe professionals
- Data scientists
- Big data analysts
- Hadoop professionals
Format of the course
- Part lecture, part discussion, exercises and heavy hands-on practice
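To make Storm's spout/bolt model concrete, here is a minimal topology sketch in Java. The word list and topology names are invented for illustration, and some method signatures and the LocalCluster lifecycle differ slightly between Storm 1.x and 2.x.

```java
import org.apache.storm.Config;
import org.apache.storm.LocalCluster;
import org.apache.storm.spout.SpoutOutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.BasicOutputCollector;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.TopologyBuilder;
import org.apache.storm.topology.base.BaseBasicBolt;
import org.apache.storm.topology.base.BaseRichSpout;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Tuple;
import org.apache.storm.tuple.Values;

import java.util.Map;
import java.util.Random;

public class SimpleStormTopology {

    // A spout is Storm's source of an unbounded stream; this one emits random words forever.
    public static class WordSpout extends BaseRichSpout {
        private SpoutOutputCollector collector;
        private final String[] words = {"storm", "stream", "tuple", "bolt"};
        private final Random random = new Random();

        @Override
        public void open(Map conf, TopologyContext context, SpoutOutputCollector collector) {
            this.collector = collector;
        }

        @Override
        public void nextTuple() {
            collector.emit(new Values(words[random.nextInt(words.length)]));
        }

        @Override
        public void declareOutputFields(OutputFieldsDeclarer declarer) {
            declarer.declare(new Fields("word"));
        }
    }

    // A bolt performs one step of continuous computation on every incoming tuple.
    public static class PrinterBolt extends BaseBasicBolt {
        @Override
        public void execute(Tuple tuple, BasicOutputCollector collector) {
            System.out.println("word: " + tuple.getStringByField("word"));
        }

        @Override
        public void declareOutputFields(OutputFieldsDeclarer declarer) {
            // This bolt is a sink; it emits nothing downstream.
        }
    }

    public static void main(String[] args) throws Exception {
        TopologyBuilder builder = new TopologyBuilder();
        builder.setSpout("words", new WordSpout());
        builder.setBolt("printer", new PrinterBolt()).shuffleGrouping("words");

        // Run in-process for development; StormSubmitter deploys the same topology to a cluster.
        LocalCluster cluster = new LocalCluster();
        cluster.submitTopology("word-demo", new Config(), builder.createTopology());
        Thread.sleep(10_000);
        cluster.shutdown();
    }
}
```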
This instructor-led, live training introduces Tigon's approach to blending real-time and batch processing as it walks participants through the creation of a sample application.
By the end of this training, participants will be able to:
- Create powerful, stream processing applications for handling large volumes of data
- Process stream sources such as Twitter and web server logs
- Use Tigon for rapid joining, filtering, and aggregating of streams
Audience
- Developers
Format of the course
- Part lecture, part discussion, exercises and heavy hands-on practice