Big Data Training Courses

Local, instructor-led live Big Data training courses start with an introduction to the elemental concepts of Big Data, then progress into the programming languages and methodologies used to perform data analysis. Tools and infrastructure for enabling Big Data storage, distributed processing, and scalability are discussed, compared, and implemented in demo practice sessions. Big Data training is available as "onsite live training" or "remote live training". Onsite live Big Data training in Hong Kong can be carried out locally on customer premises or in NobleProg corporate training centers. Remote live training is carried out by way of an interactive remote desktop.

NobleProg -- Your Local Training Provider


Big Data Course Outlines

Code | Name | Duration | Overview

smtwebint | Semantic Web Overview | 7 hours
The Semantic Web is a collaborative movement led by the World Wide Web Consortium (W3C) that promotes common formats for data on the World Wide Web. The Semantic Web provides a common framework that allows data to be shared and reused across application, enterprise, and community boundaries.

datameer | Datameer for Data Analysts | 14 hours
Datameer is a business intelligence and analytics platform built on Hadoop. It allows end-users to access, explore and correlate large-scale, structured, semi-structured and unstructured data in an easy-to-use fashion.

In this instructor-led, live training, participants will learn how to use Datameer to overcome Hadoop's steep learning curve as they step through the setup and analysis of a series of big data sources.

By the end of this training, participants will be able to:

- Create, curate, and interactively explore an enterprise data lake
- Access business intelligence data warehouses, transactional databases and other analytic stores
- Use a spreadsheet user-interface to design end-to-end data processing pipelines
- Access pre-built functions to explore complex data relationships
- Use drag-and-drop wizards to visualize data and create dashboards
- Use tables, charts, graphs, and maps to analyze query results

Audience

- Data analysts

Format of the course

- Part lecture, part discussion, exercises and heavy hands-on practice

bigdatabicriminal | Big Data Business Intelligence for Criminal Intelligence Analysis | 35 hours
Advances in technologies and the increasing amount of information are transforming how law enforcement is conducted. The challenges that Big Data poses are nearly as daunting as Big Data's promise. Storing data efficiently is one of these challenges; effectively analyzing it is another.

In this instructor-led, live training, participants will learn the mindset with which to approach Big Data technologies, assess their impact on existing processes and policies, and implement these technologies for the purpose of identifying criminal activity and preventing crime. Case studies from law enforcement organizations around the world will be examined to gain insights on their adoption approaches, challenges and results.

By the end of this training, participants will be able to:

- Combine Big Data technology with traditional data gathering processes to piece together a story during an investigation
- Implement industrial big data storage and processing solutions for data analysis
- Prepare a proposal for the adoption of the most adequate tools and processes for enabling a data-driven approach to criminal investigation

Audience

- Law Enforcement specialists with a technical background

Format of the course

- Part lecture, part discussion, exercises and heavy hands-on practice

BDATR | Big Data Analytics for Telecom Regulators | 16 hours
To meet regulatory compliance requirements, CSPs (Communication Service Providers) can tap into Big Data analytics, which not only helps them meet compliance but, within the scope of the same project, can also increase customer satisfaction and thus reduce churn. Since compliance is related to the quality of service tied to a contract, any initiative toward meeting compliance improves the competitive edge of CSPs. It is therefore important that regulators be able to advise on and guide a set of Big Data analytic practices of mutual benefit to both regulators and CSPs.

The course runs over 2 days: 8 modules of 2 hours each (16 hours total).

graphcomputing | Introduction to Graph Computing | 28 hours
Many real-world problems can be described in terms of graphs: for example, the Web graph, the social network graph, the train network graph and the language graph. These graphs tend to be extremely large; processing them requires a specialized set of tools and processes referred to as Graph Computing (also known as Graph Analytics).

In this instructor-led, live training, participants will learn about the technology offerings and implementation approaches for processing graph data. The aim is to identify real-world objects, their characteristics and relationships, then model these relationships and process them as data using a graph computing approach. We start with a broad overview and narrow in on specific tools as we step through a series of case studies, hands-on exercises and live deployments.

By the end of this training, participants will be able to:

- Understand how graph data is persisted and traversed
- Select the best framework for a given task (from graph databases to batch processing frameworks)
- Implement Hadoop, Spark, GraphX and Pregel to carry out graph computing across many machines in parallel
- View real-world big data problems in terms of graphs, processes and traversals

Audience

- Developers

Format of the course

- Part lecture, part discussion, exercises and heavy hands-on practice
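
As a small illustration of the modeling idea behind this course, here is a sketch using the single-machine networkx library (not one of the distributed frameworks named above); the graph and its "follows" edges are invented for the example.

    # Model relationships as a directed graph and run a classic graph computation.
    import networkx as nx

    g = nx.DiGraph()
    g.add_edges_from([("alice", "bob"), ("bob", "carol"), ("alice", "carol")])

    # PageRank is also available in Spark GraphX and expressible in the Pregel model.
    for node, score in nx.pagerank(g).items():
        print(f"{node}: {score:.3f}")

The same traversal-and-score pattern scales to billions of edges once expressed in a distributed framework such as GraphX or Pregel.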

matlabpredanalytics | Matlab for Predictive Analytics | 21 hours
Predictive analytics is the process of using data analytics to make predictions about the future. This process uses data along with data mining, statistics, and machine learning techniques to create a predictive model for forecasting future events.

In this instructor-led, live training, participants will learn how to use Matlab to build predictive models and apply them to large sample data sets to predict future events based on the data.

By the end of this training, participants will be able to:

- Create predictive models to analyze patterns in historical and transactional data
- Use predictive modeling to identify risks and opportunities
- Build mathematical models that capture important trends
- Use data from devices and business systems to reduce waste, save time, or cut costs

Audience

- Developers
- Engineers
- Domain experts

Format of the course

- Part lecture, part discussion, exercises and heavy hands-on practice

nifidev | Apache NiFi for Developers | 7 hours
Apache NiFi (Hortonworks DataFlow) is a real-time integrated data logistics and simple event processing platform that enables the moving, tracking and automation of data between systems. It is written using flow-based programming and provides a web-based user interface to manage dataflows in real time.

In this instructor-led, live training, participants will learn the fundamentals of flow-based programming as they develop a number of demo extensions, components and processors using Apache NiFi.

By the end of this training, participants will be able to:

- Understand NiFi's architecture and dataflow concepts
- Develop extensions using NiFi and third-party APIs
- Develop their own custom Apache NiFi processors
- Ingest and process real-time data from disparate and uncommon file formats and data sources

Audience

- Developers
- Data engineers

Format of the course

- Part lecture, part discussion, exercises and heavy hands-on practice

nifi | Apache NiFi for Administrators | 21 hours
Apache NiFi (Hortonworks DataFlow) is a real-time integrated data logistics and simple event processing platform that enables the moving, tracking and automation of data between systems. It is written using flow-based programming and provides a web-based user interface to manage dataflows in real time.

In this instructor-led, live training, participants will learn how to deploy and manage Apache NiFi in a live lab environment.

By the end of this training, participants will be able to:

- Install and configure Apache NiFi
- Source, transform and manage data from disparate, distributed data sources, including databases and big data lakes
- Automate dataflows
- Enable streaming analytics
- Apply various approaches for data ingestion
- Transform Big Data into business insights

Audience

- System administrators
- Data engineers
- Developers
- DevOps

Format of the course

- Part lecture, part discussion, exercises and heavy hands-on practice

solrcloud | SolrCloud | 14 hours
Apache SolrCloud is a distributed data processing engine that facilitates the searching and indexing of files on a distributed network.

In this instructor-led, live training, participants will learn how to set up a SolrCloud instance on Amazon AWS.

By the end of this training, participants will be able to:

- Understand SolrCloud's features and how they compare to those of conventional master-slave clusters
- Configure a SolrCloud centralized cluster
- Automate processes such as communicating with shards, adding documents to the shards, etc.
- Use Zookeeper in conjunction with SolrCloud to further automate processes
- Use the interface to manage error reporting
- Load balance a SolrCloud installation
- Configure SolrCloud for continuous processing and fail-over

Audience

- Solr Developers
- Project Managers
- System Administrators
- Search Analysts

Format of the course

- Part lecture, part discussion, exercises and heavy hands-on practice

datavault | Data Vault: Building a Scalable Data Warehouse | 28 hours
Data vault modeling is a database modeling technique that provides long-term historical storage of data that originates from multiple sources. A data vault stores a single version of the facts, or "all the data, all the time". Its flexible, scalable, consistent and adaptable design encompasses the best aspects of 3rd normal form (3NF) and star schema.

In this instructor-led, live training, participants will learn how to build a Data Vault.

By the end of this training, participants will be able to:

- Understand the architecture and design concepts behind Data Vault 2.0 and its interaction with Big Data, NoSQL and AI
- Use data vaulting techniques to enable auditing, tracing, and inspection of historical data in a data warehouse
- Develop a consistent and repeatable ETL (Extract, Transform, Load) process
- Build and deploy highly scalable and repeatable warehouses

Audience

- Data modelers
- Data warehousing specialists
- Business Intelligence specialists
- Data engineers
- Database administrators

Format of the course

- Part lecture, part discussion, exercises and heavy hands-on practice

tigon | Tigon: Real-time Streaming for the Real World | 14 hours
Tigon is an open-source, real-time, low-latency, high-throughput, YARN-native stream processing framework that sits on top of HDFS and HBase for persistence. Tigon applications address use cases such as network intrusion detection and analytics, social media market analysis, location analytics, and real-time recommendations to users.

This instructor-led, live training introduces Tigon's approach to blending real-time and batch processing as it walks participants through the creation of a sample application.

By the end of this training, participants will be able to:

- Create powerful, stream processing applications for handling large volumes of data
- Process stream sources such as Twitter and web server logs
- Use Tigon for rapid joining, filtering, and aggregating of streams

Audience

- Developers

Format of the course

- Part lecture, part discussion, exercises and heavy hands-on practice

hadooppython | Hadoop with Python | 28 hours
Hadoop is a popular Big Data processing framework. Python is a high-level programming language famous for its clear syntax and code readability.

In this instructor-led, live training, participants will learn how to work with Hadoop, MapReduce, Pig, and Spark using Python as they step through multiple examples and use cases.

By the end of this training, participants will be able to:

- Understand the basic concepts behind Hadoop, MapReduce, Pig, and Spark
- Use Python with Hadoop Distributed File System (HDFS), MapReduce, Pig, and Spark
- Use Snakebite to programmatically access HDFS within Python
- Use mrjob to write MapReduce jobs in Python
- Write Spark programs with Python
- Extend the functionality of Pig using Python UDFs
- Manage MapReduce jobs and Pig scripts using Luigi

Audience

- Developers
- IT Professionals

Format of the course

- Part lecture, part discussion, exercises and heavy hands-on practice
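
To give a feel for the mrjob objective listed above, here is a minimal word-count sketch, assuming the mrjob package is installed; the class and input file are illustrative, not course material.

    # word_count.py -- a minimal mrjob job; run with: python word_count.py input.txt
    from mrjob.job import MRJob

    class MRWordCount(MRJob):
        def mapper(self, _, line):
            # Emit (word, 1) for every word on the line.
            for word in line.split():
                yield word.lower(), 1

        def reducer(self, word, counts):
            # Sum the per-word counts emitted by the mappers.
            yield word, sum(counts)

    if __name__ == "__main__":
        MRWordCount.run()

The same script runs locally for testing or, with the -r hadoop runner, as a MapReduce job on a Hadoop cluster.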

ApacheIgnite | Apache Ignite: Improve Speed, Scale and Availability with In-Memory Computing | 14 hours
Apache Ignite is an in-memory computing platform that sits between the application and data layer to improve speed, scale, and availability.

In this instructor-led, live training, participants will learn the principles behind persistent and pure in-memory storage as they step through the creation of a sample in-memory computing project.

By the end of this training, participants will be able to:

- Use Ignite for in-memory, on-disk persistence as well as a purely distributed in-memory database.
- Achieve persistence without syncing data back to a relational database.
- Use Ignite to carry out SQL and distributed joins.
- Improve performance by moving data closer to the CPU, using RAM as storage.
- Spread data sets across a cluster to achieve horizontal scalability.
- Integrate Ignite with RDBMS, NoSQL, Hadoop and machine learning processors.

Audience

- Developers

Format of the course

- Part lecture, part discussion, exercises and heavy hands-on practice
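
For a quick sense of cache-style, in-memory access, here is a sketch using the pyignite thin client, assuming an Ignite node is listening on the default thin-client port 10800; the cache name and data are placeholders.

    # Read and write an Ignite cache through the Python thin client (pyignite).
    from pyignite import Client

    client = Client()
    client.connect("127.0.0.1", 10800)  # assumed local node, default thin-client port

    cache = client.get_or_create_cache("demo_cache")  # placeholder cache name
    cache.put(1, "hello from RAM")
    print(cache.get(1))

    client.close()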

vespa | Vespa: Serving Large-Scale Data in Real-Time | 14 hours
Vespa is an open-source big data processing and serving engine created by Yahoo. It is used to respond to user queries, make recommendations, and provide personalized content and advertisements in real time.

This instructor-led, live training introduces the challenges of serving large-scale data and walks participants through the creation of an application that can compute responses to user requests, over large datasets in real-time.

By the end of this training, participants will be able to:

- Use Vespa to quickly compute data (store, search, rank, organize) at serving time while a user waits
- Implement Vespa into existing applications involving feature search, recommendations, and personalization
- Integrate and deploy Vespa with existing big data systems such as Hadoop and Storm.

Audience

- Developers

Format of the course

- Part lecture, part discussion, exercises and heavy hands-on practice

apex | Apache Apex: Processing Big Data-in-Motion | 21 hours
Apache Apex is a YARN-native platform that unifies stream and batch processing. It processes big data in motion in a way that is scalable, performant, fault-tolerant, stateful, secure, distributed, and easily operable.

This instructor-led, live training introduces Apache Apex's unified stream processing architecture, and walks participants through the creation of a distributed application using Apex on Hadoop.

By the end of this training, participants will be able to:

- Understand data processing pipeline concepts such as connectors for sources and sinks, common data transformations, etc.
- Build, scale and optimize an Apex application
- Process real-time data streams reliably and with minimum latency
- Use Apex Core and the Apex Malhar library to enable rapid application development
- Use the Apex API to write and re-use existing Java code
- Integrate Apex into other applications as a processing engine
- Tune, test and scale Apex applications

Audience

- Developers
- Enterprise architects

Format of the course

- Part lecture, part discussion, exercises and heavy hands-on practice

alluxio | Alluxio: Unifying Disparate Storage Systems | 7 hours
Alluxio is an open-source virtual distributed storage system that unifies disparate storage systems and enables applications to interact with data at memory speed. It is used by companies such as Intel, Baidu and Alibaba.

In this instructor-led, live training, participants will learn how to use Alluxio to bridge different computation frameworks with storage systems and efficiently manage multi-petabyte scale data as they step through the creation of an application with Alluxio.

By the end of this training, participants will be able to:

- Develop an application with Alluxio
- Connect big data systems and applications while preserving one namespace
- Efficiently extract value from big data in any storage format
- Improve workload performance
- Deploy and manage Alluxio standalone or clustered

Audience

- Data scientists
- Developers
- System administrators

Format of the course

- Part lecture, part discussion, exercises and heavy hands-on practice

flink | Flink for Scalable Stream and Batch Data Processing | 28 hours
Apache Flink is an open-source framework for scalable stream and batch data processing.

This instructor-led, live training introduces the principles and approaches behind distributed stream and batch data processing, and walks participants through the creation of a real-time, data streaming application.

By the end of this training, participants will be able to:

- Set up an environment for developing data analysis applications
- Package, execute, and monitor Flink-based, fault-tolerant, data streaming applications
- Manage diverse workloads
- Perform advanced analytics using Flink ML
- Set up a multi-node Flink cluster
- Measure and optimize performance
- Integrate Flink with different Big Data systems
- Compare Flink capabilities with those of other big data processing frameworks

Audience

- Developers
- Architects
- Data engineers
- Analytics professionals
- Technical managers

Format of the course

- Part lecture, part discussion, exercises and heavy hands-on practice
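
As one possible entry point to the objectives above, here is a minimal word count on the PyFlink DataStream API, a sketch assuming the apache-flink Python package; the course may equally use the Java or Scala APIs.

    # A bounded word count on the PyFlink DataStream API.
    from pyflink.common.typeinfo import Types
    from pyflink.datastream import StreamExecutionEnvironment

    env = StreamExecutionEnvironment.get_execution_environment()
    env.set_parallelism(1)

    ds = env.from_collection(["to be or not to be"], type_info=Types.STRING())

    counts = (
        ds.flat_map(lambda line: [(w, 1) for w in line.split()],
                    output_type=Types.TUPLE([Types.STRING(), Types.INT()]))
          .key_by(lambda pair: pair[0], key_type=Types.STRING())
          .reduce(lambda a, b: (a[0], a[1] + b[1]))
    )

    counts.print()
    env.execute("word_count")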

samza | Samza for Stream Processing | 14 hours
Apache Samza is an open-source near-realtime, asynchronous computational framework for stream processing. It uses Apache Kafka for messaging, and Apache Hadoop YARN for fault tolerance, processor isolation, security, and resource management.

This instructor-led, live training introduces the principles behind messaging systems and distributed stream processing, while walking participants through the creation of a sample Samza-based project and job execution.

By the end of this training, participants will be able to:

- Use Samza to simplify the code needed to produce and consume messages.
- Decouple the handling of messages from an application.
- Use Samza to implement near-realtime asynchronous computation.
- Use stream processing to provide a higher level of abstraction over messaging systems.

Audience

- Developers

Format of the course

- Part lecture, part discussion, exercises and heavy hands-on practice

zeppelin | Zeppelin for Interactive Data Analytics | 14 hours
Apache Zeppelin is a web-based notebook for capturing, exploring, visualizing and sharing Hadoop- and Spark-based data.

This instructor-led, live training introduces the concepts behind interactive data analytics and walks participants through the deployment and usage of Zeppelin in a single-user or multi-user environment.

By the end of this training, participants will be able to:

- Install and configure Zeppelin
- Develop, organize, execute and share data in a browser-based interface
- Visualize results without referring to the command line or cluster details
- Execute and collaborate on long workflows
- Work with any of a number of plug-in language/data-processing backends, such as Scala (with Apache Spark), Python (with Apache Spark), Spark SQL, JDBC, Markdown and Shell
- Integrate Zeppelin with Spark, Flink and MapReduce
- Secure multi-user instances of Zeppelin with Apache Shiro

Audience

- Data engineers
- Data analysts
- Data scientists
- Software developers

Format of the course

- Part lecture, part discussion, exercises and heavy hands-on practice
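
To illustrate the notebook workflow described above, a Zeppelin paragraph using the PySpark interpreter might look like the following sketch; the view name and data are invented, and the spark session is pre-defined by the interpreter.

    %pyspark
    # Build a small DataFrame and expose it to SQL paragraphs.
    df = spark.range(0, 1000).selectExpr("id", "id % 7 AS bucket")
    df.createOrReplaceTempView("numbers")  # invented view name

    # A follow-up %sql paragraph could then chart:
    #   SELECT bucket, COUNT(*) AS n FROM numbers GROUP BY bucket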

magellan | Magellan: Geospatial Analytics on Spark | 14 hours
Magellan is an open-source distributed execution engine for geospatial analytics on big data. Implemented on top of Apache Spark, it extends Spark SQL and provides a relational abstraction for geospatial analytics.

This instructor-led, live training introduces the concepts and approaches for implementing geospatial analytics and walks participants through the creation of a predictive analysis application using Magellan on Spark.

By the end of this training, participants will be able to:

- Efficiently query, parse and join geospatial datasets at scale
- Implement geospatial data in business intelligence and predictive analytics applications
- Use spatial context to extend the capabilities of mobile devices, sensors, logs, and wearables

Audience

- Application developers

Format of the course

- Part lecture, part discussion, exercises and heavy hands-on practice

hdp | Hortonworks Data Platform (HDP) for Administrators | 21 hours
Hortonworks Data Platform is an open-source Apache Hadoop support platform that provides a stable foundation for developing big data solutions on the Apache Hadoop ecosystem.

This instructor-led live training introduces Hortonworks and walks participants through the deployment of a Spark + Hadoop solution.

By the end of this training, participants will be able to:

- Use Hortonworks to reliably run Hadoop at a large scale
- Unify Hadoop's security, governance, and operations capabilities with Spark's agile analytic workflows
- Use Hortonworks to investigate, validate, certify and support each of the components in a Spark project
- Process different types of data, including structured, unstructured, in-motion, and at-rest

Audience

- Hadoop administrators

Format of the course

- Part lecture, part discussion, exercises and heavy hands-on practice

sparkpython | Python and Spark for Big Data (PySpark) | 21 hours
Python is a high-level programming language famous for its clear syntax and code readability. Spark is a data processing engine used in querying, analyzing, and transforming big data. PySpark allows users to interface Spark with Python.

In this instructor-led, live training, participants will learn how to use Python and Spark together to analyze big data as they work on hands-on exercises.

By the end of this training, participants will be able to:

- Learn how to use Spark with Python to analyze Big Data
- Work on exercises that mimic real world circumstances
- Use different tools and techniques for big data analysis using PySpark

Audience

- Developers
- IT Professionals
- Data Scientists

Format of the course

- Part lecture, part discussion, exercises and heavy hands-on practice
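
To preview the workflow described above, here is a minimal PySpark sketch; the CSV path and the "category" column are assumptions made for illustration.

    # Aggregate a CSV with the PySpark DataFrame API.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("pyspark-demo").getOrCreate()

    df = spark.read.csv("events.csv", header=True, inferSchema=True)  # assumed input file
    df.groupBy("category").count().orderBy("count", ascending=False).show()  # assumed column

    spark.stop()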

memsql | MemSQL | 28 hours
MemSQL is an in-memory, distributed SQL database management system for cloud and on-premises deployments. It is a real-time data warehouse that immediately delivers insights from live and historical data.

In this instructor-led, live training, participants will learn the essentials of MemSQL for development and administration.

By the end of this training, participants will be able to:

- Understand the key concepts and characteristics of MemSQL
- Install, design, maintain, and operate MemSQL
- Optimize schemas in MemSQL
- Improve queries in MemSQL
- Benchmark performance in MemSQL
- Build real-time data applications using MemSQL

Audience

- Developers
- Administrators
- Operation Engineers

Format of the course

- Part lecture, part discussion, exercises and heavy hands-on practice
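
Because MemSQL speaks the MySQL wire protocol, any standard MySQL client can exercise it; the following sketch uses the PyMySQL package, with placeholder connection details and schema.

    # Query MemSQL over its MySQL-compatible protocol (PyMySQL package).
    import pymysql

    conn = pymysql.connect(host="memsql-host", user="root",  # placeholder host/credentials
                           password="", database="demo")
    with conn.cursor() as cur:
        cur.execute("CREATE TABLE IF NOT EXISTS events (id BIGINT, ts DATETIME)")
        cur.execute("INSERT INTO events VALUES (%s, NOW())", (1,))
        cur.execute("SELECT COUNT(*) FROM events")
        print(cur.fetchone())
    conn.commit()
    conn.close()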

TalendDI | Talend Open Studio for Data Integration | 28 hours
Talend Open Studio for Data Integration is an open-source data integration product used to combine, convert and update data in various locations across a business.

In this instructor-led, live training, participants will learn how to use the Talend ETL tool to carry out data transformation, data extraction, and connectivity with Hadoop, Hive, and Pig.

By the end of this training, participants will be able to:

- Explain the concepts behind ETL (Extract, Transform, Load) and propagation
- Define ETL methods and ETL tools to connect with Hadoop
- Efficiently amass, retrieve, digest, consume, transform and shape big data in accordance with business requirements
- Upload to and extract large records from Hadoop (optional), Hive (optional), and NoSQL databases

Audience

- Business intelligence professionals
- Project managers
- Database professionals
- SQL Developers
- ETL Developers
- Solution architects
- Data architects
- Data warehousing professionals
- System administrators and integrators

Format of the course

- Part lecture, part discussion, exercises and heavy hands-on practice

Note

- To request a customized training for this course, please contact us to arrange.

sparksql | Apache Spark SQL | 7 hours
Spark SQL is Apache Spark's module for working with structured and unstructured data. Spark SQL provides information about the structure of the data as well as the computation being performed. This information can be used to perform optimizations. Two common uses for Spark SQL are:
- to execute SQL queries.
- to read data from an existing Hive installation.

In this instructor-led, live training (onsite or remote), participants will learn how to analyze various types of data sets using Spark SQL.

By the end of this training, participants will be able to:

- Install and configure Spark SQL.
- Perform data analysis using Spark SQL.
- Query data sets in different formats.
- Visualize data and query results.

Audience

- Data analysts
- Data scientists
- Data engineers

Format of the Course

- Part lecture, part discussion, exercises and heavy hands-on practice

Notes

- To request a customized training for this course, please contact us to arrange.
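
As a taste of the objectives above, here is a minimal sketch that registers a DataFrame as a temporary view and queries it with SQL; the sample data is invented.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("sparksql-demo").getOrCreate()

    # Invented sample data, registered as a SQL-queryable view.
    df = spark.createDataFrame([("alice", 34), ("bob", 45)], ["name", "age"])
    df.createOrReplaceTempView("people")

    spark.sql("SELECT name FROM people WHERE age > 40").show()
    spark.stop()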

dataminpython | Data Mining with Python | 14 hours
This instructor-led, live training (onsite or remote) is aimed at data analysts and data scientists who wish to implement more advanced data analytics techniques for data mining using Python.

By the end of this training, participants will be able to:

- Understand important areas of data mining, including association rule mining, text sentiment analysis, automatic text summarization, and data anomaly detection.
- Compare and implement various strategies for solving real-world data mining problems.
- Understand and interpret the results.

Format of the Course

- Interactive lecture and discussion.
- Lots of exercises and practice.
- Hands-on implementation in a live-lab environment.

Course Customization Options

- To request a customized training for this course, please contact us to arrange.
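
One of the areas listed above, data anomaly detection, can be previewed with scikit-learn's IsolationForest; this sketch runs on synthetic data and makes no claim about the exact libraries used in class.

    # Flag outliers in synthetic 2-D data with an isolation forest.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(42)
    normal = rng.normal(0, 1, size=(200, 2))      # inlier cluster
    outliers = rng.uniform(-6, 6, size=(10, 2))   # scattered anomalies
    X = np.vstack([normal, outliers])

    model = IsolationForest(contamination=0.05, random_state=0).fit(X)
    labels = model.predict(X)  # -1 marks predicted anomalies
    print(f"flagged {(labels == -1).sum()} of {len(X)} points as anomalies")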

sparkcloud | Apache Spark in the Cloud | 21 hours
Apache Spark's learning curve rises slowly at the beginning; it takes a lot of effort to get the first return, and this course aims to jump past that first tough part. After taking this course, participants will understand the basics of Apache Spark, clearly differentiate an RDD from a DataFrame, learn the Python and Scala APIs, and understand executors and tasks. Following best practices, the course also focuses strongly on cloud deployment, Databricks and AWS. Participants will also understand the differences between AWS EMR and AWS Glue, one of the latest Spark services from AWS.

Audience

- Data engineers
- DevOps
- Data scientists

bigdataanahealth | Big Data Analytics in Health | 21 hours
Big data analytics involves the process of examining large amounts of varied data sets in order to uncover correlations, hidden patterns, and other useful insights.

The health industry has massive amounts of complex heterogeneous medical and clinical data. Applying big data analytics on health data presents huge potential in deriving insights for improving delivery of healthcare. However, the enormity of these datasets poses great challenges in analyses and practical applications to a clinical environment.

In this instructor-led, live training (remote), participants will learn how to perform big data analytics in health as they step through a series of hands-on live-lab exercises.

By the end of this training, participants will be able to:

- Install and configure big data analytics tools such as Hadoop MapReduce and Spark
- Understand the characteristics of medical data
- Apply big data techniques to deal with medical data
- Study big data systems and algorithms in the context of health applications

Audience

- Developers
- Data Scientists

Format of the Course

- Part lecture, part discussion, exercises and heavy hands-on practice.

Note

- To request a customized training for this course, please contact us to arrange.

arrow | Apache Arrow for Data Analysis across Disparate Data Sources | 14 hours
Apache Arrow is an open-source in-memory data processing framework. It is often used together with other data science tools for accessing disparate data stores for analysis. It integrates well with other technologies such as GPU databases, machine learning libraries and tools, execution engines, and data visualization frameworks.

In this onsite instructor-led, live training, participants will learn how to integrate Apache Arrow with various Data Science frameworks to access data from disparate data sources.

By the end of this training, participants will be able to:

- Install and configure Apache Arrow in a distributed clustered environment
- Use Apache Arrow to access data from disparate data sources
- Use Apache Arrow to bypass the need for constructing and maintaining complex ETL pipelines
- Analyze data across disparate data sources without having to consolidate it into a centralized repository

Audience

- Data scientists
- Data engineers

Format of the Course

- Part lecture, part discussion, exercises and heavy hands-on practice

Note

- To request a customized training for this course, please contact us to arrange.
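
For a first look at Arrow's columnar format, here is a sketch using the pyarrow package to round-trip a pandas DataFrame through a Parquet file; the file and column names are invented.

    # Convert between pandas and Arrow, persisting via Parquet.
    import pandas as pd
    import pyarrow as pa
    import pyarrow.parquet as pq

    df = pd.DataFrame({"id": [1, 2, 3], "value": [10.5, 20.1, 30.7]})
    table = pa.Table.from_pandas(df)          # pandas -> Arrow columnar table
    pq.write_table(table, "values.parquet")   # persist in a columnar file format

    round_tripped = pq.read_table("values.parquet").to_pandas()
    print(round_tripped)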

sqoop | Moving Data from MySQL to Hadoop with Sqoop | 14 hours
Sqoop is an open-source software tool for transferring data between Hadoop and relational databases or mainframes. It can be used to import data from a relational database management system (RDBMS) such as MySQL or Oracle, or from a mainframe, into the Hadoop Distributed File System (HDFS). Thereafter, the data can be transformed in Hadoop MapReduce and then re-exported back into an RDBMS.

In this instructor-led, live training, participants will learn how to use Sqoop to import data from a traditional relational database into Hadoop storage such as HDFS or Hive, and vice versa.

By the end of this training, participants will be able to:

- Install and configure Sqoop
- Import data from MySQL to HDFS and Hive
- Import data from HDFS and Hive to MySQL

Audience

- System administrators
- Data engineers

Format of the Course

- Part lecture, part discussion, exercises and heavy hands-on practice

Note

- To request a customized training for this course, please contact us to arrange.
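
Imports of the kind listed above are driven from the sqoop command line; the sketch below wraps one such invocation in Python, with placeholder connection details and table names.

    # Launch a Sqoop import of a MySQL table into HDFS.
    import subprocess

    cmd = [
        "sqoop", "import",
        "--connect", "jdbc:mysql://dbhost/sales",   # placeholder MySQL host/database
        "--username", "etl_user",                   # placeholder credentials
        "--password-file", "/user/etl/.password",
        "--table", "orders",                        # placeholder source table
        "--target-dir", "/data/orders",             # HDFS destination directory
        "--num-mappers", "4",                       # parallel map tasks
    ]
    subprocess.run(cmd, check=True)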

Upcoming Big Data Courses

Course | Course Date | Course Price [Remote / Classroom]
Apache Arrow for Data Analysis across Disparate Data Sources - Grand Century Place - NobleProg Hong Kong | Wed, 2019-01-09 09:30 | HK$21458 / HK$35058
Apache Arrow for Data Analysis across Disparate Data Sources - International Commerce Centre - NobleProg Hong Kong | Mon, 2019-01-14 09:30 | HK$21458 / HK$35058
Apache Arrow for Data Analysis across Disparate Data Sources - Central Plaza - NobleProg Hong Kong | Mon, 2019-01-14 09:30 | HK$21458 / HK$36058
Apache Arrow for Data Analysis across Disparate Data Sources - Yen Sheng Centre - NobleProg Hong Kong | Thu, 2019-01-24 09:30 | HK$21458 / HK$34658
Apache Arrow for Data Analysis across Disparate Data Sources - The Center - NobleProg Hong Kong | Tue, 2019-02-05 09:30 | HK$21458 / HK$36058

Course Discounts

Course | Venue | Course Date | Course Price [Remote / Classroom]
Matlab for Predictive Analytics | Grand Century Place - NobleProg Hong Kong | Wed, 2019-02-06 09:30 | HK$41342 / HK$57742
Machine Learning on iOS | The Center - NobleProg Hong Kong | Wed, 2019-02-13 09:30 | HK$27562 / HK$42162
Introduction to R | International Commerce Centre - NobleProg Hong Kong | Mon, 2019-03-18 09:30 | HK$41342 / HK$57742
Machine Learning Fundamentals with R | Miramar - NobleProg Hong Kong | Tue, 2019-04-30 09:30 | HK$27562 / HK$41162
BPMN 2.0 for Business Analysts | Yen Sheng Centre - NobleProg Hong Kong | Wed, 2019-05-01 09:30 | HK$41342 / HK$57142

Course Discounts Newsletter

We respect the privacy of your email address. We will not pass on or sell your address to others.
You can always change your preferences or unsubscribe completely.


NobleProg is growing fast!

We are looking to expand our presence in Hong Kong!

As a Business Development Manager you will:

  • expand business in Hong Kong
  • recruit local talent (sales, agents, trainers, consultants)

We offer:

  • Artificial Intelligence and Big Data systems to support your local operation
  • high-tech automation
  • continuously upgraded course catalogue and content
  • good fun in an international team

If you are interested in running a high-tech, high-quality training and consulting business, we would like to hear from you.

Apply now!