We will be offering hands-on workshops on how to use Apache Beam in different scenarios and with different infrastructure.
This workshop is for participants who are getting started with Apache Beam. We will get hands-on experience writing Beam pipelines and discuss the fundamentals of the Beam programming model and its SDKs.
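To give a flavor of what we will build, here is a minimal word-count-style batch pipeline in the Python SDK; the input and output paths are placeholders, and it runs locally on the DirectRunner by default:

```python
import apache_beam as beam

# A minimal batch pipeline: read lines, split into words, count, and write.
with beam.Pipeline() as pipeline:
    (
        pipeline
        | "Read" >> beam.io.ReadFromText("input.txt")        # placeholder path
        | "Split" >> beam.FlatMap(lambda line: line.split())
        | "Count" >> beam.combiners.Count.PerElement()
        | "Format" >> beam.MapTuple(lambda word, n: f"{word}: {n}")
        | "Write" >> beam.io.WriteToText("counts")           # placeholder prefix
    )
```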
We will work through an end-to-end example of using the Python Beam SDK to build maintainable, reliable, and scalable production pipelines for deep learning.
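As a rough sketch of the pattern such pipelines build on (the model path and load_model helper are hypothetical stand-ins for your framework's loading call), inference in Beam is commonly wrapped in a DoFn that loads the model once per worker in setup():

```python
import apache_beam as beam

class RunModel(beam.DoFn):
    """Applies a deep learning model to each element.

    Loading the model in setup() amortizes the load cost across all
    elements a worker processes, which matters for large models.
    """

    def __init__(self, model_path):
        self._model_path = model_path  # assumed: path to a saved model
        self._model = None

    def setup(self):
        self._model = load_model(self._model_path)  # hypothetical loader

    def process(self, element):
        yield self._model.predict(element)
```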
We will implement a Google Cloud Dataflow pipeline through a series of labs exploring features such as autoscaling, monitoring, and optimization.
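A sketch of the kind of options such a pipeline is launched with; the project, region, and bucket names are placeholders, and the autoscaling settings shown are standard Dataflow worker options:

```python
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

# Placeholder Dataflow settings; autoscaling is capped at 10 workers.
options = PipelineOptions(
    runner="DataflowRunner",
    project="my-gcp-project",
    region="us-central1",
    temp_location="gs://my-bucket/temp",
    autoscaling_algorithm="THROUGHPUT_BASED",
    max_num_workers=10,
)

with beam.Pipeline(options=options) as pipeline:
    ...  # pipeline transforms go here
```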
Learn how to install Conda on your worker machines, personalize your workers’ environment, and integrate it with your Apache Beam pipeline.
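Assuming a Beam release that supports custom SDK containers, the integration step amounts to pointing the pipeline at an image built with Conda preinstalled; the image URI below is a placeholder:

```python
from apache_beam.options.pipeline_options import PipelineOptions

# Run workers from a custom container that already has Conda and your
# dependencies installed (placeholder image URI).
options = PipelineOptions(
    runner="DataflowRunner",
    sdk_container_image="gcr.io/my-project/beam-conda-worker:latest",
)
```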
We will explore an end-to-end example that combines batch and streaming aspects in one uniform Beam pipeline, and deploy it to a fully managed environment with Amazon Kinesis Data Analytics.
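One reason a single pipeline can serve both cases is that windowed aggregations in Beam behave the same over bounded and unbounded inputs; a minimal sketch, assuming events are dicts with a "user" key:

```python
import apache_beam as beam
from apache_beam.transforms.window import FixedWindows

def count_per_user_per_minute(events):
    """The same windowed count works whether `events` is a bounded
    (batch) or unbounded (streaming) PCollection."""
    return (
        events
        | "Window" >> beam.WindowInto(FixedWindows(60))      # 1-minute windows
        | "KeyByUser" >> beam.Map(lambda e: (e["user"], 1))  # assumed schema
        | "Count" >> beam.CombinePerKey(sum)
    )
```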
We will introduce Interactive Beam through Jupyter notebooks with examples that use publicly available, real-world data.
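A small example of what those notebooks look like; this uses the Interactive Beam API that ships with the Python SDK:

```python
import apache_beam as beam
import apache_beam.runners.interactive.interactive_beam as ib
from apache_beam.runners.interactive.interactive_runner import InteractiveRunner

# In a Jupyter notebook: build the pipeline on the InteractiveRunner,
# then inspect intermediate PCollections as you go.
p = beam.Pipeline(InteractiveRunner())
words = p | beam.Create(["interactive", "beam", "beam"])
counts = words | beam.combiners.Count.PerElement()

ib.show(counts)  # renders the PCollection's contents in the notebook
```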
A step-by-step walkthrough of how to write custom TFX components with Apache Beam, so you can customize your ML pipelines beyond the standard components and tailor them to your needs.
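As a hedged sketch of the simplest route (the component name and its behavior are hypothetical; Beam-powered components would additionally run a Beam pipeline inside their executor), TFX's Python-function decorator turns an annotated function into a custom component:

```python
from tfx.dsl.component.experimental.annotations import Parameter
from tfx.dsl.component.experimental.decorators import component

@component
def GreetComponent(name: Parameter[str]):
    # Hypothetical component: a real one would read input artifacts and
    # write output artifacts, possibly via an Apache Beam pipeline.
    print(f"Hello, {name}!")
```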