Dataflow ML Development Template
As part of Google Cloud's Dataflow service, Dataflow ML makes it possible to run scalable local and remote inference in batch and streaming pipelines. It also simplifies preparing data for model training and handling the results of model predictions. Google's new Dataflow ML Starter project provides all of the boilerplate and scaffolding needed to build and launch a Beam pipeline quickly and simply.
To classify the provided images, the Dataflow ML Starter project uses a simple Beam RunInference pipeline that applies an image classification model. As shown in Figure 1, the pipeline pre-processes the input images, runs a PyTorch or TensorFlow image classification model, post-processes the results, and writes all predictions back to a GCS output file. In streaming mode, it instead subscribes to a Pub/Sub source to receive the image GCS paths.
The project walks the user through each stage of the Dataflow ML development process, including:
- Developing the Beam pipeline in a local Python environment and writing unit tests to validate it
- Running the Beam RunInference job on CPUs with DataflowRunner
- Accelerating inference with GPUs, including building and testing a custom container on GCE virtual machines, with several sample Dockerfiles
- Classifying images with Pub/Sub as the streaming source
- Packaging all of the code as a Dataflow Flex Template
In short, the project produces a standard boilerplate template that is easy to customize to your own requirements.
To get started, go to the GitHub repository and follow the instructions there. Google believes this starter project will be a useful resource for anyone using Dataflow ML, and is excited to share it with the community and see how it enables data engineers and developers to accomplish their goals. If you find it useful, please give it a star!