
In the context of developing a Kubeflow pipeline on Google Kubernetes Engine, you are tasked with issuing a query against BigQuery as the initial step and using its results as the input for the subsequent step. Considering the need for efficiency, reusability, and minimal setup complexity, which of the following approaches is the most straightforward and aligns with best practices for Kubeflow pipeline development? (Choose one correct option)
A
Develop a custom Python script that uses the BigQuery API to execute the query, run it as the first step in your Kubeflow pipeline, and pass its results to the next step.
B
Manually execute the query using the BigQuery console, save the results into a new BigQuery table, and then configure your pipeline to read from this table.
C
Create a custom component with the Kubeflow Pipelines domain-specific language (DSL) that uses the BigQuery Python client library to perform the query (see the first sketch after the options).
D
Locate the BigQuery Query Component in the Kubeflow Pipelines GitHub repository, use its URL to import the component into your pipeline, and configure it to execute your query (see the second sketch after the options).
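
For reference, here is a minimal sketch of what option C could look like, assuming the KFP v2 SDK (`kfp.dsl.component`) and the `google-cloud-bigquery` client library; the component and parameter names (`run_bq_query`, `result_csv`) are illustrative, not part of the question:

```python
from kfp import dsl


@dsl.component(
    base_image="python:3.10",
    packages_to_install=["google-cloud-bigquery", "pandas", "db-dtypes"],
)
def run_bq_query(query: str, project_id: str, result_csv: dsl.Output[dsl.Dataset]):
    """Run the query in BigQuery and write the result set to a CSV artifact."""
    from google.cloud import bigquery

    client = bigquery.Client(project=project_id)
    # Execute the query and materialize the rows as a pandas DataFrame.
    df = client.query(query).to_dataframe()
    df.to_csv(result_csv.path, index=False)


@dsl.pipeline(name="bq-custom-component-pipeline")
def bq_pipeline(project_id: str, query: str):
    # Step 1: run the query; its CSV output becomes a pipeline artifact.
    query_task = run_bq_query(query=query, project_id=project_id)
    # Step 2 (not shown): a downstream component would consume
    # query_task.outputs["result_csv"] as its input artifact.
```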
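
Option D, by contrast, would be wired up roughly as in the following sketch, assuming an SDK version that provides `kfp.components.load_component_from_url`; the component URL and input names (`query`, `project_id`, `output_gcs_path`) reflect the GCP BigQuery query component in the kubeflow/pipelines repository and should be checked against that component's `component.yaml` for the release you pin to:

```python
from kfp import dsl
from kfp.components import load_component_from_url

# Illustrative URL: pin to a released tag of kubeflow/pipelines and confirm the
# path of the BigQuery query component's component.yaml before relying on it.
BQ_COMPONENT_URL = (
    "https://raw.githubusercontent.com/kubeflow/pipelines/"
    "1.7.0/components/gcp/bigquery/query/component.yaml"
)

# Turn the reusable component definition into a pipeline step factory.
bigquery_query_op = load_component_from_url(BQ_COMPONENT_URL)


@dsl.pipeline(
    name="bq-reusable-component-pipeline",
    description="Run a BigQuery query, then hand its exported results to the next step.",
)
def bq_pipeline(project_id: str, query: str, output_gcs_path: str):
    # Step 1: execute the query with the prebuilt component; it writes the
    # results to the given GCS path so downstream steps can read them.
    query_task = bigquery_query_op(
        query=query,
        project_id=project_id,
        output_gcs_path=output_gcs_path,
    )
    # Step 2 (not shown): a downstream component takes
    # query_task.outputs["output_gcs_path"] as its input.
```

The practical trade-off between C and D is maintenance effort: a prebuilt, versioned component avoids writing, containerizing, and testing query code yourself, while a custom component gives you full control over the output format handed to the next step.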