
Answer-first summary for fast verification
Answer: Locate the Kubeflow Pipelines repository on GitHub. Find the BigQuery Query Component, copy that component's URL, and use it to load the component into your pipeline. Use the component to execute queries against BigQuery.
The correct answer is D. Kubeflow Pipelines provides pre-built, reusable components for common tasks, including querying BigQuery. Loading the BigQuery Query Component directly from the Kubeflow Pipelines repository on GitHub minimizes the custom code you have to write and maintain. This is simpler and less error-prone than running the query manually in the BigQuery console (Option A), which sits outside the automated pipeline entirely, or writing and maintaining your own query code (Options B and C). Using the pre-built component keeps the workflow simple and lets the pipeline run end to end with minimal manual intervention.
Author: LeetQuiz Editorial Team
You are developing a Kubeflow pipeline on Google Kubernetes Engine (GKE) to automate your machine learning workflow. The first step in the pipeline is to issue a query against BigQuery to extract data. The results of this query will be used as input for the next step in your pipeline. Considering the need for simplicity and efficiency in implementing this step, what should you do?
A
Use the BigQuery console to execute your query, and then save the query results into a new BigQuery table.
B
Write a Python script that uses the BigQuery API to execute queries against BigQuery. Execute this script as the first step in your Kubeflow pipeline.
C
Use the Kubeflow Pipelines domain-specific language to create a custom component that uses the Python BigQuery client library to execute queries.
D
Locate the Kubeflow Pipelines repository on GitHub. Find the BigQuery Query Component, copy that component's URL, and use it to load the component into your pipeline. Use the component to execute queries against BigQuery.