You developed a Vertex AI pipeline that trains a classification model on data stored in a large BigQuery table. The pipeline has four steps, each created from a Python function using the Kubeflow Pipelines (KFP) SDK v2. You observe high development costs, particularly during the data export and preprocessing steps. You need to reduce model development costs, especially for frequent model iterations that adjust the code and parameters of the training step. What should you do to optimize the costs?
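
For context, the sketch below shows roughly how such a pipeline might be structured with the KFP v2 SDK and submitted to Vertex AI Pipelines. All component names, parameter names, and the table name are illustrative assumptions, and the component bodies are placeholders; the `enable_caching` flag is a real Vertex AI PipelineJob option that relates to the iteration-cost concern in the question.

```python
# Minimal sketch (assumed names) of a four-step classification pipeline
# built with the Kubeflow Pipelines v2 SDK and run on Vertex AI Pipelines.
from kfp import dsl, compiler
from google.cloud import aiplatform


@dsl.component(base_image="python:3.10")
def export_data(source_table: str, dataset: dsl.Output[dsl.Dataset]):
    # Placeholder: export rows from the BigQuery table to a file artifact.
    with open(dataset.path, "w") as f:
        f.write(source_table)


@dsl.component(base_image="python:3.10")
def preprocess(raw: dsl.Input[dsl.Dataset], features: dsl.Output[dsl.Dataset]):
    # Placeholder: clean and transform the exported data.
    with open(raw.path) as src, open(features.path, "w") as dst:
        dst.write(src.read())


@dsl.component(base_image="python:3.10")
def train(features: dsl.Input[dsl.Dataset], learning_rate: float,
          model: dsl.Output[dsl.Model]):
    # Placeholder: train the classifier; this is the step that changes
    # on every iteration of code and parameters.
    with open(model.path, "w") as f:
        f.write(f"trained with lr={learning_rate}")


@dsl.component(base_image="python:3.10")
def evaluate(model: dsl.Input[dsl.Model]) -> float:
    # Placeholder: evaluate the trained model and return a metric.
    return 0.0


@dsl.pipeline(name="classification-training")
def training_pipeline(source_table: str, learning_rate: float = 0.01):
    exported = export_data(source_table=source_table)
    prepped = preprocess(raw=exported.outputs["dataset"])
    trained = train(features=prepped.outputs["features"],
                    learning_rate=learning_rate)
    evaluate(model=trained.outputs["model"])


if __name__ == "__main__":
    compiler.Compiler().compile(training_pipeline, "pipeline.json")
    # enable_caching lets steps whose inputs and code have not changed
    # reuse results from prior runs instead of re-executing.
    job = aiplatform.PipelineJob(
        display_name="classification-training",
        template_path="pipeline.json",
        parameter_values={"source_table": "project.dataset.table"},  # assumed
        enable_caching=True,
    )
    job.run()
```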