
Answer-first summary for fast verification
Answer: experiment_dir
The correct answer is **C. experiment_dir**. This parameter tells AutoML where to store the generated notebooks and experiment files, giving you a structured workspace for later access and analysis.

**How to use experiment_dir:**

```python
from databricks import automl

# Assuming your regression dataset is a Spark DataFrame named `data`
summary = automl.regress(
    dataset=data,
    target_col="target_column",  # column containing the target values to predict
    experiment_dir="/Users/<username>/regression_run",  # workspace directory for the experiment
    # ... other AutoML parameters
)
```

**Why the other options are incorrect:**

- **feature_store_lookups:** Supplies feature lookups from a Feature Store table; unrelated to experiment storage.
- **exclude_cols:** Excludes specific columns from training; unrelated to experiment storage.
- **primary_metric:** Specifies the metric used to evaluate and rank trial models; unrelated to experiment storage.

**Key points:**

- Use **experiment_dir** for organized experiment management.
- The path is a directory in your Databricks workspace (for example, under `/Users/<username>/`), not a DBFS path.

**Benefits of using experiment_dir:**

- Enhanced organization and traceability.
- Easier collaboration and sharing of results.
- Ability to revisit experiments for analysis or model re-deployment.
- Easier tracking of progress and comparison of different AutoML runs.
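To make the contrast between the options concrete, here is a small runnable sketch that groups each quiz option's parameter by what it actually controls. The parameter names match the Databricks AutoML API, but the values (paths, table and column names, metric) are made-up examples, and on Databricks these keyword arguments would be passed to a single `automl.regress(...)` call:

```python
# Where artifacts are saved (the correct answer): a workspace directory.
experiment_params = {
    "experiment_dir": "/Users/someone@example.com/automl_experiments",
}

# What data goes into training: column exclusions and feature lookups.
data_params = {
    "exclude_cols": ["row_id", "free_text_notes"],  # columns AutoML should ignore
    # Hypothetical feature table and key, illustrating the lookup structure:
    "feature_store_lookups": [
        {"table_name": "catalog.schema.customer_features", "lookup_key": "customer_id"}
    ],
}

# How trial models are evaluated and ranked.
evaluation_params = {
    "primary_metric": "rmse",
}

# On Databricks these would be combined into one automl.regress(...) call:
all_params = {**experiment_params, **data_params, **evaluation_params}
```

Grouping the parameters this way makes the distinction in the question obvious: only `experiment_dir` concerns where AutoML stores its outputs; the other three shape the training data or the evaluation.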
Author: LeetQuiz Editorial Team