
You are tasked with processing a large dataset of genomic sequences for research purposes. The data is highly unstructured and requires complex transformation and analysis. How would you use Apache Spark to build an ETL pipeline for this use case, and what considerations are involved in handling such data?
A
Use Apache Spark's built-in functions to directly process the genomic sequences without any data transformation or schema definition.
B
Define a custom schema for the genomic sequences and use Apache Spark to read, transform, and process the data according to the defined schema, leveraging its machine learning and graph processing libraries.
C
Use a traditional database system to store and process the genomic sequences, as it can handle complex transformations and analysis more effectively than Apache Spark.
D
Ignore the unstructured nature of the data and apply a one-size-fits-all approach to processing the genomic sequences using Apache Spark.