
Answer-first summary for fast verification
Answer: All Delta Lake transactions are ACID compliant against a single table, and Databricks does not enforce foreign key constraints.
## Explanation

**Correct Answer: D**

This is the correct answer because it accurately describes key architectural differences between traditional relational databases and the Databricks Lakehouse:

1. **Delta Lake ACID compliance**: Delta Lake provides ACID (Atomicity, Consistency, Isolation, Durability) transactions, but these guarantees apply **within a single table**, not across multiple tables in a single transaction.
2. **No foreign key constraint enforcement**: Databricks does not enforce foreign key constraints at the database level. You can declare foreign key relationships in your data model for documentation purposes, but Databricks will not validate or enforce them during write operations.

**Why the other options are incorrect:**

- **A**: Incorrect. Databricks does not enforce foreign key constraints at all, regardless of the identifier type.
- **B**: Incorrect. While Databricks supports Spark SQL and JDBC, the underlying architecture differs from a traditional RDBMS; multi-table inserts with foreign key validation cannot be migrated directly without refactoring.
- **C**: Incorrect. This describes a traditional RDBMS concern with table locking; Delta Lake uses optimistic concurrency control rather than table-level locking.
- **E**: Incorrect. Delta Lake does provide upsert functionality (MERGE), but that is unrelated to foreign key constraints, which Databricks does not enforce.

**Migration Implications:**

When migrating from a traditional RDBMS with foreign key constraints to the Databricks Lakehouse, the data engineer must:

1. Implement data quality checks separately (for example, with Delta Live Tables expectations or custom validation logic).
2. Handle referential integrity at the application level.
3. Consider using Delta Lake's MERGE operation for upserts.
4. Understand that multi-table transactions with cross-table rollback are not natively supported.
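Because Databricks does not enforce foreign keys, referential integrity checks like the ones above must live in the pipeline itself. The sketch below shows the core idea in plain Python; the set-based lookup stands in for a Spark left-anti join against a dimension table, and the table names and fields (`sales`, `customers`, `customer_id`) are hypothetical:

```python
# Application-level referential-integrity check: find fact rows whose
# foreign key has no matching key in the dimension table.
# In a real workload this would be a Spark left-anti join; plain Python
# keeps the sketch self-contained.

def find_orphan_rows(fact_rows, dim_keys, fk_field):
    """Return fact rows whose fk_field value is absent from dim_keys."""
    valid = set(dim_keys)
    return [row for row in fact_rows if row[fk_field] not in valid]

# Hypothetical star-schema data: sales facts referencing a customer dimension.
customers = [101, 102, 103]  # customer dimension primary keys
sales = [
    {"sale_id": 1, "customer_id": 101, "amount": 20.0},
    {"sale_id": 2, "customer_id": 999, "amount": 35.0},  # dangling reference
]

orphans = find_orphan_rows(sales, customers, "customer_id")
# Rows in `orphans` violate referential integrity and can be quarantined
# or rejected before the write, mimicking what an RDBMS FK would enforce.
```

The same quarantine-or-reject pattern is what a Delta Live Tables expectation expresses declaratively.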
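Migration point 3 refers to Delta Lake's MERGE operation. The keyed merge below sketches its matched/not-matched semantics in plain Python; it is purely illustrative, and a real workload would use `MERGE INTO` in Spark SQL or the `DeltaTable.merge` API instead:

```python
# Sketch of MERGE (upsert) semantics: rows whose key matches the target
# are updated, rows with a new key are inserted. Lists of dicts stand in
# for Delta tables.

def merge_upsert(target, updates, key):
    """Apply upsert semantics: matched keys updated, new keys inserted."""
    merged = {row[key]: dict(row) for row in target}
    for row in updates:
        # WHEN MATCHED THEN UPDATE / WHEN NOT MATCHED THEN INSERT
        merged[row[key]] = dict(row)
    return list(merged.values())

target = [{"id": 1, "qty": 5}, {"id": 2, "qty": 7}]
updates = [{"id": 2, "qty": 9}, {"id": 3, "qty": 1}]
result = merge_upsert(target, updates, "id")
# id 1 is unchanged, id 2 is updated to qty 9, id 3 is inserted
```

In Delta Lake the whole merge commits atomically, but only against that one target table, which is exactly the single-table ACID guarantee described in answer D.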
Author: Keng Suppaseth
A junior data engineer is migrating a workload from a relational database system to the Databricks Lakehouse. The source system uses a star schema, leveraging foreign key constraints and multi-table inserts to validate records on write. Which consideration will impact the decisions made by the engineer while migrating this workload?
**A.** Databricks only allows foreign key constraints on hashed identifiers, which avoid collisions in highly parallel writes.

**B.** Databricks supports Spark SQL and JDBC; all logic can be directly migrated from the source system without refactoring.

**C.** Committing to multiple tables simultaneously requires taking out multiple table locks and can lead to a state of deadlock.

**D.** All Delta Lake transactions are ACID compliant against a single table, and Databricks does not enforce foreign key constraints.

**E.** Foreign keys must reference a primary key field; multi-table inserts must leverage Delta Lake's upsert functionality.