A nightly Spark batch job ingests Parquet data from an upstream source located at `/mnt/raw_orders/{{date}}`. The job applies `dropDuplicates(["customer_id", "order_id"])` to the incoming DataFrame before writing to the target table `orders` in `append` mode. If the upstream system occasionally generates duplicate order entries across different batches, how will those duplicate records be handled in the target table?
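
For reference, here is a minimal PySpark sketch of the job the question describes. The path, key columns, and table name come from the question itself; the session setup and the `batch_date` value are illustrative placeholders standing in for the `{{date}}` template.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("nightly_orders_ingest").getOrCreate()

# Hypothetical value standing in for the {{date}} template the scheduler fills in.
batch_date = "2024-01-15"

# Read the current day's Parquet partition from the upstream mount.
raw_df = spark.read.parquet(f"/mnt/raw_orders/{batch_date}")

# Deduplicate on the composite key. Note that dropDuplicates operates only
# on the rows present in this DataFrame, i.e., the current batch.
deduped_df = raw_df.dropDuplicates(["customer_id", "order_id"])

# Append the deduplicated batch to the target table.
deduped_df.write.mode("append").saveAsTable("orders")
```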