Consider a scenario where you are tasked with splitting a large distributed dataset using Spark ML. The dataset contains 10 million records and is stored in a Hive table. Describe the steps you would take to perform an effective split while minimizing data skew and keeping the training and testing subsets representative of the overall dataset. Additionally, discuss potential challenges and how you would address them.
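The sketch below illustrates one way such a split might be approached in PySpark: reading the Hive table, performing either a seeded random split or a stratified split to keep class proportions representative, and repartitioning to reduce skew. The table name `warehouse.events`, the label column `label`, and the unique key `record_id` are hypothetical placeholders, not names from the question.

```python
# A minimal sketch, assuming a Hive table `warehouse.events` with a
# categorical column `label` and a unique key `record_id` (all hypothetical).
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("train-test-split")
    .enableHiveSupport()
    .getOrCreate()
)

df = spark.table("warehouse.events")

# Cache before splitting so the two subsets come from the same evaluation
# of the source data rather than two separate re-reads.
df.cache()

# Option 1: simple random split with a fixed seed for reproducibility.
train, test = df.randomSplit([0.8, 0.2], seed=42)

# Option 2: stratified sampling so class proportions are preserved.
# Sample ~80% of each label value, then derive the test set as the
# complement via an anti-join on the unique key.
fractions = {row["label"]: 0.8 for row in df.select("label").distinct().collect()}
train_strat = df.stat.sampleBy("label", fractions=fractions, seed=42)
test_strat = df.join(train_strat.select("record_id"), on="record_id", how="left_anti")

# Repartition before training to even out partition sizes and limit the
# impact of skewed keys on downstream stages.
train_strat = train_strat.repartition(200)
```

Note that `sampleBy` fractions are approximate rather than exact row counts, and the repartition factor (200 here) would normally be tuned to the cluster's executor count and data volume.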