To perform text analytics at scale with Apache Spark, first load the dataset into a DataFrame (or, for lower-level control, an RDD). Next, preprocess the text using Spark ML's built-in feature transformers, such as Tokenizer for tokenization and StopWordsRemover for stop-word removal; note that stemming is not built into Spark ML and typically requires an external library (such as NLTK or spaCy) applied through a UDF. Finally, apply text analytics algorithms, such as topic modeling (for example, LDA from Spark MLlib) or sentiment analysis, to the preprocessed features to extract insights and patterns.