
Question: 3
You are preparing a large legal document for summarization by a generative AI model. The document has many chapters, and each chapter contains multiple sections of varying length. The model you are using has a 2048-token processing limit. Which of the following chunking strategies would best ensure efficient processing of the document without exceeding the token limit?
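To make the trade-off concrete, here is a minimal sketch of one section-aware chunking strategy: whole sections are packed into chunks that stay under the token limit, and a section that alone exceeds the limit is hard-split. The helper names are hypothetical, and the whitespace word count is a crude stand-in for the model's real tokenizer.

```python
TOKEN_LIMIT = 2048

def count_tokens(text: str) -> int:
    # Crude stand-in for a real tokenizer: count whitespace-delimited words.
    return len(text.split())

def chunk_sections(sections: list[str], limit: int = TOKEN_LIMIT) -> list[str]:
    """Pack whole sections into chunks without exceeding the token limit.

    A section that is itself longer than the limit is hard-split on
    word boundaries; everything else is kept intact to preserve context.
    """
    chunks: list[str] = []
    current: list[str] = []
    current_len = 0
    for section in sections:
        n = count_tokens(section)
        if n > limit:
            # Flush the in-progress chunk, then split the oversized section.
            if current:
                chunks.append(" ".join(current))
                current, current_len = [], 0
            words = section.split()
            for i in range(0, len(words), limit):
                chunks.append(" ".join(words[i:i + limit]))
        elif current_len + n > limit:
            # Adding this section would overflow: start a new chunk.
            chunks.append(" ".join(current))
            current, current_len = [section], n
        else:
            current.append(section)
            current_len += n
    if current:
        chunks.append(" ".join(current))
    return chunks
```

In practice the word count would be replaced by the target model's tokenizer, and some of the 2048-token budget would be reserved for the prompt and the generated summary.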