
Q6 – How do Transformer-based LLMs generate text?
A. By classifying text into predefined categories
B. By predicting the next token based on all previous tokens using attention (correct)
C. By copying and paraphrasing input directly
D. By compressing data using latent vectors
Explanation:
Transformer-based LLMs (Large Language Models) generate text autoregressively: they predict the next token in a sequence conditioned on all previous tokens, append that token to the input, and repeat until the sequence is complete. The key innovation is the self-attention mechanism, which lets the model weigh the importance of every earlier token in the sequence when making each prediction. This enables the model to capture long-range dependencies and contextual relationships effectively, rather than simply classifying, copying, or compressing text.
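To make the attention step concrete, here is a minimal sketch of causal (masked) scaled dot-product attention, assuming PyTorch; the tensor shapes and names are illustrative and not tied to any particular model. Each output position is a weighted mix of the value vectors of all earlier positions, with the weights computed from query-key similarity.

```python
# A minimal sketch of causal scaled dot-product attention, assuming PyTorch.
# Shapes and names here are illustrative, not from any specific model.
import math
import torch

def causal_attention(q, k, v):
    """q, k, v: (seq_len, d) tensors for a single attention head."""
    seq_len, d = q.shape
    scores = q @ k.T / math.sqrt(d)  # (seq_len, seq_len) token-to-token similarity
    # Mask out future positions so each token attends only to earlier tokens.
    mask = torch.triu(torch.ones(seq_len, seq_len, dtype=torch.bool), diagonal=1)
    scores = scores.masked_fill(mask, float("-inf"))
    weights = torch.softmax(scores, dim=-1)  # how much each previous token matters
    return weights @ v                       # context-aware token representations

q = k = v = torch.randn(5, 16)  # 5 tokens, 16-dim embeddings
out = causal_attention(q, k, v)  # (5, 16)
```

And here is a minimal sketch of the autoregressive generation loop itself, assuming the Hugging Face `transformers` library and the publicly available `gpt2` checkpoint (greedy decoding, for simplicity; real systems typically sample instead):

```python
# A minimal sketch of autoregressive (next-token) generation, assuming the
# Hugging Face "transformers" library and the public "gpt2" checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

input_ids = tokenizer("The attention mechanism lets the model",
                      return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(20):  # generate 20 new tokens, one at a time
        logits = model(input_ids).logits                 # (1, seq_len, vocab_size)
        next_token = logits[:, -1, :].argmax(dim=-1, keepdim=True)  # greedy pick
        input_ids = torch.cat([input_ids, next_token], dim=-1)  # condition on it next step

print(tokenizer.decode(input_ids[0]))
```

Note how the loop embodies the answer: each predicted token is fed back in, so every new prediction attends over the full history so far.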