
Q6 – How do Transformer-based LLMs generate text?
A. By classifying text into predefined categories
B. By predicting the next token based on all previous tokens using attention
C. By copying and paraphrasing input directly
D. By compressing data using latent vectors
Correct answer: B
Explanation:
Transformer-based LLMs generate text autoregressively: they predict the next token in a sequence conditioned on all previous tokens using attention mechanisms. This is the core principle of autoregressive language modeling, in which the model processes the entire context of previous tokens to produce the most probable next token. The attention mechanism lets the model weigh the importance of different parts of the input sequence when making each prediction, allowing it to capture long-range dependencies and contextual relationships in text.
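To make the autoregressive loop concrete, here is a minimal sketch (our own illustration, not part of the question) that assumes the Hugging Face transformers library and the publicly available "gpt2" checkpoint. At each step the model attends over all previous tokens, the logits at the last position score every vocabulary item as the next token, and the greedily chosen token is appended to the context before the next step.

```python
# Minimal sketch of autoregressive (next-token) generation.
# Assumes the Hugging Face transformers library and the "gpt2" checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "Transformers generate text by"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(20):  # generate 20 new tokens, one at a time
        # The model attends over ALL previous tokens; we keep only the
        # logits at the last position, which score every vocabulary item
        # as a candidate next token.
        logits = model(input_ids).logits[:, -1, :]
        next_token = torch.argmax(logits, dim=-1, keepdim=True)  # greedy pick
        # Append the predicted token: it becomes part of the context
        # that the model attends over in the next step.
        input_ids = torch.cat([input_ids, next_token], dim=-1)

print(tokenizer.decode(input_ids[0], skip_special_tokens=True))
```

In practice, production systems replace the greedy argmax with sampling strategies (temperature, top-k, nucleus sampling), but the underlying mechanism is the same one described in option B: each new token is predicted from the full preceding context via attention.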