In a translation model based on the Transformer architecture (like T5 or MarianMT), what is the role of the decoder?
A
Encodes the input sentence into embeddings
B
Predicts the next word using encoded context
C
Removes noise from text
D
Performs tokenization
Explanation:
In Transformer-based translation models like T5 or MarianMT:
Option B is correct because the decoder's primary function is to autoregressively generate the output sequence by predicting the next word based on the encoded input context and previously generated tokens.
Why the other options are incorrect:
Option A is incorrect because encoding the input sentence into embeddings is the encoder's job; the decoder consumes that encoded representation rather than producing it.
Option C is incorrect because removing noise from text is a preprocessing or denoising task, not part of the decoder's role in translation.
Option D is incorrect because tokenization is handled by the tokenizer before the text ever reaches the encoder-decoder model.
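The decoder's autoregressive loop can be illustrated with a minimal sketch. This is a toy stand-in, not the real T5 or MarianMT code: `encode` and `decoder_step` are hypothetical placeholders (a real decoder scores the whole vocabulary with learned attention), but the control flow mirrors how the decoder predicts each next token from the encoded context plus the tokens generated so far.

```python
def encode(source_tokens):
    # Stand-in for the encoder: just passes the tokens through as "context".
    # A real encoder would return contextual embeddings (option A's job).
    return list(source_tokens)

def decoder_step(context, generated):
    # Hypothetical next-token rule: echo the source token at the current
    # position, then emit the end-of-sequence marker. A real decoder uses
    # cross-attention over the context and self-attention over its history.
    pos = len(generated)
    return context[pos] if pos < len(context) else "</s>"

def greedy_decode(source_tokens, max_len=10):
    context = encode(source_tokens)   # fixed encoder output
    generated = []                    # decoder builds the output token by token
    for _ in range(max_len):
        # Each prediction conditions on BOTH the encoded context and the
        # previously generated tokens -- this is the autoregressive step.
        next_tok = decoder_step(context, generated)
        if next_tok == "</s>":        # stop once end-of-sequence is predicted
            break
        generated.append(next_tok)
    return generated

print(greedy_decode(["Hallo", "Welt"]))  # ['Hallo', 'Welt']
```

In production decoding (e.g. `model.generate()` in Hugging Face Transformers), the same loop runs with beam search or sampling instead of this toy greedy rule, but the dependence on encoded context plus prior tokens is identical.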