
In a translation model based on the Transformer architecture (like T5 or MarianMT), what is the role of the decoder?
A. Encodes the input sentence into embeddings
B. Predicts the next word using encoded context
C. Removes noise from text
D. Performs tokenization
Explanation:
In Transformer-based translation models like T5 or MarianMT:
- The encoder processes the input sentence and creates contextual embeddings.
- The decoder generates the output translation one token at a time, combining the encoded context from the encoder (via cross-attention) with the previously generated tokens (via masked self-attention) to predict the next word in the target language.
Option B is correct because the decoder's primary function is to autoregressively generate the output sequence by predicting the next word based on the encoded input context and previously generated tokens.
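To make this concrete, here is a minimal greedy-decoding sketch using the Hugging Face transformers library with PyTorch (both assumed installed) and the published Helsinki-NLP/opus-mt-en-de MarianMT checkpoint; the checkpoint choice and the 64-token cap are illustrative, not part of the question. The encoder runs once over the source sentence, then the decoder is called repeatedly, each step attending to the encoder output and to its own previously generated tokens:

```python
import torch
from transformers import MarianMTModel, MarianTokenizer

# Illustrative checkpoint; any Marian translation model works the same way.
model_name = "Helsinki-NLP/opus-mt-en-de"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
model.eval()

src = "The weather is nice today."
enc = tokenizer(src, return_tensors="pt")

with torch.no_grad():
    # Encoder: runs once, producing contextual embeddings of the source.
    encoder_out = model.get_encoder()(**enc)

    # Decoder: starts from the decoder start token and predicts one target
    # token per step, using cross-attention over the encoder output and
    # masked self-attention over its previously generated tokens.
    decoder_ids = torch.tensor([[model.config.decoder_start_token_id]])
    for _ in range(64):  # illustrative length cap
        out = model(
            encoder_outputs=encoder_out,
            decoder_input_ids=decoder_ids,
            attention_mask=enc["attention_mask"],
        )
        # Greedy decoding: take the most likely next token.
        next_id = out.logits[:, -1, :].argmax(dim=-1, keepdim=True)
        decoder_ids = torch.cat([decoder_ids, next_id], dim=-1)
        if next_id.item() == model.config.eos_token_id:
            break

print(tokenizer.decode(decoder_ids[0], skip_special_tokens=True))
```

In practice, `model.generate(**enc)` performs this same autoregressive loop in one call, with refinements such as beam search.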
Why other options are incorrect:
A: Encoding the input is the encoder's role, not the decoder's
C: Removing noise from text describes a pretraining objective (e.g., T5's span-corruption denoising), not the decoder's function during translation
D: Tokenization is typically a preprocessing step, not the decoder's main role