
Q2 – In a translation model based on the Transformer architecture (like T5 or MarianMT), what is the role of the decoder?
A. Encodes the input sentence into embeddings
B. Predicts the next word using encoded context
C. Removes noise from text
D. Performs tokenization
Explanation:
In Transformer-based translation models like T5 or MarianMT:
Role of the Decoder:
The decoder's primary function is to generate the target-language output sequence (the translated text) one token at a time.
It uses the encoded context produced by the encoder to predict the next word in the output sequence.
At each step, the decoder attends to its own previously generated tokens through masked self-attention, and to the encoder's output through encoder-decoder (cross-) attention.
Why other options are incorrect:
A: Encoding the input sentence into embeddings is the role of the encoder, not the decoder
C: Removing noise from text is not the decoder's role; denoising describes a pretraining objective used by some models (e.g., T5's span corruption), not the decoder's function during translation
D: Tokenization is a preprocessing step performed before the input reaches the model architecture
Key Points:
Encoder processes the source language input and creates contextual representations
Decoder uses these representations to generate the target language output
The decoder operates autoregressively, predicting each token based on previous tokens and encoder context
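The autoregressive loop described in the key points can be sketched with a toy stand-in for the model. The lookup-table "decoder" below is purely illustrative (real models like T5 or MarianMT use learned Transformer layers and a vocabulary-wide softmax), but the control flow is the same: the encoder runs once over the source sentence, and the decoder is called repeatedly, each time conditioning on the encoder context plus its own previous outputs until it emits an end-of-sequence token.

```python
def encode(source_tokens):
    # Stand-in for the encoder: runs once and returns a fixed "context".
    # (A real encoder would return contextual embeddings per token.)
    return tuple(source_tokens)

def decoder_step(context, generated):
    # Stand-in for one decoder step: predicts the next target token from
    # the encoder context and the tokens generated so far (autoregression).
    # Hypothetical lookup table, not a real model.
    table = {
        (("je", "t'aime"), ()): "i",
        (("je", "t'aime"), ("i",)): "love",
        (("je", "t'aime"), ("i", "love")): "you",
        (("je", "t'aime"), ("i", "love", "you")): "<eos>",
    }
    return table[(context, tuple(generated))]

def translate(source_tokens, max_len=10):
    context = encode(source_tokens)      # encoder runs once
    generated = []
    for _ in range(max_len):             # decoder runs once per output token
        next_token = decoder_step(context, generated)
        if next_token == "<eos>":        # stop at end-of-sequence
            break
        generated.append(next_token)
    return generated

print(translate(["je", "t'aime"]))  # -> ['i', 'love', 'you']
```

Note how the decoder never sees future tokens: each prediction depends only on the encoder context and the prefix generated so far, which is exactly what masked self-attention enforces in a real Transformer decoder.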