
Answer-first summary for fast verification
Answer: Predicts the next word using encoded context
In Transformer-based translation models like T5 or MarianMT:

**Role of the Decoder:**
- The decoder's primary function is to generate the target-language output sequence (the translated text) one token at a time
- It uses the encoded context from the encoder to predict the next word in the output sequence
- The decoder attends to both the encoder's output and its own previous outputs through self-attention and encoder-decoder attention mechanisms

**Why the other options are incorrect:**
- **A**: Encoding the input sentence into embeddings is the role of the encoder, not the decoder
- **C**: Removing noise from text is not a function of the decoder in translation models
- **D**: Tokenization is a preprocessing step performed before the input reaches the model architecture

**Key Points:**
1. The encoder processes the source-language input and creates contextual representations
2. The decoder uses these representations to generate the target-language output
3. The decoder operates autoregressively, predicting each token based on previous tokens and the encoder context
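The autoregressive loop described above can be sketched in plain Python. This is a toy illustration, not a real model: `toy_decoder_step` is a hypothetical stand-in for a decoder forward pass that would attend to the encoder output and to the tokens generated so far, and the "encoder output" here is just a list of strings rather than real contextual vectors.

```python
# Toy sketch of greedy autoregressive decoding (illustrative only).
ENCODER_OUTPUT = ["Die", "Katze", "sitzt"]  # pretend encoder representations
EOS = "<eos>"

def toy_decoder_step(encoder_output, generated):
    """Hypothetical stand-in for a decoder forward pass.

    A real decoder would attend over `encoder_output` (encoder-decoder
    attention) and over `generated` (masked self-attention), then return
    the most probable next token. Here it simply copies the next source
    position and emits <eos> when the source is exhausted.
    """
    i = len(generated)
    return encoder_output[i] if i < len(encoder_output) else EOS

def greedy_decode(encoder_output, max_len=10):
    generated = []
    for _ in range(max_len):
        next_token = toy_decoder_step(encoder_output, generated)
        if next_token == EOS:
            break
        generated.append(next_token)  # fed back in on the next step
    return generated

print(greedy_decode(ENCODER_OUTPUT))
```

The essential point the sketch captures is option B: at every step the decoder conditions on the fixed encoded context *and* on its own previously generated tokens, which is exactly the autoregressive behavior described in the key points.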
Author: Ritesh Yadav
Q2 – In a translation model based on the Transformer architecture (like T5 or MarianMT), what is the role of the decoder?
A
Encodes the input sentence into embeddings
B
Predicts the next word using encoded context
C
Removes noise from text
D
Performs tokenization