Q2 – In a translation model based on the Transformer architecture (like T5 or MarianMT), what is the role of the decoder?
A. Encodes the input sentence into embeddings
B. Predicts the next word using encoded context
C. Removes noise from text
D. Performs tokenization
Explanation:
In Transformer-based translation models like T5 or MarianMT, the decoder's primary role is to generate the output sequence (translated text) by predicting the next word using the encoded context from the encoder. The encoder processes the input sentence and creates contextual embeddings, while the decoder uses these embeddings along with previously generated tokens to predict subsequent tokens in the output sequence. This autoregressive generation process is fundamental to sequence-to-sequence tasks like machine translation.
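The encoder-once, decoder-step-by-step flow described above can be sketched as a toy loop. The `encode` and `decoder_step` functions below are hypothetical stand-ins (a tuple and a lookup table, not a real Transformer); only the loop structure mirrors how autoregressive generation actually works in models like T5 or MarianMT.

```python
def encode(src_tokens):
    # Stand-in for the encoder: runs once, producing a fixed "context"
    # (a real encoder would return contextual embeddings).
    return tuple(src_tokens)

def decoder_step(context, generated):
    # Stand-in for one decoder forward pass: predict the next target token
    # from the encoded context plus the tokens generated so far.
    translations = {("le", "chat"): ["the", "cat", "<eos>"]}  # toy lookup
    return translations[context][len(generated)]

def translate(src_tokens, max_len=10):
    context = encode(src_tokens)      # encoder runs exactly once
    generated = []
    for _ in range(max_len):          # decoder runs autoregressively
        next_token = decoder_step(context, generated)
        if next_token == "<eos>":     # stop at end-of-sequence token
            break
        generated.append(next_token)
    return generated

print(translate(["le", "chat"]))  # ['the', 'cat']
```

The key point the sketch illustrates: the encoder produces its representation a single time, while the decoder is called repeatedly, each call conditioned on both the encoded context and its own previous outputs.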