
Answer-first summary for fast verification
Answer: Self-Attention mechanism
The correct answer is **B. Self-Attention mechanism**.

**Explanation:**

- **Self-Attention mechanism** (Option B) is the core component of the Transformer architecture that lets the model weigh the importance of each word in a sentence relative to every other word. It processes all words in parallel and captures contextual relationships simultaneously, unlike sequential models.
- **Recurrent loops** (Option A) are used in RNNs and LSTMs, which process sequences one step at a time and therefore cannot capture all relationships simultaneously.

The self-attention mechanism allows Transformers to handle long-range dependencies efficiently and has become fundamental to modern NLP models such as BERT and GPT. A minimal sketch of the computation follows below.
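To make the "all words attend to all words in parallel" point concrete, here is a minimal sketch of scaled dot-product self-attention in NumPy. The weight matrices (`W_q`, `W_k`, `W_v`), the toy dimensions, and the function name are illustrative assumptions, not something from the original answer; a real Transformer adds multiple heads, masking, and learned parameters.

```python
# Minimal sketch of scaled dot-product self-attention (illustrative only).
# Shapes and weight matrices are toy assumptions, not from the original answer.
import numpy as np

def self_attention(X, W_q, W_k, W_v):
    """Compute scaled dot-product self-attention for a sequence X.

    X: (seq_len, d_model) token embeddings, processed in parallel.
    Returns: (seq_len, d_v) context-aware representations, one per token.
    """
    Q = X @ W_q                       # queries
    K = X @ W_k                       # keys
    V = X @ W_v                       # values
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)   # every token scores every other token
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over key positions
    return weights @ V                # weighted sum of value vectors

# Toy usage: 4 tokens, model dimension 8, head dimension 4
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
W_q, W_k, W_v = (rng.normal(size=(8, 4)) for _ in range(3))
out = self_attention(X, W_q, W_k, W_v)
print(out.shape)  # (4, 4): one context vector per token, computed in parallel
```

Note how the `scores` matrix relates every position to every other position in a single matrix product, which is why self-attention captures long-range dependencies without the step-by-step recurrence of Option A.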
Author: Ritesh Yadav