
Q3 – Why do Transformers use positional encodings?
A. To increase randomness in generation
B. To provide sequence order information to the model
C. To reduce model size
D. To avoid gradient vanishing
Explanation:
Transformers use positional encodings to provide sequence order information to the model. Unlike recurrent neural networks (RNNs), which process tokens one after another, or convolutional neural networks (CNNs), which capture local order through their sliding windows, Transformers process all tokens in parallel and have no inherent notion of position. Positional encodings are therefore added to the input embeddings to give the model information about the absolute or relative position of each token in the sequence. This lets the model take word order into account, which is crucial for natural language processing tasks where position changes meaning (for example, "dog bites man" versus "man bites dog").
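To make this concrete, here is a minimal NumPy sketch of the sinusoidal positional-encoding scheme from the original Transformer paper ("Attention Is All You Need"). The function name, sequence length, and embedding size below are illustrative assumptions, not part of the question.

```python
import numpy as np

def sinusoidal_positional_encoding(seq_len: int, d_model: int) -> np.ndarray:
    """Return a (seq_len, d_model) matrix of sinusoidal positional encodings."""
    positions = np.arange(seq_len)[:, np.newaxis]   # (seq_len, 1): token positions 0..seq_len-1
    dims = np.arange(d_model)[np.newaxis, :]        # (1, d_model): embedding dimensions
    # Each pair of dimensions (2i, 2i+1) shares one frequency: 1 / 10000^(2i / d_model).
    angle_rates = 1.0 / np.power(10000.0, (2 * (dims // 2)) / d_model)
    angles = positions * angle_rates                # (seq_len, d_model)
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles[:, 0::2])           # even dimensions use sine
    pe[:, 1::2] = np.cos(angles[:, 1::2])           # odd dimensions use cosine
    return pe

# The encodings are simply added to the token embeddings before the first layer.
# Shapes here (seq_len=10, d_model=64) are arbitrary example values.
token_embeddings = np.random.randn(10, 64)
inputs_with_position = token_embeddings + sinusoidal_positional_encoding(10, 64)
```

Because each dimension pair oscillates at a different frequency, every position receives a distinct pattern, and nearby positions receive similar ones; learned positional embeddings are a common alternative to this fixed scheme.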
Key points: