
When an LLM processes the word "unbelievable", it splits it into sub-units before encoding. What are these sub-units called?
Explanation:
In Large Language Models (LLMs), when processing text like the word "unbelievable", the model's tokenizer first splits the input into smaller sub-units called tokens. These are often subword pieces rather than whole words; depending on the tokenizer, "unbelievable" might be split into pieces such as "un", "believ", and "able".
This tokenization step lets LLMs keep a fixed, manageable vocabulary while still being able to represent rare or unseen words, as the sketch below illustrates.
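The following is a minimal sketch of this idea, assuming the Hugging Face transformers library is installed and the GPT-2 tokenizer is used purely as an example; other models use different vocabularies and will split the word differently.

```python
# Sketch: inspecting how a real tokenizer splits a word into tokens.
# Assumes the Hugging Face "transformers" package and the "gpt2" tokenizer.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")

text = "unbelievable"

# tokenize() returns the subword pieces (tokens) the model actually sees;
# the exact split depends on the tokenizer's learned vocabulary.
tokens = tokenizer.tokenize(text)
print(tokens)

# encode() maps those tokens to the integer IDs that are fed into the model.
token_ids = tokenizer.encode(text)
print(token_ids)
```

Running this prints the token strings and their corresponding integer IDs; the key point is that the input is handled as tokens, not as whole words or raw characters.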