
Answer-first summary for fast verification
Answer: Text generation
## Analysis of LLM Features for Code Generation from Natural Language

### Understanding the Task

The requirement is to produce **code** from **natural language code comments**. This involves taking human-readable descriptions (like "create a function that calculates factorial") and generating corresponding programming code.

### Evaluation of Each Option

**A: Text Summarization**
- **Purpose**: Condenses longer text into shorter versions while preserving key information.
- **Why it's unsuitable**: Code generation is not about summarizing text; it's about creating new, structured programming syntax from descriptive input. The output (code) is typically longer and structurally different from the input (natural language), not a condensed version.

**B: Text Generation**
- **Purpose**: Creates new text content based on input prompts, producing coherent and contextually relevant output.
- **Why it's optimal**: This feature directly enables LLMs to generate novel text sequences, including code, from natural language descriptions. When given a prompt like "write a Python function to sort a list," the model generates the corresponding code by predicting the most appropriate sequence of tokens. This aligns directly with the requirement of producing code from comments.

**C: Text Completion**
- **Purpose**: Predicts and completes partially provided text by adding the most likely continuation.
- **Why it's less suitable**: While text completion can generate code, it's typically used for continuing existing text patterns (like finishing a sentence or a code snippet that's already started). The requirement specifically involves starting from natural language comments, not completing partial code. Text generation is a broader capability that covers this task more accurately.

**D: Text Classification**
- **Purpose**: Categorizes input text into predefined classes or labels.
- **Why it's unsuitable**: This involves assigning categories (like sentiment analysis or topic labeling), not generating new code. The task requires creation, not classification.

### Key Distinctions

- **Text Generation vs. Text Completion**: While both can produce code, text generation is the broader capability that includes creating entirely new content from prompts, which matches the requirement of starting from natural language comments. Text completion is more specific to continuing existing text patterns.
- **Practical Application**: In AWS AI services and LLM implementations, text generation features (like those in Amazon Bedrock foundation models) are designed for tasks requiring novel output from prompts, including code generation from descriptions.

### Conclusion

**Text Generation (B)** is the most appropriate LLM feature because it enables the creation of new, structured code from natural language inputs, which is exactly what the company requires. While text completion could technically accomplish this in some implementations, text generation is the more accurate and comprehensive feature for this specific use case of starting from descriptive comments rather than completing partial code.
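To make the distinction concrete, the sketch below contrasts the two modes discussed above. The `factorial` and `is_even` functions are illustrative examples written by hand, not actual output from any model: the first shows the kind of code a text-generation feature might produce from the comment in the task description, and the second shows a completion model continuing a partial snippet.

```python
# --- Text generation: the model starts from a natural language comment ---
# Prompt: "create a function that calculates factorial"
# A plausible function the model might generate:
def factorial(n: int) -> int:
    """Return n! for a non-negative integer n."""
    if n < 0:
        raise ValueError("factorial is undefined for negative integers")
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

# --- Text completion: the model continues code that is already started ---
# Given only the partial snippet "def is_even(n):", a completion model
# might append the body below:
def is_even(n: int) -> bool:
    return n % 2 == 0

print(factorial(5))  # 120
print(is_even(4))    # True
```

Note the difference in starting point: generation begins from a purely descriptive prompt, while completion requires the code's opening tokens to already exist.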
Author: LeetQuiz Editorial Team
**Question:** Which feature of a large language model (LLM) enables the generation of code from natural language descriptions?

- A. Text summarization
- B. Text generation
- C. Text completion
- D. Text classification