
**Answer (for fast verification): Use prompt engineering (Option D).**
## Detailed Explanation

When a company wants to improve the accuracy of responses from a generative AI application that uses a foundation model on Amazon Bedrock, the most cost-effective approach is **prompt engineering (Option D)**. Here's why:

### Why Prompt Engineering Is Optimal

1. **Minimal cost**: Prompt engineering refines the input prompts sent to the foundation model without modifying the model itself. It requires no additional training costs, compute resources, or model-hosting fees beyond standard inference usage.
2. **Immediate implementation**: Changes can be tested and deployed quickly because they don't involve retraining or fine-tuning cycles.
3. **Preserves model integrity**: The foundation model remains unchanged, keeping its original capabilities while better input guidance improves output quality.
4. **Amazon Bedrock context**: Bedrock provides access to multiple foundation models through API calls. Prompt engineering uses these pre-trained models efficiently by optimizing how queries are structured.

### Why the Other Options Are Less Suitable

- **Option A (Fine-tune the FM)**: Fine-tuning can improve accuracy for specific tasks, but it requires additional training data, compute for training, and ongoing management of the fine-tuned model, incurring significant costs compared to prompt engineering.
- **Option B (Retrain the FM)**: Retraining a foundation model from scratch is prohibitively expensive and impractical for most organizations. Foundation models require massive datasets, extensive compute infrastructure, and specialized expertise.
- **Option C (Train a new FM)**: Training a new foundation model has the highest cost and complexity: all the expenses of retraining plus additional development overhead, making it the least cost-effective option.
### Best Practice Considerations

In AWS AI/ML best practices, prompt engineering is recommended as the first approach for improving generative AI application performance because:

- It provides rapid iteration and testing capabilities
- It maintains the scalability and reliability of the underlying foundation model
- It leverages the full knowledge base of pre-trained models without customization overhead
- It aligns with the serverless, pay-per-use model of Amazon Bedrock

For a company seeking the **most cost-effective** solution to improve response accuracy, prompt engineering provides the optimal balance of improved performance and minimal additional investment.
Author: LeetQuiz Editorial Team
**Question:** A company aims to enhance the response accuracy of its generative AI application, which utilizes a foundation model (FM) on Amazon Bedrock. What is the most cost-effective solution to achieve this?

- A. Fine-tune the FM.
- B. Retrain the FM.
- C. Train a new FM.
- D. Use prompt engineering.