
How can companies use large language models (LLMs) securely on Amazon Bedrock?
Explanation:
To use large language models (LLMs) securely on Amazon Bedrock, the most fundamental security measure is to configure AWS Identity and Access Management (IAM) roles and policies that follow the principle of least privilege. This ensures that only authorized users and services can reach the Bedrock models and APIs, minimizing the risk of unauthorized access or data exposure.
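As a concrete illustration, a least-privilege policy might allow only the bedrock:InvokeModel action on a single foundation model in one region. The sketch below uses boto3 to attach such an inline policy to a role; the role name, policy name, and model ID are placeholders chosen for illustration, not values taken from the question.

```python
import json

import boto3

iam = boto3.client("iam")

# Least-privilege inline policy: allow invoking only one specific foundation
# model in one region, and nothing else. Model ID and region are placeholders.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["bedrock:InvokeModel"],
            "Resource": [
                "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-v2"
            ],
        }
    ],
}

# "BedrockAppRole" and "BedrockInvokeOnly" are hypothetical names.
iam.put_role_policy(
    RoleName="BedrockAppRole",
    PolicyName="BedrockInvokeOnly",
    PolicyDocument=json.dumps(policy_document),
)
```

Scoping the Resource element to specific model ARNs, rather than using a wildcard, is what keeps the policy aligned with least privilege.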
Why the other options are not correct:
Option B (AWS Audit Manager): While AWS Audit Manager can help with compliance and auditing, it's not specifically designed for securing LLM usage on Bedrock. It's more focused on compliance frameworks and audit evidence collection.
Option C (Amazon Bedrock automatic model evaluation jobs): This feature helps evaluate model performance and quality, but it's not primarily a security control. It's more about model quality assessment rather than securing access to the models.
Option D (Amazon CloudWatch Logs): CloudWatch Logs can help with monitoring and observability, including some aspects of model behavior, but it doesn't provide the fundamental access control security that IAM does. While monitoring for bias is important, it's not the primary security mechanism for controlling access to the models.
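To show what CloudWatch Logs does cover, the sketch below enables Bedrock model invocation logging so prompts and responses can be reviewed after the fact. It is a minimal sketch assuming the boto3 bedrock control-plane client and its put_model_invocation_logging_configuration operation; the log group name and logging role ARN are placeholders. Logging of this kind complements, but does not replace, IAM access controls.

```python
import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")

# Turn on model invocation logging so text prompts and completions are
# delivered to CloudWatch Logs for monitoring and audit. The log group and
# the role that Bedrock assumes to write logs are hypothetical placeholders.
bedrock.put_model_invocation_logging_configuration(
    loggingConfig={
        "cloudWatchConfig": {
            "logGroupName": "/bedrock/model-invocations",
            "roleArn": "arn:aws:iam::123456789012:role/BedrockLoggingRole",
        },
        "textDataDeliveryEnabled": True,
    }
)
```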
Key Security Principles for Amazon Bedrock: