
A Generative AI Engineer is building a production-ready LLM system that uses the Foundation Model API with provisioned throughput to reply directly to customers. They need to prevent the LLM from generating toxic or unsafe responses with the least amount of effort.
Which approach should they use?
A. Ask users to report unsafe responses.
B. Host Llama Guard on the Foundation Model API and use it to detect unsafe responses.
C. Add LLM calls to their chain to detect unsafe content before returning text.
D. Add regular expressions on inputs and outputs to detect unsafe responses.
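For context, option B describes serving a safety classifier such as Llama Guard as its own Model Serving endpoint and using it to screen responses before they reach the customer. Below is a minimal sketch of that pattern using the OpenAI-compatible client against a Databricks serving endpoint; the endpoint name "llama-guard", the environment variables, and the exact verdict format are assumptions, not a confirmed implementation.

```python
# Sketch: screening an LLM reply with a hosted Llama Guard endpoint.
# Assumes DATABRICKS_HOST (including https://) and DATABRICKS_TOKEN are set,
# and that a Llama Guard model is served at an endpoint named "llama-guard".
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["DATABRICKS_TOKEN"],
    base_url=f"{os.environ['DATABRICKS_HOST']}/serving-endpoints",
)

def is_unsafe(user_prompt: str, assistant_reply: str) -> bool:
    """Ask Llama Guard to classify the exchange; it typically answers 'safe' or 'unsafe ...'."""
    result = client.chat.completions.create(
        model="llama-guard",  # hypothetical endpoint name
        messages=[
            {"role": "user", "content": user_prompt},
            {"role": "assistant", "content": assistant_reply},
        ],
        max_tokens=10,
    )
    verdict = result.choices[0].message.content.strip().lower()
    return verdict.startswith("unsafe")

# Usage: only return the reply to the customer if it passes the safety check.
reply = "Here is how to reset your password..."
if is_unsafe("How do I reset my password?", reply):
    reply = "Sorry, I can't help with that request."
print(reply)
```

The design idea is that the safety check runs as a separate, pre-built classifier endpoint, so the production chain only needs one extra call rather than custom prompt engineering or hand-written filters.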