This article discusses the security challenges and best practices for generative AI applications, focusing on those built using cloud services like Amazon Bedrock. It outlines the unique nature of these applications, including their prompt-driven and context-aware characteristics. The article highlights prompt injection as the primary attack vector, along with other threats such as data poisoning and model denial of service. To mitigate these risks, the author recommends implementing various security measures, including input validation, defense in depth, private networking, and the principle of least privilege. The article also emphasizes the importance of monitoring AI interactions, protecting sensitive data, and conducting regular security assessments. It concludes by discussing Amazon Bedrock’s built-in security features and encourages readers to study relevant resources for building well-architected generative AI applications.