Securing Generative AI Applications
Abstract
Generative Artificial Intelligence (AI) systems, especially large language models (LLMs), are redefining technology across domains, from content creation to decision support, but they also introduce critical security challenges. This literature review synthesises findings from twenty recent studies to survey the major security concerns in generative AI and the countermeasures proposed to address them. Key issues include the generation of false or misleading content (“hallucinations”) that undermines accuracy, privacy leakage through memorisation of sensitive training data, prompt injection and “jailbreak” attacks that bypass model safeguards, and users’ overreliance on AI outputs despite their potential for error. Generative AI can also be maliciously exploited for phishing, malware development, misinformation, and other cyberattacks. When integrated into real-world applications, LLMs present further vulnerabilities, from insecure plugin interfaces to unsafe handling of model outputs that leads to injection flaws. We categorise these threats and examine defensive strategies from the literature, including alignment techniques to reduce toxic and incorrect outputs, privacy-enhancing methods (differential privacy, data governance) to curb leakage, robust prompting guidelines and filters to resist injection, and frameworks for human-AI collaboration and oversight in high-stakes uses. Drawing on a comprehensive set of academic and industry studies, this review highlights emerging best practices and research directions for securing generative AI applications against both technical and human-centric vulnerabilities.
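As a minimal sketch of the robust-prompting defences mentioned above (not drawn from any single surveyed study), an application might screen user input against known injection phrases and wrap untrusted content in explicit delimiters before it reaches the model. The pattern list and function names here are hypothetical, for illustration only:

```python
import re

# Hypothetical deny-list of common prompt-injection phrases;
# a real deployment would use a much broader, regularly updated set.
INJECTION_PATTERNS = [
    r"ignore (all|any|previous|prior) instructions",
    r"disregard (the )?(system|previous) prompt",
    r"you are now (in )?developer mode",
]

def is_suspicious(user_input: str) -> bool:
    """Return True if the input matches a known injection phrase."""
    lowered = user_input.lower()
    return any(re.search(p, lowered) for p in INJECTION_PATTERNS)

def build_prompt(system_prompt: str, user_input: str) -> str:
    """Reject suspicious input, then wrap the rest in delimiters so the
    model can distinguish untrusted content from trusted instructions."""
    if is_suspicious(user_input):
        raise ValueError("Potential prompt injection detected")
    return (f"{system_prompt}\n<untrusted_input>\n"
            f"{user_input}\n</untrusted_input>")
```

Pattern matching alone is easily evaded (paraphrase, encoding tricks), which is why the literature pairs such filters with alignment training and output-side checks rather than relying on them in isolation.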
References
leakage and memorization attacks on large language models (llms) in generative ai applications. Journal
of Software Engineering and Applications. 2024 May 20;17(5):421-47.
2. Zeng S, Zhang J, He P, Xing Y, Liu Y, Xu H, Ren J, Wang S, Yin D, Chang Y, Tang J. The good and the bad: Exploring
privacy issues in retrieval-augmented generation (rag). arXiv preprint arXiv:2402.16893. 2024 Feb 23.
3. Christodoulou E, Iordanou K. Democracy under attack: challenges of addressing ethical issues of AI and big
data for more democratic digital media and societies. Frontiers in Political Science. 2021 Jul 21;3:682945.