News

Detection and analysis of prompt injection in Indian multilingual large language models

  • Jagadeesan Srinivasan, Silla Ann Regi, Anbarasa Kumar Anbarasan, Suresh A, Vetriselvi T, Sivakumar Venu--Nature.com
  • published date: 2026-04-04 00:00:00 UTC

As Large Language Models (LLMs) become more significant in many applications, the question of how to make them safe, robust, and ethically behaved has become pressing. Among the most common and evident…

S. Omri, M. Abdelkader, and M. Hamdi, "MalPID: Malicious prompt injection detection dataset for large language model based applications," in Proc. 2024 IEEE 11th Int. Conf. Commun. Netw. (ComNet), IEE… [+4431 chars]