GENERATIVE ARTIFICIAL INTELLIGENCE AND LARGE LANGUAGE MODELS IN PHARMACOVIGILANCE
Pawan Vishwakarma*, Karan Gupta, Abdul Quaiyoom, Shekhar Singh, Navneet Kumar Verma
ABSTRACT
Large language models (LLMs) and generative AI (GenAI) in healthcare present unprecedented opportunities and challenges that call for innovative regulatory strategies. Applications of GenAI and LLMs are numerous, ranging from personalising diagnostics to automating clinical operations. However, current medical device regulatory frameworks, such as the total product life cycle (TPLC) approach, are challenged by the non-deterministic outputs, broad functionality, and intricate interactions of GenAI and LLMs. Here, we call for international cooperation in regulatory science research and examine the limitations of the TPLC approach to the regulation of GenAI- and LLM-based medical devices. This provides the basis for developing novel strategies to test and improve governance in practical contexts, such as regulatory sandboxes and adaptive policies. To manage the effects of LLMs on global health, particularly the risk of widening health disparities caused by intrinsic model biases, international harmonisation is crucial, as demonstrated by the International Medical Device Regulators Forum. Global regulatory science research can facilitate the responsible and equitable advancement of LLM breakthroughs in healthcare by drawing on multidisciplinary expertise, emphasising iterative, data-driven methods, and focusing on the needs of diverse populations.
Keywords: Pharmacovigilance, Artificial Intelligence, Adverse Drug Event, Large Language Models.