Unlock the Potential of ACMA's MedAffairsAI: Features, Implementation, and Risk Mitigation Strategies


Rachel Hohe, PhD

Oct 16, 2024

9 minute read

Introduction

This article discusses the function of ACMA’s MedAffairsAI tool in depth, followed by considerations for clients implementing AI software into their practice. The final section broadly describes risk mitigation in AI tool use before narrowing to the specific ways ACMA’s MedAffairsAI meets those expectations. Throughout, we address MedAffairsAI’s capacity to compliantly overcome long-standing pain points within medical affairs teams.

ACMA’s MedAffairsAI Tool

Primary Functionalities of ACMA’s MedAffairsAI

MedAffairsAI is ACMA’s cloud-based large language model application, trained in-house on the largest compendium of medical affairs information using machine learning. The software is customizable to client-specific needs. To ease implementation, MedAffairsAI integrates with pre-existing workflows, ensuring a seamless user experience. Additionally, the software can ingest proprietary internal documents. The following sections describe each feature’s function and its direct benefits to a medical affairs team.

Model Hosting

ACMA relies on a set of high-quality services that host, store, secure, and compute for MedAffairsAI. Amazon Web Services (AWS) serves as the secure hosting platform for all cloud computing services that MedAffairsAI uses. Two-factor authentication, provided by Auth0, ensures safe access to the tool. Support for ACMA’s MedAffairsAI tool is available 24/7 through online chats with live representatives on both the MedAffairsAI platform and ACMA’s homepage (ACMA, n.d.). By providing a secure and intuitive AI platform, MedAffairsAI is poised for easy adoption across the field.

User Interface & Basic Navigation

Upon launching the MedAffairsAI tool, the user dashboard appears. The dashboard consists of a menu and a welcome message. The left-aligned menu displays a series of features: user profile, dashboard tab, external Q/A, internal Q/A, a drop-down documents menu, and a software support request form. An accessible user interface simplifies workflow and streamlines navigation.

External Data

By selecting the ‘External Q/A’ tab from the menu, the user launches a page containing a language AI trained on the largest compendium of medical affairs information, courtesy of ACMA. This page provides sample text prompts and a textbox below, in which a user may submit their own text-based questions. The user submits a query by clicking the blue paper-plane button on the right side of the text submission box. The software then generates a concise text-based result from ACMA’s wealth of knowledge. By having the option to pose general industry questions to MedAffairsAI, users can access and utilize nearly a decade’s worth of educational medical affairs content. Since its founding in 2015, ACMA has developed the largest compendium of information in this field, ranging from regulatory standards on the conduct of clinical trials to regulatory document identification to government legislation in the pharmaceutical industry.

Internal Data

Alternatively, a user may choose MedAffairsAI’s ‘Internal Q/A’ feature from the left-aligned menu. Although this feature is used in the same way as ‘External Q/A’, its knowledge base must be trained on internal documents, which may include proprietary information. This unique feature allows purposeful customization of MedAffairsAI, with options for direct document upload and API integration with pre-existing data management tools. The AI output provides a concise text answer to the query with specific references to internal document sources, a feature unique to MedAffairsAI’s internal data mode. By providing this integration capability, ACMA empowers its clients to accelerate internal searches, widen their scope, and readily reference relevant documentation when answering queries.

Documents

To train the AI with internal documents, the user navigates to the ‘Documents’ drop-down menu and selects ‘Train Documents’. The user can then upload documents in a variety of formats, including PDF, TXT, PNG, and JPG. Once uploaded, the files appear within the submission window. To proceed with training, the user clicks the ‘Train AI’ button. After training, these documents are available under the ‘Trained Documents’ tab in the ‘Documents’ drop-down menu, which lists the file name, the uploading user, the file type, the upload date, and options to download or delete selected files from the database. These documents are fully searchable via a text-based search bar at the top of the page. If a user wants to search only a subset of documents, they can select individual groups for clustering. By accepting a wide range of documents, MedAffairsAI can digest, compile, and restrict documents to specific uses. This capability enhances the specificity of results, improving the quality of query responses.
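To make the document-training step concrete, the sketch below shows how an ingestion pipeline of this kind typically prepares an uploaded file for later search and Q/A: the text is split into overlapping chunks that can each be indexed and retrieved independently. This is a generic, hypothetical illustration, not ACMA’s actual pipeline; the function name and parameters are our own.

```python
# Illustrative sketch (hypothetical, not ACMA's actual pipeline): splitting an
# uploaded document's text into overlapping character windows so each chunk can
# later be indexed and searched independently.

def chunk_text(text: str, chunk_size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into overlapping windows; overlap preserves context at chunk edges."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
        if start + chunk_size >= len(text):
            break
    return chunks

document = "MedAffairsAI supports internal document training. " * 20
print(len(chunk_text(document)), "chunks ready for indexing")
```

The overlap between adjacent chunks is a common design choice: it reduces the chance that a sentence relevant to a query is cut in half at a chunk boundary.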

Modifiable Accessibility Settings

When MedAffairsAI is purchased for an organization, member permissions can be managed within the software by the group manager. The group manager can view all user profiles, control who can upload or download resources, and moderate access to internal or external queries. Moderators can adjust individual users’ permissions to tool capabilities over time. In this way, MedAffairsAI can be deployed with user roles in mind and guard against unauthorized content modification, ensuring content compliance and trustworthiness.

ACMA’s Commitment to Risk Mitigation with AI Use Cases

Ultimately, generative artificial intelligence is built upon probabilistic language models, where output is synthesized from statistical patterns learned from training data. The output quality of any artificial intelligence model is therefore only as good as the information it was trained on. Plausible-sounding but incorrect generative outputs are called hallucinations, and poor training data increases their frequency. Reducing hallucinations is critical across industries, but it is especially significant within medical affairs workflows because of the inherent importance of evidence-based communication. Similarly, artificial intelligence poses intellectual property infringement risks if the training model incorporates data from non-consenting third parties. The potential of artificial intelligence to infringe these rights must be mitigated to protect both clients and the content of third parties.

Depending on the use case, a multitude of factors may contribute to overall risk for specialized AI tools. AI developers must take into consideration the following spectrum of risk-conferring factors to mitigate both ethical concerns and expensive legal consequences: 

Origin of Training Data

The origin of the training data determines the level of risk in using an AI model. With respect to intellectual property infringement, AI models trained on work in the public domain carry much lower risk than models trained on third-party copyrighted work. Copyrighted third-party works are protected entities unless licensed, and unpermitted use of protected data may expose developers and clients alike to legal repercussions. In creating an AI model, developers must mitigate this data origin risk by securing permission from primary authors for training data usage (Soliman & Shandro, 2024). To best mitigate client risk, MedAffairsAI provides purchasers with an AI model trained on an in-house compendium of medical affairs data. This positions MedAffairsAI as a low-risk software choice for users.

Learning Type

When developing an AI model on training data, the learning type confers a level of risk. Training falls into the following categories, ranging from high-risk to low-risk: supervised, unsupervised, and reinforcement learning (Soliman & Shandro, 2024). Supervised learning is based on paired input-output data, where the expected output is known; over time, the gap between the expected and actual model output narrows as the model is refined. Supervised learning is most useful for predictive forecasts (Taye, 2023). Unsupervised learning differs by not labeling outputs as categorically right or wrong; instead, it clusters inputs into groups by their inherent characteristics. Unsupervised learning is most useful for identifying common groups of data, especially in customer-centric marketing strategies (Taye, 2023). Lastly, reinforcement learning pulls information from the operational environment and refines itself by trial and error, with refinement driven by reward signals proportional to the correctness of the model’s actions. Reinforcement learning is considered the lowest-risk learning type because it encourages efficiency in pathway generation while reducing outside bias (Taye, 2023).

MedAffairsAI is built upon Meta’s Llama model, meaning that its training approach is supervised learning. Although this is a higher-risk learning type, ACMA has further modified the tool to enhance the quality of vector-data retrieval through a process called Retrieval-Augmented Generation (RAG). Further, ACMA’s MedAffairsAI has been fine-tuned to detect and flag incorrect information before it is supplied to the end user, using a series of context-based clues. By refining the tool’s supervised learning with RAG and fine-tuning, MedAffairsAI can assess the validity of its outputs. If a valid answer cannot be generated, the AI recognizes its training limitations and communicates this acknowledgement to the end user.
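The retrieval step at the heart of RAG can be sketched in a few lines: given a user query, the system finds the most similar stored passages, and a full RAG pipeline would then prepend those passages to the model’s prompt so answers are grounded in (and can cite) source documents. The toy below uses simple bag-of-words cosine similarity purely for illustration; it is not ACMA’s implementation, which would use learned vector embeddings, and all names and sample documents here are our own.

```python
# Toy sketch of the retrieval step in Retrieval-Augmented Generation (RAG).
# Illustrative only: real systems use learned embeddings, not word counts.
import math
from collections import Counter

def cosine_similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, documents: list[str], k: int = 1) -> list[str]:
    """Return the k documents most similar to the query. In full RAG, these
    passages are added to the prompt so the model can ground and cite its answer."""
    q = Counter(query.lower().split())
    scored = sorted(documents,
                    key=lambda d: cosine_similarity(q, Counter(d.lower().split())),
                    reverse=True)
    return scored[:k]

docs = [
    "Regulatory standards for the conduct of clinical trials.",
    "Unsupervised learning clusters inputs by inherent characteristics.",
    "Two-factor authentication secures access to the platform.",
]
print(retrieve("clinical trial regulatory standards", docs))
```

Because the answer is assembled from retrieved passages rather than from the model’s parameters alone, RAG both reduces hallucinations and makes source citation possible, which is why the article pairs it with fine-tuning as a mitigation for supervised learning’s higher baseline risk.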

Jurisdiction 

Changing legislation on AI usage, at both the federal and international level, confers legal risks on developers and end users alike. Heavily restrictive AI governance carries much higher risk than legislation allowing liberal fair use, and these laws are constantly evolving to keep pace with AI innovation. Jurisdiction risks can be mitigated by utilizing an AI model that adapts to current regulatory standards and, in some cases, even anticipates legislative changes before they happen. ACMA’s MedAffairsAI is compliant with current international regulatory requirements and stays abreast of AI-driven policy changes. In this way, MedAffairsAI is primed to reduce user risk by anticipating and incorporating jurisdictional changes in the evolving AI landscape.

Task Type

An AI user may need to consider risk depending on the end use of the AI output. Bloom’s Taxonomy describes a hierarchical framework of learning objectives, first established in 1956 for higher education, in which lower tiers cover skills like remembering and summarizing while higher tiers cover analysis and original creation. This concept has been revisited in the context of AI capabilities and can explain model limitations (Armstrong, 2010). If the intended purpose of using AI is to summarize input, the risk is therefore much lower than if the AI is asked to generate novel ideas. When synthesizing “new” ideas, AI models lack the nuance to formulate truly novel concepts and instead pull from existing sources. This raises the risk of model hallucination, which can be mitigated by recognizing AI’s limitations and restricting use cases to the lower-tier skills of Bloom’s Taxonomy. ACMA recognizes that the use case for MedAffairsAI will be primarily factual rather than metacognitive in purpose, which reduces risk to end users of these tools’ outputs.

Intention/Use of Output

How stringently will the AI output be reviewed prior to publication, and how does this relate to conferred risk? A model designed to have its output thoroughly reviewed by human parties prior to publication or copyright registration is considered low risk because users are performing due diligence in content review. On the other hand, a model designed to generate copyrighted material without this review process in place is considered high risk. The United States Copyright Office even considers the copyrighting of unreviewed AI-generated material to be in violation of its human-author policy, which intersects with the jurisdiction risk described above. To mitigate intention risk, all AI-produced material must be reviewed and edited by human parties prior to copyright registration or publication, in a manner that confers output ownership. MedAffairsAI has a low intention risk because its users are medical affairs professionals who review content for factual accuracy prior to use, aided by a software element that cites the resources used to synthesize an answer. A disclaimer on the MedAffairsAI software also reminds users to verify the correctness of important information.

Key Considerations for MedAffairsAI Implementation

Overall, evaluating the use case for AI implementation helps assess inherent risk. It is important to recognize the contributing risk factors: the origin of the AI’s training data, the model’s learning type, local jurisdiction concerning AI use, the specific task type, and the intended use of the output. Before using AI models for tasks, users and companies alike must evaluate and plan to mitigate risk using the frameworks above. Further, by consulting expert legal and compliance guidance, ACMA is poised not only to meet risk mitigation requirements but to anticipate changes in the regulatory landscape before they happen. In addition, ACMA’s MedAffairsAI is built on an enormous database of up-to-date medical affairs information and long-standing industry partnerships.

References:

Accreditation Council for Medical Affairs. (2024). MedAffairsAI Tool Demo.

Accreditation Council for Medical Affairs. (n.d.). ACMA Homepage. https://medicalaffairsspecialist.org/about

Armstrong, P. (2010). Bloom’s Taxonomy. Vanderbilt University Center for Teaching. https://cft.vanderbilt.edu/guides-sub-pages/blooms-taxonomy/ 

Soliman, W., & Shandro, A. (2024, March 22). The Next Frontier: Generative AI in pharma and legal considerations. YouTube. https://www.youtube.com/watch?v=Kg9LpCIZqs0 

Taye, M. M. (2023). Understanding of machine learning with Deep Learning: Architectures, workflow, applications and future directions. Computers, 12(5), 91. https://doi.org/10.3390/computers12050091 
