The EU AI Act - Its potential impact on the Pharma Industry
Written by Ouisseme Hsine
The EU AI Act, finalized in 2024, is a regulation introduced by the EU in response to the boom in AI use throughout the world and across all industries, a boom best illustrated by the release of ChatGPT in late 2022, which made AI easily accessible to the general public. The Act came out as part of what the EU calls the "Digital Decade" strategy. As a stepping stone of that strategy, the EU AI Act did not come out alone: between 2023 and 2024, five acts were finalized, namely the Digital Services Act (DSA), the Digital Markets Act (DMA), the Data Governance Act (DGA), the Data Act and, finally, the AI Act. The global boom of what must be considered a pivotal piece of technology does indeed require regulation, especially as it spreads into all industries, including those with a direct potential impact on health and safety such as the life-science and pharmaceutical industries.
Before moving forward with the analysis of the AI Act and its potential impact, we would like to preface it with a look back at the arrival of the General Data Protection Regulation (GDPR) in 2018.
The GDPR brought significant changes to the management of data. It has emerged as a foundational stepping stone for the proper management of personal data and is now a core principle in any regulated environment, especially the pharmaceutical industry. There was indeed some anxiety at the time, justified or not, around "what will companies be allowed to do" and "how much of a hindrance it might be", but it has since proven to be a much-needed regulation that no one could do without. The GDPR notably triggered additional laws and regulations outside of the EU, thereby helping protect people's data across the globe. Even though the GDPR never received new versions, various clarification pieces and related regulations have come out to address what the original regulation might not have answered, the AI Act being one of them.
Note that, similarly to Annex 11 complementing the GMP Guidelines, the EU AI Act is being complemented by Annex 22.
Annex 22 is still in draft, with about 5,000 comments gathered in November 2025 that still need to be worked through by the responsible EU committee. Many of those comments point out contradictions between the Act and the Annex, as well as within each document itself.
It is probably safe to say that once the AI Act is finally enforced, further companion documents will follow. Realistically, the AI Act and other regulations on how to use AI are indeed needed, but they may create new challenges, exacerbate issues already present in the pharmaceutical industry, and never truly answer the very valid concerns they should aim to address.
We will be especially focusing here on the potential impacts the EU AI Act could have on the Pharmaceutical Industry.
To ensure clear understanding, note that the AI Act defines AI systems as "any machine-based system for which the outputs defined from their inputs are generated with a varying degree of autonomy and that they may adapt to new situations".
Let’s start by breaking down what exactly this act says and what it would mean to the pharmaceutical industry.
The Act is built around six key principles that guide its interpretation:
At its core sits the need for a Risk-Based Classification, broken down into:
- Unacceptable risk: AI systems that pose a clear threat to safety or fundamental rights (e.g., social scoring by governments) will be banned.
- High risk: These AI systems (e.g., in healthcare, critical infrastructure, etc.) must meet strict requirements, such as transparency, accountability, data governance, and human oversight. A pre-approval by an accredited Notified Body will be required prior to using/marketing such type of systems.
- Limited risk: These require lighter obligations like transparency but not complete regulatory compliance.
- Minimal risk: AI systems with low risk (e.g., spam filters) are minimally regulated.
Overall, defining the level of AI risk should be akin to what we do today as part of a GxP risk assessment, which involves evaluating potential threats to patient health and safety as well as to patient privacy. This AI risk assessment (RA) will either have to be integrated into the existing RA framework or adapted from existing RA frameworks specifically for AI systems.
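As a rough illustration of how the Act's risk tiers could be folded into an existing GxP-style risk assessment, the sketch below maps a few system attributes to a tier. The attribute names and decision rules are illustrative assumptions for demonstration, not an official reading of the Act.

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    name: str
    social_scoring: bool        # a practice banned outright under the Act
    medical_use: bool           # e.g. diagnostics or clinical-trial support
    interacts_with_users: bool  # e.g. a chatbot that must disclose itself

def ai_act_risk_tier(system: AISystem) -> str:
    """Assign one of the Act's four risk tiers (heavily simplified)."""
    if system.social_scoring:
        return "unacceptable"   # prohibited practice
    if system.medical_use:
        return "high"           # Annex III-style high-risk use
    if system.interacts_with_users:
        return "limited"        # transparency obligations only
    return "minimal"

print(ai_act_risk_tier(AISystem("trial-recruitment model", False, True, True)))
# a medical use lands in the "high" tier regardless of the other attributes
```

In a real RA framework, each tier would then trigger the corresponding documentation, oversight, and conformity-assessment activities.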
This is followed by the need for a Regulation of High-Risk AI.
This is the particular focus of Annex III of the AI Act and will be one of the key points for the pharmaceutical industry:
- AI used in medical devices, pharmaceuticals, or clinical trials, which are classified as high-risk, must comply with strict documentation, risk assessment, and auditing requirements.
- This includes obligations on data quality, model transparency, traceability, and explainability.
This raises certain questions, especially in the medical device space, where the EU Medical Device Regulation (MDR) has for some time already imposed different risk classes (Class I, Class IIa, Class IIb, Class III) depending on the risk to patients. Under the EU AI Act, it appears that any medical device leveraging AI falls under High Risk.
This then raises the following question: will the Legal Manufacturer potentially have to follow different regulatory pathways, one for the AI aspect and one for the medical device aspect?
These principles are to be supported through the need for Human Oversight and Accountability regarding the AI systems and their use:
- AI systems must allow for human oversight, particularly in the healthcare sector, where decisions can impact patient safety.
- There will be a requirement for clear accountability in case of AI failure or harm.
Echoing other acts, as well as the GDPR, the AI Act reiterates the need for Data Governance:
- Strong requirements are imposed on data quality, data protection, and data privacy, especially when AI is used for patient data or clinical trials.
- This includes ensuring that AI systems are trained on high-quality, representative, and unbiased datasets.
This data governance requirement will have to be met not only under the AI Act but also under the Data Governance Act and the GDPR. One must also ask how the GAMP guideline requirements, which must likewise be followed in GxP environments, will fit in.
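One narrow, if simplistic, facet of the "representative and unbiased datasets" requirement can be checked automatically: flagging classes that are badly under-represented in a training set. The sketch below does exactly that; the threshold and labels are illustrative assumptions, and real data-governance checks would also cover demographics, provenance, and label quality.

```python
from collections import Counter

def underrepresented_classes(labels, min_share=0.10):
    """Return classes whose share of the training data is below min_share.

    A crude proxy for one aspect of dataset bias: severe class imbalance.
    """
    counts = Counter(labels)
    total = len(labels)
    return {cls: n / total for cls, n in counts.items() if n / total < min_share}

# Hypothetical training labels for a diagnostic model:
labels = ["healthy"] * 90 + ["condition_a"] * 8 + ["condition_b"] * 2
print(underrepresented_classes(labels))
# condition_a (8%) and condition_b (2%) fall below the 10% threshold
```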
Overall, that brings forth the need for AI Monitoring and Conformity Assessment:
- High-risk AI systems will need to undergo regular monitoring and assessments to ensure ongoing compliance with the Act’s standards.
- Companies must demonstrate compliance through audits, certifications, and documentation.
The need for human oversight raises questions of skill, resource availability, and ownership, and is directly linked to the next two principles: data governance, and monitoring and conformity. It will have to be translated into positions within companies, with procedures and related documentation, and here again it can be interpreted vastly differently from one company to another.
Another interesting metric for future analysis will be the level of compliance with the Transparency and Disclosure requirements:
- AI systems need to be transparent about their decision-making processes, especially in high-risk sectors like pharmaceuticals.
- Users must be informed when they interact with AI systems (e.g., when using AI-driven tools in drug development or patient care).
- AI systems, and their evolution over time, also need to ensure data integrity, which must be reflected in how data is shared.
This requirement around Transparency and Disclosure might, when pushed to the extreme, contradict the GDPR. Indeed, the pharmaceutical industry is opaque, and who shares what is not always equal: the bigger a company is, the less inclined to share it tends to be. This raises the question of unbalanced sharing, or of oversharing, especially in niches of the field such as digital health. Smaller companies, with their limited assets and resources, might also not be able to level the playing field against bigger ones.
As previously mentioned, these principles are essential for understanding the impact of AI on the world today. However, they also present certain challenges.
- Indeed, the need for a Risk-Based Classification can lead to significant variation in the interpretation of risk across companies within the same industry, as different companies may define risk differently based on the intended application of their AI model or system.
- Now looking at the Regulation of High-Risk AI. As previously stated, this is the key point for the pharmaceutical industry, and fulfilling it requires the other principles to be fulfilled as well. If you are developing or deploying an AI system in pharma that is classified as High-Risk AI, it must go through risk management, documentation, human oversight, and conformity assessments before it hits the market.
Despite the previously presented criticism, the EU AI Act or at least some variation of it is necessary, but the way it has been written today has the potential to be a hindrance to innovation.
Beyond the previously presented "obvious challenges", some correlated new questions and issues will need answers for the Act's proper application, beyond simply being ready for its enforcement. To name a few:
- A big change in mindset on how to collaborate across companies, and especially across borders, will have to happen and keep happening as things move forward. It will be crucial to address where and what data is stored, as well as by whom, where, and why it is being used. These issues must be clarified while ensuring compliance with the defined AI risk classes and the GDPR, among other regulations. Consequently, data homogenization pipelines and best practices will have to shift and evolve, which will in turn raise legal and ethical questions that will also have to be answered.
- One of the major potential consequences of this AI Act might be its impact on Innovation. Indeed, while the Act aims to promote innovation, the heavy regulatory burden may slow down the development and deployment of new AI-driven technologies in the pharmaceutical industry. The requirement for transparency and explainability may limit the use of certain advanced AI techniques (like deep learning), which are often seen as "black box" models.
- Initial compliance assessments will be necessary to determine costs, especially regulatory compliance costs; their complexity can vary depending on the company's size. Following such an assessment, definitions, documentation, and general compliance requirements will have to be created, introduced, and adopted. To be fully compliant, technical modifications, migrations, and system updates will also be necessary. Fulfilling all of this will require a high level of monetary investment.
- Regarding Data Governance and Privacy Concerns: beyond what was presented above about meeting the AI Act, the GDPR, and other data integrity regulations, these requirements pose the very critical question of bias in training datasets. Biased data could lead to deeply flawed AI models, which would be a direct GxP risk for patients, products, and data in areas such as drug discovery and patient diagnostics.
- Attached to both previous points is the need for human oversight. This oversight, through Human-In-The-Loop (HITL) processes, will require both time and money to implement. Policies, procedures, and general documentation will have to be defined and written, and teams will have to be trained. It is also important to consider that by enforcing heavy HITL processes and human monitoring of AI models, especially dynamic ones, we risk losing the very advantages of AI, such as its autonomy and automation. AI validation also becomes tricky if a human must always confirm the testing: idempotent tests go against how AI works, since a model that keeps evolving is hard to subject to constant validation. As one of the key principles of the Act, this point will have to be enforced; consulting firms might help alleviate some of the effort, but significant cost will remain.
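One common way to implement HITL oversight in practice, sketched below, is confidence gating: low-confidence model outputs are routed to a human reviewer, while high-confidence ones are released automatically with an audit-trail entry. The threshold, field names, and record structure are illustrative assumptions; a real system would persist the audit trail and enforce reviewer sign-off per the company's procedures.

```python
def route_prediction(prediction: str, confidence: float,
                     threshold: float = 0.90) -> dict:
    """Route a model output to automated release or to human review.

    Illustrative sketch: the 0.90 threshold is an assumed value a
    validation team would set, not a figure from any regulation.
    """
    needs_review = confidence < threshold
    return {
        "prediction": prediction,
        "confidence": confidence,
        "decision": "human_review" if needs_review else "auto_release",
    }

print(route_prediction("batch conforms", 0.97))  # released automatically
print(route_prediction("batch conforms", 0.71))  # escalated to a human
```

The design trade-off is exactly the one described above: the lower the threshold, the more autonomy the system keeps, and the less human oversight actually occurs.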
Due to the statistical nature of these models, process validation will have to be adjusted: rather than focusing on absolute, exact predictions of test results, it will have to focus on the bounds within which results can be considered acceptable.
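The shift from exact expected results to acceptance bounds might look like the sketch below: instead of asserting a single deterministic output, the check accepts a run if a sufficient share of repeated model outputs falls inside a pre-defined tolerance band. The bounds, pass rate, and sample values are illustrative assumptions a validation team would derive from process knowledge.

```python
def within_acceptance_bounds(outputs, lower, upper, min_pass_rate=0.95):
    """Bounds-based acceptance check for a statistical model.

    Rather than demanding one exact (idempotent) result, the run passes
    if at least min_pass_rate of the outputs lie in [lower, upper].
    All numeric limits here are assumed for illustration.
    """
    in_bounds = [lower <= x <= upper for x in outputs]
    return sum(in_bounds) / len(outputs) >= min_pass_rate

# Hypothetical repeated predictions of an assay value:
runs = [98.7, 99.1, 101.2, 100.4, 99.8, 100.9, 98.9, 100.1, 99.5, 100.6]
print(within_acceptance_bounds(runs, lower=98.0, upper=102.0))  # True
```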
From all of this, we believe the biggest consequence will fall on smaller companies. Taking the example of a small digital health or software-as-a-medical-device company, the required level of investment might simply not be attainable. This would hand even more power, and a degree of monopoly, to the same few companies that have enough resources, and would heavily hinder innovation, as stated above.
Despite its necessity, we continue to believe that, as currently written, the EU AI Act raises more issues and creates more barriers to the industry's progress than it resolves.
As of the publishing date, the final enforcement of the EU AI Act has been postponed to late 2028 and Annex 22 is still in draft; in parallel, the FDA is putting out its own rules around AI, which might themselves call the EU AI Act into question. How the very disparate treatment of AI by these two entities will impact the industry remains to be seen.
To conclude, while this Act may be necessary, its consequences and challenges bring some serious concerns that will have to be tackled as soon as possible, especially to alleviate the upcoming costs and spread the effort needed. This will ensure that any risk for patients is mitigated as early as possible.
And this is where companies such as wega can support and guide you on how to tackle the upcoming challenges.