The EU AI Act entered into force on August 1, 2024. Its provisions are phased: prohibited AI practices became enforceable on February 2, 2025. General-purpose AI model obligations and the governance framework apply from August 2, 2025. The most operationally significant provisions, covering high-risk AI systems, take effect on August 2, 2026. For PE deal teams evaluating acquisitions with AI components in EU markets, that deadline is not distant. It is four months away.
The critical insight for acquirers: the EU AI Act does not operate in isolation. It explicitly preserves GDPR's requirements. Article 2(7) of the AI Act states that it does not affect GDPR, and Article 10 imposes data governance obligations for high-risk AI systems that must be satisfied in addition to, not instead of, GDPR's existing requirements. An acquisition target must comply with both frameworks simultaneously. Validating compliance with one and ignoring the other leaves half the risk surface unexamined.
Where the Two Frameworks Overlap
The overlap between GDPR and the EU AI Act is concentrated in three areas: lawful basis for data processing, data governance and quality requirements, and transparency obligations. Each area creates a compound obligation where both frameworks impose requirements on the same data and the same processing activities.
On lawful basis: GDPR requires a documented lawful basis for every processing activity. When personal data is used to train, validate, or test an AI model, that use constitutes processing under GDPR. The lawful basis must cover AI model training as a specific processing purpose. Consent collected for "personalization" does not automatically extend to "model training." Legitimate interest asserted for "service improvement" may not survive the balancing test when applied to large-scale automated processing. The AI Act does not create a new lawful basis. It requires compliance with existing data protection law. A company that cannot demonstrate a valid GDPR lawful basis for its AI training data cannot comply with the AI Act's data governance requirements.
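In diligence terms, this reduces to a purpose-coverage check across the target's processing records. A minimal sketch in Python, assuming a simplified Article 30-style record; the field names and structure are illustrative, not drawn from any actual register format:

```python
from dataclasses import dataclass

@dataclass
class ProcessingRecord:
    """Simplified Article 30-style entry; fields are illustrative."""
    dataset: str
    lawful_basis: str            # e.g. "consent", "legitimate_interest"
    documented_purposes: list

def covers_model_training(record: ProcessingRecord) -> bool:
    # A lawful basis only helps if model training is itself a documented
    # purpose; "personalization" or "service improvement" does not extend.
    return "model_training" in record.documented_purposes

record = ProcessingRecord(
    dataset="crm_interactions",
    lawful_basis="consent",
    documented_purposes=["personalization"],
)
print(covers_model_training(record))  # False: the consent does not reach training
```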
On data governance: the AI Act's Article 10 requires providers of high-risk AI systems to use training, validation, and testing datasets that are "relevant, sufficiently representative, and to the best extent possible, free of errors and complete in view of the intended purpose." The data governance practices must include examination of possible biases, identification of data gaps, and measures to address them. GDPR's accuracy principle (Article 5(1)(d)) requires that personal data be "accurate and, where necessary, kept up to date." These are complementary but distinct obligations. An AI system that meets GDPR's accuracy requirement may still fail the AI Act's representativeness and bias examination requirements. Both must be satisfied.
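The distinction is easiest to see as two separate gates. A hypothetical example, with each check reduced to a boolean purely for illustration; real evidence would be documentation, not flags:

```python
# Hypothetical datasets: GDPR accuracy and the AI Act's representativeness
# and bias checks are separate gates; passing one says nothing about the other.
datasets = {
    "payroll_2024": {"gdpr_accurate": True, "representative": False,
                     "bias_examined": False},
    "applicants_eu": {"gdpr_accurate": True, "representative": True,
                      "bias_examined": True},
}

for name, checks in datasets.items():
    gaps = [check for check, passed in checks.items() if not passed]
    print(name, "meets both frameworks" if not gaps else f"gaps: {gaps}")
```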
On transparency: GDPR Article 22 gives data subjects the right not to be subject to decisions based solely on automated processing that produce legal or similarly significant effects, unless specific conditions are met. The AI Act imposes its own transparency requirements on high-risk AI systems, including notification to users that they are interacting with an AI system and documentation of the system's capabilities and limitations. A company deploying AI-driven lead scoring, credit assessment, or hiring tools must satisfy both sets of transparency requirements. GDPR's requirements focus on individual rights and notifications. The AI Act's requirements focus on system-level documentation and disclosure. Both apply.
High-Risk Classification and Its PE Implications
The AI Act uses a risk-based classification system. AI systems classified as "high-risk" under Annex III face the most stringent requirements: risk management systems, data governance, technical documentation, record-keeping, transparency, human oversight, accuracy, robustness, and cybersecurity. The high-risk categories include AI used in employment and worker management (recruitment, promotion, task allocation), creditworthiness assessment, insurance pricing, and access to essential services.
For PE acquirers, the high-risk classification question is concrete. Does the target use AI in any of the Annex III categories? If the answer is yes, the target must have the compliance infrastructure to support it by August 2, 2026. That infrastructure includes: a quality management system, a risk management system with documented risk identification and mitigation, data governance practices meeting Article 10 requirements, technical documentation meeting Annex IV specifications, automatic logging capabilities, transparency disclosures, human oversight mechanisms, and accuracy and robustness testing protocols.
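As a readiness check, that list can be treated as a straightforward gap analysis. A sketch, with control names as illustrative shorthand rather than terms defined by the Act:

```python
# Controls required for an Annex III high-risk system, as listed above.
# Names are illustrative shorthand, not terms defined by the Act.
REQUIRED_CONTROLS = {
    "quality_management_system",
    "risk_management_system",
    "article_10_data_governance",
    "annex_iv_technical_documentation",
    "automatic_logging",
    "transparency_disclosures",
    "human_oversight",
    "accuracy_robustness_testing",
}

def readiness_gap(implemented: set) -> set:
    """Controls the target still has to build before August 2, 2026."""
    return REQUIRED_CONTROLS - implemented

# Example: a target with only logging and documentation in place.
print(sorted(readiness_gap({"automatic_logging",
                            "annex_iv_technical_documentation"})))
```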
Building this infrastructure from scratch takes 12 to 18 months in our experience. If the target has not started, the acquirer is inheriting a compliance build-out that must be completed before the enforcement deadline. The cost of that build-out is a direct input to the deal model. So is the enforcement exposure if the deadline is missed: fines under the AI Act can reach 35 million euros or 7% of global annual turnover, whichever is higher, for the most serious violations.
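The exposure ceiling is simple arithmetic: the higher of the flat cap and the turnover-based cap. A quick sketch:

```python
def max_ai_act_fine(global_annual_turnover_eur: float) -> float:
    """Ceiling for the most serious violations: EUR 35M or 7% of global
    annual turnover, whichever is higher."""
    return max(35_000_000, 0.07 * global_annual_turnover_eur)

# For a target with EUR 800M global turnover, the 7% cap (EUR 56M) governs.
print(f"{max_ai_act_fine(800_000_000):,.0f}")  # 56,000,000
```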
The Training Data Problem
The most significant overlap between GDPR and the AI Act concerns training data. An AI system is only as defensible as the data it was trained on. If the training data was collected without a valid GDPR lawful basis, the model itself is built on a non-compliant foundation. The AI Act's data governance requirements (Article 10) do not cure this deficiency. They require the provider to ensure that data governance practices "take into account" the specific geographic, behavioral, or functional context in which the AI system is intended to operate. A model trained on non-compliant data does not meet this standard.
The practical question for acquirers: can the target demonstrate, for every dataset used in AI model training, that the data was collected with a lawful basis that covers the specific purpose of model training? Can the target demonstrate that data subjects were informed that their data would be used for this purpose? Can the target demonstrate that cross-border transfers of training data complied with GDPR Chapter V requirements?
In our experience, the answer is almost always no. Training datasets are assembled from multiple sources over time. Some data was collected under consent regimes that predate the AI use case. Some data was obtained from third-party providers whose own collection practices are opaque. Some data was scraped from public sources under a legitimate interest rationale that has not been tested. The result is a training corpus with mixed legal status, undocumented data lineage, and no systematic audit trail connecting data points to lawful bases.
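Mapped onto a lineage audit, the pattern looks like the sketch below, under the simplifying assumption that each source's status can be reduced to two fields; in practice each entry would carry the underlying documentation:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SourceRecord:
    """Lineage entry for one slice of the training corpus (illustrative)."""
    source: str
    lawful_basis: Optional[str]   # None = undocumented
    basis_covers_training: bool

corpus = [
    SourceRecord("legacy_consent_2019", "consent", False),            # predates the AI use case
    SourceRecord("third_party_vendor", None, False),                  # opaque collection practices
    SourceRecord("public_web_scrape", "legitimate_interest", False),  # rationale untested
]

# The corpus is only as defensible as its weakest slice.
defensible = all(r.lawful_basis and r.basis_covers_training for r in corpus)
print(defensible)  # False: mixed legal status, no clean audit trail
```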
Field observation: A PE-backed HR tech company used customer data (resumes, hiring outcomes, performance reviews) to train a candidate matching algorithm. The algorithm processed data from candidates across 14 EU member states. The company's privacy policy referenced "service improvement" as a processing purpose but did not specifically reference AI model training. The Article 30 records documented "automated matching" as a processing activity but listed "legitimate interest" as the lawful basis without a documented balancing test. Under GDPR, the lawful basis was arguably insufficient. Under the AI Act, the system was clearly high-risk (Annex III, category 4: employment). The company had no risk management system, no bias testing documentation, and no human oversight protocol. Four months before the AI Act enforcement deadline, the compliance gap spanned both frameworks.
Practical Diligence Framework for the Overlap
PE deal teams evaluating targets with AI capabilities in EU markets need a diligence framework that addresses both frameworks simultaneously. Evaluating GDPR compliance and AI Act readiness as separate workstreams creates blind spots at every point where the frameworks intersect. The overlap areas (training data lawful basis, data governance, and transparency) are exactly where the most significant risks concentrate.
The framework we apply covers four questions. First: what AI systems does the target operate, and which (if any) fall under the AI Act's high-risk classification? This determines the compliance scope. Second: for each AI system, what personal data is processed, and under what GDPR lawful basis? This tests the data foundation. Third: does the target have the technical and organizational infrastructure to meet both GDPR's ongoing obligations and the AI Act's system-level requirements? This assesses compliance readiness. Fourth: what is the remediation cost and timeline to close any gaps identified in questions one through three? This produces the deal model input.
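The same four questions, laid out as an ordered pipeline with the output each one produces; labels are illustrative shorthand for the prose above:

```python
# The four diligence questions in order, each mapped to its output.
DILIGENCE_FRAMEWORK = [
    ("Which AI systems fall under Annex III high-risk?", "compliance scope"),
    ("What personal data, under what GDPR lawful basis?", "data foundation"),
    ("Is infrastructure in place for both frameworks?", "compliance readiness"),
    ("What is the remediation cost and timeline to close gaps?", "deal model input"),
]

for question, output in DILIGENCE_FRAMEWORK:
    print(f"{question} -> {output}")
```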
The EU AI Act hub in our Risk Register covers the Act's requirements in detail. The GDPR hub covers the data protection framework. This page covers the intersection. All three are inputs to a complete acquisition risk assessment for targets with AI capabilities and EU data exposure.