AI Assets as Acquisition Thesis
PE deal theses increasingly rest on AI capabilities: proprietary recommendation engines, predictive lead-scoring models, churn-prediction algorithms, and dynamic pricing systems. These assets are valued on their technical performance and their competitive differentiation, and the valuation models assume the AI will continue to operate as-is post-close.
The AI Act introduces a regulatory variable that most valuation models do not incorporate. An AI system classified as high-risk under Annex III faces conformity assessment requirements, mandatory quality management systems, human oversight obligations, and ongoing monitoring duties. The cost of bringing a non-compliant high-risk system into compliance is not trivial. Depending on the system's architecture and training data provenance, the remediation can require retraining the model from scratch with compliant data, which in some cases eliminates the competitive advantage that justified the premium.
What we see in practice: deal teams assess AI capability by examining model performance metrics, accuracy rates, and inference speed. They rarely examine the regulatory classification of the system or the legal basis for the training data. This gap creates a valuation risk that surfaces post-close, typically when the compliance team (if one exists) flags the obligations that were not diligenced.
Risk Classification and Its Deal Impact
The AI Act establishes four risk tiers: unacceptable risk (prohibited), high risk, limited risk, and minimal risk. The classification determines the compliance burden. For PE acquirers, the critical question is whether any of the target's AI systems fall into the high-risk category, because high-risk systems carry the heaviest obligations and the steepest non-compliance penalties.
High-risk classification under Annex III covers AI systems used in employment and worker management (automated CV screening, performance monitoring), creditworthiness assessment, access to essential services, and certain categories of law enforcement and migration management. For PE portfolio companies, the most common high-risk triggers are HR tech tools that use AI for candidate screening or employee evaluation, and fintech applications that use AI for credit decisions.
The classification is not always obvious. A marketing personalization engine that recommends products is likely limited risk. The same engine, if it determines pricing based on individual behavioral profiles and that pricing affects access to essential services, could be high-risk. The boundary is fact-specific, and most companies have not done the analysis. For acquirers, this means the compliance cost is unknown at the time of the investment decision.
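The classification question can be reduced to a preliminary triage. The sketch below uses the Annex III trigger categories discussed above; the category names and flags are a simplified illustrative heuristic, not a legal test, and any system it flags still needs the fact-specific analysis.

```python
# Illustrative pre-LOI triage: map a target's AI systems to the Annex III
# trigger categories discussed above. Category names are a simplified
# heuristic for scoping, not the regulation's exact wording.

ANNEX_III_TRIGGERS = {
    "employment_screening",       # automated CV screening, performance monitoring
    "creditworthiness",           # AI-driven credit decisions
    "essential_services_access",  # pricing/eligibility gating essential services
    "law_enforcement",
    "migration_management",
}

def triage(system_name: str, use_cases: set[str]) -> str:
    """Return a preliminary risk bucket for a portfolio AI system."""
    hits = use_cases & ANNEX_III_TRIGGERS
    if hits:
        return f"{system_name}: potentially HIGH-RISK (triggers: {sorted(hits)})"
    return f"{system_name}: likely limited/minimal risk -- confirm with counsel"

# The personalization-engine example from above: the same system shifts
# buckets once its use cases touch access to essential services.
print(triage("marketing_personalizer", {"product_recommendations"}))
print(triage("marketing_personalizer", {"product_recommendations",
                                        "essential_services_access"}))
```

The point of a sketch this crude is speed: it surfaces which systems need counsel before the LOI, not after.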
Training Data Requirements Under Article 10
Article 10 of the AI Act imposes specific data governance requirements on high-risk AI systems. Training, validation, and testing datasets must be subject to documented data governance practices. These include examination of possible biases, identification of data gaps, assessment of the relevance and representativeness of the data, and clear documentation of the statistical properties of the dataset.
For PE acquirers, Article 10 creates a due diligence obligation that goes beyond traditional data privacy review. The question is not just whether the data was collected lawfully (that is a GDPR question). The question is whether the data meets the quality, documentation, and bias assessment standards required for the AI system's risk classification. These are different questions with different evidentiary requirements.
In practice, most companies cannot produce the documentation Article 10 requires. Training datasets were assembled organically over years. Bias assessments were never conducted. Data provenance is undocumented. The gap between what exists and what Article 10 requires is the remediation cost. For acquirers building deal models on AI capability, that cost belongs in the model.
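That gap analysis can be framed as a simple set difference: the documentation the target can actually produce versus the items Article 10 expects. The item names below paraphrase the requirements listed above and are illustrative, not the regulation's text.

```python
# Sketch of the Article 10 gap analysis described above. Item names
# paraphrase the data governance requirements; they are illustrative labels,
# not the regulation's wording.

ARTICLE_10_ITEMS = {
    "data_governance_practices",
    "bias_examination",
    "data_gap_identification",
    "relevance_and_representativeness_assessment",
    "statistical_properties_documentation",
}

def article10_gap(produced_docs: set[str]) -> set[str]:
    """Return the documentation items the target cannot produce."""
    return ARTICLE_10_ITEMS - produced_docs

# Typical finding in practice: some governance exists, but bias work and
# provenance documentation were never done.
gaps = article10_gap({"data_governance_practices"})
print(f"{len(gaps)} of {len(ARTICLE_10_ITEMS)} items missing: {sorted(gaps)}")
```

Each missing item is a line in the remediation budget; the set difference is the starting point for pricing it.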
Valuation Implications and Deal Structure
The AI Act creates three categories of valuation impact for PE deals. First, direct compliance cost: the expense of bringing AI systems into conformity with the applicable risk tier. For high-risk systems, this includes conformity assessment, quality management system implementation, technical documentation, and ongoing monitoring infrastructure. Budget estimates range from $500K to $5M depending on system complexity and current state.
Second, data remediation cost: the expense of bringing training datasets into compliance with Article 10 requirements. When training data lacks provenance documentation, bias assessment, or was collected under consent frameworks that do not cover model training, the remediation may require new data collection, re-consenting data subjects, or retraining models. This cost is highly variable and can exceed the direct compliance cost.
Third, ongoing operational cost: the recurring expense of maintaining compliance. High-risk AI systems require continuous monitoring, periodic reassessment, and human oversight mechanisms. These are not one-time costs. They are permanent additions to the operating budget that affect EBITDA and, consequently, exit multiples.
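A back-of-envelope calculation makes the exit-multiple point concrete. All figures below are illustrative assumptions, not benchmarks.

```python
# Back-of-envelope sketch of how a recurring compliance cost flows through
# to exit value. All figures are illustrative assumptions, not benchmarks.

ebitda = 20_000_000                  # target's pre-compliance EBITDA ($)
exit_multiple = 10.0                 # assumed EBITDA exit multiple
annual_compliance_cost = 1_500_000   # recurring monitoring/oversight cost ($/yr)

value_before = ebitda * exit_multiple
value_after = (ebitda - annual_compliance_cost) * exit_multiple

print(f"Exit value impact: ${value_before - value_after:,.0f}")
# A $1.5M permanent EBITDA drag at a 10x multiple erases $15M of exit value.
```

The drag compounds with the multiple, which is why a cost that looks small against revenue is not small against exit value.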
Deal structure should account for these costs explicitly. Representations and warranties should cover AI Act compliance status. Indemnification provisions should address pre-close non-compliance. Earnout structures tied to AI performance should factor in the possibility that compliance requirements constrain the system's functionality. None of this is standard in current PE deal documentation, which means the risk sits with the acquirer by default.
The Pre-LOI Assessment Framework
Before issuing a letter of intent on any AI-dependent acquisition, deal teams should answer four questions. First, which AI systems does the target operate, and what is their risk classification under the AI Act? Second, what is the provenance and legal basis of the training data for each system? Third, what is the gap between current documentation and Article 10 requirements? Fourth, what is the estimated cost and timeline for achieving compliance?
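One way to operationalize the four questions is a per-system scoping record, sketched below. The field names and the flagging rule are illustrative, not a standard template.

```python
# Sketch of the four pre-LOI questions as a per-system scoping record.
# Field names and the flagging rule are illustrative, not a standard form.

from dataclasses import dataclass, field

@dataclass
class PreLOIAssessment:
    system: str
    risk_classification: str            # Q1: preliminary AI Act risk tier
    training_data_provenance: str       # Q2: source and legal basis of the data
    article10_gaps: list[str] = field(default_factory=list)  # Q3: missing docs
    est_compliance_cost: float = 0.0    # Q4: rough-order cost estimate ($)
    est_timeline_months: int = 0        # Q4: rough-order timeline

    def flags_for_loi(self) -> bool:
        """True if this system warrants explicit protections in the LOI."""
        return self.risk_classification == "high-risk" or bool(self.article10_gaps)

screener = PreLOIAssessment(
    system="cv_screener",
    risk_classification="high-risk",
    training_data_provenance="undocumented",
    article10_gaps=["bias_examination", "statistical_properties_documentation"],
    est_compliance_cost=2_000_000,
    est_timeline_months=12,
)
print(screener.flags_for_loi())
```

A record like this per AI system is roughly the "directionally accurate" output the pre-LOI stage needs.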
These questions require technical expertise that most deal teams do not have in-house. They also require regulatory interpretation that standard legal counsel may not provide, because the AI Act's implementing regulations are still being finalized and the enforcement landscape is nascent. This is precisely the type of analysis that should happen before the LOI, not during confirmatory diligence. By the time confirmatory diligence surfaces a compliance gap, the deal terms are already set and the acquirer's negotiating position is weaker.
The assessment does not need to be exhaustive at the pre-LOI stage. It needs to be directionally accurate. A scoping review that identifies the number and classification of AI systems, a preliminary training data provenance check, and a rough-order compliance cost estimate is sufficient to inform the investment decision and structure appropriate protections into the LOI. The full Article 10 documentation audit can follow during confirmatory diligence.