
What the EU AI Act requires for AI agent logging

Apr 17, 2026  Twila Rosenbaum

The EU AI Act, a 144-page regulation, establishes logging requirements for AI agent developers through several interrelated articles. With the August 2026 compliance deadline approaching, understanding these requirements is critical.

Your Agent May Be High-Risk

While the term "AI agents" is not explicitly mentioned in the Act, the classification hinges on the system's operational impact. If your AI agent engages in actions such as scoring credit applications, filtering resumes, determining healthcare benefits, pricing insurance, or triaging emergency calls, it is categorized under Annex III as high-risk.

According to Article 6(3), there is a potential exemption if the system does not significantly influence decision outcomes. However, proving this for an agent that autonomously calls tools and acts on their results is a challenging task.

General-purpose AI models have distinct obligations outlined in Chapter V. Although the model itself is not classified as high-risk, a system built on top of it may become high-risk once deployed in a high-risk context. The model provider retains its obligations under Chapter V, while the integrator assumes high-risk provider responsibilities under Article 25.

Key Articles to Consider

Article 12 requires that high-risk AI systems technically allow for the automatic recording of events over their lifetime. Two critical terms here are "automatic" and "lifetime." Automatic means the system itself must generate the logs; manual documentation does not suffice. Lifetime means the entire span from deployment to decommissioning, not just the current version.

Furthermore, Article 12(2) specifies that logs must encompass three categories: instances of potential risk or significant modifications, data for post-market monitoring, and data for operational oversight by deployers. The regulation does not dictate a specific format or require designated fields, only that the logs serve these three purposes.
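To make this concrete, here is a minimal sketch of what an automatically generated event record could look like. The field names are assumptions for illustration; Article 12(2) names the purposes logs must serve, not a schema:

```python
import json
import time
import uuid

def log_event(event_type: str, detail: dict, risk_flag: bool = False) -> str:
    """Emit one automatically generated log record as a JSON line.

    Illustrative only: Article 12(2) requires that logs support risk
    identification, post-market monitoring, and deployer oversight,
    but prescribes no specific fields or format.
    """
    record = {
        "event_id": str(uuid.uuid4()),  # unique reference for later audits
        "timestamp": time.time(),       # when the event occurred
        "event_type": event_type,       # e.g. "tool_call", "llm_response"
        "risk_flag": risk_flag,         # marks potential-risk situations
        "detail": detail,               # payload for post-market monitoring
    }
    return json.dumps(record)

# An agent logging a tool call with no manual step involved.
line = log_event("tool_call", {"tool": "credit_scorer", "input_hash": "ab12"})
```

The point is that the record is produced by the system at the moment of the event, which is what "automatic" demands.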

Article 13 emphasizes the need to document how deployers can collect and interpret these logs, functioning as a technical integration guide for the logging framework rather than a compliance manual.

Articles 19 and 26 set a minimum retention period of six months for logs. Financial institutions may keep AI logs within the documentation they already maintain under financial services law; other sectors must retain them for at least six months, and longer where sector-specific regulations require it.

Limitations of Standard Logging

AI agents perform various tasks, including calling tools, delegating to sub-agents, retrieving LLM responses, and generating final outputs. Regular application logging can capture these actions effectively. However, issues arise when regulators request proof of log integrity six months post-incident.

Application logs live on infrastructure the operator controls, so they can be edited or replaced after the fact. Although Article 12 does not explicitly require tamper-proof logs, logs that can be altered without detection carry no evidentiary weight, which is a serious problem for high-risk systems.

This concern led to the exploration of cryptographic signing for agent logs, an approach currently being integrated into a project called Asqav. The proposed solution involves signing each agent action with a key that the agent does not possess, linking each signature to the previous one, and securely storing the receipt in a manner inaccessible to the agent. Any alteration in the logs would visibly disrupt the signature chain.

The pattern is more crucial than any individual tool; the signing key should exist outside the agent's trust perimeter, ensuring that each action has a corresponding receipt, forming a verifiable chain. Implementations following these principles can satisfy the requirements outlined in Article 12, regardless of whether NIST FIPS 204 post-quantum signatures or alternative methods are utilized.
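The chaining pattern described above can be sketched in a few lines. This is a simplified illustration, not the Asqav implementation: HMAC-SHA256 stands in for whatever signature scheme a real deployment would use (such as ML-DSA per FIPS 204), and the `LogSigner` class models a service that holds the key outside the agent's trust perimeter:

```python
import hashlib
import hmac
import json

class LogSigner:
    """Holds the signing key outside the agent's trust perimeter.

    Each receipt signs the current entry together with the previous
    signature, so altering any earlier entry breaks every later
    signature in the chain.
    """

    def __init__(self, key: bytes):
        self._key = key              # never exposed to the agent
        self._prev_sig = b"genesis"  # chain anchor for the first entry

    def sign(self, entry: dict) -> str:
        payload = json.dumps(entry, sort_keys=True).encode() + self._prev_sig
        sig = hmac.new(self._key, payload, hashlib.sha256).digest()
        self._prev_sig = sig
        return sig.hex()

def verify_chain(entries, sigs, key: bytes) -> bool:
    """Recompute the chain; any tampered entry invalidates all later receipts."""
    prev = b"genesis"
    for entry, sig in zip(entries, sigs):
        payload = json.dumps(entry, sort_keys=True).encode() + prev
        expected = hmac.new(key, payload, hashlib.sha256).digest()
        if not hmac.compare_digest(expected.hex(), sig):
            return False
        prev = expected
    return True
```

If a verifier replays the entries and the signatures no longer match, the break pinpoints where the log was modified, which is exactly the tamper-evidence property the pattern provides.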

No Established Standard Yet

As of now, there is no finalized technical standard for logging as per Article 12. Two drafts to monitor are prEN 18229-1, which addresses AI logging and human oversight, and ISO/IEC DIS 24970, focusing on AI system logging. Both documents are still in development.

Developers must prepare for regulations that define outcomes without prescribing methods. Teams that establish effective logging now will be well positioned when the standards are finalized, while those that delay may be forced to retrofit under pressure.

Deadlines and Penalties

The obligations outlined in Annex III are set to take effect on August 2, 2026. Although the Commission proposed a delay through the Digital Omnibus package in November, potentially extending this to December 2027, no formal law has been enacted, leaving August 2026 as the enforceable date.

Failure to comply may result in penalties of up to 15 million euros or 3% of global annual turnover, whichever amount is greater. This penalty structure applies universally, but Article 99 stipulates that penalties should be proportionate and dissuasive, allowing national authorities to consider a company's size and economic viability. Consequently, startups and SMEs may face lower fines despite the unchanged formula.

Three Critical Questions

  • Is your system capable of generating logs automatically at every decision juncture?
  • Can those logs withstand tampering attempts?
  • Are you able to retain them for six months in a format accessible to regulators?

Time is of the essence, and the deadline is approaching faster than anticipated.


Source: Help Net Security News

