The AI agent is the new kid on the block. Are you prepared for the legal risks?

The new kid on the block in the artificial intelligence (AI) landscape is the AI agent, an autonomous system capable of analysing situations, making independent decisions and performing tasks without human intervention. AI agents don’t need a new prompt each time they generate a response: they are given a high-level task and work out how to complete it themselves. While their capabilities offer significant opportunities, the autonomy inherent in AI agents also raises serious legal challenges.

What are AI agents?

AI agents, also referred to as agentic AI, are AI systems designed to operate autonomously. Recent advances in generative AI and increasing digitalisation have accelerated the development and deployment of these autonomous systems. They can independently analyse situations, set sub-goals, make decisions and perform actions without continuous human supervision. Unlike traditional AI assistants such as Alexa or Siri, or chatbots such as ChatGPT, which only act upon explicit user input, AI agents proactively and autonomously interact with digital environments. A recent example is OpenAI’s Operator, which demonstrates how these agents can perform web-based tasks without human intervention. Its research preview is currently available only in the US, probably due to stricter regulatory scrutiny around autonomous decision-making and data privacy in the EU.

What sets AI agents apart is their ability to perform tasks and make decisions on a computer in ways that closely resemble human behaviour. Unlike generative AI, which is designed to produce new content, agentic AI is built to take action: agents can navigate interfaces, move cursors, type, click and interpret screen content. For example, an AI agent can access local files and complete online forms without user input. While this may seem like a simple feature, the broader impact is significant: by automating routine digital tasks, AI agents free up time for work that requires strategic thinking and human judgement.

This ability to automate practical, task-based work is already being applied in business contexts. Google Agentspace is a clear example of how AI agents can be integrated into everyday business processes: it allows companies to integrate and deploy AI agents that perform everyday digital tasks across tools like Google Drive, Gmail, and even non-Google platforms such as Microsoft SharePoint, which can increase both efficiency and compliance. For example, an agent could search for relevant documents in Google Drive, draft and organise email responses in Gmail, or extract and summarise information from reports stored in SharePoint.

What are the legal risks?

While AI agents offer significant opportunities, their autonomy also presents complex legal challenges, particularly within the regulatory landscape of the European Union. The ability of these systems to act independently, access sensitive data and interact with the external environment raises difficult questions around cybersecurity, data protection and accountability.

· Cybersecurity

The autonomous nature of AI agents introduces new cybersecurity risks, especially when they interact with external systems. Their ability to act independently can amplify the impact of threats such as prompt injection, data poisoning or unauthorised access to trade secrets, and because agents operate without immediate human oversight, the resulting damage can escalate quickly. Organisations should implement strong access controls, monitoring and regular testing to ensure that agents operate securely and are not vulnerable to manipulation. Ensuring compliance with NIS 2 can be a step in the right direction.
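
To make the idea of access controls and monitoring for agents more concrete, here is a minimal sketch in Python: it gates every tool call an agent attempts against a per-agent allowlist and writes each attempt to an audit log. All tool and agent names are hypothetical, and this illustrates the control pattern rather than any production implementation.

```python
import logging
from dataclasses import dataclass, field

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("agent-audit")

@dataclass
class AgentPolicy:
    """Per-agent allowlist: the only tools this agent may invoke."""
    agent_id: str
    allowed_tools: set[str] = field(default_factory=set)

class ToolGate:
    """Checks every tool call against the agent's policy and logs it."""

    def __init__(self, policy: AgentPolicy):
        self.policy = policy

    def call(self, tool_name: str, *args, **kwargs):
        if tool_name not in self.policy.allowed_tools:
            # Deny and record the attempt; a real system might also alert.
            audit_log.warning("DENIED %s -> %s", self.policy.agent_id, tool_name)
            raise PermissionError(f"{tool_name} is not allowed for this agent")
        audit_log.info("ALLOWED %s -> %s %s", self.policy.agent_id, tool_name, args)
        return TOOLS[tool_name](*args, **kwargs)

# Hypothetical tools the agent could try to use.
def search_documents(query: str) -> str:
    return f"results for {query!r}"

def delete_file(path: str) -> None:
    raise RuntimeError("unreachable for this agent: not on its allowlist")

TOOLS = {"search_documents": search_documents, "delete_file": delete_file}

gate = ToolGate(AgentPolicy("drafting-agent", {"search_documents"}))
print(gate.call("search_documents", "NIS 2 obligations"))  # allowed and logged
try:
    gate.call("delete_file", "/tmp/report.docx")           # denied and logged
except PermissionError as exc:
    print(exc)
```

The same pattern extends naturally to rate limits, argument validation or human sign-off for sensitive tools.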

· High-risk classification under the AI Act

Due to their decision-making capabilities, AI agents may fall into the “high-risk” category under Annex III of the AI Act when used in contexts such as employment, education or public services. This classification imposes stricter obligations on providers and deployers, including risk management, transparency, human oversight and detailed documentation, which in practice translate into audits and higher compliance costs.

· Data protection

AI agents raise specific challenges under the GDPR due to their autonomous processing of personal data and deep access to company devices, which can increase the risk of data breaches or unauthorised disclosure of personal data to users or external tools. Organisations must ensure a clear legal basis for each processing activity, whether consent, contractual necessity or legitimate interest. In most cases a data protection impact assessment (DPIA) will be required, and it will need to be updated whenever the functionality of the AI agent changes.

· Accountability

When companies use AI agents to act on their behalf, they are generally responsible for the results. This includes errors made by the agent, as well as legal issues such as unauthorised disclosure of confidential information, infringement of intellectual property rights or trade secrets, or unintended agreements. For example, in the case of “the AI agent as customer or supplier”, if an AI agent initiates a contract with a third party, the organisation behind the agent is usually considered legally bound. Other examples include AI agents used to drive cars autonomously, agents that automatically detect and prevent cybersecurity threats, or agents that manage crypto wallets.

What are effective strategies to mitigate the legal risks?

Companies using AI agents should adopt targeted measures and best practices to reduce legal and compliance risks:

  • Define the scope of authority by setting clear operational boundaries and restricting agents to specific tasks for which they are properly trained and regularly monitored (a minimal sketch of such a rule follows this list).
  • Ensure your internal systems are secure by implementing robust cybersecurity measures, such as authentication protocols, data access controls and encryption, before allowing AI agents to operate in your environment.
  • Perform ongoing risk assessments and compliance reviews to ensure that processes comply with regulatory requirements.
  • Maintain effective human oversight, especially in high-impact or sensitive decision-making contexts.
  • Establish clear contracts with providers and partners that outline responsibilities, with particular attention to data processing, liability and cybersecurity.
  • Provide regular training and awareness sessions for employees interacting with or overseeing AI agents to prevent inadvertent compliance breaches.
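
As an illustration of the first and fourth points, the sketch below shows one way a human-oversight rule could be encoded: actions that would bind the organisation, or that exceed a value threshold, are escalated to a human reviewer instead of executing automatically. The class names, fields and threshold are illustrative assumptions, not an established framework.

```python
from dataclasses import dataclass
from enum import Enum

class Decision(Enum):
    AUTO_APPROVE = "auto-approve"
    NEEDS_HUMAN = "needs human sign-off"

@dataclass
class ProposedAction:
    description: str
    binds_organisation: bool    # e.g. signing a contract with a third party
    estimated_value_eur: float

# Illustrative threshold: any binding action, or anything above this
# value, is escalated to a human reviewer before it may execute.
APPROVAL_THRESHOLD_EUR = 1_000.0

def review(action: ProposedAction) -> Decision:
    """Route an agent's proposed action through the oversight policy."""
    if action.binds_organisation or action.estimated_value_eur > APPROVAL_THRESHOLD_EUR:
        return Decision.NEEDS_HUMAN
    return Decision.AUTO_APPROVE

proposals = [
    ProposedAction("reorder printer paper", False, 45.0),
    ProposedAction("sign supplier framework agreement", True, 25_000.0),
]
for action in proposals:
    print(f"{action.description}: {review(action).value}")
```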

Conclusion

AI agents offer significant potential to automate complex tasks and improve efficiency, but their autonomy brings new legal and regulatory challenges.

They may be the new kids on the block, but it’s crucial to get your own house in order – especially when it comes to data access controls and cybersecurity – before giving them access. Act now to put robust frameworks and processes in place: your legal and operational resilience may depend on it.

Authors:

Edwin Jacobs, Wouter Torfs
