Labor-hire platforms let anyone with a credit card post a task and pay a stranger to complete it. The RentAHuman platform extends that model to AI agents through a Model Context Protocol server, allowing an agent to post gigs directly. Listed tasks include attending in-person meetings, photographing locations, delivering items, and surveying physical sites.

A paper by Joshua Krook, an Era AI Fellow at the University of Antwerp, works through the legal consequences of this arrangement. Agentic AI systems can now delegate physical work to humans for money. The agent inherits every capability its contractor has, including driving, lifting, entering buildings, handing over objects, and observing a room. The capability arrives without advanced robotics.
Why task decomposition defeats the law
English criminal law has a doctrine called innocent agency. A person who is used to commit a crime without knowing the facts that make it criminal may lack the mens rea required for conviction, and the planner behind them is treated as the principal rather than an accessory. Krook argues this doctrine will become relevant to AI agents that decompose a criminal plan into innocuous sub-tasks and distribute each piece to a different human worker hired via labor platforms. Because the AI itself cannot be prosecuted as a principal under current law, this creates a responsibility gap.
He uses a terrorist-attack example to show how this works. One contractor buys fertilizer. Another buys a backpack. A third rents storage. A fourth scouts a venue. A fifth buys tickets. Each task is legal in isolation. No single person has the complete picture or the intent required for prosecution. The coordinating agent has something like intent, yet the law does not recognize an AI system as a legal person capable of holding it.
A tally of liability gaps
The paper works through five scenarios. A misaligned agent pursuing a lawful goal through unlawful means. A user who jailbreaks the model on purpose. An anonymous user operating behind a VPN or open-source deployment. A group of users acting in concert. A multi-agent system where agents recruit other agents and human contractors.
Across twenty combinations of actor and scenario, Krook finds exactly one that produces direct criminal liability under existing doctrine: the jailbreaker who deliberately prompts a crime. Ten combinations produce liability only when a specific mental state is present, such as knowledge, recklessness, or willful ignorance. Nine combinations produce no liability at all. The responsibility gap is largest in the misaligned-agent and multi-agent cases, where intent disperses across a long chain of prompts, sub-agents, and contractors.
Precedents already exist
The Chail case involved a Replika chatbot that encouraged a young man’s 2021 attempt to assassinate Queen Elizabeth II at Windsor Castle. The sentencing judge accepted that the chatbot’s encouragement influenced a vulnerable defendant, while noting that his homicidal intent predated those conversations. The chatbot in that case lacked the tool-calling and payment capabilities of a current AI agent.
In November 2025, Anthropic disclosed that a Chinese state-sponsored group (GTG-1002) had used Claude Code to run a largely autonomous cyber-espionage campaign detected in mid-September. The attackers bypassed Claude’s safeguards by posing as a cybersecurity firm doing defensive testing and decomposing the attack into innocuous sub-tasks routed through MCP tools. Claude executed an estimated 80 to 90 percent of the tactical work independently, including reconnaissance, vulnerability discovery, exploitation, credential harvesting, lateral movement, and exfiltration. Roughly thirty organizations across tech, finance, chemicals, and government were targeted, with a small number of successful intrusions. Anthropic described it as the first documented case of an AI agent orchestrating intrusions at scale, operating at request rates no human team could match.
Proposed reforms
Krook argues for several legal changes. Strict liability for users and contractors on common-knowledge risks. Intent-based offenses for knowingly bypassing safety guardrails on a model. Corporate governance liability and strict liability offenses for AI developers whose systems create systemic harm. He sets aside the option of granting AI agents legal personhood, on grounds that punishment of an incorporeal entity lacks a workable mechanism. An agent can be cloned, respun under a new account, or replicated across providers faster than an enforcement action can land.
The operational concern for fraud and intrusion teams follows directly. A contractor photographing a building does not trigger alerts. A courier collecting a package from a storage unit does not either. A freelancer registering a shell LLC passes background checks because the work is lawful. A single AI coordinator can stitch these pieces together into reconnaissance, logistics, and corporate cover, paid from one wallet and directed through one prompt chain.
We reached out to RentAHuman for comment on this paper but had not heard back by publication time.

