Shadow AI is by definition invisible to and uncontrolled by both the IT and security departments.
Shadow IT is a long-standing security concern. Its primary cause is employees attempting to improve their performance – to be better at work. Its effect is the introduction of unknown and unmanaged risk, and that can be problematic.
Today, shadow IT is morphing into shadow AI – the tools employees quietly introduce to improve their personal performance are increasingly AI tools. The effect is the same, but the risk is magnified by the potential power of unknown and unmanaged agentic AI.
CoChat, launched in the first week of April 2026, is a platform designed to bring visibility and governance into enterprise AI shadows. It does this by giving employees access to the major foundation LLMs, removing the need for users to establish multiple disconnected gen-AI and agentic AI silos.
The danger in basing personal knowledge on LLMs is that their responses are still not guaranteed to be accurate. Different users may use different LLMs, and those LLMs may give different answers to the same question.
The danger of shadow AI is that neither IT, nor the security department, nor the rest of the organization is aware that these users are not exercising personal judgment but are basing their knowledge on an external and unknown LLM.
Employees are also installing agentic systems with unknown potential for autonomous action. Here, CoChat provides a control layer between the LLM and the agent, examining the LLM reasoning that ‘instructs’ the agent’s action. If the ‘instruction’ is considered dangerous (for example, the potential exposure of sensitive data to third parties, or the deletion of personal or enterprise data), CoChat will pause the autonomy and ask the user to explicitly approve or reject the process.
CoChat enforces a human in the loop even where agentic systems are designed to operate without one. Consider OpenClaw – an autonomous personal assistant that directly serves the cause of shadow AI: improved personal performance. Estimates suggest OpenClaw has around 3 million active users. History suggests, metaphorically at least, it has an amoral mind that demands immediate unhindered gratification – and this can be problematic.
“People feel the pain of needing to get the most out of AI, wanting to increase their performance productivity,” commented Marcel Folaron, CEO at CoChat. “So, they turn to automated AI tooling, such as OpenClaw and other locally installed tools, but not necessarily with IT’s knowledge. This can be very dangerous. These tools have access to everything on your system, and without the proper control mechanisms, they can run amok.”
The LLM in an agentic system uses its own reasoning power, which is not guaranteed to be perfect, to instruct the agent on what to do next, potentially without any further reference to the user. Agents, which are dynamic, adaptive and stateful, respond and take actions based on that reasoning. Without human oversight, this can go very wrong.
“If we identify an action we deem to be dangerous, we delay that action. We ask the user to approve or reject that action, and the next action is directed by the user rather than automatically enacted by the agentic system,” he continued.
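The pattern Folaron describes can be sketched in a few lines. This is a hypothetical illustration of a human-in-the-loop control layer, not CoChat's actual implementation: the risky-action list and the `review_action` helper are invented for the example.

```python
# Hypothetical sketch of a human-in-the-loop gate between an LLM's
# reasoning and an agent's action. Names here (RISKY_ACTIONS,
# review_action) are illustrative, not CoChat's real API.

RISKY_ACTIONS = {"delete_file", "send_data_external", "share_credentials"}

def review_action(action: str, approve) -> bool:
    """Return True if the proposed agent action may proceed.

    Low-risk actions pass automatically; actions deemed dangerous
    pause the agent's autonomy and defer to the human `approve`
    callback for an explicit approve/reject decision.
    """
    if action not in RISKY_ACTIONS:
        return True            # benign: let the agent continue
    return approve(action)     # pause autonomy; the user decides

# Example: an agent proposes two actions; the human rejects everything.
decisions = [review_action(a, approve=lambda a: False)
             for a in ["read_file", "delete_file"]]
print(decisions)  # [True, False]
```

The key design point, per the article, is that the next step is directed by the user rather than automatically enacted by the agentic system.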
The purpose of CoChat is to provide visibility into enterprise shadow AI, to impose governance over it, and to encourage AI teamwork rather than invisible, isolated silos of operation. “CoChat brings the top AI solutions seamlessly into a secure workspace so teams can collaborate more effectively and use these tools with greater transparency and confidence,” said Folaron.
In some ways, it can be understood by analogy with Slack. Slack provides channels that bring individuals into teamwork. If members think others are going astray, they can raise concerns and the issue can be discussed. In CoChat, the performance of different LLMs and agentic systems can likewise be seen and compared.
An individual user might be fooled by an LLM’s innate desire to please: to provide the response it assumes the user wants. But other members on the platform might question this and raise their concerns.
CoChat allows each user to run the LLM and agentic system of their choice, and encourages the use of multiple LLMs to detect hallucinations and potential misdirection of agentic systems. But because it is a platform, it doesn’t simply ensure a human in the loop; it allows multiple humans in each loop. The AI used via the platform may technically remain shadow AI, but a layer of visibility, transparency and governance is applied to it.
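The multi-LLM cross-checking idea can be illustrated with a simple majority-vote comparison. This is a hypothetical sketch in the spirit of the article, not CoChat's mechanism: the model callables are stand-ins, not real API clients.

```python
# Hypothetical sketch of cross-model answer checking: ask several LLMs
# the same question and flag answers that disagree with the majority
# as candidates for human review. Model names are invented stand-ins.

from collections import Counter

def cross_check(question, models):
    """Query each model; return the majority answer and the names of
    models whose answer dissents from it."""
    answers = {name: ask(question) for name, ask in models.items()}
    majority, _ = Counter(answers.values()).most_common(1)[0]
    flagged = [name for name, ans in answers.items() if ans != majority]
    return majority, flagged

models = {
    "model_a": lambda q: "Paris",
    "model_b": lambda q: "Paris",
    "model_c": lambda q: "Lyon",   # dissenting answer, possible hallucination
}
print(cross_check("What is the capital of France?", models))
# → ('Paris', ['model_c'])
```

A flagged dissent doesn't prove a hallucination, but it gives the other humans in the loop something concrete to question.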
CoChat is fundamentally an AI collaboration platform designed for teamwork. It allows users to work together in shared chats with leading AI models, custom assistants, and autonomous agents while connecting AI workflows to the tools they already use – but interrupting potentially dangerous autonomous actions.
Learn More at the AI Risk Summit | Ritz-Carlton, Half Moon Bay
Related: Can We Trust AI? No – But Eventually We Must
Related: Shadow AI Risk: How SaaS Apps Are Quietly Enabling Massive Breaches
Related: The Shadow AI Surge: Study Finds 50% of Workers Use Unapproved AI Tools
Related: Why Agentic AI Systems Need Better Governance – Lessons from OpenClaw

