
As AI Agents Gain Traction, Security Concerns Trigger Corporate Crackdown. The photo shows a capture of the OpenClaw homepage.
SEOUL, Feb. 9 (Korea Bizwire) — Artificial intelligence systems that can navigate computer screens, click a mouse and type on a keyboard — effectively performing tasks on behalf of humans — are spreading rapidly among developers and early adopters.
But as the open-source AI agent known as “OpenClaw” gains popularity, major technology companies in South Korea are tightening security controls, warning that the tool could expose sensitive corporate data and personal information.
Often likened to Jarvis, the fictional AI assistant in the film “Iron Man,” OpenClaw can autonomously execute complex instructions without human intervention. Yet its ability to access internal systems has raised alarms over potential data leaks, cyberattacks and operational errors.
According to industry officials on Sunday, Naver, Kakao and Danggeun (Karrot) have recently instructed employees — particularly developers — not to use OpenClaw on company networks or work devices.
Kakao said it was restricting the tool, previously known as ClaudeBot or MaltBot, to safeguard corporate information assets. Naver has issued a similar internal ban, while Danggeun has blocked access entirely, citing risks beyond the company’s control.
The moves mark the first coordinated domestic pushback against a specific AI tool since early 2025, when several public institutions and corporations restricted the Chinese AI model DeepSeek over data security concerns.
In the semiconductor sector, where intellectual property protection is paramount, companies such as Samsung Electronics and SK hynix have already barred employees from using external generative AI models on internal networks since the ChatGPT boom began in 2023.
While neither company has formally announced a ban on OpenClaw, online workplace forums suggest internal security teams are monitoring its use.
China has taken a more explicit stance. The Ministry of Industry and Information Technology warned last week that improperly configured OpenClaw deployments could serve as gateways for cyberattacks and data breaches, calling for stringent authentication and access controls.
Microsoft’s AI safety team has also publicly questioned whether the tool is secure enough for enterprise use.
Security experts point to documented vulnerabilities, including the storage of API keys in plain text and susceptibility to “indirect prompt injection” attacks, which could trick AI agents into extracting sensitive financial or personal data.
Wiz, a cybersecurity firm, said a design flaw in a community platform used by OpenClaw-based agents had exposed thousands of users’ personal data.
Yet even as companies move to block it, enthusiasm among individual users is growing.
Developers and tech enthusiasts are setting up isolated computers to run OpenClaw outside corporate networks, reviving interest in compact devices such as the Mac Mini. An online Korean OpenClaw community on X has attracted more than 1,700 members who exchange usage tips and security patches.
Users employ the AI agents for tasks ranging from compiling morning briefings — weather, traffic conditions and breaking news — to booking transportation, tracking investments and monitoring local business rankings for marketing insights. Some are experimenting with lower-cost AI models to manage API expenses, which can climb to tens of thousands of won per day.
There are also efforts to localize the tool, integrating it with Korean websites and messaging platforms such as KakaoTalk instead of global services like Telegram or Slack.
The broader shift toward enterprise AI agents accelerated recently with the release of Anthropic’s Claude Cowork, a specialized workplace AI tool that has drawn corporate interest. Analysts say the appeal of automation is undeniable.
“AI agents and automation tools are becoming increasingly practical and accessible for enterprise adoption,” said Alex Michaels, an analyst at Gartner. “Cybersecurity leaders must identify both sanctioned and unsanctioned AI agents, apply strong controls and develop incident response playbooks to address potential risks.”
As businesses weigh efficiency gains against mounting security threats, the rise of autonomous AI agents presents a paradox: powerful enough to transform productivity, yet fragile enough to expose the very systems they aim to optimize.








