Kenya, 29 January 2026 - Researchers are warning that a growing number of open-source artificial intelligence models are being deployed outside the safety controls of major technology platforms, creating what they describe as a largely unseen layer of potential criminal misuse.
The findings, shared by cybersecurity firms SentinelOne and Censys, are based on an analysis of publicly accessible deployments of open-source large language models conducted over a 293-day period. The researchers said many of the systems they observed were running on internet-exposed computers with limited or no safeguards, making them vulnerable to abuse by hackers and other malicious actors.
According to the researchers, self-hosted AI models can be repurposed to generate phishing content, automate spam operations, support disinformation campaigns, and assist in other illicit activities. Unlike commercial AI platforms, which operate under centralized rules and monitoring, open-source models allow operators to modify system instructions and remove guardrails entirely.
The analysis found that while thousands of open-source language model variants exist, a large share of internet-accessible deployments were based on well-known models such as Meta’s Llama and Google DeepMind’s Gemma. In hundreds of cases, researchers identified configurations where safety controls had been explicitly disabled.
Researchers were able to view system prompts, the instructions that shape a model's behavior, in roughly a quarter of the deployments they examined. Of those, about 7.5% were assessed as potentially enabling harmful activity, including scams, harassment, and data theft.
Geographically, about 30% of the exposed systems were operating from China, with roughly 20% located in the United States, underscoring the global nature of the issue.
Juan Andres Guerrero-Saade, executive director for intelligence and security research at SentinelOne, said discussions around AI security often overlook the scale of open-source deployments. He compared the situation to an iceberg, where visible, regulated platforms account for only a small portion of real-world AI use.
AI governance experts said the findings highlight the limits of platform-based safety measures. Rachel Adams, chief executive of the Global Center on AI Governance, said responsibility for managing risks becomes shared once models are released, including obligations on developers to document foreseeable harms and provide mitigation guidance.
Technology companies including Microsoft have said open-source models play an important role in innovation but acknowledge the need for safeguards to prevent misuse. Other firms referenced in the research did not respond to requests for comment.
The researchers said the results point to a growing challenge for regulators as AI use expands beyond centralized platforms into decentralized, self-hosted environments that are harder to monitor and control.