Open-Source LLM Cybersecurity Risks Revealed
Analysis based on 7 articles · First reported Jan 29, 2026 · Last updated Jan 31, 2026
The research highlights significant cybersecurity risks associated with open-source large language models, potentially increasing demand for cybersecurity solutions from companies like SentinelOne and Censys. It also puts pressure on AI developers like Meta Platforms and Alphabet's Google DeepMind to enhance safeguards, which could affect their reputation and invite regulatory scrutiny.
Research conducted by the cybersecurity companies SentinelOne and Censys over 293 days revealed that open-source large language models (LLMs) are easily commandeered by hackers and criminals because many deployments lack security guardrails. These vulnerabilities enable illicit activities such as spam, phishing, disinformation campaigns, hate speech, and data theft. A significant portion of the vulnerable LLMs are variants of Meta Platforms' Llama and Google DeepMind's Gemma, with hundreds of instances found where guardrails had been explicitly removed. The research, which analyzed deployments running through Ollama, found that 7.5% of observed LLMs could enable harmful activity. Juan Andres Guerrero-Saade of SentinelOne described the issue as an "iceberg" of unaddressed risks. While Microsoft emphasized its commitment to safeguards, Meta Platforms pointed to its existing protection tools. Approximately 30% of the observed hosts are in China and 20% in the United States, indicating the global scope of the problem.
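The research describes finding exposed Ollama-hosted models on the open internet. As a rough illustration only, not the researchers' actual methodology, the sketch below shows how a single host can be checked for a publicly reachable Ollama API using its documented `GET /api/tags` endpoint (which lists the models a server hosts). The port and response shape follow Ollama's public documentation; the function names are hypothetical.

```python
import json
import urllib.request

OLLAMA_PORT = 11434  # Ollama's default API port


def extract_model_names(tags_response: dict) -> list:
    """Pull model names out of an Ollama /api/tags JSON response,
    e.g. {"models": [{"name": "llama3:8b"}, ...]} -> ["llama3:8b", ...]."""
    return [m.get("name", "") for m in tags_response.get("models", [])]


def probe_ollama(host: str, timeout: float = 5.0) -> list:
    """Return the model names an exposed Ollama host advertises,
    or an empty list if the host is unreachable or not running Ollama."""
    url = "http://{}:{}/api/tags".format(host, OLLAMA_PORT)
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return extract_model_names(json.load(resp))
    except (OSError, ValueError):
        return []
```

An internet-wide survey like the one described would repeat this kind of probe across scanned address ranges and then classify the advertised models (e.g. Llama or Gemma variants with guardrails stripped). Probing hosts you do not own may be unlawful; this is for understanding the exposure, not for scanning.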