Endor Labs Exposes AI Transparency Issues and the Rise of ‘Open-Washing’

The debate over AI transparency is heating up, with experts warning about the growing issue of ‘open-washing.’ Endor Labs, a company specializing in open-source security, has weighed in on the challenges surrounding AI openness and the need for clear standards.

The Push for AI Transparency

As AI development accelerates, concerns around transparency and accountability have become more pressing. Many companies claim to offer open-source models, but experts argue that true openness requires more than just releasing parts of the code.

Andrew Stiefel, Senior Product Marketing Manager at Endor Labs, highlighted the importance of applying cybersecurity principles to AI systems. He pointed to the U.S. government's 2021 Executive Order 14028 on cybersecurity, which requires a Software Bill of Materials (SBOM) detailing the components, including open-source ones, in software sold to federal agencies. According to Stiefel, a similar approach should be taken with AI models to ensure security and transparency.
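To make the parallel concrete, here is a minimal, hypothetical sketch of what an "AI bill of materials" might record for a single model. The field names are illustrative assumptions rather than an established standard such as SPDX or CycloneDX, but they mirror what an SBOM captures for software:

```python
import json

# A hypothetical "AI bill of materials" for one model, mirroring the role
# an SBOM plays for software. Field names are illustrative, not a standard.
ai_bom = {
    "model_name": "example-org/demo-model",   # hypothetical model
    "version": "1.2.0",
    "base_model": "example-org/base-model",   # lineage: what it was fine-tuned from
    "weights_sha256": "sha256-placeholder",   # integrity check for released weights
    "training_data": [
        {"name": "public-corpus-v2", "license": "CC-BY-4.0"},
    ],
    "software_dependencies": [
        {"name": "torch", "version": "2.2.1"},
    ],
    "license": "apache-2.0",
}

print(json.dumps(ai_bom, indent=2))
```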

What Does It Mean for AI to Be ‘Open’?

Julien Sobrier, Senior Product Manager at Endor Labs, discussed the complexities of defining open AI. “An AI model consists of multiple components—training data, weights, and the software that refines the model. To truly call a model open-source, all these elements should be accessible,” he explained.

However, industry leaders have yet to agree on a universal definition. Companies like OpenAI and Meta are frequently criticized for promoting “open” models while imposing restrictions that limit their use. Sobrier warned against ‘open-washing,’ where companies claim transparency while restricting usage rights.

The Risks of ‘Open-Washing’

One of the biggest concerns in AI development is the growing trend of companies marketing their models as open-source while maintaining key restrictions. Sobrier cited instances where cloud providers offer paid versions of open-source software without contributing back to the community. He believes AI companies may follow a similar path, offering partial transparency while retaining competitive advantages.

DeepSeek’s Attempt at AI Transparency

DeepSeek, a rising AI player, has attempted to address transparency concerns by open-sourcing portions of its models. The company has released model weights and code to improve security and visibility. According to Stiefel, this move allows researchers to audit DeepSeek’s AI systems and better understand their infrastructure.

The Growing Popularity of Open-Source AI

The demand for open-source AI is increasing, with an IDC report revealing that 60% of organizations prefer open-source models for their generative AI projects. Endor Labs found that many companies use between seven and twenty-one different AI models per application, highlighting the need for security measures.

“More than 3,500 models have been trained or refined based on DeepSeek R1,” said Stiefel. “This demonstrates the rapid growth of the open-source AI community and the necessity for security teams to monitor model lineage and potential risks.”

How to Manage AI Model Risks

To ensure responsible AI adoption, Endor Labs recommends a three-step approach (a code sketch follows the list):

  1. Discovery: Identify the AI models currently in use within an organization.
  2. Evaluation: Assess potential security and operational risks associated with these models.
  3. Response: Implement safeguards to ensure ethical and secure AI deployment.
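A minimal Python sketch of that discover-evaluate-respond loop might look like the following. The risk signals, approved sources, and block/allow policy are illustrative assumptions, not Endor Labs' actual tooling:

```python
from dataclasses import dataclass

@dataclass
class ModelInUse:
    name: str
    source: str          # e.g. "huggingface", "internal", "vendor-api"
    license: str | None  # None if the license could not be determined

def discover_models(manifest: list[dict]) -> list[ModelInUse]:
    """Step 1: inventory models from whatever metadata the org can collect."""
    return [ModelInUse(m["name"], m["source"], m.get("license")) for m in manifest]

def evaluate(model: ModelInUse) -> list[str]:
    """Step 2: flag simple security and operational risks."""
    findings = []
    if model.license is None:
        findings.append("unknown license")
    if model.source not in {"internal", "approved-registry"}:
        findings.append("unvetted source")
    return findings

def respond(model: ModelInUse, findings: list[str]) -> str:
    """Step 3: apply a safeguard; here, a simple block/allow policy."""
    return "block" if findings else "allow"

if __name__ == "__main__":
    inventory = discover_models([
        {"name": "example/chat-model", "source": "huggingface"},  # hypothetical
    ])
    for m in inventory:
        print(m.name, "->", respond(m, evaluate(m)))
```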

Sobrier emphasized the importance of treating AI models as critical dependencies, similar to open-source software libraries. Organizations must also ensure that training datasets are free of sensitive or manipulated data.
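In practice, treating a model as a dependency can start with pinning it to an exact revision, just as a lockfile pins a library version. The sketch below uses the Hugging Face Hub's snapshot_download for illustration; the repository ID and commit hash are placeholders:

```python
from huggingface_hub import snapshot_download

MODEL_REPO = "example-org/demo-model"  # hypothetical repository
PINNED_REVISION = "abc123def456"       # placeholder commit hash

# Pinning to an exact revision plays the same role as a locked library
# version: the artifact you audited is the artifact you deploy.
local_path = snapshot_download(repo_id=MODEL_REPO, revision=PINNED_REVISION)
print(f"Model snapshot available at {local_path}")
```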

Building a Framework for AI Transparency

For AI to advance responsibly, companies must establish standardized transparency measures. These include:

  • Ensuring transparency in AI-as-a-service models.
  • Monitoring third-party AI integrations in enterprise applications.
  • Encouraging truly open-source models without restrictive licensing.

As AI becomes increasingly embedded in critical systems, the need for transparency and security is more urgent than ever. Experts agree that the industry must adopt universal best practices to ensure ethical AI development.

For more insights into the evolving AI landscape, check out how DeepSeek is taking bold steps to open-source AGI research.
