Shadow AI: the inevitable failure of cybersecurity policies?

Published on 13 February 2025

Businesses increasingly want to rely on artificial intelligence (AI) to optimize their operations and remain competitive. However, a new threat is emerging: Shadow AI. Do organizations have to choose between business opportunities and a lack of data privacy? What measures should they deploy?

“DeepSeek: the AI that is revolutionizing the sector”, “How did DeepSeek succeed in developing a low-cost and efficient AI model?”, “DeepSeek, a phantom threat to Wall Street?”, etc. ChatGPT's Chinese competitor hit the headlines at the end of January.

Since then, the hype has subsided somewhat, as numerous experts have analyzed and tested this solution developed by DeepSeek, a Chinese startup founded in 2023 and owned by High-Flyer, a hedge fund.

DeepSeek indeed stores information on Chinese servers, and its web version includes hidden code capable of sending user data to companies close to the Chinese government. This is not really a surprise: you only need to read the terms of service to see all the data DeepSeek uses; it is even more explicit about it than OpenAI. In the end, this data collection is nothing new, since all applications do it. That did not prevent some countries (Taiwan, Italy, Australia...) from banning it.

The DeepSeek case is the tip of a new iceberg drifting ever closer to the perimeter of many companies: Shadow AI. Like Shadow IT, it refers to the use of AI technologies without the approval or adequate supervision of IT or security departments. This can include data analysis tools, chatbots, or decision-making algorithms built into third-party applications. Unlike approved solutions, these tools are not subject to the same security and compliance controls, which can lead to critical vulnerabilities.

While adopting AI can offer significant benefits, Shadow AI poses serious risks to data privacy. And not all businesses are prepared for it.

“Almost half of our major customers are mature and have an AI policy, because they consider data breaches to be the main risk. They list what is and is not allowed in the context of the company's business,” notes Vincent XXX, cybersecurity expert at Cisco.

Julien Dreano, CISO of the Framatome group, adds a nuance:

“We need to distinguish between internal and external Shadow AI. On the one hand, there are external services used by the company, such as those of OpenAI, which must be monitored. On the other hand, there are models brought into the company, which introduce new risks. For external services, we already had rules; for example, translating documents with Google Translate is forbidden, and certain keywords are blocked in search engines. The real problem is how to guess the intention of the human being talking to the AI.”
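Rules like these (banned translation services, forbidden keywords in search queries) are typically enforced at the web gateway. As a minimal sketch, here is a hypothetical filter that flags outbound requests targeting external AI services or carrying sensitive keywords; the domain and keyword lists are invented for the example, not an actual policy.

```python
# Sketch of a gateway-style filter for outbound requests.
# BLOCKED_AI_DOMAINS and SENSITIVE_KEYWORDS are illustrative assumptions.
from urllib.parse import urlparse

BLOCKED_AI_DOMAINS = {"chat.openai.com", "translate.google.com", "chat.deepseek.com"}
SENSITIVE_KEYWORDS = {"confidential", "contract", "source code"}

def check_request(url: str, query_text: str = "") -> list:
    """Return the list of policy violations for one outbound request."""
    violations = []
    host = urlparse(url).hostname or ""
    if host in BLOCKED_AI_DOMAINS:
        violations.append(f"blocked external AI service: {host}")
    lowered = query_text.lower()
    for keyword in SENSITIVE_KEYWORDS:
        if keyword in lowered:
            violations.append(f"sensitive keyword in query: {keyword!r}")
    return violations
```

For instance, a request to `translate.google.com` with the word “confidential” in the query would return two violations, while a request to an internal site with harmless text returns an empty list. A real gateway would of course also handle subdomains, TLS inspection, and false positives.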

Quite rightly, Jérôme Delaville, technical director of customer services at Olfeo, reminds us not to forget

“the user, whom we will have to support. A recent study revealed that 49% of users defy bans!”

But what is the current situation regarding the use of AI in business? Not yet sufficiently aware of the risks, employees download and use GenAI solutions without the approval of their management. Traditional corporate bans on AI tools are proving difficult to enforce. As a result, sensitive business information can be shared inadvertently during informal conversations with these tools.

The main risks of AI in business

  • Critical data leaks

One of the main threats is the leak of sensitive data. For example, an employee using an unapproved data analysis tool could unintentionally expose customer information to malicious third parties.

For Michel Truong, CIO of FED Group, “you need to understand the needs of business teams. If they do not find their answer with the AI that has been set up, they will look for it in another solution. Hence the need for global visibility.”

  • Regulatory non-compliance

Using Shadow AI can also lead to violations of data protection regulations, such as the GDPR in Europe. Businesses need to ensure that all personal data is handled in accordance with regulations, which becomes difficult when unregulated tools are used.

  • Security vulnerabilities

Unapproved AI solutions may contain flaws that can be exploited by cybercriminals. They can serve as entry points for larger attacks, compromising the entire corporate network.

  • Loss of control over data

Using Shadow AI can result in a loss of control over business data. Unauthorized tools can store data in unknown locations or share data with third parties without company consent.

There is also “the loss of control over the quality of what we do. If no one double-checks the AI's output and we take it at face value, we will soon have no way to guarantee our quality and our content, because it will change over time. AIs are very good. You have to go for it, but you have to keep control and provide support,” insists Julien Dreano, CISO of the Framatome group.

DeepSeek won't be the last Shadow AI application to be wary of. So what steps should businesses take to prevent these applications from siphoning off their trade secrets?

Raise awareness among employees

The first line of defense against Shadow AI is employee awareness. Businesses should train their staff on the risks associated with the unauthorized use of AI solutions and encourage them to use only approved tools.

“It is necessary to define the uses and to know what really brings value. For example, I have employees who write thank-you emails using AI; does that create value, or does it just distort the content? This is why we need to raise awareness with concrete examples from everyday life. This is what we do at FED, and I am quite happy because employees now question everything,” notes Michel Truong.

Define strict governance policies

Establishing clear policies on the use of AI within the organization is crucial. These policies should define approval processes for new AI tools, along with security and compliance requirements. “At FED, the AI is hosted by us and we trained it on our own data. But there are a few limitations. The first is the disappointment effect: we cannot tell employees ‘it's just for this function’ or ‘it's just for this activity’. They want to be able to do everything with AI, for professional use but also for personal use, such as their children's homework. The second limitation is that AI is expensive,” acknowledges Michel Truong.
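An approval process like the one described usually rests on a register of sanctioned tools. The sketch below, with invented tool names and a simplified three-level data classification, shows how a policy check against such a register might look; any unregistered tool is, by definition, Shadow AI.

```python
# Sketch of an approved-AI-tools register, as a governance policy might define it.
# Tool names, owners, and the classification scheme are illustrative assumptions.
from dataclasses import dataclass

DATA_CLASS_ORDER = ["public", "internal", "confidential"]

@dataclass(frozen=True)
class ApprovedTool:
    name: str
    owner: str            # team accountable for the tool
    max_data_class: str   # highest data classification the tool may process

REGISTER = {
    "internal-llm": ApprovedTool("internal-llm", "IT", "confidential"),
    "public-chatbot": ApprovedTool("public-chatbot", "Marketing", "public"),
}

def is_use_allowed(tool_name: str, data_class: str) -> bool:
    """A tool may only process data up to its approved classification level."""
    tool = REGISTER.get(tool_name)
    if tool is None:  # unregistered tool = Shadow AI, always denied
        return False
    return DATA_CLASS_ORDER.index(data_class) <= DATA_CLASS_ORDER.index(tool.max_data_class)
```

So the self-hosted tool may handle confidential data, the public chatbot only public data, and anything outside the register is refused outright. In practice the register would live in a GRC or asset-management system rather than in code.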

Monitor and audit your information system

Businesses need to set up monitoring mechanisms to detect the use of Shadow AI. Regular audits can help identify unauthorized tools and assess associated risks.

In conclusion, Shadow AI represents a serious threat to corporate data privacy. DeepSeek retrieves user data, but let's not be naive: all artificial intelligence companies collect data in one way or another.

However, with increased awareness, strong governance policies, and integrated security solutions, businesses can mitigate these risks. It is essential to involve business teams in the Shadow AI issue in order to understand their needs precisely. By taking a proactive approach, IT security managers and business leaders can take advantage of the benefits of AI while protecting their data.