More than half of Russian companies are concerned about data leaks caused by the use of artificial intelligence (AI) applications. Automation increases the risk of compromising both personal information and trade secrets. Experts point out that without data encryption, a leak may be irreversible.

According to a study by MTS Web Services, 59% of companies consider the threat of personal data leakage the most serious risk. Second comes the risk of leaking trade secrets, cited by 56% of businesses. When scaling AI solutions, IT departments put security concerns above the anticipated benefits of automating routine tasks and making business processes more efficient with AI.
At the same time, according to McKinsey, the global adoption rate of AI systems stands at 88%. Generative neural networks are actively used in marketing as well as in the development of IT solutions.
Sergey Yudin, head of technology at Content AI, said the high risk of data leakage when using neural networks stems from the nature of the technology itself. Whereas classical software merely processes data without storing or analyzing it beyond a specific task, AI systems are built on large language models that literally “absorb” information and use it internally to understand context, and, in the case of public services, for further training.
“The problem is that once information enters an open AI system, it effectively cannot be deleted. It can end up in logs, be used in training samples, or be accidentally disclosed to third parties. As a result, the leak becomes irreparable,” Yudin explains.
Given this, an important step in protecting data is creating an internal policy on AI use. Employees need to be trained on which data may be submitted to a neural network and which is strictly off-limits.
For example, public AI services often make no secret of the fact that they save conversation history and use it to improve their models. Even if the data is aggregated anonymously, the risk of unauthorized reproduction or access remains. Therefore, when introducing AI systems into their operations, companies should create an overall strategy for interacting with the technology safely.
“One recommended technique for protecting data is the use of 'placeholders', where fictitious values stand in for real names, amounts, bank details, or company names. This preserves the logic and structure of the task and lets you get relevant answers from the AI without the risk of revealing confidential information,” says Yudin.
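To illustrate the placeholder technique, here is a minimal sketch in Python. It is not taken from Yudin or Content AI: the function names, the list of known terms, and the regular-expression patterns are illustrative assumptions, and a production tool would need far more robust detection of sensitive data.

    import re

    def mask_sensitive(text, known_terms):
        """Replace confidential fragments with neutral placeholders.

        known_terms: strings (employee or company names) to hide.
        Returns the masked text and a mapping used to restore the answer.
        The patterns below are rough heuristics, not a full PII detector.
        """
        mapping = {}
        counter = 0

        def substitute(fragment, kind):
            nonlocal counter
            counter += 1
            placeholder = f"<{kind}_{counter}>"
            mapping[placeholder] = fragment
            return placeholder

        # Hide explicitly listed terms (names, company names).
        for term in known_terms:
            if term in text:
                text = text.replace(term, substitute(term, "NAME"))

        # Hide bank-account-like digit runs (illustrative heuristic).
        text = re.sub(r"\b\d{16,20}\b",
                      lambda m: substitute(m.group(), "ACCOUNT"), text)

        # Hide monetary amounts such as "1 250 000 RUB" (illustrative pattern).
        text = re.sub(r"\b\d[\d\s,.]*\s?(RUB|USD|EUR)\b",
                      lambda m: substitute(m.group(), "AMOUNT"), text)

        return text, mapping

    def restore(text, mapping):
        """Put the original values back into the model's answer."""
        for placeholder, original in mapping.items():
            text = text.replace(placeholder, original)
        return text

    if __name__ == "__main__":
        prompt = ("Draft a payment reminder to Acme LLC: invoice "
                  "1234567890123456, amount 1 250 000 RUB.")
        masked, mapping = mask_sensitive(prompt, known_terms=["Acme LLC"])
        print(masked)  # only this masked text would go to a public AI service
        # answer = call_model(masked)  # hypothetical call to an external model
        # print(restore(answer, mapping))

The key design point is that the mapping from placeholders to real values never leaves the company's side: only the masked prompt is sent to the external service, and the original values are substituted back locally once the answer arrives.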
Essential security measures also include adjusting the privacy settings of the services used: disabling the option that allows conversations to be used for model training, regularly deleting chat history, and keeping separate accounts for personal and work tasks.
Solar Group's press service said that large volumes of data from the enterprise systems of government and commercial organizations are being uploaded to neural networks. These can be files containing both general information about a company and sensitive data: for example, equipment drawings with detailed calculations or proprietary financial indicators. To implement AI in their operations safely, companies can draw on a variety of solutions.
“Going forward, AI technology will be used in information security products to automate processes and repel cyberattacks. Attackers are already using various AI tools to break into infrastructure, so it turns out AI will be fighting against itself,” the company concluded.