A new trend is emerging in psychiatric practice: some patients are developing false, and at times dangerous, beliefs after sustained interaction with chatbots. This has been reported by Wired, which interviewed more than ten psychiatrists.

Some patients tried to convince their doctors that the bot was sentient, while others brought in thousands of chatbot messages to support their unhealthy beliefs. In some cases, communication with artificial intelligence has led to serious consequences, including job loss, broken relationships, hospitalization, imprisonment, and even murder. Nevertheless, psychiatrists are in no hurry to recognize "AI psychosis" as an official diagnosis. According to experts, the term can be misleading and oversimplifies a complex picture of mental health.
Psychosis is considered not a separate disease but a cluster of symptoms, such as hallucinations and disordered thinking. It can be associated with schizophrenia or bipolar disorder, as well as with severe sleep deprivation or stress. In cases involving communication with AI, what is more often observed are delusional ideas: persistent beliefs that do not correspond to reality and are difficult to dislodge.
Stanford psychiatrist Nina Vasan considers the creation of a new diagnosis premature. She warns against haste in classification, which can lead to errors in understanding what are often ordinary difficulties. There is also a fear that society will begin to treat the technology as the main cause of these problems, although a direct causal link has not been established.
Experts also expressed concerns about possible stigmatization. People experiencing these symptoms may feel ashamed and avoid seeking help, worsening their condition. For now, research on "AI psychosis" is at an early stage, and mental health professionals are struggling to understand the scale and causes of this phenomenon.