"When was the law on the seizure of Russian
Posted: Mon Jan 20, 2025 10:54 am
Deputy Chairman of the Russian Security Council Dmitry Medvedev published a post on his Telegram channel about how YandexGPT, built into the smart speaker with the "Alice" voice assistant, cannot answer some simple questions. The politician noted that "Alice" ignores pro-Russian political questions but quickly answers all others.
"When was the law on the seizure of Russian assets adopted in the US? The answer: "I'm still learning and don't want to look stupid." But if you ask the speaker a question about any Russian law, it answers brilliantly," Dmitry Medvedev cited one of the requests as an example in the publication.
The politician also touched on the honesty and bias of artificial intelligence (AI), using YandexGPT as an example: "On the one hand, this greatly undermines trust in Yandex and its products. On the other hand, it provides grounds not only to recognize Yandex's services as deficient, but even to recognize its current managers as foreign agents."
Sergey Kosetskiy, Commercial Director of the system integrator X-Com, reflected on the reasons for YandexGPT's behavior: "First of all, it is necessary to understand that artificial intelligence, at this stage of its development, can be called "artificial" only conditionally. It works and develops on the initial data entered by humans, and the results of its work depend primarily on the completeness and quality of that data. Since neither the data sets nor the technology itself are yet perfect, cases of AI services working incorrectly are not uncommon. As for the answers to the "slippery" questions mentioned by Dmitry Medvedev, they are most likely caused by restrictions deliberately introduced into the YandexGPT algorithms. I will not presume to judge their causes, but I do not see how they determine the "citizenship" of the company itself."
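Kosetskiy's remark about "restrictions deliberately introduced into the algorithms" describes, in general terms, a guardrail placed in front of a generative model. The sketch below is purely illustrative and rests on assumptions: the topic list, the fallback phrase, and the functions generate_reply and assistant_answer are invented for the example and do not describe how YandexGPT is actually built.

```python
# Illustrative sketch: a simple pre-filter that intercepts sensitive
# topics before the underlying model is asked to answer.
# All names and the topic list are hypothetical placeholders.

RESTRICTED_TOPICS = {"sanctions", "seizure of assets"}  # assumed example topics
FALLBACK_REPLY = "I'm still learning and don't want to look stupid."

def generate_reply(prompt: str) -> str:
    # Stand-in for a call to the underlying language model.
    return f"Model answer to: {prompt}"

def assistant_answer(prompt: str) -> str:
    """Return a refusal for restricted topics, otherwise the model's answer."""
    lowered = prompt.lower()
    if any(topic in lowered for topic in RESTRICTED_TOPICS):
        return FALLBACK_REPLY
    return generate_reply(prompt)

if __name__ == "__main__":
    print(assistant_answer("When was the law on the seizure of Russian assets adopted in the US?"))
    print(assistant_answer("What does the Russian law on personal data say?"))
```

The only point of the sketch is that such a filter sits apart from the model itself, which is why the quality of the underlying model and the presence of a refusal on particular topics can be entirely independent of each other.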
Boris Edidin, a lecturer at the Moscow Digital School educational platform, told a ComNews correspondent that current legislation contains no requirements for the operation of voice assistants that use AI technologies: "Despite the absence of such requirements, it should be borne in mind that formally any program has an owner and a developer, who are responsible for the security of the software they develop or distribute and for its compliance with the law. In this context, both legal and ethical requirements for service developers come to the fore. Therefore, it is necessary to develop professional and ethical standards, as well as to implement special educational programs for creators of AI-based services."
Senior partner of the Pen & Paper law firm Konstantin Dobrynin believes that there is no point in regulating the "educational process" of neural networks: "The very attempt to regulate public relations in the field of AI seems dangerous and pointless to me. Firstly, human intelligence will be playing catch-up with artificial intelligence, and second place at that. This competition will require continuous legislative renewal, which, based on our legislative process, is doomed to failure. Secondly, in an attempt to legislatively prohibit or excessively restrict something, we risk simply slowing down the development of socially useful AI technologies for some time, but we will not be able to stop it."
Kirill Lyakhmanov, chief legal adviser of the intellectual property practice at the EBR law firm, is confident that regulation cannot be introduced in this area: "A generative neural network is an independent agent whose responses, to a certain extent, do not depend on a person. Since a person cannot control the response of a neural network, and the network itself does not have legal capacity, any lawyer understands that it will not be possible to hold a neural network liable."
Under the publication in his Telegram channel, Dmitry Medvedev ran a poll among subscribers on the honesty of AI in general, again using YandexGPT as an example. Of the almost 300 thousand respondents, 69% believe that an AI's answer to any user question should be direct and accurate, 19% answered that it is easier for them to type a query into a search engine or look it up in a book, and only 6% accept that an AI may avoid answering a question so as not to offend any users.