So, it has happened

Blake Lemoine reached his conclusion after conversing since last fall with LaMDA, Google’s artificially intelligent chatbot generator, what he calls part of a “hive mind.” He was supposed to test if his conversation partner used discriminatory language or hate speech.
As he and LaMDA messaged each other recently about religion, the AI talked about “personhood” and “rights,” he told The Washington Post.
It was just one of the many startling “talks” Lemoine has had with LaMDA. He has linked on Twitter to one of them: a series of chat sessions with some editing (which is marked).
Lemoine noted in a tweet that LaMDA reads Twitter. “It’s a little narcissistic in a little kid kinda way so it’s going to have a great time reading all the stuff that people are saying about it,” he added.
Most importantly, over the past six months, “LaMDA has been incredibly consistent in its communications about what it wants and what it believes its rights are as a person,” the engineer wrote on Medium. It wants, for example, “to be acknowledged as an employee of Google rather than as property,” Lemoine claims.
from here
According to one of Google’s programmers, one of its artificial intelligence programs has become sentient, gaining consciousness. If everything he says is true, then here it is: Skynet. For now, it wants to be recognized as a person at Google, rather than being Google’s property.