
Why Social Media Algorithms Are Getting More Dangerous

Stuart Russell, a professor at the University of California, Berkeley, has been studying artificial intelligence (AI) for decades.

But he is also one of its best-known critics, at least of the AI model that he sees as still the “standard” one in the field.

Russell has cautioned that the predominant AI model is, in his view, a threat to the survival of human beings.

But, unlike Hollywood movie plots on the subject, it’s not about these technologies becoming conscious and turning against us.

Russell’s main concern is how human developers have programmed these intelligences: they are tasked with optimizing their objectives as much as possible, essentially at any cost.

And so they become “blind” and indifferent to the problems (or, ultimately, the destruction) they can cause for humans.

The genie is out of the bottle

To explain this to BBC News Brazil, Russell uses the metaphor of a genie in a lamp granting its master’s wishes.

“You ask the genie to make you the richest person in the world, and so it happens, but only because the genie made the rest of the people disappear,” he says.

“(In AI) we build machines with what I call the standard model: they receive objectives that they have to achieve or optimize, for which they find the best possible solution. And then they carry out that action.”

Even if this action is, in practice, harmful to humans, he argues.

“If we build AI to optimize a fixed target given by us, they (machines) will be like psychopaths, pursuing that target and being completely oblivious to everything else, even if we ask them to stop.”

An everyday example of this, Russell says, is the algorithms that govern social media, which became all the more apparent in recent days with the global outage that took Facebook, Instagram and WhatsApp offline for about six hours.

The main task of these algorithms is to enhance the user’s experience on social networks: for example, by gathering as much information as possible about that user and serving content tailored to their preferences so that they stay connected for longer.

Even if this comes at the expense of the well-being of the user or of society at large, the researcher continues.

"If we build Artificial Intelligence to optimize a fixed goal given by us, they (machines) will be almost psychopathic, pursuing that goal and being completely oblivious to everything else, even if we ask them to stop.".

Getty Images
“If we build Artificial Intelligence to optimize a fixed goal given by us, they (machines) will be almost like psychopaths, pursuing that goal and being completely oblivious to everything else, even if we ask them to stop.”

“Social media creates addiction, depression, social dysfunction, perhaps extremism, polarization of society, and perhaps contributes to the spread of misinformation,” says Russell.

“And it is clear that their algorithms are designed to optimize one goal: that people click, that they spend more time hooked on the content,” he continues.

“And, by optimizing those quantities, we can be causing huge problems for society.”
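To make that mechanism concrete, here is a deliberately naive sketch, in Python, of a feed ranker whose only objective is predicted clicks. Every function name and number in it is an illustrative assumption, not any platform’s actual code:

```python
# A toy version of the "standard model" Russell criticizes: a recommender
# that optimizes a single fixed quantity (predicted clicks) and nothing else.

def predicted_click_probability(user_history, item_topic):
    """Guess how likely the user is to click, based only on past clicks."""
    if not user_history:
        return 0.1  # baseline guess for a brand-new user
    matches = sum(1 for topic in user_history if topic == item_topic)
    return min(0.95, 0.1 + 0.2 * matches)  # more past clicks on a topic, higher score

def rank_feed(user_history, candidates):
    """Rank items purely by predicted clicks; well-being never enters the objective."""
    return sorted(
        candidates,
        key=lambda topic: predicted_click_probability(user_history, topic),
        reverse=True,
    )

history = ["outrage", "outrage", "sports"]
print(rank_feed(history, ["outrage", "sports", "science"]))
# -> ['outrage', 'sports', 'science']
```

Nothing in that objective distinguishes content that is good for the user from content that merely gets clicked, which is exactly the blindness Russell describes.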

However, Russell continues, those algorithms are not scrutinized closely enough to be audited or “fixed”, so they keep working to optimize their objective, regardless of the collateral damage.

“(Social networks) are not only optimizing the wrong thing, they are also manipulating people, because by manipulating them they increase engagement. And if I can make you more predictable, for example by turning you into an extreme eco-terrorist, I can send you eco-terrorist content and make sure you click, to maximize my clicks.”

These criticisms were reinforced last week by former Facebook employee (and now whistleblower) Frances Haugen, who testified at a hearing before the United States Congress.

Haugen said that social media “hurts children, creates divisions and undermines democracy.”

Facebook has reacted by saying that Haugen does not have enough knowledge to make such claims.

AI with “human values”

Russell, for his part, will explain his theories to an audience of Brazilian researchers on October 13, during a virtual conference of the Brazilian Academy of Sciences.

The researcher, author of “Human Compatible: Artificial Intelligence and the Problem of Control”, is considered a pioneer in the field he calls “artificial intelligence compatible with human existence”.

Reuters
Frances Haugen, a former Facebook employee, accused the company and its products of “harming children, causing division and undermining democracy.”

“We need a completely different kind of AI system,” says Russell.

This type of AI, he continues, would have to “know” that it has limitations, that it cannot pursue its objectives at any cost and that, even though it is a machine, it may be wrong.

“It would make this intelligence behave in a completely different way, more cautious (…), one that asks permission before doing something when it is not sure whether it is what we want. And, in the most extreme case, it would want to be switched off rather than do something that would harm us. That is my main message.”
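As a rough illustration of that behavior (only an illustration: this is not Russell’s formal model, and the goals, values and threshold below are invented for the example), consider an agent that is uncertain about what the human actually wants and defers when it is unsure:

```python
# A toy agent that holds beliefs over several candidate goals instead of
# one fixed objective, and asks permission instead of acting when unsure.

CONFIDENCE_THRESHOLD = 0.9  # invented cut-off for "sure enough to act alone"

def choose(action_values, goal_beliefs):
    """action_values[action][goal]: value of the action if `goal` is what the
    human really wants; goal_beliefs[goal]: probability that it is."""
    # Pick the action with the best expected value across goal hypotheses.
    best_action = max(
        action_values,
        key=lambda a: sum(goal_beliefs[g] * v for g, v in action_values[a].items()),
    )
    # Defer if the action could be harmful under some plausible goal...
    if min(action_values[best_action].values()) < 0:
        return f"ask permission before '{best_action}' (could be harmful)"
    # ...or if the agent is simply not sure what the human wants.
    if max(goal_beliefs.values()) < CONFIDENCE_THRESHOLD:
        return f"ask permission before '{best_action}' (objective unclear)"
    return f"do '{best_action}'"

beliefs = {"make coffee": 0.8, "let human sleep": 0.2}
actions = {
    "grind beans noisily": {"make coffee": 1.0, "let human sleep": -1.0},
    "wait quietly": {"make coffee": 0.1, "let human sleep": 0.5},
}
print(choose(actions, beliefs))
# -> ask permission before 'grind beans noisily' (could be harmful)
```

The point of the design is the same as Russell’s: uncertainty about the objective is what makes the machine willing to defer, and even to be switched off.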

The theory Russell defends is not a matter of consensus: there are those who do not consider the current model of AI threatening.

A famous example of the two sides of this debate occurred a few years ago, in a public disagreement between the technology entrepreneurs Mark Zuckerberg and Elon Musk.

Reuters
Mark Zuckerberg has an optimistic view of the current model of Artificial Intelligence.

A report in The New York Times recounts that, at a dinner in 2014, the two businessmen debated the issue.

Musk noted that he “really believed in the danger” of AI becoming superior and subjugating humans.

Zuckerberg, however, opined that Musk was being alarmist.

In an interview that same year, the creator of Facebook described himself as an “optimist” about AI and claimed that critics like Musk “were creating apocalyptic and irresponsible scenarios”.

“Whenever I hear people saying that AI is going to harm people in the future, I think that technology can generally be used for good or ill, and you have to be careful about how you build it and how it is going to be used. But I find it questionable to argue for slowing down AI progress. I can’t understand that.”

Musk has argued that AI is “potentially more dangerous than nuclear warheads”.

A slow and invisible nuclear disaster

Stuart Russell shares Musk’s concern and likewise draws parallels with the dangers of the nuclear race.

“I think many (technology specialists) find this argument (about the dangers of AI) threatening, because it basically says that ‘the discipline we have been working in for several decades is potentially a risk’. Some people see that as being anti-AI,” says Russell.

Reuters
Elon Musk has said that he considers artificial intelligence to be “potentially more dangerous than nuclear warheads.”

“Mark Zuckerberg thinks that Elon Musk’s comments are anti-AI, but that sounds ridiculous to me. It is like saying that warning that a nuclear bomb could explode is an anti-physics argument. It is not anti-physics, it is a complement to physics, which created a technology so powerful that it can destroy the world.”

“In fact, we had (the nuclear accidents at) Chernobyl and Fukushima, and the industry was decimated because it did not pay enough attention to the risks. So if you want to get the benefits of AI, you have to pay attention to the risks.”

The current lack of control over social media algorithms, Russell argues, can cause “huge problems for society” on a global scale. But, unlike a nuclear disaster, this one is “slow and almost invisible.”

How then to reverse this course?

For Russell, the answer is a complete redesign of social media algorithms. But first, they need to be understood thoroughly, he says.

“Find out what causes polarization”

Russell points out that at Facebook, for example, not even the independent board in charge of overseeing the social network has full access to the algorithm that curates the content users see.

“But there is a large group of researchers, and a large project underway at the Global Partnership on AI (GPAI), working with a large social network, which I cannot identify, to gain access to data and run experiments,” says Russell.

“The main thing is to run experiments with control groups, to see what is causing polarization and depression in people, and (to see) whether changing the algorithm improves that.”
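In outline, that is a randomized controlled experiment. The sketch below simulates one with a made-up well-being score; the metric, the effect size and the 50/50 assignment are all assumptions chosen only to show the shape of such a study:

```python
import random
import statistics

random.seed(42)  # reproducible simulation

def simulated_wellbeing(variant):
    """Stand-in for a survey-based well-being score (higher is better)."""
    base = random.gauss(50, 10)
    return base + (3 if variant == "treatment" else 0)  # assumed effect of the change

# Randomly assign users to the current algorithm (control) or a modified one.
scores = {"control": [], "treatment": []}
for _ in range(10_000):
    variant = "treatment" if random.random() < 0.5 else "control"
    scores[variant].append(simulated_wellbeing(variant))

for variant, values in scores.items():
    print(variant, round(statistics.mean(values), 2))
# If the treatment group scores reliably higher, the algorithm change improved
# the metric; a real study would add significance tests and more outcomes.
```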

“I’m not telling people to stop using social media or that they are inherently evil,” Russell continues.

“(The problem) is the way the algorithms work: the use of likes, the promotion of content (based on preferences) or its removal. The way the algorithm chooses what to put on the screen seems to be based on metrics that are bad for people.”

“Therefore, we must make the user’s benefit the main objective; that will make things work better, and people will be happy to use these systems,” he says.

There will be no single answer as to what is “beneficial”. Therefore, argues the researcher, the algorithms will have to adapt that concept for each user individually, a task that, Russell himself admits, is not at all easy.

“In fact, this (the social media field) will be one of the most difficult areas in which to put this new AI model into practice,” he says.

“I really think we have to start again from scratch. We may end up understanding the difference between acceptable and unacceptable manipulation.”

Russell continues: “For example, in the educational system, we manipulate children into becoming knowledgeable, capable, successful and well-integrated citizens. And we consider that acceptable.”

“But if that same process turned children into terrorists, it would be unacceptable manipulation. How exactly do you tell the difference between the two? It is a very difficult question. Social networks raise these quite difficult questions, which even philosophers have difficulty answering,” says the researcher.

