AI will delete terrible videos while creating similar ones

Oliver Thansan
07 October 2023 Saturday 10:21
Is there a solution? Can artificial intelligence (AI) be trained with such precision that it detects atrocities posted on social media and eliminates them? Will it spare content moderators from spending long days labeling trash? After revealing that 20% of the Barcelona staff of Telus – the company that cleans Facebook and Instagram of all kinds of savagery – is on leave, largely due to trauma caused by the content, La Vanguardia has gathered the opinions of various experts about possible solutions, and the conclusion is… there is no solution. Because the controversial AI can indeed help detect and eliminate terrible videos, but it can also be an infinite generator of new ones.

Ramon López de Mántaras, emeritus researcher at the Spanish National Research Council (CSIC) and an AI expert, knows first-hand about Facebook's concern with content control: since 2018 he has been part of a committee of experts that analyzed the use of automation for this purpose. He signed a strict confidentiality agreement and cannot discuss its content or conclusions, but he tells La Vanguardia that “automated mechanisms fail because they do not understand the nuances… Humans have something they lack: common sense.”

“My assumption is that Meta and other platforms use the work of moderators and annotations to train their systems,” says Eugenia Siapera, director of the digital policy center at University College Dublin.

One of the Telus content moderators in Barcelona interviewed by La Vanguardia corroborated this idea; he himself was part of the division of the company that trains the automation.

“It will not be sustainable for content control to be done by humans, for a purely economic reason. No budget will be able to support content-moderation workforces, which would reach into the millions,” predicts Genís Roca, president of the puntCat Foundation and a long-time internet analyst.

“At first, the only solution companies found was people. Today much of the work is automatic, it is true, but even if it is only a small part, the material that passes through humans causes disproportionate damage. It is one thing to eliminate spam about Viagra and another to stop a video of a child being raped. One is irritating; the other causes psychological damage,” reflects Sarah Roberts, a professor at the University of California, Los Angeles (UCLA), who has been researching social networks since 2010, in their early days.

“They shrug their shoulders,” she adds, “and say: My God, it's a horrible reality that someone has to do this job and, thank God, these people are doing it. But the fact is that they decided to open their floodgates and tell everyone to upload whatever they wanted and they would fix it on the back end. Because it's a commodity, and they want it all, the good, the bad, and the ugly, and they'll fix it later. That was a decision they made at the beginning; they didn't have to do that. And the rest of us suffer the consequences, both the users and the workers in Barcelona. I was there and spoke to some of them, and I tried to get the company to tell me why they were in Spain and why they were in Barcelona. They didn't want to tell me, but the answer is: because there was an economic crisis in Spain. I mean, that was the answer, an ever-cheaper salary, but they don't say it.”

In the second quarter of 2023 alone, according to Meta's own data, 7.2 million pieces of content were removed from its networks for child sexual abuse. On Facebook, 6.4 million items were removed for suicide and self-harm and 17.5 million for hate speech; on Instagram, 6.2 million files.

For Roberts, “the worker needs to know what the consequences may be, because you are really selling something much bigger than it seems at first. It is a cheapening of humanity. Locating these operations around the world, whether in Spain, Colombia, Romania or other economically disadvantaged places, even if they are in Europe, is deeply cynical. They chase the lowest wages, and that's why this is a global quasi-industry. Companies came to market with their products without a real plan for how to manage the types of material that their business model not only allowed but, we could say, actively solicited, in the sense that some people feel inspired to commit acts of violence or brutality precisely because of the possibility of uploading them and sharing them with the world.”

Would identifying Internet users – that is, ending a certain anonymity – be a solution? “It is true that many names on the networks are false, but anonymity does not really exist; the digital world is characterized precisely by the trace it leaves, and it will become increasingly difficult to attempt,” says Roca.

Sarah Roberts believes that the main interest of big technology companies is not freedom of expression but business. “We are not the customers; we are what is on offer. Within these companies, the so-called trust and safety teams or operations that deal with these issues tend to be the parts of the company with the fewest resources. And they're always in a confrontational position, because they're trying to defend things that people in executive positions see as a threat to revenue generation. And in just the last two years, after the massive layoffs at Twitter, other companies like Facebook and Google used it as an excuse to lay off tons of people, especially from their trust and safety teams.”

“There are two things I would propose: the first is to listen to the moderators and what they say about their needs and how to protect them; and the second is to think about the problem of toxic content in a more holistic way. This is a society-wide problem, reflecting deep-rooted divisions, for example around gender, race, sexual attitudes, and so on. We have to address these divisions, because they generate hatred and toxicity, and not limit ourselves to trying to control their circulation on digital platforms,” says Siapera.

“I do not believe,” she adds, “that punishing this type of content by legal means is the solution, and it can also seriously compromise our right to freedom of expression. In my opinion, we have to address toxicity at its sources, and these are found in power structures and systems, not in individual users and their actions. Unless those are dismantled, we are unlikely to escape the game of cat and mouse, where as soon as a toxic message is removed, a new one takes its place. Obviously, this does not absolve either Big Tech or regulators, but their efforts are unlikely to succeed unless social divisions are addressed.”

"The key," adds Roca, "is that behind specific software there is a specific morality, be it that of the United States, China, Europe, the East... and not all of them are the same."

This analyst is, in any case, “optimistic,” because the distinction between reality and fiction “has always been very problematic; it is not a problem unique to today. AI is probably not that different from Photoshop. Let's remember that when Photoshop appeared, everyone thought photography was over, and that was not the case.”

López de Mántaras is more radical: “Let's eliminate the networks. It's not a flippant suggestion. Let's think: do they bring more benefit or more harm? To me it is clear. Very brilliant minds are dedicated to figuring out how to keep us in their clutches for as long as possible, which is not ethically acceptable.”