Red team, blue team | Newsletter 'Artificial'

This text belongs to 'Artificial', the newsletter on AI that Delia Rodríguez sends out every Friday.

Oliver Thansan
Thursday, 27 July 2023, 16:21

This Wednesday, Anthropic, Google, Microsoft and OpenAI announced the creation of the Frontier Model Forum, "an industry body focused on ensuring the safe and responsible development of AI models" that will be in charge of "advancing AI safety research, identifying best practices and standards, and facilitating the exchange of information between policymakers and the industry".

A few days earlier, these same big four, together with Meta, Amazon and Inflection (the startup of a DeepMind co-founder, heavily financed by Silicon Valley heavyweights including Microsoft and Nvidia), pledged to Biden to adopt safety measures in their AI developments. "Social media has shown us the damage powerful technologies can do without the necessary safeguards," Biden said.

We have already seen in this newsletter how the most powerful men in the industry have spent months on tour, meeting with governments, publishing statements, giving interviews, attending public hearings, insisting on the apocalyptic dangers of their technologies and pressing for regulation. We have also seen how legislation, in China, the United States and Europe alike, is slow in coming.

Thus, the ground was prepared to offer an alternative: self-regulation by the sector. Which is exactly what happened this week. It is curious to see how the scenario Biden cited as avoidable may be repeating itself. As Senator Josh Hawley said during another hearing with AI companies on Tuesday, these are the same firms that evaded oversight during the social media battles with regulators; he specifically named Google, Meta and Microsoft. "We are talking about the same people," Hawley said, as reported in an article by The Washington Post.

As Alberto Romero wrote on Twitter: "It is hilarious that the four companies that everyone (including me) has said for months are in a race for AI have just revealed that they are in this together, that they have more in common than what separates them. (...) There was never a race for AI. Because in a race, there is a winner. If all the participants win, it is not a race. It is, instead, a masquerade."

Until we know exactly how this... employers' association? firm? G4? lobby? association? forum?... with more power than many countries put together is going to be organized, it is worth looking at two issues. The first: by moving first, they establish themselves as the legitimate interlocutors before the authorities, and they get to set the standards. The second: who is in and who is not. Although the forum is open to new members, the absence of Meta at the outset is striking, after its strong commitment to a free and open model, Llama 2. The Washington Post reports that Dario Amodei, CEO of Anthropic and the least known of the four leaders of the Frontier Model Forum, publicly expressed on a podcast his doubts about applying the open source model to AI, which he believes could bring "catastrophic consequences".

Amodei, by the way, gave us the creeps at the US Senate hearing this week that we mentioned earlier in connection with Josh Hawley's statements. He explained that certain information necessary for manufacturing biological weapons that could be used for terrorism is nowhere to be found: not on Google, not in any kind of written documentation; it exists only in the heads of a few experts. He is concerned, he says, that new tools will manage to fill in the gaps in the available information and make development easier. Anthropic says more about this, and about its vision for biosecurity, in this post. In it they argue that to avoid risks it is worth borrowing a classic cybersecurity strategy that consists of dividing their experts into two teams: the red team imitates the attackers; the blue team tries to defend against them. It works because, in reality, they are all on the same team.

What else has happened this week

Between possible superconductors and possible UFOs, the craziest news of the week almost went unnoticed: the launch of Worldcoin, the cryptocurrency created by a company owned by Sam Altman (the founder of OpenAI, the company behind ChatGPT) to remunerate those who kindly drop by one of the places where they can have their iris scanned, handing over their biometric data to a large global database of certified "humans", who can thus be distinguished from machines. The scanner ("the orb") is available in 18 countries, Spain included. Altman, who we still don't know whether he is a supervillain, a visionary or a fool, is convinced that AI will generate so much money and eliminate so many jobs that a universal basic income will have to be distributed. Worldcoin is his bet for distributing that wealth. He says the system is verifying one person every eight seconds.

The Wall Street Journal tells how ChatGPT's training relied on Kenyan workers exposed to explicit, abusive and traumatic content. The story sounds very familiar: the same thing happens with social networks, which expose their underpaid moderators to the worst of humanity.

Some companies and open source organizations, such as Creative Commons and GitHub, have published a joint statement expressing their concern about the European legislation on AI, which they believe could favor closed systems over open code. Incidentally, there is debate in the community about whether Llama 2, Meta's model, is really free and open.

OpenAI has disabled its AI-written-text detection tool because of its unreliability. Meanwhile, someone on Twitter reports that a teacher is asking his students to submit their work in Google Docs so he can check the version history and confirm it was not generated with AI. Brilliant.

What is Telefónica using AI for? Richard Benjamins, its head of AI and data strategy, answers: four things. "The first is business optimization, everything that has to do with what a company does on a day-to-day basis. Then we have the important part of the relationship with customers, chatbots like Aura. The third is to use AI for our business clients and public administrations, because many are undergoing a digital transformation. And then we have movement, population, or activity information."

At Cuatrecasas, dozens of engineers are studying how generative artificial intelligence can help the firm's lawyers. From the interview with its CEO, Javier Fontcuberta, conducted by Piergiorgio M. Sandri.

On the US-Europe-China triangle of power, an interview by Xavier Mas de Xaxàs with Columbia researcher Anu Bradford: "Europe can set the course for AI".

As the writers' strike continues, Netflix has advertised an AI product manager position paying €900,000 a year.

Someone has created an evil version of ChatGPT aimed at malware and extortion, called WormGPT. Advertised on a forum, it has been tested by the security company SlashNext, and it seems to stand out for its skill at CEO fraud: a scam that consists of convincing an employee with access to a company's finances to issue an urgent transfer to a certain account number, usually by posing as the boss. The scam only requires a bit of social engineering, but AI makes it possible to increase the volume of attempts and carry them out in any language.

Google co-founder Sergey Brin has returned to working at the office for the first time since leaving his executive role in 2019 to pursue other billionaire things. Three or four days a week he is overseeing the team behind Gemini, Google's artificial intelligence model equivalent to OpenAI's GPT-4. That the boss is rolling up his sleeves gives a good idea of the importance of the project and how critical this moment is for Google. In The Wall Street Journal, in English.

Geoffrey Hinton, the “father of AI” who left his job at Google to warn of its dangers, now says that artificial intelligences could have feelings, and that he didn't mention it before because we would have thought him crazy. In The Times, in English.

Designer Danny Saltaren has found a great use for ChatGPT: optimizing the work calendar so that, by requesting the fewest possible days off, you disappear from the office for as long as possible without letting too many weeks go by between breaks (a toy sketch of the idea closes this newsletter). And with this, we say goodbye until September. Happy summer!

AInxiety level this week: more preoccupied with the superconductor soap opera that could change the world than with AI.
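
For the curious, here is a minimal sketch of the calendar trick described above, in Python. It is not Saltaren's actual prompt or tool, just one naive greedy reading of the problem; every name in it (HOLIDAYS, VACATION_BUDGET, plan) and the holiday dates are assumptions made up for illustration. The idea: spend each requested day off wherever it creates the longest continuous out-of-office stretch.

from datetime import date, timedelta

# Hypothetical inputs: swap in your own holidays, date range and budget.
HOLIDAYS = {date(2023, 8, 15), date(2023, 10, 12), date(2023, 11, 1)}
YEAR_START, YEAR_END = date(2023, 8, 1), date(2023, 12, 31)
VACATION_BUDGET = 5  # days off you are willing to request

def all_days(start, end):
    d = start
    while d <= end:
        yield d
        d += timedelta(days=1)

def is_off(d, vacation):
    # Off if it is a weekend, a public holiday, or a requested vacation day.
    return d.weekday() >= 5 or d in HOLIDAYS or d in vacation

def run_length(d, vacation):
    # Length of the consecutive out-of-office stretch containing d.
    if not is_off(d, vacation):
        return 0
    left, right = d, d
    while is_off(left - timedelta(days=1), vacation):
        left -= timedelta(days=1)
    while is_off(right + timedelta(days=1), vacation):
        right += timedelta(days=1)
    return (right - left).days + 1

def plan(budget):
    vacation = set()
    for _ in range(budget):
        workdays = [d for d in all_days(YEAR_START, YEAR_END)
                    if not is_off(d, vacation)]
        # Greedy step: spend the day where it yields the longest stretch.
        best = max(workdays, key=lambda d: run_length(d, vacation | {d}))
        vacation.add(best)
    return sorted(vacation)

chosen = plan(VACATION_BUDGET)
for d in chosen:
    print(d.isoformat(), "-> stretch of", run_length(d, set(chosen)), "days")

A real planner would also penalize long gaps between breaks instead of piling everything onto one stretch; that multi-objective juggling is exactly the part that is easier to delegate to ChatGPT.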