Think like machines | 'Artificial' Newsletter

This text belongs to 'Artificial', Delia Rodríguez's AI newsletter.

Oliver Thansan
26 October 2023 Thursday 17:03

My teenage love, Andre Agassi, discovered that his great rival, Boris Becker, made an involuntary movement with his tongue that gave away where he was going to serve. “If he closed his mouth, the serve went to the center of the court; if he slid his tongue to the side, he would almost certainly hit an open serve,” he revealed for the first time many years later, in his book Open. “I had to resist the temptation to read it constantly.”

The neuroscientist Mariano Sigman and the technologist Santiago Bilinkis have found in this story a magnificent metaphor for the intelligence of machines, and they tell it in the book they have just published, which, coincidentally, carries the same title as this newsletter: Artificial (Debate, 2023).

Agassi had, the authors write, a superintelligence for detecting traits that are almost imperceptible yet accessible to everyone. “A neural network works the same way,” they say. “It detects attributes that allow it to identify whether or not an image is that of a cat, whether there is a tumor in the image of a lung, or what particular emotion a person's voice expresses (...) Like Agassi, no one teaches a neural network which attribute is best for predicting something. It has to figure that out from an abysmal pile of data.” It is difficult for humans to identify the most relevant traits among many possible ones, but not for AIs. Their potential in diagnostic imaging is enormously promising: on a CT scan, for example, they can find not only what they are looking for, but also other incidental findings.
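
A minimal sketch, outside the book's own text, of the idea the authors describe: in Python with invented data, twenty attributes describe each "serve", one of them (standing in for Becker's tongue) secretly determines the outcome, and a small model that is never told which attribute matters has to discover it from examples alone.

```python
import numpy as np

# Invented data: 1,000 "serves", each described by 20 attributes.
# Attribute 7 secretly encodes the tell (Becker's tongue); the rest are noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))
tell = 7
y = (X[:, tell] > 0).astype(float)    # serve direction depends only on the tell

# A tiny logistic-regression "network" trained by gradient descent:
# it only ever sees attributes and outcomes, never a rule.
w = np.zeros(20)
for _ in range(2000):
    p = 1 / (1 + np.exp(-X @ w))          # predicted chance of an open serve
    w -= 0.1 * X.T @ (p - y) / len(y)     # nudge the weights to reduce the error

# After training, the largest weight points at the hidden tell.
print("attribute the model relies on most:", int(np.argmax(np.abs(w))))  # 7
```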

We have many prejudices about the superiority of human intelligence over animal or artificial intelligence, but talent and creativity, in sport, art or life, rest on abilities, such as relating, abstracting, synthesizing, memorizing or creating from scratch, that machines can already do, are learning to do, or will do in the future. For example, ChatGPT is a fabulous editor, capable of correcting and summarizing a text very well.

The real breakthrough will come when we reach Artificial General Intelligence, the true AI of the movies and the great promise of computing, which will be capable not only of carrying out and improving at specific tasks but of learning about anything else. In reality, no one knows whether we will get there, or how long it will take. OpenAI, an interested party, is trying to stoke the impression that it is on the way. Bill Gates believes that we are in a stage of stagnation after the spectacular jump between GPT-2 and GPT-3/4 seen over the last year, and that the next GPT-5 will not be able to repeat such an advance, although there is great room for improvement over the next two or three years if problems such as the cost of the models and their hallucinations are solved, which would allow more practical and more reliable applications.

Several advances published this week touch on precisely these issues. We have learned that pigeons, like AIs, learn through trial and error to make better decisions. They do not need a prior rule to figure out how to get more food, but the problem with brute-force learning is that it consumes a lot of resources. That is why generalizing is so useful, and two scientists, one of them from Pompeu Fabra University (UPF) in Barcelona, seem to have achieved it: they have managed to get a neural network to combine new concepts with others it already knows, something considered impossible for the last 35 years. The advance, published in Nature, could make training AI models cheaper. Meanwhile, in China, a team from the University of Science and Technology of China (USTC) and Tencent has developed a framework named, curiously, after another very clever bird: Woodpecker. It manages to reduce hallucinations in multimodal large language models (MLLMs), those AI blunders that protect the plausibility of a story rather than its truthfulness.
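
A similarly hedged sketch of what trial-and-error learning looks like, in Python with made-up numbers: a simulated "pigeon" pecks one of two keys, one of which secretly pays off more often, and gradually comes to favour it with no prior rule, only feedback. Even this toy case takes thousands of pecks, which hints at why generalizing, rather than relearning everything from scratch, saves so many resources.

```python
import random

random.seed(0)
reward_prob = [0.3, 0.7]   # key 1 secretly delivers food more often
value = [0.0, 0.0]         # the pigeon's running estimate of each key
pecks = [0, 0]

for trial in range(5000):
    # Mostly peck the key that has worked best so far, sometimes explore.
    if random.random() < 0.1:
        key = random.randrange(2)
    else:
        key = 0 if value[0] > value[1] else 1
    food = 1.0 if random.random() < reward_prob[key] else 0.0
    pecks[key] += 1
    value[key] += (food - value[key]) / pecks[key]   # update the estimate

print(f"estimated payoff per key: {value[0]:.2f} vs {value[1]:.2f}")
print(f"pecks on the better key: {pecks[1]} out of 5000")
```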

What else happened this week

- The United Kingdom hosts the AI Safety Summit next week, a large international summit on the safety of artificial intelligence. The venue is the best place in the world for it: the legendary Bletchley Park, where Alan Turing laid the foundations of the discipline and where the Enigma code breakers worked during World War II, in one of the most fascinating episodes in the history of technology. British Prime Minister Rishi Sunak is determined to place himself in the AI race. He has already announced the founding of an institute for AI safety, and will call for the creation of a monitoring panel similar to the one for climate change, something that the head of DeepMind, Demis Hassabis, also supports. Kamala Harris, the vice president of the United States, has confirmed her attendance. China, for the moment, has not.

- On Monday, two days before the start of the event, the US will announce an executive order that, reportedly, will require AIs used by the government to be evaluated beforehand. It will also ease immigration for specialized workers, and a voluntary agreement will be signed with companies, which will commit to facilitating the marking of images and the sharing of safety data with the government and researchers.

- Spain's Carme Artigas, Secretary of State for Digitalization and Artificial Intelligence, will co-chair the new UN AI Advisory Body.

- Do you remember the Frontier Model Forum, that sort of trade association for the industry's big names? It now has a CEO and 10 million dollars.

- The “about this image” option now appears in every Google Images search result in English. There you can trace the history of that photo, its context and metadata.

- Another letter from experts asking for mitigation of AI risks.

- LoveGPT is an automated scammer for dating apps. Things just aren't done the way they used to be.

- Universal has sued Anthropic because Claude (its “ethical” chatbot) uses lyrics from its songs without permission.

- Someone has fixed their toilet bowl with ChatGPT.

- Nightshade is a tool that artists can use to “poison” AIs that take their work without permission. I like it.

- The University of Malaga and the city's municipal police are working on a patrol robot for 2024.

- Polymathic, an alliance of researchers developing a new AI tool that works with numerical data and physical simulations.

- California has withdrawn Cruise's license to operate its robotaxis in San Francisco after one of them dragged a pedestrian six meters.

- People are still amazed by DALL-E 3.

- What is the situation of non-consensual porn in Latin America? By Noor Mahtani.

- Good news: a program manages to detect type 2 diabetes in a group of patients by analyzing voice messages lasting a few seconds.

- A day in the life of artificial intelligence. A nice interactive piece from The Guardian about the everyday uses of this technology.

- The Internet Watch Foundation has found and analyzed 3,000 AI-generated images of child abuse that are punishable under British law. They found celebrities recreated as minors, children “undressed” from photos found on the internet, and new material generated from old photos of real victims, re-victimizing them.

- By the way, after a long controversy the European Parliament has backed away from the mass AI scanning of communications that the Commission proposed to fight child sexual abuse on the internet, a plan that raised serious doubts about its respect for privacy. “You cannot scan the network in a generalized way, nor open back doors in encryption,” said Javier Zarzalejos of the European People's Party.

- Hardware: the US has banned Nvidia from selling its most powerful graphics cards to China, IBM has created a new super-efficient chip called NorthPole, and Qualcomm has launched the Snapdragon 8 Gen 3.

- Meta and an AI company called Realeyes hired actors, in the middle of the actors' strike, to train machines on human expressions in an “emotion study” paying $150 an hour.

Level of AInxiety this week: that of an actor hired to teach emotions to the machine that will replace him.