Microsoft warns of the greatest danger of AI

Oliver Thansan
26 May 2023 Friday 22:55

The explosion of Artificial Intelligence (AI) carries risks such as weak cybersecurity, bias, a lack of neutrality and the replacement of jobs currently held by human beings. But nothing worries Microsoft more than the generation of deepfakes: videos or images that are not real but appear to be, thanks to sophisticated manipulation.

"We're going to have to address the issues around deep forgeries. We're going to have to address in particular what we're concerned about most foreign cyber influence operations, the kind of activities that are already being carried out by the Russian government, the Chinese, the Iranians", explained the president of the technology company, Brad Smith, in a speech in Washington.

Smith called for steps to ensure people know when a photo or video is real and when it's generated by AI applications like Stable Diffusion or Midjourney: "We need to take steps to protect against altering legitimate content with the intent to mislead or defraud people by using AI."

In addition, Microsoft's president called for licensing of the most critical forms of AI, with "obligations to protect security, physical security, cybersecurity, national security."

"We will need a new generation of export controls, at least the evolution of the export controls that we have, to ensure that these models are not stolen or used in ways that violate the country's export control requirements," he argued.

In recent months, fake images of Donald Trump being arrested and of Pope Francis strolling around in a stylish white puffer coat have gone viral. Both were false, although many users were unable to detect the deception. The controversy was such that Midjourney ended free trials of its software.

The technology companies themselves recognize that AI needs regulation. Last week, Sam Altman, CEO of OpenAI, the company behind the popular ChatGPT, told the US Senate that the use of AI to interfere with election integrity is a "significant area of concern," an issue that urgently needs regulation, he added.