Apple bans its employees from using ChatGPT for fear of leaks

The report, first published by The Wall Street Journal, landed like a bucket of cold water on the tech world.

Oliver Thansan
Monday, 22 May 2023, 22:46

According to the Journal, Apple does not allow its employees to use ChatGPT or other generative artificial intelligence tools, either at work or on company equipment.

The news coincides with OpenAI's launch in the United States of the official ChatGPT app for iPhone, meaning users no longer need to resort to third-party applications to use it on their phones. An Android version is also on the way.

Apple's ban contrasts with the prevailing warnings that this AI could wipe out many jobs. Why would a tech company not want its employees to use this AI for work?

The main answer is close at hand: Italy temporarily banned ChatGPT for failing to comply with data protection law. Apple's motives are the same as Italy's, or Samsung's, which was revealed weeks ago to have banned the AI among its employees as well.

ChatGPT now lets users delete the information in their chats, which led Italy to allow its use again. Even so, the OpenAI chatbot continues to store some of the data we provide, even when we ask for it to be removed.

After all, generative artificial intelligences get smarter the more we train them with our questions; that is the essence of deep learning technology.

Doubts about how artificial intelligences use data long predate the arrival of ChatGPT. Until recently, Google was the company that raised the most concern over the use it made of information to train its artificial intelligences. The Google Assistant recognizes our voice with a high degree of reliability, something it achieves in part through practices as controversial as letting the phone record snippets of audio without our knowledge.

But there is a big difference between feeding a service more or less disconnected data and entering highly detailed information, as happens when we ask an AI to solve a problem related to a patent under development.

That is one of the great dangers of ChatGPT today. If, for example, an Apple employee enters information about a future company product, it can be seen by the human trainers who feed the model. Given that these workers may be employed by outsourcing firms and underpaid, leaks are possible.

The future of ChatGPT and other artificial intelligences such as Google's Bard depends to a large extent on privacy. Their professional use can pose a threat to companies, institutions and workers.

Consider a practical case: if we use ChatGPT to process data for a doctoral thesis, there is no guarantee that the data will not feed this AI's knowledge base, and by the time the thesis is submitted, parts of it may already have been seen by ChatGPT users.

These are risks not lost on Sam Altman, co-founder of OpenAI, the company that developed ChatGPT, who yesterday met with the Spanish Prime Minister, Pedro Sánchez. Madrid was one stop on his world tour to press the case for regulating this technology. In their conversation they discussed the importance of the forthcoming European regulation on artificial intelligence (the AI Act), and Sánchez highlighted Spain's role as a pioneer in regulation in this area, as shown by its approval of the Charter of Digital Rights.

For his part, in a colloquium at IE University, Altman said that AI regulation should focus on large models because "they are the ones that can really do harm." The American entrepreneur believes "it does not make sense" to regulate small developers, who should be allowed to grow; for small language model companies, regulation would curb their capacity for innovation and creation. His suggestion was to "let the small models grow" and to focus regulatory attention on the large models, since "we are the ones who can manage them."

On the contested issue of privacy, he acknowledged that "it is fair and reasonable to be skeptical" and that the technology's work with AI "has not been perfect." He said that while there have been "a lot of bugs," all the work that has gone into fixing them and making improvements must also be taken into account.