"Social loafing": the effect that having robots as coworkers can cause

Oliver Thansan
19 October 2023 Thursday 10:23

For years it has been argued that the most effective way to take advantage of advances in robotics and artificial intelligence is not to replace people with machines, but to have them work together, collaboratively. Artificial intelligence experts call this model "human in the loop," artificial intelligence with human interaction, or "learning apprentice": if the robot acts as an apprentice helping the human employee, it will learn and draw conclusions from that work, while the human, with external data and the machine's help, will improve their decision-making and task performance.

However, the results of an experimental study conducted by scientists at the Technical University of Berlin suggest that the effect may be the opposite: working with robots can lead people to exert less effort.

The researchers found that people working on a task of inspecting electronic circuit boards for manufacturing defects paid less attention to the task when they believed a robot had already checked the boards. Specifically, the results of the study, published in Frontiers in Robotics and AI, indicate that participants who worked as a team with the robot detected an average of 3.3 defects out of a possible five, while those who worked alone found an average of 4.2.
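As a back-of-the-envelope check, those averages translate into detection rates of roughly 66% versus 84% (a minimal sketch using only the figures reported above; the per-participant data are not given in the article):

```python
# Average defects detected out of 5 seeded defects, as reported in the study.
TOTAL_DEFECTS = 5
avg_with_robot = 3.3   # participants who believed a robot had pre-checked the boards
avg_alone = 4.2        # participants working alone

def detection_rate(avg_detected: float, total: int = TOTAL_DEFECTS) -> float:
    """Average fraction of seeded defects that participants found."""
    return avg_detected / total

print(f"with robot: {detection_rate(avg_with_robot):.0%}")  # 66%
print(f"alone:      {detection_rate(avg_alone):.0%}")       # 84%
```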

The scientists believe that employees who knew they were partnered with a robot relaxed, paid less attention and exerted less mental effort, falling prey to a phenomenon psychologists call social loafing, which is common when people work in teams.

"It is something that happens in work groups depending on how the strategies are designed; if the group is large or if the individual has the feeling that their ideas are not taken into account, participation decreases, the person begins to pass because he believes that his effort is not worth it," explains Jordi Vallverdú, researcher and professor of Philosophy of Science and Computing, Artificial Intelligence and Robotics at the UAB. And he emphasizes that, in the case of teamwork with a robot, Human carelessness has a lot to do with overconfidence in the machine, “with the mistaken idea that it works better than us and does not make mistakes.”

Vallverdú considers this "surrogate cognition," the conviction that "since the machine already does a task well, I only have to supervise it a little and can let my guard down," a rather serious problem, because people delegate responsibility to the robot or to the artificial intelligence, yet these systems also fail and produce errors.

He gives as an example the accidents suffered by autonomous cars when the driver neglected driving, trusting that the automatic systems would react for him, accidents that could have been avoided had the user not grown complacent.

He adds that this complacency, or social loafing born of excess trust in artificial intelligence systems, can also affect, in the long run, the quality of work and the learning and knowledge of both the person and the machine.

"Many of the tasks we carry out involve implicit learning, and if you start to slack off, if you delegate to the machine and do not monitor what is happening, then when a problem later appears you will be under great stress, because you will not know how to find it or where it comes from," comments Vallverdú. He points out that this loss of knowledge also harms machines that are capable of learning from humans.

He warns that with the new generative artificial intelligences, interacting with machines is very easy; technical knowledge is no longer needed to interact with robots, and this increases the risks of bias and of problems in human-machine interaction. "You have to stay vigilant and not become overconfident; foolish students think they can delegate their work or their programming to artificial intelligence, but there are conceptual learnings that cannot be delegated, among other things because if you lack them you do not know what you can ask of the machine," Vallverdú says by way of example.

From the field of Work Psychology, UOC professor Enrique Baleriola explains that the fact that people relax when working with a robot may also be related to the hierarchical relationship established between the two.

"When people know there is an automated filter prior to the task they perform, they relax, because they see themselves not as the machine's partner but as its boss or supervisor and, therefore, they no longer get as involved in the task as they would if they considered themselves its executor," he comments.

He agrees with Vallverdú that the Technical University of Berlin study is pioneering because it puts on the table a highly relevant question: how human-robot teams function and what consequences they have in the work environment.

He points out, however, that this is a laboratory study in which participants were told they were collaborating with a robot and were shown its work, but did not work directly alongside it. "It remains to be seen how they would work, how they would perform their task, in a real work environment, such as an automobile plant where these mixed teams already operate," says Baleriola.

The authors of the study themselves raise this need in their conclusions. "To find out how big the problem of motivation loss is in human-robot interaction, we need to go out into the field and test our assumptions in real work environments, with skilled employees who typically work in teams with robots," explained the study's first author, Dietlind Helene Cymek, when announcing the results. What most worries the researchers are the safety implications of this relaxed attention among those who work alongside robots.

"In our experiment, people worked on the task for about 90 minutes, and we already found that fewer quality defects were detected when they worked as a team; in longer shifts, when tasks are routine and the work environment offers little performance feedback, the loss of motivation tends to be much greater; and in manufacturing, but especially in safety-related areas where double checking is common, this can have a negative impact on work outcomes," the study authors warn.