The misuse of algorithms in the workplace, and companies' refusal to disclose how their artificial intelligence tools work, is an open secret in business management circles. Until now, the Labor Inspection had not focused on this issue, but the second vice president and Minister of Labor, Yolanda Díaz, announced this week an inspection campaign to step up surveillance of how large technology companies, such as Amazon, use these algorithms to monitor employees and organize their work rhythms. However, the experts consulted point to several obstacles to imposing sanctions for these practices.
Last June, the Generalitat of Catalonia fined Amazon for refusing to reveal how the algorithms used to measure the productivity of workers at its logistics center in Prat de Llobregat (Barcelona) operate. Although the fine was little more than symbolic, at just over 2,400 euros, it was significant as a pioneering sanction of a company's refusal to account for the details of its algorithmic systems.
To date, there have been few such decisions. But last July the National Court also ruled against the call center company Foundever Spain for refusing to give the CGT union delegates at the company information about a series of algorithms it claimed not to use, but which the trial proved it did. In this case, the Court found that the company violated article 64.4.d of the Workers' Statute, which requires it to provide information about the design and operation of the algorithms.
The Ministry of Labor's intention is to increase surveillance and, where appropriate, sanctions regarding artificial intelligence in the workplace. But the agency's inspectors disagree about their ability to exercise this new control. The president of the Union of Labor and Social Security Inspectors, Ana Ercoreca, argues that the legislation already allows the agency to impose sanctions. And she maintains that, beyond violations of the aforementioned article 64.4.d of the statute, the Inspection can fine companies for misuse of algorithms through other types of breaches of labor regulations.
In fact, she explains that the Inspection has already initiated several actions along these lines. Among them, a case in which a large multinational outsourced part of its delivery system to a subcontractor (something permitted by law), but the Inspection's investigations found that it was the multinational's algorithms that determined the shifts, vacations and other work organization matters of the subcontracted company. "In that case the sanction is for illegal transfer of workers, but this violation was uncovered through the algorithms," says Ercoreca.
Likewise, she adds, there have been cases in which a selection process was reported, the inspectors detected that it included no women of childbearing age, and they discovered this was due to bias in the AI tool. In that case, the sanction for the company that owns the tool would be the one established for discrimination, a right regulated in general terms in article 4.2.c of the Workers' Statute and, specifically for labor relations, in article 17 of the same law.
In the same way, practices involving pay discrimination have also been sanctioned: situations in which a platform's algorithm assigns a lower workload to part-time employees. "Instead of the distribution of tasks being proportional to the contracted hours, the algorithm sends them less work so that no one qualifies for the productivity supplement," criticizes Ercoreca.
Other inspectors disagree that these breaches are being addressed. The head of the CSIF union in the Labor Inspection, Miguel Ángel Montero, considers that inspectors "find themselves in an almost black hole, because the platforms generate precarious underemployment and make inspection activity extremely difficult." According to Montero, "the misuse of algorithms in the workplace is growing exponentially: at the moment large companies use them the most, but very soon they will reach small restaurants and workshops, and the Inspection is lagging behind in this."
This union official also complains about the lack of specific training to interpret AI tools and detect how they are being misused, though he acknowledges that the Ministry of Labor is aware of this gap and is incorporating specific training on this type of algorithm into the official curriculum of the agency's school.
The lack of qualified professionals is also the main obstacle that Adrián Todolí, professor of Labor Law and expert in platform legislation, sees in the fight against the abuse of AI by companies. This academic finds merit in both inspectors' positions. He maintains that Spanish legislation already allows the Inspection to sanction offending companies under the Workers' Statute, but also through the data protection law, which spells out in greater detail the requirements for the use of algorithms in companies. In those cases, the Data Protection Agency is responsible for imposing the sanction, but the Labor Inspection, again indirectly, can accuse the company of violating the principle of non-discrimination, for example.
“If this Inspection campaign is carried out, what will be most difficult is carrying out the investigations. The main challenge (of the agency) will be to have specialists in how the algorithms work and how to detect the risks of AI,” says Todolí.
Audits
In parallel to the actions of the Inspection, the head of Artificial Intelligence at UGT, José Varela, denounces that "companies systematically refuse to provide information in collective bargaining about the algorithms they use, because they know they would not pass any audit." This complaint is in line with European legislation embodied in the artificial intelligence regulation (AI Act), which in the part already in force requires audits of algorithmic applications in the work environment before and after their deployment. This standard also expressly prohibits, for example, the use of emotion recognition systems based on biometric tools.
However, the jurist who was deputy rapporteur of this European AI regulation, Iban García del Blanco (now international director of Lasker), is unsure how the widespread use of these tools can be combated through inspections. "It will be very complicated to plan inspections and define their purpose; it will be like controlling the implementation of robots, robotic machines. It may even be counterproductive," he says. He also points out that the part of this regulation establishing the sanctioning scheme for offending companies, as well as the distribution of powers (who will impose the sanctions), will not come into force until August 2, 2026.