When robots go rogue: Who’s accountable in the digital workplace?

On April 28, the world observes the World Day for Safety and Health at Work, a day designated by the International Labour Organization (ILO) to reflect on how to make workplaces safer. The theme for 2025 is strikingly modern: "Revolutionizing health and safety: the role of artificial intelligence (AI) and digitalization at work." This theme speaks to a transformation already underway. Robots now carry out dangerous, repetitive tasks in factories. AI-driven platforms manage gig workers' schedules. Wearable tech monitors miners' heart rates in real time. Sensors can detect gas leaks before humans smell a thing. These technologies can save lives, boost productivity, and eliminate jobs that once led to deaths or injuries. Yet they also raise a troubling question: when something goes terribly wrong, who is to blame, the machine or the human behind it?
Let's begin with some real examples. In 2023, a factory worker in South Korea was killed when an industrial robot mistakenly identified him as a box and crushed him. Investigations revealed a lack of proper programming and sensor calibration. In Arizona, a self-driving Uber car killed a pedestrian in 2018. The safety driver was distracted, but the AI failed to recognise the woman as a person. Prosecutors debated whether the driver, Uber, or the algorithm was at fault. In Amazon's warehouses, AI-powered systems track worker productivity. Workers have complained that the system penalises them for taking restroom breaks. In some cases, this pressure has reportedly led to exhaustion, injury, and even heart failure. In 2024, a gas leak at a semi-automated dyeing factory in Bangladesh killed three workers after an unmonitored early warning system failed. These examples highlight the incredible potential, and the unprecedented risks, of AI and automation in the workplace.
Who is responsible?

Suppose a robot malfunctions or an AI system leads to a workplace fatality. Is it the fault of the software engineer who built it? The manager who deployed it? The company that owns it? Or is the AI itself—trained, adapted, and increasingly autonomous—the culprit?
The problem is that most legal systems don't yet have an answer. In countries like Bangladesh, there are no clear laws on liability when AI or robots cause harm. Courts are left to rely on existing tort and criminal law, which were built for human conduct—not autonomous machines.
At the centre of this legal confusion are two competing schools of thought: fiction theory and reality theory. The fiction theory views AI, robots, and corporations as legal fictions: not real persons, but tools created by humans. These entities can only act through real people. They have no conscience, no emotions, no "guilty mind" (or mens rea). Hence, if something goes wrong, it must be because a human erred. On this view, if a robot arm crushes a worker, the fault lies with the factory supervisor or the programming team; an AI scheduling tool that drives workers to burnout is, in effect, a management policy failure; and if a digital sensor fails to sound an alarm, its engineers are to blame.
This approach upholds human responsibility and avoids blaming "dumb" machines. But it also leaves gaps. What if no specific human can be identified? What if an autonomous system, learning on its own, develops harmful behaviour over time? Does that mean no one is responsible?
In contrast, reality theorists argue that legal entities, such as corporations and potentially AI systems, are real actors with their own "will" and "body." Just as a company can be sued, fined, or even held criminally liable in some countries, so too could an AI system or robot be treated as a juristic person. Under this theory, AI systems could be held liable for causing injury or death, face fines or operational bans, or trigger compensation payouts from mandatory insurance schemes. A legal entity, like an AI-managed logistics firm, is then no longer a tool but a collective actor. Just as a football team functions as a unit, the AI system and its human "organs", the engineers, managers, and users, operate together and can be held accountable as a whole.
As workplaces around the world—including in Bangladesh—adopt automation, AI, and smart devices, governments must not lag behind. A few urgent steps are necessary to revolutionise workplace safety. First, policymakers must clarify whether and how AI and robots can be held liable, especially in sectors such as manufacturing, construction, and logistics. Second, just as drivers need auto insurance, companies deploying autonomous systems should be required to carry insurance that compensates victims of malfunctions. Third, all workplace AI systems should be subject to independent safety audits and required to pass usability and risk tests, just like elevators or pressure boilers. Fourth, laws must clearly establish who is responsible when harm occurs: the programmer, the operator, the company, or the AI entity itself. Finally, Occupational Safety and Health (OSH) laws need urgent revision to address risks unique to digital systems, such as mental stress from surveillance or ergonomic injuries from automated work pacing.
The World Day for Safety and Health at Work reminds us that technological progress must not come at the cost of human life. We cannot afford to be dazzled by AI and robotics without building the legal frameworks that keep them in check.
The writer is a member of the law faculty at Southeast University, Dhaka, Bangladesh.