When headlines talk about artificial intelligence in the workplace, the focus is often on efficiency gains, automation or cost savings. But increasingly, the conversation is moving into more complex territory. One such development is the rise of “digital twins”: AI systems designed to mirror how individuals think, prioritise and make decisions.
Anjali Malik, Associate at Bellevue Law, has been quoted in BBC News on exactly this topic, highlighting both the potential and the pitfalls of this emerging technology. As organisations begin to move digital twins from theory into real-world use, important questions around responsibility, ownership and judgment come sharply into focus.
From tools to counterparts
Traditionally, workplace AI has been framed as something we use: software that assists with scheduling, drafting, analysis or data processing. Digital twins represent a shift in mindset. Rather than supporting isolated tasks, these systems learn from an individual’s behaviour, decision-making patterns and priorities, with the aim of replicating them at scale.
In theory, this could allow businesses to operate more efficiently, preserve institutional knowledge, or support succession planning. A digital twin could handle routine decision-making, flag risks earlier, or maintain consistency across teams. For employers under pressure to do more with less, the appeal is obvious.
But as Anjali noted in her BBC commentary, this evolution raises questions that go well beyond productivity metrics.
Who is responsible for a digital twin’s decisions?
One of the most pressing issues is accountability. If a digital twin makes a decision that has legal, financial or reputational consequences, who is responsible?
- Is it the employee whose working patterns and judgment the system was trained on?
- The employer who deployed it?
- Or the developers who built the underlying technology?
In employment and workplace law, responsibility is rarely abstract. Employers are used to clear lines of managerial oversight and human judgment. Digital twins blur those lines, particularly where AI systems are empowered to act autonomously or at speed.
This becomes especially sensitive in regulated environments, or where decisions affect employees’ rights, performance assessments, or progression.
Delegation, judgment and human oversight
Another key question is what organisations should delegate to AI, even if they technically can. Digital twins are designed to mimic judgment, but judgment often relies on context, empathy and ethical reasoning, not just pattern recognition.
Employers need to consider:
- Which decisions must remain human-led
- How AI outputs are reviewed, challenged or overridden
- Whether reliance on digital twins changes expectations of employee availability or performance
Without clear policies, there is a risk that AI begins to shape workplace decisions in ways that were never consciously approved.
Ownership and identity at work
Digital twins also raise novel questions around ownership. If an AI system is trained on an individual’s working style, experience and professional judgment, who owns that data and its outputs?
This is not purely a technical issue. For employees, there can be concerns about autonomy, surveillance and whether their “working self” is being detached from them and reused without ongoing control. For employers, there is a desire to protect business continuity and intellectual capital.
Striking the right balance will be critical to maintaining trust and avoiding disputes further down the line.
The real-world impact is already here
At Bellevue Law, we’re already seeing these issues emerge in practice. Employers are asking how to introduce AI tools responsibly, while employees are increasingly aware of how their data and decision-making are being captured and replicated.
As Anjali’s insights make clear, this is not a future problem. Organisations experimenting with advanced AI today need to think carefully about governance, transparency and communication, not just technical capability.
Navigating the grey areas
Digital twins offer genuine opportunities, but they also sit squarely within a growing legal and ethical grey zone. The challenge for employers is not whether to engage with AI, but how to do so thoughtfully, lawfully and sustainably.
Clear frameworks, human oversight and open dialogue will be essential as this technology continues to evolve.
If you’re exploring how AI fits into your organisation, or are facing questions about responsibility, oversight or employee impact, our team at Bellevue Law is always happy to talk.