Law & Our Rights
Law Vision

Urgency of comprehensive AI regulation in Bangladesh

Artificial Intelligence (AI) is no longer the future; it is the present. From global finance to healthcare, surveillance to academia, and the legal profession, AI systems are now integrated into many aspects of our everyday lives. While countries around the world are debating how best to address the problems AI creates by improving their legal frameworks, Bangladesh lags far behind. These questions become urgent when an AI system malfunctions: if an AI misidentifies a face, makes a wrongful or unintended transaction, or leaks personal data, who is accountable? The user or the AI itself? Without a clear legal framework, the question of liability weighs heavily on both the use and the usefulness of AI.

Recently, New Zealand MP Laura McClure illustrated the threat of deepfake technology by displaying a manipulated, nude image of herself in parliament. The UK's High Court has also warned lawyers against misusing AI after discovering fabricated case-law citations in court filings. Similarly, the recent wave of Ghibli-style images generated through ChatGPT has raised concerns over copyright and intellectual property rights.

Unethical use of AI poses unprecedented risks to users. Deepfakes, misinformation, and identity theft are all facilitated by AI-powered tools. In Bangladesh, although the use of AI is still limited, it increasingly appears in connection with cybercrimes. Sadly, our laws have not caught up to prevent AI-related cybercrimes either.

The Cyber Security Act 2023, the Information and Communication Technology Act 2006, and the newly enacted Cyber Protection Ordinance 2025 address cyber offences, but they are not tailored to the complexities of AI. Even the 2025 Ordinance offers minimal guidance on AI-generated content or algorithmic decision-making. The cyber tribunals, meanwhile, are overwhelmed and under-resourced, dealing mostly with defamation and digital harassment cases. The draft National Artificial Intelligence Policy 2024 is a positive step, but it, too, lacks clarity on fundamental issues such as transparency, ethical use, and human supervision. Moreover, it does little to recommend strategies to prevent crimes ranging from online harassment to organised offences such as the Bangladesh Bank heist of 2016.

Bangladesh ranked 75th out of 83 countries in the Global AI Index. This is not only about technology; it reflects a broader failure to prepare our legal, educational, and social institutions for the future. In contrast, countries such as the UAE are using AI to predict and prevent disasters, such as fires, before they occur by training models on vast amounts of data. Sadly, we are still struggling with digital literacy and the digital divide.

What can we do? First, Bangladesh must enact a comprehensive AI law drawing on global best practices. It may follow the EU Artificial Intelligence Act, for example, which adopts a risk-based approach and mandates transparency, ethical use, and human oversight. Second, we need an independent AI regulatory authority to ensure accountability and investigate misuse. Third, we must involve professional experts, including technologists, academics, and lawyers, in framing our AI policy.

And finally, we must treat data protection as a fundamental human right. The public should know who collects their data, how it is stored, and whether they can opt out. Without such enforcement, digital rights will remain a myth.

The writer is an official contributor to the Law Desk and a law student at the Bangladesh University of Professionals.
