Human Rights in the Age of Artificial Intelligence
Despite its overwhelming presence across many aspects of our lives, there is no widely accepted definition of Artificial Intelligence (AI). Essentially, AI operates through machine learning programmes and associated processes designed to improve the ability of machines to perform tasks. The fundamental purpose of AI, in fact, is to assist humans in work that requires intelligence.
At present, AI is causing and contributing to significant breaches of privacy and data protection, since the collection of personal information on a massive scale increases the potential for exploitation. Indeed, AI may facilitate the harvesting of personal data without adequate or informed consent. For instance, between 2013 and 2018, Cambridge Analytica collated the personal data of up to 87 million Facebook users without their knowledge or consent for use in political advertising.
Consequently, there is now a growing tension between the need to restrict flows of personal data in order to protect privacy on the one hand, and the economic and commercial arguments for the free flow of such data on the other. A balance therefore needs to be struck between the right to privacy and the economic interests driven by, or arising out of, the use of AI.
AI may also adversely impact fairness and due process in decision-making. In making decisions, AI may segregate or segment people by reference to a wide range of factors, even factors completely unrelated to the decision in question, without considering whether such segmentation is appropriate in the particular case. AI developers need to ensure that automated decision-making matches its human equivalent by developing the capacity to consider factors relevant to the individual's circumstances. Legal and technical communities should work together to find adequate ways of reducing the possibilities of discrimination through algorithmic systems.
The use of AI for content curation and moderation on social media may affect the rights to freedom of expression and access to information. The use of facial recognition technology risks serious harm to an array of civil rights. In the field of military weaponry, AI risks undermining the right to life and the right to the integrity of the person if not closely circumscribed.
Human rights are inherent in all human beings, regardless of their race, sex, nationality, or any other status. The development of human rights law and the evolution of its jurisprudence take time; technology, however, moves at a brisk pace. As such, the human rights framework at times appears inadequate as a scheme for the ethical management of AI. Nonetheless, the existing human rights schema can form the basis for delimiting the appropriate scope of AI activities.
Human rights law requires governments and companies to provide a suitable right to remedy where they breach their obligations and responsibilities. At all stages of the design and deployment of AI, it must be clear who bears responsibility for its operation. Companies developing these technologies must proactively engage with academics, civil society actors, and representatives of community organisations. To fulfil their responsibility to respect human rights, they must implement a rigorous human rights due diligence framework governing the use of AI.
The writer is a student of law at the University of Dhaka.