Man and Machine
This column is about humans and the interactions that this intelligent mammal has with machines. Humans have been using tools since the dawn of civilisation, but the industrial revolution of the 18th century accelerated the replacement of muscle with machines. Now, at the dawn of the 21st century, we foresee the replacement of the mind with machines. Much as the steam engine disrupted the societies of the 19th century, recent inventions in information technology are disrupting ours. One of them, Artificial Intelligence (AI), is set to change our relationship with machines for good. Some of us anticipate a clash of civilisations: our own with the mechanised new.
Unfortunately, the technology is still in its nascent form, and confusion looms over our mental horizon. Some portend the loss of jobs, or even believe in a "take-over" by an algorithmic superintelligence, while others cheer the hope of more peaceful, technology-enabled societies. Let us not brand these as merely futuristic thoughts. The rise of social media, the use of image and video surveillance, the amassing of private data for malicious ends, and the shaping of people's lives with "misinformation" and "disinformation" are already part of the everyday newsfeed. With the technology still maturing even as its usage soars, exploring how it interacts with our personal, social and political lives is of paramount importance. In this column, we want to explore this relationship, not in the future but now: how the new disruptive technologies are shaping the way we act, react and regulate our personal, social and political lives.
In 1950, Alan Turing, in his ground-breaking essay, asked a simple question: "Can machines think?" If a machine can think, it can behave intelligently, and perhaps one day surpass the intelligence of its human creators. This idea of "superintelligence" has inspired a plethora of science fiction. It engrossed and frightened fiction writers so much that Isaac Asimov, in his 1950 collection I, Robot, put forward the "Three Laws of Robotics", laws meant to ensure that robots, however superintelligent, would never cross the line and harm humans. On the scientific side, Turing proposed a simple way to answer his original question: an "imitation game", now popularly known as the "Turing test", in which a human interrogator is tasked with distinguishing between a human and a machine.
There is an international competition, the Loebner Prize, that annually awards prizes to the computer programmes judged most "human-like". To date, no entrant has truly passed the test. We are far from designing "artificial superintelligence"; indeed, we may need decades to build "general AI", meaning AI with human-like intelligence. What we have now can generally be termed "narrow AI": systems that are intelligent not because they imitate human intelligence but because they carry out tasks that would otherwise demand human intelligence, time and effort on an unsustainable scale. AI systems are scalable and designed to take decisions from vast amounts of data. These AI algorithms are gradually replacing and complementing the traditional algorithms that have computationally solved many of our problems.
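To make that contrast concrete, here is a minimal sketch in Python using the scikit-learn library. The toy spam-filter task, the tiny dataset and every name in it are illustrative choices of mine, not anything described in this column: a traditional algorithm encodes a decision rule written by a human, while a narrow-AI system learns its rule from labelled examples.

```python
# Illustrative sketch: hand-written rule vs. rule learned from data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Traditional algorithm: a human writes the decision rule explicitly.
def rule_based_filter(message: str) -> bool:
    return "free money" in message.lower()

# Narrow AI: the decision rule is learned from labelled examples.
messages = ["free money now", "win free money", "meeting at noon", "lunch tomorrow?"]
labels = ["spam", "spam", "ham", "ham"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(messages, labels)

print(rule_based_filter("win easy money"))   # False: the fixed rule misses it
print(model.predict(["win easy money"])[0])  # 'spam': inferred from the examples
```

The learned model generalises beyond the exact phrase a human thought to encode, and it improves as more labelled data are fed to it; that scalability is precisely what makes narrow AI attractive to service providers and policymakers.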
Scalability and the capability to harness insights from data have made AI an essential, complementary tool for policymakers and service providers aiming at social good. AI tools are being used for crisis response, economic empowerment, alleviating educational challenges, mitigating environmental challenges, ensuring equality and inclusion, improving health, reducing hunger, information verification and validation, infrastructure management, public and social sector management, and even security and justice.
AI is an umbrella term that shelters many types of algorithms, and these algorithms and processes have multiple pitfalls that scientists need to be careful about. One is overfitting: an algorithm fits its training dataset so well that it fails to give the right answers in the real world. Beyond that, the data, the algorithms and the human interaction with an algorithm can all be sources of bias, and bias is a common reason for AI failure. A massive amount of data is fed into the machine so that it can recognise patterns, and unstructured data from the web, social media, mobile devices, sensors and IoT devices make data absorption, linking, sorting and manipulation difficult. Hence, if the data are not carefully curated, the dataset may be fraught with incomplete, missing, inaccurate or biased information. Poor curation can also lead to the inadvertent revelation of sensitive data: even after personal data are removed from one dataset, another dataset may still contain them, and an AI system that links the two may reveal them.
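Overfitting is easy to demonstrate. Below is a minimal sketch in Python with NumPy; the toy dataset, the noise level and the polynomial degrees are illustrative choices of mine, not anything taken from this column. A flexible model (a degree-7 polynomial) fits ten noisy training points almost perfectly, yet it typically predicts fresh data from the same process worse than a simple straight line does.

```python
# Toy demonstration of overfitting (all numbers are illustrative).
import numpy as np

rng = np.random.default_rng(0)

# Ten noisy training points drawn from a simple straight-line process.
x_train = np.linspace(0, 1, 10)
y_train = 2 * x_train + rng.normal(0, 0.2, size=10)

# Fresh points from the same process, standing in for "the real world".
x_test = np.linspace(0, 1, 50)
y_test = 2 * x_test + rng.normal(0, 0.2, size=50)

for degree in (1, 7):
    coeffs = np.polyfit(x_train, y_train, degree)  # fit a polynomial model
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: train MSE = {train_mse:.3f}, test MSE = {test_mse:.3f}")

# The degree-7 model hugs the training noise (tiny training error) but
# typically generalises worse than the straight line on the fresh points.
```

The cure is the discipline the paragraph above calls for: careful curation, evaluation on held-out data and restraint in model complexity.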
The drivers of AI risk can manifest at the individual level (such as accidents and privacy violations), the societal level (such as manipulation of the political system) and the organisational level (such as racial discrimination). Over the years, we have seen AI failures that resulted in the loss of lives, the compromise of national or organisational security, reputational damage, regulatory backlash, criminal investigations and diminished public trust. Bangladesh needs to start thinking about how it will embrace the AI surge.
In 1972, the Office of Technology Assessment (OTA) was established in the US to provide members of Congress with objective and authoritative analysis of complex scientific and technical issues. However, a Republican-controlled Congress dismantled it in 1995, calling it an "unnecessary agency". The idea survived in Europe in the form of the European Parliamentary Technology Assessment (EPTA) network. With the science-unfriendly policies adopted by the US in the Trump era, many feel the necessity of reinstating the agency. Perhaps we should be thinking of establishing an office of technology policy to help the parliament and the chief executive's office understand the policy challenges that AI and other new disruptive technologies are bringing forth.
Moinul Zaber, Senior Academic Fellow, United Nations University, E-Government Operating Unit, Guimaraes, Portugal.
Email: zaber@unu.edu
Twitter: @zabermi
"The views and opinions expressed in this article are those of the writer and do not necessarily reflect the official policy or the opinions, beliefs, and viewpoints of the UNU."