AI and the Global South

Unprecedentedly powerful predictive tools will strengthen authoritarian regimes’ surveillance capacity

Recent months may well be remembered as the moment when predictive artificial intelligence went mainstream. While prediction algorithms have been in use for decades, the release of applications such as OpenAI's ChatGPT – and its rapid integration with Microsoft's Bing search engine – may have opened the floodgates for user-friendly AI. Within weeks of its release, ChatGPT attracted 100 million monthly users, many of whom have no doubt already encountered its dark side – from insults and threats to disinformation and a demonstrated ability to write malicious code.

The chatbots that are generating headlines are just the tip of the iceberg. AI tools for creating text, speech, art, and video are progressing rapidly, with far-reaching implications for governance, commerce, and civic life. Not surprisingly, capital is flooding into the sector, with governments and companies alike investing in start-ups to develop and deploy the latest machine learning tools. These new applications will combine historical data with machine learning, natural language processing, and deep learning to estimate the probability of future events.

Crucially, adoption of the new natural language processing and generative AIs will not be confined to the wealthy countries and companies such as Google, Meta, and Microsoft that spearheaded their creation. These technologies are already spreading across low- and middle-income settings, where predictive analytics for everything from reducing urban inequality to addressing food insecurity hold tremendous promise for cash-strapped governments, firms, and NGOs seeking to improve efficiency and unlock social and economic benefits.

The problem, however, is that insufficient attention has been paid to the potential negative externalities and unintended effects of these technologies. The most obvious risk is that unprecedentedly powerful predictive tools will strengthen authoritarian regimes' surveillance capacity.

One widely cited example is China's "social credit system," which uses credit histories, criminal convictions, online behaviour, and other data to assign a score to every person in the country. Those scores can then determine whether someone can secure a loan, access a good school, travel by rail or air, and so forth. Though China's system is billed as a tool to improve transparency, it doubles as an instrument of social control.

Yet even when used by ostensibly well-intentioned democratic governments, companies focused on social impact, and progressive non-profits, predictive tools can generate sub-optimal outcomes. Design flaws in the underlying algorithms and biased data sets can lead to privacy breaches and identity-based discrimination. This has already become a glaring issue in criminal justice, where predictive analytics routinely perpetuate racial and socioeconomic disparities. For example, an AI system built to help US judges assess the likelihood of recidivism erroneously rated Black defendants as being at far greater risk of re-offending than White ones.

Concerns about how AI could deepen inequalities in the workplace are also growing. So far, predictive algorithms have been increasing efficiency and profits in ways that benefit managers and shareholders at the expense of rank-and-file workers (especially in the gig economy).

In all these examples, AI systems are holding up a funhouse mirror to society, reflecting and magnifying our biases and inequities. As technology researcher Nanjira Sambuli notes, digitisation tends to exacerbate, rather than ameliorate, pre-existing political, social, and economic problems.

The enthusiasm to adopt predictive tools must be balanced against informed and ethical consideration of their intended and unintended effects. Where the effects of powerful algorithms are disputed or unknown, the precautionary principle would counsel against deploying them.

We must not let AI become another domain where decision-makers ask for forgiveness rather than permission. That is why the UN High Commissioner for Human Rights and others have called for moratoriums on the adoption of AI systems until ethical and human rights frameworks have been updated to account for their potential harms.

Crafting the appropriate frameworks will require forging a consensus on the basic principles that should inform the design and use of predictive AI tools. Fortunately, the race to develop AI has spurred a parallel flurry of research, initiatives, institutes, and networks devoted to AI ethics. And while civil society has taken the lead, intergovernmental entities such as the OECD and Unesco have also become involved.

The UN has been working on building universal standards for ethical AI since at least 2021. Moreover, the European Union has proposed an AI Act – the first such effort by a major regulator – which would block certain uses (such as those resembling China's social credit system) and subject other high-risk applications to specific requirements and oversight.

To date, this debate has been concentrated overwhelmingly in North America and Western Europe. But low- and middle-income countries have their own baseline needs, concerns, and social inequities to consider. There is ample research showing that technologies developed by and for markets in advanced economies are often inappropriate for less-developed ones.

If the new AI tools are simply imported and put into wide use before the necessary governance structures are in place, they could easily do more harm than good. All these issues must be considered if we are going to devise truly universal principles for AI governance.

Recognising these gaps, the Igarapé Institute and New America recently launched the Global Task Force on Predictive Analytics for Security and Development. The task force will convene digital rights advocates, public sector partners, tech entrepreneurs, and social scientists from the Americas, Africa, Asia, and Europe, with the goal of defining first principles for the use of predictive technologies in public safety and sustainable development in the Global South.

Formulating these principles and standards is just the first step. The bigger challenge will be to marshal the international, national, and subnational collaboration and coordination needed to implement them in law and practice. In the global rush to develop and deploy new predictive AI tools, harm prevention frameworks are essential to ensure a secure, prosperous, sustainable, and human-centred future.

 

Robert Muggah, co-founder of the Igarapé Institute and the SecDev Group, is a member of the World Economic Forum's Global Future Council on Cities of Tomorrow and an adviser to the Global Risks Report. 
Gabriella Seiler is a consultant at the Igarapé Institute and a partner and director at Kunumi. 
Gordon LaForge is a senior policy analyst at New America and a lecturer at the Thunderbird School of Global Management at Arizona State University in the US.

Copyright: Project Syndicate, 2023
www.project-syndicate.org
