Views

Artificial intelligence is still far from being ‘intelligent’

Why does Big Tech want an immediate six-month pause on any further development?
An image of a robot taking a picture, generated by AI software Midjourney. SOURCE: REUTERS

Is artificial intelligence (AI) really "intelligent" in its creativity and decision-making? Or is it stealing others' works and perpetuating existing human biases?

This January, three artists filed a class-action lawsuit in the US District Court for the Northern District of California against the makers of the AI image generators Midjourney, Stable Diffusion, and DreamUp. They claim these companies use their artwork to generate new images, drawing on a publicly available image database, LAION-5B, that includes their works, even though the artists never consented to having their copyrighted works included in the database, were never compensated for their use, and are never credited when AI images are produced from them.

In effect, AI scrapes through billions of existing works produced by human labour in order to "produce newer ones." That is why several experts are asking whether AI is "artificial" or "intelligent" at all.

Tech writer Evgeny Morozov has argued that while early AI systems were mostly hand-coded rules and programmes, and could claim some "artificiality," today's AI models draw their strength entirely from the work of actual humans. Built on vast amounts of human output stored in mammoth, energy-hungry data centres, AI is not "intelligent" the way human intelligence is: as Microsoft's Kate Crawford has pointed out, it cannot discern things without extensive human training.

Even in decision-making, AI models can carry strong biases, as a 2019 article in Nature reported. An algorithm widely used in US hospitals had been systematically discriminating against Black patients. Historically, hospitals spent less money on Black patients than on white patients with the same conditions, and the algorithm used past healthcare costs as a proxy for medical need. It therefore assigned Black patients lower risk scores and placed them in a lower-risk group regardless of their actual condition, a classic proxy-label failure (sketched in code below). In another case, an image-generating bot returned a picture of a salmon steak in a river when asked to draw a swimming salmon. The AI model could not make a simple judgement that any toddler could.
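A minimal Python sketch makes that proxy-label failure concrete. Everything here is invented for illustration (two hypothetical patient groups, a made-up spending gap, and past cost standing in for a trained model); it is not the actual hospital system, only the mechanism the study describes:

```python
# Illustrative sketch of proxy-label bias (invented data, not the real system):
# a "risk" model built on healthcare COST inherits a historical spending gap.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Two groups with identical true health needs.
group = rng.integers(0, 2, n)                    # 0 = group A, 1 = group B
need = rng.gamma(shape=2.0, scale=1.0, size=n)   # same distribution for both

# Historically, less money is spent on group B at the same level of need.
spending = np.where(group == 1, 0.7, 1.0)
cost = need * spending + rng.normal(0.0, 0.1, n)

# A model trained to predict cost would learn exactly this signal, so we use
# cost itself as the "risk score" to keep the effect transparent.
risk_score = cost
high_risk = risk_score >= np.quantile(risk_score, 0.9)   # top decile flagged

for g, name in [(0, "group A"), (1, "group B")]:
    members = group == g
    print(f"{name}: mean true need = {need[members].mean():.2f}, "
          f"flagged high-risk = {high_risk[members].mean():.1%}")
# Both groups have the same mean need, yet group B is flagged for extra
# care far less often, purely because less was historically spent on it.
```

The researchers behind the study reported that changing the prediction target, scoring patients by measures of actual health rather than by past cost, removed most of the disparity.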

However, despite being nowhere near "intelligent," recent developments, especially the release of ChatGPT in November last year, have raised dramatic concerns about AI's effects on human society. Renowned tech figures have published an open letter calling for an immediate six-month pause on training AI systems more powerful than GPT-4. Its signatories include big names such as Tesla's Elon Musk and Stability AI's Emad Mostaque, while other industry heavyweights, including OpenAI's Sam Altman, Google DeepMind's Demis Hassabis, and Microsoft's Kevin Scott, have issued their own warnings about AI's risks. Altman even advised the US government to issue licenses to trusted companies (does this mean only Big Tech?) to train AI models.

Is this call for an immediate pause coming from genuine concern for human well-being? Or is there a commercial motive, as Michael Bennett, a PhD student at the Australian National University (ANU), has suggested? AI could potentially generate enormous wealth for whoever controls it. Let's try to understand the premise of the call.

ChatGPT isn't a research breakthrough; it's a product built on openly published research that is already a few years old. What changed is that the technology became widely available through a convenient interface. Smaller players will soon build comparable or more efficient AI models at far lower cost, and some are already available on GitHub, a popular repository for open-source software. That worries Big Tech, as a leaked internal Google memo makes abundantly clear.

The long memo from a Google researcher said, "People will not pay for a restricted model when free, unrestricted alternatives are comparable in quality…We Have No Moat." Licenses would be a "kinda moat," as Stability AI's CEO Emad Mostaque bluntly put it, a moat being business jargon for a way to protect a company against competitors.

The AI Now Institute, a research non-profit that studies the concentration of power in the tech industry, highlights the perils of unregulated AI in its April 2023 report: the AI boom will make already powerful Big Tech companies even more powerful. AI models depend on vast amounts of data and on super-fast computing power to process it, both of which only Big Tech can afford. Without access to these resources, no entrepreneur or researcher can build a meaningful AI application, as an MIT Technology Review article elaborates.

Yes, we need regulation of AI development, and a pause if necessary, but not for the reasons given in the open letter. The real goal should be to ensure that AI technology remains open source and democratic.

The other reason AI should be regulated is the way social media platforms have already used it to fuel gender bias and extreme polarisation, and to play on social divisions, resulting in unspeakable violence on a massive scale (as in Myanmar, via Facebook). AI models will amplify both misinformation (unintentional inaccuracies) and disinformation (deliberately false information), simply because they are trained on such data and then produce more of it, a feedback loop known as the "model cannibalism" effect (simulated in the toy sketch below). Large language models can also keep repeating fabricated and false information because of a phenomenon called "hallucination," which the independent watchdog NewsGuard has documented across a growing number of AI-generated online news portals.
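The model cannibalism effect can be simulated in a few lines. In this toy Python sketch, a one-dimensional Gaussian stands in for a full generative model (an assumption made purely for illustration): each generation is "trained" only on data sampled from the previous one, so estimation errors compound.

```python
# Toy sketch of "model cannibalism" / model collapse: each generation of a
# model is fitted only to data sampled from the previous generation.
import numpy as np

rng = np.random.default_rng(1)
mu, sigma = 0.0, 1.0       # generation 0: the real data distribution
n_samples = 100            # finite training set per generation

for generation in range(1, 21):
    data = rng.normal(mu, sigma, n_samples)   # sample from the previous model
    mu, sigma = data.mean(), data.std()       # fit the next generation to it
    print(f"gen {generation:2d}: mu = {mu:+.3f}, sigma = {sigma:.3f}")

# Estimation error compounds across generations: on average sigma shrinks,
# so rare (tail) content disappears first and the model drifts from reality.
```

With no fresh human-made data entering the loop, researchers expect a similar drift, at far larger scale, in language models trained on web text that increasingly contains model output.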

Intentional or otherwise, all of this could be quite handy for manipulating public opinion or creating biases that benefit those in power. That makes it even more necessary to regulate AI. To ensure that the benefits of AI reach everyone, humans must always remain in control of it.

Dr Sayeed Ahmed is a consulting engineer and the CEO of Bayside Analytix, a technology-focused strategy and management consulting organisation.
