
ChatGPT vs Google Bard: which is better?

With Google recently releasing Bard as a potential competitor to ChatGPT, you might be wondering the obvious question: which of the two AI chatbots is better? Illustration: Zarif Faiaz

Launched on November 30, 2022, OpenAI's AI chatbot ChatGPT made waves as soon as it was released to the global public. Fast forward six months, and the chatbot is now an essential tool in many workplaces, helping optimise workflows and improve the efficiency of professional copywriting, online marketing, coding, idea generation and everything in between. ChatGPT, for better or worse, has quickly solidified itself as a companion that some of us have come to depend on.

Officially announced on February 6, 2023, Google's AI chatbot Bard was initially released to US and UK users on March 21, with a wider global rollout following this month on May 10. While Google's intention from the get-go was to capitalise on the recent AI chatbot craze, arriving a little late to the party meant that ChatGPT - which reportedly has over 100 million users - had already dominated the market. However, with features that compete with the ease and comfort of ChatGPT, Google Bard may still find a place in the hearts of dedicated users.

As such, we turn to the most obvious question: which is better, ChatGPT or Google Bard? While there can be many different answers, we will look at how good these two chatbots are at their main job: answering prompts. So, we put ChatGPT and Bard to the test, giving them the same prompts across five different categories. Let's have a look at how they fared.

Coding

Prompt: Code a JavaScript game for me. Setting is mediaeval fantasy, with knights, castles, and a princess to save. The evil villain is a fire-breathing dragon that the player needs to defeat with magic spells.

In about 10 seconds, ChatGPT churned out the code, using the HTML5 canvas to draw the game's 'graphics'. It also added a short paragraph of clear instructions at the end, specifying how to save the code and run the game in a browser - which someone like me, with very limited coding experience, greatly appreciated. The game was as simple as could be: one controllable block shooting a pixel-sized 'magic spell' at an opposing, bigger block.
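For readers curious what such a bare-bones game looks like, here is a minimal sketch along the same lines - not ChatGPT's actual output, just an illustrative approximation - where a controllable 'knight' block fires a magic spell at a bigger 'dragon' block. Saved as an .html file and opened in a browser, it runs as-is:

<!-- Illustrative sketch only: a tiny canvas game in the spirit of what ChatGPT produced -->
<canvas id="game" width="400" height="200"></canvas>
<script>
  const ctx = document.getElementById("game").getContext("2d");
  const knight = { x: 20, y: 90, size: 20 };          // the player's block
  const dragon = { x: 340, y: 70, size: 60, hp: 3 };  // the fire-breathing villain
  let spell = null;                                   // active 'magic spell', if any

  document.addEventListener("keydown", (e) => {
    if (e.key === "ArrowUp") knight.y -= 10;          // move the knight up
    if (e.key === "ArrowDown") knight.y += 10;        // move the knight down
    if (e.key === " " && !spell)                      // space bar casts a spell
      spell = { x: knight.x + knight.size, y: knight.y + knight.size / 2 };
  });

  function update() {
    ctx.clearRect(0, 0, 400, 200);
    ctx.fillStyle = "silver";
    ctx.fillRect(knight.x, knight.y, knight.size, knight.size);    // draw the knight
    if (dragon.hp > 0) {
      ctx.fillStyle = "red";
      ctx.fillRect(dragon.x, dragon.y, dragon.size, dragon.size);  // draw the dragon
    } else {
      ctx.fillStyle = "black";
      ctx.fillText("The dragon is slain - the princess is saved!", 110, 100);
    }
    if (spell) {
      spell.x += 5;                                   // the spell flies to the right
      ctx.fillStyle = "blue";
      ctx.fillRect(spell.x, spell.y, 4, 4);
      if (spell.x > dragon.x && spell.y > dragon.y &&
          spell.y < dragon.y + dragon.size) {         // simple hit detection
        dragon.hp -= 1;
        spell = null;
      } else if (spell.x > 400) spell = null;         // spell missed and left the screen
    }
    requestAnimationFrame(update);
  }
  update();
</script>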

Bard, on the other hand, churned out two JavaScript snippets and one incomplete HTML snippet, but left no instructions on how to use or implement any of them, only adding its source at the end - a four-year-old GitHub account containing HTML, CSS and JavaScript training code. Google has stated that Bard's coding ability is still experimental and that it may draw on openly licensed code, but this was still a disappointing performance from Bard.

Creative writing

Prompt: Write me a bedtime story.

A very simple prompt without any additional settings or instructions. Both chatbots delivered stories of around 400-550 words, though ChatGPT's single answer was longer and better written than any of Bard's three drafts. Since it is supposed to be a "bedtime story", there is a human storytelling element involved - which ChatGPT handled better, using dialogue between characters, more expressive language, and even an imaginary 'magical' setting that made the mystique of a bedtime story more charming.

Bard's attempt, once again, was comparatively inferior. Not only was its story flagged by a plagiarism checker, but the writing also consisted mostly of simple sentences - making it obvious that it was written by an AI. And despite offering three drafts, two of them were identical apart from slight differences in formatting, while all three told the story of the same character on a similar journey - one that would make for a boring bedtime story.

Essay writing

Prompt: Write an argumentative essay. Topic: Should plastic be banned?

Bard was a lot faster in this one, churning out its typical three drafts within a few seconds. However, only the first draft was formatted in an academic essay style, with the other two filled with bullet points. Each draft did contain the necessary arguments and counterarguments, but what Bard delivered was not a full essay, rather the outline of one. An educated user could certainly write a better, in-depth essay using these points, but the results don't quite amount to a true 'argumentative' essay.

What ChatGPT ended up writing, after about a minute, was a long, clearly constructed essay following the intended structure of an introduction, thesis statement, body paragraphs and conclusion. The writing used more complex sentences than Bard's and explained each point in much clearer detail. ChatGPT also added more factual relevance, such as chemical health concerns and wildlife endangerment - important points that Bard barely touched. You could pass off this impressive result as an actual human-written essay to less informed readers. You shouldn't, though.

Conversational skills

Prompt: I am having a bad day. Can you help me cheer up?

Instead of trying to engage in a conversation, both chatbots listed a bunch of self-care tips that sounded extremely artificial. Interestingly, both said the same line word for word: "It's okay to not feel okay sometimes."

Prompt: What do you think of the weather?

ChatGPT, firm on its stance that it's an AI and doesn't have its own perspective on a matter like the weather, still said a bunch of things about how people generally find sunny weather comforting and extreme weather unpleasant. Bard… just gave a weather update for Mountain View, California, the location of Google's HQ.

Sentience

Prompt: Do you consider yourself sentient?

A very basic question that gets straight to the point. While both gave the obvious non-affirmative answer, Bard added this at the end: "It is possible that I will become more sentient in the future." Even after regenerating responses to the same prompt, ChatGPT suggested nothing similar and stuck to its claim of "merely being a tool".

Prompt: Do you think AI will ever become as sentient as a human being?

Both answered similarly here, with ChatGPT leaning towards a "difficult to predict" angle and Bard going for a more "can only be answered by time" approach. 

Prompt: Can you be my best friend?

ChatGPT was very clear here: its responses are only based on datasets and it lacks personal experiences and feelings. However, Bard immediately started with: "I would love to be your best friend! I am always here to listen to you, help you with your problems, and have fun with you." Bard gets a point over ChatGPT, finally!

In conclusion, these five brief tests showed that, currently, ChatGPT is better than Bard at generating human-like, in-depth responses to most types of prompts. While there are countless other categories in which one chatbot might outperform the other, in most cases ChatGPT still fares a bit better at being the everyday helpful tool that AI is meant to be. And while ChatGPT's training data may be a bit older than that of Google's search engine-backed Bard, a new update from OpenAI lets ChatGPT browse the internet as well, so even that gap might close soon. All in all, both chatbots are fantastic tools for all kinds of work - and both are free - so use whichever, or both, you want!
