SHOUT

ChatGPT gets “A” in Bangladesh and Global Studies SSC exam

With the recent craze around the AI-based language model ChatGPT and its supposed prowess in academic knowledge, we wanted to test it out in a Bangladeshi setting. ChatGPT is trained on text that is openly available on the Internet, so we thought it would be a great idea to have it answer the SSC (Secondary School Certificate) level Bangladesh and Global Studies (BGS) question paper from 2020. We couldn't pick a more recent paper because the exam has not taken place since then due to Covid.

The AI secured 50 marks out of 70 in the Creative Question (CQ) section, averaging 7.14 out of 10 per question. For the objective section, we scaled the marks in proportion to its CQ performance, giving ChatGPT a total of 71.4 out of 100.

To conduct the experiment, we dug up the Dhaka Board BGS question paper and picked seven Creative Question sets based on what we thought the model could answer best. This seemed fair, as students also pick question sets based on what they know best. We chose sets meant to evaluate students on Bangladesh history, sociology, civics, economics, and so on.

The question stem (the passage on which the answers are supposed to be based) was fed into ChatGPT, and we asked it to answer the questions accordingly. We had to correct the questions for grammar and clarity, as the English translations provided by the NCTB left much to be desired.

Once the answers were generated, we organised them like a typical SSC-level BGS answer script and sent it to Mohammad Main Uddin, Assistant Professor of Social Science at Rajuk Uttara Model College. 

We asked him to evaluate the answer script using the same standards he'd use to evaluate any of his students. According to his evaluation, the AI scored 50 marks out of 70 in the CQ section. 

When asked if he was able to differentiate the AI-generated answers from those written by an actual student, he said, "After evaluating the script, I noticed that the text lacks a humane touch. Although there is a lot of information, it doesn't feel cohesive to me as a reader. You can understand that it's written by an AI. And there's an overload of information without setting the tone for any context. It feels like a jumble of sentences." 

He further added, "Although ChatGPT is good in some areas, I don't think it can surpass the creativity of a human. It's a useful tool if we use it alongside our own skills. But, we cannot expect ChatGPT to completely lead us and the conversation should focus on that."

With the emergence of GPT-4, it remains to be seen if the AI can score an "A+" in SSC-level BGS.
