How Facebook has failed us
The internet and technological advancements over the past decades have enabled social media platforms to bloom and develop immensely. Gone are the days when we would have to continually log in and out of MSN Messenger, or save up tiffin money to send extra text messages, typing 4433555555666 just to say "hello" and start a conversation on our Nokia 3310 handsets.
One of the major turning points leading to social media reaching the heights it has today was the creation of Facebook. However, the platform that we had such high hopes for has unfortunately let us (its users) down.
Content and Censorship
Facebook has faced criticism time and again for censoring and regulating content that may damage its reputation or that supports opposing views and establishments. In 2019, US Senator Elizabeth Warren placed ads targeting tech companies such as Amazon and even Facebook itself, expressing her intention to break them up if she won the 2020 election. The ads were quickly taken down and only restored once the takedown faced sufficient backlash. Needless to say, the removal of the ads, and their reluctant restoration only after critical reports had surfaced, shows a clear bias.
The problem lies not only with what they tend to remove but also with what they refuse to take down. In the past few weeks, Facebook has been under fire for refusing to remove hate speech and censor political advertisements. Since then, CEO Mark Zuckerberg has "doubled down" on his decision not to remove political advertisements, saying he believes in "free speech" and thinks "people need to see what the politicians have to say for themselves". However, that is not a good enough reason for Facebook not to remove hate speech, or even to fact-check political advertisements.
On July 8, 2020, Facebook finally released its independent audit report, which had been two years in the making. The audit ultimately concluded that the measures taken by Facebook were "piecemeal" and raised doubts about the company's commitment to actually addressing the innumerable problems that remain regarding the civil rights of its users.
The question that arises next is: why should we care about political advertisements, especially ones that may not directly impact our lives?
The answer lies in the bigger picture these incidents paint. It tells us two things: first, that Facebook is biased in the actions it takes, and second, that it does not seem to care much about the safety and civil rights of its users. Beyond that, misinformation spread through such ads can create false ideas about vulnerable groups and minorities that could ultimately be damaging to them.
This understanding helps explain some other issues Facebook has faced for a long time, such as those regarding content that is psychologically damaging or sensitive in nature. Facebook groups, when big enough, can be ecosystems of their own. However, Facebook refuses to police the content of such groups and focuses only on the news feeds of its users. This often allows inappropriate, violent and/or harmful content to remain and circulate in a way that can have direct and long-lasting impacts on those who come across it. The fact that a picture that involves no one else and is posted with full consent may be removed, whereas sensitive material exploited and passed around without the owner's consent may remain for days even after being reported, speaks volumes about the kind of environment being harboured within the platform.
Privacy and Technical Issues
In an investigation carried out by the Electronic Frontier Foundation (EFF) in 2010, it was found that personal information on a Facebook profile was accessible to almost anyone, even if it was not visible to the public. The foundation categorised the data leakage channels as "connections" and "instant personalisation". A "connection" is established when a user clicks the "Like" button on the Facebook site or app, or an embedded Like button on the page of a product or service. Facebook then treats that data as public information, and the user's public information becomes visible on the service provider's Facebook page. This breach of privacy is not even half of the equation.
EFF's research concluded that failing to opt out of Facebook's Instant Personalisation services results in instant data leakage. Facebook may use users' personal information to provide them a more personalised experience on the internet, monitoring their interests, such as the brand or type of clothing they tend to like, or specific pages they have been frequenting. The next browser ads they come across are then more likely to be from a fashion house, or whatever else the algorithm deems similar. Facebook has a number of sites under its pilot programme, such as Microsoft Docs, Pandora, etc. As soon as users enter these sites, Facebook gains access to their personal information, including but not limited to their name, gender, location and who they are friends with on Facebook. Basically, everything falling under what Facebook considers "public information".
One might think that opting out automatically makes them safe. Unfortunately, that is not the case, as a person's Facebook friends who may not have opted out of Instant Personalisation can give away that personal information as well. To be completely on the safe side, you have to block the applications individually. In a nutshell, your data is out in the open, giving brands exactly the data they need to know your needs. It was found that Facebook's top-rated apps, such as those by Zynga and Lolapps, were transmitting heaps of data to internet tracking companies. Your data is being treated as nothing but a product, ready to be marketed. And yes, this happens in Bangladesh, too.
There have been a few instances where Facebook has turned a blind eye to data breach risks. In 2018, Facebook suffered its biggest ever data breach. Data security experts had been warning Facebook since as early as December 2017 about a flaw, but the company neglected it, and it later resulted in a breach in which personal information from 29 million user accounts was compromised. The data included dates of birth, log-in devices, location data and search queries, among others. The breach also gave the hackers access to data such as timeline posts, group memberships and friend lists of some 400,000 additional users.
Hackers exploited a flaw in Facebook's "access token system". Basically, access tokens are a method Facebook uses to provide seamless access to external web apps, granting temporary, secure access to Facebook APIs (Application Programming Interfaces). You are more familiar with this as the simple "Sign in with Facebook" feature on different apps and pages. These tokens, however, did not have any expiration date on them, and this is what the hackers exploited in the first place. The company could have invalidated these tokens or patched the loophole, and either step would have eliminated such risks.
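The role an expiry date plays in limiting this kind of attack can be sketched with a toy example. This is purely illustrative Python, not Facebook's actual token code; the function names and the one-hour lifetime are assumptions made for the sketch:

```python
import secrets
import time

# Assumed lifetime for the sketch: one hour.
TOKEN_TTL_SECONDS = 3600

def issue_token(ttl=TOKEN_TTL_SECONDS):
    """Issue a random token paired with the timestamp at which it expires."""
    return secrets.token_hex(16), time.time() + ttl

def is_valid(expires_at):
    """Honour a token only while its expiry still lies in the future."""
    return time.time() < expires_at

token, expires_at = issue_token()
print(is_valid(expires_at))        # a fresh token is accepted
print(is_valid(time.time() - 1))   # an expired token is rejected
```

The point of the expiry check is that a stolen token goes stale on its own; a token with no expiration, as described above, stays usable by an attacker until someone actively revokes it.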
Facebook was clear in its statements after the breach, saying the social media giant had no knowledge of any such risk. However, some Facebook employees have since said that the risk of such a breach was well known and could have been averted. Since the breach, Facebook has vowed to make tokens with no time limit inaccessible to the public.
Also in 2018, the Facebook-Cambridge Analytica scandal broke, in which millions of Facebook users' personal data was harvested by Cambridge Analytica without the users' consent. The data was collected through a questionnaire app and used to build psychological profiles of users. Personal data of the users' friends was also compromised.
Facebook's controversial approach to such scandals is easier to understand when its former motto, "move fast and break things", is brought into perspective. This mentality has influenced not only Facebook's R&D but also how it handles data breach scandals. Future scandals seem inevitable, and only time will tell whether Facebook has learned from its mistakes or whether the trend continues.
Furthermore, Facebook has often struggled to review non-English posts, frequently removing them outright. It does not have an adequate support system to genuinely read such content and make informed decisions. The organisation also has poor customer service, as the vast majority of users simply cannot contact it. Even those who do are directed to support pages that are redundant and outdated, leaving them at a dead end. Facebook's post review process is driven by AI and definitely needs improvement.
Despite earning billions of dollars in revenue, Facebook suffers a number of outages and noticeable downtime. The user experience is sometimes flawed, with pages taking longer to load due to the enormous strain on its servers, and sometimes becoming inaccessible altogether.
Undoubtedly, Facebook is the most used social media platform, and a large number of people globally rely on it as their primary source of news. With Facebook's weak fact-checking, it has become a perfect mishmash of facts and misinformation. With fewer moderators working due to the pandemic, Facebook cannot monitor all the incoming data, and more of the responsibility has been handed to AI. The AI itself is not perfect; it is ever-evolving and needs large datasets to learn to distinguish facts from misinformation.
At this point, it can safely be said that Facebook has a lot of problems and not nearly enough measures to truly tackle them. Its intention to genuinely work on these issues seems sketchy, or fragmentary at best. The questions to ask, however, are whether these problems reflect the need for a social media landscape not dominated by a single entity, and whether all hope is lost for Facebook.
Osaman is a curious mind always wondering about AI, simulations, theoretical physics and philosophy. To discuss nerd stuff DM him on www.fb.com/osaman.binahmed
Syeda Afrin Tarannum would choose 'The Script' over 'G-Eazy' any day. Continue ignoring her taste in music on afrintara@gmail.com