Since the election of Donald Trump as President, the term "Fake News" has become more common in both social media and traditional media. For many of us, it has become tiring, as anyone and everyone uses "Fake News" to derail any meaningful argument that opposes their point of view.
While Fake News can be seriously annoying, it can also be disruptive to our democracy and to how people collect their information. When doubt is cast on the sources available, people can develop a sour taste and avoid researching important topics that affect how they vote and think. Fake News, as I have mentioned in previous posts, is nothing new, but it poses a toxic threat in a world of fast internet connections and instant answers.
Given the importance of social media in our everyday lives, it makes sense that Google and Facebook have a responsibility to take action against Fake News. More importantly, for the sake of their own reputations, they cannot afford to become platforms that directly contribute to the spread of misinformation and hate speech.
In March of this year, it was announced that both Facebook and Google had decided to start policing their platforms in order to tag and remove fake or disputed articles. At first, I was wary of the direction both companies were taking, since they were essentially proposing to tell users which information is valid and which is not. While this may seem innocent and proper, I do not believe anyone should surrender their ability to determine what is real simply by letting a large corporation tell them what is in fact true.
Many of us are adults and should have the ability to discern misleading information from valid information by searching the web or simply discussing the topic with others.
If Facebook wants to keep making profits off our interests and the topics we post, it needs to ensure that those with a motive to spread misleading information cannot use its advertising as a vehicle to do so.
A simple approach would be to ensure that users have the tools to isolate suspect material and quickly confirm whether the information is actually valid. Information moves quickly in our modern world, so if something looks suspicious, a user should be able to simply highlight the headline of an article and right-click to run a quick web search for verification.
By letting people install a widget, much like the ones they use to block pop-ups, you continue to give them the power to control the information they take in. Google could easily achieve this by building such a widget itself or by partnering with social media outlets to offer some form of instant check at the user's request, as sketched below.
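To make the idea concrete, here is a minimal sketch of what such a widget could look like as a browser extension, assuming Chrome's contextMenus and tabs APIs. The extension name, menu label, and the choice of a plain Google search as the verification step are my own illustrative assumptions, not an existing Google or Facebook feature.

```typescript
// background.ts — hypothetical "quick fact-check" extension sketch.
// Assumes a Manifest V3 service worker with the "contextMenus" permission.

chrome.runtime.onInstalled.addListener(() => {
  // Add a right-click menu entry that appears whenever text is selected,
  // e.g. when the reader highlights a suspicious headline.
  chrome.contextMenus.create({
    id: "quick-fact-check",
    title: 'Fact-check "%s"', // %s is replaced with the highlighted text
    contexts: ["selection"],
  });
});

chrome.contextMenus.onClicked.addListener((info) => {
  if (info.menuItemId === "quick-fact-check" && info.selectionText) {
    // Open a new tab with a web search for the highlighted headline,
    // so the reader can check the claim against other sources themselves.
    const query = encodeURIComponent(info.selectionText);
    chrome.tabs.create({ url: `https://www.google.com/search?q=${query}` });
  }
});
```

The point of keeping it this simple is that the user stays in charge: the extension only opens a search the reader chose to run, rather than having a platform decide for them what is true.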
Luckily, Facebook decided not to continue with its original plan to tell users what is valid, and has instead installed fail-safes that target status updates containing links or keywords that have previously been refuted or reported.
If you discover something that is outright misleading or that promotes hate speech, you simply report it to Facebook; once it has been investigated, it will be flagged to prevent others from accidentally sharing it and spreading the disinformation further.
Throw in a few quick tools from Google, and this system can not only ensure that Fake News finds a quick end but also enable users to continue their personal due diligence when it comes to the information they encounter and their power to influence it.
How do you feel about Fake News? Have you encountered it on a regular basis, and if so, how do you respond?
Do you agree with the approach of letting people police the information they receive, or should a more centralized effort control what information people are exposed to?
-The Political Road Map