The new Editors

Aris Alexis
Dec 4, 2020

A glimpse of how AI could help fight fake news in the future

Meet our new editors

*This article was submitted for the SXSW future writer of the year award in 2016

Could you imagine a world without lies? Furthermore, could you imagine a world in which everything said was an actual fact? What would this mean for society and politics?

In our everlasting quest to explain the universe and interact with the world, we humans gather facts and try to make sense of them. Even for our most trivial interactions we need data as input to perform our actions. If the data is wrong the action will probably be wrong.

Data is everywhere. Internet of Things sensors are starting to gather it at phenomenal rates, and ever-larger schematic knowledge datasets are being assembled and made available (Wikimedia, Freebase).

Most importantly, almost all of humanity’s written material is hosted online whether it was written a while ago or just now.

But much of the material out there is not fact but wrong assumption, based on incorrect data or misinformation. So when you search for something, you will stumble across both trustworthy sources and random trashy blog posts. The more of an expert you are in the subject, the easier it is to curate this information and keep only what is worthwhile. Could we make it so that, once fact A has happened, there are no conflicting sources online about whether A did or did not happen?

The curators

Insightful people and great leaders — in short, people the masses listen to — are, in effect, curators of the facts needed to form an opinion. Their opinion. They serve it to the public, where, of course, everyone is free to judge whether this person has scrutinized it adequately and free to do their own research. But frequently they don't, because of a thing called trust.

And here lies the most significant problem:

When people trust somebody, they accept most of the facts presented to them as true; and then, if the conclusions drawn from those facts are plausible, they must be correct, since the data leads to them.

It is increasingly easy to look up information online, but if there is too much of it, people don't bother. It is the same with feeds: an uncurated social feed is tiring.

AI

All the big tech companies have entered an arms race over machine learning and artificial intelligence. Interestingly enough, these technologies also cover natural language processing (NLP) and the understanding of text and speech. Making a machine understand meaning is quite difficult, but at some point (rather soon, in my opinion) it will improve enough to be usable; and when it does, a whole window of opportunity will open.

Making computers so smart that they can understand meaning and check facts is not a trivial task — don't get me wrong. But I am in the camp that believes AI with human-level intelligence will at some point become reality. What most people don't realise is that it doesn't have to be like a human; it only has to perform certain tasks, such as these, at a similar level to a human. Understanding meaning is very complicated; it is not like beating Kasparov at chess.

How can we use technology to better humanity?

Apart from the obvious medical research, we can and probably will use computers to make Truth appear more often and minimise deliberate lies or misinformed statements.

Technology can help us filter incorrect data out of articles, streams, chats, speeches, and whatever else you can think of, even in real time. Cheap artificial proofreaders for everything.

This will be a game changer.

Imagine articles carrying a fact-correctness percentage. Would they even be published at less than 100%? Could you write wrong facts anymore?
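The percentage idea can be sketched with a toy example. Everything here is an illustrative assumption: the hand-made fact store, its entries, and the exact-match lookup. A real system would need NLP-based claim extraction and verification rather than string matching, but the sketch shows what a "score" over an article's checkable claims could look like.

```python
# Toy sketch of a "fact-correctness percentage" for an article.
# The fact store and its entries are made up for illustration;
# real systems would verify claims with NLP models, not lookups.

FACT_STORE = {
    "water boils at 100 c at sea level": True,
    "the earth is flat": False,
    "deep blue beat kasparov in 1997": True,
}

def normalize(sentence: str) -> str:
    """Lowercase and strip punctuation so lookups are forgiving."""
    return "".join(
        ch for ch in sentence.lower() if ch.isalnum() or ch.isspace()
    ).strip()

def fact_correctness(article: str) -> float:
    """Return the share of checkable claims that are true (0.0 to 1.0).
    Claims absent from the store are skipped, much as a human editor
    skips statements they cannot verify either way."""
    claims = [normalize(s) for s in article.split(".") if s.strip()]
    checkable = [c for c in claims if c in FACT_STORE]
    if not checkable:
        return 1.0  # nothing to dispute
    true_count = sum(FACT_STORE[c] for c in checkable)
    return true_count / len(checkable)

article = "Deep Blue beat Kasparov in 1997. The earth is flat."
print(f"{fact_correctness(article):.0%}")  # → 50%
```

An editor (human or machine) could then gate publication on this number, which is exactly the "less than 100%" question above.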

This could be done manually, of course, but who does it? Traditionally it is up to journalists and their editors to check facts; in the new era of social media, it is just influential people with a lot of followers.

As a bonus, imagine wearing Google Glass (well, in a new format, but still the same idea): while your friend says something you would normally(?) go and check on Wikipedia from your phone (which is super annoying to many people, by the way), the glasses have already picked it up and analysed it.

The difference between human editors, curators, proofreaders, and everything alike is that humans have an inherent bias stemming from emotions.

These machines will (probably) not have that, so when they correct facts there will be no "yes, but". For example, sentences such as "most of the scientific community says that …" would have to stand the scrutiny of the New Editors.

This may all sound to you like an oppressive dystopian future, but if there is one thing I would be willing to be oppressed by, it is absolute Truth seekers.

P.S.

Could you find incorrect data in this piece of text? (Hint: there is a trivial error in the facts).


Aris Alexis

Full stack software developer, startup founder. Mostly interested in radical & futuristic ideas with social impact.