Fake news will get exponentially worse with Artificial Intelligence



By Karthik Krishna and Giuliano Meroi

Facebook CEO Mark Zuckerberg started 2018 with a message admitting that Facebook currently
makes too many errors when it comes to preventing misuse of its tools. In the year before that
message, Facebook went from ridiculing concerns about being exploited during the 2016 U.S.
presidential election as a "pretty crazy idea" to sending its General Counsel, Colin Stretch, to
testify under oath before the U.S. Senate and House Intelligence Committees and publicly
acknowledge the role of foreign interference. There is no hiding the fact that the 2016 election
was an event in which the most powerful country on the planet got "nuked" with information
payloads on networks and platforms that American companies built, and that America failed to
defend the integrity of its own democracy.

The overblown and overhyped Silicon Valley narrative that "borderless" technology deployment
to digitize everything and hyper-connect everyone is making our world a better place now faces
a brutal reality on the ground. Rep. Ro Khanna, a Democrat from California, hits the nail on the
head, saying that if Silicon Valley took credit for the Arab Spring movement, then it should
accept responsibility when its tools are weaponized against social harmony.

The heartening sign is that leaders from within Silicon Valley are now publicly questioning the
impact of their "innovations." Here is John McAfee, founder of one of the world's leading
cybersecurity companies – McAfee – expressing his concern that every single political attitude of
virtually every adult in America is in the databases of companies like Google and Facebook, with
their hacking being almost a certainty. Here is one of Twitter's early investors making a strong
argument for innovators to take responsibility for their creations. Maybe it is too little, too late,
but Stanford (the mecca of the Silicon Valley narrative) is taking tiny steps to bring the perils of
a hyper-connected world to the forefront. The notion that there COULD be a negative side to a
hyper-connected world may have been treated as a joke in Palo Alto not too long ago. Widely
recognized anchors from journalistic news media like Maria Bartiromo (Fox Business Network)
and Tucker Carlson (Fox News) have highlighted the need for tech companies to be regulated
because they cannot be trusted not to distort the free flow of information on the internet. Based
on some serious investigative work, Cable News Network (CNN) calls the current state of the
internet a tragedy of the commons. Again, all this is simply not enough, because the overhyped
tech train originating from Silicon Valley still seems to be screaming past these warning signs
and heading in the wrong direction with trillions of dollars riding on it.

Let us address the impact of fake news on society. To solve this problem effectively, we need to
define it clearly first. Here is the Socio-Tech Academy's definition of the real problem:
"targeted distribution of 'junk news' on clickbait platforms." Addressing one portion in isolation,
say, determining whether a piece of information is real or fake, is a futile exercise. Facebook
itself has admitted it has no idea how to stop its users from viewing posts tagged as "fake."
Two big points we seem to be missing are that (a) it is not illegal to communicate a piece of
information that cannot be verified as 100% true, and (b) blindly trusting Facebook to decide
what information we get to see and, more importantly, what we do not see, could have
dangerous repercussions.

Now for the main point of this article: if you believe we currently have a real problem with the
unchecked spread of fabricated material, then here is some bad news – Artificial Intelligence (AI)
is going to make it far, FAR worse. Some well-known fabricated material came in the form of
memes, like the one claiming that Pope Francis endorsed Trump for U.S. President (untrue). All
it takes for the successful propagation of such deliberately false material is a basic photo-editing
tool to imprint a message and access to platforms that freely allow its targeted distribution.
Now, with AI-enhanced capabilities, we may be on the verge of democratizing the ability to
create fabricated audio/video material that is hard to differentiate from authentic content.
Greg Allen, author of a report published on behalf of the U.S. Intelligence Advanced Research
Projects Activity (IARPA), states that with AI-enhanced audio/video forgery, people will
struggle to know whom and what to trust. Carefully fabricated audio/video will have
devastating impacts at levels ranging from the personal to the broader social. Think of the
impact of a fabricated, yet totally believable, viral video of a spouse cheating, a politician taking
a bribe, or a soldier with a U.S. flag spilling pig blood near a mosque in the Middle East. As
IARPA categorically states, this technology will transform the meaning of evidence and truth in
domains across journalism, government communications, testimony in criminal justice, and, of
course, national security.

Advances in technology are on a trajectory of creating, and more importantly, enabling
widespread access to, tools that could shake the foundations of social harmony across the world.
Our plea is to address such concerns proactively rather than after the fact.

January 8th, 2018 | Artificial Intelligence, News, Socio-Tech Accountability


  1. Ron Cole January 22, 2018 at 5:11 pm - Reply

    While your concern over the potentially negative impact of AI on our society is spot on, where are your possible solutions to this issue? Can we require companies like IBM/Watson or Intel/Saffron to build required user identification into their software so that items can rapidly be traced to their source? There needs to be concise education on the
    concerns over AI, accompanied by solutions.

    • Socio-Tech January 22, 2018 at 5:37 pm - Reply

      Thank you for your comment, Ron. We need social awareness and education for these concerns to be taken seriously, and that is our focus right now. Regarding possible solutions, given the scope of the issue, there has to be a more transparent public discourse, with participation from diverse stakeholder groups, on the best way forward. A couple of upcoming events you may find relevant: Data on Purpose at Stanford, and Regulating Computing and Code hosted at the University of Colorado Boulder.
