- Fake news consists of entirely fictional, fabricated stories, created with the aim of generating clicks and, therefore, advertising revenue
- The authors of this nonsense use particularly sensational headlines to attract viewers and shares to boost their traffic
- They also employ 'bots' - or fictional Facebook profiles - to engage with their fake content to boost a story's weight in the algorithm and therefore increase traffic
- Stories can then take on a life of their own because of the echo-chamber effect Facebook's algorithm has on the things we all see on there
- Fake news is NOT when CNN reports something Donald Trump doesn't like
- Fake news is NOT the BBC
It's also worth noting that fake news is not helpful for Facebook's business model. Facebook is so successful because it's a network based on real life. It requires real names and real engagement. If people stop trusting what they see in news feed, the whole system could start to break down.
So how does Facebook plan to fight it?
Well in a blog post, the network outlined three areas:
- Disrupting economic incentives
- Building new products
- Helping people make more informed decisions
To me, the second of these is the most compelling, because it's the area that requires the innovation and the fundamental changes to the way Facebook operates.
It's difficult to see how disrupting economic incentives will achieve much, as the fake news ad revenue is likely coming through different streams. It's true that revenue share is available to publishers through the Facebook audience network - but this is by no means the only option for publishers looking to make click money.
Then helping people make informed decisions is a laudable aim - and involves some interesting projects, like the Facebook Journalism Project and the News Integrity Initiative. But these are broader, longer-term changes.
I will also say here that it's encouraging to see Facebook taking news organisations seriously. For a while it looked like they'd be treated simply as more paying advertisers and would thus be starved of all news feed traffic.
So what new products are they building?
One of the things they are doing is making reporting of dodgy news easier.
This might sound like an easy solution. People report fake news, Facebook removes it. But it's not that simple.
The main problem with fake news is that people don't know it's fake.
For example: an outright fake story might have a hundred shares from credulous readers, and four or five reports from angry skeptics.
Then, a true story might have hundreds of shares simply from interested people, and four or five reports from those who, say, don't like it politically.
How to tell the difference? This is what Facebook will be struggling with.
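The two scenarios above can be put in code. This is purely illustrative: the numbers are invented, and Facebook's real engagement data obviously looks nothing this simple. It just shows why raw share and report counts alone can't separate the two cases.

```python
# Invented numbers: a fake story shared by credulous readers,
# and a true story reported by people who dislike it politically.
fake_story = {"shares": 100, "reports": 5}
true_story = {"shares": 100, "reports": 5}

def report_ratio(story):
    """Fraction of total engagements that are reports."""
    return story["reports"] / (story["shares"] + story["reports"])

# Both stories produce an identical signal, so the ratio alone
# can't tell fake from true.
print(report_ratio(fake_story) == report_ratio(true_story))  # True
```

That's the whole problem in miniature: the counts look the same, so Facebook needs richer signals than shares and reports.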
It's a numbers game
Another issue Facebook is dealing with is scale.
In most programming (and I should emphasise here that I'm a basic-level programmer, certainly not an expert) it's relatively easy to perform operations on known numbers - it's the unknowns that become tricky.
So, one, two, three, or a hundred fake news stories are easy enough to take down one by one. But what about n fake news stories? This is what you are dealing with when they are constantly being uploaded all day every day.
And it requires systems that can understand an incredibly complex range of signals.
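The known-versus-unknown-n point can be sketched in a few lines. Everything here is hypothetical, including the `looks_fake` check, which stands in for that "incredibly complex range of signals":

```python
# With a known, fixed list, you can handle stories one by one.
known_stories = ["story_1", "story_2", "story_3"]
for story in known_stories:
    pass  # review and take down individually

def looks_fake(story):
    """Hypothetical stand-in for a real classifier."""
    return "fake" in story

def moderate(stream):
    """With an unbounded stream, the loop body must encode a
    general rule that works for any n, not a per-item decision."""
    removed = 0
    for story in stream:  # n is unknown and keeps growing
        if looks_fake(story):
            removed += 1
    return removed

print(moderate(iter(["real news", "fake scoop", "fake scandal"])))  # 2
```

The hard part isn't the loop, of course. It's writing a `looks_fake` that's actually right.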
One of the signals they are looking at, for example, is a person's likelihood to share a story after reading it. They reason that fake news is more likely to reveal itself post click-through, so the majority of its shares would come from people who haven't actually read the story.
Take it from someone who has worked on social media at two major newspapers: sharing without actually reading an article happens all the time.
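The share-after-reading signal can be sketched as a ratio. The event fields here are invented for illustration; Facebook hasn't published what its real signals look like.

```python
# Hypothetical engagement events for one story.
events = [
    {"user": "a", "clicked": True,  "shared": True},
    {"user": "b", "clicked": False, "shared": True},   # shared unread
    {"user": "c", "clicked": False, "shared": True},   # shared unread
    {"user": "d", "clicked": True,  "shared": False},
]

shares = [e for e in events if e["shared"]]
unread_shares = [e for e in shares if not e["clicked"]]

# A high fraction of shares with no click-through is, on
# Facebook's reasoning, a warning sign for fake news.
unread_fraction = len(unread_shares) / len(shares)
print(round(unread_fraction, 2))  # 0.67
```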
So Facebook will now be grappling with that, too. But it's a task I expect them to do well. After all, writing smart, complex algorithms is what has got them to where they are today. They'll have some of the best minds on the planet taking on the job.
Working with partners
Working with major news organisations to fact-check stories is another method they are using - and is similar to what Google is doing with its own fact-check roll out.
These are both great ideas and should do a lot of good. My only concern is that part of the problem is that people are coming to trust fake news sites MORE than the mainstream media. So what if they simply don't care what Associated Press says?
What about the bots?
The bots - or fake profiles sharing fake news - are under constant scrutiny. Facebook looks at the names, the profile pictures, and a huge variety of ways in which they are related to real life profiles to determine which profiles are real and which ones aren't.
But it's an arms race. And, as I said above, it's also a question of scale. It probably isn't that difficult to find out whether one individual profile is fake. But determining whether an unknown number are fake when compared to an unknown number of real ones - by using a huge number of different variables - is another question entirely.
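One common way to combine many weak signals like these is a simple weighted score. To be clear, the signal names and weights below are entirely invented; this is a sketch of the general technique, not of anything Facebook has described.

```python
def fake_profile_score(profile):
    """Sum integer weights for each suspicious signal a profile
    triggers. Signals and weights are hypothetical."""
    signal_weights = {
        "generic_name": 3,      # name matches a common bot template
        "stock_photo": 4,       # profile picture appears elsewhere online
        "no_real_friends": 3,   # few ties to established real profiles
    }
    return sum(w for sig, w in signal_weights.items() if profile.get(sig))

suspect = {"generic_name": True, "stock_photo": True, "no_real_friends": False}
print(fake_profile_score(suspect))  # 7
```

A profile scoring above some threshold would then be flagged for review. The arms-race problem is that bot-makers learn the signals and adapt, so the weights and the signals themselves have to keep changing.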
Building systems that recognise real from fake is tough. And it's sometimes easy to forget that social media is a brave new world. Many of the systems they use have had to be invented from scratch.
One thing is for sure - Facebook has a huge stake in trust. And while the chase for engagement sometimes pushes it towards maximum engagement at the expense of all else, they'll know that becoming known simply as bullshit merchants is very bad for business. So Zuckerberg and co will be working incredibly hard to be as reliable and truthful as possible. I expect them to come out on top.
This blog post first featured on schmocial.org.
-- This feed and its contents are the property of The Huffington Post UK, and use is subject to our terms. It may be used for personal consumption, but may not be distributed on a website.