By Hazel Baker
For almost every globally significant news event of 2019 – and there were many – misleading video and pictures circulated on social media in the aftermath.
This year the Reuters newsroom discovered, among many other examples, that a harrowing video shared during Cyclone Idai had been shot in Libya five years earlier. We watched videos go viral after Brazil’s Brumadinho dam collapse that were from different, unrelated incidents. And during the conflict between India and Pakistan in February this year, we saw numerous false clips circulating, including one taken from a video game.
As social media has evolved into a critical part of newsgathering, it has also become a minefield where misinformation travels rapidly. Newsrooms have had to adapt, developing journalists’ verification skills so they can separate powerful, authentic eyewitness media from material that distorts the truth.
Into this landscape comes a new threat: so-called “deepfakes”, a form of synthetically generated media. Interest in deepfakes has grown dramatically this year. Google Trends data shows that searches for the term peaked in June, around the time a deepfake video featuring Facebook chief executive Mark Zuckerberg was released. As we close this year, the number of searches for “deepfake” is five times higher than at the end of 2018.
Despite this growing discussion, deepfakes remain a hypothetical threat for many, as the technology to make these types of manipulated videos is not yet widespread. Cybersecurity start-up Deeptrace counted around 14,600 deepfakes online as of August 2019, but the vast majority are on adult entertainment sites – highly damaging to the individuals involved, but not to the news ecosystem.
Since Reuters carried out a newsroom experiment on deepfakes in March, we’ve seen an incredible appetite from our own journalists and many others worldwide to learn more about the technology behind these new forms of manipulation, in order to better prepare for what may lie ahead. This is encouraging – because if there is one key takeaway from our research this year, it’s that the more familiar Reuters journalists were with emerging forms of synthetic media, the more readily they were able to identify potential red flags and consider a range of possible scenarios.
That’s why we decided to partner with the Facebook Journalism Project to produce an in-depth course on manipulated media. It includes explanations and examples of new forms of synthetic media, along with an assessment of the full range of ways visual material can mislead. It also challenges participants to place themselves in a breaking news situation and consider the steps they can take to establish the facts around the pictures and video they obtain.
We know that visual misinformation is very much a global problem, and we have therefore produced this course in four major languages, with more to follow. We hope it will be informative and thought-provoking, and that it will help lay the groundwork for newsrooms to build their own verification strategies.
Media contact:
deepal.patadia@thomsonreuters.com