2,000 developers will meet later this month in Vienna to learn about the latest developments in Artificial Intelligence. It is a great honor for us to take part in this amazing event and share insights from our work on using AI to help fight disinformation.

Ahead of the event, Or Levi, AdVerif.ai's founder, was interviewed for the WeAreDevelopers Magazine:

1. How has fake news gotten so out of control?

To address this question, we need to consider the motivation of fake news purveyors.

Some purveyors are politically motivated. These include state-backed actors who were emboldened by the alleged intervention in the 2016 US elections and are looking to follow the same playbook. A recent report by researchers at Oxford University found that the number of countries with political disinformation campaigns more than doubled to 70 in the last two years.

Other purveyors are economically motivated. These include opportunists who take advantage of social media to drive traffic to their websites and monetize it via advertising. Here, the structure and algorithmic design of social media have played a big role. The Facebook feed, for example, rewards engagement and sensationalism, and this, together with a climate of polarized political discourse, has resulted in an explosion of fake news stories.

With AdVerif.ai, our mission is to cut the cord between fake news websites and advertising money, to help fight the spread of misinformation. While it is not possible to manually check every website or story, we believe that technology can help, similar to how the spam problem, for example, was addressed.

2. How can developers help fight the spread of misinformation online?

For developers working on AI research, consider also the unintended or malicious applications of your software. For example, a recent work by Microsoft researchers suggested that automatically generating fake comments on articles can help trigger discussion and engagement among readers, but failed to recognize the possible malicious application of such technology to sway public opinion.

For developers working on content recommendation, the challenge is to find a trade-off between optimizing for engagement (which is the cornerstone of most advertising business models) and the quality of content. Consider how you can provide users with a balanced perspective that encourages critical thinking.

Moreover, if you are passionate about developing technology to fight misinformation, we invite you to join the AdVerif.ai research group to advance the research in these areas and develop practical tools. For example, in our latest work (which I will present at the WeAreDevelopers event in Vienna), we developed a method to automatically identify the nuances between fake news and satire (a protected form of speech) using machine learning and semantic Natural Language Processing.
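
As a toy illustration of this kind of text classification (not the method from our work, which relies on richer semantic features and much more data), the sketch below trains a simple TF-IDF plus logistic regression baseline on a handful of made-up example headlines using scikit-learn:

```python
# Toy sketch: a baseline "fake news vs. satire" text classifier.
# This is NOT the method from our work; it only illustrates the general idea
# of learning a decision boundary from labeled text. The example headlines
# below are invented purely for demonstration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled examples: 1 = fake news, 0 = satire.
texts = [
    "Scientists confirm miracle cure hidden by the government",
    "Celebrity secretly replaced by body double, insiders claim",
    "Local man heroically finishes entire to-do list, nation stunned",
    "Area cat declares independence from household, sources say",
]
labels = [1, 1, 0, 0]

# TF-IDF turns each headline into a sparse word-weight vector;
# logistic regression then learns a linear separator over those features.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# Score an unseen headline: estimated probability that it is fake news rather than satire.
new_headline = ["Government admits moon is a hologram, report claims"]
print(model.predict_proba(new_headline)[0][1])
```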

3. What do you think will be the future of fake news?

A recent report by Gartner predicted that by 2022, people in mature economies will consume more false information than true information. While this might turn out to be too pessimistic, I believe the challenge of fake news is probably not going to disappear, especially considering that the technology to generate fakes continues to progress at a faster pace than the technology to detect them.

For example, Deepfakes today still have visible artifacts that make them detectable, but as the technology improves, tools to create fakes will become better and better at generating realistic images and videos that might not be detectable to the naked eye.

To address this challenge, first and foremost we will need to be more critical as consumers of information. In addition, while AI cannot fully replace human judgement, there are specific tasks where it can help: flagging fakes and providing readers with more context.
