
WhatsFarzi: Analyzing Fake / Manipulated content / Misinformation on WhatsApp

As mobile coverage reaches the most remote areas and data packages become cheaper every day, over 4 billion people across the world today have access to the Internet, and more than 56% of web traffic comes from mobile devices. This is especially true in developing countries like India, where more than 430 million of the total online population of 462 million access the Internet through mobile phones. WhatsApp is the most popular personal messaging service in India, with more than 200 million users, the highest of any country. One of the biggest side effects of WhatsApp’s deep penetration is its misuse as a medium to disseminate misinformation. In the last year, more than 70 incidents of misinformation spread over WhatsApp have led to the deaths of over 30 people across India. In the landmark case of the Jogulamba Gadwal district of Telangana, where the literacy rate is still around 50%, doctored images and videos portraying innocent citizens as alleged child kidnappers went viral on WhatsApp, leading to widespread fear and unrest among the locals.

Until a few years ago, Online Social Networks (OSNs) like Facebook and Twitter were among the primary sources of news, but in recent years, news consumption over WhatsApp has tripled, overtaking Twitter in many countries. The spread of misinformation, especially in the context of news, can lead to misallocation of resources during time-sensitive situations like terror attacks and natural disasters, misaligned business investments, and polarized elections across the world. Various attempts have been made to understand the ‘how’ and ‘why’ of misinformation dissemination on social networks from both social science and technical perspectives. Our group has done significant related work on how social media can be leveraged to predict the credibility of information. Gupta et al. studied 14 high-impact, time-sensitive events, identified features, and developed a supervised Machine Learning (ML) and relevance feedback approach to calculate a credibility score for tweets. We analyzed the spread of misinformation during Hurricane Sandy and the Boston blasts; using this understanding, we developed TweetCred, a Chrome browser extension and a REST API. TweetCred has analyzed more than 16 million tweets to date and produced credibility scores for them. Building upon the success of TweetCred, Dewan et al. extended TweetCred to Facebook. We analyzed the 2015 Paris attacks based on the visual theme, embedded text, and sentiment of images. We also identified malicious posts on Facebook in real time with an accuracy of over 80%, using class probabilities obtained from two independent supervised learning models based on random forest classifiers. Finally, using these models, we built a Chrome and Firefox browser plugin called Facebook Inspector (FbI). FbI has processed more than 10 million posts on Facebook over the past 3 years.
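As an aside, the FbI-style idea of fusing class probabilities from two independently trained models can be sketched in a few lines. This is a minimal illustration, not the published pipeline: the feature split (text vs. metadata) and the simple averaging rule are our assumptions here.

```python
# Illustrative sketch: combine class probabilities from two independent
# random forest models, as in the FbI-style approach described above.
# The feature split and fusion rule are assumptions for this example.
from sklearn.ensemble import RandomForestClassifier

text_model = RandomForestClassifier(n_estimators=100, random_state=0)
meta_model = RandomForestClassifier(n_estimators=100, random_state=0)

def fit(X_text, X_meta, y):
    """Train the two models on independent feature sets."""
    text_model.fit(X_text, y)
    meta_model.fit(X_meta, y)

def predict_malicious(X_text, X_meta, threshold=0.5):
    """Average the two models' P(malicious) and threshold the result."""
    p_text = text_model.predict_proba(X_text)[:, 1]
    p_meta = meta_model.predict_proba(X_meta)[:, 1]
    return (p_text + p_meta) / 2 >= threshold
```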

Using our understanding and experience from building technologies like TweetCred and FbI, we have built WhatsFarzi to fight the spread of fake news on instant messaging platforms such as WhatsApp, Telegram, and Hike. We use both textual and visual features to predict the authenticity of the claims present in a message.

To verify textual claims, we use a Knowledge Graph. Given a real article, we first extract all the relevant entities, which correspond to people, organisations, locations, products, etc., and their relations, each comprising a subject, an object, and a relationship between them. These triplets are stored in a Knowledge Graph, which is updated on the fly with real news from credible sources and kept in a database for quick access. A test article submitted by a user goes through the same pipeline. We then validate the relations extracted from the test article against those present in the Knowledge Graph: graph traversal algorithms along with word embeddings are used to find nodes in the graph that are similar to the nodes of the test article, and the relationships of the retrieved nodes are evaluated with similarity measures. The result is returned as a score that gives a fair idea of the authenticity of the test article.

Since people also tend to use images to spread misinformation, we have incorporated an image tampering model in the app, which detects whether a given image has been tampered with and localizes the exact sections of the image that have been doctored. Simplified sketches of both ideas follow below, and the screenshots after them show the app in action.
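To make the text-verification pipeline concrete, here is a simplified sketch. It assumes spaCy for entity/relation extraction, networkx for the graph, and spaCy word vectors for node similarity; the triplet extraction below is deliberately naive compared to what a production system needs, and none of it is taken from the WhatsFarzi codebase.

```python
# Simplified sketch of the knowledge-graph verification idea described above.
# Assumptions (not from the WhatsFarzi codebase): spaCy for parsing,
# networkx for the graph, spaCy word vectors for similarity.
import spacy
import networkx as nx

nlp = spacy.load("en_core_web_md")  # the md model ships with word vectors

def extract_triplets(text):
    """Extract naive (subject, relation, object) triplets per sentence."""
    triplets = []
    for sent in nlp(text).sents:
        subj = obj = None
        for tok in sent:
            if tok.dep_ in ("nsubj", "nsubjpass"):
                subj = tok
            elif tok.dep_ in ("dobj", "pobj", "attr"):
                obj = tok
        if subj is not None and obj is not None:
            triplets.append((subj.text, sent.root.lemma_, obj.text))
    return triplets

def build_graph(trusted_articles):
    """Store triplets from credible news as edges of a directed graph."""
    graph = nx.DiGraph()
    for article in trusted_articles:
        for subj, rel, obj in extract_triplets(article):
            graph.add_edge(subj, obj, relation=rel)
    return graph

def credibility_score(graph, test_article):
    """Match test-article triplets against the graph; return a 0-1 score."""
    test_triplets = extract_triplets(test_article)
    if not test_triplets:
        return 0.0
    matched = 0
    for subj, rel, obj in test_triplets:
        for u, v, data in graph.edges(data=True):
            # Word-embedding similarity instead of exact string matching
            if (nlp(subj).similarity(nlp(u)) > 0.8
                    and nlp(obj).similarity(nlp(v)) > 0.8
                    and nlp(rel).similarity(nlp(data["relation"])) > 0.8):
                matched += 1
                break
    return matched / len(test_triplets)
```

In the real pipeline, the graph is persisted in a database and refreshed continuously from credible news sources, as described above.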
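For the visual side, Error Level Analysis (ELA) is one common technique for localizing doctored regions: a spliced region often recompresses differently from the rest of the image. We use it here purely as an illustrative stand-in, since the app's actual tampering model is not described in detail in this post.

```python
# Illustrative tamper-localization via Error Level Analysis (ELA).
# A stand-in for the app's actual tampering model, which is not public.
import io
from PIL import Image, ImageChops

def ela_map(path, quality=90):
    """Recompress the image and highlight regions with unusual error levels.

    Doctored regions often recompress differently from the rest of the
    image, so bright areas in the returned map are candidate edits.
    """
    original = Image.open(path).convert("RGB")
    buffer = io.BytesIO()
    original.save(buffer, "JPEG", quality=quality)
    buffer.seek(0)
    recompressed = Image.open(buffer)
    diff = ImageChops.difference(original, recompressed)
    # Scale differences so subtle artifacts become visible
    max_diff = max(hi for _, hi in diff.getextrema()) or 1
    return diff.point(lambda px: min(255, px * 255 // max_diff))

if __name__ == "__main__":
    ela_map("suspect.jpg").save("suspect_ela.png")
```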

Figure: Left: the landing screen of the app, where users can enter text to check. Middle: results of the analysis of both the text and the image. Right: analysis of the image tampering.

If you would like to add to the data that we have collected, please forward potentially fake messages (messages you think may be fake) to +91-9354325700. We plan to share the data and our code soon. Stay tuned!

Feel free to download the app from the Play Store. If you are interested in knowing more or helping us with the research, please write to pk [at] iiitd [dot] ac [dot] in. This is a work in progress; we continue to strengthen the model and the app.

Students involved (alphabetical order): Dhruv Kuchhal, Madhur Tandon, Suryatej Reddy Vyalla.

Acknowledgements: Shubham Singh, Karan Dabas


