In a recent communication to the Election Commission of India, WhatsApp stated it would block or disable its chat services on phone numbers that originate or forward fake news or objectionable election-related content. The service provider, however, requires the Election Commission to share screenshots of the objectionable content or fake news.
The move comes after social media operators recently agreed to follow a voluntary code of ethics. Under this code, the operators will remove ‘problematic content’ from their platforms in a bid to enhance ‘transparency in political advertising.’
WhatsApp’s latest action also addresses concerns raised by industry observers about how the Facebook-owned platform would tackle fake news, given its inability to remove messages.
WhatsApp had earlier said it was unable to curb the spread of fake news because it has no access to the content shared on its private chat service, which is secured by end-to-end encryption.
In addition, WhatsApp said it recently put in place measures to strengthen privacy on its platform. The company said on April 3 that its updated privacy settings require people to seek consent from users before adding them to chat groups. These actions came in response to privacy concerns the government had raised over more than six months.
WhatsApp also said it has deployed advanced machine learning technology to identify and block sources of misinformation. The technology reportedly works around the clock to detect and ban accounts engaged in bulk or automated messaging.
“Through this approach, we ban two million accounts from WhatsApp per month, 75% of them without a recent user report. We published a white paper on the impact of these efforts,” the company added.