According to a report published by WhatsApp, more than 2.3 million accounts originating in India have been banned for malicious activity.
A representative from WhatsApp commented:
“In accordance with the IT Rules 2021, we’ve published our report for the month of October 2022.
“As captured in the latest Monthly Report, WhatsApp banned over 2.3 million accounts in the month of October.”
The decision was made in accordance with the amended IT Rules 2021, which place greater compliance obligations on social media platforms.
Between October 1 and October 31, 2022, 2,324,000 WhatsApp accounts were banned, and 811,000 of those were banned proactively, before any user complaint had been received, according to the company.
The messaging service, which has more than 400 million users in India, received 701 grievance reports in October.
However, only 34 of these reports were subsequently actioned by grievance officers.
Complaints from users are routed to an India-specific Grievance Officer at firstname.lastname@example.org.
The officer can also be contacted via post.
Under the amended IT Rules 2021, large digital and social media platforms with more than five million users must publish monthly compliance reports.
Meanwhile, as part of a broader push towards an open, safe, trusted and accountable Internet, the Ministry of Electronics and IT has announced several amendments aimed at protecting the rights of “Digital Nagriks”.
Under the amendments, intermediaries are now legally required to take reasonable measures to prevent users from uploading harmful or threatening content.
Why do WhatsApp accounts get banned?
In essence, WhatsApp divides banned accounts into two groups:
- Accounts banned in response to complaints submitted by users in India through WhatsApp’s grievance channels.
- Accounts banned proactively in India through WhatsApp’s prevention and detection measures, for violating Indian law or WhatsApp’s terms of service.
WhatsApp uses a range of tools and techniques to combat abuse on the platform.
The company says prevention is its main priority, because it believes it is better to stop harmful behaviour before it occurs than to detect it after the damage has been done.
Abuse detection operates at three points in an account’s lifecycle: at registration, during messaging, and in response to negative feedback, such as user reports.
These automated systems are supplemented by analysts who assess critical cases and help improve the effectiveness of detection over time.
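The three-checkpoint pipeline described above can be sketched as a simple decision function. Everything in this sketch is a hypothetical illustration: the stage names, signal names, and thresholds are invented for clarity and do not reflect WhatsApp's actual system.

```python
from enum import Enum, auto

class Stage(Enum):
    """Illustrative lifecycle checkpoints for abuse detection."""
    REGISTRATION = auto()       # checks at sign-up
    MESSAGING = auto()          # checks while the account sends messages
    NEGATIVE_FEEDBACK = auto()  # checks triggered by user reports

def check_account(stage: Stage, signals: dict) -> str:
    """Return a moderation decision for an account at one checkpoint.

    All signal names and thresholds below are hypothetical examples.
    """
    if stage is Stage.REGISTRATION:
        # e.g. many sign-ups from the same device in a short window
        if signals.get("signups_from_device", 0) > 5:
            return "ban"
    elif stage is Stage.MESSAGING:
        # e.g. a bulk-messaging rate typical of spam accounts
        if signals.get("messages_per_minute", 0) > 100:
            return "ban"
    elif stage is Stage.NEGATIVE_FEEDBACK:
        # e.g. many distinct users reported the account; a human
        # analyst then reviews the case, mirroring the article's
        # point that analysts supplement the automated systems
        if signals.get("user_reports", 0) > 10:
            return "review"
    return "allow"

print(check_account(Stage.MESSAGING, {"messages_per_minute": 250}))
```

The "review" outcome at the feedback stage reflects the role of human analysts: automated signals flag an account, but a person makes the final call in ambiguous cases.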