Twitter says it will remove images of people posted without consent

Twitter CEO Jack Dorsey during a past function. PHOTO / CNN

  • Under its current policy, the social media giant prohibits the publication of people's private information, including addresses, phone numbers, identity documents and medical records.
  • Now, it says it has added "private media" to the list, because the sharing of such material could be used to "harass, intimidate, and reveal the identities of individuals."
  • Before removing the image or video, Twitter said, it would require a first-person report.

Twitter has updated its privacy policy so that it can remove images of people that have been posted without their consent, the company said in a blog post Tuesday.

Under its current policy, the social media giant prohibits the publication of people's private information, including addresses, phone numbers, identity documents and medical records.

Now, it says it has added "private media" to the list, because the sharing of such material could be used to "harass, intimidate, and reveal the identities of individuals."

"Sharing personal media, such as images or videos, can potentially violate a person's privacy, and may lead to emotional or physical harm," the company said.

"The misuse of private media can affect everyone, but can have a disproportionate effect on women, activists, dissidents, and members of minority communities," it added.

Before removing the image or video, Twitter said, it would require a first-person report or a report from an authorized representative to establish whether or not the individual had consented to it being shared.

Once Twitter established that the personal media had been shared without permission, it would then proceed to remove it from the platform, the company said.

It added that the policy changes do not apply when there is public interest at stake, or in an emergency situation.

"This policy is not applicable to media featuring public figures or individuals when media and accompanying Tweet text are shared in the public interest or add value to public discourse," the company said.

"We recognize that there are instances where account holders may share images or videos of private individuals in an effort to help someone involved in a crisis situation, such as in the aftermath of a violent event, or as part of a newsworthy event due to public interest value, and this might outweigh the safety risks to a person," it added.

The new measures, which went into effect globally on Tuesday, were met with criticism from users who argued that the changes had been introduced too abruptly and could result in undue censorship.

The company subsequently clarified the changes in a series of tweets, adding that images and videos showing public events, including mass protests and sports events, would largely not violate the policy.

"Context matters. Our existing private information policy includes many exceptions in order to enable robust reporting on newsworthy events and conversations that are in the public interest," the company said.

"We will take into consideration whether the image is publicly available and/or is being covered by journalists -- or if a particular image and the accompanying Tweet text adds value to the public discourse -- is being shared in public interest or is relevant to the community," the company added.

However, some users continued to voice their concerns, questioning the ambiguity of the policy updates.

"Privacy and data protection laws typically protect public interest reporting, Twitter @Policy. How do you propose to ensure the same balance in your new policies?," one user tweeted.

"So does Twitter's new policy effectively mean that candid street photography is no longer allowed? This demonstrates a major problem with blanket, well intentioned policies for removal," tweeted another.

Twitter's move comes as social media companies face increased scrutiny over how they safeguard users.

In September, Instagram announced it was hitting the brakes on plans to develop a version of its product aimed at children under 13, following revelations that the platform can have a harmful impact on kids.

Similarly, Meta, the parent company of both Facebook and Instagram, said in November that it had plans to curb advertisers' targeting of users based on certain sensitive categories.