Reporting Abuse to Internet Platforms

[Illustration: Reporting abuse to internet platforms]

When it all gets too much, one option is to report the abuse to the Internet platforms, or at least to block the offending user: while reporting mechanisms do exist, platforms have not always been found to be responsive. On the contrary.

The explanations below of how to report abuse to Facebook, Instagram, YouTube and Twitter are accurate at the time of writing, but these policies and mechanisms change often. By the time you read this, quite a few things may have changed.

If so, some of these changes will hopefully have been for the better!

***The following contains examples of online abuse that readers might find distressing***

Facebook

How to report abuse:

Profiles

  1. Go to the abuser's profile
  2. Under the cover photo, click the […] icon and select 'Report'

Posts

  1. Go to the post
  2. Click the icon in the top right-hand corner of the post and select 'I don't want to see this'

Comments

  1. In the top right corner of the comment, click [...]
  2. Select 'Report' and follow on-screen instructions

Photos and videos

  1. Click on the photo or video to expand it
  2. Click 'Options' in the bottom right

  3. Click 'Report Photo' for photos or 'Report Video' for videos

Messages

  1. Open the message you'd like to report
  2. Click the icon on the top right hand corner called 'Actions'

  3. Click 'Report Spam or Abuse' and follow the on-screen instructions

Pages/groups/events

  • Go to the Page/Group/Event you want to report

  • Click on the icon […] on the Page/Group/Event's cover photo

  • Select 'Report Page' and follow the on-screen instructions

Ads

  • Hover over the ad and click the X in the top right hand corner

  • Choose 'Hide this ad' to report a specific ad, or 'Hide all from...' to hide all ads from that particular advertiser

Questions

  • Click the icon on the top right corner of the question

  • Select 'Report to Admin' or 'Report/Mark as Spam' and follow on-screen instructions

  • To report a reply to a question, click X, then click 'Report'

What counts as abuse:

It’s not clear who checks the reports, but many people receive automated replies saying, “We reviewed the photo/post you reported but found it does not violate Facebook’s Community Standards.”

According to Facebook's Community Standards, the following things count as abuse. Except, well, um, when they are mysteriously seen as okay.

Direct threats

Facebook carefully reviews reports of threatening language that may cause serious harm to public and personal safety, along with threats of theft, vandalism and financial harm.

Most of Facebook's policies are geared towards protecting English-speaking users. If a threat is made in another language or in a different cultural context, Facebook often does not recognise it. When Afghan women have their profiles hacked, Facebook moderators cannot understand the implications: a beer mug may be innocent to American eyes, but in Afghanistan it can carry a life sentence.

Bullying and harassment

Facebook believes in freedom of speech; however, if content is posted specifically to target an individual in order to shame or humiliate them, that content can be removed.

Facebook often doesn't see misogyny as harassment. When Australian writer Clementine Ford uploaded a topless picture of herself, she received a barrage of vicious, threatening, sexually explicit messages. According to Facebook, these messages were acceptable under its guidelines.

Sexual violence and exploitation

"We remove content that threatens or promotes sexual violence or exploitation. This includes the sexual exploitation of minors and sexual assault. Our definition of sexual exploitation includes solicitation of sexual material, any sexual content involving minors, threats to share intimate images, and offers of sexual services."

This policy is applied erratically, and often not to the benefit of women. When a Facebook user posted a graphic, six-minute video documenting the gang rape of a woman by the side of a road in Malaysia, it stayed live on the site for more than three weeks. Repeated requests to take the video down were declined by Facebook moderators. It was removed only after a member of Facebook's Safety Advisory Board flagged it.

A post on Facebook by 'The Kinky Geeks of Chicago' which glorifies child pornography and incest was not taken down, despite multiple requests.

Nudity

Facebook does not tolerate any form of nudity (even for artistic projects or awareness campaigns) for the sake of its global audience, some of whom may be sensitive to this content.

Facebook equates 'nudity' with obscenity, sexual exploitation and pornography. It allows public content that assaults women’s anatomy, but censors content that celebrates women’s bodies. Photographer Ana Alvarez-Errecalde, for example, has had non-sexual photography of women’s bodies repeatedly censored on Facebook.

On the bright side, in 2015 Facebook revised its policies to include photos of women "actively engaged in breastfeeding" and post-surgery breast photos as accepted content. Photographs of paintings and sculptures that depict nude figures are also OK.

Hate speech

"Facebook removes hate speech, which includes content that directly attacks people based on their race, ethnicity, national origin, religious affiliation, sexual orientation, sex, gender or gender identity, serious disabilities or diseases"

Humour, satire or social commentary on these topics is allowed on Facebook. It is not clear how Facebook distinguishes between satirical pages and pages that promote genuine hatred.

In good news, the "controversial humour" label, once profusely applied on Facebook, is no longer in use.

Violence and graphic content

"Graphic images which glorify sadism or violence will be removed from Facebook. If such content is to be shared, Facebook asks users to warn their audience beforehand."

Facebook has been criticised for its refusal to take down content that jokes about rape, which it maintains is acceptable as "humour" or "satire".

Facebook has also been criticised for removing graphic content that raises political awareness, such as the video of Tibetan monks who set themselves on fire to protest against Chinese policies. In 2013, however, a video circulated on Facebook of a man slitting a woman's throat. Although the video was reported as inappropriate by one of Facebook's Safety Advisory Board members, it was allowed back onto the site.

Points to note:

  • Information about the person reporting is not revealed to the offender, and so reporting is anonymous to that extent. However, it is quite likely that Facebook will retain the information about your reports.
  • You can check the status of something you have reported by going to the 'Support Inbox' section.
  • Facebook isn't really the most proactive with reports, and reporting something doesn't guarantee that it will be removed either. However, it will bring it up for review, so there's that.
  • As fun as rallying your entire friends list to report something may be, the number of reports doesn't actually determine whether something will be taken down.
  • Even if Facebook takes something down, in most cases the content is simply re-created anyway, and there aren't any checks in place to stop this.
  • Facebook is also said to have a "team of moderators", usually made up of young people. These fabled moderators spend their time sifting through millions of reports every week, spending less than half a minute on each report. Yeah, not exactly what you'd call "ideal".

Instagram

How to report abuse:

Posts

(via a phone)

  1. Tap the menu icon below the post (••• on iOS and Windows Phone, ⋮ on Android)
  2. Tap 'Report'
  3. Follow the on-screen instructions

(via the web)

  1. Click the ••• icon below the post
  2. Click 'Report Inappropriate Content'
  3. Follow the on-screen instructions

Profiles

(via a phone)

  1. Tap the menu icon in the top right of the profile (••• on iOS and Windows Phone, ⋮ on Android)
  2. Tap 'Report' (iOS and Android) or 'Report for Spam' (Windows Phone)
  3. Follow the on-screen instructions

(via the web)

  1. Click the ••• icon next to their username
  2. Select 'Report User'
  3. Follow the on-screen instructions

Comments

(on iOS)

  1. Tap the comment icon below the photo
  2. Swipe your finger to the left over the comment you'd like to report
  3. Tap the ! icon
  4. Tap 'Report Abuse'
  5. Select an option for why the comment is abusive

(on Android)

  1. Tap the comment icon below the photo
  2. Tap the comment you want to report
  3. Tap the ! icon, then choose 'Delete Comment' and 'Report Abuse'

Direct messages

  1. Tap and hold the message
  2. Select 'Report'

Block profiles

  1. Tap the menu icon in the top right of the profile (••• on iOS and Windows Phone, ⋮ on Android)
  2. Tap 'Block'

Reporting abuse as a non-Instagram user

If you do not have an Instagram account, you can report abuse by filling in an online form. The form can also be used by users who want to report an abuser who has blocked them: they can either fill in the form themselves or ask a friend to do so.

What counts as abuse:

Nudity

Sharing nude, partially nude, pornographic or sexual photos/videos that show sexual intercourse, genitals and close-ups of fully nude buttocks and female nipples is not allowed. Even images that celebrate nudity for artistic or creative purposes are not allowed on the photo-sharing app. However, photos of post-mastectomy scarring and women actively breastfeeding are allowed. Nudity in photos of paintings and sculptures is okay too.

Many of the nude photos that Instagram has taken down are ones that represent the female body in a non-sexual way.

Instagram once suspended the account of Samm Newman, who posted a selfie of herself in a bra and boy shorts to highlight her weight issues and promote a body-positive image. There are pictures of women in bikinis all over Instagram, yet their accounts do not get suspended. Instagram later reinstated her account, apologising and stating that it had wrongly removed her picture.

Photos of men posing partially nude are not in violation of Instagram’s policies.

Credible threats and hate speech

Content that targets private individuals to degrade or shame them, personal information meant to blackmail or harass someone, and repeated unwanted messages are not allowed.

This means that it's not OK to encourage violence or attack anyone based on their race, ethnicity, national origin, sex, gender, gender identity, sexual orientation, religious affiliation, disabilities, or diseases. When hate speech is being shared to challenge it or to raise awareness, it may be allowed. In those instances, you are expected to express your intent clearly.

Serious threats of harm to public and personal safety aren't allowed either. This includes specific threats of physical harm as well as threats of theft, vandalism, and other financial harm.

In July 2015, Sandra Bland was found dead in her jail cell in Texas after being arrested for allegedly assaulting an officer at a traffic stop. Bland had been vocally outspoken about racial prejudice on her social media accounts. Her name started trending on social media, including Instagram, as #SandraBland. As soon as people started abusing the hashtag by posting racist, violent and threatening messages, Instagram blocked it for twenty-four hours, trying to take a positive step towards weeding out hate speech.

However, the incident brought up the question of how social media needs to come up with a system that preserves the right to free speech while preventing hate speech.

For example, Instagram got into hot water for also banning the hashtag #curvy, saying it encouraged inappropriate content, while women were trying to use the hashtag to promote body-positive images. Instagram needs to come up with a way for its moderators to understand the context in which content is being shared.

Graphic violence

Sharing photos for sadistic pleasure or to glorify violence is not allowed. However, if such content is being shared to create awareness, it will be allowed provided it comes with a warning in the caption.

For example, Instagram has become a platform that many women use to upload graphic photos or messages to highlight issues such as domestic violence or the abusive messages many women get on online dating platforms. Instagram has not taken these accounts down.

Points to note:

  • Think we've been a bit inconsistent by explaining some of Instagram's reporting mechanisms only for the phone app and others for the app and the web? That's because not all of Instagram's features for reporting abuse are available across platforms. You will need to use the web to report a privacy violation using Instagram's form dedicated to that purpose, for example, but to report a profile, you'd better turn to the app. Also, some app features are available on iOS and Android, but not on Windows Phone. A shame for a company of Instagram's size.

  • On the positive side, Instagram makes it possible for non-users to report content through a dedicated form available on the web. This form is also useful for users who need to report abusers who have blocked them from viewing the problematic content.

  • Instagram says that the number of times something is reported does not affect whether or not it is removed from Instagram.

  • When you have reported a post, profile, comment or direct message, you'll generally see a message thanking you for submitting your report. However, Instagram does not specify whether the user will be informed when and if the picture is removed, or how long after the complaint Instagram will attend to it.

  • Though Instagram allows you to provide them with the name of a public person or celebrity if you believe an account is impersonating that public figure or celebrity, oddly you cannot do the same if the account is impersonating your friend. In that case, Instagram encourages you to reach out to your friend, so they can report it themselves instead. Only the person who has been impersonated, or someone authorised to represent them, can file the complaint.

  • When you report something to Instagram, your information isn't shared with the person whose post, profile or comment you're reporting.

  • When Instagram receives a request for information from law enforcement, it does notify the affected user(s) of this request prior to disclosure, unless Instagram is prohibited by law from doing so or in exceptional circumstances, such as child exploitation cases, emergencies and/or when notice would be counterproductive. Law enforcement officials who believe that notification would jeopardise an investigation should obtain an appropriate court order or go through another appropriate process to establish that notice is prohibited.

  • If a request for data from law enforcement draws Instagram’s attention to an ongoing violation of its terms of use, Instagram will also take action to prevent further abuse, including actions that may notify the user that Instagram is aware of their misconduct.

  • Accounts may be disabled by Instagram, either with or without warning, if they do not abide by Instagram's community guidelines. However, a user may appeal this decision by following the on-screen instructions on the log-in page.

YouTube

How to report abuse:

Flagging videos

  1. Below the video, click on the 'More' button
  2. Highlight and click the 'Report' button in the drop-down menu
  3. Click on the reason for flagging that best fits the violation within the video
  4. Provide any additional details that may help the review team make their decision.

Flagging a channel

  1. Visit the channel page you wish to report
  2. Click 'About'
  3. Click the 'Flag' drop down button
  4. Select the option that best suits your issue

Flagging comments

  1. Go to the comment
  2. In the right-hand corner of the comment there is a drop-down button; click it and select 'Report Spam or Abuse'
  3. If enough users mark the comment as spam, the comment will be hidden under a 'Marked as Spam' link
  4. By clicking the 'Show' link, the comment can be viewed again

YouTube’s flagging feature relies on a “mob mentality”: the more people flag the content, the more convinced YouTube will be to take down the content.

    Moderating comments on your channel

    1. Remove, report or hide comments: When someone comments on your video, you'll get a notification. Click the arrow in the upper right corner of the comment to manage comments. You can then choose to remove the content. This will take down the comment and any replies from YouTube. However, if the comment was also shared on Google+, it will still be visible there. You can also report comments that you believe are spam or abuse to the YouTube team. If you opt to hide the content from your channel, this will block the user from posting comments on videos on your channel. If you change your mind, you can remove the user from the hidden users list in your community settings.
    2. Hold video comments for approval:
    • Find the video in the Video Manager
    • Under the video, click 'Edit'
    • Click 'Advanced Settings'
    • Under 'Allow comments', select 'Approved'

    Setting comment filters

    Go to 'Creator Studio', select 'Community', then select 'Community Settings' and choose from the following filters:

    a) Manage approved users:

    1. On a comment left by a user who you want to make an approved user, click the drop-down arrow next to the flag icon
    2. Select 'Always Approve Comments from this User'

    b) Manage hidden users:

    1. On a comment left by a user who you want to make a hidden user, click the drop-down arrow next to the flag icon
    2. Select 'Hide' to hide this user's comments on this channel

    c) Add words and phrases to your blacklist:

    You can add a list of words and phrases that you want to review before they're allowed in comments on your video or channel. Just add them to the box next to 'Blacklist' and use a comma to separate words and phrases in the list. Comments closely matching these terms will be held for your approval, unless they were posted by someone on your approved user list.
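
    For example (a purely illustrative list of our own, not taken from YouTube's documentation), a blacklist might read: kill yourself, slut, whore, I know where you live. Any comment closely matching one of these words or phrases would then wait in your approval queue instead of appearing publicly under your video.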

    Disabling comments

    (for a specific video)

    1. Go to your YouTube account
    2. In the upper right hand corner, click on your account and click on 'Creator Studio'
    3. Under the video, click the drop-down menu next to 'Edit' and select 'Info and Settings'
    4. Click on 'Advanced Settings'
    5. Under 'Comments' untick the box that says 'Allow comments'

    (for all videos)

    1. Go to your YouTube account
    2. In the upper right hand corner, click on your account and click on 'Creator Studio'
    3. Under 'Community', select 'Community Settings'
    4. Under 'Default Settings', select 'Comments on your Channel' and then 'Disable Comments'

    Ideally, this would not be a permanent solution as comments should be a way of interacting with the community.

    Blocking users

    1. Visit their channel page, which should have a URL similar to www.youtube.com/user/NAME
    2. On their 'About' tab, click the flag icon
    3. Click 'Block User'

    Reporting an abusive user

    1. Go to YouTube’s 'Policy, Safety and Reporting' page and find 'Reporting and Enforcement Centre'
    2. Click on 'Report an Abusive User'
    3. Follow on screen instructions

    Privacy reporting

    1. Go to YouTube's 'Policy, Safety and Reporting' page
    2. Go to 'Other Reporting Options' under 'Reporting Center'
    3. Click 'Privacy Complaint Process' and follow on screen instructions

    Reporting tool (for when you want to report more than one piece of content and want to send a more detailed report for review)

    1. Go to YouTube's 'Policy, Safety and Reporting' page
    2. Go to 'Other Reporting Options' under 'Reporting Center'
    3. Click 'Reporting Tool' and follow on screen instructions

    What counts as abuse:

    Nudity or sexual content

    YouTube does not allow pornography, sexually explicit content, or violent, graphic or humiliating fetishes. YouTube works closely with law enforcement and reports child exploitation.

    A video that contains nudity or other sexual content may be allowed if the primary purpose is educational, documentary, scientific, or artistic, and it isn’t gratuitously graphic.

    In cases where videos do not cross the line, but still contain sexual content, YouTube applies an age-restriction so that only viewers over a certain age can view the content.

    YouTube needs to define, however, the difference between "sexually explicit" and "sexually artistic".

    Many male producers are allowed to post sexually explicit content in order to appeal to their audiences. For example, Bart Baker's video "Pussies" has ads enabled despite graphic language and sexually provocative content. But if the YouTube community thinks a woman who herself chooses to dance around in a bikini is provocative, her content can be flagged and become age-restricted. Because of its "mature content", ads will no longer be enabled on the video (taking away the woman's ability to earn money) and her video will also be harder to find in search.

    Very few women on YouTube can openly talk about their sexuality in their videos without inciting ire from the community.

    Violent or graphic content

    It’s not okay to post violent or gory content that’s primarily intended to be shocking, sensational or disrespectful. If a video is graphic or disturbing, it should be balanced with additional context and information.

    Much like with movies and TV, graphic or disturbing content that contains a certain level of violence or gore is not suitable for minors and will be age-restricted.

    YouTube largely expects the user who is uploading the video to be honest about its content.

    The age-restriction feature also expects the audience to be honest about their age.

    YouTube seems to struggle with implementing its own policies. When ISIS uploaded a video of James Foley’s beheading, the video was immediately taken down, but a simple Google search of “beheading video” will nevertheless give you a string of YouTube results. YouTube has been responsive in removing other graphic content, such as a video of a mob burning a teenage girl in Guatemala.

    Hateful content

    YouTube does not support content that promotes or condones violence against individuals or groups based on race or ethnic origin, religion, disability, gender, age, nationality, veteran status, or sexual orientation or gender identity, or where the primary purpose is inciting hatred on the basis of these core characteristics. If the primary purpose is to attack a protected group, the content crosses the line.

    Harmful or dangerous content

    Content that intends to incite violence or to encourage dangerous or illegal activities that have an inherent risk of serious physical harm or death is barred from YouTube.

    Videos that are considered to encourage dangerous or illegal activities include instructional bomb making, choking games, hard drug use and other acts which may result in serious injury. A video that depicts dangerous acts may be allowed if the primary purpose is educational, documentary, scientific, or artistic (EDSA), and it isn’t gratuitously graphic.

    Threats

    Things like predatory behaviour, stalking, threats, harassment, intimidation, invading someone's privacy, revealing other people's personal information and inciting others to commit violent acts or to violate YouTube's Terms of Use are taken very seriously. Anyone caught doing these things may be permanently banned from YouTube.

    Content that makes threats of serious physical harm against a specific individual or defined group of individuals will be removed.

    Harassment and cyberbullying

    Harassment may include:

    • abusive videos, comments, messages
    • revealing someone’s personal information
    • maliciously recording someone without their consent
    • deliberately posting content in order to humiliate someone
    • making hurtful and negative comments or videos about another person
    • unwanted sexualisation, which encompasses sexual harassment or sexual bullying in any form

    YouTube has been highly responsive in taking down videos that contain anti-feminist rants.

    This policy does not apply, however, to comments which are problematic in nature. The moderation of comments depends heavily on the user and the YouTube community.

    Points to note:

    • YouTube's reporting feature uses 'strikes' to determine whether to take down reported content that is flagged for violating its 'community guidelines'. The more people flag the content, the more convinced YouTube will be to take down the content.

    • YouTube notifies the user about every 'strike', and the user can appeal the strike in their channel settings.

    • A lot of YouTube vloggers have repeatedly complained that the comment section is mostly filled with negativity or spam rather than actual feedback on their videos. Being a woman on YouTube means having to deal with a lot of derogatory, sexist and racist comments, none of which have anything to do with the content you have put up. In August 2014, YouTube's biggest vlogger, PewDiePie, who has around 40 million subscribers, decided to disable all comments on his videos as the negativity was getting to him.
    • To discourage vitriol and encourage positivity and community belonging, YouTube changed its comments system in 2013, so that the latest comment is not the first comment you see under a video. Instead, posts by the video creator, 'popular personalities' and posts with engaged conversations appear at the top of the comment stream. However, this is not a foolproof plan to stop harassment, as 'popular posts' can be those which attract the most negative attention. The 'top comment' also sets the tone for the video and, when it is negative, can encourage other negative comments to follow.

    • There have been reports of glitches in YouTube’s processes for comment moderation, with comments sometimes managing to slip through even though they are unapproved.
    • Interestingly, although disabling comments is an option on YouTube, it is not described on their 'Reporting Abuse' page.
    • Blocking someone does not prevent them from creating a new account and harassing people again.
    • For reporting legal issues, YouTube provides separate webforms.
    • If you are filing a legal complaint with YouTube, you will need to provide contact information that allows both YouTube and the person who uploaded the video to contact you.
    • For some legal complaints, YouTube will not pass on your legal name and email alias to the uploader if you request them not to do so. The uploader will, however, be notified of the complaint.
    • According to a video by Google that seeks to explain reporting on Google platforms, YouTube reviews complaints on a case-by-case basis. It also says that penalties, such as temporary or permanent account suspension, are imposed on 'repeat offenders'.

    Twitter

    How to report abuse:

    Reporting tweets

    1. Go to the tweet you'd like to report
    2. Click on or tap the 'More' icon
    3. Select 'Report'
    4. Select 'It’s abusive or harmful'
    5. Provide additional information about the issue you’re reporting

    Once you've submitted your report, Twitter recommends additional actions you can take to improve your Twitter experience.

    Reporting profiles

    1. Go to the user’s profile and click the gear icon
    2. Select 'Report'
    3. Select 'They're being abusive or harmful'
    4. Provide additional information about the issue you’re reporting

    Once you’ve submitted your report, Twitter recommends additional actions you can take to improve your Twitter experience.

    Reporting direct messages

    For an individual message via the web:

    1. Click into the direct message conversation and find the message you’d like to report
    2. Hover over the message and click the report icon when it appears
    3. Select 'Report spam' or 'Mark as abusive' and click again to confirm

    For an individual message via the Android and iOS app:

    1. Tap the direct message conversation and find the message you’d like to report
    2. Tap and hold the message and select 'Flag' from the menu that pops up
    3. Select 'Flag as spam' or 'Mark as abusive'

    To report a conversation via the web:

    1. Click into the direct message conversation you’d like to report
    2. Click the ••• 'More' icon
    3. Select 'Flag as spam' or 'Mark as abusive' and click again to confirm

    Note: once you have reported the message or conversation, it will be deleted from your messages inbox.

    To report a conversation using the Twitter for iOS or Android app:

    1. Swipe left on the direct message conversation you’d like to report
    2. Tap the ••• 'More' icon and click 'Flag'
    3. Select 'Flag as spam' or 'Mark as abusive'

    Blocking users

    (Blocking via the Web)

    Blocking a user via a tweet:

    1. From a tweet, click the 'More' icon (•••) at the bottom of the tweet
    2. Click 'Block'

    Blocking a user via their profile:

    1. Go to the profile page of the account you wish to block
    2. Click or tap the gear icon on their profile page
    3. Select 'Block' from the menu
    4. Click 'Block' to confirm

    (Blocking on iOS)

    Blocking a user via a tweet:

    1. Tap a tweet from the user you’d like to block
    2. Tap the 'More' icon (•••)
    3. Tap 'Block' and then 'Block' to confirm

    Blocking a user via their profile:

    1. Visit the profile page of the user you wish to block
    2. Tap the gear icon
    3. Tap 'Block' and then 'Block' to confirm

    (To block via Android)

    Blocking from a tweet:

    1. Tap the overflow icon
    2. Tap 'Block' and then 'Block' to confirm

    Blocking from a profile:

    1. Visit the profile page of the user you wish to block
    2. Tap the overflow icon
    3. Tap 'Block' and then 'Block' to confirm

    Flagging media

    1. From the tweet you would like to flag, click or tap the 'More' icon
    2. Select 'Report'
    3. Select 'It displays a sensitive [image/video/media]'

    Twitter will then provide recommendations for additional actions you can take to improve your Twitter experience.

    What counts as abuse:

    Serial accounts

    You may not create multiple accounts for disruptive or abusive purposes, or with overlapping use cases. Mass account creation may result in suspension of all related accounts.

    Targeted abuse

    You may not engage in targeted abuse or harassment. Some of the factors taken into account when determining what conduct is considered to be targeted abuse or harassment are:

    • if you are sending messages to a user from multiple accounts;
    • if the sole purpose of your account is to send abusive messages to others;
    • if the reported behavior is one-sided or includes threats.

    Graphic content

    You may not use pornographic or excessively violent media in your profile image, header image, or background image.

    Abusive behaviour policy

    Violent threats

    Twitter's policy says it does not tolerate threats of violence or content that promotes violence and terrorism. Users cannot make threats of violence or promote violence on the basis of race, ethnicity, national origin, religion, sexual orientation, gender, gender identity, age, or disability. You can report tweets or profiles that make violent threats directly to Twitter (see above).

    In practice, however, there have been many instances where reported rape threats have not been taken seriously by Twitter. In fact, it is unclear how Twitter assesses which tweets and profiles are violating its policy, as there are discrepancies in its results: despite having been reported, many abusive tweets, including rape threats, escape any sort of consequences.

    Also, if someone has tweeted a violent threat that you feel is credible, Twitter recommends you contact law enforcement so they can accurately assess the validity of the threat. If contacted by law enforcement directly, Twitter can work with them and provide the necessary information for their investigation of your issue. You can get your own copy of your report of a violent threat to share with law enforcement by clicking 'Email report' on the 'We have received your report' screen.

    Twitter claims that websites do not have the ability to investigate and assess a threat, let alone bring charges or prosecute individuals. While the latter may be true, there nevertheless seems to be much more that Twitter can do.

    Abuse and harassment

    Users may not engage in targeted abuse or harassment. Factors taken into account include:

    • if a primary purpose of the reported account is to send abusive messages to others;
    • if the reported behavior is one-sided or includes threats;
    • if the reported user is inciting others to harass another user; and
    • if the reported user is sending harassing messages to a user from multiple accounts.

    A lot of misogyny exists on Twitter, where women are harassed by vitriol that targets their gender or sexuality rather than the content of their tweet. Twitter either does not recognise this harassment or is slow to respond.

    Women in India, too, face a lot of vitriol on Twitter, especially left-leaning, liberal and secular women who are targeted by right-wing trolls. This has led these women to either stop using Twitter completely, tweet less frequently than before, or take on the arduous task of blocking each troll. The abusive tweets range from rape threats to death threats, casteist and racist abuse, threats of acid attacks and patriarchal abuse. Sagarika Ghose called this the 'social media version of gang rape.'

    Twitter's CEO, Dick Costolo, acknowledged in a private memo that 'we suck at dealing with abuse' and that stronger action needs to be introduced, as the current Twitter strategy promotes the 'block and ignore' approach.

    Women also often experience offensive but not harassing encounters online. Twitter needs to clearly explain to its users how it differentiates between the two to determine harassment.

    Trolls sometimes use the harassment reporting process for functions other than reporting harassment. They might falsely flag an account, in an attempt to silence or intimidate it. They can also use reporting as a form of trolling, when they have not experienced harassment but aim to intentionally waste reviewers' time.

    Private information

    A person's private and confidential information cannot be published without prior authorisation or permission. Intimate photos and videos taken without a person's consent cannot be shared on the website.

    If this information was posted elsewhere before being published on Twitter, it may not be in violation of this policy.

    According to research conducted by WAM (Women, Action & the Media), most reports of harassment on Twitter are about hate speech and doxxing. The latter entails releasing someone's private information.

    Impersonation

    Impersonation of another person is not allowed if it intends to mislead, confuse or deceive others.

    When content is reported, the entire account of the accused may be reviewed.

    Offensive content and mediation

    Twitter does not wish to mediate content or intervene in disputes between users. Hence it advises that you tailor your Twitter experience according to your preference.

    If you upload media that might be considered sensitive content such as nudity, violence, or medical procedures, you should apply the account setting 'Mark my media as containing sensitive content'.

    International users agree to comply with all local laws regarding online conduct and acceptable content.

    Uploaded media that is reported and that is determined to violate the law will be removed from the site and your account will be suspended.

    Media that is marked as containing sensitive content will have a warning message that a viewer must click through before viewing the media. Only users who have opted in to see possibly sensitive content will see the media without the warning message.

    Points to note:

    • As the processes for reporting abuse make clear, Twitter requires URLs and rejects screenshots as evidence for complaints. This means that if you are harassed by someone who consistently deletes his tweets after sending them, there is no way you can report this behavior using screenshots you might have collected. It also makes it harder to report harassment that is not associated with a URL, for example, offensive profile pictures.

    • Could your complaint fall under several of the categories mentioned on Twitter’s reporting forms? If so, tick the most serious option that applies, as you can only tick one.
    • Every report to Twitter by a target of abuse is looked at manually.
    • The more people flag the content, the faster Twitter will tend to the matter – the tweet will be prioritised and the process will be expedited.
    • You don’t have to be a Twitter user to report abuse on Twitter.
    • Reporting a tweet does not automatically result in the user being suspended.
    • To decide whether or not to suspend an account, Twitter generally assesses account behaviour over a longer period of time, rather than considering just one incident or interactions with one other user.
    • Also, Twitter might ask an abusive user to provide them with a mobile phone number, before suspending the account. Interestingly, sometimes that is enough of a deterrent and the abuser simply disappears.
    • Except in cases where doing so would put the target of the abuse at severe risk, such as where domestic violence is concerned, the tweets that violate Twitter’s terms and conditions will be shared with the suspended user. The identity of the person reporting the tweets is never shared with the abuser.
    • If you have blocked a user, the user will not be notified but they will be able to see that they are blocked if they try to visit your profile.
    • Note that Twitter’s blocking system is flawed. Even though you may have blocked a user via the web, they may not be blocked via the app, so you still may see their Tweets on your timeline.
    • Twitter is currently working on a feature that will help a user block multiple accounts at once, as sometimes women receive numerous abusive tweets per hour and blocking each account separately can be tedious.
    • You can export/import your own block lists to share them with others.
    • If the police ask Twitter for an abuser's account details, the account holder will be informed that there was a law enforcement request, unless there is a threat to life, for example where the abuser is a paedophile.
    • Twitter will not file a police complaint on your behalf.

    In Summary

    Reporting Abuse to Internet Platforms

    • Facebook, Instagram, YouTube and Twitter all have various tools and policies that are supposed to create a safer environment for users.
    • But often, policies around what content is accepted or not are implemented inconsistently, leaving many people who are already vulnerable in the lurch.
    • Much is also still left to users' own efforts, rather than building measures to improve safety into the design of platforms.