Reporting Abuse to Internet Platforms

When it all gets too much, one option is to report the abuse to the Internet platforms.
Facebook
How to report abuse:
Profiles
- Go to the profile of the abuser
- Under the cover photo, click the […] icon and select 'Report'
Posts
- Go to post
- Click the icon in the top right-hand corner of the post and select 'I don't want to see this'
Comments
- Click the icon in the top right corner of the comment
- Select 'Report' and follow on-screen instructions
Photos and videos
- Click on the photo or video to expand it
- Click 'Options' in the bottom right
- Click 'Report Photo' for photos or 'Report Video' for videos
Messages
- Open the message you'd like to report
- Click the 'Actions' icon in the top right-hand corner
- Click 'Report Spam or Abuse' and follow the on-screen instructions
Pages/Groups/Events
- Go to the Page/Group/Event you want to report
- Click the […] icon on the Page/Group/Event's cover photo
- Select 'Report Page' and follow the on-screen instructions
Ads
- Hover over the ad and click the X in the top right-hand corner
- Choose 'Hide this ad' to report a specific ad, or 'Hide all from...' to hide all ads from that particular advertiser
Questions
- Click the icon in the top right corner of the question
- Select 'Report to Admin' or 'Report/Mark as Spam' and follow the on-screen instructions
- To report a reply to a question, click X, then click 'Report'
What counts as abuse:
It's not clear who checks the reports, but many people receive automated replies saying, "We reviewed the photo you reported but found it does not violate Facebook's Community Standards."
According to Facebook’s Community Standards the following things count as abuse. Except, well, um, when they are mysteriously seen as okay.
Direct Threats
“Facebook carefully reviews reports of threatening language that may cause serious harm to public and personal safety along with theft, vandalism and financial safety”
Most of Facebook's policies are geared towards protecting English-speaking users. If a threat is made in another language or in a different cultural space, Facebook does not recognise it. When Afghan women have their profiles hacked, Facebook moderators cannot understand the implications. A beer mug may be innocent to American eyes, but in Afghanistan it carries a life sentence.
Bullying and Harassment
“Facebook believes in the freedom of speech, however, if certain content is posted specifically targeting an individual to shame and humiliate them, then this content can be removed.”
Facebook often doesn't see misogyny as harassment. When Australian writer Clementine Ford uploaded a topless picture of herself, she received a barrage of vicious, threatening, sexually explicit messages. According to Facebook, these messages did not violate its guidelines.
Sexual Violence and Exploitation
“We remove content that threatens or promotes sexual violence or exploitation. This includes the sexual exploitation of minors, and sexual assault. Our definition of sexual exploitation includes solicitation of sexual material, any sexual content involving minors, threats to share intimate images, and offers of sexual services”
This policy is applied erratically, and often not to the benefit of women. A Facebook user posted a video documenting the gang rape of a woman by the side of a road in Malaysia. The six minutes of graphic footage were live for more than three weeks, during which Facebook moderators declined repeated requests for removal. The content was only taken down after a media house contacted Facebook directly.
A post on Facebook by 'The Kinky Geeks of Chicago' that glorified child pornography and incest was not taken down, despite multiple requests.
Nudity
“Facebook does not tolerate any form of nudity (even for artistic projects or awareness campaigns) for the sake of their global audience, some of whom may be intolerant to this content.”
Facebook equates 'nudity' with obscenity, sexual exploitation and pornography. It allows public content that assaults women’s anatomy, but censors content that celebrates women’s bodies. Photographer Ana Alvarez-Errecalde, for example, has had non-sexual photography of women’s bodies repeatedly censored on Facebook.
On the bright side, in 2015 Facebook revised its policies: photos of women "actively engaged in breastfeeding" and post-surgery breast photos are now allowed, as are photographs of paintings and sculptures that depict nude figures.
Hate Speech
"Facebook removes hate speech, which includes content that directly attacks people based on their Race, Ethnicity, National origin, Religious affiliation, Sexual orientation, Sex, gender or gender identity, Serious disabilities or diseases"
Humour, satire or social commentary on these topics is allowed on Facebook. It is not clear how Facebook distinguishes between satirical pages and pages which promote genuine hatred.
In good news, the once liberally applied "Controversial Humor" label on Facebook is no longer in use.
Violence and Graphic Content
"Graphic images which glorify sadism or violence will be removed from Facebook. If such content is to be shared, Facebook asks users to warn their audience beforehand."
Facebook has been criticized for its refusal to take down content that jokes about rape, which it maintains is acceptable as "humour" or "satire."
Facebook has also been criticized for removing graphic content that raises political awareness, such as the video of Tibetan monks who set themselves on fire to protest against Chinese policies. However, in 2013, a video circulated on Facebook of a man slitting open a woman's throat. Although the video was reported as inappropriate by one of Facebook's Safety Advisory Committee members, it was allowed back on Facebook.
Points to note:
- Information about the person reporting is not revealed to the offender, and so reporting is anonymous to that extent. However, it is quite likely that Facebook will retain the information about your reports.
- You can check the status of something you have reported by going to the 'Support Inbox' section.
- Facebook isn't the most proactive with reports, and reporting something doesn't guarantee that it will be removed either. However, it will bring the content up for review, so there's that.
- As fun as rallying your entire friends list to report something may be, the number of reports doesn't actually determine whether something will be taken down.
- The thing is, even if Facebook takes something down, in most cases the content simply gets created again, and there aren't any checks in place to stop this.
- Facebook is also said to have a "team of moderators", usually made up of a bunch of young people. These fabled moderators spend their time sifting through millions of reports every week, spending less than half a minute on each report. Yeah, not exactly what you'd call "ideal".
Instagram
How to report abuse:
Posts
(via a phone)
- Tap the options icon below the post (its appearance differs between iOS/Windows Phone and Android)
- Tap Report
- Follow the on-screen instructions
(via the web)
- Click the options icon below the post
- Click Report Inappropriate
- Follow the on-screen instructions
Profiles
(via a phone)
- Tap the options icon in the top right of the profile (its appearance differs between iOS, Windows Phone and Android)
- Tap 'Report' (iOS and Android) or 'Report for Spam' (Windows Phone)
- Follow the on-screen instructions
(via the web)
- Click the options icon next to their username
- Select 'Report User'
- Follow the on-screen instructions
Comments
(on iOS)
- Tap the comment icon below the photo
- Swipe your finger to the left over the comment you'd like to report
- Tap the icon that appears
- Tap 'Report Abuse'
- Select an option for why the comment is abusive
(on Android)
- Tap the comment icon below the photo
- Tap the comment you want to report
- Tap the icon that appears and choose 'Delete Comment' and 'Report Abuse'
Direct messages
- Tap and hold the message
- Select 'Report'
Block profiles
- Tap the options icon in the top right of the profile
- Tap 'Block'
- To report abuse if you don't have an Instagram account, fill in Instagram's online reporting form
Users can also report abusers who have blocked them from viewing the problematic content, either by asking a friend to fill in the online form or by doing so themselves.
What counts as abuse:
If accounts do not abide by Instagram's Community Guidelines, they may be disabled either with or without warning. However, a user may be able to appeal this decision by following on-screen instructions on the log in page.
Nudity
Sharing of nude, partially nude, pornographic or sexual photos/videos
that show sexual intercourse, genitals and close ups of fully-nude
buttocks and female nipples are not allowed. Even images that
celebrate
nudity for artistic or creative purposes are not allowed on the
photo-sharing app. However photos of post-mastectomy scarring and women
actively breastfeeding are allowed. Nudity in photos of paintings and
sculptures are okay too.
Most of the nude photos that Instagram has taken down are those that celebrate the female body in a non-sexual way.
Indian-origin photographer Rupi Kaur uploaded a picture showing her menstrual blood. Instagram took the picture down twice, stating that it violated "community guidelines." After Kaur started an online campaign against the company, Instagram restored the picture, saying that it didn't violate any of their policies.
Instagram also once suspended the account of Samm Newman, who posted a selfie in a bra and boy shorts to highlight her weight issues and to promote a body-positive image. There are pictures of women in bikinis all over Instagram, yet their accounts do not get suspended. Instagram later reinstated her account, apologizing and stating that they had wrongly removed her picture.
However, partially nude photos posted by men do not seem to be in violation of Instagram's policies.
Credible threats and hate speech
Content that targets private individuals to degrade or shame them, personal information meant to blackmail or harass someone, and repeated unwanted messages are not allowed.
According to Instagram, it's never OK to encourage violence or attack anyone based on their race, ethnicity, national origin, sex, gender, gender identity, sexual orientation, religious affiliation, disabilities, or diseases. When hate speech is being shared to challenge it or to raise awareness, Instagram may allow it, but asks that users express their intent clearly.
Serious threats of harm to public and personal safety aren't allowed. This includes specific threats of physical harm as well as threats of theft, vandalism, and other financial harm.
In July 2015, Sandra Bland was found dead in her jail cell in Texas after being arrested for allegedly assaulting an officer at a traffic stop. Bland had been vocally outspoken about racial prejudice on her social media accounts. Her name started trending on social media, including Instagram, as #SandraBland. However, Instagram removed this hashtag as soon as people started abusing it by posting racist, violent and threatening messages – a positive step in weeding out hate speech.
However, it raised the question of how social media platforms can come up with a system that preserves the right to free speech while preventing hate speech.
For example, Instagram got into hot water for banning the hashtag #curvy, saying it encouraged inappropriate content, while women were trying to use the hashtag to promote body-positive images. Instagram needs to come up with a way for its moderators to understand the context in which content is being shared.
Graphic Violence
Sharing photos for sadistic pleasure or to glorify violence is not allowed. However, if such content is being shared to create awareness, it will be allowed on the condition that it comes with a warning in the caption.
Many women, however, use Instagram to upload graphic photos or messages to highlight issues such as domestic violence or the abusive messages women receive on online dating platforms. Instagram has not taken these accounts down.
Points to note:
Think we've been a bit inconsistent by explaining some of Instagram's reporting mechanisms only for the phone app and others for the app and the web? That's because not all of Instagram's features for reporting abuse are available across platforms. You will need to use the web to report a privacy violation using Instagram's form dedicated to that purpose, for example, but to report a profile, you're better off turning to the app. Also, some app features are available on iOS and Android, but not on Windows Phone. A shame for a company of Instagram's size.
On the positive side, Instagram makes it possible for non-users to report content through a dedicated form (available on the web). This form is also useful where users have to report abusers who have blocked them from viewing the problematic content.
Instagram says that the number of times something is reported does not affect whether or not it is removed from Instagram.
When you have reported a post/profile/comment/direct message, you'll generally see a message thanking you for submitting your report. However, Instagram does not specify whether the user will be informed if and when the content is removed, or how long after the complaint Instagram will attend to it.
Though Instagram allows you to provide them with the name of a public person or celebrity if you believe an account is impersonating that public figure or celebrity, oddly you cannot do the same if the account is impersonating your friend. In that case, Instagram encourages you to reach out to your friend, so they can report it themselves, instead. Only the person who has been impersonated, or someone authorized to represent them, can file the complaint.
When you report something to Instagram, your information isn't shared with the person whose post, profile or comment you're reporting.
When Instagram receives a request for information from law enforcement, it does notify the affected user(s) of this request prior to disclosure, unless Instagram is prohibited by law from doing so or in exceptional circumstances, such as child exploitation cases, emergencies or when notice would be counterproductive. Law enforcement officials who believe that notification would jeopardize an investigation should obtain an appropriate court order or other appropriate process establishing that notice is prohibited.
If a request for data from law enforcement draws Instagram’s attention to an ongoing violation of its terms of use, Instagram will also take action to prevent further abuse, including actions that may notify the user that we are aware of their misconduct.
Accounts may be disabled by Instagram either with or without warning if they do not abide by the community guidelines. However, a user may appeal this decision by following the on-screen instructions on the login page.
YouTube
How to report abuse:
Flagging Videos
- Below the video, click on the 'More' button
- Highlight and click the 'Report' button in the drop-down menu
- Click on the reason for flagging that best fits the violation within the video
- Provide any additional details that may help the review team make their decision.
Flagging comments
- Go to comment
- In the right-hand corner of the comment there is a drop-down button; click it and select 'Report Spam or Abuse'
- If enough users mark the comment as spam, the comment will be hidden under a 'Marked as Spam' link
- By clicking the 'Show' link, the comment can be viewed again
YouTube’s flagging feature depends on a “mob mentality.” The more people who flag the content, the more convinced YouTube will be to take down the content.
Disabling Comments
Although this is an option on YouTube, it is not described on their Reporting Abuse page
(for a specific video)
- Go to your YouTube account
- On the upper right hand corner click on your account and click on 'Creator Studio'
- Under the video, click the drop-down menu under 'Edit' and select 'Info and Settings'
- Click on 'Advanced Settings'
- Under 'Comments' untick the box that says 'Allow comments'
(for all videos)
- Go to your YouTube account
- On the upper right hand corner click on your account and click on 'Creator Studio'
- Under 'Community' select 'Community Settings'
- Under 'Default Settings' > 'Comments on your Channel' > 'Disable Comments'
Disabling comments should not be a permanent solution, though, as comments should be a way of interacting with the community.
Flagging a channel
- Visit the channel page you wish to report
- Click 'About'
- Click the 'Flag' drop down button
- Select the option that best suits your issue
Blocking Users
- Visit their 'Channel' page, which should have a URL similar to www.youtube.com/user/NAME
- On their 'About' tab, click the flag icon
- Click 'Block User'
However, blocking users does not prevent them from creating a new account and harassing people again.
Reporting an Abusive User
- Go to YouTube’s 'Policy, Safety and Reporting' Page and find 'Reporting and Enforcement Centre'
- Click on 'Report an Abusive User'
- Follow on screen instructions
Reporting Tool (for when you want to report more than one piece of content and want to send a more detailed report for review)
- Go to Youtube's 'Policy, Safety and Reporting' Page
- Go to 'Other Reporting Options' under 'Reporting Center'
- Click 'Reporting Tool' and follow on screen instructions
Privacy Reporting
- Go to Youtube's 'Policy, Safety and Reporting' Page
- Go to 'Other Reporting Options' under 'Reporting Center'
- Click 'Privacy Complaint Process' and follow on screen instructions
Moderating Comments on your channel
- Remove, report or hide comments: when someone comments on your video, you'll get a notification. Click the arrow in the upper right of the comment to manage it:
  - Remove: takes the comment and its replies down from YouTube. However, if the comment was also shared on Google+, it will still be visible there
  - Report spam or abuse: report comments that you believe are spam or abuse to the YouTube team
  - Hide from channel: block the user from posting comments on videos on your channel. If you change your mind, you can remove the user from the hidden users list in your community settings
- Hold comments for approval (for comments on a specific video):
  - Find the video in the Video Manager
  - Under the video, click 'Edit'
  - Click 'Advanced Settings'
  - Under 'Allow comments', select 'Approved'
However, there have been glitches in this system several times, and comments manage to slip through even though they are unapproved.
Set Comment Filters
Go to Creator Studio > Community > Community settings and choose from the following filters:
a) Manage approved users:
- On a comment left by the user you want to make an approved user, click the drop-down arrow next to the flag icon.
- Select 'Always approve comments from this user'
b) Manage hidden users:
- On a comment left by the user you want to make a hidden user, click the drop-down arrow next to the flag icon.
- Select 'Hide this user's comments on this channel'
c) Add words and phrases to your blacklist: You can add a list of words and phrases that you want to review before they're allowed in comments on your video or channel. Just add them to the box next to Blacklist and use a comma to separate words and phrases in the list. Comments closely matching these terms will be held for your approval — unless they were posted by someone on your approved user list.
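To see how this kind of keyword filtering works in principle, here is a minimal Python sketch. It is purely illustrative and not YouTube's actual implementation; the function name, the example blacklist terms and the approved-user list are all invented for the example.

```python
# Conceptual sketch of blacklist-based comment moderation (not YouTube's code).
# Comments containing a blacklisted word or phrase are held for manual approval,
# unless the author is on the approved-user list.

BLACKLIST = ["spam phrase", "abusive word"]   # placeholder terms
APPROVED_USERS = {"trusted_friend"}           # placeholder usernames


def triage_comment(author: str, text: str) -> str:
    """Return 'publish' or 'hold for approval' for a single comment."""
    if author in APPROVED_USERS:
        return "publish"
    lowered = text.lower()
    if any(term in lowered for term in BLACKLIST):
        return "hold for approval"
    return "publish"


if __name__ == "__main__":
    print(triage_comment("random_user", "This contains an ABUSIVE WORD."))    # hold for approval
    print(triage_comment("trusted_friend", "This contains an abusive word."))  # publish
```

Note that a simple substring check like this only catches exact matches; real moderation systems also have to deal with misspellings and near-matches, which is why YouTube says comments "closely matching" blacklisted terms are held for approval.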
In 2013, YouTube's comment system got an overhaul, where the above options were put into place. To discourage vitriol and encourage positivity and community belonging, YouTube changed its system so that the latest comment is not the first comment you see under a video. Instead, posts by the video creator, "popular personalities," posts with engaged conversations, and posts from your Google+ friends appear at the top of the stream of comments. However, this is not a foolproof plan to stop harassment, as "popular posts" can be those which attracted the most negative attention.
The “top comment” also later sets the tone for the video, and when it is negative, can encourage other negative comments to follow.
A lot of YouTube vloggers have repeatedly complained that the comment section is filled more with negativity and spam than with actual feedback on their videos. In August 2014, YouTube's biggest vlogger, PewDiePie, who has around 40 million subscribers, decided to disable all the comments on his videos as the negativity was getting to him.
There is a video by BuzzFeed Yellow that talks about what it means to be a woman on YouTube. The main concern is that women get a lot of derogatory, sexist and racist comments, none of which has anything to do with the content they have put up.
What counts as abuse:
Nudity or sexual content
YouTube does not allow pornography, sexually explicit content, or violent, graphic or humiliating fetishes. YouTube works closely with law enforcement and reports child exploitation.
A video that contains nudity or other sexual content may be allowed if the primary purpose is educational, documentary, scientific, or artistic, and it isn’t gratuitously graphic.
In cases where videos do not cross the line, but still contain sexual content, YouTube applies an age-restriction so that only viewers over a certain age can view the content.
YouTube needs to define the difference between "sexually explicit" and "sexually artistic" content, as many music videos that come close to being pornographic get the green light.
Many male producers are allowed to post sexually explicit content in order to appeal to their audiences, such as Bart Baker's "Pussies", which still has ads enabled despite graphic language and sexually provocative content. But if the YouTube community thinks a woman dancing around in a bikini (out of choice) is provocative, her content can be flagged and become age-restricted. Ads can no longer be enabled on the video (taking away her ability to earn money) because of its "mature content", and the video will also be harder to find in search.
Very few women on YouTube can openly talk about their sexuality on their videos without inciting ire from the community.
Violent or graphic content
It’s not okay to post violent or gory content that’s primarily intended to be shocking, sensational or disrespectful. If a video is particularly graphic or disturbing, it should be balanced with additional context and information.
Much like movies and TV, graphic or disturbing content that contains a certain level of violence or gore is not suitable for minors and will be age-restricted.
YouTube largely expects the user who is uploading the video to be honest about its content.
The age-restriction feature also expects the audience to be honest about their age.
YouTube is also not consistent in applying its policies. When ISIS uploaded a video of James Foley's beheading, it was immediately taken down. But a simple Google search for "beheading video" will give you a string of results.
However, YouTube has been responsive in removing graphic content such as a video of a mob burning a teenage girl in Guatemala.
Hateful content
YouTube does not support content that promotes or condones violence against individuals or groups based on race or ethnic origin, religion, disability, gender, age, nationality, veteran status, or sexual orientation/gender identity, or whose primary purpose is inciting hatred on the basis of these core characteristics. If the primary purpose is to attack a protected group, the content crosses the line.
Harmful or dangerous content
Content that intends to incite violence or encourage dangerous or illegal activities with an inherent risk of serious physical harm or death is barred from YouTube.
Videos that are considered to encourage dangerous or illegal activities include instructional bomb making, choking games, hard drug use, or other acts where serious injury may result. A video that depicts dangerous acts may be allowed if the primary purpose is educational, documentary, scientific, or artistic (EDSA), and it isn’t gratuitously graphic.
Threats
Things like predatory behavior, stalking, threats, harassment, intimidation, invading privacy, revealing other people's personal information, and inciting others to commit violent acts or to violate the Terms of Use are taken very seriously. Anyone caught doing these things may be permanently banned from YouTube.
Content that makes threats of serious physical harm against a specific individual or defined group of individuals will be removed.
Harassment and Cyberbullying
Harassment may include:
- Abusive videos, comments, messages
- Revealing someone’s personal information
- Maliciously recording someone without their consent
- Deliberately posting content in order to humiliate someone
- Making hurtful and negative comments/videos about another person
- Unwanted sexualization, which encompasses sexual harassment or sexual bullying in any form
YouTube has been highly responsive in taking down videos that contain anti-feminist rants.
This responsiveness, however, does not extend to comments, which are often more problematic in nature. Moderating comments depends more on the user and the YouTube community.
Points to note:
YouTube's reporting feature uses 'strikes' to determine whether to take down reported content flagged for violating its 'community guidelines'. The more people who flag the content, the more convinced YouTube will be to take it down.
YouTube notifies the user about every 'strike', and the user can appeal the strike in their channel settings.
Blocking someone does not prevent them from creating a new account and harassing people again.
To discourage vitriol and encourage positivity and community belonging, YouTube changed their comment system in 2013, so that the latest comment will not be the first comment you see under a video. Instead posts by the video creator, “popular personalities,” posts with engaged conversations, and posts from your Google+ friends will appear at the top of the stream of comments. However, this is not a fool proof plan to stop harassment as “popular posts” can be those which attract the most negative attention. The ‘top comment’ also later sets the tone for the video, and when it is negative, can encourage other negative comments to follow.
- Interestingly, although disabling comments is an option on YouTube, it is not described on their Reporting Abuse page.
- There have been reports of glitches in YouTube’s processes for comment moderation, with comments sometimes managing to slip through even though they are unapproved.
- If you remove comments from your video on YouTube, but the video was also shared on Google+, the comments will still be visible on Google+!
- For reporting legal issues, YouTube provides separate web forms.
- If you are filing a legal complaint with YouTube, you will need to provide contact information that allows both YouTube and the person who uploaded the video to contact you.
- For some legal complaints, YouTube will not pass on your legal name and email alias to the uploader if you request them not to do so. The uploader will, however, be notified of the complaint.
- According to a video by Google that seeks to explain reporting on Google platforms, YouTube reviews complaints on a case-by-case basis. It also says that penalties are imposed on 'repeat offenders', in the form of temporary or permanent account suspension.
Twitter
How to report abuse:
Tweets
- Go to the tweet you'd like to report
- Click or tap the More icon (the ••• icon on web and iOS; the overflow icon on Android)
- Select 'Report'
- Select 'It's abusive or harmful'
- Provide additional information about the issue you’re reporting
- Once you’ve submitted your report, Twitter recommends additional actions you can take to improve your Twitter experience
Profiles
- Go to the user’s profile and click the gear icon
- Select Report
- Select 'They're being abusive or harmful'
- Provide additional information about the issue you’re reporting
Once you’ve submitted your report, Twitter recommends additional actions you can take to improve your Twitter experience.
Report a Direct Message
For an individual message via the web:
- Click into the Direct Message conversation and find the Message you’d like to report
- Hover over the Message and click the report icon when it appears
- Select 'Report spam' or 'Mark as abusive' and click again to confirm
For an individual message via the Android and iOS app:
- Tap the Direct Message conversation and find the Message you’d like to report
- Tap and hold the Message and select 'Flag' from the menu that pops up
- Select 'Flag as spam' or 'Mark as abusive'
To report a conversation via the web:
- Click into the Direct Message conversation you’d like to report
- Click the ••• More icon
- Select 'Flag as spam' or 'Mark as abusive' and click again to confirm
Note: Once you have reported the Message or conversation, it will be deleted from your Messages inbox.
To report a conversation using the Twitter for iOS or Android app:
- Swipe left on the Direct Message conversation you’d like to report
- Tap the ••• More icon and tap 'Flag'
- Select 'Flag as spam' or 'Mark as abusive'
As the above processes show, Twitter requires URLs and rejects screenshots as evidence. This is problematic, as it ignores the entire category of 'tweet and delete' harassment that occurs quite often on Twitter. It also makes it harder to report harassment that is not associated with a URL – for example, offensive profile pictures.
Reporting a Tweet does not automatically result in the user being suspended.
It also isn’t clear what happens to the user who is reported for the harassment.
Reporting Violent Threats
You can report Tweets or profiles directly to Twitter (see above).
If someone has tweeted a violent threat that you feel is credible, contact law enforcement so they can accurately assess the validity of the threat. Websites do not have the ability to investigate and assess a threat, bring charges or prosecute individuals.
A 21-year-old man was arrested by the police for harassing a woman on Twitter with rape threats and abuse. Twitter's response to the situation, when it was reported internally, was weak and inadequate.
Twitter says that if contacted by law enforcement directly, it can work with them and provide the necessary information for their investigation of your issue. You can get your own copy of your report of a violent threat to share with law enforcement by clicking 'Email report' on the 'We have received your report' screen.
(Blocking via the Web)
Blocking a user via a Tweet:
- From a Tweet, click the more icon (•••) at the bottom of the Tweet
- Click 'Block'
Blocking a user via their profile:
- Go to the profile page of the account you wish to block
- Click or tap the gear icon on their profile page
- Select 'Block' from the menu
- Click 'Block' to confirm
(Blocking on iOS)
Blocking a user via a Tweet:
- Tap a Tweet from the user you’d like to block
- Tap the More icon (•••)
- Tap Block and then Block to confirm
Blocking a user via their profile:
- Visit the profile page of the user you wish to block
- Tap the gear icon
- Tap 'Block' and then 'Block' to confirm
(To block via Android)
Blocking from a Tweet:
- Tap the overflow icon
- Tap 'Block' and then 'Block' to confirm
Blocking from a profile:
- Visit the profile page of the user you wish to block
- Tap the overflow icon
- Tap 'Block' and then 'Block' to confirm
You can export/import your block lists with others.
However, Twitter’s blocking system is flawed. Even though you may have blocked a user via the web, they may not be blocked via the app, so you still may see their Tweets on your timeline.
Twitter is currently working on a feature that will help a user block multiple accounts at once, as women sometimes get up to 50 abusive tweets per hour and blocking each account separately can be tedious.
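If you do end up sharing block lists with friends, a small script can combine several exported lists into one file before importing. This is only a sketch under the assumption that an exported block list is a plain CSV with one account ID per line; check the file your own Twitter settings export actually produces, since the format is not guaranteed, and the script and file names here are invented for the example.

```python
# Sketch: merge several exported Twitter block-list CSVs into one de-duplicated file.
# Assumes each input file holds one account ID per line -- verify against your own export.
import csv
import sys


def merge_block_lists(paths, output_path="merged_block_list.csv"):
    ids = set()
    for path in paths:
        with open(path, newline="") as f:
            for row in csv.reader(f):
                if row and row[0].strip():
                    ids.add(row[0].strip())   # collect unique account IDs
    with open(output_path, "w", newline="") as f:
        writer = csv.writer(f)
        for account_id in sorted(ids):
            writer.writerow([account_id])
    return output_path


if __name__ == "__main__":
    # Usage: python merge_block_lists.py list1.csv list2.csv ...
    print("Wrote", merge_block_lists(sys.argv[1:]))
```

The merged file can then be imported once through Twitter's block-list import option, instead of importing each friend's list separately.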
Flagging media
- From the Tweet you would like to flag, click or tap the More icon (••• on web or iOS; the overflow icon on Android)
- Select Report
- Select 'It displays a sensitive [image/video/media]'
Twitter will then provide recommendations for additional actions you can take to improve your Twitter experience.
The more people that flag the content, the faster Twitter will tend to the matter – the Tweet will be prioritized and the process will be expedited.
What counts as abuse:
Serial Accounts
You may not create multiple accounts for disruptive or abusive purposes, or with overlapping use cases. Mass account creation may result in suspension of all related accounts.
Targeted Abuse
You may not engage in targeted abuse or harassment. Some of the factors taken into account when determining what conduct is considered to be targeted abuse or harassment are:
- if you are sending messages to a user from multiple accounts;
- if the sole purpose of your account is to send abusive messages to others;
- if the reported behavior is one-sided or includes threats
Graphic Content
You may not use pornographic or excessively violent media in your profile image, header image, or background image.
Abusive Behaviour Policy
Violent Threats
Twitter does not tolerate threats of violence or the promotion of violence and terrorism. Users cannot make threats of violence or promote violence on the basis of race, ethnicity, national origin, religion, sexual orientation, gender, gender identity, age, or disability.
There have been many instances where reports of rape threats have not been taken seriously by Twitter. Reporting rape threats on Twitter has been a tedious process that yields few results.
It is unclear how Twitter assesses which tweets or profiles violate its policies, as there are discrepancies in its decisions: many abusive tweets escape any sort of consequence despite being reported, including rape threats, offensive and abusive language directed at women, pornography and harassment.
In some documented cases, however, death threats and repeated harassment received prompt responses that led to account suspension, although hate speech against women was either ignored or did not receive as quick a response.
Abuse and harassment
Users may not engage in targeted abuse or harassment. Factors taken into account:
- if a primary purpose of the reported account is to send abusive messages to others;
- if the reported behavior is one-sided or includes threats;
- if the reported user is inciting others to harass another user; and
- if the reported user is sending harassing messages to a user from multiple accounts.
A lot of misogyny exists on Twitter, where women are harassed with vitriol that targets their sexuality rather than the content of their tweets. Twitter either does not recognize this harassment or is slow to respond.
Women in India face a lot of vitriol on Twitter, especially liberal, secular women targeted by right-wing trolls. This has led these women to either stop using Twitter completely, tweet less frequently than before, or take on the arduous task of blocking each troll. The abusive tweets range from rape threats to death threats, caste and race abuse, threats of acid attacks and patriarchal abuse. Sagarika Ghose called this the "social media version of gang rape."
Twitter's CEO, Dick Costolo, acknowledged in a private memo that "We suck at dealing with abuse" and that stronger action needed to be introduced, as the current Twitter strategy promotes a "block and ignore" approach.
Women also often experience encounters online that are offensive but not harassing. Twitter needs to clearly explain to users how it differentiates between the two when determining harassment.
Trolls sometimes use the harassment reporting process for functions other than reporting harassment:
- False flagging in an attempt to silence or intimidate an account
- Report trolling (reporting harassment they have not experienced, intentionally wasting reviewers' time)
Private information
A person's private and confidential information cannot be published without prior authorization or permission. Intimate photos and videos taken without a person's consent cannot be shared on the website.
If this information was posted elsewhere before being published on Twitter, it may not be in violation of this policy.
According to research conducted by WAM (Women, Action & the Media), most reports of harassment were about hate speech and doxxing (releasing private information).
Impersonation
Impersonation of another person is not allowed if it intends to mislead, confuse or deceive others.
When content is reported, the entire account of the accused may be reviewed.
Offensive content and mediation
Twitter does not wish to mediate content or intervene in disputes between users. Hence it advises that you tailor your Twitter experience according to your preference.
Uploading Media (Images, Videos)
If you upload media that might be considered sensitive content such as nudity, violence, or medical procedures, you should apply the account setting “Mark my media as containing sensitive content”.
International users agree to comply with all local laws regarding online conduct and acceptable content.
Uploaded media that is reported and that is determined to violate the law will be removed from the site and your account will be suspended.
Media that is marked as containing sensitive content will have a warning message that a viewer must click through before viewing the media. Only users who have opted in to see possibly sensitive content will see the media without the warning message.
Points to note:
As the processes for reporting abuse make clear, Twitter requires URLs and rejects screenshots as evidence for complaints. This means that if you are harassed by someone who consistently deletes his tweets after sending them, there is no way you can report this behavior using screenshots you might have collected. It also makes it harder to report harassment that is not associated with a URL, for example, offensive profile pictures.
- Could your complaint fall under several of the categories mentioned on Twitter’s reporting forms? If so, tick the most serious option that applies (you can only tick one).
- The more people flag the content, the faster Twitter will tend to the matter – the tweet will be prioritised and the process will be expedited.
- You don’t have to be a Twitter user to report abuse on Twitter!
- Reporting a tweet does not automatically result in the user being suspended.
- Every report to Twitter by a target of abuse is looked at manually.
- To decide whether or not to suspend an account, Twitter generally assesses account behaviour over a longer period of time, rather than considering just one incident or interactions with one other user.
- Also, Twitter might ask an abusive user to provide them with a mobile phone number, before suspending the account. Interestingly, sometimes that is enough of a deterrent and the abuser simply disappears.
- Except in cases where doing so would put the target of the abuse at severe risk, such as where domestic violence is concerned, the tweets that violate Twitter’s terms and conditions will be shared with the suspended user. The identity of the person reporting the tweets is never shared with the abuser.
- If you have blocked a user, the user will not be notified but they will be able to see that they are blocked if they try to visit your profile.
- If the police ask Twitter for an abuser's account details, the account holder will be informed that there was a law enforcement request, unless there is a threat to life – for example, where the abuser is a paedophile.
- Twitter will not file a police complaint on your behalf.