Twitter makes a Trust and Safety Council. Hires Anita Sarkeesian as a member.


at that point it should be a clear sign that you should get out
because a super dedicated minority of twitter users decided they don't like her?

It would be, were her beliefs to be put into legislation. That's the entire point behind what I'm saying. She wants what she believes to be put into action around the world. She doesn't want it to be confined to thought. Were she to take action on those beliefs, it would be a human rights violation.

No, I'm not. Not in the slightest. That's not bad in the slightest. You think that's bad? Look at the stuff Richard Dawkins gets: https://youtu.be/qhYT4vE1gvM https://youtu.be/gW7607YiBso
That's just a fraction of what he gets, and he doesn't throw a hissy fit over it. Why? Because he knows they're just idiots with no credibility to back them up. Popular people attract trolls and idiots; that's just a fact of life. No harm is coming to her, nor ever was.

It's not extensive knowledge at all. All you have to do is ask people who are in the know. Thankfully, being a university student, I have access to those kinds of people. I've attended classes and lectures and talked to professors. Nothing they've told me deviates more than semantically from what I've said.

Can you show me what specifically she's advocated to be legislated?

You've shown me that dumb Christians really hate Richard Dawkins. This proves nothing. He doesn't get hate just because he's popular; he's a controversial and outspoken figure with strong views about religion. He is no different from Anita. Also, harassment can be harmful without escalating to physical violence.

You're trying to use anecdotal evidence to make this argument. The opinions of whatever professors and classes you have available to you are not an adequate sample.

Clearly twitter wants to make their website safer. Why shouldn't it exist?
Because a 'safe space' is just a name, much like 'pro-life' is a name. The titles that people assign to things do not always properly illustrate what's actually going on.

The idea of a safe space is to actively remove and prevent discussion that is contrary to the emotional well-being of other users. This is a problem because it means curating ideas off a website just because they are in opposition to someone else's worldview. If some user's twitter account is promoting anti-feminist beliefs, then a feminist can chime in and claim that the user is creating an 'unsafe space' for them, thus getting the user's posts removed and their account suspended. The original problem was that the feminist chose to read the anti-feminist's posts, but that doesn't matter on a website that's trying to curate content rather than allow people to communicate with others as they choose.

you have to listen to seventhsandwich because he's a genius

I don't get why censorship is necessary on a website where only people who follow you can hear what you say.
As for sending messages to others who don't follow you or posting with hashtags, why don't people just rely on blocking users who upset them? Then you never see their posts again.

The website is designed around choosing who you do and don't listen to.

There is an option to report people for having a different opinion.



https://support.twitter.com/forms/abusiveuser

Can you show me what specifically she's advocated to be legislated?
Firmly believing that something is good and true and right (or at least better than what we have now) and wanting something to be changed are mutually inclusive.

You've shown me that dumb Christians really hate Richard Dawkins. This proves nothing. He doesn't get hate just because he's popular; he's a controversial and outspoken figure with strong views about religion. He is no different from Anita. Also, harassment can be harmful without escalating to physical violence.
If you look at basically any popular youtuber you'll see they have lots and lots of hate comments. Most are buried underneath the top comments, but they're nonetheless there. The same goes for twitter; it's just a lot harder to see because the UI is worse.
https://www.youtube.com/watch?v=ft5suLEZ4vk
https://youtu.be/sQiupUGjKaU
https://youtu.be/q3e4rfiTEPE
https://www.youtube.com/watch?v=UefaAcO6ukA
https://www.youtube.com/watch?v=5163pfq4xAg
https://www.youtube.com/watch?v=eczKA3QlDzI (This one is not just a single video, but a whole SERIES with multiple hate comment videos in it.)
https://youtu.be/6bF69qbPPD0
https://youtu.be/Xs4kByJTD5A
https://youtu.be/cAOVu8kyhOs
https://youtu.be/RRBoPveyETc
https://youtu.be/Hcmz74AaXHs
https://youtu.be/gTix7FDHZcA
https://youtu.be/nrjp6e04dZ8
https://youtu.be/cpOEO2gUekE
https://youtu.be/4Y1iErgBrDQ
https://youtu.be/imW392e6XR0
https://youtu.be/LABGimhsEys
https://youtu.be/w1AhrEhQ0mg

This is only scratching the surface of the youtubers and celebrities who decided to make entire videos dedicated to hate comments. There are likely thousands more who haven't. I could spend another hour and get another 50+ links for you if you want, but I think I've made my point. You cannot possibly tell me, without some sort of evidence, that popularity does not attract hate.

And yes, those are basically the same circumstances that led him to get hate, specifically of the religious-extremist type. The point was twofold: to show that trolls and idiots are not somehow specifically targeting her and her alone, but simply attacking her because she's popular and has political opinions (which everyone does), and that there is absolutely no harm being done. He takes it in stride. It fuels him. It's not a good thing that he's being attacked, no. But it's certainly not causing any harm, so where's your evidence that it's caused harm to Anita? Sure, she's expressed sincere disappointment before, but I've seen no legitimate emotional (or otherwise) harm.

I'd like to stress that. Harassment is not good. I'm not saying people should continue to harass, but you're blowing it out of proportion yourself. She's not a baby; she can handle 13-year-old trolls saying mean things to her.

You're trying to use anecdotal evidence to make this argument. The opinions of whatever professors and classes you have available to you are not an adequate sample.
You're trying to tell me that I'm not allowed to cite the expertise of people who have devoted their entire lives to studying sociology and feminism? Let's compare:
Me: The support of multiple doctorates in Sociology (30+ years of relevant experience, studies, and theses)
You: Conjecture.

There is an option to report people for having a different opinion.
Yeah, and when you select it, it says "We do not remove tweets for differing opinions" or something like that. We've been over this, haven't we?

EDIT: Removed one of the links, I confused some stuff :/
« Last Edit: February 09, 2016, 08:33:40 PM by Ipquarx »

Yeah, and when you select it, it says "We do not remove tweets for differing opinions" or something like that. We've been over this, haven't we?
It has to be in violation of a rule for them to remove it.
Though it still can be used for censorship.
« Last Edit: February 09, 2016, 08:49:18 PM by Rainzx¹ »

Because a 'safe space' is just a name, much like 'pro-life' is a name. The titles that people assign to things do not always properly illustrate what's actually going on.

The idea of a safe space is to actively remove and prevent discussion that is contrary to the emotional well-being of other users. This is a problem because it means curating ideas off a website just because they are in opposition to someone else's worldview. If some user's twitter account is promoting anti-feminist beliefs, then a feminist can chime in and claim that the user is creating an 'unsafe space' for them, thus getting the user's posts removed and their account suspended. The original problem was that the feminist chose to read the anti-feminist's posts, but that doesn't matter on a website that's trying to curate content rather than allow people to communicate with others as they choose.
The only context I've ever heard "safe space" in was for a space in which protected groups had the freedom to speak without fear of harassment or being spoken over. As in, insular spaces like a club meeting in a university, or the back room of a coffee shop, or a specific space on the internet. As in, defined "this is a safe space for X" places. I personally think there is certainly a place for that in the world, but I don't think anyone worth listening to is advocating for twitter as a whole to adopt this philosophy.

As for making twitter safer, it's impossible to have a perfect moderation team that only bans things "worth banning", because what is "worth banning" is unclear. Going too far in either direction (lax curation or extensive curation) produces nasty side effects. Clearly some kind of balance has to be struck.

I believe the argument actually being made, at least by people like Anita, is that twitter is imbalanced towards "too little moderation". That's why this committee has been assembled. As someone pointed out, there's a range of beliefs among those they picked for this committee, so I think that a good balance will be struck.

There is an option to report people for having a different opinion.
https://support.twitter.com/forms/abusiveuser

If you had looked a bit further, you may have noticed that they don't screen content unless it is explicitly against their rules.

Firmly believing that something is good and true and right (or at least better than what we have now) and wanting something to be changed are mutually inclusive.

If you look at basically any popular youtuber you'll see they have lots and lots of hate comments. Most are buried underneath the top comments, but they're nonetheless there. The same goes for twitter; it's just a lot harder to see because the UI is worse.
~REMOVED LINKDUMP~

This is only scratching the surface of the youtubers and celebrities who decided to make entire videos dedicated to hate comments. There are likely thousands more who haven't. I could spend another hour and get another 50+ links for you if you want, but I think I've made my point. You cannot possibly tell me, without some sort of evidence, that popularity does not attract hate.

And yes, those are basically the same circumstances that led him to get hate, specifically of the religious-extremist type. The point was twofold: to show that trolls and idiots are not somehow specifically targeting her and her alone, but simply attacking her because she's popular and has political opinions (which everyone does), and that there is absolutely no harm being done. He takes it in stride. It fuels him. It's not a good thing that he's being attacked, no. But it's certainly not causing any harm, so where's your evidence that it's caused harm to Anita? Sure, she's expressed sincere disappointment before, but I've seen no legitimate emotional (or otherwise) harm.

I'd like to stress that. Harassment is not good. I'm not saying people should continue to harass, but you're blowing it out of proportion yourself. She's not a baby; she can handle 13-year-old trolls saying mean things to her.
You're trying to tell me that I'm not allowed to cite the expertise of people who have devoted their entire lives to studying sociology and feminism? Let's compare:
Me: The support of multiple doctorates in Sociology (30+ years of relevant experience, studies, and theses)
You: Conjecture.
You can want something to be changed without wanting it to be legislated. For example, she could want private websites like reddit or tumblr or twitter to curate their content towards her worldview. That wouldn't violate any legislation; it would just be a company upholding some rules on a platform it owns. And websites already do this.

Linking a bunch of videos of people getting hate comments isn't really conducive to an argument. We can't compare the magnitudes of harassment different people get using the videos or articles they put together, because obviously they're posting some arbitrarily sized sample of the whole. How much hate mail a person gets relative to other people isn't all we should take into consideration when weighing its seriousness anyway. She clearly gets a significant amount of harassment. Whether or not someone like Richard Dawkins would take it in stride is irrelevant to how it affects her emotionally. And of course you haven't seen this harm. Anita keeps a professional persona throughout her public appearances and youtube videos. There isn't really an opportunity for you to see the emotional effect any of this has had on her. And once again, you're downplaying her harassment as "13-year-old trolls".

You're claiming to understand the extent to which she is harassed without being her or having seen her inbox or having read the tweets she gets. You can't just say I'm "blowing it out of proportion". You're just downplaying this harassment because it suits your worldview, not because you actually have an understanding of the extent to which she is harassed.

You can cite their expertise, but you can't use their viewpoints and beliefs as some sort of benchmark for the collective consciousness of feminism and sociology. Especially since you're just saying that they agree with you and I can't actually verify that.
« Last Edit: February 09, 2016, 08:49:08 PM by ultimamax »

I've heard of this before, actually, in another situation. It didn't end well.

This white woman was elected (actually no one voted for her, but no one else ran) as some kind of college campus adviser. Then it ended with her saying that cis white men are scum. She resigned.

The only context I've ever heard "safe space" in was for a space in which protected groups had the freedom to speak without fear of harassment or being spoken over. As in, insular spaces like a club meeting in a university, or the back room of a coffee shop, or a specific space on the internet. As in, defined "this is a safe space for X" places. I personally think there is certainly a place for that in the world, but I don't think anyone worth listening to is advocating for twitter as a whole to adopt this philosophy.
And in the context of internet websites, a 'safe space' implies censorship, as it's impossible to avoid people being 'harassed' and spoken over without removing people's comments. Glad we're in agreement on that.

As for making twitter safer, it's impossible to have a perfect moderation team that only bans things "worth banning", because what is "worth banning" is unclear. Going too far in either direction (lax curation or extensive curation) produces nasty side effects. Clearly some kind of balance has to be struck.
Curation should purely be for spam. For instance, a twitter bot that only posts the exact same ad for cheap online viagra.

edit: and content that's illegal, like child pornography. That much is obvious, IMO.

I believe the argument actually being made, at least by people like Anita, is that twitter is imbalanced towards "too little moderation". That's why this committee has been assembled. As someone pointed out, there's a range of beliefs among those they picked for this committee, so I think that a good balance will be struck.
And she's wrong, because Twitter is already a website where you can curate your experience exactly the way you want. If someone is sending you messages that hurt your feelings, you can block them. If one of the people you follow starts posting things you don't like, you unfollow them and look for someone else to follow.

When we start talking about curating 'unfit content' on a non-personal level, that's when people like Anita step in and say, "Because I do not want to see this, nobody can."

And in the context of internet websites, a 'safe space' implies censorship, as it's impossible to avoid people being 'harassed' and spoken over without removing people's comments. Glad we're in agreement on that.

Curation should purely be for spam. For instance, a twitter bot that only posts the exact same ad for cheap online viagra.
And she's wrong, because Twitter is already a website where you can curate your experience exactly the way you want. If someone is sending you messages that hurt your feelings, you can block them. If one of the people you follow starts posting things you don't like, you unfollow them and look for someone else to follow.

When we start talking about curating 'unfit content' on a non-personal level, that's when people like Anita step in and say, "Because I do not want to see this, nobody can."
Yep. Safe spaces employ censorship. Glad we agree. However, as I said, nobody worth listening to is looking to make Twitter a "safe space" of this manner.

That's a belief. Some people would say twitter should be curated for hate speech. It depends on what Twitter wants to achieve. Not everything is obligated to be a 100% non-curated open forum.

Twitter isn't a 100% self-curated experience because people can tweet at you regardless of whether you actually want them to. Even if you block them, you have to have interacted with them once already in order to know to block them, since pre-emptive blocking isn't really feasible. Blocking is also easily nullified by 1. bypassing blocks with alternate accounts and 2. harassment at such a scale that blocking can't realistically stop it. I would say the only sort of networking platform I'd consider 100% self-curated is a messaging platform like Skype or Telegram, something you use explicitly to talk to certain people rather than as a forum.

As far as we know, Anita is only advocating against harassment, not for the personal tailored curation of Twitter. Harassment's legal definition is probably a good standard by which moderators could make decisions.

That's a belief. Some people would say twitter should be curated for hate speech. It depends on what Twitter wants to achieve. Not everything is obligated to be a 100% non-curated open forum.
You're exaggerating what will happen if Twitter doesn't employ 'safe spaces'. Twitter is already not an uncurated open forum. You can block users, unfollow users, and in some cases download software to automatically block lists of people deemed unsavory by like-minded idiots. Twitter will not become 4chan just because some people don't want SJW moderators combing through ideas and deciding which are 'okay' or not.
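To illustrate the "download software to automatically block lists of people" point: tools like Blockbot work, in principle, by subscribing your account to a shared list of IDs and blocking each one for you. This is only a rough sketch of that idea; the names and structure here are hypothetical, not Blockbot's actual code or Twitter's real API.

```python
# Hypothetical sketch of a shared-blocklist subscription.
# SHARED_BLOCKLIST stands in for a list curated by third-party maintainers.
SHARED_BLOCKLIST = {"user123", "user456"}

class Account:
    """Toy stand-in for a Twitter account's block list."""

    def __init__(self):
        self.blocked = set()

    def block(self, user_id):
        # On the real platform this would hit a block endpoint.
        self.blocked.add(user_id)

    def subscribe(self, blocklist):
        # Auto-block everyone on the shared list, no interaction required.
        for user_id in blocklist:
            self.block(user_id)

me = Account()
me.subscribe(SHARED_BLOCKLIST)
print(sorted(me.blocked))  # ['user123', 'user456']
```

The point being that this kind of pre-emptive, opt-in self-curation already exists without any central moderators being involved.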

Twitter isn't a 100% self-curated experience because people can tweet at you regardless of whether you actually want them to. Even if you block them, you have to have interacted with them once already in order to know to block them, since pre-emptive blocking isn't really feasible.
I love this argument because it's extremely easy to show why it's wrong.  

First scenario: No moderators, only self-moderation
1. SeventhSandwich tweets hate speech at Suey Park
2. Suey Park blocks SeventhSandwich (Interaction occurs in this part, as per your post)
3. SeventhSandwich creates alt accounts and continues to harass Suey Park

Second scenario: Moderators, plus self-moderation
1. SeventhSandwich tweets hate speech at Suey Park
2. Suey Park blocks SeventhSandwich and reports him, resulting in his account being deleted and his posts being erased (Interaction also occurs here)
3. SeventhSandwich creates alt accounts and continues to harass Suey Park

Both these scenarios have literally the same ending; however, any user who likes SeventhSandwich's posts and wants to view them is no longer able to in scenario two. Their twitter experience is lessened because Suey Park is unwilling to curate her own content. The number of voices being heard on the website is fewer, while the amount of harassment remains the same.

You're exaggerating what will happen if Twitter doesn't employ 'safe spaces'. Twitter is already not an uncurated open forum. You can block users, unfollow users, and in some cases download software to automatically block lists of people deemed unsavory by like-minded idiots. Twitter will not become 4chan just because some people don't want SJW moderators combing through ideas and deciding which are 'okay' or not.


I love this argument because it's extremely easy to show why it's wrong.  

First scenario: No moderators, only self-moderation
1. SeventhSandwich tweets hate speech at Suey Park
2. Suey Park blocks SeventhSandwich (Interaction occurs in this part, as per your post)
3. SeventhSandwich creates alt accounts and continues to harass Suey Park

Second scenario: Moderators, plus self-moderation
1. SeventhSandwich tweets hate speech at Suey Park
2. Suey Park blocks SeventhSandwich and reports him, resulting in his account being deleted and his posts being erased (Interaction also occurs here)
3. SeventhSandwich creates alt accounts and continues to harass Suey Park

Both these scenarios have literally the same ending; however, any user who likes SeventhSandwich's posts and wants to view them is no longer able to in scenario two. Their twitter experience is lessened because Suey Park is unwilling to curate her own content. The number of voices being heard on the website is fewer, while the amount of harassment remains the same.

As I've tried to say before, I'm not saying Twitter should employ 'safe spaces' at all, nor am I sure what that means. All I'm arguing is that there is too much harassment, and that Twitter could be safer in general. I never said or implied that Twitter would get as bad as 4chan.

In that first quote, I meant curated as in by moderators or anyone but oneself. Self-curation was not supposed to be part of that argument. So to restate,
"Not everything is obligated to be a 100% uncensored open forum."
Twitter is still not 100% uncensored though. But clearly Anita and other members of this council think it's "too uncensored".

I think the scenarios you presented were too narrow.
  • The banning of accounts, despite being bypassable, is still a deterrent. Effective IP banning / email banning is also a strong deterrent. Some people aren't savvy enough to change their IP. But many would also be too apathetic to bother changing their IP or making a new account/email.
  • Harassers, especially trolls, are likely to have multiple targets, so Suey Park reporting SeventhSandwich and getting him banned means that others don't have to block SeventhSandwich on the good chance that he harasses someone else. So in cases where there is a clear mutual interest for Twitter and Suey Park to have this guy banned, that user should absolutely be banned.
  • While the difference between harassment and disagreement can be blurry sometimes, there are demonstrably useless posts that contribute nothing to the discourse and can be banned. If a harasser's twitter history is mostly dedicated to insulting people, and not contributing ideas of any merit, then banning them wouldn't be a loss, because their voice was never valuable in the first place. Twitter can kick them to the curb.
  • If SeventhSandwich had fans who liked reading his posts, and he's tweeting hate speech at someone, it's good that we aren't perpetuating his thoughts by providing a platform for his message. If there were people who genuinely liked some of the things he tweeted, then it's Seventh's fault for doing something that's bannable.
  • While Suey Park may not be able to block SeventhSandwich amid the torrent of hate mail she's getting, someone who wants to support her can report SeventhSandwich for her instead, thereby lightening the load on Suey without requiring her to interact with her harasser(s) or to block this particular guy herself.
  • Not all bans need to be predicated on a report. For instance, if there is clearly some kind of organized assault against Suey Park, Suey doesn't have to go and block all of them, something that might be infeasible for a single person. If a moderator team were to take notice of a large influx of harassment towards her, they could take action instead. Moderator teams are better equipped to deal with these harassers. They have more people at their disposal, they have data about the harassing users at their disposal (e.g. if a bunch of really new accounts harassed someone, they likely made those accounts specifically to harass Suey; their voices are not valuable, ban them), and there is a mutual interest for them to rid Twitter of those who contribute little and detract greatly from the conversation.
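The heuristic in that last bullet (flag a sudden influx of brand-new accounts targeting one user) could be sketched roughly like this. The thresholds and function name are purely hypothetical, not anything Twitter is known to actually do:

```python
from datetime import datetime, timedelta

# Assumed thresholds, chosen for illustration only.
NEW_ACCOUNT_AGE = timedelta(days=7)   # "really new" account cutoff
INFLUX_THRESHOLD = 20                 # how many new accounts counts as a pile-on

def flag_coordinated_influx(mentions, now):
    """mentions: list of (sender_account_created, tweet_time) pairs,
    all aimed at one target. Returns True when enough recently created
    accounts are piling on that a moderator team should review."""
    new_senders = [
        created for created, sent in mentions
        if now - created <= NEW_ACCOUNT_AGE
    ]
    return len(new_senders) >= INFLUX_THRESHOLD

now = datetime(2016, 2, 10)
# 25 accounts, all created the day before, tweeting at the same target:
pile_on = [(datetime(2016, 2, 9), now)] * 25
print(flag_coordinated_influx(pile_on, now))  # True
```

The idea is simply that moderators can act on signals (account age, volume, timing) that no individual target can see from her end.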

A moderated forum such as this can't be absolutely perfect. Moderators can only be so effective. What it comes down to is how much you value open communication versus safer communication.
« Last Edit: February 09, 2016, 10:42:11 PM by ultimamax »

All of your points rely on the assumption that the 'hate speech' censored by moderators is the kind of slur-laden nonsense that most people associate with 'hate speech'.

If the Blockbot moderation logs are any clue to what future Twitter policies could be like, something like posting a negative comment to someone who is already receiving large amounts of negative comments (called 'dogpiling' by the Blockbot folks) could be considered hate speech and harassment. Recruiting teams of moderators to ban 'dogpilers' en masse is just a recipe for disaster. You're just begging to ban people who weren't posting anything outright inflammatory or defamatory, but just giving an opposing viewpoint. Then you're back to the original issue of curating other people's content.

In fact, you seem to be pretty much blatantly supporting the idea of banning 'dogpilers' en masse right here:
If a moderator team were to take notice of a large influx of harassment towards her, they could take action instead. Moderator teams are better equipped to deal with these harassers. They have more people at their disposal, they have data about the harassing users at their disposal (e.g. if a bunch of really new accounts harassed someone, they likely made those accounts specifically to harass Suey; their voices are not valuable, ban them)

I take issue with another thing even more though:
there is a mutual interest for them to rid Twitter of those who contribute little and detract greatly from the conversation
People who are appointed as moderators do not get to decide what is 'mutually in the interest' of both themselves and the community. When admins try to decide what the community wants instead of giving the community the right to choose, you end up with the whole Reddit Ellen Pao fiasco: a bunch of people very angry at website admins trying to craft the community they want instead of the one that forms organically.

Back to your original list, though. If you want to get rid of the issues associated with self-curation on Twitter, the solution is to give users better tools for self-curation. Giving those tools to moderators and paying them to decide what's 'okay' for people to read and what's 'not okay' is just a recipe for disaster. There shouldn't even be any debate on this point, honestly. If users have the full ability to tailor their communications exactly how they want, then any further calls for outside moderation are just attempts to prevent other people from reading what their opponents are saying.

their voices are not valuable, ban them
You do not get to decide whose voices are valuable. That's a choice that's supposed to be left up to the users. This underlying philosophy of "I know what you want better than you do, so shush" is exactly why we have the FCC, the MPAA, and all these other de facto moderator agencies that absolutely nobody likes.
« Last Edit: February 10, 2016, 12:25:47 AM by SeventhSandwich »

Can I use this to report terrorists now?  :cookieMonster: