Online abuse


Twitter announces global change to algorithm in effort to tackle harassment | Technology


Twitter is announcing a global change to its ranking algorithm this week, its first step toward improving the “health” of online conversations since it launched a renewed effort to address rampant trolling, harassment and abuse in March.

“It’s shaping up to be one of the highest-impact things that we’ve done,” the chief executive, Jack Dorsey, said of the update, which will change how tweets appear in search results or conversations. “The spirit of the thing is that we want to take the burden off the person receiving abuse or mob-like behavior.”

Social media platforms have long struggled to police acceptable content and behavior on their sites, but external pressure on the companies increased significantly following the revelation that a Russian influence operation used the platforms in coordinated campaigns around the 2016 US election.

Facebook and Google have largely responded by promising to hire thousands of moderators and improve their artificial intelligence tools to automate content removal. Twitter’s approach, which it outlined to reporters in a briefing on Monday, is distinct because it is content neutral and will not require more human moderators.

“A lot of our past action has been content based, and we are shifting more and more to conduct,” Dorsey said.

Del Harvey, Twitter’s vice-president of trust and safety, said that the new changes were based on research that found that most of the abuse reports on Twitter originate in search results or the conversations that take place in the responses to a single tweet. The company also found that less than 1% of Twitter accounts made up the majority of abuse reports and that many of the reported tweets did not actually violate the company’s rules, despite “detract[ing] from the overall experience” for most users.

The new system will use behavioral signals to assess whether a Twitter account is adding to – or detracting from – the tenor of conversations. For example, if an account tweets at multiple other users with the same message, and all of those accounts either block or mute the sender, Twitter will recognize that the account’s behavior is bothersome. But if an account tweets at multiple other accounts with the same message, and some of them reply or hit the “heart” button, Twitter will assess the interactions as welcome. Other signals will include whether an account has confirmed an email address or whether an account appears to be acting in a coordinated attack.

With these new signals, Harvey explained, “it didn’t matter what was said; it mattered how people reacted.”

The updated algorithm will result in certain tweets being pushed further down in a list of search results or replies, but will not delete them from the platform. Early experiments have resulted in a 4% decline in abuse reports from search and an 8% drop in abuse reports in conversations, said David Gasca, Twitter’s director of product management for health.

This is not the first time that Twitter has promised to crack down on abuse and trolling on its platform. In 2015, then CEO Dick Costolo acknowledged that the company “sucks at dealing with abuse and trolls”. But complaints have continued under Dorsey’s leadership, and in March, the company decided to seek outside help, issuing a request for proposals for academics and NGOs to help it come up with ways to measure and promote healthy conversations.

Dorsey and Harvey appeared optimistic that this new approach will have a significant impact on users’ experience.

“We are trying to strike a balance,” Harvey said. “What would Twitter be without controversy?”



Twitter not protecting women from abuse, says Amnesty | Technology


Twitter is failing to prevent online violence and abuse against women, creating a toxic environment for them, Amnesty International has claimed.

In a report published on Wednesday, the day Twitter marked 12 years since the first tweet, Amnesty said the social network responded inconsistently when abuse was highlighted, even when it violated its own rules.

The human rights group accused Twitter of failing to respect women’s rights, with users in the dark as to how it interpreted and enforced its policies intended to prevent such toxic content. The result, Amnesty said, was death threats, rape threats and racist, transphobic and homophobic abuse aimed at women.

A survey of 1,100 British women carried out for the report found that just 9% thought Twitter was doing enough to stop violence and abuse against women, while 78% did not feel it was a place where they could share their opinion without receiving such vitriol.

Kate Allen, director of Amnesty International UK, said Twitter had become a “toxic place for women”. She said: “For far too long Twitter has been a space where women can too easily be confronted with death or rape threats, and where their genders, ethnicities and sexual orientations are under attack.

“The trolls are currently winning, because despite repeated promises, Twitter is failing to do enough to stop them. Twitter must take concrete steps to address and prevent violence and abuse against women on its platform, otherwise its claim to be on women’s side is meaningless.”

The report was based on interviews with more than 80 women, including politicians, journalists, and regular users in the UK and USA. While it found that public figures were often targets, others also experienced abuse – particularly if they spoke out about sexism or used campaign hashtags.

Amnesty said that while movements such as #MeToo on social media can be empowering, participants often faced a backlash.

Amnesty documented how women from ethnic or religious minorities, LGBTI women, non-binary individuals and those with disabilities were targeted with specific abuse against their identities. It said this could have the effect of “driving already marginalised voices further out of public conversations”.

Twitter said it disagreed with Amnesty’s findings, saying that it “cannot delete hatred and prejudice from society”. It said it had made more than 30 changes to its platform in the past 16 months to improve safety, including increasing the instances of action it takes on abusive tweets.



Make Facebook liable for content, says report on UK election intimidation | Society


Theresa May should consider the introduction of two new laws to deter the intimidation of MPs during elections and force social media firms to monitor illegal content, an influential committee has said.

The independent Committee on Standards in Public Life, which advises the prime minister on ethics, has called for the introduction within a year of a new specific offence in electoral law to halt widespread abuse when voters go to the polls.

The watchdog will recommend another law to shift the liability for illegal content on to social media firms such as Facebook and Google, a legal change which will be easier once Britain leaves the European Union.

Both changes form part of the hard-hitting conclusions of an inquiry into intimidation experienced by parliamentary candidates in this year’s election campaign.

Other recommendations include:

  • Social media firms should make decisions quickly to take down intimidatory content.
  • Political parties should, by December 2018, draw up a joint code of conduct on intimidatory behaviour during election campaigns.
  • The National Police Chiefs’ Council should ensure that police are trained to investigate offences committed through social media.
  • Ministers should bring forward rules so that council candidates will no longer be required to release their home addresses.

Lord Bew, the committee chair, said: “This level of vile and threatening behaviour, albeit by a minority of people, against those standing for public office is unacceptable in a healthy democracy. We cannot get to a point where people are put off standing, retreat from debate, and even fear for their lives as a result of their engagement in politics.

“This is not about protecting elites or stifling debate, it is about ensuring we have a vigorous democracy in which participants engage in a responsible way which recognises others’ rights to participate and to hold different points of view.”

A YouGov survey of 1,616 adults last month, commissioned by the committee, found that 49% either agreed or were ambivalent when asked if MPs who received abuse brought it upon themselves. But 62% found it highly unacceptable that any member of the public should send abusive messages to MPs on social media.

A majority of the public – 60% – found it unacceptable that MPs had been disrespectful to members of the public on social media.

MPs reported being subjected to weeks of abuse leading up to the general election, including racism, antisemitism and death threats.

The shadow home secretary, Diane Abbott, was found by Amnesty to have received 45% of all abusive tweets sent during the election, many of which were racist and sexist.

Dozens of MPs had already moved to improve their security after Labour MP Jo Cox was murdered by a rightwing extremist in 2016.

The report said that the intimidation of parliamentary candidates was of particular significance because of the threat it posed to the integrity of the democratic process.

“A new electoral offence of intimidating parliamentary candidates and party campaigners during an election should be considered. This would serve to highlight the seriousness of the issue, result in more appropriate sanctions, and serve as a deterrent,” it said.

Facebook, Twitter and Google “are not simply platforms for the content that others post” because they play a role in shaping what users see, and so “must take more responsibility for illegal material”, the committee said.

They were not liable largely because of an EU directive that treated them as “hosts” of online content, the watchdog said, but May’s commitment to Brexit means the government can introduce laws to make companies responsible once the UK leaves the EU.

The report said social media was “the most significant factor” driving harassment, abuse and intimidation of 2017 general election candidates, which included threats of violence and sexual violence, as well as damage to property.

“Some have felt the need to disengage entirely from social media because of the abuse they face, and it has put off others who may wish to stand for public office,” the report said.

“Not enough has been done. The committee is deeply concerned about the limited engagement of the social media companies in tackling these issues.”

The committee said it was concerned about the impact on the diversity of a representative democracy and said parties had an “important responsibility” to support female, black and minority ethnic and LGBT candidates.

Abbott welcomed the proposal for new obligations on social media companies but questioned whether it would be right to call for new legislation covering intimidation. “I’m not persuaded that yet more legislation is required. Intimidation, threatening behaviour and violence are all illegal. These laws need to be enforced for everyone,” she said.

She questioned whether she could work with all political opponents given that some were accused of being behind some social media campaigns. “Yes, of course, political parties should work together to combat intimidation. But some of my opponents have engaged in dog-whistle politics, and not just at election time,” she said.



YouTube investigates reports of child abuse terms in auto-fill searches | Technology


YouTube is investigating reports that its auto-fill search features are suggesting “profoundly disturbing” child abuse terms.

Users reported seeing auto-suggestions of “s*x with your kids” and other variants after entering the phrase “how to have” in the search box on the Google-owned site. Experts have speculated that the search terms – several of which use the asterisked word “s*x” – may have been deliberately aimed at embarrassing the site, avoiding YouTube’s filters for terms such as “sex”.

The latest incident comes days after major brands, including Mars, Lidl and Adidas, pulled their adverts from Google and YouTube after predatory comments were found on videos of children.

A YouTube spokeswoman said: “Earlier today our teams were alerted to this profoundly disturbing autocomplete result and we worked to quickly remove it as soon as we were made aware. We are investigating this matter to determine what was behind the appearance of this autocompletion.”

Because some of YouTube’s search algorithms are based on popularity, some have suggested that a coordinated effort by a group of people could have caused the searches to appear higher in results than they would organically.

None of the results linked to the “how to have” search showed videos of children being abused.

Following last week’s incident over comments, Johanna Wright, vice-president of product management at YouTube, said: “We’re wholly committed to addressing these issues and will continue to invest the engineering and human resources needed to get it right. As a parent and as a leader in this organisation, I’m determined that we do.”

YouTube recently announced new “toughened” guidelines on videos featuring children and “videos that attempt to pass as family friendly but are clearly not”, which it says have already resulted in thousands of videos and more than 50 channel accounts being removed from the site.



YouTube promises ‘aggressive’ action over predatory comments on videos of children | Technology


YouTube has pledged to begin taking an “even more aggressive stance” against predatory behaviour as it was reported that paedophiles are operating on the site and evading protection mechanisms.

Several global brands are reported to have pulled advertisements from YouTube on the eve of Black Friday, one of the biggest shopping days of the year, after they were alerted that they were appearing with videos exploited by paedophiles.

According to investigations by BBC News and the Times, there are estimated to be tens of thousands of predatory accounts leaving indecent comments on videos of children. Some videos are posted by paedophiles and many are innocently posted by youngsters.

Some of the comments are said to be sexually explicit, while others reportedly encourage children posting the videos to perform sexual acts.

Anne Longfield, the children’s commissioner, said the findings were “very worrying”, while the National Crime Agency said it was “vital” online platforms have robust protection mechanisms in place when they are used by children.

The BBC and the Times spoke to people from the site’s “trusted flagger” scheme who report inappropriate content or behaviour by users to YouTube employees.

Some of the volunteer moderators told the BBC there could be “between 50,000 and 100,000 active predatory accounts still on the platform”, while another told the Times there are “at least 50,000 active predators” on the site.

As well as trusted flaggers, YouTube also uses algorithms to identify inappropriate sexual or predatory comments.

But the system is said to be failing to tackle the problem and paedophiles are continuing to comment on videos of children.

Ads for several major international brands, including a global sportswear brand and food and drink giants, appear alongside the videos, raising concerns that they could be indirectly funding child abuse. The Times said Adidas, Mars, HP, Diageo, Cadbury, Deutsche Bank, Lidl and Now TV were among those who had asked for their advertising to be removed on the eve of Black Friday.

YouTube said it had noticed a growing trend around content “that attempts to pass as family-friendly, but is clearly not” in recent months and announced new ways it was “toughening our approach”.

Johanna Wright, vice-president of product management at YouTube, said in a blog post: “We have historically used a combination of automated systems and human flagging and review to remove inappropriate sexual or predatory comments on videos featuring minors.

“Comments of this nature are abhorrent and we work … to report illegal behaviour to law enforcement. Starting this week we will begin taking an even more aggressive stance by turning off all comments on videos of minors where we see these types of comments.”

Longfield told the BBC: “This is a global platform and so the company need to ensure they have a global response. There needs to be a company-wide response that absolutely puts child protection as a number one priority, and has the people and mechanisms in place to ensure that no child has been put in an unsafe position while they use the platform.”



Twitter further tightens abuse rules in attempt to prove it cares | Technology


Twitter is introducing new rules around hate symbols, sexual advances and violent groups, in an effort to counter perceptions that the social network is not doing enough to protect those who feel silenced on the site.

The company was planning to announce the new rules later this week, but they leaked in an email to Wired magazine, which published the changes on Tuesday.

“We hope our approach and upcoming changes, as well as our collaboration with the Trust and Safety Council, show how seriously we are rethinking our rules and how quickly we’re moving to update our policies and how we enforce them,” a Twitter spokesperson said. The company had launched its Trust and Safety Council in February 2016, with membership of more than 40 organisations including the US Anti-Defamation League, Samaritans and the Internet Watch Foundation to help Twitter “more efficiently and quickly” canvass for opinion from experts.

The changes come a few days after Twitter’s chief executive, Jack Dorsey, acknowledged the company’s failings in a short series of tweets posted on the day of the #womenboycotttwitter protest. The protest was sparked by the company locking the account of actress Rose McGowan, after she sent a series of tweets attacking those she claimed had enabled the abuse of Hollywood mega-producer Harvey Weinstein.



“We see voices being silenced on Twitter every day. We’ve been working to counteract this for the past 2 years. We prioritized this in 2016. We updated our policies and increased the size of our teams. It wasn’t enough,” Dorsey wrote.

“In 2017 we made it our top priority and made a lot of progress. Today we saw voices silencing themselves and voices speaking out because we’re *still* not doing enough.”

As part of the changes, Twitter’s rules on non-consensual nudity (sometimes called “revenge porn”) and unwanted sexual advances have been strengthened, with the former expanding to include imagery that isn’t explicitly nude, such as “upskirt” pictures, and the latter ban being extended to cover bystander reports, rather than only being actionable if the target complains. Users who are the original posters of non-consensual nudity, or those who share it explicitly to harass someone, will also receive an immediate permanent ban, rather than a temporary account lock for first offenders.

The raft of rules concerning hate symbols and violent groups is completely new, although the email acknowledges that their “exact scope” is still being determined. Hateful imagery and hate symbols will now be treated as “sensitive media”, similar to how pornography is already flagged and blockable, while the company says it will “take enforcement action against organisations that use violence as a means to advance their cause”.

Twitter will also begin to take action against tweets that glorify or condone violence, as well as simple threats. “We realize that a more aggressive policy and enforcement approach will result in the removal of more content from our service. We are comfortable making this decision,” the leaked email says.


The company’s proposed changes have already been criticised as addressing the wrong problems, however. Brianna Wu, a US congressional candidate who made her name as an anti-harassment campaigner, told Dorsey that the changes “require trusting the reporting process. Your reporting outcomes are very uneven – I can report the same behavior one day and it’s acted on. The next day it’s not.

“Unless you are investing more in personnel and training staff in subjects they may not understand, this isn’t going to solve it.”

Uneven enforcement was ultimately the reason for the #womenboycotttwitter protest in the first place. McGowan’s account was locked, Twitter said, because one of her tweets contained a private phone number. When Twitter cited this as the reason for McGowan’s suspension, others noted that they had reported similar tweets and seen no enforcement action taken.



Anita Sarkeesian: ‘It’s frustrating to be known as the woman who survived #Gamergate’ | Life and style


It has been five years since the feminist critic and blogger Anita Sarkeesian became the target for a staggeringly vicious online hate campaign after producing the online video series Tropes vs Women in Video Games. Given the scale of the harassment she has been experiencing non-stop for half a decade – including a continuous barrage of rape and death threats, a bomb scare and a game in which players can punch an image of her face – it’s almost surprising to see her so relaxed and at ease, having played a couple of rounds of Mario Kart at the Guardian’s London office. It’s only when she speaks that she reveals a cautiousness most of us lack; Sarkeesian chooses her words carefully, ever mindful of what may spark even more abuse. “The biggest difference is that I don’t monitor our social media any more,” she says.

Sarkeesian is the founder of Feminist Frequency, a not-for-profit educational organisation “that analyses modern media’s relationship to societal issues such as gender, race and sexuality”. She suffered under Gamergate, the campaign conducted under the guise of representing those concerned about ethics in game journalism, but which was, in reality, a hashtagged rallying cry for those wanting to harass women in the games industry. As Feminist Frequency tweeted in June of this year, “Gamergate still exists, still harasses marginalised voices and still affects our daily lives. The abuse has never stopped.”

Nonetheless, she says: “I’ve gotten really fortunate that Feminist Frequency now has staff, and there are people who will look at it.” But it’s a double-edged sword: not having to regularly process horrific abuse means Sarkeesian finds it more difficult when she does see it. She says she finds it horrifying that those who experience online harassment are advised to just toughen up, but says that there isn’t much advice she can give. She highlights the importance of self-care, therapy and – in her case – fitness, but says that to really deal with the problem we need major cultural change.

Sarkeesian was something of a rebel growing up. The daughter of Armenian immigrants, she grew up in Toronto and moved to Orange County, California, when she was 15. She promptly got into drugs and punk music, even shaving her head in high school. Her father, a computer engineer, taught her how to build PCs and, from there, Sarkeesian learned how to make websites by building GeoCities fan pages, most of them for the musician Courtney Love. Internet culture is deeply familiar to her and yet, she feels, it has become ever more unpredictable.

“Social media companies need to change the ways in which their platforms operate. Online harassment is very easily done and there are very few consequences for it,” she says. “We’re seeing some of that happening, but it really feels like Band-Aids on a fundamentally flawed structure.” She’s more interested in what will come next: “When the next big new platform happens, have they thought about how they’re going to integrate anti-harassment structures?”

She regularly talks to people at Twitter, often considered the most problematic platform for women, to find ways to make it less toxic. Her work has also had an impact on video games, where more creators are trying to make games without so many of the cliches – damsels in distress, women as background props, lack of body diversity and so on – that have been discussed in Tropes vs Women. “It’s been so gratifying to watch the industry shift,” she says. “Change does not usually happen this quickly.”


Still, there’s more to be done. The games industry is still unfriendly to its non-fictional women. At the European Women in Games conference in London last month, several women told Sarkeesian that seeing her persevere showed them they could, too, which she finds upsetting. “I don’t want you to have to put on a shield in order to make it in this industry,” she says. She also finds it frustrating to always be known as “‘the woman who survived harassment’. It’s not a thing that I think I’ll ever get away from.”

Indeed, women are often remembered for what is done to them rather than what they do. Sarkeesian is trying to combat that trend with Feminist Frequency’s other video series Ordinary Women: Daring to Defy History and an accompanying book, co-written with Ebony Adams, due to be published in October 2018. It will tell the stories of 25 women who have been overlooked by history, from the 17th-century pickpocket Moll Cutpurse to the 19th-century pirate Ching Shih.

Living through a Trump presidency has propelled her further into activism. “The American election destroyed us,” she says. “We woke up the next morning in tears.” Her response is an online series, The FREQ Show, which asks: “What do media representations have to do with the current political climate?” Season one is under way, and topics range from companies using feminism to sell products to crisis pregnancy centres in the US that mislead women trying to get abortions.

Feminist Frequency has more than 200,000 subscribers on YouTube and works hard to encourage readers to think critically about the representation of race, gender and sexuality in pop culture through analysis of films, television, video games, books and the press. And Sarkeesian says she is as tenacious as she has ever been. “[Tropes vs Women] was the biggest thing I’ve ever done and it might be my legacy. And it was fucking hard.” Still, five years later, she has gathered a community around her and is able to open up. And, yes, even relax a little. “Not every stranger is going to hurt me in some way.”

The FREQ Show is available to stream on YouTube

