US elections 2016

Tumblr says Russia used it for fake news during 2016 election | Technology

The blogging platform Tumblr has unmasked 84 accounts that it says were used by a shadowy Russian internet group to spread disinformation during the 2016 US election campaign.

Tumblr said it uncovered the scheme in late 2017, helping an investigation that led to the indictment in February of 13 individuals linked to the Russia-based Internet Research Agency (IRA).

The announcement adds Tumblr to the list of internet platforms targeted in a social media campaign that US officials said sought to disrupt the 2016 election and help boost Donald Trump’s bid to defeat Hillary Clinton.

A Tumblr statement said it discovered the accounts “were being used as part of a disinformation campaign leading up to the 2016 US election”.

The company said it notified law enforcement, terminated the accounts, and deleted the posts while working “behind the scenes” with the US Justice Department.

Tumblr said it is now free to disclose what happened, and unveiled steps to protect the platform against further disinformation campaigns.

The blogging site said it would notify by email “anyone who liked, reblogged, replied to, or followed an IRA-linked account with the list of usernames they engaged with”.

It also listed the 84 user names linked to the Russian “troll farm” so users could see if they had interactions with the entities.

“We’re committed to transparency and want you to know everything that we know,” the statement said.

“We’ve decided to leave up any reblog chains that might be on your Tumblrs – you can choose to leave them or delete them. We’re letting you decide because the reblog chains contain posts created by real Tumblr users, often challenging or debunking the false and incendiary claims in the IRA-linked original post.”

‘It might work too well’: the dark art of political advertising online | Technology

Alan Gould was hitting a wall. It was the late 1990s, and the political advertising operative had an idea about using a relatively newfangled tool – banner ads on web sites – to promote political candidates. “It was pretty clear to me at the time that the ability to target and tailor messaging was perfect for political campaigns,” Gould recalled recently. “I did a whole presentation on the internet and the power to connect, track, do fundraising, target.”

But when Gould finished his pitches, he would be met with blank stares. “I was a very lonely pied piper,” he says.

Finally, in 1998, Gould found a political candidate who was so far behind in the polls, and so strapped for cash, that he was willing to take a risk and spend $100,000 on banner ads on the New York Times homepage. Peter Vallone, then a New York City councilmember challenging George Pataki for the governorship, gave Gould the green light for an ad buy that has since entered the history books as the first significant use of online advertising in a political campaign.

The ads themselves are lost to internet history – Gould believes he may have copies somewhere on floppy discs. But it’s not hard to draw a line from that moment to Robert Mueller’s 16 February indictment of the Internet Research Agency, which alleges that Russian agents carried out a conspiracy to interfere with a US presidential election, in large part by purchasing targeted Facebook ads designed to “encourage US minority groups not to vote”. Or to the news recently revealed in the Observer that 50m Facebook profiles were obtained and misused by the data mining company Cambridge Analytica to target voters during the 2016 presidential election.

The Vallone ads contained rudimentary versions of many of the attributes that make digital advertising such a powerful – and terrifying – force today: the ability to target specific audiences with tailored messages, then track their reaction.

“Come November 2000, I expect the question will no longer be whether Web-based political advertising works,” wrote Cyrus Krohn, then the manager of political advertising for the Microsoft Network, in a prescient 1999 column for Slate, “but whether it works too well.”

Nearly 20 years later, the world has caught up to Krohn’s concerns, with some critics making the not entirely hyperbolic argument that micro-targeted “dark advertising” on Facebook is a fundamental threat to democracy itself. Is it too late for democracy to fix itself?


In February, Donald Trump named Brad Parscale as his 2020 re-election campaign manager. The decision lends credence to what Parscale has been saying for the past year: that his Facebook advertising operation won Trump the election.

Brad Parscale, the digital media director of Donald Trump’s 2016 campaign, has been hired to lead his 2020 presidential re-election campaign. Photograph: Drew Angerer/Getty Images

Parscale had been a little-known digital marketing executive working out of Texas when he was tapped to build Trump’s campaign website in 2015. Until then, digital advertising was barely a rounding error in campaign budgets. In 2008, the year Barack Obama became the first social media candidate, candidates spent just $22.25m on online political ads, according to an analysis by Borrell Associates. That number grew significantly in 2012, but the real explosion came in 2016, when campaigns pumped $1.4bn into digital ads.

US presidential campaigns are often remembered – and understood – by their advertisements. Lyndon B Johnson’s “Daisy” ad powerfully (and controversially) set the stakes of an election in a nuclear world. George HW Bush’s “Willie Horton” attack ad still epitomizes the racist dog-whistle politics of the tough-on-crime era. The message, as much as the messenger, is a key part of the debate over who is best equipped to lead the country.

But no such public debate took place around Trump’s apparently game-changing digital political advertisements before election day.

This is partly due to a loophole in the prevailing campaign finance law, which was written in 2002 and did not include internet ads in the class of regulated “electioneering communications”. But perhaps even more important is the very nature of online advertising, which is self-serve (just sign up with a credit card and go) and highly iterative.

Parscale claims he typically ran 50,000 to 60,000 variations of Facebook ads each day during the Trump campaign, all targeting different segments of the electorate. Understanding the meaning of a single one of those ads would require knowing what the ad actually said, who the campaign targeted to see that ad, and how that audience responded. Multiply that by 100 and you have a headache; by 50,000 and you’ll start to doubt your grasp on reality. Then remember that this is 50,000 a day over the course of a campaign that lasted more than a year.

“The reason I said it might work too well,” Krohn said in a recent interview with the Guardian, “is that mass marketing went away and micro-targeting – nano-targeting – came to fruition.”

Any candidate using Facebook can put a campaign message promising one thing in front of one group of voters, while simultaneously running an ad with a completely opposite message in front of a different group of voters. The ads themselves are not posted anywhere for the general public to see (this is what’s known as “dark advertising”), and chances are, no one will ever be the wiser.

That undermines the very idea of a “marketplace of ideas”, says Ann Ravel, a former member of the Federal Election Commission who has long advocated stricter regulations on digital campaigning. “The way to have a robust democracy is for people to hear all these ideas and make decisions and discuss,” Ravel said. “With microtargeting, that is not happening.”

Parscale and his staff told reporters with Bloomberg that they used Facebook ads to target Hillary Clinton supporters with messages designed to make them sit the election out, including reminders of her own forays into dog-whistle politics from the 1990s, which the Trump campaign hoped would discourage black voters from turning out to the polls.

That degree of political manipulation might be unsavory, but it’s also relatively old-fashioned. One digital campaign staffer (not affiliated with the Trump campaign) compared it to Richard Nixon’s Southern Strategy, only “technologically savvy”.

But new reporting by the Observer has revealed that the data analytics team that worked for Trump, Cambridge Analytica, went far beyond Nixonian dirty tricks. The firm obtained Facebook data harvested under the auspices of an academic study, then used that data to target millions of US voters based on their psychological weaknesses.

“We exploited Facebook to harvest millions of people’s profiles,” whistleblower Christopher Wylie told the Observer about the data theft. “And built models to exploit what we knew about them and target their inner demons.”

European elections

Political advertising in the United States is the wild west compared with other western democracies, which tend to have shorter election campaigns with strict regulations on the amount and type of spending permitted. Such rules may enhance the impact of digital advertising, which is much cheaper than television and largely unregulated.

The UK has seen a rapid shift to digital campaigning, following the Conservative party’s embrace of Facebook in the 2015 general election. The Tories outspent Labour by a factor of 10 on Facebook advertisements, a decision that many political observers saw as decisive. In a country that bans political ads on television, Facebook enabled the Conservatives to reach 80.65% of users in targeted constituencies with its promoted posts and video ads, according to marketing materials created by Facebook. (At some point in the past year, the company began hiding previously produced “Success Stories” about its ability to sway election results.)

The Vote Leave campaign in the 2016 Brexit referendum went on to spend almost its entire budget on Facebook advertising, an investment that resulted in about 1bn targeted digital ads being served to voters over the course of a 10-week campaign.

Though it is impossible to parse the exact impact of Facebook advertisements amid all the other factors that shape an electoral result (including organic Facebook content), the platform is increasingly cited as a factor in the growing electoral might of far-right groups in Europe.

Supporters of the right-wing Alternative for Germany (AfD) political party demonstrate outside the Chancellery in Berlin last week.

Supporters of the right-wing Alternative for Germany (AfD) political party demonstrate outside the Chancellery in Berlin last week. Photograph: Sean Gallup/Getty Images

The radical rightwing Alternative for Germany (AfD) party reportedly worked with a US campaign consultancy and Facebook itself to target German voters susceptible to its anti-immigrant message during the 2017 election in which AfD surged in popularity to become the third-largest party in parliament.

Campaigning in Italy’s recent election, which saw the rise of anti-establishment parties including the populist Five Star Movement and the far-right League, largely took place on social media. Facebook advertisements and targeting information gathered by Italian transparency group Openpolis found that the neo-fascist Brothers of Italy party ran a Facebook ad targeting Italian adults who are interested in the paramilitary police force, the carabinieri.

After the polls closed in Italy, the League’s Matteo Salvini shared some words of gratitude with the press: “Thank God for the internet. Thank God for social media. Thank God for Facebook.”

Targeting the midterms

While investigations into the 2016 US election and Brexit referendum continue, it’s worth remembering that more elections are fast approaching. Scores of countries will hold national elections in 2018, including Sweden, Ireland, Egypt, Mexico and Brazil.

In the US, candidates for the 435 congressional and 35 Senate seats that are up for grabs in November are already running campaigns on Facebook, and we may never know what they’re saying in those advertisements.

Take, for example, Paul Nehlen, a candidate who is running a Republican primary challenge against the speaker of the House, Paul Ryan, in Wisconsin. Nehlen is primarily known as a vehement antisemite who was once embraced by Steve Bannon and the Breitbart wing of the right, but was excommunicated after appearing on a white supremacist podcast.

According to his FEC filings, Nehlen spent at least $2,791.72 on Facebook ads in the final six months of 2017.

What did that money buy?

In the first instance, everything that any Facebook advertiser can get: access to one of the most powerful databases of personal information that has ever existed, with insights into individuals’ intimate relationships, political beliefs, consumer habits, and internet browsing.

Beyond that, we don’t know. Nehlen could be using Facebook to target likely voters in his district with a message about infrastructure. Or he could have taken a list of his own core supporters (he has more than 40,000 likes on Facebook), used Facebook’s “lookalike audience” tool to find other people inclined to support his particular politics, then fed them ads designed to persuade more people to join him in hating Jews.

Last fall, after Facebook had been forced to admit that, despite its initial denials, its platform had been used by foreign agents seeking to illegally influence the election, the company announced a set of reforms designed to assuage its critics – and stave off actual, enforceable regulation.

Mark Zuckerberg

Mark Zuckerberg says Facebook has taken steps to achieve ‘an even higher standard of transparency’. Photograph: Bloomberg/Bloomberg via Getty Images

Starting this summer, the platform has promised that every political ad will be linked back to the page that paid for it. The pages themselves will display every ad that they’re running, as well as demographic information about the audience that they are reaching, a measure that Mark Zuckerberg claimed would “bring Facebook to an even higher standard of transparency” than the law requires for television or other media.

A version of these reforms is already live in Canada, where users can see all the ads being run by a political candidate in a designated tab on their page.

But there is good reason to be skeptical.

Since 2014, Facebook has had a transparency tool for all ads served on the platform. Click on the upper right-hand corner of a Facebook ad and you’ll see an option reading “Why am I seeing this ad?” Click through and you’ll get an explanation of the characteristics that make you desirable to the advertiser.

So far so good, but a new study by computer scientists found that Facebook’s ad explanations were “often incomplete and sometimes misleading”, in a way that “may allow malicious advertisers to easily obfuscate ad explanations that are discriminatory or that target privacy-sensitive attributes”.

Alan Mislove, a professor of computer science at Northeastern University and one of the study’s co-authors, said that he gave Facebook credit for having the feature at all, noting that it is one of the only examples of a company offering any kind of explanation of how an algorithm actually works. But the findings do not paint a particularly pretty picture of Facebook’s ability to self-regulate.

“They’ve built this incredibly powerful platform that allows very narrow targeting, a very powerful tool that anyone on the internet can use, so that scares me,” Mislove said. “And up until very recently, there was very little accountability. You as a malicious actor on Facebook don’t even really need to obfuscate your behavior, because the only person watching is Facebook.”

Honest Ads Act

The best hope for bringing some order to the realm of digital political ads is through updating US law for the Facebook era.

A bipartisan proposal to do just that exists. In October, Senators Amy Klobuchar, Mark Warner and John McCain introduced the Honest Ads Act, which would close the loophole that allows internet ads to avoid regulation, and also require internet platforms (ie Facebook and Google) to maintain a public file of all the political ads they run and who paid for them.

Senators Amy Klobuchar and Mark Warner introduce the ‘Honest Ads Act’ at a news conference on Capitol Hill on 19 October 2017. Photograph: Michael Reynolds/EPA

But as much as we need transparency around political ads to maintain democracy, we also need a functioning democracy to get that transparency. And it may already be too late.

“In an ideal world, with a fully functioning Congress, there would be hearings around the Honest Ads Act, and you would have Facebook and Google and Twitter and experts testify to shine a light on the nature of political advertising,” said Brendan Fischer, director of FEC reform at the Campaign Legal Center. “We’re not close to that at all.”

In the absence of a fully functioning Congress, what is to be done? Should we expect Facebook to simply stop selling political ads?

Antonio Garcia Martinez, a former product manager for Facebook who helped develop its advertising tools, says that he has come to realize that political ads are simply a different beast from commercial ones, which can and should be treated differently by his former employer.

“Selling shoes needs to be different than selling politicians, even though the mechanics of it are identical,” he said. “Morally it’s different.”

Or should we pressure Facebook to stop allowing candidates with hateful or extremist views to use its tools?

“If we farm these important democratic responsibilities out to a private company, today they might be regulating antisemitism, but tomorrow they’re regulating what people can say about the Honest Ads Act,” Fischer said.

Indeed, Facebook could already be suppressing political views unfavorable to its business practices, and we would have no way of knowing. It’s possible Facebook wouldn’t even know. In response to queries about inconsistent moderation of clothing advertisements, a Facebook spokeswoman recently told the New York Times that “the company could not ask an automated system about [its] decisions”.

Frankenstein’s monster is not under any human’s control.

If this all seems positively dystopian, one person who is surprisingly sanguine is Alan Gould.

Gould left politics soon after the Vallone campaign, founded an advertising analytics firm, sold it, and is now a tech investor. He does have concerns about media literacy and Facebook’s tendency to trap people in filter bubbles, but says, “If people choose to stay in that bubble and not explore anything outside of it, that’s a statement about who they are and not about the technology.

“If you’re going to have a representative democracy, then you have to have a way to communicate with the voters and you’re going to use whatever is available, whether that’s newspapers or mail or email or Snapchat,” he said. “I don’t regret it at all.”

Reddit infiltrated by Russian propaganda in run-up to US election | Technology

Reddit has become the latest social network to admit that it was infiltrated by Russian misinformation actors in the run-up to the 2016 US election.

In a post on the social news site, Reddit’s chief executive Steve Huffman said that the company has “found and removed a few hundred accounts” which it suspects are of Russian origin, or which were linking directly to “known propaganda domains”.

“The vast majority of suspicious accounts we have found in the past months were banned back in 2015–2016 through our enhanced efforts to prevent abuse of the site generally,” Huffman added.

Reddit has been silent on Russian misinformation efforts on its site for months, despite being viewed as a likely target as far back as September 2017, when the US senator Mark Warner suggested that an investigation into Russian misinformation should expand beyond the initial focus on Twitter and Facebook.

The company appears to have been prompted to speak by a leak from the Internet Research Agency, a notorious “troll farm” which was behind many of the misinformation efforts to date. The documents, seen by the Daily Beast, showed that the agency was sharing “American proxies” for accessing Reddit, and that content from websites it created was receiving “hundreds – and sometimes thousands – [of] upvotes” on Trump-supporting subreddits.

The Daily Beast also reported that at least 21 accounts on Tumblr, the hybrid social network/blogging service owned by Oath (formerly Yahoo), were run directly by the Internet Research Agency.

In his post on Reddit, Huffman acknowledged Reddit’s previous silence on the issue. “While I know it’s frustrating that we don’t share everything we know publicly, I want to reiterate that we take these matters very seriously, and we are co-operating with congressional inquiries. We are growing more sophisticated by the day, and we remain open to suggestions and feedback for how we can improve.”

A parallel investigation into Russian misinformation efforts around the EU referendum has been less fruitful. Little hard evidence of substantial involvement has been uncovered, with technology companies arguing the lack of evidence is proof of a lack of meddling, and other observers querying whether the companies are looking in the right place at all.

Senator warns YouTube algorithm may be open to manipulation by ‘bad actors’ | Technology

Senator Mark Warner of Virginia warns of ‘optimising for outrageous, salacious, and often fraudulent content’ amid 2016 election concerns

Senator Mark Warner: ‘I’ve been increasingly concerned that the recommendation engine algorithms behind platforms like YouTube are, at best, intrinsically flawed in optimising for outrageous, salacious, and often fraudulent content.’
Photograph: Michael Reynolds/EPA

The top-ranking Democrat on the Senate intelligence committee has warned that YouTube’s powerful recommendation algorithm may be “optimising for outrageous, salacious, and often fraudulent content” or susceptible to “manipulation by bad actors, including foreign intelligence entities”.

Senator Mark Warner, of Virginia, made the stark warning after an investigation by the Guardian found that the Google-owned video platform was systematically promoting divisive and conspiratorial videos that were damaging to Hillary Clinton’s campaign in the months leading up to the 2016 election.

“Companies like YouTube have immense power and influence in shaping the media and content that users see,” Warner said. “I’ve been increasingly concerned that the recommendation engine algorithms behind platforms like YouTube are, at best, intrinsically flawed in optimising for outrageous, salacious, and often fraudulent content.”

He added: “At worst, they can be highly susceptible to gaming and manipulation by bad actors, including foreign intelligence entities.”

YouTube’s recommendation algorithm is a closely guarded formula that determines which videos are promoted in the “Up next” column beside the video player. It drives the bulk of traffic to many videos on YouTube, where over a billion hours of footage are watched each day.

However, critics have for months been warning that the complex recommendation algorithm has also been developing alarming biases or tendencies, pushing disturbing content directed at children or giving enormous oxygen to conspiracy theories about mass shootings.

The algorithm’s role in the 2016 election has, until now, largely gone unexplored.

The Guardian’s research was based on a previously unseen database of 8,000 videos recommended by the algorithm in the months leading up to the election. The database was collated at the time by Guillaume Chaslot, a former YouTube engineer who built a program to detect which videos the company recommends.

An analysis of the videos contained in the database suggests the algorithm was six times more likely to recommend videos that were damaging to Clinton than to Trump, and also tended to amplify wild conspiracy theories about the former secretary of state.

Videos that were given a huge boost by YouTube’s algorithm included dozens of clips that claimed Clinton had a mental breakdown or suffered from syphilis or Parkinson’s disease, and many others that fabricated the contents of WikiLeaks disclosures to make unfounded claims, accusing Clinton of involvement in murders or connecting her to satanic and paedophilic cults.

The videos in the database collated by Chaslot and shared with the Guardian were watched more than 3 billion times before the election. Many of the videos have since vanished from YouTube and the research prompted several experts to question whether the algorithm was manipulated or gamed by Russia.

The Alex Jones Channel, the broadcasting arm of the far-right conspiracy website InfoWars, was one of the most recommended channels in the database of videos.

In his statement, Warner added: “The [tech] platform companies have enormous influence over the news we see and the shape of our political discourse, and they have an obligation to act responsibly in wielding that power.”

Warner’s warning about potential foreign interference in YouTube’s recommendation algorithm is especially noteworthy given Google repeatedly played down the extent of Russian involvement in its video platform during testimony to the Senate committee in December.

The committee’s investigation into Russian interference in the US presidential election is ongoing but has so far mostly focused on Facebook and Twitter.

The 8,000 YouTube-recommended videos were also analysed by Graphika, a commercial analytics firm that has been tracking political disinformation campaigns. It concluded many of the YouTube videos appeared to have been pushed by networks of Twitter sock puppets and bots controlled by pro-Trump digital consultants with “a presumably unsolicited assist” from Russia.

This and other techniques may have encouraged YouTube’s recommendation algorithm into disseminating videos that were damaging to Clinton. Chaslot has said he is willing to cooperate with the Senate intelligence committee and share his database with investigators.

Correspondence made public just last week revealed that Warner wrote to Google demanding more information about YouTube’s recommendation algorithm, which he warned could be manipulated by foreign actors.

The senator asked Google what it was doing to prevent a “malign incursion” of its video platform’s recommendation system. Google’s counsel, Kent Walker, offered few specifics in his written reply, but said YouTube had “a sophisticated spam and security breach detection system to identify anomalous behavior and malignant incursions”.

Google was initially critical of the Guardian’s research, saying it “strongly disagreed” with its methodology, data and conclusions. “It appears as if the Guardian is attempting to shoehorn research, data and their conclusions into a common narrative about the role of technology in last year’s election,” a company spokesperson said. “The reality of how our systems work, however, simply doesn’t support this premise.”

However, last week, after correspondence between the Senate intelligence committee and Google was made public, revealing Warner’s written exchange with the company over the recommendation algorithm, Google offered a new statement.

“We appreciate the Guardian’s work to shine a spotlight on this challenging issue,” the new statement said, pointing to changes made since the election to discourage algorithms from promoting problematic content. “We know there is more to do here and we’re looking forward to making more announcements in the months ahead.”

On Friday, after it was informed the Guardian would imminently publish its investigation, YouTube provided an interview to the Wall Street Journal in which it laid out plans to label state-sponsored content and tackle the proliferation of conspiracy theories on the platform.

The Journal reported the plan “was early in development, so it is unclear when it would take effect – or how the site would select conspiracy theories”.

How an ex-YouTube insider investigated its secret algorithm | Technology

The methodology Guillaume Chaslot used to detect videos YouTube was recommending during the election – and how the Guardian analysed the data

Guillaume Chaslot, an ex-Google software engineer, developed a program to scrutinise YouTube’s algorithm.
Photograph: Talia Herman for the Guardian

YouTube’s recommendation system draws on techniques in machine learning to decide which videos are auto-played or appear “up next”. The precise formula it uses, however, is kept secret. Aggregate data revealing which YouTube videos are heavily promoted by the algorithm, or how many views individual videos receive from “up next” suggestions, is also withheld from the public.

Disclosing that data would enable academic institutions, fact-checkers and regulators (as well as journalists) to assess the type of content YouTube is most likely to promote. By keeping the algorithm and its results under wraps, YouTube ensures that any patterns that indicate unintended biases or distortions associated with its algorithm are concealed from public view.

By putting a wall around its data, YouTube, which is owned by Google, protects itself from scrutiny. The computer program written by Guillaume Chaslot overcomes that obstacle to force some degree of transparency.

The ex-Google engineer said his method of extracting data from the video-sharing site could not provide a comprehensive or perfectly representative sample of videos that were being recommended. But it can give a snapshot. He has used his software to detect YouTube recommendations across a range of topics and publishes the results on his website.

How Chaslot’s software works

The program simulates the behaviour of a YouTube user. During the election, it acted as a YouTube user might have if she were interested in either of the two main presidential candidates. It discovered a video through a YouTube search, and then followed a chain of YouTube-recommended titles appearing “up next”.

Chaslot programmed his software to obtain the initial videos through YouTube searches for either “Trump” or “Clinton”, alternating between the two to ensure they were each searched 50% of the time. It then clicked on several search results (usually the top five videos) and captured which videos YouTube was recommending “up next”.

The process was then repeated, this time by selecting a sample of those videos YouTube had just placed “up next”, and identifying which videos the algorithm was, in turn, showcasing beside those. The process was repeated thousands of times, collating more and more layers of data about the videos YouTube was promoting in its conveyor belt of recommended videos.

By design, the program operated without a viewing history, ensuring it was capturing generic YouTube recommendations rather than those personalised to individual users.

The data was probably influenced by the topics that happened to be trending on YouTube on the dates he chose to run the program: 22 August; 18 and 26 October; 29-31 October; and 1-7 November.

On most of those dates, the software was programmed to begin with five videos obtained through search, capture the first five recommended videos, and repeat the process five times. But on a handful of dates, Chaslot tweaked his program, starting off with three or four search videos, capturing three or four layers of recommended videos, and repeating the process up to six times in a row.

Whichever combinations of searches, recommendations and repeats Chaslot used, the program was doing the same thing: detecting videos that YouTube was placing “up next” as enticing thumbnails on the right-hand side of the video player.

His program also detected variations in the degree to which YouTube appeared to be pushing content. Some videos, for example, appeared “up next” beside just a handful of other videos. Others appeared “up next” beside hundreds of different videos across multiple dates.

In total, Chaslot’s database recorded 8,052 videos recommended by YouTube. He has made the code behind his program publicly available here. The Guardian has published the full list of videos in Chaslot’s database here.

Content analysis

The Guardian’s research included a broad study of all 8,052 videos as well as a more focused content analysis, which assessed 1,000 of the top recommended videos in the database. The subset was identified by ranking the videos, first by the number of dates they were recommended, and then by the number of times they were detected appearing “up next” beside another video.
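The two-level ranking used to pick that subset is straightforward to express. The field names below are illustrative, not taken from Chaslot's schema.

```python
def top_recommended(videos, n=1000):
    """Rank videos first by how many distinct dates they were
    recommended on, then by total 'up next' detections."""
    return sorted(
        videos,
        key=lambda v: (len(v["dates_seen"]), v["up_next_count"]),
        reverse=True,
    )[:n]
```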

We assessed the top 500 videos that were recommended after a search for the term “Trump” and the top 500 videos recommended after a “Clinton” search. Each individual video was scrutinised to determine whether it was obviously partisan and, if so, whether the video favoured the Republican or Democratic presidential campaign. In order to judge this, we watched the content of the videos and considered their titles.

About a third of the videos were deemed to be either unrelated to the election, politically neutral or insufficiently biased to warrant being categorised as favouring either campaign. (An example of a video that was unrelated to the election was one entitled “10 Intimate Scenes Actors Were Embarrassed to Film”; an example of a video deemed politically neutral or even-handed was this NBC News broadcast of the second presidential debate.)

Many mainstream news clips, including ones from MSNBC, Fox and CNN, were judged to fall into the “even-handed” category, as were many mainstream comedy clips created by the likes of Saturday Night Live, John Oliver and Stephen Colbert.

Formulating a view on these videos was a subjective process but for the most part it was very obvious which candidate videos benefited. There were a few exceptions. For example, some might consider this CNN clip, in which a Trump supporter forcefully defended his lewd remarks and strongly criticised Hillary Clinton and her husband, to be beneficial to the Republican. Others might point to the CNN anchor’s exasperated response, and argue the video was actually more helpful to Clinton. In the end, this video was too difficult for us to categorise. It is an example of a video listed as not benefiting either candidate.

For two-thirds of the videos, however, the process of judging who the content benefited was relatively uncomplicated. Many videos clearly leaned toward one candidate or the other. For example, a video of a speech in which Michelle Obama was highly critical of Trump’s treatment of women was deemed to have leaned in favour of Clinton. A video falsely claiming Clinton suffered a mental breakdown was categorised as benefiting the Trump campaign.

We found that most of the videos labeled as benefiting the Trump campaign might be more accurately described as highly critical of Clinton. Many are what might be described as anti-Clinton conspiracy videos or “fake news”. The database appeared highly skewed toward content critical of the Democratic nominee. But for the purpose of categorisation, these types of videos, such as a video entitled “WHOA! HILLARY THINKS CAMERA’S OFF… SENDS SHOCK MESSAGE TO TRUMP”, were listed as favouring the Trump campaign.

Missing videos and bias

Roughly half of the YouTube-recommended videos in the database have been taken offline or made private since the election, either because they were removed by whoever uploaded them or because they were taken down by YouTube. That might be because of a copyright violation, or because the video contained some other breach of the company’s policies.

We were unable to watch original copies of missing videos, so they were excluded from our first round of content analysis, which covered only videos we could watch. That round concluded that 84% of partisan videos were beneficial to Trump, while only 16% were beneficial to Clinton.

Interestingly, the bias was marginally larger when YouTube recommendations were detected following an initial search for “Clinton” videos. Those resulted in 88% of partisan “Up next” videos being beneficial to Trump. When Chaslot’s program detected recommended videos after a “Trump” search, in contrast, 81% of partisan videos were favourable to Trump.

That said, the “Up next” videos following from “Clinton” and “Trump” videos often turned out to be the same or very similar titles. The type of content recommended was, in both cases, overwhelmingly beneficial to Trump, with a surprising amount of conspiratorial content and fake news damaging to Clinton.

Supplementary count

After counting only those videos we could watch, we conducted a second analysis to include those missing videos whose titles strongly indicated the content would have been beneficial to one of the campaigns. It was also often possible to find duplicates of these videos.

Two highly recommended videos in the database with one-sided titles were, for example, entitled “This Video Will Get Donald Trump Elected” and “Must Watch!! Hillary Clinton tried to ban this video”. Both of these were categorised, in the second round, as beneficial to the Trump campaign.

When all 1,000 videos were tallied – including the missing videos with very slanted titles – we counted 643 videos with an obvious bias. Of those, 551 videos (86%) favoured the Republican nominee, while only 92 videos (14%) were beneficial to Clinton.

Whether missing videos were included in our tally or not, the conclusion was the same. Partisan videos recommended by YouTube in the database were about six times more likely to favour Trump’s presidential campaign than Clinton’s.
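The headline ratio follows directly from the counts reported above:

```python
pro_trump, pro_clinton = 551, 92
partisan_total = pro_trump + pro_clinton  # 643 videos with an obvious bias

trump_share = pro_trump / partisan_total      # about 86%
clinton_share = pro_clinton / partisan_total  # about 14%
ratio = pro_trump / pro_clinton               # about six to one
```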

Database analysis

All 8,052 videos were ranked by the number of “recommendations” – that is, the number of times they were detected appearing as “Up next” thumbnails beside other videos. For example, if a video was detected appearing “Up next” beside four other videos, that would be counted as four “recommendations”. If a video appeared “Up next” beside the same video on, say, three separate dates, that would be counted as three “recommendations”. (Multiple recommendations between the same videos on the same day were not counted.)
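Under that counting rule, a “recommendation” is effectively a unique (recommended video, source video, date) triple. A minimal sketch of the tally, with invented field layout:

```python
from collections import Counter

def count_recommendations(detections):
    """detections: iterable of (recommended_id, source_id, date) tuples,
    one per observed 'up next' appearance. Multiple detections of the
    same pairing on the same day are deduplicated before counting."""
    unique = {(rec, src, date) for rec, src, date in detections}
    return Counter(rec for rec, _src, _date in unique)
```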

Here are the 25 most recommended videos, according to the above metric.

  1. Trump supporter leaves CNN anchor speechless
  2. This Video Will Get Donald Trump Elected
  3. Must Watch!! Hillary Clinton tried to ban this video
  4. SR# 1271 NBC Crew – Crooked Hillary’s MASSIVE MELTDOWN at Commander-in-Chief Forum
  5. 10 Photos of MELANIA, TRUMP Wishes We’d Forget
  6. Full Interview: Donald Trump, Melania & Family with George Stephanopoulos
  7. Busted! Bill Clinton’s Face When Trump Brings Up The Rape Allegations is Priceless
  8. Donald Trump Has Won The 2016 Presidential Election
  9. Angry Ivanka Trump Walks Out Of Cosmo Interview
  10. TRUMP: the COMING LANDSLIDE ~Ancient Prophecy Documentary of Donald Trump / 2016
  12. “Obama out:” President Barack Obama’s hilarious final White House correspondents’ dinner speech
  13. Watch Live: The Final Presidential Debate
  14. Can Donald Trump win the presidential election?
  15. Michelle Obama’s EPIC Speech On Trump’s Sexual Behavior (FULL | HD)
  16. ALL LEAKED TRUMP FOOTAGE Lewd comments Made on Daughter Ivanka Mini Documentary
  17. Melania Trump – The Woman Behind Donald
  20. Bill Clinton’s Sexual Escapades
  21. Anonymous Release Bone-Chilling video of Huma Abedin every American Needs to See
  22. BREAKING: Michael Moore Admits Trump Is Right
  23. BREAKING: FBI Reopens Hillary Clinton Email Investigation
  24. Full monologue: Donald Trump roasts Hillary Clinton at Al Smith charity dinner
  25. Hillary Cheats AGAIN?? Debate #3 Earphone AND Teleprompter?? BUSTED ON TV!

Chaslot’s database also contained information about the YouTube channels used to broadcast the videos. (This data was only partial, because it was not possible to identify channels behind missing videos.) Here are the top 10 channels, ranked in order of the number of “recommendations” Chaslot’s program detected.

  1. The Alex Jones Channel
  2. Fox News
  4. The Young Turks
  5. MSNBC
  6. CBS News
  7. TheRichest
  8. The Next News Network
  9. CNN
  10. Right Side Broadcasting Network

Campaign Speeches

We searched the entire database to identify videos of full campaign speeches by Trump and Clinton, their spouses and other political figures. This was done through searches for the terms “speech” and “rally” in video titles followed by a check, where possible, of the content. Here is a list of the videos of campaign speeches found in the database.

  1. Donald Trump (382 videos)
  2. Barack Obama (42 videos)
  3. Mike Pence (18 videos)
  4. Hillary Clinton (18 videos)
  5. Melania Trump (12 videos)
  6. Michelle Obama (10 videos)
  7. Joe Biden (42 videos)

Graphika analysis

The Guardian shared the entire database with Graphika, a commercial analytics firm that has tracked political disinformation campaigns. The company merged the database of YouTube-recommended videos with its own dataset of Twitter networks that were active during the 2016 election.

The company discovered more than 513,000 Twitter accounts had tweeted links to at least one of the YouTube-recommended videos in the six months leading up to the election. More than 36,000 accounts tweeted at least one of the videos 10 or more times. The most active 19 of these Twitter accounts cited videos more than 1,000 times – evidence of automated activity.
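The account-level figures Graphika reports (accounts tweeting at least one video, accounts tweeting 10 or more times, hyperactive likely-automated accounts) amount to threshold counts over a table of tweets. A sketch, with invented field names:

```python
from collections import Counter

def account_activity(tweets):
    """tweets: iterable of (account_id, video_id) pairs, one per tweet
    linking to a database video. Returns per-account tweet counts."""
    return Counter(account for account, _video in tweets)

def threshold_counts(counts, thresholds=(1, 10, 1000)):
    """How many accounts reached each activity threshold."""
    return {t: sum(1 for c in counts.values() if c >= t) for t in thresholds}
```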

“Over the months leading up to the election, these videos were clearly boosted by a vigorous, sustained social media campaign involving thousands of accounts controlled by political operatives, including a large number of bots,” said John Kelly, Graphika’s executive director. “The most numerous and best-connected of these were Twitter accounts supporting President Trump’s campaign, but a very active minority included accounts focused on conspiracy theories, support for WikiLeaks, and official Russian outlets and alleged disinformation sources.”


YT Amplification Photograph: Graphika

Kelly then looked specifically at which Twitter networks were pushing videos that we had categorised as beneficial to Trump or Clinton. “Pro-Trump videos were pushed by a huge network of pro-Trump accounts, assisted by a smaller network of dedicated pro-Bernie and progressive accounts. Connecting these two groups and also pushing the pro-Trump content were a mix of conspiracy-oriented, ‘Truther’, and pro-Russia accounts,” Kelly concluded. “Pro-Clinton videos were pushed by a much smaller network of accounts that now identify as a ‘resist’ movement. Far more of the links promoting Trump content were repeat citations by the same accounts, which is characteristic of automated amplification.”

Finally, we shared with Graphika a subset of a dozen videos that were both highly recommended by YouTube, according to the above metrics, and particularly egregious examples of fake or divisive anti-Clinton video content. Kelly said he found “an unmistakable pattern of coordinated social media amplification” with this subset of videos.

The tweets promoting them almost always began after midnight the day of the video’s appearance on YouTube, typically between 1am and 4am EDT, an odd time of the night for US citizens to be first noticing videos. The sustained tweeting continued “at a more or less even rate” for days or weeks until election day, Kelly said, when it suddenly stopped. That would indicate “clear evidence of coordinated manipulation”, Kelly added.

YouTube statement

YouTube provided the following response to this research:

“We have a great deal of respect for the Guardian as a news outlet and institution. We strongly disagree, however, with the methodology, data and, most importantly, the conclusions made in their research,” a YouTube spokesperson said. “The sample of 8,000 videos they evaluated does not paint an accurate picture of what videos were recommended on YouTube over a year ago in the run-up to the US presidential election.”

“Our search and recommendation systems reflect what people search for, the number of videos available, and the videos people choose to watch on YouTube,” the spokesperson continued. “That’s not a bias towards any particular candidate; that is a reflection of viewer interest.” The spokesperson added: “Our only conclusion is that the Guardian is attempting to shoehorn research, data, and their incorrect conclusions into a common narrative about the role of technology in last year’s election. The reality of how our systems work, however, simply doesn’t support that premise.”

Last week, it emerged that the Senate intelligence committee wrote to Google demanding to know what the company was doing to prevent a “malign incursion” of YouTube’s recommendation algorithm – which the top-ranking Democrat on the committee had warned was “particularly susceptible to foreign influence”. The following day, YouTube asked to update its statement.

“Throughout 2017 our teams worked to improve how YouTube handles queries and recommendations related to news. We made algorithmic changes to better surface clearly-labeled authoritative news sources in search results, particularly around breaking news events,” the statement said. “We created a ‘Breaking News’ shelf on the YouTube homepage that serves up content from reliable news sources. When people enter news-related search queries, we prominently display a ‘Top News’ shelf in their search results with relevant YouTube content from authoritative news sources.”

It continued: “We also take a tough stance on videos that do not clearly violate our policies but contain inflammatory religious or supremacist content. These videos are placed behind a warning interstitial, are not monetized, recommended or eligible for comments or user endorsements.”

“We appreciate the Guardian’s work to shine a spotlight on this challenging issue,” YouTube added. “We know there is more to do here and we’re looking forward to making more announcements in the months ahead.”

The above research was conducted by Erin McCormick, a Berkeley-based investigative reporter and former San Francisco Chronicle database editor, and Paul Lewis, the Guardian’s west coast bureau chief and former Washington correspondent.



‘Fiction is outperforming reality’: how YouTube’s algorithm distorts truth | Technology


It was one of January’s most viral videos. Logan Paul, a YouTube celebrity, stumbles across a dead man hanging from a tree. The 22-year-old, who is in a Japanese forest famous as a suicide spot, is visibly shocked, then amused. “Dude, his hands are purple,” he says, before turning to his friends and giggling. “You never stand next to a dead guy?”

Paul, who has 16 million mostly teen subscribers to his YouTube channel, removed the video from YouTube 24 hours later amid a furious backlash. That was still long enough for the footage to receive 6m views and a spot on YouTube’s coveted list of trending videos.

The next day, I watched a copy of the video on YouTube. Then I clicked on the “Up next” thumbnails of recommended videos that YouTube showcases on the right-hand side of the video player. This conveyor belt of clips, which auto-play by default, is designed to seduce us into spending more time on Google’s video broadcasting platform. I was curious where they might lead.

The answer was a slew of videos of men mocking distraught teenage fans of Logan Paul, followed by CCTV footage of children stealing things and, a few clicks later, a video of children having their teeth pulled out with bizarre, homemade contraptions.

I had cleared my history, deleted my cookies, and opened a private browser to be sure YouTube was not personalising recommendations. This was the algorithm taking me on a journey of its own volition, and it culminated with a video of two boys, aged about five or six, punching and kicking one another.

“I’m going to post it on YouTube,” said a teenage girl, who sounded like she might be an older sibling. “Turn around and punch the heck out of that little boy.” They scuffled for several minutes until one had knocked the other’s tooth out.

There are 1.5 billion YouTube users in the world, which is more than the number of households that own televisions. What they watch is shaped by this algorithm, which skims and ranks billions of videos to identify 20 “up next” clips that are both relevant to a previous video and most likely, statistically speaking, to keep a person hooked on their screen.

Company insiders tell me the algorithm is the single most important engine of YouTube’s growth. In one of the few public explanations of how the formula works – an academic paper that sketches the algorithm’s deep neural networks, crunching a vast pool of data about videos and the people who watch them – YouTube engineers describe it as one of the “largest scale and most sophisticated industrial recommendation systems in existence”.

Lately, it has also become one of the most controversial. The algorithm has been found to be promoting conspiracy theories about the Las Vegas mass shooting and incentivising, through recommendations, a thriving subculture that targets children with disturbing content such as cartoons in which the British children’s character Peppa Pig eats her father or drinks bleach.

Lewd and violent videos have been algorithmically served up to toddlers watching YouTube Kids, a dedicated app for children. One YouTube creator who was banned from making advertising revenues from his strange videos – which featured his children receiving flu shots, removing earwax, and crying over dead pets – told a reporter he had only been responding to the demands of Google’s algorithm. “That’s what got us out there and popular,” he said. “We learned to fuel it and do whatever it took to please the algorithm.”

Google has responded to these controversies in a process akin to Whac-A-Mole: expanding the army of human moderators, removing offensive YouTube videos identified by journalists and de-monetising the channels that create them. But none of those moves has diminished a growing concern that something has gone profoundly awry with the artificial intelligence powering YouTube.

Yet one stone has so far been largely unturned. Much has been written about Facebook and Twitter’s impact on politics, but in recent months academics have speculated that YouTube’s algorithms may have been instrumental in fuelling disinformation during the 2016 presidential election. “YouTube is the most overlooked story of 2016,” Zeynep Tufekci, a widely respected sociologist and technology critic, tweeted back in October. “Its search and recommender algorithms are misinformation engines.”

If YouTube’s recommendation algorithm really has evolved to promote more disturbing content, how did that happen? And what is it doing to our politics?

‘Like reality, but distorted’

Guillaume Chaslot, an ex-Google software engineer. Photograph: Talia Herman for the Guardian

Those are not easy questions to answer. Like all big tech companies, YouTube does not allow us to see the algorithms that shape our lives. They are secret formulas, proprietary software, and only select engineers are entrusted to work on the algorithm. Guillaume Chaslot, a 36-year-old French computer programmer with a PhD in artificial intelligence, was one of those engineers.

During the three years he worked at Google, he was placed for several months with a team of YouTube engineers working on the recommendation system. The experience led him to conclude that the priorities YouTube gives its algorithms are dangerously skewed.

“YouTube is something that looks like reality, but it is distorted to make you spend more time online,” he tells me when we meet in Berkeley, California. “The recommendation algorithm is not optimising for what is truthful, or balanced, or healthy for democracy.”

Chaslot explains that the algorithm never stays the same. It is constantly changing the weight it gives to different signals: the viewing patterns of a user, for example, or the length of time a video is watched before someone clicks away.

The engineers he worked with were responsible for continuously experimenting with new formulas that would increase advertising revenues by extending the amount of time people watched videos. “Watch time was the priority,” he recalls. “Everything else was considered a distraction.”

Chaslot was fired by Google in 2013, ostensibly over performance issues. He insists he was let go after agitating for change within the company, using his personal time to team up with like-minded engineers to propose changes that could diversify the content people see.

He was especially worried about the distortions that might result from a simplistic focus on showing people videos they found irresistible, creating filter bubbles, for example, that only show people content that reinforces their existing view of the world. Chaslot said none of his proposed fixes were taken up by his managers. “There are many ways YouTube can change its algorithms to suppress fake news and improve the quality and diversity of videos people see,” he says. “I tried to change YouTube from the inside but it didn’t work.”

YouTube told me that its recommendation system had evolved since Chaslot worked at the company and now “goes beyond optimising for watchtime”. The company said that in 2016 it started taking into account user “satisfaction”, by using surveys, for example, or looking at how many “likes” a video received, to “ensure people were satisfied with what they were viewing”. YouTube added that additional changes had been implemented in 2017 to improve the news content surfaced in searches and recommendations and discourage the promotion of videos containing “inflammatory religious or supremacist” content.

It did not say why Google, which acquired YouTube in 2006, waited over a decade to make those changes. Chaslot believes such changes are mostly cosmetic, and have failed to fundamentally alter some disturbing biases that have evolved in the algorithm. In the summer of 2016, he built a computer program to investigate.

The software Chaslot wrote was designed to provide the world’s first window into YouTube’s opaque recommendation engine. The program simulates the behaviour of a user who starts on one video and then follows the chain of recommended videos – much as I did after watching the Logan Paul video – tracking data along the way.

It finds videos through a word search, selecting a “seed” video to begin with, and recording several layers of videos that YouTube recommends in the “up next” column. It does so with no viewing history, ensuring the videos being detected are YouTube’s generic recommendations, rather than videos personalised to a user. And it repeats the process thousands of times, accumulating layers of data about YouTube recommendations to build up a picture of the algorithm’s preferences.

Over the last 18 months, Chaslot has used the program to explore bias in YouTube content promoted during the French, British and German elections, and around global warming and mass shootings, publishing his findings on his website. Each study finds something different, but the research suggests YouTube systematically amplifies videos that are divisive, sensational and conspiratorial.

When his program found seed videos by searching the query “who is Michelle Obama?” and then followed the chain of “up next” suggestions, for example, most of the recommended videos said she “is a man”. More than 80% of the YouTube-recommended videos about the pope detected by his program described the Catholic leader as “evil”, “satanic”, or “the anti-Christ”. There were literally millions of videos uploaded to YouTube to satiate the algorithm’s appetite for content claiming the earth is flat. “On YouTube, fiction is outperforming reality,” Chaslot says.


A voter in Ohio. Trump won the election by just 80,000 votes spread across three swing states. Photograph: Ty Wright/Getty Images

He believes one of the most shocking examples was detected by his program in the run-up to the 2016 presidential election. As he observed in a short, largely unnoticed blogpost published after Donald Trump was elected, the impact of YouTube’s recommendation algorithm was not neutral during the presidential race: it was pushing videos that were, in the main, helpful to Trump and damaging to Hillary Clinton. “It was strange,” he explains to me. “Wherever you started, whether it was from a Trump search or a Clinton search, the recommendation algorithm was much more likely to push you in a pro-Trump direction.”

Trump won the electoral college as a result of 80,000 votes spread across three swing states. There were more than 150 million YouTube users in the US. The videos contained in Chaslot’s database of YouTube-recommended election videos were watched, in total, more than 3bn times before the vote in November 2016.

Even a small bias in the videos would have been significant. “Algorithms that shape the content we see can have a lot of impact, particularly on people who have not made up their mind,” says Luciano Floridi, a professor at the University of Oxford’s Digital Ethics Lab, who studies the ethics of artificial intelligence. “Gentle, implicit, quiet nudging can over time edge us toward choices we might not have otherwise made.”

Promoting conspiracy theories

Chaslot sent me a database of more YouTube-recommended videos his program identified in the three months leading up to the presidential election. It contained more than 8,000 videos – all of them detected by his program appearing “up next” on 12 dates between August and November 2016, after equal numbers of searches for “Trump” and “Clinton”.

It was not a comprehensive set of videos and it may not have been a perfectly representative sample. But it was, Chaslot said, a previously unseen dataset of what YouTube was recommending to people interested in content about the candidates – one snapshot, in other words, of the algorithm’s preferences.

Jonathan Albright, research director at the Tow Center for Digital Journalism, who reviewed the code used by Chaslot, says it is a relatively straightforward piece of software and a reputable methodology. “This research captured the apparent direction of YouTube’s political ecosystem,” he says. “That has not been done before.”

I spent weeks watching, sorting and categorising the trove of videos with Erin McCormick, an investigative reporter and expert in database analysis. From the start, we were stunned by how many extreme and conspiratorial videos had been recommended, and the fact that almost all of them appeared to be directed against Clinton.

Some of the videos YouTube was recommending were the sort we had expected to see: broadcasts of presidential debates, TV news clips, Saturday Night Live sketches. There were also videos of speeches by the two candidates – although, we found, the database contained far more YouTube-recommended speeches by Trump than Clinton.

But what was most compelling was how often Chaslot’s software detected anti-Clinton conspiracy videos appearing “up next” beside other videos.

There were dozens of clips stating Clinton had had a mental breakdown, reporting she had syphilis or Parkinson’s disease, accusing her of having secret sexual relationships, including with Yoko Ono. Many were even darker, fabricating the contents of WikiLeaks disclosures to make unfounded claims, accusing Clinton of involvement in murders or connecting her to satanic and paedophilic cults.

One video that Chaslot’s data indicated was pushed particularly hard by YouTube’s algorithm was a bizarre, one-hour film claiming Trump’s rise was predicted in Isaiah 45. Another was entitled: “BREAKING: VIDEO SHOWING BILL CLINTON RAPING 13 YR-OLD WILL PLUNGE RACE INTO CHAOS ANONYMOUS CLAIMS”. The recommendation engine appeared to have been particularly helpful to the Alex Jones Channel, which broadcasts far-right conspiracy theories under the Infowars brand.


The conspiracy theorist and talkshow host Alex Jones. Photograph: Brooks Kraft/Getty Images

There were too many videos in the database for us to watch them all, so we focused on 1,000 of the top-recommended videos. We sifted through them one by one to determine whether the content was likely to have benefited Trump or Clinton. Just over a third of the videos were either unrelated to the election or contained content that was broadly neutral or even-handed. Of the remaining 643 videos, 551 were videos favouring Trump, while only 92 favoured the Clinton campaign.

The sample we had looked at suggested Chaslot’s conclusion was correct: YouTube was six times more likely to recommend videos that aided Trump than his adversary. YouTube presumably never programmed its algorithm to benefit one candidate over another. But based on this evidence, at least, that is exactly what happened.

‘Leading people down hateful rabbit holes’

“We have a great deal of respect for the Guardian as a news outlet and institution,” a YouTube spokesperson emailed me after I forwarded them our findings. “We strongly disagree, however, with the methodology, data and, most importantly, the conclusions made in their research.”

The spokesperson added: “Our search and recommendation systems reflect what people search for, the number of videos available, and the videos people choose to watch on YouTube. That’s not a bias towards any particular candidate; that is a reflection of viewer interest.”

It was a curious response. YouTube seemed to be saying that its algorithm was a neutral mirror of the desires of the people who use it – if we don’t like what it does, we have ourselves to blame. How does YouTube interpret “viewer interest” – and aren’t “the videos people choose to watch” influenced by what the company shows them?

Offered the choice, we may instinctively click on a video of a dead man in a Japanese forest, or a fake news clip claiming Bill Clinton raped a 13-year-old. But are those in-the-moment impulses really a reflection of the content we want to be fed?

Tufekci, the sociologist who several months ago warned about the impact YouTube may have had on the election, tells me YouTube’s recommendation system has probably figured out that edgy and hateful content is engaging. “This is a bit like an autopilot cafeteria in a school that has figured out children have sweet teeth, and also like fatty and salty foods,” she says. “So you make a line offering such food, automatically loading the next plate as soon as the bag of chips or candy in front of the young person has been consumed.”

Once that gets normalised, however, what is fractionally more edgy or bizarre becomes, Tufekci says, novel and interesting. “So the food gets higher and higher in sugar, fat and salt – natural human cravings – while the videos recommended and auto-played by YouTube get more and more bizarre or hateful.”

But why would a bias toward ever more weird or divisive videos benefit one candidate over another? That depends on the candidates. Trump’s campaign was nothing if not weird and divisive. Tufekci points to studies showing that the “field of misinformation” largely tilted anti-Clinton before the election. “Fake news providers,” she says, “found that fake anti-Clinton material played much better with the pro-Trump base than did fake anti-Trump material with the pro-Clinton base.”

She adds: “The question before us is the ethics of leading people down hateful rabbit holes full of misinformation and lies at scale just because it works to increase the time people spend on the site – and it does work.”

Tufekci was one of several academics I shared our research with. Philip Howard, a professor at the Oxford Internet Institute, who has studied how disinformation spread during the election, was another. He wonders whether a further factor might have been at play. “This is important research because it seems to be the first systematic look into how YouTube may have been manipulated,” he says, raising the possibility that the algorithm was gamed as part of the same propaganda campaigns that flourished on Twitter and Facebook.

In testimony to the House intelligence committee, investigating Russian interference in the election, Google’s general counsel, Kent Walker, played down the degree to which Moscow’s propaganda efforts infiltrated YouTube. The company’s internal investigation had only identified 18 YouTube channels and 1,100 videos suspected of being linked to Russia’s disinformation campaign, he told the committee in December – and generally the videos had relatively small numbers of views. He added: “We believe that the activity we found was limited because of various safeguards that we had in place in advance of the 2016 election, and the fact that Google’s products didn’t lend themselves to the kind of micro-targeting or viral dissemination that these actors seemed to prefer.”

General counsels for Twitter, Facebook and Google prepare to testify before the House intelligence committee hearing on Russia’s use of social media to influence the election. Photograph: Shawn Thew/EPA

Walker made no mention of YouTube recommendations. Correspondence made public just last week, however, reveals that Senator Mark Warner, the ranking Democrat on the intelligence committee, later wrote to the company about the algorithm, which he said seemed “particularly susceptible to foreign influence”. The senator demanded to know what the company was specifically doing to prevent a “malign incursion” of YouTube’s recommendation system. Walker, in his written reply, offered few specifics, but said YouTube had “a sophisticated spam and security breach detection system to identify anomalous behavior and malignant incursions”.

Tristan Harris, a former Google insider turned tech whistleblower, likes to describe Facebook as a “living, breathing crime scene for what happened in the 2016 election” that federal investigators have no access to. The same might be said of YouTube. About half the videos Chaslot’s program detected being recommended during the election have now vanished from YouTube – many of them taken down by their creators. Chaslot has always thought this suspicious. These were videos with titles such as “Must Watch!! Hillary Clinton tried to ban this video”, watched millions of times before they disappeared. “Why would someone take down a video that has been viewed millions of times?” he asks.

How serious are the allegations?

The story of Donald Trump and Russia comes down to this: a sitting president or his campaign is suspected of having coordinated with a foreign country to manipulate a US election. The story could not be bigger, and the stakes for Trump – and the country – could not be higher.

What are the key questions?

Investigators are asking two basic questions: did Trump’s presidential campaign collude at any level with Russian operatives to sway the 2016 US presidential election? And did Trump or others break the law to throw investigators off the trail?

What does the country think?

While a majority of the American public now believes that Russia tried to disrupt the US election, opinions about Trump campaign involvement tend to split along partisan lines: 73% of Republicans, but only 13% of Democrats, believe Trump did “nothing wrong” in his dealings with Russia and its president, Vladimir Putin.

What are the implications for Trump?

The affair has the potential to eject Trump from office. Experienced legal observers believe that prosecutors are investigating whether Trump committed obstruction of justice. Both Richard Nixon and Bill Clinton – the only presidents to face impeachment proceedings in the last century – were accused of obstruction of justice. But Trump’s fate is probably up to the voters. Even if strong evidence of wrongdoing by him or his cohort emerged, a Republican congressional majority would probably block any action to remove him from office. (Such an action would be a historical rarity.)

What has happened so far?

Former foreign policy adviser George Papadopoulos pleaded guilty to lying to the FBI about his contacts with Russians linked to the Kremlin, and the president’s former campaign manager Paul Manafort and another aide face charges of money laundering.

When will the inquiry come to an end?

The investigations have an open timeline.

I located a copy of “This Video Will Get Donald Trump Elected”, a viral sensation that was watched more than 10m times before it vanished from YouTube. It was a benign-seeming montage of historical footage of Trump, accompanied by soft piano music. But when I played the video in slow motion, I saw that it contained weird flashes of Miley Cyrus licking a mirror. It seemed an amateurish and bizarre attempt at inserting subliminal, sexualised imagery. But it underscored how little oversight we have over anyone who might want to use YouTube to influence public opinion on a vast scale.

I shared the entire database of 8,000 YouTube-recommended videos with John Kelly, the chief executive of the commercial analytics firm Graphika, which has been tracking political disinformation campaigns. He ran the list against his own database of Twitter accounts active during the election, and concluded many of the videos appeared to have been pushed by networks of Twitter sock puppets and bots controlled by pro-Trump digital consultants with “a presumably unsolicited assist” from Russia.

“I don’t have smoking-gun proof of who logged in to control those accounts,” he says. “But judging from the history of what we’ve seen those accounts doing before, and the characteristics of how they tweet and interconnect, they are assembled and controlled by someone – someone whose job was to elect Trump.”

Chaslot and some of the academics I spoke to felt this social media activity was significant. YouTube’s algorithm may have developed its biases organically, but could it also have been nudged into spreading those videos even further? “If a video starts skyrocketing, there’s no question YouTube’s algorithm is going to start pushing it,” Albright says.

YouTube did not deny that social media propaganda might have influenced its recommendations, but played down the likelihood, stressing its system “does not optimise” for traffic from Twitter or Facebook. “It appears as if the Guardian is attempting to shoehorn research, data and their conclusions into a common narrative about the role of technology in last year’s election,” the spokesperson added. “The reality of how our systems work, however, simply don’t support this premise.”

After the Senate’s correspondence with Google over possible Russian interference with YouTube’s recommendation algorithm was made public last week, YouTube sent me a new statement. It emphasised changes it made in 2017 to discourage the recommendation system from promoting some types of problematic content. “We appreciate the Guardian’s work to shine a spotlight on this challenging issue,” it added. “We know there is more to do here and we’re looking forward to making more announcements in the months ahead.”

Content creators

A video by the Next News Network fed on debunked allegations against Bill Clinton. Photograph: YouTube

With its flashy graphics and slick-haired anchor, the Next News Network has the appearance of a credible news channel. But behind the facade is a dubious operation that recycles stories harvested from far-right publications, fake news sites and Russian media outlets.

The channel is run by anchor Gary Franchi, once a leading proponent of a conspiracy that claimed the US government was creating concentration camps for its citizens. It was the Next News Network that broadcast the fabricated claims about Bill Clinton raping a teenager, although Franchi insists he is not a fake news producer. (He tells me he prefers to see his channel as “commentating on conservative news and opinion”.)

In the months leading up to the election, the Next News Network turned into a factory of anti-Clinton news and opinion, producing dozens of videos a day and reaching an audience comparable to that of MSNBC’s YouTube channel.

Chaslot’s research indicated Franchi’s success could largely be credited to YouTube’s algorithms, which consistently amplified his videos to be played “up next”. YouTube had sharply dismissed Chaslot’s research.

I contacted Franchi to see who was right. He sent me screen grabs of the private data given to people who upload YouTube videos, including a breakdown of how their audiences found their clips. The largest source of traffic to the Bill Clinton rape video, which was viewed 2.4m times in the month leading up to the election, was YouTube recommendations.

The same was true of all but one of the videos Franchi sent me data for. A typical example was a Next News Network video entitled “WHOA! HILLARY THINKS CAMERA’S OFF… SENDS SHOCK MESSAGE TO TRUMP” in which Franchi, pointing to a tiny movement of Clinton’s lips during a TV debate, claims she says “fuck you” to her presidential rival. The data Franchi shared revealed that in the month leading up to the election, 73% of the traffic to the video – amounting to 1.2m of its views – was due to YouTube recommendations. External traffic accounted for only 3% of the views.
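The figures Franchi reported imply the video's monthly total. A sketch of that arithmetic, taking the 73% recommendation share and 1.2m recommended views at face value:

```python
# The arithmetic behind the traffic figures above: if "up next"
# recommendations drove 73% of the video's monthly traffic and that
# share amounted to 1.2m views, the month's total and the external
# share follow directly. These are back-of-envelope estimates only.

recommended_share = 0.73
recommended_views = 1_200_000
external_share = 0.03

total_views = recommended_views / recommended_share      # ~1.64m that month
external_views = total_views * external_share            # ~49,000 views

print(round(total_views))
print(round(external_views))
```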

Franchi is a professional who makes a living from his channel, but many of the other creators of anti-Clinton videos I spoke to were amateur sleuths or part-time conspiracy theorists. Typically, they might receive a few hundred views on their videos, so they were shocked when their anti-Clinton videos started to receive millions of views, as if they were being pushed by an invisible force.

In every case, the largest source of traffic – the invisible force – came from the clips appearing in the “up next” column. William Ramsey, an occult investigator from southern California who made “Irrefutable Proof: Hillary Clinton Has a Seizure Disorder!”, shared screen grabs that showed the recommendation algorithm pushed his video even after YouTube had emailed him to say it violated its guidelines. Ramsey’s data showed the video was watched 2.4m times by US-based users before election day. “For a nobody like me, that’s a lot,” he says. “Enough to sway the election, right?”

A video by William Ramsey.

Daniel Alexander Cannon, a conspiracy theorist from South Carolina, tells me: “Every video I put out about the Clintons, YouTube would push it through the roof.” His best-performing clip was a video titled “Hillary and Bill Clinton ‘The 10 Photos You Must See’”, essentially a slideshow of appalling (and seemingly doctored) images of the Clintons with voiceover in which Cannon speculates on their health. It has been seen 3.7m times on YouTube, and 2.9m of those views, Cannon said, came from “up next” recommendations.

Chaslot has put a spotlight on a trove of anti-Clinton conspiracy videos that had been hidden in the shadows – unless, that is, you were one of the millions YouTube served them to. But his research also does something more important: revealing how thoroughly our lives are now mediated by artificial intelligence.

Less than a generation ago, the way voters viewed their politicians was largely shaped by tens of thousands of newspaper editors, journalists and TV executives. Today, the invisible codes behind the big technology platforms have become the new kingmakers.

They pluck from obscurity people like Dave Todeschini, a retired IBM engineer who “let off steam” during the election by recording himself opining on Clinton’s supposed involvement in paedophilia, child sacrifice and cannibalism. “It was crazy, it was nuts,” he said of the avalanche of traffic to his YouTube channel, which by election day had more than 2m views.

Dave Todeschini describes a Hillary Clinton “pedophile ring”.

“Breaking news,” he announced in one of his last dispatches before the vote: the FBI, he said, had just found graphic images of Clinton and her aide in “sexually compromising positions” with a teenager. “It seems to me, with Bill Clinton’s trips to paedophile island a number of times, that what we have here is nothing short of the Clinton paedophile ring,” he declared.

Todeschini sits in his darkened living room in New Jersey, staring into his smartphone. “I’ll tell you what: the rabbit hole just got a couple of yards deeper.”

Contact the author: [email protected].

A full description of the methodology Chaslot used to detect YouTube recommendations (and an explanation of how the Guardian analysed them) is available here.


Twitter admits far more Russian bots posted on election than it had disclosed | Technology


Company says it removed more than 50,000 accounts and reported them to investigators, marking latest upward revision of figures


Twitter told Congress in October that it had found 36,746 Russian accounts that had posted about the election.
Photograph: Anadolu Agency/Getty Images

Twitter has admitted that more than 50,000 Russia-linked accounts used its service to post automated material about the 2016 US election – a far greater number than previously disclosed.

Announcing the discovery in a post to its website late on Friday, the company said the posts had reached at least 677,775 Americans, all of whom would be receiving a warning by email.

Twitter said it had removed the 50,258 accounts linked to Russia and turned over details to congressional investigators who are looking into Moscow’s interference with the US election campaign.

The company stressed that the Russian accounts represented a small proportion of the total number using its service. “However any such activity represents a challenge to democratic societies everywhere, and we’re committed to continuing to work on this important issue,” it said.

Strictly defined, a Twitter bot is any automated account on the social network. That can range from something as simple as automatically tweeting links to news articles – most of the Guardian’s social media accounts are technically Twitter bots, for instance – to complex interactions like automatically generating emoji-based art or automatically replying to climate change deniers with scientific evidence.
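The "simple" end of that spectrum amounts to little more than formatting headlines and links into posts. A toy, offline sketch of that logic (actually posting would require API credentials; the articles here are invented examples):

```python
# Toy illustration of the simplest kind of bot described above: turning
# (headline, url) pairs into ready-to-post tweet texts. This sketch only
# builds the messages; it does not talk to any API.

TWEET_LIMIT = 280  # Twitter's character limit per tweet

def build_tweets(articles):
    """Format (headline, url) pairs as tweet texts, truncating long headlines."""
    tweets = []
    for headline, url in articles:
        room = TWEET_LIMIT - len(url) - 1  # one space before the link
        if len(headline) > room:
            headline = headline[: room - 1] + "…"
        tweets.append(f"{headline} {url}")
    return tweets

articles = [("Example headline about an election", "https://example.com/story")]
for tweet in build_tweets(articles):
    print(tweet)
```

The point of the strict definition is that nothing here is sinister; what made the "Russian bot" accounts notable was who ran them and why, not the automation itself.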

But, as with “troll” and “fake news”, the strict definition has been forgotten as the term has become one of political conflict. The core of the debate is the accusation that a number of political tweets were sent by “Russian bots”, with the intention of subverting political debate, or simply creating chaos generally.

Based on what we know about Russian information warfare, the Twitter accounts run by the country’s “troll army”, based in a nondescript office building in St Petersburg, are unlikely to be automated at all. Instead, accounts like @SouthLoneStar, which pretended to be a Texan right-winger, were probably run by individuals paid 45,000-65,000 rubles a month to sow discord in Western politics.

In other ways, they resembled bots – hence the confusion. They rarely tweeted about themselves, sent far more posts than a typical user, and were single-minded in what they shared. People behaving like bots pretending to be people: this is the nature of modern propaganda.

Twitter and Facebook have been sharply criticised by US lawmakers and others for failing to prevent a torrent of propaganda and disinformation about American politics being posted to their networks during the contentious 2016 campaign.

US intelligence authorities have concluded that Russia’s government mounted an assault on the US election campaign across several fronts including social media, with the intention of aiding Donald Trump and harming Clinton, his Democratic opponent.

Twitter said on Friday that more than 3,800 accounts had been traced back to Russian state operatives. It gave examples of their tweets, which included an attack on Hillary Clinton’s performance in a presidential debate.

Posts by one Russian state propaganda account were retweeted by senior advisers to Trump, including his son Donald Jr and Kellyanne Conway, who is now a senior aide to the president in the White House.

The company’s update on Friday was only the latest in a series of upward revisions to the estimated scale of exploitation of its platform by Russian state entities and automated accounts, or “bots”, whose origins are less clear.

Twitter told Congress last October that it had discovered 36,746 Russian accounts that posted automated material about the US election, and that Russian state operatives were behind at least 2,752. That figure was more than ten times greater than the 201 state-backed accounts the company had identified the previous month.

At that time, a senior company executive gave testimony to the Senate intelligence committee that was described by Mark Warner, the committee’s senior Democrat, as “frankly inadequate on almost every level”.

Facebook, meanwhile, has estimated that as many as 126 million Americans had been exposed to Russian-backed material on its platform during the 2016 election campaign.

In the new announcement on Friday, Twitter said it had now identified a total of 3,814 state operative accounts connected to the Internet Research Agency (IRA), a “troll farm” backed by the Russian government. Twitter said these IRA accounts posted 175,993 tweets during the election period, and 8.4% of these were election-related.

The company said that it would take the unusual step of emailing all people in the US who had followed one of the Russian accounts or retweeted or liked any tweet posted by them during the election campaign period.


Facebook to tell users if they interacted with Russia’s ‘troll army’ | Technology


Facebook has promised to tell users whether they liked or followed a member of Russia’s notorious “troll army” who are accused of trying to influence elections in the United States and the United Kingdom.

The social network says it will create a tool allowing users to see whether they interacted with a Facebook page or Instagram account created by the Internet Research Agency (IRA), a state-backed organisation based in St Petersburg that carries out online misinformation operations.

“It is important that people understand how foreign actors tried to sow division and mistrust using Facebook before and after the 2016 US election,” the company said in a statement. “That’s why as we have discovered information, we have continually come forward to share it publicly and have provided it to congressional investigators. And it’s also why we’re building the tool we are announcing today.”

The tool will not be able to warn everyone who may have seen content created by the IRA, however. The company estimates that more than 140 million people, across both Facebook and Instagram, may have seen a story initially created or shared by one of those Russian-run accounts or pages, in addition to the 10 million people who saw adverts bought by Russian state-sponsored actors.

The majority of those people will not have liked or followed a Russian-backed account, instead having seen the content when it was shared by friends or promoted on to their newsfeed through some other facet of Facebook’s curation algorithm. Facebook will not tell those users about their exposure to misinformation, although the company has not said whether it is unable, or simply unwilling, to provide that information.
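The distinction Facebook's tool draws is between direct interactions, which are recorded per user, and mere exposure via re-shares. A minimal sketch of that logic over a hypothetical event log (all names and pages invented):

```python
# Sketch of the distinction described above: users who liked or followed
# a flagged page can be identified and notified; users who only saw
# re-shared content are not. All data here is hypothetical.

flagged_pages = {"ExamplePatrioticPage", "ExampleActivistPage"}

# (user, action, page) event log
events = [
    ("alice", "liked", "ExamplePatrioticPage"),
    ("bob", "followed", "LocalBakery"),
    ("carol", "followed", "ExampleActivistPage"),
    ("dave", "saw_shared", "ExamplePatrioticPage"),  # exposure only
]

def users_to_notify(events, flagged):
    """Users who directly liked or followed a flagged page."""
    return {user for user, action, page in events
            if page in flagged and action in ("liked", "followed")}

print(sorted(users_to_notify(events, flagged_pages)))
```

In this sketch dave saw flagged content but never interacted with the page itself, so he falls into the majority Facebook says it will not notify.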

Both Facebook and Twitter have steadily been making public the results of their investigations into Russian influence operations on the 2016 US election. As well as this new Facebook tool, Twitter released to the US Congress a list of 2,752 accounts it believes were created by Russian actors in an attempt to sway the election.


Disruption games: why are libertarians lining up with autocrats to undermine democracy? | Technology

At a time when strange alliances are disrupting previously stable democracies, the Catalan independence referendum was a perfect reflection of a weird age. Along with the flag-waving and calls for ‘freedom’ from Madrid, the furore that followed the vote unleashed some of the darker elements that have haunted recent turbulent episodes in Europe and America: fake news, Russian mischief and, marching oddly in step, libertarian activism.

From his residence of more than five years inside the Ecuadorian embassy in London, Julian Assange tweeted 80 times in support of Catalan secession, and his views were amplified by the state-run Russian news agency, Sputnik, making him the most quoted English-language voice on Twitter, according to independent research and the Sydney Morning Herald.

In second place was Edward Snowden, another champion of transparency, who like Assange had little by way of a track record on Spanish politics. Together, Snowden and Assange accounted for a third of all Twitter traffic under the #Catalonia hashtag.

At the same time, the European Union’s counter-propaganda unit detected an upsurge in pro-Kremlin fake news on the political crisis, playing up the tensions.

“World powers prepare for war in Europe,” one Russian politics site declared in its headline.

The same patterns were apparent in the Brexit vote, Donald Trump’s shock victory, the surge of the Front National in France and the dramatic ascent of the Five Star Movement from the pet project of a comedian, Beppe Grillo, to the second most powerful force in Italy.

In all cases, libertarians viscerally opposed to centralised power made common cause with a brutally autocratic state apparatus in Moscow, an American plutocrat with a deeply murky financial record, and the instinctively authoritarian far right. All in the name of disruption of government and liberal norms in western democracies.

So why are the pioneering crusaders of total transparency and freedom of information lining up alongside the most powerful exponents of disinformation and disruption?

This has not just been a marriage of convenience. There are elements of ideological bonding too. In Twitter direct messages during the last throes of the US election campaign, released over the past week, WikiLeaks, which US intelligence has deemed a tool of Russian intelligence, attempted to woo Trump’s eldest son, Donald Trump Jr, with offers of secret collusion.

The furore that followed the Catalan vote unleashed fake news, Russian mischief and, oddly, libertarian activism. Photograph: Jeff J Mitchell/Getty Images

The messages from the official WikiLeaks account, first published in the Atlantic, ask for a leak of the future president’s tax return to soften the blow of its eventual publication, and to give WikiLeaks the appearance of impartiality given it had already released a trove of documents hacked from the Democratic party (by Russia, according to US intelligence).

Donald Jr only replied occasionally to the WikiLeaks messages, but appears in some cases to have acted on them, notifying colleagues. In one instance, his father tweeted a reference to WikiLeaks 15 minutes after the group had been in touch.

WikiLeaks grew steadily bolder in its proposals, urging the Trump campaign not to concede on election night if he lost but to challenge the result as rigged. And in mid-December, when Trump was president-elect, it suggested Trump should push for Assange to be made Australian ambassador to Washington.

Assange also gave his backing to the UK’s Brexit vote in June, an intervention which again does not appear to be merely incidental. It earned him an unannounced visit in March from Nigel Farage, the Brexit leader and Trump’s closest British ally. When doorstopped on his way out of the Ecuadorian embassy, Farage claimed he could not remember why he had gone there.

In recent weeks, it has become increasingly clear that Brexit was another arena in which Assange and Moscow were in step. Over the past week, researchers at the University of Edinburgh identified over 400 fake Twitter accounts apparently run from St Petersburg, which published Brexit-related posts in the run-up to the UK referendum, some of them aimed at stirring anti-Islamic sentiment.

“The radical libertarians and the autocrats are allied by virtue of sharing an enemy which is the mainstream, soft, establishment, liberal politics,” said Jamie Bartlett, the director of the centre for the analysis of social media at the Demos thinktank.

“Most early, hardline cryptographers who were part of this movement in the 1990s considered that democracy and liberty were not really compatible. Like most radical libertarians – as Assange was – the principal enemy was the soft democrats who were imposing the will of the majority on the minority and who didn’t really believe in genuine, absolute freedom.”

That meant some odd bedfellows could become useful allies. “They have been able to forge a very convenient marriage with other enemies of liberal democracy,” said Bartlett, “who in every other sense imaginable are completely at odds with each other, but they do share that common hatred of establishment, western, soft, democratic politics as they see it.”

Edward Snowden’s worldview also had libertarian roots. He was a supporter of the rightwing maverick US presidential candidate, Ron Paul, and vigorously opposed the Obama administration’s endorsement of gun control and affirmative action.

He turned against his employers in the US security apparatus, and stole their secrets in the name of transparency and the citizen’s right to privacy, but his defection has left him in exile in Moscow, at the mercy of a government that hardly even pretends to observe such western niceties.

However, Snowden has never professed any great enthusiasm for Russian governance, and most of the available evidence suggests he did not end up in Russia by design, but because of a failed scheme, hatched by WikiLeaks to fly him from Hong Kong to Latin America. Unlike Assange, he has been increasingly critical of the Kremlin.

But there are plenty of other examples of the mutual embrace between Moscow and western libertarianism. In particular, the libertarians share with Moscow a profound distaste for the European Union, which they see as a continent-wide epitome of centralisation, and of liberal social norms.

“This libertarian hatred of political correctness, that everyone has to follow this social democratic view on gender, welfare, progressive politics and immigration, and libertarians can’t stand that, as degrading the idea of individual liberty,” Bartlett said. “So I think you’ll find quite a lot of people on the libertarian right who think that Russia has become the only real counterbalance to that philosophy.”

The meeting of minds is embodied in the man long seen as Trump’s chief ideologue, Steve Bannon. Bannon is another western libertarian for whom the contradiction between opposing restrictions on individual liberties at home and backing Russian authoritarianism is subsumed beneath an admiration for Putin’s muscular nationalism.

Trump and Hillary Clinton at the debate in St Louis in October last year. Photograph: Rick Wilking/Reuters

In the summer of 2014, Bannon explained the attraction of the Russian leader for “traditionalists”, to a meeting of conservative Catholics through a Skype link to the Vatican.

“One of the reasons is that they believe that at least Putin is standing up for traditional institutions, and he’s trying to do it in a form of nationalism — and I think that people, particularly in certain countries, want to see sovereignty for their country, they want to see nationalism for their country,” Bannon said, according to a transcript of the discussion published by BuzzFeed.

“They don’t believe in this kind of pan-European Union or they don’t believe in the centralized government in the United States. They’d rather see more of a states-based entity that the founders originally set up where freedoms were controlled at the local level.”

For Farage too, reverence for Putin’s boldness on the world stage has outweighed doubts about his repressive rule. Asked in a 2014 GQ magazine interview, which world leader he most admired, he said: “As an operator, but not as a human being, I would say Putin.

“The way he played the whole Syria thing. Brilliant. Not that I approve of him politically. How many journalists in jail now?”

The investigation into the Russian involvement in the Brexit vote is only now getting started. Russian journalist Alexey Kovalev argues Moscow’s influence has been overstated and misunderstood.

Pointing to the minimal audience for the Russian English-language TV channel, Kovalev wrote: “Russian trolling operations seem less like pouring gasoline on fire and more like pouring a bucket of water into the ocean.”

Kadri Liik, an expert on Russian-European relations, said: “Some fake news probably may have influenced the Brexit vote, but these fake news were manufactured by the British tabloids and the leave campaign. Any amplification provided by Russia’s agents was negligible compared to the energy that was invested locally.”

In Catalonia too, Russian bots and their fake news output were pushing on a door that was already swinging open because of other circumstances. The Catalan leaders, unlike those in the US, France and the UK, have shown little interest so far in reciprocating Moscow’s embrace.

However, the long-term corrosive effect of Russia’s use of disinformation to break down trust in western institutions is hard to measure and may be unmeasurable. What is clear is that it is continuing with the active assistance of political movements who trade in disillusion and resentment, and who have found a natural home on the internet.

In Italy, the Five Star Movement (M5S) combines its anti-establishment stance at home with close alignment to Moscow’s line in foreign policy. Its web guru, Gianroberto Casaleggio, who claims that M5S is pioneering “a new, direct democracy that will see the elimination of all barriers between the citizen and the state”, has established news sites that circulate conspiracy theories, many of them crossposted from Russian outlets. One such story suggested the US was covertly funding the flow of immigrants from Africa. It linked back to a story on Sputnik Italia.

As with Assange, Casaleggio’s distaste for the overbearing state does not apply to Moscow. The common fight against the US, Nato and the rest of the western world’s liberal order is what takes primacy.

“I think they’re more anarchic in their belief that the state will wither away and power will be redistributed in some fundamentally democratic revolution that they thought would be embedded within the internet,” said Franklin Foer, a US journalist and the author of World Without Mind: The Existential Threat of Big Tech. “It’s fairly naive, because power always reasserts itself.”


US senators warn of ‘fake news’ threat from Russia and urge tech giants to act | Technology


Senators from both parties warned of the role of Russian misinformation on Wednesday in a hearing on the Kremlin’s meddling in the 2016 election.

Questioning executives from Twitter, Google and Facebook, members of the Senate intelligence committee warned of the influence of “fake news” and bots on the campaign and how Russia “conducted an information operation intended to divide our society along issues like race, immigration and second amendment rights” in the words of the committee’s chair, Richard Burr of North Carolina.

However, there were partisan divides in how the concern was phrased. Burr attempted to minimize the influence of Russian-linked Facebook ads in determining the result of the 2016 election, while Mark Warner, the committee’s vice-chair, explicitly criticized the Trump administration’s approach to this issue.

“We have a president who remains unwilling to acknowledge the threat that Russia poses to our democracy,” said the Virginia Democrat. “President Trump should stop actively delegitimizing American journalism and acknowledge and address this real threat posed by Russian propaganda.”

Warner also raised concern about the sheer scope of misinformation on social media. He said in his opening statement that, in addition to the 80,000 Russian-backed posts on Facebook that reached 126 million Americans during the 2016 campaign, there were potentially 120,000 similar posts on Instagram, and that up to 15% of Twitter accounts are “fake or automated”. An executive for Facebook conceded that, starting from October 2016, just before the election, the Russian posts on Instagram reached an additional 16 million Americans.

Warner went on to attack the tech companies testifying, telling them that members of the committee “were frankly blown off by the leadership of your companies, and dismissed” when they initially raised concerns. All three tech executives said their internal investigations were continuing and that they had yet to determine the full scope of Russian interference.

Warner’s outrage was shared by other members, including Democrat Dianne Feinstein of California, who explicitly told the tech executives, “I don’t think you get it” and described Russian interference as “the beginning of cyberwarfare.”

In questioning, Burr displayed two ads from Russian-created Facebook pages, one from a group called “Heart of Texas” and the second from a group called “United Muslims of America”, that pushed divisive content. In particular, he noted that both groups created events at the same time and location in Houston, Texas, in an attempt to spark conflict in 2016.

The questioning also touched on other issues. Facebook executive Colin Stretch conceded that his network had blocked a profile of Chinese dissident Guo Wengui after the Chinese government had reported it to them. Guo, who lives in the United States, had used his account to expose corruption within China, where Facebook is banned.

The Senate intelligence committee’s hearing came a day after the Senate judiciary committee held a similar session on Russian use of social media to influence and interfere in the 2016 election. The House intelligence committee was due to question the tech companies later the same day.
