
Without encryption we will lose all privacy. This is our new battleground | Edward Snowden | Opinion



In every country of the world, the security of computers keeps the lights on, the shelves stocked, the dams closed, and transportation running. For more than half a decade, the vulnerability of our computers and computer networks has been ranked the number one risk in the US Intelligence Community’s Worldwide Threat Assessment – that’s higher than terrorism, higher than war. Your bank balance, the local hospital’s equipment, and the 2020 US presidential election, among many, many other things, all depend on computer safety.

And yet, in the midst of the greatest computer security crisis in history, the US government, along with the governments of the UK and Australia, is attempting to undermine the only method that currently exists for reliably protecting the world’s information: encryption. Should they succeed in their quest to undermine encryption, our public infrastructure and private lives will be rendered permanently unsafe.

In the simplest terms, encryption is a method of protecting information, the primary way to keep digital communications safe. Every email you write, every keyword you type into a search box – every embarrassing thing you do online – is transmitted across an increasingly hostile internet. Earlier this month the US, alongside the UK and Australia, called on Facebook to create a “backdoor”, or fatal flaw, into its encrypted messaging apps, which would allow anyone with the key to that backdoor unlimited access to private communications. So far, Facebook has resisted this.

If internet traffic is unencrypted, any government, company, or criminal that happens to notice it can – and, in fact, does – steal a copy of it, secretly recording your information for ever. If, however, you encrypt this traffic, your information cannot be read: only those who have a special decryption key can unlock it.
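The mechanics can be shown with the simplest cipher that has this property: a one-time pad, where the key is random, as long as the message, and never reused. The Python sketch below is purely illustrative, not production cryptography:

```python
import secrets

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # XOR each byte with the corresponding key byte; applying the same
    # key again reverses the operation, so one function both encrypts
    # and decrypts.
    return bytes(d ^ k for d, k in zip(data, key))

message = b"meet me at noon"
key = secrets.token_bytes(len(message))  # held only by sender and recipient

ciphertext = xor_cipher(message, key)    # what an eavesdropper sees
assert ciphertext != message             # unreadable without the key
assert xor_cipher(ciphertext, key) == message  # the key unlocks it
```

Anyone copying the traffic records only `ciphertext`; without the key, it is indistinguishable from random noise.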

I know a little about this, because for a time I operated part of the US National Security Agency’s global system of mass surveillance. In June 2013 I worked with journalists to reveal that system to a scandalised world. Without encryption I could not have written the story of how it all happened – my book Permanent Record – and got the manuscript safely across borders that I myself can’t cross. More importantly, encryption helps everyone from reporters, dissidents, activists, NGO workers and whistleblowers, to doctors, lawyers and politicians, to do their work – not just in the world’s most dangerous and repressive countries, but in every single country.

When I came forward in 2013, the US government wasn’t just passively surveilling internet traffic as it crossed the network, but had also found ways to co-opt and, at times, infiltrate the internal networks of major American tech companies. At the time, only a small fraction of web traffic was encrypted: six years later, Facebook, Google and Apple have made encryption-by-default a central part of their products, with the result that today close to 80% of web traffic is encrypted. Even the former director of US national intelligence, James Clapper, credits the revelation of mass surveillance with significantly advancing the commercial adoption of encryption. The internet is more secure as a result. Too secure, in the opinion of some governments.

Donald Trump’s attorney general, William Barr, who authorised one of the earliest mass surveillance programmes without reviewing whether it was legal, is now signalling an intention to halt – or even roll back – the progress of the last six years. WhatsApp, the messaging service owned by Facebook, already uses end-to-end encryption (E2EE): in March the company announced its intention to incorporate E2EE into its other messaging apps – Facebook Messenger and Instagram – as well. Now Barr is launching a public campaign to prevent Facebook from climbing this next rung on the ladder of digital security. This began with an open letter co-signed by Barr, UK home secretary Priti Patel, Australia’s minister for home affairs and the US secretary of homeland security, demanding Facebook abandon its encryption proposals.

If Barr’s campaign is successful, the communications of billions will remain frozen in a state of permanent insecurity: users will be vulnerable by design. And those communications will be vulnerable not only to investigators in the US, UK and Australia, but also to the intelligence agencies of China, Russia and Saudi Arabia – not to mention hackers around the world.

End-to-end encrypted communication systems are designed so that messages can be read only by the sender and their intended recipients, even if the encrypted – meaning locked – messages themselves are stored by an untrusted third party, for example, a social media company such as Facebook.

The central improvement E2EE provides over older security systems is in ensuring the keys that unlock any given message are only ever stored on the specific devices at the end-points of a communication – for example the phones of the sender or receiver of the message – rather than the middlemen who own the various internet platforms enabling it. Since E2EE keys aren’t held by these intermediary service providers, they can no longer be stolen in the event of the massive corporate data breaches that are so common today, providing an essential security benefit. In short, E2EE enables companies such as Facebook, Google or Apple to protect their users from their scrutiny: by ensuring they no longer hold the keys to our most private conversations, these corporations become less of an all-seeing eye than a blindfolded courier.
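How two end-points can agree on a key that no intermediary ever sees is the insight behind key-exchange protocols such as Diffie-Hellman, one building block of E2EE systems. A textbook sketch with toy numbers (real deployments use 2048-bit-plus groups or elliptic curves; these values only show the idea):

```python
# Public parameters: everyone, including the relaying platform, sees these.
p, g = 23, 5

alice_secret = 6    # never leaves Alice's device
bob_secret = 15     # never leaves Bob's device

# Only these public values cross the untrusted middleman:
alice_public = pow(g, alice_secret, p)   # 8
bob_public = pow(g, bob_secret, p)       # 19

# Each endpoint combines its own secret with the other's public value
# and derives the same shared key; the relay, holding only the public
# values, cannot.
alice_key = pow(bob_public, alice_secret, p)
bob_key = pow(alice_public, bob_secret, p)
assert alice_key == bob_key == 2
```

Because the shared key exists only on the two devices, a breach of the platform in the middle yields ciphertext but no way to read it.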

It is striking that when a company as potentially dangerous as Facebook appears to be at least publicly willing to implement technology that makes users safer by limiting its own power, it is the US government that cries foul. This is because the government would suddenly become less able to treat Facebook as a convenient trove of private lives.

To justify its opposition to encryption, the US government has, as is traditional, invoked the spectre of the web’s darkest forces. Without total access to the complete history of every person’s activity on Facebook, the government claims it would be unable to investigate terrorists, drug dealers, money launderers and the perpetrators of child abuse – bad actors who, in reality, prefer not to plan their crimes on public platforms, especially not on US-based ones that employ some of the most sophisticated automatic filters and reporting methods available.

The true explanation for why the US, UK and Australian governments want to do away with end-to-end encryption is less about public safety than it is about power: E2EE gives control to individuals and the devices they use to send, receive and encrypt communications, not to the companies and carriers that route them. This, then, would require government surveillance to become more targeted and methodical, rather than indiscriminate and universal.

What this shift jeopardises is strictly nations’ ability to spy on populations at mass scale, at least in a manner that requires little more than paperwork. By limiting the amount of personal records and intensely private communications held by companies, governments are returning to classic methods of investigation that are both effective and rights-respecting, in lieu of total surveillance. In this outcome we remain not only safe, but free.

Edward Snowden is a former CIA officer and whistleblower, and author of Permanent Record. He is president of the board of directors of the Freedom of the Press Foundation




Facebook lets advertisers target users based on sensitive interests | Technology



Facebook allows advertisers to target users it thinks are interested in subjects such as homosexuality, Islam or liberalism, despite religion, sexuality and political beliefs explicitly being marked out as sensitive information under new data protection laws.

The social network gathers information about users based on their actions on Facebook and on the wider web, and uses that data to predict their interests. These can be mundane – football, Manhattan or dogs, for instance – or more esoteric.

A Guardian investigation in conjunction with the Danish Broadcasting Corporation found that Facebook is able to infer extremely personal information about users, which it allows advertisers to use for targeting purposes. Among the interests found in users’ profiles were communism, social democrats, Hinduism and Christianity.

The EU’s general data protection regulation (GDPR), which comes into effect on 25 May, explicitly labels such categories of information as so sensitive, with such a risk of human rights breaches, that it mandates special conditions around how they can be collected and processed. Among those categories are information about a person’s race, ethnic origin, politics, religion, sex life and sexual orientation.

The information commissioner’s office says: “This type of data could create more significant risks to a person’s fundamental rights and freedoms, for example, by putting them at risk of unlawful discrimination.”

Organisations must cite one of 10 special dispensations to process such information, such as “preventive or occupational medicine”, “to protect the vital interests of the data subject”, or “the data subject has given explicit consent to the processing of those personal data for one or more specified purposes”.

Facebook already applies those special categories elsewhere on the site. As part of its GDPR-focused updates, the company asked every user to confirm whether or not “political, religious, and relationship information” they had entered on the site should continue to be stored or displayed. But while it offered those controls for information that users had explicitly given it, it gathered no such consent for information it had inferred about users.

The data means an advertiser can target messages at, for instance, people in the UK who are interested in homosexuality and Hinduism – about 68,000 people, according to the company’s advertising tools.

Facebook does demonstrate some understanding that the information is sensitive and prone to misuse. The company provides advertisers with the ability to exclude users based on their interests, but not for sensitive interests. An advertiser can advertise to people interested in Islam, for instance, but cannot advertise to everyone except those interested in Islam.

The company requires advertisers to agree to a set of policies that, among other things, bar them from “using targeting options to discriminate against, harass, provoke or disparage users, or to engage in predatory advertising practices.”

In a statement, Facebook said classifying a user’s interests was not the same as classifying their personal traits. “Like other internet companies, Facebook shows ads based on topics we think people might be interested in, but without using sensitive personal data. This means that someone could have an ad interest listed as gay pride because they have liked a Pride-associated page or clicked a Pride ad, but it does not reflect any personal characteristics such as gender or sexuality.”

The company also said it provided some controls to users on its ad preferences screen. “People are able to manage their ad preferences tool, which clearly explains how advertising works on Facebook and provides a way to tell us if you want to see ads based on specific interests or not. When interests are removed, we show people the list of removed interests so that they have a record they can access, but these interests are no longer used for ads.”

It added: “Our advertising complies with relevant EU law and, like other companies, we are preparing for the GDPR to ensure we are compliant when it comes into force.”

The findings are reminiscent of Facebook’s previous attempts to skirt the line between profiling users and profiling their interests. In 2016 it was revealed that the company had created a tool for “racial affinity targeting”.

At the time, Facebook repeatedly argued that the tool “is based on affinity, not ethnicity”. Discussing a person who was in the African American affinity group, for instance, the company said: “They like African American content. But we cannot and do not say to advertisers that they are ethnically black.”

Almost a year later, after it was revealed that advertisers could use the ethnic affinity tools to unlawfully discriminate against black Facebook users in housing adverts, Facebook agreed to limit how those tools could be used.




Twitter announces global change to algorithm in effort to tackle harassment | Technology



Twitter is announcing a global change to its ranking algorithm this week, its first step toward improving the “health” of online conversations since it launched a renewed effort to address rampant trolling, harassment and abuse in March.

“It’s shaping up to be one of the highest-impact things that we’ve done,” the chief executive, Jack Dorsey, said of the update, which will change how tweets appear in search results or conversations. “The spirit of the thing is that we want to take the burden off the person receiving abuse or mob-like behavior.”

Social media platforms have long struggled to police acceptable content and behavior on their sites, but external pressure on the companies increased significantly following the revelation that a Russian influence operation used the platforms in coordinated campaigns around the 2016 US election.

Facebook and Google have largely responded by promising to hire thousands of moderators and improve their artificial intelligence tools to automate content removal. Twitter’s approach, which it outlined to reporters in a briefing on Monday, is distinct because it is content neutral and will not require more human moderators.

“A lot of our past action has been content based, and we are shifting more and more to conduct,” Dorsey said.

Del Harvey, Twitter’s vice-president of trust and safety, said that the new changes were based on research that found that most of the abuse reports on Twitter originate in search results or the conversations that take place in the responses to a single tweet. The company also found that less than 1% of Twitter accounts made up the majority of abuse reports and that many of the reported tweets did not actually violate the company’s rules, despite “detract[ing] from the overall experience” for most users.

The new system will use behavioral signals to assess whether a Twitter account is adding to – or detracting from – the tenor of conversations. For example, if an account tweets at multiple other users with the same message, and all of those accounts either block or mute the sender, Twitter will recognize that the account’s behavior is bothersome. But if an account tweets at multiple other accounts with the same message, and some of them reply or hit the “heart” button, Twitter will assess the interactions as welcome. Other signals will include whether an account has confirmed an email address or whether an account appears to be acting in a coordinated attack.
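Reduced to a sketch, such a signal is a weighted average over how recipients react. The weights and reaction categories below are hypothetical illustrations of the approach described, not Twitter’s actual model:

```python
def conduct_score(reactions):
    # Hypothetical conduct signal: negative recipient reactions
    # (block/mute) pull an account's score down, welcome ones
    # (reply/like) push it up. Unknown reactions are neutral.
    weights = {"block": -2.0, "mute": -1.0, "reply": 1.0, "like": 1.0}
    if not reactions:
        return 0.0
    return sum(weights.get(r, 0.0) for r in reactions) / len(reactions)

# An account mass-tweeting the same message and getting blocked or
# muted scores negative; one drawing replies and likes scores positive.
spammer = ["block", "block", "mute", "mute"]
regular = ["reply", "like", "like"]
assert conduct_score(spammer) < 0 < conduct_score(regular)
```

A score like this would then demote, rather than delete, the offending tweets, matching the content-neutral approach the company describes.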

With these new signals, Harvey explained, “it didn’t matter what was said; it mattered how people reacted.”

The updated algorithm will result in certain tweets being pushed further down in a list of search results or replies, but will not delete them from the platform. Early experiments have resulted in a 4% decline in abuse reports from search and an 8% drop in abuse reports in conversations, said David Gasca, Twitter’s director of product management for health.

This is not the first time that Twitter has promised to crack down on abuse and trolling on its platform. In 2015, then CEO Dick Costolo acknowledged that the company “sucks at dealing with abuse and trolls”. But complaints have continued under Dorsey’s leadership, and in March, the company decided to seek outside help, issuing a request for proposals for academics and NGOs to help it come up with ways to measure and promote healthy conversations.

Dorsey and Harvey appeared optimistic that this new approach will have a significant impact on users’ experience.

“We are trying to strike a balance,” Harvey said. “What would Twitter be without controversy?”






‘We’re waiting for answers’: Facebook, Brexit and 40 questions | Technology



Mike Schroepfer, Facebook’s chief technology officer, was the second executive Facebook offered up to answer questions from parliament’s select committee for Digital, Culture, Media and Sport (DCMS).

He took his place in the hot seat in the wake of the first attendee, Simon Milner, Facebook’s (now ex-) head of policy for Europe, who answered a series of questions about Cambridge Analytica’s non-use of Facebook data that came back to haunt the company in the furore that followed the Observer and New York Times revelations from Christopher Wylie.

Schroepfer is Facebook’s nerd-in-chief. He was the tech guy sent to answer a series of questions from MPs about how his platform had facilitated what appeared to be a wholesale assault on Britain’s democracy, and though there was much he couldn’t answer, when he was asked about spending by Russian entities directed at British voters before the referendum, he spoke confidently: “We did look several times at the connections between the IRA [the Kremlin-linked Internet Research Agency] … and the EU referendum and we found $1 of spend. We found almost nothing.”

But new evidence released by the United States Congress suggests adverts were targeted at UK Facebook users, and paid for in roubles, in the months preceding the short 10-week period “regulated” by the Electoral Commission but when the long campaigns were already under way.

This is the latest episode in a series of miscommunications between the company and British legislators, which has come to a head in the week the Electoral Commission finally published the findings of its investigation into the Leave.EU campaign.

Damian Collins, the chair of the DCMS committee, said: “We asked them to look for evidence of Russian influence and they came back and told us something we now know appears misleading. And we’re still waiting for answers to 40 questions that Mike Schroepfer was unable to answer, including if they have any record of any dark ads.

“It could be that these adverts are just the tip of the iceberg. It’s just so hard getting any sort of information out of them, and then not knowing if that information is complete.”



Leave.EU supporters celebrate the Leave vote in Sunderland after polling stations closed in the Brexit referendum. Photograph: Toby Melville/Reuters

Preliminary research undertaken by Twitter user Brexitshambles suggests anti-immigrant adverts were targeted at Facebook users in the UK and the US.

One – headlined “You’re not the only one to despise immigration”, which cost 4,884 roubles (£58) and received 4,055 views – was placed in January 2016. Another, which accused immigrants of stealing jobs, cost 5,514 roubles and received 14,396 impressions. Organic reach can mean such adverts are seen by a wider audience.

Facebook says that it only looked for adverts shown during the officially regulated campaign period. A spokesperson said: “The release of the set of IRA adverts confirms the position we shared with the Electoral Commission and DCMS committee. We did not find evidence of any significant, coordinated activity by the IRA operatives directed towards the Brexit referendum.

“This is supported by the release of this data set which shows a significant amount of activity by the IRA with only a handful of their ads listing the UK as a possible audience.”

Collins said that the committee was becoming increasingly frustrated by Facebook’s reluctance to answer questions and by founder Mark Zuckerberg’s ongoing refusal to come to the UK to testify.

Milner told the committee in February that Cambridge Analytica had no Facebook data and could not have got data from Facebook.

The news reinforces MPs’ frustrations with a system that last week many of them were describing as “broken”. On Friday, 15 months after the first Observer article that triggered the Electoral Commission’s investigation into Leave.EU was published, it found the campaign – funded by Arron Banks and endorsed by Nigel Farage – guilty of multiple breaches of electoral law and referred the “responsible person” – its chief executive, Liz Bilney – to the police.

Banks described the commission’s report as a “politically motivated attack on Brexit”.

Leading academics and MPs called the delay in referring the matter to the police “catastrophic”, with others saying British democracy had failed. Liam Byrne, Labour’s shadow digital minister, described the current situation as “akin to the situation with rotten boroughs” in the 19th century. “It’s at that level. What we’re seeing is a wholesale failure of the entire system. We have 20th-century bodies fighting a 21st-century challenge to our democracy. It’s totally lamentable.”

Stephen Kinnock, Labour MP for Aberavon, said it was unacceptable that the Electoral Commission had still not referred the evidence about Vote Leave from Christopher Wylie and Shahmir Sanni – published in the Observer and submitted to the Electoral Commission – to the police. He said: “What they seem to have done, and are continuing to do, is to kick this into the long grass. There seems to be political pressure to kick this down the road until Britain has exited the EU.”

He accused the commission of ignoring what he considered key evidence, including about Cambridge Analytica. The commission had found Leave.EU guilty of not declaring work done by its referendum strategist, Goddard Gunster, but said it had found no evidence of work done by Cambridge Analytica.

“The whole thing stinks,” Kinnock said. “I wrote to the commission with evidence that the value of work carried out by Cambridge Analytica was around £800,000. The glib way it dismissed the multiple pieces of evidence about the company was extraordinary. I just think it is absolutely not fit for purpose.”

Gavin Millar QC, a leading expert in electoral law at Matrix Chambers, said: “Our entire democratic system is vulnerable and wide open to attack. If we allow this kind of money into campaigning on national basis – and the referendum was the paradigm for this – you have to have an organisation with teeth to police it.”

Damian Tambini, director of research in the department of media and communications at the London School of Economics, described the whole system as broken and said there was not a single investigatory body that seemed capable of uncovering the truth. “The DCMS Select Committee has found itself in this extraordinary position of, in effect, leading this investigation because it at least has the power to compel witnesses and evidence – something the Electoral Commission can’t do. It’s the classic British solution of muddling through.

“The big picture here is it’s possible for an individual or group with lots of money and some expertise to change the course of history and buy an election outcome. And with our regulatory system, we’ll never know if it’s happened.”

This article was amended on 13 May 2018 to clarify that a remark from Damian Tambini referred to the DCMS Select Committee.






Santa Clara Principles could help tech firms with self-regulation | Technology



Social networks should publish the number of posts they remove, provide detailed information for users whose content is deleted about why, and offer the chance to appeal against enforcement efforts, according to a groundbreaking effort to provide a set of principles for large-scale moderation of online content.

The Santa Clara Principles, agreed at a conference in the Californian city this week, were proposed by a group of academics and non-profit organisations including the Electronic Frontier Foundation, ACLU, and the Center for Democracy and Technology.

They are intended to provide a guiding light for tech companies keen on self-regulation, akin to similar sets of principles established by other industries – most famously the Asilomar principles, drawn up in 1975 to regulate genetic engineering.

The principles are made up of three key recommendations: Numbers, Notice, and Appeal. “Companies should publish the numbers of posts removed and accounts permanently or temporarily suspended due to violations of their content guidelines,” the first principle advises.

Of the major content sites, only YouTube currently provides such a report, and in less detail than the principle recommends: it calls for information including the number of posts and accounts flagged and suspended, broken down by category of rule violated, format of content, and locations, among other things. YouTube’s content moderation transparency report revealed the company removed 8.3m videos in the first quarter of 2018.
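The kind of breakdown the Numbers principle asks for is, in essence, a grouped count over removal records. A minimal sketch, where the field names and sample records are illustrative assumptions rather than any platform’s real schema:

```python
from collections import Counter

# Hypothetical moderation log entries; a real report would be
# aggregated from a platform's internal enforcement records.
removals = [
    {"category": "spam", "format": "video", "region": "US"},
    {"category": "spam", "format": "video", "region": "IN"},
    {"category": "hate speech", "format": "comment", "region": "US"},
]

# Counts broken down by the rule violated, as the principle recommends;
# the same pattern applies to content format or location.
by_category = Counter(r["category"] for r in removals)
assert by_category["spam"] == 2
assert by_category["hate speech"] == 1
```

Publishing these aggregates, rather than the underlying posts, lets outsiders audit enforcement volumes without exposing user content.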

The second principle, Notice, recommends that “companies should provide notice to each user whose content is taken down or account is suspended about the reason for the removal or suspension.

“In general, companies should provide detailed guidance to the community about what content is prohibited, including examples of permissible and impermissible content and the guidelines used by reviewers.” Many companies keep such detailed guidelines secret, arguing that explaining the law lets users find loopholes they can abuse.

In 2017, the Guardian published Facebook’s community moderation guidelines, revealing some examples of how the company draws the line on sex, violence and hate speech. Last month, almost a year later, Facebook finally decided to publish the documents itself. Mark Zuckerberg said the publication was a step towards his goal “to develop a more democratic and independent system for determining Facebook’s community standards”.

Finally, the principles call for a right to appeal. “Companies should provide a meaningful opportunity for timely appeal of any content removal or account suspension.” Most companies allow for some sort of appeal, in principle, although many users report little success in overturning incorrect decisions in practice.

Instead, observers have noted that the press has increasingly become an independent ombudsman for large content companies, with many of the most flagrant mistakes only being overturned when journalists highlight them. Twitter, for example, “is slow or unresponsive to harassment reports until they’re picked up by the media,” according to BuzzFeed writer Charlie Warzel.

Facebook’s Zuckerberg has said he wants a more explicit appeals process. “Over the long term, what I’d really like to get to is an independent appeal,” he said, in an interview with Vox. “So maybe folks at Facebook make the first decision based on the community standards that are outlined, and then people can get a second opinion.

“You can imagine some sort of structure, almost like a supreme court, that is made up of independent folks who don’t work for Facebook, who ultimately make the final judgment call on what should be acceptable speech in a community that reflects the social norms and values of people all around the world.”

Neither Facebook, Google nor Twitter commented for this article.




Google promises better verification of political ad buyers in US | Technology



Google says it will do a better job of verifying the identity of political ad buyers in the US by requiring a copy of a government-issued ID and other information.

In a blogpost, Google executive Kent Walker said the company would also require the disclosure of who is paying for the ad.

He also repeated a pledge he made in November to create a library of such ads that will be searchable by anyone by this summer: “We’ll also release a new Transparency Report specifically focused on election ads. This Report will describe who is buying election-related ads on our platforms and how much money is being spent. We’re also building a searchable library for election ads, where anyone can find which election ads were purchased on Google and who paid for them.”

Google’s blogpost stops short of declaring support for the Honest Ads Act, a bill that would impose disclosure requirements on online ads, similar to what is required for television and other media. Facebook and Twitter support that bill.

Google says applications under the new system will open by the end of May, with approval taking up to five days.




Cambridge Analytica closing after Facebook data harvesting scandal | News



Cambridge Analytica, the data firm at the centre of this year’s Facebook privacy row, is closing and starting insolvency proceedings.

The company has been plagued by scandal since the Observer reported that the personal data of about 50 million Americans and at least a million Britons had been harvested from Facebook and improperly shared with Cambridge Analytica.

Cambridge Analytica denies any wrongdoing, but says that the negative media coverage has left it with no clients and mounting legal fees.



“Despite Cambridge Analytica’s unwavering confidence that its employees have acted ethically and lawfully, the siege of media coverage has driven away virtually all of the Company’s customers and suppliers,” said the company in a statement, which also revealed that SCL Elections Ltd, the UK entity affiliated with Cambridge Analytica, would also close and start insolvency proceedings.

“As a result, it has been determined that it is no longer viable to continue operating the business, which left Cambridge Analytica with no realistic alternative to placing the company into administration.”

As first reported by the Wall Street Journal, the company has started insolvency proceedings in the US and UK. At Cambridge Analytica’s New York offices on an upmarket block on Manhattan’s Fifth Avenue, it appeared all the staff had already left the premises.

The Guardian rang the doorbell to the company’s seventh-floor office and was met by a woman who would not give her name but said she did not work for the company.



The Cambridge Analytica office in New York. Photograph: Oliver Laughland for the Guardian

Asked if anyone from Cambridge Analytica or SCL was still inside, she said: “They used to be. But they all left today.”

The scandal centres around data collected from Facebook users via a personality app developed by the Cambridge University researcher Aleksandr Kogan. The data was collected via Facebook’s permissive “Graph API”, the interface through which third parties could interact with Facebook’s platform. This allowed Kogan to pull data about users and their friends, including likes, activities, check-ins, location, photos, religion, politics and relationship details. He passed the data to Cambridge Analytica, in breach of Facebook’s platform policies.

Christopher Wylie, the original Cambridge Analytica whistleblower, told the Observer that the data Kogan obtained was used to influence the outcome of the US presidential election and Brexit. According to Wylie the data was fed into software that profiles voters and tries to target them with personalised political advertisements. Cambridge Analytica insists it never incorporated the Kogan data.

Kogan told BBC Radio 4’s Today programme he was being used as a scapegoat.

He said: “My view is that I’m being basically used as a scapegoat by both Facebook and Cambridge Analytica. Honestly, we thought we were acting perfectly appropriately. We thought we were doing something that was really normal.”

Cambridge Analytica said it had been “vilified for activities that are not only legal, but also widely accepted as a standard component of online advertising in both the political and commercial arenas”.

The CEO of Cambridge Analytica, Alexander Nix, was suspended in late March after Britain’s Channel 4 News broadcast secret recordings in which he claimed credit for the election of Donald Trump.

He told an undercover reporter: “We did all the research, all the data, all the analytics, all the targeting. We ran all the digital campaign, the television campaign and our data informed all the strategy.”

He also revealed that the company used a self-destruct email server to erase its digital history.

“No one knows we have it, and secondly we set our … emails with a self-destruct timer … So you send them and after they’ve been read, two hours later, they disappear. There’s no evidence, there’s no paper trail, there’s nothing.”

Although Cambridge Analytica might be dead, the team behind it has already set up a mysterious new company called Emerdata. According to Companies House data, Alexander Nix is listed as a director along with other executives from SCL Group. The daughters of the billionaire Robert Mercer are also listed as directors.

Damian Collins, chair of the British parliamentary committee looking into data breaches, expressed concern that Cambridge Analytica’s closure might hinder the investigation into the firm.

“Cambridge Analytica and SCL group cannot be allowed to delete their data history by closing. The investigations into their work are vital,” he wrote on Twitter.

The episode has shone a spotlight on the way that Facebook data is collected, shared and used to target people with advertising.

The social network initially scrambled to blame rogue third parties for “platform abuse” – “the entire company is outraged we were deceived,” the company said – before it unveiled sweeping changes to its privacy settings and data sharing practices.

“This was a breach of trust between Kogan, Cambridge Analytica and Facebook,” said Mark Zuckerberg in a Facebook post. “But it was also a breach of trust between Facebook and the people who share their data with us and expect us to protect it. We need to fix that.”

Facebook first discovered that Kogan had shared data with Cambridge Analytica when a Guardian journalist contacted the company about it at the end of 2015. It asked Cambridge Analytica to delete the data and revoked Kogan’s apps’ API access. However, Facebook relied on Cambridge Analytica’s word that it had done so.

After it was revealed that the data hadn’t been deleted, Facebook revoked Cambridge Analytica’s access to its platform, launched an investigation into “thousands” of apps that had similar access, and made several changes to restrict how much data third-party developers can access from people’s profiles.

The company also pledged to verify the identities of administrators of popular Facebook pages and advertisers buying political “issue” ads on “debated topics of national legislative importance” such as education, immigration and abortion.





EU: data-harvesting tech firms are ‘sweatshops of connected world’ | Technology

no thumb


The European data protection supervisor has hit out at social media and tech firms over the recent constant stream of privacy policy emails in the run-up to GDPR, calling them the “sweatshops of the connected world”.

With the tough new General Data Protection Regulations coming into force on 25 May, companies around the world are being forced to notify their users to accept new privacy policies and data processing terms to continue to use the services.

But Giovanni Buttarelli, the European data protection supervisor (EDPS), lambasted the often-hostile approach of the recent deluge of notifications.

“If this encounter seems a take-it-or-leave it proposition – with perhaps a hint of menace – then it is a travesty of at least the spirit of the new regulation, which aims to restore a sense of trust and control over what happens to our online lives,” said Buttarelli. “Consent cannot be freely given if the provision of a service is made conditional on processing personal data not necessary for the performance of a contract.”

“The most recent [Facebook] scandal has served to expose a broken and unbalanced ecosystem reliant on unscrupulous personal data collection and micro-targeting for whatever purposes promise to generate clicks and revenues.

“The digital information ecosystem farms people for their attention, ideas and data in exchange for so called ‘free’ services. Unlike their analogue equivalents, these sweatshops of the connected world extract more than one’s labour, and while clocking into the online factory is effortless it is often impossible to clock off.”

The European Union’s new stronger, unified data protection laws, the General Data Protection Regulation (GDPR), will come into force on 25 May 2018, after more than six years in the making.

GDPR will replace the current patchwork of national data protection laws, give data regulators greater powers to fine, make it easier for companies to operate across the whole of the EU through a “one-stop shop” regulatory regime, and create a new pan-European data regulator called the European Data Protection Board.

The new laws govern the processing and storage of EU citizens’ data, both data given to companies and data observed by them about people, whether or not the company has operations in the EU. They state that data protection should be both by design and by default in any operation.

GDPR will refine and enshrine the “right to be forgotten” laws as the “right to erasure”, and give EU citizens the right to data portability, meaning they can take data from one organisation and give it to another. It will also bolster the requirement for explicit and informed consent before data is processed, and ensure that it can be withdrawn at any time.

To ensure companies comply, GDPR also gives data regulators the power to fine up to €20m or 4% of annual global turnover, whichever is greater, far exceeding previously possible fines. Data breaches must be reported to a data regulator within 72 hours, and affected individuals must be notified unless the stolen data is unreadable, ie strongly encrypted.
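The fine cap described above is a simple maximum of two quantities, which a short sketch makes concrete (the turnover figures below are hypothetical examples, not real companies):

```python
# Sketch of the GDPR maximum-fine rule: the cap is €20m or 4% of annual
# global turnover, whichever is greater.
def max_gdpr_fine(annual_global_turnover_eur: float) -> float:
    return max(20_000_000.0, 0.04 * annual_global_turnover_eur)

# A firm turning over €100m is capped at the €20m floor (4% would be only €4m);
# a firm turning over €40bn is capped at €1.6bn.
print(max_gdpr_fine(100_000_000))
print(max_gdpr_fine(40_000_000_000))
```

This is why the regime bites hardest on the largest platforms: for any turnover above €500m, the 4% term dominates the €20m floor.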

While data protection and privacy has become a hot-button issue in part thanks to the Cambridge Analytica files, Buttarelli is concerned that it is simply being used as part of the “PR toolkit” of firms. He said that there is “a growing gulf between hyperbole and reality, where controllers learn to talk a good game while continuing with the same old harmful habits”.

A new social media subgroup of data protection regulators will be convened in mid-May to tackle what Buttarelli called the “manipulative approaches” that must change with GDPR.

“Brilliant lawyers will always be able to fashion ingenious arguments to justify almost any practice. But with personal data processing we need to move to a different model,” said Buttarelli. “The old approach is broken and unsustainable – that will be, in my view, the abiding lesson of the Facebook/ Cambridge Analytica case.”



Amid privacy scandal, Facebook unveils tool that lets you clear browsing history | Technology

no thumb


Mark Zuckerberg unveiled a new Facebook privacy control called “clear history” at the social media company’s annual developer conference, and admitted that he “didn’t have clear enough answers” about data control when he recently testified before Congress.

The CEO announced the new tool on Tuesday, describing it in a post as a “simple control to clear your browsing history on Facebook – what you’ve clicked on, websites you’ve visited, and so on”. The move comes at a time when Zuckerberg is battling some of the worst publicity his company has faced since it launched 14 years ago.

After reporting by the Observer and the Guardian in March revealed that millions of Americans’ personal data was harvested from Facebook and improperly shared with political consultancy Cambridge Analytica, the company has consistently been on the defensive – struggling through protracted government inquiries and hearings in the US and the UK, and forced to respond to renewed calls for strict regulations.

Last month, Zuckerberg survived a two-day grilling by Congress in Washington DC, remaining composed and in some cases cleverly deflecting lawmakers’ toughest questions about data collection. The CEO has also worked to overcome a viral #DeleteFacebook campaign, fueled by concerns about the social media company’s potential impacts on elections in the US and Europe and a steady stream of revelations about the controversial ways the company tracks its users.

In advance of his speech in a packed conference hall in San Jose, Zuckerberg wrote: “Once we roll out this update, you’ll be able to see information about the apps and websites you’ve interacted with, and you’ll be able to clear this information from your account. You’ll even be able to turn off having this information stored with your account.”

He added, “One thing I learned from my experience testifying in Congress is that I didn’t have clear enough answers to some of the questions about data. We’re working to make sure these controls are clear, and we will have more to come soon.”

Zuckerberg also cautioned users against clearing cookies in their browser, saying “it can make parts of your experience worse”, and adding, “Your Facebook won’t be as good while it relearns your preferences.”

Even though the company’s stock suffered in the wake of the recent privacy scandal, Facebook still posted record revenues for the first quarter of 2018, making $11.97bn in the first three months of the year.

In 2018, Zuckerberg pledged that his personal new year’s resolution – an annual tradition for the CEO – was to “fix” Facebook, an ambitious goal at the end of a year of relentless criticism surrounding the site’s role in spreading misinformation and having negative impacts on users’ mental health.

This year’s developer conference features a number of events that appear to emphasize Facebook’s positive influences on society, including sessions called “amplifying youth voices to influence policy” and “using technology to solve social and environmental issues”.



MPs threaten Mark Zuckerberg with summons over Facebook data | News



MPs have threatened to issue Mark Zuckerberg with a formal summons to appear in front of parliament when he next enters the UK, unless he voluntarily agrees to answer questions about the activities of his social network and the Cambridge Analytica scandal.

Damian Collins, the chair of the parliamentary committee that is investigating online disinformation, said he was unhappy with the information the company had provided and now wanted to hear evidence from the Facebook chief executive before parliament went into recess on 24 May.

Saturday 17 March

The Observer publishes online its first story on the Facebook and Cambridge Analytica scandal, written by Carole Cadwalladr and Emma Graham-Harrison.

Former Cambridge Analytica employee Christopher Wylie reveals how the firm used personal information taken in early 2014 to build a system that could profile individual US voters.

The data was collected through an app, built by academic Aleksandr Kogan, separately from his work at Cambridge University, through his company Global Science Research (GSR).

Sunday 18 March

As the Observer publishes its full interview with Wylie in the print edition, the fallout begins. US congressional investigators call for Cambridge Analytica boss Alexander Nix to testify again before their committee.

Monday 19 March

Channel 4 News airs the findings of an undercover investigation in which Cambridge Analytica executives boast of using honey traps, fake news campaigns and operations with ex-spies to swing election campaigns.

Tuesday 20 March

A former Facebook employee claims hundreds of millions of Facebook users may have had their private information harvested by companies using similar methods.

Wednesday 21 March

UK MPs summon Mark Zuckerberg to appear before a select committee investigating fake news, and accuse Facebook of misleading them at a previous hearing. 

Thursday 22 March

It emerges Facebook had previously provided Kogan with an anonymised, aggregate dataset of 57bn Facebook friendships. Zuckerberg breaks his silence to call the misuse of data a ‘breach of trust’.

Friday 23 March

Brittany Kaiser, formerly Cambridge Analytica’s business development director, reveals the blueprint for how CA claimed to have won the White House for Donald Trump by using Google, Snapchat, Twitter, Facebook and YouTube.


Photograph: Antonio Olmos

“It is worth noting that, while Mr Zuckerberg does not normally come under the jurisdiction of the UK parliament, he will do so the next time he enters the country,” Collins wrote in a public letter to Facebook. “We hope that he will respond positively to our request, but, if not, the committee will resolve to issue a formal summons for him to appear when he is next in the UK.”

Collins referred to an unconfirmed report by Politico that Zuckerberg planned to appear in front of the European parliament this month, suggesting it would be simple for the Facebook chief to extend his trip to attend a hearing in the UK.

The committee has repeatedly invited Zuckerberg to give evidence but Facebook has sent more junior executives to answer questions from MPs.

Facebook declined to comment on the possibility of a formal summons. In theory, Zuckerberg could be found in contempt of parliament if he refuses one.

When Rupert Murdoch and his son James resisted appearing in front of a select committee in 2011 it was speculated that potential punishments could include “fines and imprisonment”. In reality it is likely that, at worst, the punishment for ignoring such a summons would include an arcane process resulting in little more than a formal warning.

Collins said last week’s five-hour evidence session by Facebook’s chief technology officer, Mike Schroepfer, was unsatisfactory and his answers “lacked many of the important details” needed.

Collins’ committee formally issued a list of 39 supplementary questions they wanted answered following Schroepfer’s session, in which Facebook was labelled a “morality-free zone”.

Zuckerberg did make time to appear in front of the US Congress, where politicians were allocated five minutes each to ask questions. British select committee hearings allow politicians more time to ask follow-up questions, potentially making it a more testing experience.


