
Facebook lets advertisers target users based on sensitive interests



Facebook allows advertisers to target users it thinks are interested in subjects such as homosexuality, Islam or liberalism, despite religion, sexuality and political beliefs explicitly being marked out as sensitive information under new data protection laws.

The social network gathers information about users based on their actions on Facebook and on the wider web, and uses that data to predict their interests. These can be mundane – football, Manhattan or dogs, for instance – or more esoteric.

A Guardian investigation in conjunction with the Danish Broadcasting Corporation found that Facebook is able to infer extremely personal information about users, which it allows advertisers to use for targeting purposes. Among the interests found in users’ profiles were communism, social democrats, Hinduism and Christianity.

The EU’s general data protection regulation (GDPR), which comes into effect on 25 May, explicitly labels such categories of information as so sensitive, with such a risk of human rights breaches, that it mandates special conditions around how they can be collected and processed. Among those categories are information about a person’s race, ethnic origin, politics, religion, sex life and sexual orientation.

The Information Commissioner’s Office says: “This type of data could create more significant risks to a person’s fundamental rights and freedoms, for example, by putting them at risk of unlawful discrimination.”

Organisations must cite one of 10 special dispensations to process such information, such as “preventive or occupational medicine”, “to protect the vital interests of the data subject”, or “the data subject has given explicit consent to the processing of those personal data for one or more specified purposes”.

Facebook already applies those special categories elsewhere on the site. As part of its GDPR-focused updates, the company asked every user to confirm whether or not “political, religious, and relationship information” they had entered on the site should continue to be stored or displayed. But while it offered those controls for information that users had explicitly given it, it gathered no such consent for information it had inferred about users.

The data means an advertiser can target messages at, for instance, people in the UK who are interested in homosexuality and Hinduism – about 68,000 people, according to the company’s advertising tools.

Facebook does demonstrate some understanding that the information is sensitive and prone to misuse. The company provides advertisers with the ability to exclude users based on their interests, but not for sensitive interests. An advertiser can advertise to people interested in Islam, for instance, but cannot advertise to everyone except those interested in Islam.

The company requires advertisers to agree to a set of policies that, among other things, bar them from “using targeting options to discriminate against, harass, provoke or disparage users, or to engage in predatory advertising practices.”

In a statement, Facebook said classifying a user’s interests was not the same as classifying their personal traits. “Like other internet companies, Facebook shows ads based on topics we think people might be interested in, but without using sensitive personal data. This means that someone could have an ad interest listed as gay pride because they have liked a Pride-associated page or clicked a Pride ad, but it does not reflect any personal characteristics such as gender or sexuality.”

The company also said it provided some controls to users on its ad preferences screen. “People are able to manage their ad preferences tool, which clearly explains how advertising works on Facebook and provides a way to tell us if you want to see ads based on specific interests or not. When interests are removed, we show people the list of removed interests so that they have a record they can access, but these interests are no longer used for ads.”

It added: “Our advertising complies with relevant EU law and, like other companies, we are preparing for the GDPR to ensure we are compliant when it comes into force.”

The findings are reminiscent of Facebook’s previous attempts to skirt the line between profiling users and profiling their interests. In 2016 it was revealed that the company had created a tool for “racial affinity targeting”.

At the time, Facebook repeatedly argued that the tool “is based on affinity, not ethnicity”. Discussing a person who was in the African American affinity group, for instance, the company said: “They like African American content. But we cannot and do not say to advertisers that they are ethnically black.”

Almost a year later, after it was revealed that advertisers could use the ethnic affinity tools to unlawfully discriminate against black Facebook users in housing adverts, Facebook agreed to limit how those tools could be used.




Gillian Triggs joins call for digital rights reforms after brush with data’s dark side



Gillian Triggs, Australia’s controversial former human rights commissioner, has had personal experience of the dangers of data retention laws.

She was caught out, she reveals in a new report on digital rights, when she agreed to provide access to 24 hours of her digital life as part of an experiment at the Melbourne Writers Festival in 2017.

As her emails were put on a screen behind her, the audience tittered.


Her digital indiscretion was a minor one. She had applied for a seniors card.

But Triggs’ experience illustrates a point, made over and over in the Digital Rights Watch state of the nation report: your digital life is easily tracked, mapped, stored and exploited.

And the tools to more accurately track and match your data to you are about to be greatly enhanced by the use of facial recognition software, making it almost impossible for people to opt out of giving their details any time they are physically present.

The report from Digital Rights Watch, released on Monday, calls for a series of reforms to better protect Australians’ digital rights.

While the revelations by Edward Snowden and Chelsea Manning about the scale and reach of surveillance in the US, and the activities of Cambridge Analytica in seeking to manipulate elections have highlighted what is possible, Digital Rights Watch warns this is just “scratching the surface”.

“A much wider, systematic and wilful degradation of our human rights online is happening,” the group’s chairman, Tim Singleton Norton, warns.

The report warns that new technologies, such as facial recognition software, will open up new opportunities for advertisers and marketers but also pose significant new risks.

Dr Suelette Dreyfus, from the school of computing at the University of Melbourne, said data about people’s movements and behaviours collected by shopping centres, retailers and advertising companies could be combined with new technologies that included physical biometric identification, mood analysis and behavioural biometrics.

This, she warned, would remove consumers’ ability to not give their details or use a pseudonym when dealing with an organisation that collects data, like a supermarket. That has the potential to undermine one of the key protections of the Privacy Act – the right not to give your details.

Dreyfus also said the biometric analysis technology now used for security was being repurposed to monitor the mood of individuals and their responses to advertising. It could also be used by employers to monitor the mood of employees.

“This technology is being sold and implemented despite the clear privacy and ethical issues with its implementation and the questionable value of the measurement itself,” she said.

Digital Rights Watch is urging the development of an opt-out register for people who do not want their movement data used for commercial purposes. It is also calling for a compulsory register of entities that collect behavioural biometric data.

The report also warns that Australia’s laws have failed to protect human rights.

“Upholding digital rights requires us to find the balance between the opportunity the internet provides us to live better, brighter and more interconnected lives, and the threat, posed by trolls, corporations and government,” Norton said. He is calling for a more nuanced debate.

The report highlights a number of concerns about Australian law and calls for the immediate scrapping of the law that requires telecommunications companies to retain their customers’ metadata for two years.

The report says the current regime effectively allows law enforcement bodies to watch everybody all of the time without their knowledge. Warrants are not required except for access to journalists’ metadata, presumably to protect sources.

It notes there are reports of some organisations, including government departments, intentionally circumventing privacy protections within the legislation in order to gain access to data they are not authorised to have.

It is also calling for new laws that respect and uphold the right to digital privacy and to data protection. It wants the government to create a body similar to the European Data Protection Supervisor to monitor privacy protections.

And the group proposes a “right to disconnect”, which would prevent employers using digital tools to encroach on statutory rest breaks or holidays of their workers.




EU: data-harvesting tech firms are ‘sweatshops of connected world’



The European data protection supervisor has hit out at social media and tech firms over the constant stream of privacy policy emails in the run-up to GDPR, calling them the “sweatshops of the connected world”.

With the tough new General Data Protection Regulation coming into force on 25 May, companies around the world are being forced to notify their users to accept new privacy policies and data processing terms to continue to use the services.

But Giovanni Buttarelli, the European data protection supervisor (EDPS), lambasted the often-hostile approach of the recent deluge of notifications.

“If this encounter seems a take-it-or-leave-it proposition – with perhaps a hint of menace – then it is a travesty of at least the spirit of the new regulation, which aims to restore a sense of trust and control over what happens to our online lives,” said Buttarelli. “Consent cannot be freely given if the provision of a service is made conditional on processing personal data not necessary for the performance of a contract.”

“The most recent [Facebook] scandal has served to expose a broken and unbalanced ecosystem reliant on unscrupulous personal data collection and micro-targeting for whatever purposes promise to generate clicks and revenues.

“The digital information ecosystem farms people for their attention, ideas and data in exchange for so called ‘free’ services. Unlike their analogue equivalents, these sweatshops of the connected world extract more than one’s labour, and while clocking into the online factory is effortless it is often impossible to clock off.”

The European Union’s new stronger, unified data protection laws, the General Data Protection Regulation (GDPR), will come into force on 25 May 2018, after more than six years in the making.

GDPR will replace the current patchwork of national data protection laws, give data regulators greater powers to fine, make it easier for companies to operate across the whole of the EU through a “one-stop shop”, and create a new pan-European data regulator called the European Data Protection Board.

The new laws govern the processing and storage of EU citizens’ data, both that given to and observed by companies about people, whether or not the company has operations in the EU. They state that data protection should be both by design and default in any operation.

GDPR will refine and enshrine the “right to be forgotten” laws as the “right to erasure”, and give EU citizens the right to data portability, meaning they can take data from one organisation and give it to another. It will also bolster the requirement for explicit and informed consent before data is processed, and ensure that it can be withdrawn at any time.

To ensure companies comply, GDPR also gives data regulators the power to fine up to €20m or 4% of annual global turnover, which is several orders of magnitude larger than previous possible fines. Data breaches must be reported within 72 hours to a data regulator, and affected individuals must be notified unless the data stolen is unreadable, ie strongly encrypted.

While data protection and privacy has become a hot-button issue in part thanks to the Cambridge Analytica files, Buttarelli is concerned that it is simply being used as part of the “PR toolkit” of firms. He said that there is “a growing gulf between hyperbole and reality, where controllers learn to talk a good game while continuing with the same old harmful habits”.

A new social media subgroup of data protection regulators will be convened in mid-May to tackle what Buttarelli called the “manipulative approaches” that must change with GDPR.

“Brilliant lawyers will always be able to fashion ingenious arguments to justify almost any practice. But with personal data processing we need to move to a different model,” said Buttarelli. “The old approach is broken and unsustainable – that will be, in my view, the abiding lesson of the Facebook/Cambridge Analytica case.”




Victoria threatens to pull out of facial recognition scheme citing fears of Dutton power grab



Victoria has threatened to pull out of a state and federal government agreement for the home affairs department to run a facial recognition system because the bill expands Peter Dutton’s powers and allows access to information by the private sector and local governments.

In October the Council of Australian Governments agreed to give federal and state police real-time access to passport, visa, citizenship and driver’s licence images for a wide range of criminal investigations.

The identity matching services bill, introduced in February, enables the home affairs department to collect, use and disclose identification information including facial biometric matching.



In a submission to the parliamentary joint committee on intelligence and security, the Victorian special minister of state, Gavin Jennings, warned that the bill provided “significant scope” for the home affairs minister to expand his powers beyond what was agreed.

This includes the ability to collect new types of identification information and expand identity-matching services. For example, the bill would allow the commonwealth to collect not just driver’s licence information but also proof-of-age cards, and firearms and marine licences – some of which can be held by children as young as 12.

The commonwealth could collect information that Victoria was not authorised to disclose under its own legislation, the submission warned.

It said citizens may not be adequately informed that information they provide to get a driver’s licence, including biometric data, could be “reused for other law enforcement purposes”.

In its submission the home affairs department said the bill would “enable rather than authorise the use of the services by various government agencies” and the systems would still be governed by federal, state and territory privacy laws.

The Victorian submission said the states had agreed that the private sector would not be given access to the facial verification and identity data-sharing services.

But the bill did not “contain such a restriction, allowing non-government entities to use all identity-matching services” if they met certain conditions, it said.

As the home affairs department explained in its submission, those conditions included that private-sector entities would only have access to verification services, not to identify unknown individuals, and would require the consent of the person whose identity was being checked.

It defended private-sector access to the information, arguing that it would allow financial institutions and telcos “to contribute to national security and law enforcement outcomes”.

The Victorian government submission also complained that providing identity-matching services to local government authorities “goes beyond what was agreed to”. VicRoads “may not be authorised” to share information with the national driver’s licence facial recognition system because of this overreach.

Jennings requested that the commonwealth revise the bill to align it with the agreement. He warned that if the scope of the driver’s licence facial recognition scheme was expanded as proposed in the bill Victoria “would need to consider whether it wishes to participate … and if so, the legal basis on which it would rely”.

Victoria also called for further checks and balances to limit the home affairs department’s use and disclosure of information.

The department submitted that the bill would allow it to help prevent identity crime, which affects 5% of Australians and is estimated to cost $2.2bn a year. “Identity crime is also a key enabler of serious and organised crime, including terrorism,” the submission said.

In addition to invoking the most serious offences, the home affairs department also acknowledged that the systems would be used for “the provision of more secure and accessible government and private-sector services” and “improving road safety through the detection and prosecution of traffic offences”.

The Queensland office of the information commissioner submitted that the bill did not prevent “blanket surveillance techniques” but argued that “indiscriminate use of the face-matching services would not be feasible in practice”.

Agencies would “continue to be subject to legislative privacy protections and information-sharing restrictions that already apply to them”, it said.

The home affairs department said further protections would be included in face-matching services participation agreements, which would require agencies to undertake compliance audits.

“These arrangements are being established and agreed between the commonwealth and all states and territories,” it said. “They are based on the principle that each state and territory retains control over decisions on how its data is shared.”




UK has six months to rewrite snooper’s charter, high court rules



The British government must rewrite its mass data surveillance legislation because it is incompatible with European law, the high court has ruled.

Judges have given ministers and officials six months to redraft the 2016 Investigatory Powers Act, labelled the snooper’s charter by critics, following a crowdfunded challenge by the human rights group Liberty.

Ministers had already accepted that some aspects of the act do not comply with EU law and needed to be revised. They wanted until April next year to introduce new rules.

On Friday, however, Lord Justice Singh and Lord Justice Holgate said legislation must be drawn up by the start of November.

Lawyers for Liberty argued in February that the act violates the public’s right to privacy by allowing the storage of and access to internet data.

The government accepted the act was inconsistent with EU law because access to retained data was not limited to the purpose of combating “serious crime” and was not subject to prior review by a court or other independent body.

The case was the first stage of Liberty’s legal challenge against the act and was funded by supporters who raised more than £50,000.

The Home Office announced a series of new safeguards last year in anticipation of the ruling. They included removing the power of self-authorisation for senior police officers, and requiring approval for requests for confidential communications data to be granted by the investigatory powers commissioner. Liberty said the safeguards did not go far enough.

The group has launched another fundraising effort for the next stage of its case, which includes challenging rules on bulk interception of digital communications.

It argues that the powers to intercept communications in bulk and create files known as personal datasets undermine free speech, privacy and patient confidentiality, legal privilege and journalists’ sources.

Speaking after the ruling, Liberty’s director, Martha Spurrier, said: “Police and security agencies need tools to tackle serious crime in the digital age, but creating the most intrusive surveillance regime of any democracy in the world is unlawful, unnecessary and ineffective.”

The latest ruling follows an appeal court decision in January against previous surveillance rules in the 2014 Data Retention and Investigatory Powers Act, which expired at the end of 2016.

Three senior judges concluded that Dripa was inconsistent with EU law following a challenge by the Labour deputy leader, Tom Watson, and campaigners, who were supported by Liberty.




WhatsApp raises minimum age to 16 for Europeans ahead of GDPR



WhatsApp is raising the minimum user age from 13 to 16, potentially locking out large numbers of teenagers as the messaging app looks to comply with the EU’s upcoming new data protection rules.

The Facebook-owned messaging service, which has more than 1.5 billion users, will ask people in the 28 EU states to confirm they are 16 or older as part of a prompt to accept new terms of service and an updated privacy policy in the next few weeks.

How WhatsApp will confirm age and enforce the new limit is unclear. The service does not currently verify identity beyond requirements for a working mobile phone number.

WhatsApp said it was not asking for any new rights to collect personal information in the agreement it has created for the European Union. It said: “Our goal is simply to explain how we use and protect the limited information we have about you.”

WhatsApp’s minimum age will remain 13 years outside of Europe, in line with its parent company. In order to comply with the European General Data Protection Regulation (GDPR), which comes into force on 25 May, Facebook has taken a different approach for its primary social network. As part of its separate data policy, the company requires those aged between 13 and 15 years old to nominate a parent or guardian to give permission for them to share information with the social network, or otherwise limit the personalisation of the site.

WhatsApp also announced on Tuesday that it would begin allowing users to download a report detailing the data it holds on them, such as the make and model of the device they used, their contacts and groups, and any blocked numbers.

GDPR is the biggest overhaul of online privacy since the birth of the internet, giving Europeans the right to know what data is stored on them and the right to have it deleted. The new laws also give regulators the power to fine corporations up to 4% of their global turnover or €20m, whichever is larger, for failing to meet the tough new data protection requirements.

WhatsApp, founded in 2009 and bought by Facebook for $19bn in 2014, has come under pressure from some European governments in recent years because of its use of end-to-end encryption and its plan to share user data with its parent company.

In 2017 European regulators disrupted a move by WhatsApp to change its policies to allow it to share users’ phone numbers and other information with Facebook for ad targeting and other uses. WhatsApp suspended the change in Europe after widespread regulatory scrutiny, and in March signed an undertaking with the UK Information Commissioner’s Office not to share any EU citizen’s data with Facebook until GDPR comes into force.

But on Tuesday the messaging firm said it wanted to continue sharing data with Facebook at some point. It said: “As we have said in the past, we want to work closer with other Facebook companies in the future and we will keep you updated as we develop our plans.”




Cambridge University rejected Facebook study over ‘deceptive’ privacy standards



A Cambridge University ethics panel rejected research by the academic at the centre of the Facebook data harvesting scandal over the social network’s “deceptive” approach to its users’ privacy, newly released documents reveal.

A 2015 proposal by Aleksandr Kogan, a member of the university’s psychology department, involved the personal data from 250,000 Facebook users and their 54 million friends that he had already gleaned via a personality quiz app in a commercial project funded by SCL, the parent company of Cambridge Analytica.

Separately, Kogan proposed an academic investigation into how Facebook likes are linked to “personality traits, socioeconomic status and physical environments”, according to an ethics application about the project released to the Guardian in response to a freedom of information request.

The documents shed new light on suggestions from the Facebook CEO, Mark Zuckerberg, that the university’s controls on research did not meet Facebook’s own standards. In testimony to the US Congress earlier this month, Zuckerberg said he was concerned over Cambridge’s approach, telling a hearing: “What we do need to understand is whether there is something bad going on at Cambridge University overall, that will require a stronger action from us.”

But in the newly published material, the university’s psychology research ethics committee says it found the Kogan proposal so “worrisome” that it took the “very rare” decision to reject the project.

The panel said Facebook’s approach to consent “falls far below the ethical expectations of the university”.


Correspondence around the decision was released hours before Kogan appeared before a Commons inquiry into fake news and misinformation. In written and oral evidence to the committee Kogan insisted that all his academic work was reviewed and approved by the university (pdf). But he did not mention to the MPs the ethics committee’s rejection of his proposed research using the Facebook data in May 2015.

Explaining the decision, one member of the panel said the Facebook users involved had not given sufficient consent to allow the research to be conducted, or given a chance to withdraw from the project. The academic, whose name was redacted from the document, said: “Facebook’s privacy policy is not sufficient to address my concerns.”

Appealing against the panel’s rejection, a letter believed to be written by Kogan pointed out that “users’ social network data is already downloaded and used without their direct consent by thousands of companies who develop apps for Facebook”.

It added: “In fact, access to data by third parties for various purposes is fundamental to every app on Facebook; so users have already had their data downloaded and used by companies for private interest.”

Another panel member felt that information shared with Facebook friends should not be regarded as public data. In a response to Kogan’s appeal, the academic said: “Once you have persuaded someone to run your Facebook app, you can really only collect that subject’s data. What his or her friends have disclosed is by default disclosed to ‘friends’ only, that is, with an expectation of confidence.”

The ethics panel member added: “Facebook is rather deceptive on this and creates the appearance of a cosy and confidential peer group environment, as a means of gulling users into disclosing private information that they then sell to advertisers, but this doesn’t make it right to an ethical researcher to follow their lead.”

The academic also likened Facebook to a contagion. The letter sent in July 2015 said: “My view of Facebook is that it’s a bit like an infectious disease; you end up catching what your friends have. If there are bad apps, or malware, or even dodgy marketing offers, they get passed along through friendship networks. An ethical approach to using social networking data has to take account of this.”

Kogan accepted that he made mistakes in how the Facebook data was collected. He told Tuesday’s committee hearing: “Fundamentally I made a mistake, by not being critical about this. I should have got better advice on what is and isn’t appropriate.”

But asked if he accepted that he broke Facebook’s terms and conditions, Kogan said: “I do not. I would agree that my actions were inconsistent with the language of these documents, but that’s slightly different.”

Kogan collected Facebook data before the network changed its terms of service in 2014 to stop developers harvesting data via apps.

Facebook has banned Kogan from the network and insisted that he violated its platform policy by transferring data his app collected to Cambridge Analytica. The company has previously said it is “strongly committed to protecting people’s information”.

In an initial statement on Kogan’s research, Mark Zuckerberg said Facebook had already taken key steps to secure users’ data and said it would go further to prevent abuse. He added: “This was a breach of trust between Kogan, Cambridge Analytica and Facebook. But it was also a breach of trust between Facebook and the people who share their data with us and expect us to protect it.”

Kogan has been approached for comment.




Eventbrite apologises for footage rights grab



A website that allows users to create, promote and sell tickets to events has apologised to users for a clause in its terms of service that allowed it to attend and film their events and use the footage for its own purposes.

Eventbrite hosts more than 2m events a year, ranging from small free gatherings of friends to large paid-for conferences.

Buried near the bottom of the website’s 10,000-word merchant agreement was a section titled “Permissions you grant us to film and record your events”. It gave Eventbrite wide-ranging powers to use private events for its own purposes, including in adverts for the site.

The terms and conditions also allowed the company to film behind the scenes as an event was being set up or packed away, required event hosts to obtain, at their own expense, all the “permissions, clearances and licenses” needed to allow Eventbrite to do what it wanted, and left it to the company’s discretion whether performers or hosts were even credited.

The clause, which affected the British version of the site as well as the American, was added to the merchant agreement at the beginning of April, but it took until Friday for user Barney Dellar to bring the rights grab to wider attention.

Eventbrite apologised for the clause on Sunday night, and removed it from its site. A spokeswoman told the Guardian: “Earlier this month we made an update to our terms of service and merchant agreement that would allow us the option to work with individual organisers to secure video and photos at their events for marketing and promotional purposes.

“We’ve heard some concerns from our customers and agree that the language of the terms went broader than necessary given our intention of the clause.

“We have not recorded any footage at events since this clause was added, and upon further review, have removed it entirely from both our terms of service and merchant agreement. We sincerely apologise for any concern this caused.”

Many companies are rushing out large updates to their terms of service in the runup to 25 May, the enforcement date for Europe’s general data protection regulation (GDPR), which strengthens individuals’ data protection rights.

In the last week, Facebook, Tumblr, and Airbnb have all notified users of their new terms of service.

The European Union’s new, stronger and unified data protection laws, the General Data Protection Regulation (GDPR), come into force on 25 May 2018, after more than six years in the making.

GDPR will replace the current patchwork of national data protection laws, give data regulators greater powers to fine, make it easier for companies to operate across the whole of the EU through a “one-stop shop”, and create a new pan-European data regulator called the European Data Protection Board.

The new laws govern the processing and storage of EU citizens’ data, both that given to and observed by companies about people, whether or not the company has operations in the EU. They state that data protection should be both by design and default in any operation.

GDPR will refine and enshrine the “right to be forgotten” laws as the “right to erasure”, and give EU citizens the right to data portability, meaning they can take data from one organisation and give it to another. It will also bolster the requirement for explicit and informed consent before data is processed, and ensure that it can be withdrawn at any time.

To ensure companies comply, GDPR also gives data regulators the power to fine up to €20m or 4% of annual global turnover, which is several orders of magnitude larger than previous possible fines. Data breaches must be reported within 72 hours to a data regulator, and affected individuals must be notified unless the data stolen is unreadable, ie strongly encrypted.
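The fine cap works as a floor-and-percentage rule: under GDPR, the maximum administrative fine for the most serious breaches is whichever is greater of €20m or 4% of annual global turnover. A minimal sketch of that calculation (the function name and figures are illustrative, not from the article):

```python
def max_gdpr_fine(annual_global_turnover_eur: float) -> float:
    """Upper bound of a GDPR fine for the most serious breaches:
    the greater of EUR 20m or 4% of annual global turnover."""
    return max(20_000_000, 0.04 * annual_global_turnover_eur)

# A company turning over EUR 1bn faces a cap of EUR 40m,
# while a smaller firm's cap stays at the EUR 20m floor.
print(max_gdpr_fine(1_000_000_000))
print(max_gdpr_fine(100_000_000))
```

This is why the new regime is "several orders of magnitude larger" than before: for the largest technology companies, the 4% branch dwarfs both the €20m floor and the fixed caps of the previous national laws.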






Arron Banks, the insurers and my strange data trail | Technology



If a 29-year-old Peugeot 309 is the answer, it’s fair to wonder: what on earth is the question? In fact, I had no idea about either the question or the answer when I submitted a “subject access request” to Eldon Insurance Services in December last year. Or that my car – a vehicle that dates from the last millennium – could hold any sort of clue to anything. If there’s one thing I’ve learned, however, in pursuing the Cambridge Analytica scandal, it’s that however weird things look, they can always get weirder.

Because I was simply seeking information, as I have for the last 16-plus months, about what the Leave campaigns did during the referendum – specifically, what they did with data. And the subject access request – a legal mechanism I’d learned about from Paul-Olivier Dehaye, a Swiss mathematician and data expert – was a shot in the dark.

Under British data protection laws, “data subjects” – you and me – have the right to ask companies or organisations what personal information about them they hold. And, a series of incidents had led me to wonder what, if any, personal information Leave.EU – the campaign headed by Nigel Farage and bankrolled by Arron Banks, a Bristol-based businessman – may have held on me. By the time I submitted my request in December, I’d already been writing about them and their relationship with Cambridge Analytica for almost a year – the first piece in February triggering two investigations by the Electoral Commission and Information Commissioner’s Office (ICO).

But, in November, I appeared to touch a nerve. Leave.EU’s persistent but mostly lighthearted attacks on my work began to change in tone. Conservative MPs had started to criticise the government’s Brexit plans, it had been revealed that Robert Mueller was investigating Cambridge Analytica, and it was in the middle of this that Leave.EU put out a video: a spoof video that showed me being beaten up and threatened with a gun. It was intended to creep me out. And it did. What else, I wondered, did Leave.EU have planned? What else did it know about me? And where had it come from? Companies House shows dozens of companies registered in Banks’s name and variants of his name – Aaron Banks, Aron Fraser Andrew Banks, Arron Andrew Fraser Banks, to name three – including a private investigations firm, Precision Risk and Intelligence Ltd. Andy Wigmore – a director of Eldon and Leave.EU’s spokesman – had told me previously that all insurance firms had access to police databases for fraud prevention purposes.

So, on 17 December last year I submitted a request to Liz Bilney, chief executive of both Leave.EU and Eldon Insurance Services, that asked for the personal information held on me by 19 of Banks’s companies.



Email to an employee requesting artwork to make Carole look ‘manic’. It was subsequently used in a Leave.EU tweet.

The letter triggered a stream of abuse, with Banks and Wigmore revealing the contents of my letter in a series of tweets. The next day, I complained to the ICO that my attempt to access my private data, as is my right under British law, had been disclosed publicly and used as the basis to attack me further. The ICO found them to be “likely” in breach of the Data Protection Act and said it had written to notify them. Bilney asked me to pay £170 – you can charge up to £10 for each request by law – and just over a month later I received two folders of data, one relating to the personal information held on me by Leave.EU and the other by Eldon Insurance Services.

The second folder was a surprise. And not just to me. “We have no information on you dopey! You are a political adversary not a customer…” Banks had tweeted at me. And when I’d complained, he said: “You aren’t a customer, we don’t hold any data on you and frankly a journalist asking questions isn’t private, dopey!”

He was right: I wasn’t an Eldon customer. But there it was: my Eldon data, a spreadsheet that showed it had gathered 12 different sets of data on me from three different sources. These were identified by different codes, and a legend supplied with the spreadsheet revealed that the codes represented software companies. And there was my data: Eldon had my name, age, address, email address, the friends and family who had been on my car insurance, and how I had been scored for risk.

How did Eldon have it? And where did it come from? Was I – or had I been – a customer of Eldon at some point? I hadn’t, it turned out, but a search of my inbox revealed that on 27 July last year, I’d taken out car insurance on the basis of a quote I’d obtained from Moneysupermarket.com. The telling detail was that it was sent at 13.34, the same time as the final entry on the spreadsheet.

I had given Moneysupermarket.com all sorts of private information: my home, car, personal relationships, and it had passed that private, sensitive information on to Eldon.

Going back to Moneysupermarket, I could see that I’d consented to my data being shared with its partners when I sought quotes and that, according to the terms and conditions it had set out, it could share it if it wanted.

Two months earlier, I’d spent 72 hours getting increasingly unsettled by the video Leave.EU put out and which, despite hundreds of complaints from people, it had refused to take down. Banks had previously told me that “I wouldn’t be so lippy in Russia” and both he and Leave.EU had made a habit of retweeting personal attacks directed at me by the Russian Embassy’s Twitter account. The video, showing a photoshopped image of me being hit in the face to the music of the Russian national anthem, went up the same week the Telegraph launched an attack on “Brexit mutineers”. Brendan Cox – the widower of Jo Cox – said that it created “a context where violence is more likely”. Another Leave.EU tweet called them a “cancer”. The atmosphere was ugly. And the video felt threatening. I felt threatened. It wasn’t so much that it had been put up, but that it stayed up – only coming down, eventually, when the Observer’s editors intervened.

I tell the story at length because this is the context in which I found out this information. And because it turns out that my experience may not be unique. Moneysupermarket responded: “Our providers use the personal information from our customers to generate personalised quotes for the service they have asked for (such as quoting for car insurance) and are not allowed to use this information for anything else unless they have permission from the customer.”

But I had given my consent and it shared my information in accordance with its privacy policy. In its annual report, it reveals it holds data on 24.9 million people – half the British electorate.

A post-Brexit advert for Eldon Insurance.




My disquiet about what information companies and organisations hold on me, and how it might be used, is a disquiet that, in the light of the Cambridge Analytica scandal, should perhaps be felt by everyone.

Or at least raise questions. Questions, such as: what private information do Banks’s companies hold on you? Where did it come from? How might it have been used?

Last week an ex-director of Cambridge Analytica, Brittany Kaiser, made explosive new claims in testimony to MPs. She appeared before the Digital, Culture, Media and Sport select committee and told MPs that, despite ferocious denials repeated for more than a year, Cambridge Analytica did process data for Leave.EU and Ukip. It did carry out work for the campaign, she said.

But she also told MPs – and submitted evidence – that she had been asked to devise a strategy to combine Ukip, Leave.EU and Eldon insurance data to politically profile people. What’s more, she said, she visited Eldon’s call centre and HQ in Bristol, which had also served as the campaign HQ for Leave.EU, and seen with her “own eyes” how Eldon employees used Eldon data to target people with political messages.

If true, Ravi Naik, a human rights lawyer who specialises in data rights, says it would be a scandal on the level of the one now engulfing Cambridge Analytica and Facebook. Because my attempt to find something as benign and unavoidable as a new insurance deal for my Peugeot – an attempt made by millions of others – had inadvertently revealed the personal data they potentially had access to.

“It’s what Christopher Wylie has been saying about the weaponisation of data,” says Naik. “The idea that by doing something fundamental to your day-to-day life could have led to sensitive personal information being used in ways you don’t know about, let alone consented to.”

Banks told the Observer that Kaiser’s evidence “was a tissue of lies”, that she had visited Eldon’s offices only once, that the call centre handled calls from the public or those who followed Leave.EU on social media, and that the company “absolutely refutes” that any insurance data was used in the campaign. He said: “Eldon has never given… any data to Leave.EU, they are separate entities with strong data control rules. And vice versa.”

The folder containing my Eldon data was one of two I received back. The other marked Leave.EU contained all sorts of odd material: emails I’d sent Banks and Wigmore, and replies they’d sent to me. Emails they’d sent employees about me. Emails about mocking up Photoshopped images of me to put out on Twitter.

Typical is this one from 13 December, in which Wigmore writes: “Can we get a picture of carole codswallop accepting her award Oscar style thanking the Russians, Facebook Arron and myself with caption only 75p spent on Brexit etc – make it funny.”

Or this one from May last year, four days after the Observer published the first story that used Wylie, the Cambridge Analytica whistleblower, as an anonymous source: “Can you do a tile of Carole Cadwalladr with a tin foil hat on, looking manic at a computer with a big whiteboard with illuminati triangles with a big chalkboard filled with formulae etc. No copy. She’s looking into the campaign trying to find a big global conspiracy and we want to take the piss out of her…”




Carole’s 29-year-old Peugeot has been an unlikely gateway to new discoveries about data.

He did. The image – flatearth.jpg – is attached in the next email and later @LeaveEUOfficial put it out on Twitter. “Madwoman @carolecadwalla is desperate to unearth some global conspiracy to undermine the #EURef. There isn’t one. Leave won, get over it!”

So far, so predictable. The “piss taking” was – until November’s video – the main mode of communication from Leave.EU to me. But the email chain and others in the folder pose more questions. Questions about the relationship between Banks and Leave.EU. About the relationship of Banks with Eldon insurance. And their relationship to each other. Questions that urgently need answers.

Because the request to Leave.EU was assigned to an employee whose email states “(Eldon Insurance Services)” and who has worked for Eldon Insurance Services Ltd since October 2016. He was assigned the “task” by someone with a Leave.EU email address and the email links through to a password-protected website called www.eldoninsuranceservices.eu.teamwork.com.

Also cc-ed is another employee who Kaiser’s emails, released via parliament last week, show was involved with the work that Cambridge Analytica did for the campaign. His LinkedIn profile describes him as doing political work on behalf of Eldon Insurance.

Other employees are listed as working for both companies. Eldon’s operations manager, for example, is also Leave.EU’s operations manager. When asked about this crossover of employees, directors and projects, Banks said: “During the campaign a small number of managers were allocated, expensed in the EC [electoral commission] return and worked on Leave.EU.” When asked about current employees, including current employees who were working for both organisations concurrently, he gave no reply.

Leave.EU was, and still is, based within Eldon Insurance’s HQ. Westmonster, the political news site Banks founded and funded, is registered to an Eldon Insurance address. Adverts for his firm GoSkippy are routinely sent to people on Leave.EU’s mailing list. Last year, Banks defended the practice, saying: “Why shouldn’t I? It’s my data.” When asked again last week, he said: “Leave.EU after the referendum campaign carried the occasional ad for insurance, so what?” In an email a day later, he said: “Eldon has never given … any data to Leave.EU.”

Last week, the Observer revealed that in the same week that the ICO had raided Cambridge Analytica’s office and seized evidence, it had issued “information notices” to both Leave.EU and Banks, a regulatory action that demands information be provided and with which non-compliance is a criminal offence. The questions are being asked, it seems. However, Elizabeth Denham, the information commissioner, told a conference last week that her office urgently needed stronger powers to conduct its investigations. “We need the regime to reflect that data crimes are real crimes,” she said.

The questions are out there. Whether the ICO has the power to get the answers – or whether we’re going to continue to rely on clues obtained by a parliamentary committee and a 29-year-old Peugeot – remains to be seen.




Americans want tougher rules for big tech amid privacy scandals, poll finds | Technology



Americans want major technology companies to be regulated, with legal responsibility for the content they carry on their platforms and harsher punishment for breaches of data privacy, according to a nationally representative survey of 2,500 adults.

In the wake of the Cambridge Analytica scandal, the way that technology companies handle our personal data and moderate the content – including fake news, hate speech and terrorist propaganda – on their platforms has been thrown under the microscope.

The increased scrutiny appears to have increased the public’s appetite for more stringent regulation of the tech giants, with 83% of people seeking tougher regulations and penalties for breaches of data privacy and 84% believing that technology companies should be legally responsible for the content they carry on their systems. Technology companies are currently protected from legal responsibility for most content thanks to section 230 of the Communications Decency Act.

About 53% of people feel the tech sector should be regulated by the federal government in the way big banks are. That figure rose to 62% among baby boomers.

These results, from the inaugural Tech Media Telecom Pulse Survey, by HarrisX, represent a striking departure from responses to a similar set of questions in November 2017, when just 49% of Americans felt that tech firms needed regulation.

“Public opinion last year was largely evenly split on the need to regulate technology companies,” said Dritan Nesho, CEO of HarrisX. “It has now swung in the opposite direction due to a series of scandals around fake news, platform bias, foreign interference and privacy concerns, which have rightly been called the ‘annus horribilis’ of the tech sector.”

The survey was conducted right after the Facebook CEO, Mark Zuckerberg, delivered many hours of testimony before Congress last week. The poll included a section of questions dedicated to the social network.

The results show that Americans don’t believe Facebook is a neutral platform. Sixty-six percent view it as a media company that prioritises some types of content over others, with 55% believing there to be evidence of political bias and censorship in its results – a view that’s more prevalent (70%) among Republicans.

In questions comparing Facebook with other companies (Twitter, Google, YouTube, Apple, Amazon, Microsoft, LinkedIn and Snapchat), the social network came out badly. Forty-four percent disagreed with the statement “Facebook cares about privacy” while 46% disagreed with the notion that it “protects my personal information”. Twitter scored the second-worst in these categories, with 33% and 30% respectively. When asked whether companies improved users’ mental health, Facebook had the most detractors, with 41% disagreeing with the notion, compared with Twitter’s 32%.

Although Facebook has borne the brunt of the scrutiny over the last couple of weeks, 61% of the public believes that other tech CEOs should be summoned before Congress to explain their data privacy and security practices.

The Cambridge Analytica scandal was a “perfect storm that caught Facebook in its crosshairs”, said Nesho, “but it is by no means solely a Facebook issue.”

It wasn’t all bad for the technology sector. Sixty-three percent of Americans perceive technology to be a “good force”, with 68% perceiving it to have a “positive impact on the world”. However, when they were asked about specific companies, none received majority approval for being “good for democracy”.

“The public has a complex relationship with personal technology. Broadly speaking, a majority of Americans perceive technology to be a good force on the world,” said Nesho. “But dig deeper and you find very conflicted views on a series of important social issues.”


