Privacy


Facebook lets advertisers target users based on sensitive interests | Technology



Facebook allows advertisers to target users it thinks are interested in subjects such as homosexuality, Islam or liberalism, despite religion, sexuality and political beliefs explicitly being marked out as sensitive information under new data protection laws.

The social network gathers information about users based on their actions on Facebook and on the wider web, and uses that data to predict their interests. These can be mundane – football, Manhattan or dogs, for instance – or more esoteric.

A Guardian investigation in conjunction with the Danish Broadcasting Corporation found that Facebook is able to infer extremely personal information about users, which it allows advertisers to use for targeting purposes. Among the interests found in users’ profiles were communism, social democrats, Hinduism and Christianity.

The EU’s general data protection regulation (GDPR), which comes into effect on 25 May, explicitly labels such categories of information as so sensitive, with such a risk of human rights breaches, that it mandates special conditions around how they can be collected and processed. Among those categories are information about a person’s race, ethnic origin, politics, religion, sex life and sexual orientation.

The information commissioner’s office says: “This type of data could create more significant risks to a person’s fundamental rights and freedoms, for example, by putting them at risk of unlawful discrimination.”

Organisations must cite one of 10 special dispensations to process such information, such as “preventive or occupational medicine”, “to protect the vital interests of the data subject”, or “the data subject has given explicit consent to the processing of those personal data for one or more specified purposes”.

Facebook already applies those special categories elsewhere on the site. As part of its GDPR-focused updates, the company asked every user to confirm whether or not “political, religious, and relationship information” they had entered on the site should continue to be stored or displayed. But while it offered those controls for information that users had explicitly given it, it gathered no such consent for information it had inferred about users.

The data means an advertiser can target messages at, for instance, people in the UK who are interested in homosexuality and Hinduism – about 68,000 people, according to the company’s advertising tools.

Facebook does demonstrate some understanding that the information is sensitive and prone to misuse. The company provides advertisers with the ability to exclude users based on their interests, but not for sensitive interests. An advertiser can advertise to people interested in Islam, for instance, but cannot advertise to everyone except those interested in Islam.

The company requires advertisers to agree to a set of policies that, among other things, bar them from “using targeting options to discriminate against, harass, provoke or disparage users, or to engage in predatory advertising practices.”

In a statement, Facebook said classifying a user’s interests was not the same as classifying their personal traits. “Like other internet companies, Facebook shows ads based on topics we think people might be interested in, but without using sensitive personal data. This means that someone could have an ad interest listed as gay pride because they have liked a Pride-associated page or clicked a Pride ad, but it does not reflect any personal characteristics such as gender or sexuality.”

The company also said it provided some controls to users on its ad preferences screen. “People are able to manage their ad preferences tool, which clearly explains how advertising works on Facebook and provides a way to tell us if you want to see ads based on specific interests or not. When interests are removed, we show people the list of removed interests so that they have a record they can access, but these interests are no longer used for ads.”

It added: “Our advertising complies with relevant EU law and, like other companies, we are preparing for the GDPR to ensure we are compliant when it comes into force.”

The findings are reminiscent of Facebook’s previous attempts to skirt the line between profiling users and profiling their interests. In 2016 it was revealed that the company had created a tool for “racial affinity targeting”.

At the time, Facebook repeatedly argued that the tool “is based on affinity, not ethnicity”. Discussing a person who was in the African American affinity group, for instance, the company said: “They like African American content. But we cannot and do not say to advertisers that they are ethnically black.”

Almost a year later, after it was revealed that advertisers could use the ethnic affinity tools to unlawfully discriminate against black Facebook users in housing adverts, Facebook agreed to limit how those tools could be used.




Golden State Killer: the end of DNA privacy? Chips with Everything podcast | Technology



Subscribe and review: Acast, Apple, Spotify, Soundcloud, Audioboom, Mixcloud. Join the discussion on Facebook, Twitter or email us at [email protected]

A former police officer called Joseph James DeAngelo was arrested in April in connection with a series of murders, rapes and burglaries attributed to an unknown assailant known as the Golden State Killer.

This 40-year-old cold case was reopened after investigators acquired a discarded DNA sample and uploaded it to an “undercover profile” on a genealogy website called GEDmatch. Through this, they were able to find distant relatives and eventually narrow down their search to match descriptions of the killer obtained throughout the investigation.

But what about the innocent people who sent off their DNA to a genealogy website in the hope of tracing their ancestry, only to end up becoming part of a criminal investigation? Our DNA is one of the most inherently personal things we have, and this case raises questions about its privacy. If we spit into a test tube and send it off to a website for analysis, who owns that information? Who has access to it? And what can it be used for?

To try to answer some of these questions, Jordan Erica Webber talks to Prof Charles Tumosa of the University of Baltimore, Prof Denise Syndercombe-Court of King’s College London and Lee Rainie of the Pew Research Center.






Cambridge Analytica closing after Facebook data harvesting scandal | News



Cambridge Analytica, the data firm at the centre of this year’s Facebook privacy row, is closing and starting insolvency proceedings.

The company has been plagued by scandal since the Observer reported that the personal data of about 50 million Americans and at least a million Britons had been harvested from Facebook and improperly shared with Cambridge Analytica.

Cambridge Analytica denies any wrongdoing, but says that the negative media coverage has left it with no clients and mounting legal fees.


What is the Cambridge Analytica scandal? – video explainer

“Despite Cambridge Analytica’s unwavering confidence that its employees have acted ethically and lawfully, the siege of media coverage has driven away virtually all of the Company’s customers and suppliers,” said the company in a statement, which also revealed that SCL Elections Ltd, the UK entity affiliated with Cambridge Analytica, would also close and start insolvency proceedings.

“As a result, it has been determined that it is no longer viable to continue operating the business, which left Cambridge Analytica with no realistic alternative to placing the company into administration.”

As first reported by the Wall Street Journal, the company has started insolvency proceedings in the US and UK. At Cambridge Analytica’s New York offices on an upmarket block on Manhattan’s Fifth Avenue, it appeared all the staff had already left the premises.

The Guardian rang the doorbell to the company’s seventh-floor office and was met by a woman who would not give her name but said she did not work for the company.



The Cambridge Analytica office in New York. Photograph: Oliver Laughland for the Guardian

Asked if anyone from Cambridge Analytica or SCL was still inside, she said: “They used to be. But they all left today.”

The scandal centres on data collected from Facebook users via a personality app developed by the Cambridge University researcher Aleksandr Kogan. The data was collected via Facebook’s permissive “Graph API”, the interface through which third parties could interact with Facebook’s platform. This allowed Kogan to pull data about users and their friends, including likes, activities, check-ins, location, photos, religion, politics and relationship details. He passed the data to Cambridge Analytica, in breach of Facebook’s platform policies.
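To make the mechanics concrete, here is a minimal, hypothetical sketch of the kind of request an app could make against the pre-2014 Graph API described above. The graph.facebook.com host and the /me and /me/friends endpoints are real, but the access token, field names and breadth of friend data shown are illustrative only; as noted later in this piece, Facebook changed its platform in 2014 to stop developers harvesting friends’ data through apps.

```python
# Hypothetical sketch of a pre-2014 Graph API call. The token, field names
# and permissions are placeholders for illustration; friend-data access of
# this kind was removed by Facebook in 2014.
import requests

ACCESS_TOKEN = "USER_ACCESS_TOKEN"  # token granted when a user installed the quiz app
BASE = "https://graph.facebook.com"

# Profile fields for the user who installed the app
me = requests.get(
    f"{BASE}/me",
    params={"fields": "id,name,likes,location", "access_token": ACCESS_TOKEN},
).json()

# Under the old permission model, the same token could also enumerate the
# user's friends and read a subset of their profile fields
friends = requests.get(
    f"{BASE}/me/friends",
    params={"fields": "id,name,likes", "access_token": ACCESS_TOKEN},
).json()

print(me.get("name"), "exposed", len(friends.get("data", [])), "friends to the app")
```

One app install could therefore yield data on the installer plus every friend whose settings allowed it, which is how a few hundred thousand quiz-takers became tens of millions of affected profiles.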

Christopher Wylie, the original Cambridge Analytica whistleblower, told the Observer that the data Kogan obtained was used to influence the outcome of the US presidential election and Brexit. According to Wylie the data was fed into software that profiles voters and tries to target them with personalised political advertisements. Cambridge Analytica insists it never incorporated the Kogan data.

Kogan told BBC Radio 4’s Today programme he was being used as a scapegoat.

He said: “My view is that I’m being basically used as a scapegoat by both Facebook and Cambridge Analytica. Honestly, we thought we were acting perfectly appropriately. We thought we were doing something that was really normal.”

Cambridge Analytica said it had been “vilified for activities that are not only legal, but also widely accepted as a standard component of online advertising in both the political and commercial arenas”.

The CEO of Cambridge Analytica, Alexander Nix, was suspended in late March after Britain’s Channel 4 News broadcast secret recordings in which he claimed credit for the election of Donald Trump.

He told an undercover reporter: “We did all the research, all the data, all the analytics, all the targeting. We ran all the digital campaign, the television campaign and our data informed all the strategy.”

He also revealed that the company used a self-destructing email server to erase its digital history.

“No one knows we have it, and secondly we set our … emails with a self-destruct timer … So you send them and after they’ve been read, two hours later, they disappear. There’s no evidence, there’s no paper trail, there’s nothing.”

Although Cambridge Analytica might be dead, the team behind it has already set up a mysterious new company called Emerdata. According to Companies House data, Alexander Nix is listed as a director along with other executives from SCL Group. The daughters of the billionaire Robert Mercer are also listed as directors.

Damian Collins, chair of the British parliamentary committee looking into data breaches, expressed concern that Cambridge Analytica’s closure might hinder the investigation into the firm.

“Cambridge Analytica and SCL group cannot be allowed to delete their data history by closing. The investigations into their work are vital,” he wrote on Twitter.

The episode has shone a spotlight on the way that Facebook data is collected, shared and used to target people with advertising.

The social network initially scrambled to blame rogue third parties for “platform abuse” – “the entire company is outraged we were deceived,” the company said – before it unveiled sweeping changes to its privacy settings and data sharing practices.

“This was a breach of trust between Kogan, Cambridge Analytica and Facebook,” said Mark Zuckerberg in a Facebook post. “But it was also a breach of trust between Facebook and the people who share their data with us and expect us to protect it. We need to fix that.”

Facebook first discovered that Kogan had shared data with Cambridge Analytica when a Guardian journalist contacted the company about it at the end of 2015. It asked Cambridge Analytica to delete the data and revoked Kogan’s apps’ API access. However, Facebook relied on Cambridge Analytica’s word that it had done so.

After it was revealed that the data hadn’t been deleted, Facebook revoked Cambridge Analytica’s access to its platform and launched an investigation of “thousands” of apps that had similar access and made several changes to restrict how much third-party developers can access from people’s profiles.

The company also pledged to verify the identities of administrators of popular Facebook pages and advertisers buying political “issue” ads on “debated topics of national legislative importance” such as education, immigration and abortion.






Twitter urges all users to change their password after bug discovered | Technology



Twitter has urged its 336 million users to change their passwords after the company discovered a bug that stored passwords in plain text in an internal system.

The company said it had fixed the problem and had seen “no indication of breach or misuse”, but it suggested users consider changing their password on Twitter and on all services where they have used the same password “as a precaution”.

“We are very sorry this happened,” said Twitter’s chief technology officer, Parag Agrawal, in a blogpost. “We recognise and appreciate the trust you place in us, and are committed to earning that trust every day.”

Twitter Support (@TwitterSupport) tweeted on 3 May 2018: “We recently found a bug that stored passwords unmasked in an internal log. We fixed the bug and have no indication of a breach or misuse by anyone. As a precaution, consider changing your password on all services where you’ve used this password.” https://t.co/RyEDvQOTaZ

Companies with good security practices typically store user passwords in a form that cannot be read back. In Twitter’s case, passwords are masked through a process called hashing, which runs the actual password through a one-way function and stores only the resulting scrambled string of numbers and letters in the company’s system.

“This allows our systems to validate your account credentials without revealing your password,” said Agrawal. “This is an industry standard.”

“Due to a bug, passwords were written to an internal log before completing the hashing process. We found this error ourselves, removed the passwords, and are implementing plans to prevent this bug from happening again.”
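As a rough illustration of the process Agrawal describes – salting and hashing so that only an unreadable value is ever stored – here is a generic sketch using Python’s standard library. It is not Twitter’s implementation; the function names and parameters are illustrative.

```python
# Generic illustration of salted password hashing (not Twitter's actual code).
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, digest); only these values should ever be stored."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify_password(password: str, salt: bytes, stored_digest: bytes) -> bool:
    """Validate credentials without the server ever keeping the plaintext."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return hmac.compare_digest(candidate, stored_digest)

# The bug Twitter describes amounts to writing the raw password to a log
# *before* a function like hash_password() has run, e.g.:
#   log.write(f"login attempt: {password}")  # plaintext ends up in an internal log
```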

Agrawal advises people to change their passwords, enable two-factor authentication on their Twitter account and use a password manager to create strong, unique passwords on every service they use.






EU: data-harvesting tech firms are ‘sweatshops of connected world’ | Technology



The European data protection supervisor has hit out at social media and tech firms over the recent constant stream of privacy policy emails in the run-up to GDPR, calling them the “sweatshops of the connected world”.

With the tough new General Data Protection Regulation coming into force on 25 May, companies around the world are being forced to notify their users to accept new privacy policies and data processing terms to continue to use the services.

But Giovanni Buttarelli, the European data protection supervisor (EDPS), lambasted the often-hostile approach of the recent deluge of notifications.

“If this encounter seems a take-it-or-leave it proposition – with perhaps a hint of menace – then it is a travesty of at least the spirit of the new regulation, which aims to restore a sense of trust and control over what happens to our online lives,” said Buttarelli. “Consent cannot be freely given if the provision of a service is made conditional on processing personal data not necessary for the performance of a contract.”

“The most recent [Facebook] scandal has served to expose a broken and unbalanced ecosystem reliant on unscrupulous personal data collection and micro-targeting for whatever purposes promise to generate clicks and revenues.

“The digital information ecosystem farms people for their attention, ideas and data in exchange for so called ‘free’ services. Unlike their analogue equivalents, these sweatshops of the connected world extract more than one’s labour, and while clocking into the online factory is effortless it is often impossible to clock off.”

The European Union’s new stronger, unified data protection laws, the General Data Protection Regulation (GDPR), will come into force on 25 May 2018, after more than six years in the making.

GDPR will replace the current patchwork of national data protection laws, give data regulators greater powers to fine, give companies a “one-stop shop” for operating across the whole of the EU, and create a new pan-European data regulator called the European Data Protection Board.

The new laws govern the processing and storage of EU citizens’ data, both that given to and observed by companies about people, whether or not the company has operations in the EU. They state that data protection should be both by design and default in any operation.

GDPR will refine and enshrine the “right to be forgotten” laws as the “right to erasure”, and give EU citizens the right to data portability, meaning they can take data from one organisation and give it to another. It will also bolster the requirement for explicit and informed consent before data is processed, and ensure that it can be withdrawn at any time.

To ensure companies comply, GDPR also gives data regulators the power to fine up to €20m or 4% of annual global turnover, which is several orders of magnitude larger than previous possible fines. Data breaches must be reported within 72 hours to a data regulator, and affected individuals must be notified unless the data stolen is unreadable, ie strongly encrypted.
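To put those ceilings in context: a hypothetical company with €10bn in annual global turnover could face a fine of up to €400m under the 4% rule, twenty times the €20m floor, whereas for most smaller firms the €20m figure is the binding limit.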

While data protection and privacy have become hot-button issues, in part thanks to the Cambridge Analytica files, Buttarelli is concerned that they are simply being used as part of the “PR toolkit” of firms. He said there is “a growing gulf between hyperbole and reality, where controllers learn to talk a good game while continuing with the same old harmful habits”.

A new social media subgroup of data protection regulators will be convened in mid-May to tackle what Buttarelli called the “manipulative approaches” that must change with GDPR.

“Brilliant lawyers will always be able to fashion ingenious arguments to justify almost any practice. But with personal data processing we need to move to a different model,” said Buttarelli. “The old approach is broken and unsustainable – that will be, in my view, the abiding lesson of the Facebook/ Cambridge Analytica case.”




Facebook announces dating app focused on ‘meaningful relationships’ | Technology



Facebook is launching a new dating app on the social media platform, its CEO, Mark Zuckerberg, announced at an annual developer conference on Tuesday, unveiling a feature designed to compete with popular services like Tinder.

Speaking in front of a packed crowd in San Jose, Zuckerberg described the new dating feature as a tool to build “real long-term relationships – not just hookups”.

“We want Facebook to be somewhere where you can start meaningful relationships,” he continued. “We’ve designed this with privacy and safety in mind from the beginning.”

The announcement sparked gasps from the crowd and seemed to attract the most interest from the audience during Zuckerberg’s short speech, which focused on the company’s widening privacy scandal, new safeguards meant to protect users’ data, and misinformation and fake news on the site.

Chris Cox, the chief product officer, said the dating feature would be “opt-in” and “safe” and that the company “took advantage of the unique properties of the platform”.

Cox showed a user’s hypothetical dating profile, which he said would be separate from an individual’s regular profile, accessed in a different section of the site. The dating feature would use only a first name and only be visible to those using the service, not an individual’s Facebook friends. The feature would not show up in the news feed, he added.

Cox said users of this feature could browse and “unlock” local events and message others planning to attend. If a potential date responded, the two would then connect via a text messaging feature that is not connected to WhatsApp or Facebook Messenger.

“We like this by the way because it mirrors the way people actually date, which is usually at events and institutions they’re connected to,” Cox said. “We hope this will help more folks meet and hopefully find partners.”

The sample profiles displayed at the conference resembled some basic features of Tinder.

Shares of Match, the company that owns Tinder, OkCupid and Match.com, fell by 21% after Zuckerberg announced the new feature, according to Bloomberg.

The CEO noted that one in three marriages in the US now start online. He said couples who met on Facebook have repeatedly thanked him over the years.

Zuckerberg said: “These are some of the moments that I’m really proud of what we’re doing. I know that we’re making a positive difference in people’s lives.”

The announcement of the dating feature came after Zuckerberg acknowledged that it has been a particularly “intense” year for the company, following revelations that millions of Americans’ personal data was harvested from Facebook and improperly shared with the political consultancy Cambridge Analytica.






WhatsApp raises minimum age to 16 for Europeans ahead of GDPR | Technology



WhatsApp is raising the minimum user age from 13 to 16, potentially locking out large numbers of teenagers as the messaging app looks to comply with the EU’s upcoming new data protection rules.

The Facebook-owned messaging service, which has more than 1.5 billion users, will ask people in the 28 EU states to confirm they are 16 or older as part of a prompt to accept new terms of service and an updated privacy policy in the next few weeks.

How WhatsApp will confirm age and enforce the new limit is unclear. The service does not currently verify identity beyond requirements for a working mobile phone number.

WhatsApp said it was not asking for any new rights to collect personal information in the agreement it has created for the European Union. It said: “Our goal is simply to explain how we use and protect the limited information we have about you.”

WhatsApp’s minimum age will remain 13 years outside of Europe, in line with its parent company. In order to comply with the European General Data Protection Regulation (GDPR), which comes into force on 25 May, Facebook has taken a different approach for its primary social network. As part of its separate data policy, the company requires those aged between 13 and 15 years old to nominate a parent or guardian to give permission for them to share information with the social network, or otherwise limit the personalisation of the site.

WhatsApp also announced on Tuesday that it would begin allowing users to download a report detailing the data it holds on them, such as the make and model of the device they used, their contacts and groups, and any blocked numbers.

GDPR is the biggest overhaul of online privacy since the birth of the internet, giving Europeans the right to know what data is stored on them and the right to have it deleted. The new laws also give regulators the power to fine corporations up to 4% of their global turnover or €20m, whichever is larger, for failing to meet the tough new data protection requirements.

WhatsApp, founded in 2009 and bought by Facebook for $19bn in 2014, has come under pressure from some European governments in recent years because of its use of end-to-end encryption and its plan to share user data with its parent company.

In 2017 European regulators disrupted a move by WhatsApp to change its policies to allow it to share users’ phone numbers and other information with Facebook for ad targeting and other uses. WhatsApp suspended the change in Europe after widespread regulatory scrutiny, and in March signed an undertaking with the UK Information Commissioner’s Office not to share any EU citizen’s data with Facebook until GDPR comes into force.

But on Tuesday the messaging firm said it wanted to continue sharing data with Facebook at some point. It said: “As we have said in the past, we want to work closer with other Facebook companies in the future and we will keep you updated as we develop our plans.”




Cambridge University rejected Facebook study over ‘deceptive’ privacy standards | Technology



A Cambridge University ethics panel rejected research by the academic at the centre of the Facebook data harvesting scandal over the social network’s “deceptive” approach to its users’ privacy, newly released documents reveal.

A 2015 proposal by Aleksandr Kogan, a member of the university’s psychology department, involved the personal data of 250,000 Facebook users and their 54 million friends that he had already gleaned via a personality quiz app in a commercial project funded by SCL, the parent company of Cambridge Analytica.

Separately, Kogan proposed an academic investigation on how Facebook likes are linked to “personality traits, socioeconomic status and physical environments”, according to an ethics application about the project released to the Guardian in response to a freedom of information request.

The documents shed new light on suggestions from the Facebook CEO, Mark Zuckerberg, that the university’s controls on research did not meet Facebook’s own standards. In testimony to the US Congress earlier this month, Zuckerberg said he was concerned over Cambridge’s approach, telling a hearing: “What we do need to understand is whether there is something bad going on at Cambridge University overall, that will require a stronger action from us.”

But in the newly published material, the university’s psychology research ethics committee says it found the Kogan proposal so “worrisome” that it took the “very rare” decision to reject the project.

The panel said Facebook’s approach to consent “falls far below the ethical expectations of the university”.


Correspondence around the decision was released hours before Kogan appeared before a Commons inquiry into fake news and misinformation. In written and oral evidence to the committee, Kogan insisted that all his academic work was reviewed and approved by the university. But he did not mention to the MPs the ethics committee’s rejection of his proposed research using the Facebook data in May 2015.

Explaining the decision, one member of the panel said the Facebook users involved had not given sufficient consent to allow the research to be conducted, or given a chance to withdraw from the project. The academic, whose name was redacted from the document, said: “Facebook’s privacy policy is not sufficient to address my concerns.”

Appealing against the panel’s rejection, a letter believed to be written by Kogan pointed out that “users’ social network data is already downloaded and used without their direct consent by thousands of companies who develop apps for Facebook”.

It added: “In fact, access to data by third parties for various purposes is fundamental to every app on Facebook; so users have already had their data downloaded and used by companies for private interest.”

Another panel member felt that information shared with Facebook friends should not be regarded as public data. In a response to Kogan’s appeal, the academic said: “Once you have persuaded someone to run your Facebook app, you can really only collect that subject’s data. What his or her friends have disclosed is by default disclosed to ‘friends’ only, that is, with an expectation of confidence.”

The ethics panel member added: “Facebook is rather deceptive on this and creates the appearance of a cosy and confidential peer group environment, as a means of gulling users into disclosing private information that they then sell to advertisers, but this doesn’t make it right to an ethical researcher to follow their lead.”

The academic also likened Facebook to a contagion. The letter sent in July 2015 said: “My view of Facebook is that it’s a bit like an infectious disease; you end up catching what your friends have. If there are bad apps, or malware, or even dodgy marketing offers, they get passed along through friendship networks. An ethical approach to using social networking data has to take account of this.”

Kogan accepted that he made mistakes in how the Facebook data was collected. He told Tuesday’s committee hearing: “Fundamentally I made a mistake, by not being critical about this. I should have got better advice on what is and isn’t appropriate.”

But asked if he accepted that he broke Facebook’s terms and conditions, Kogan said: “I do not. I would agree that my actions were inconsistent with the language of these documents, but that’s slightly different.”

Kogan collected Facebook data before the network changed its terms of service in 2014 to stop developers harvesting data via apps.

Facebook has banned Kogan from the network and insisted that he violated its platform policy by transferring data his app collected to Cambridge Analytica. The company has previously said it is “strongly committed to protecting people’s information”.

In an initial statement on Kogan’s research, Mark Zuckerberg said Facebook had already taken key steps to secure users’ data and said it would go further to prevent abuse. He added: “This was a breach of trust between Kogan, Cambridge Analytica and Facebook. But it was also a breach of trust between Facebook and the people who share their data with us and expect us to protect it.”

Kogan has been approached for comment.




Facebook says its free news feed is helping journalism | Technology



Facebook has told the Australian competition regulator that news makes up just 5% of the content shared on the platform, and the social media giant is helping journalism by providing a free global distribution service for publishers.

In its submission to the Australian Competition and Consumer Commission inquiry into the impact of digital platforms on media and advertising, Facebook also downplayed its collection and use of people’s data, saying many organisations, including newspapers, collected similar data.

“Facebook does not sell or provide data to advertisers,” the company said. “We provide them the ability to target their advertisements.”



In a 56-page document, the company said the Facebook news feed was less than 5% news, and was a “free platform for global content distribution and promotion” which allowed publishers to connect with readers and advertisers. Facebook offers tools and products to publishers which allow them to promote their content and reach new readers.

This week Google said in its submission it was not contributing to the death of journalism.

Facebook criticised some of the information in the ACCC’s issues paper as inaccurate in its portrayal of the digital ecosystem of Facebook, publishers, businesses and consumers. The inquiry is looking into the impact of Facebook, Google and Apple on the level of choice in news content and its quality.

A graphic published by the ACCC “does not adequately convey the value that digital platforms provide to consumers”, Facebook said.

Facebook portrayed itself as just one platform among many in a rapidly changing environment which demanded constant innovation and was competing for advertising with Snapchat, Google, YouTube, Amazon and others.

The average person now used eight different services to connect with friends and businesses, and Facebook was just one of them, competing for the attention of consumers and advertisers, the company said. Facebook said it spent more than $6bn a year on research and development to keep up with its competitors in innovation.

“If we stop innovating someone else will innovate around us – making us obsolete,” the submission said. “We know if we cease to be useful people will leave.”

But it admitted its privacy settings and other tools had been too hard to find and information about data collection was not clear. It said it had recently improved those services, but users should understand that their information was key to providing a personalised service.

“Our core value to consumers comes from the highly personalised and relevant experience we provide,” the submission said. “Information that people provide about themselves allows us to provide this experience and is therefore integral to the Facebook experience.”

Earlier this month Australia’s privacy commissioner launched an investigation to determine whether Facebook had breached the Australian Privacy Act after it was revealed up to one in 50 local users may have had their personal information accessed by Cambridge Analytica.

Facebook said in its submission that combating the spread of fake news was a priority. It was now banning advertisers who spread false information and users would see less content from those who shared clickbait headlines – even though the chief executive, Mark Zuckerberg, admitted it would “considerably impact our profitability”.

The submission emphasised the benefits to local businesses from advertising on Facebook. More than 200 million people around the world were connected to an Australian business and most of them were small players, it said. More than 350,000 local businesses spent less than $US100 on advertising on Facebook in 2017.

In a separate submission to the inquiry, the ABC said it worked with Facebook, Google and other digital platforms to distribute its content, to increase engagement and to ensure more people discovered ABC content.

In 2017, 49.9% of Australians between the ages of 18 and 75 accessed ABC news and current affairs content, and the ABC reached 18.8% of Australian adults each week through third-party digital platforms, the submission said.

“The challenge of monetising digital content in this disrupted and increasingly global media landscape has coincided with a decline in the level of trust the public places in traditional sources of news media,” the ABC said.

“Overall, audience trust in the Australian media as an institution is at an all-time low, and the level of trust in the mainstream media’s ability to tell full, accurate and fair news has decreased.

“Simultaneously, digital platforms have contributed to an increase in public concern about fake news and there is a growing demand for news and journalistic content that is explained and verified.

“In this environment, the ABC – an independent and trusted Australian media organisation – has an increasingly important role to play; 81% of Australian adults trust the information provided by the ABC.”




Eventbrite apologises for footage rights grab | World news



A website that allows users to create, promote and sell tickets to events has apologised to users for a clause in its terms of service that allowed it to attend and film their events and use the footage for its own purposes.

Eventbrite hosts more than 2m events a year, ranging from small free gatherings of friends to large paid-for conferences.

Buried near the bottom of the website’s 10,000-word merchant agreement was a section titled “Permissions you grant us to film and record your events”. It gave Eventbrite wide-ranging powers to use private events for its own purposes, including in adverts for the site.

The terms and conditions also allowed the company to film behind the scenes as an event was being set up or packed away, and required event hosts to obtain, at their own expense, all the “permissions, clearances and licenses” required to allow Eventbrite to do what it wanted, while leaving the question of whether it even had to credit performers or hosts “in its discretion”.

The clause, which affected the British version of the site as well as the American, was added to the merchant agreement at the beginning of April, but it took until Friday for user Barney Dellar to bring the rights grab to wider attention.

Eventbrite apologised for the clause on Sunday night, and removed it from its site. A spokeswoman told the Guardian: “Earlier this month we made an update to our terms of service and merchant agreement that would allow us the option to work with individual organisers to secure video and photos at their events for marketing and promotional purposes.

“We’ve heard some concerns from our customers and agree that the language of the terms went broader than necessary given our intention of the clause.

“We have not recorded any footage at events since this clause was added, and upon further review, have removed it entirely from both our terms of service and merchant agreement. We sincerely apologise for any concern this caused.”

Many companies are rushing out large updates to their terms of service in the run-up to 25 May, the enforcement date for Europe’s general data protection regulation (GDPR), which strengthens individuals’ data protection rights.

In the last week, Facebook, Tumblr, and Airbnb have all notified users of their new terms of service.





