Tim Berners-Lee on 30 years of the world wide web: ‘We can get the web we want’


Thirty years ago, Tim Berners-Lee, then a fellow at the physics research laboratory Cern on the French-Swiss border, sent his boss a document labelled Information Management: A Proposal. The memo suggested a system with which physicists at the centre could share “general information about accelerators and experiments”.

“Many of the discussions of the future at Cern and the LHC era end with the question: ‘Yes, but how will we ever keep track of such a large project?’” wrote Berners-Lee. “This proposal provides an answer to such questions.”

His solution was a system called, initially, Mesh. It would combine hypertext, a nascent technology that allowed human-readable documents to be linked together, with a distributed architecture: those documents would be stored on multiple servers, controlled by different people, and interconnected.

It didn’t really go anywhere. Berners-Lee’s boss, Mike Sendall, took the memo and jotted down a note on top: “Vague but exciting …” But that was it. It took another year, until 1990, for Berners-Lee to start actually writing code. In that time, the project had taken on a new name. Berners-Lee now called it the World Wide Web.

Thirty years on, Berners-Lee’s invention has more than justified the lofty goals implied by its name. But with that scale has come a host of troubles, ones he could never have predicted when he was building a system for sharing data about physics experiments.

Some are simple enough. “Every time I hear that somebody has managed to acquire the [domain] name of their new enterprise for $50,000 (£38,500) instead of $500, I sigh, and feel that money’s not going to a good cause,” Berners-Lee tells me when we speak on the eve of the anniversary.



Berners-Lee demonstrating the world wide web to delegates at the Hypertext 1991 conference in San Antonio, Texas. Photograph: CERN

It is a minor regret, but one he has harboured for years, about the way he decided to “bootstrap” the web into something that could handle a lot of users very quickly: by building on the internet’s pre-existing naming service, the domain name system (DNS), he gave up the chance to build something better. “You wanted a name for your website, you’d go and ask [American computer scientist] Jon Postel, you know, back in the day, and he would give you a name.

“At the time that seemed like a good idea, but it relied on it being managed benevolently.” Today, that benevolent management is no longer something that can be assumed. “There are plenty of domain names to go around, but the way people have invested, in buying up domains that they think entrepreneurs or organisations will use – even trying to build AI that would guess what names people will want for their organisations, grabbing the domain name and then selling it to them for a ridiculous amount of money – that’s a breakage.”

It sounds minor, but the problems with DNS can stand in for a whole host of difficulties the web has faced as it has grown: a quick fix, built to let something scale up rapidly, turns out to provide perverse incentives once it is used by millions of people, and becomes so embedded that it is nearly impossible to change course.
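For readers unfamiliar with the service in question, DNS is the internet’s name-translation layer: it turns a memorable name into the numeric addresses that machines route traffic to. The sketch below is purely illustrative, using Node’s built-in resolver and example.com as a stand-in domain; it shows the lookup step, not anything from Berners-Lee’s own designs.

```ts
// A minimal sketch of the job DNS does: translating a memorable name
// into the numeric addresses that machines route traffic to.
// Uses Node's built-in resolver; "example.com" is a stand-in domain.
import { promises as dns } from "node:dns";

async function lookup(name: string): Promise<void> {
  const addresses = await dns.resolve4(name); // the name's IPv4 records
  console.log(`${name} ->`, addresses);
}

lookup("example.com").catch(console.error);
```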

But nearly impossible is not actually impossible. That is the thrust of the message Berners-Lee is aiming to spread. Every year, on the anniversary of his creation, he publishes an open letter on his vision for the future of the web. This year’s letter, given the importance of the anniversary, is broader in scope than most – and expresses a rare level of concern about the direction in which the web is moving.

“While the web has created opportunity, given marginalised groups a voice and made our daily lives easier,” he writes, “it has also created opportunity for scammers, given a voice to those who spread hatred and made all kinds of crime easier to commit.

“It’s understandable that many people feel afraid and unsure if the web is really a force for good. But given how much the web has changed in the past 30 years, it would be defeatist and unimaginative to assume that the web as we know it can’t be changed for the better in the next 30. If we give up on building a better web now, then the web will not have failed us. We will have failed the web.”

Berners-Lee breaks down the problems the web now faces into three categories. The first is what occupies most of the column inches in the press, but is the least intrinsic to the technology itself: “deliberate, malicious intent, such as state-sponsored hacking and attacks, criminal behaviour and online harassment”.

He believes this makes the system fragile. “It’s amazing how clever people can be, but when you build a new system it is very, very hard to imagine the ways in which it can be attacked.”

At the same time, while criminal intentions may be the scariest for many, they aren’t new to the web. They are “impossible to eradicate completely”, he writes, but can be controlled with “both laws and code to minimise this behaviour, just as we have always done offline”.

Berners-Lee in 1998. Photograph: Elise Amendola/AP

More concerning are the other two sources of dysfunction affecting the web. The second is when a system built on top of Berners-Lee’s creation introduces “perverse incentives” that encourage others to sacrifice users’ interests, “such as ad-based revenue models that commercially reward clickbait and the viral spread of misinformation”. And the third is more diffuse still: those systems and services that, thoughtfully and benevolently created, still result in negative outcomes, “such as the outraged and polarised tone and quality of online discourse”.

The problem is that it is hard to tell what the outcomes of a system you build are going to be. “Given there are more webpages than there are neurons in your brain, it’s a complicated thing. You build Reddit, and people on it behave in a particular way. For a while they all behave in a very positive, constructive way. And then you find a subreddit in which they behave in a nasty way.

“Or, for example, when you build a system such as Twitter, it becomes wildly, wildly effective. And when the ‘Arab Spring’ – I will never say that without the quotes – happens, you’re tempted to claim that Twitter is a great force for good because it allowed people to react against the oppressive regime.

“But then pretty soon people are contacting you about cyberbullying and saying their lives are miserable on Twitter because of the way that works. And then another few iterations of the Earth going around the sun, and you find that the oppressive regimes are using social networks in order to spy on and crack down on dissidents before the dissidents could even get round to organising.”

In conclusion, he says, “You can’t generalise. You can’t say, you know, social networks tend to be bad, tend to be nasty.”

For a creation entering its fourth decade, we still know remarkably little about how the web works. The technical details, sure: they are all laid out there, in that initial document presented to Cern, and in the many updates that Berners-Lee, and the World Wide Web Consortium he founded to succeed him, have approved.

But the social dynamics built on top of that technical underpinning are changing so rapidly and are so unstable that every year we need to reassess its legacy. “Are we now in a stable position where we can look back and decide this is the legacy of the web? Nooooope,” he says, with a chuckle. Which means we are running a never-ending race, trying to work out the effects of new platforms and systems even as competitors launch their eventual replacements.



Sir Tim Berners-Lee: how the web went from idea to reality

Berners-Lee’s solution is radical: a sort of refoundation of the web, creating a fresh set of rules, both legal and technical, to unite the world behind a process that can avoid some of the missteps of the past 30 years.

Calling it the “contract for the web”, he first suggested it last November at the Web Summit in Lisbon. “At pivotal moments,” he says, “generations before us have stepped up to work together for a better future. With the Universal Declaration of Human Rights, diverse groups of people have been able to agree on essential principles. With the Law of the Sea and the Outer Space Treaty, we have preserved new frontiers for the common good. Now too, as the web reshapes our world, we have a responsibility to make sure it is recognised as a human right and built for the public good.”

This is a push for legislation, yes. “Governments must translate laws and regulations for the digital age. They must ensure markets remain competitive, innovative and open. And they have a responsibility to protect people’s rights and freedoms online.”

But it is equally important, he says, for companies to join in and for the big tech firms to do more to ensure their pursuit of short-term profit is not at the expense of human rights, democracy, scientific fact or public safety. “This year, we’ve seen a number of tech employees stand up and demand better business practices. We need to encourage that spirit.”

But even if we could fix the web, might it be too late for that to fix the world? Berners-Lee’s invention has waxed and waned in its role in the wider digital society. For years, the web was the internet, with only a tiny portion of hardcore nerds doing anything online that wasn’t mediated through a webpage.

But in the past decade, that trend has reversed: the rise of the app economy fundamentally bypasses the web, and with it the principles of openness, interoperability and ease of access. In theory, any webpage should be accessible from any device with a web browser, be that an iPhone, a Windows PC or an internet-enabled fridge. The same is not true for content and services locked inside apps, where the distributor has absolute power over where and how users can interact with their platforms.

In fact, the day before I spoke to Berners-Lee, Facebook boss Mark Zuckerberg had published his own letter on the future of the internet, describing his goal of reshaping Facebook into a “privacy-focused social network”. It had a radically different set of aims: pulling users into a fundamentally closed network, where you can only get in touch with Facebook users from other Facebook products, and where even accessing core swathes of Facebook’s platform from a web browser would be deprioritised, in favour of the extreme privacy provided by universal end-to-end encryption.

For Berners-Lee, these shifts are concerning, but represent the strengths as well as the weaknesses of his creation. “The crucial thing is the URL. The crucial thing is that you can link to anything.

‘This is for Everyone’, seen during the opening ceremony of the London Olympics in 2012, a nod to Berners-Lee’s creation. Photograph: Martin Rickett/PA

“The web platform [the bundle of technologies that underpin the web] is always, at every moment, getting more and more powerful. The good news is that because the web platform is so powerful, a lot of the apps which are actually built are built using the web platform and then cranked out using the various frameworks which allow you to generate an app or something from it.” Many of the installable applications that run on smartphones and tablets work in this way, with the app acting as little more than a wrapper for a web page.

“So there’s web technology inside, but what we’re saying is if, from the user’s point of view, there’s no URL, then we’ve lost.”
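The point about URLs is easier to see when one is taken apart: every component is something any browser on any device can act on, without the publisher’s permission. A small illustrative sketch using the standard WHATWG URL class, available in both browsers and Node (the address itself is invented):

```ts
// Anatomy of a URL via the standard WHATWG URL class. Each part is
// something any client on the open web can interpret on its own.
const link = new URL("https://www.example.org/articles/web-at-30?page=2#history");

console.log(link.protocol); // "https:" - how to fetch it
console.log(link.hostname); // "www.example.org" - who serves it (resolved via DNS)
console.log(link.pathname); // "/articles/web-at-30" - which document
console.log(link.search);   // "?page=2" - parameters
console.log(link.hash);     // "#history" - a location within the document
```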

In some cases, that battle really has been lost. Apple runs an entire media operation inside its app store that can’t be read in normal browsers, and has a news app that spits out links that do not open if Apple News has been uninstalled.

But in many more, the same viral mechanics that allow platforms to grow to a scale that allow them to consider breaking from the web ultimately keep them tied to the openness that the platform embodies. Facebook posts still have permanent links buried in the system, as do tweets and Instagrams. Even the hot new thing, viral video app TikTok, lets users send URLs to each other: how else to encourage new users to hop on board?

It may be too glib to say, as the early Netscape executive Ram Shriram once did, that “open always wins out” – tech is littered with examples where a closed technology was the ultimate victor – but the web’s greatest strength over the past 30 years has always been the ability of anyone to build anything on top of it, without needing permission from Berners-Lee or anyone else.

But for that freedom to stick around for another 30 years – long enough to get the 50% of the world that isn’t online connected, long enough to see the next generation of startups grow to maturity – it requires others to join Berners-Lee in the fight. “The web is for everyone,” he says, “and collectively we hold the power to change it. It won’t be easy. But if we dream a little and work a lot, we can get the web we want.”




Facebook lets advertisers target users based on sensitive interests



Facebook allows advertisers to target users it thinks are interested in subjects such as homosexuality, Islam or liberalism, despite religion, sexuality and political beliefs explicitly being marked out as sensitive information under new data protection laws.

The social network gathers information about users based on their actions on Facebook and on the wider web, and uses that data to predict their interests. These can be mundane – football, Manhattan or dogs, for instance – or more esoteric.

A Guardian investigation in conjunction with the Danish Broadcasting Corporation found that Facebook is able to infer extremely personal information about users, which it allows advertisers to use for targeting purposes. Among the interests found in users’ profiles were communism, social democrats, Hinduism and Christianity.

The EU’s general data protection regulation (GDPR), which comes into effect on 25 May, explicitly labels such categories of information as so sensitive, with such a risk of human rights breaches, that it mandates special conditions around how they can be collected and processed. Among those categories are information about a person’s race, ethnic origin, politics, religion, sex life and sexual orientation.

The Information Commissioner’s Office says: “This type of data could create more significant risks to a person’s fundamental rights and freedoms, for example, by putting them at risk of unlawful discrimination.”

Organisations must cite one of 10 special dispensations to process such information, such as “preventive or occupational medicine”, “to protect the vital interests of the data subject”, or “the data subject has given explicit consent to the processing of those personal data for one or more specified purposes”.

Facebook already applies those special categories elsewhere on the site. As part of its GDPR-focused updates, the company asked every user to confirm whether or not “political, religious, and relationship information” they had entered on the site should continue to be stored or displayed. But while it offered those controls for information that users had explicitly given it, it gathered no such consent for information it had inferred about users.

The data means an advertiser can target messages at, for instance, people in the UK who are interested in homosexuality and Hinduism – about 68,000 people, according to the company’s advertising tools.

Facebook does demonstrate some understanding that the information is sensitive and prone to misuse. The company provides advertisers with the ability to exclude users based on their interests, but not for sensitive interests. An advertiser can advertise to people interested in Islam, for instance, but cannot advertise to everyone except those interested in Islam.
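That asymmetry can be pictured as a one-way rule: a sensitive interest passes in an “include” list but is rejected in an “exclude” list. The sketch below is hypothetical pseudologic with invented names, not Facebook’s actual code or API; it only mirrors the behaviour the investigation describes.

```ts
// Hypothetical sketch of the asymmetry described above: sensitive
// interests may appear in an "include" audience but not an "exclude"
// one. All names and structures here are invented for illustration.
const SENSITIVE_INTERESTS = new Set(["Islam", "Homosexuality", "Communism"]);

interface Audience {
  include: string[]; // users matching any of these interests are targeted
  exclude: string[]; // users matching any of these are filtered out
}

function rejectedExclusions(a: Audience): string[] {
  // Only the exclude list is checked; sensitive includes pass untouched.
  return a.exclude.filter((interest) => SENSITIVE_INTERESTS.has(interest));
}

const rejected = rejectedExclusions({ include: ["Islam"], exclude: ["Islam"] });
console.log(rejected); // ["Islam"] - excluding a sensitive interest is blocked
```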

The company requires advertisers to agree to a set of policies that, among other things, bar them from “using targeting options to discriminate against, harass, provoke or disparage users, or to engage in predatory advertising practices.”

In a statement, Facebook said classifying a user’s interests was not the same as classifying their personal traits. “Like other internet companies, Facebook shows ads based on topics we think people might be interested in, but without using sensitive personal data. This means that someone could have an ad interest listed as gay pride because they have liked a Pride-associated page or clicked a Pride ad, but it does not reflect any personal characteristics such as gender or sexuality.”

The company also said it provided some controls to users on its ad preferences screen. “People are able to manage their ad preferences tool, which clearly explains how advertising works on Facebook and provides a way to tell us if you want to see ads based on specific interests or not. When interests are removed, we show people the list of removed interests so that they have a record they can access, but these interests are no longer used for ads.”

It added: “Our advertising complies with relevant EU law and, like other companies, we are preparing for the GDPR to ensure we are compliant when it comes into force.”

The findings are reminiscent of Facebook’s previous attempts to skirt the line between profiling users and profiling their interests. In 2016 it was revealed that the company had created a tool for “racial affinity targeting”.

At the time, Facebook repeatedly argued that the tool “is based on affinity, not ethnicity”. Discussing a person who was in the African American affinity group, for instance, the company said: “They like African American content. But we cannot and do not say to advertisers that they are ethnically black.”

Almost a year later, after it was revealed that advertisers could use the ethnic affinity tools to unlawfully discriminate against black Facebook users in housing adverts, Facebook agreed to limit how those tools could be used.



‘We’re waiting for answers’: Facebook, Brexit and 40 questions


Mike Schroepfer, Facebook’s chief technology officer, was the second executive Facebook offered up to answer questions from parliament’s select committee for Digital, Culture, Media and Sport (DCMS).

He took his place in the hot seat in the wake of the first attendee, Simon Milner, Facebook’s (now ex-) head of policy for Europe, whose assurances that Cambridge Analytica had not used Facebook data came back to haunt the company in the furore that followed the Observer and New York Times revelations from Christopher Wylie.

Schroepfer is Facebook’s nerd-in-chief. He was the tech guy sent to answer a series of questions from MPs about how his platform had facilitated what appeared to be a wholesale assault on Britain’s democracy, and though there was much he couldn’t answer, when he was asked about spending by Russian entities directed at British voters before the referendum, he spoke confidently: “We did look several times at the connections between the IRA [the Kremlin-linked Internet Research Agency] … and the EU referendum and we found $1 of spend. We found almost nothing.”

But new evidence released by the United States Congress suggests adverts were targeted at UK Facebook users, and paid for in roubles, in the months preceding the short 10-week period “regulated” by the Electoral Commission, when the long campaigns were already under way.

This is the latest episode in a series of miscommunications between the company and British legislators, which has come to a head in the week the Electoral Commission finally published the findings of its investigation into the Leave.EU campaign.

Damian Collins, the chair of the DCMS committee, said: “We asked them to look for evidence of Russian influence and they came back and told us something we now know appears misleading. And we’re still waiting for answers to 40 questions that Mike Schroepfer was unable to answer, including if they have any record of any dark ads.

“It could be that these adverts are just the tip of the iceberg. It’s just so hard getting any sort of information out of them, and then not knowing if that information is complete.”



Leave.EU supporters celebrate the Leave vote in Sunderland after polling stations closed in the Brexit referendum. Photograph: Toby Melville/Reuters

Preliminary research undertaken by Twitter user Brexitshambles suggests anti-immigrant adverts were targeted at Facebook users in the UK and the US.

One – headlined “You’re not the only one to despise immigration”, which cost 4,884 roubles (£58) and received 4,055 views – was placed in January 2016. Another, which accused immigrants of stealing jobs, cost 5,514 roubles and received 14,396 impressions. Organic reach can mean such adverts are seen by a wider audience.

Facebook says that it only looked for adverts shown during the officially regulated campaign period. A spokesperson said: “The release of the set of IRA adverts confirms the position we shared with the Electoral Commission and DCMS committee. We did not find evidence of any significant, coordinated activity by the IRA operatives directed towards the Brexit referendum.

“This is supported by the release of this data set which shows a significant amount of activity by the IRA with only a handful of their ads listing the UK as a possible audience.”

Collins said that the committee was becoming increasingly frustrated by Facebook’s reluctance to answer questions and by founder Mark Zuckerberg’s ongoing refusal to come to the UK to testify.

Milner told the committee in February that Cambridge Analytica had no Facebook data and could not have got data from Facebook.

The news reinforces MPs’ frustrations with a system that many of them were last week describing as “broken”. On Friday, 15 months after publication of the first Observer article that triggered the Electoral Commission’s investigation into Leave.EU, the commission found the campaign – funded by Arron Banks and endorsed by Nigel Farage – guilty of multiple breaches of electoral law and referred the “responsible person” – its chief executive, Liz Bilney – to the police.

Banks described the commission’s report as a “politically motivated attack on Brexit”.

Leading academics and MPs called the delay in referring the matter to the police “catastrophic”, with others saying British democracy had failed. Liam Byrne, Labour’s shadow digital minister, described the current situation as “akin to the situation with rotten boroughs” in the 19th century. “It’s at that level. What we’re seeing is a wholesale failure of the entire system. We have 20th-century bodies fighting a 21st-century challenge to our democracy. It’s totally lamentable.”

Stephen Kinnock, Labour MP for Aberavon, said it was unacceptable that the Electoral Commission had still not referred the evidence about Vote Leave from Christopher Wylie and Shahmir Sanni – published in the Observer and submitted to the Electoral Commission – to the police. He said: “What they seem to have done, and are continuing to do, is to kick this into the long grass. There seems to be political pressure to kick this down the road until Britain has exited the EU.”

He accused the commission of ignoring what he considered key evidence, including about Cambridge Analytica. The commission had found Leave.EU guilty of not declaring work done by its referendum strategist, Goddard Gunster, but said it had found no evidence of work done by Cambridge Analytica.

“The whole thing stinks,” Kinnock said. “I wrote to the commission with evidence that the value of work carried out by Cambridge Analytica was around £800,000. The glib way it dismissed the multiple pieces of evidence about the company was extraordinary. I just think it is absolutely not fit for purpose.”

Gavin Millar QC, a leading expert in electoral law at Matrix Chambers, said: “Our entire democratic system is vulnerable and wide open to attack. If we allow this kind of money into campaigning on a national basis – and the referendum was the paradigm for this – you have to have an organisation with teeth to police it.”

Damian Tambini, director of research in the department of media and communications at the London School of Economics, described the whole system as broken and said there was not a single investigatory body that seemed capable of uncovering the truth. “The DCMS Select Committee has found itself in this extraordinary position of, in effect, leading this investigation because it at least has the power to compel witnesses and evidence – something the Electoral Commission can’t do. It’s the classic British solution of muddling through.

“The big picture here is it’s possible for an individual or group with lots of money and some expertise to change the course of history and buy an election outcome. And with our regulatory system, we’ll never know if it’s happened.”

This article was amended on 13 May 2018 to clarify that a remark from Damian Tambini referred to the DCMS Select Committee.





Cambridge Analytica closing after Facebook data harvesting scandal


Cambridge Analytica, the data firm at the centre of this year’s Facebook privacy row, is closing and starting insolvency proceedings.

The company has been plagued by scandal since the Observer reported that the personal data of about 50 million Americans and at least a million Britons had been harvested from Facebook and improperly shared with Cambridge Analytica.

Cambridge Analytica denies any wrongdoing, but says that the negative media coverage has left it with no clients and mounting legal fees.


What is the Cambridge Analytica scandal? – video explainer

“Despite Cambridge Analytica’s unwavering confidence that its employees have acted ethically and lawfully, the siege of media coverage has driven away virtually all of the Company’s customers and suppliers,” said the company in a statement, which also revealed that SCL Elections Ltd, the UK entity affiliated with Cambridge Analytica, would also close and start insolvency proceedings.

“As a result, it has been determined that it is no longer viable to continue operating the business, which left Cambridge Analytica with no realistic alternative to placing the company into administration.”

As first reported by the Wall Street Journal, the company has started insolvency proceedings in the US and UK. At Cambridge Analytica’s New York offices on an upmarket block on Manhattan’s Fifth Avenue, it appeared all the staff had already left the premises.

The Guardian rang the doorbell to the company’s seventh-floor office and was met by a woman who would not give her name but said she did not work for the company.



The Cambridge Analytica office in New York. Photograph: Oliver Laughland for the Guardian

Asked if anyone from Cambridge Analytica or SCL was still inside, she said: “They used to be. But they all left today.”

The scandal centres around data collected from Facebook users via a personality app developed by the Cambridge University researcher Aleksandr Kogan. The data was collected via Facebook’s permissive “Graph API”, the interface through which third parties could interact with Facebook’s platform. This allowed Kogan to pull data about users and their friends, including likes, activities, check-ins, location, photos, religion, politics and relationship details. He passed the data to Cambridge Analytica, in breach of Facebook’s platform policies.
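To make the mechanism concrete, here is a schematic reconstruction of the kind of friends-data request the permissive early Graph API allowed. The endpoint shape follows Graph API conventions, but the version, fields and permissions shown are assumptions for illustration, and Facebook shut off this class of friend-data access around 2014-15, so nothing like it works today.

```ts
// Schematic sketch of the kind of friends-data request the early,
// permissive Graph API allowed a logged-in app to make. The endpoint
// shape follows Graph API conventions, but the version and fields are
// illustrative assumptions; this class of access no longer exists.
// Requires a runtime with a global fetch (Node 18+ or a browser).
async function fetchFriendsData(accessToken: string): Promise<unknown> {
  const url =
    "https://graph.facebook.com/v1.0/me/friends" +
    "?fields=likes,religion,location" + // data about friends, not just the user
    `&access_token=${encodeURIComponent(accessToken)}`;
  const response = await fetch(url);
  return response.json(); // one app, one consenting user, many friends' profiles
}
```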

Christopher Wylie, the original Cambridge Analytica whistleblower, told the Observer that the data Kogan obtained was used to influence the outcome of the US presidential election and Brexit. According to Wylie the data was fed into software that profiles voters and tries to target them with personalised political advertisements. Cambridge Analytica insists it never incorporated the Kogan data.

Kogan told BBC Radio 4’s Today programme he was being used as a scapegoat.

He said: “My view is that I’m being basically used as a scapegoat by both Facebook and Cambridge Analytica. Honestly, we thought we were acting perfectly appropriately. We thought we were doing something that was really normal.”

Cambridge Analytica said it had been “vilified for activities that are not only legal, but also widely accepted as a standard component of online advertising in both the political and commercial arenas”.

The CEO of Cambridge Analytica, Alexander Nix, was suspended in late March after Britain’s Channel 4 News broadcast secret recordings in which he claimed credit for the election of Donald Trump.

He told an undercover reporter: “We did all the research, all the data, all the analytics, all the targeting. We ran all the digital campaign, the television campaign and our data informed all the strategy.”

He also revealed that the company used a self-destructing email server to erase its digital history.

“No one knows we have it, and secondly we set our … emails with a self-destruct timer … So you send them and after they’ve been read, two hours later, they disappear. There’s no evidence, there’s no paper trail, there’s nothing.”

Although Cambridge Analytica might be dead, the team behind it has already set up a mysterious new company called Emerdata. According to Companies House data, Alexander Nix is listed as a director along with other executives from SCL Group. The daughters of the billionaire Robert Mercer are also listed as directors.

Damian Collins, chair of the British parliamentary committee looking into data breaches, expressed concern that Cambridge Analytica’s closure might hinder the investigation into the firm.

“Cambridge Analytica and SCL group cannot be allowed to delete their data history by closing. The investigations into their work are vital,” he wrote on Twitter.

The episode has shone a spotlight on the way that Facebook data is collected, shared and used to target people with advertising.

The social network initially scrambled to blame rogue third parties for “platform abuse” – “the entire company is outraged we were deceived,” the company said – before it unveiled sweeping changes to its privacy settings and data sharing practices.

“This was a breach of trust between Kogan, Cambridge Analytica and Facebook,” said Mark Zuckerberg in a Facebook post. “But it was also a breach of trust between Facebook and the people who share their data with us and expect us to protect it. We need to fix that.”

Facebook first discovered that Kogan had shared data with Cambridge Analytica when a Guardian journalist contacted the company about it at the end of 2015. It asked Cambridge Analytica to delete the data and revoked Kogan’s apps’ API access. However, Facebook relied on Cambridge Analytica’s word that it had done so.

After it was revealed that the data hadn’t been deleted, Facebook revoked Cambridge Analytica’s access to its platform and launched an investigation of “thousands” of apps that had similar access and made several changes to restrict how much third-party developers can access from people’s profiles.

The company also pledged to verify the identities of administrators of popular Facebook pages and advertisers buying political “issue” ads on “debated topics of national legislative importance” such as education, immigration and abortion.





Why Silicon Valley can’t fix itself


Big Tech is sorry. After decades of rarely apologising for anything, Silicon Valley suddenly seems to be apologising for everything. They are sorry about the trolls. They are sorry about the bots. They are sorry about the fake news and the Russians, and the cartoons that are terrifying your kids on YouTube. But they are especially sorry about our brains.

Sean Parker, the former president of Facebook – who was played by Justin Timberlake in The Social Network – has publicly lamented the “unintended consequences” of the platform he helped create: “God only knows what it’s doing to our children’s brains.” Justin Rosenstein, an engineer who helped build Facebook’s “like” button and Gchat, regrets having contributed to technology that he now considers psychologically damaging, too. “Everyone is distracted,” Rosenstein says. “All of the time.”

Ever since the internet became widely used by the public in the 1990s, users have heard warnings that it is bad for us. In the early years, many commentators described cyberspace as a parallel universe that could swallow enthusiasts whole. The media fretted about kids talking to strangers and finding porn. A prominent 1998 study from Carnegie Mellon University claimed that spending time online made you lonely, depressed and antisocial.

In the mid-2000s, as the internet moved on to mobile devices, physical and virtual life began to merge. Bullish pundits celebrated the “cognitive surplus” unlocked by crowdsourcing and the tech-savvy campaigns of Barack Obama, the “internet president”. But, alongside these optimistic voices, darker warnings persisted. Nicholas Carr’s The Shallows (2010) argued that search engines were making people stupid, while Eli Pariser’s The Filter Bubble (2011) claimed algorithms made us insular by showing us only what we wanted to see. In Alone Together (2011) and Reclaiming Conversation (2015), Sherry Turkle warned that constant connectivity was making meaningful interaction impossible.

Still, inside the industry, techno-utopianism prevailed. Silicon Valley seemed to assume that the tools they were building were always forces for good – and that anyone who questioned them was a crank or a luddite. In the face of an anti-tech backlash that has surged since the 2016 election, however, this faith appears to be faltering. Prominent people in the industry are beginning to acknowledge that their products may have harmful effects.

Internet anxiety isn’t new. But never before have so many notable figures within the industry seemed so anxious about the world they have made. Parker, Rosenstein and the other insiders now talking about the harms of smartphones and social media belong to an informal yet influential current of tech critics emerging within Silicon Valley. You could call them the “tech humanists”. Amid rising public concern about the power of the industry, they argue that the primary problem with its products is that they threaten our health and our humanity.

It is clear that these products are designed to be maximally addictive, in order to harvest as much of our attention as they can. Tech humanists say this business model is both unhealthy and inhumane – that it damages our psychological well-being and conditions us to behave in ways that diminish our humanity. The main solution that they propose is better design. By redesigning technology to be less addictive and less manipulative, they believe we can make it healthier – we can realign technology with our humanity and build products that don’t “hijack” our minds.

The hub of the new tech humanism is the Center for Humane Technology in San Francisco. Founded earlier this year, the nonprofit has assembled an impressive roster of advisers, including investor Roger McNamee, Lyft president John Zimmer, and Rosenstein. But its most prominent spokesman is executive director Tristan Harris, a former “design ethicist” at Google who has been hailed by the Atlantic magazine as “the closest thing Silicon Valley has to a conscience”. Harris has spent years trying to persuade the industry of the dangers of tech addiction. In February, Pierre Omidyar, the billionaire founder of eBay, launched a related initiative: the Tech and Society Solutions Lab, which aims to “maximise the tech industry’s contributions to a healthy society”.

As suspicion of Silicon Valley grows, the tech humanists are making a bid to become tech’s loyal opposition. They are using their insider credentials to promote a particular diagnosis of where tech went wrong and of how to get it back on track. For this, they have been getting a lot of attention. As the backlash against tech has grown, so too has the appeal of techies repenting for their sins. The Center for Humane Technology has been profiled – and praised – by the New York Times, the Atlantic, Wired and others.

But tech humanism’s influence cannot be measured solely by the positive media coverage it has received. The real reason tech humanism matters is because some of the most powerful people in the industry are starting to speak its idiom. Snap CEO Evan Spiegel has warned about social media’s role in encouraging “mindless scrambles for friends or unworthy distractions”, and Twitter boss Jack Dorsey recently claimed he wants to improve the platform’s “conversational health”.



Tristan Harris, founder of the Center for Humane Technology. Photograph: Robert Gumpert for the Guardian

Even Mark Zuckerberg, famous for encouraging his engineers to “move fast and break things”, seems to be taking a tech humanist turn. In January, he announced that Facebook had a new priority: maximising “time well spent” on the platform, rather than total time spent. By “time well spent”, Zuckerberg means time spent interacting with “friends” rather than businesses, brands or media sources. He said the News Feed algorithm was already prioritising these “more meaningful” activities.

Zuckerberg’s choice of words is significant: Time Well Spent is the name of the advocacy group that Harris led before co-founding the Center for Humane Technology. In April, Zuckerberg brought the phrase to Capitol Hill. When a photographer snapped a picture of the notes Zuckerberg used while testifying before the Senate, they included a discussion of Facebook’s new emphasis on “time well spent”, under the heading “wellbeing”.

This new concern for “wellbeing” may strike some observers as a welcome development. After years of ignoring their critics, industry leaders are finally acknowledging that problems exist. Tech humanists deserve credit for drawing attention to one of those problems – the manipulative design decisions made by Silicon Valley.

But these decisions are only symptoms of a larger issue: the fact that the digital infrastructures that increasingly shape our personal, social and civic lives are owned and controlled by a few billionaires. Because it ignores the question of power, the tech-humanist diagnosis is incomplete – and could even help the industry evade meaningful reform. Taken up by leaders such as Zuckerberg, tech humanism is likely to result in only superficial changes. These changes may soothe some of the popular anger directed towards the tech industry, but they will not address the origin of that anger. If anything, they will make Silicon Valley even more powerful.


The Center for Humane Technology argues that technology must be “aligned” with humanity – and that the best way to accomplish this is through better design. Their website features a section entitled The Way Forward. A familiar evolutionary image shows the silhouettes of several simians, rising from their crouches to become a man, who then turns back to contemplate his history.

“In the future, we will look back at today as a turning point towards humane design,” the header reads. To the litany of problems caused by “technology that extracts attention and erodes society”, the text offers a single answer: “humane design is the solution”. Drawing on the rhetoric of the “design thinking” philosophy that has long suffused Silicon Valley, the website explains that humane design “starts by understanding our most vulnerable human instincts so we can design compassionately”.

There is a good reason why the language of tech humanism is penetrating the upper echelons of the tech industry so easily: this language is not foreign to Silicon Valley. On the contrary, “humanising” technology has long been its central ambition and the source of its power. It was precisely by developing a “humanised” form of computing that entrepreneurs such as Steve Jobs brought computing into millions of users’ everyday lives. Their success turned the Bay Area tech industry into a global powerhouse – and produced the digitised world that today’s tech humanists now lament.

The story begins in the 1960s, when Silicon Valley was still a handful of electronics firms clustered among fruit orchards. Computers came in the form of mainframes then. These machines were big, expensive and difficult to use. Only corporations, universities and government agencies could afford them, and they were reserved for specialised tasks, such as calculating missile trajectories or credit scores.

Computing was industrial, in other words, not personal, and Silicon Valley remained dependent on a small number of big institutional clients. The practical danger that this dependency posed became clear in the early 1960s, when the US Department of Defense, by far the single biggest buyer of digital components, began cutting back on its purchases. But the fall in military procurement wasn’t the only mid-century crisis around computing.

Computers also had an image problem. The inaccessibility of mainframes made them easy to demonise. In these whirring hulks of digital machinery, many observers saw something inhuman, even evil. To antiwar activists, computers were weapons of the war machine that was killing thousands in Vietnam. To highbrow commentators such as the social critic Lewis Mumford, computers were instruments of a creeping technocracy that threatened to extinguish personal freedom.

But during the course of the 1960s and 70s, a series of experiments in northern California helped solve both problems. These experiments yielded breakthrough innovations like the graphical user interface, the mouse and the microprocessor. Computers became smaller, more usable and more interactive, reducing Silicon Valley’s reliance on a few large customers while giving digital technology a friendlier face.

Apple founder Steve Jobs ‘got the notion of tools for human use’. Photograph: Ted Thai/Polaris/eyevine

The pioneers who led this transformation believed they were making computing more human. They drew deeply from the counterculture of the period, and its fixation on developing “human” modes of living. They wanted their machines to be “extensions of man”, in the words of Marshall McLuhan, and to unlock “human potential” rather than repress it. At the centre of this ecosystem of hobbyists, hackers, hippies and professional engineers was Stewart Brand, famed entrepreneur of the counterculture and founder of the Whole Earth Catalog. In a famous 1972 article for Rolling Stone, Brand called for a new model of computing that “served human interest, not machine”.

Brand’s disciples answered this call by developing the technical innovations that transformed computers into the form we recognise today. They also promoted a new way of thinking about computers – not as impersonal slabs of machinery, but as tools for unleashing “human potential”.

No single figure contributed more to this transformation of computing than Steve Jobs, who was a fan of Brand and a reader of the Whole Earth Catalog. Jobs fulfilled Brand’s vision on a global scale, launching the mass personal computing era with the Macintosh in the mid-80s, and the mass smartphone era with the iPhone two decades later. Brand later acknowledged that Jobs embodied the Whole Earth Catalog ethos. “He got the notion of tools for human use,” Brand told Jobs’ biographer, Walter Isaacson.

Building those “tools for human use” turned out to be great for business. The impulse to humanise computing enabled Silicon Valley to enter every crevice of our lives. From phones to tablets to laptops, we are surrounded by devices that have fulfilled the demands of the counterculture for digital connectivity, interactivity and self-expression. Your iPhone responds to the slightest touch; you can look at photos of anyone you have ever known, and broadcast anything you want to all of them, at any moment.

In short, the effort to humanise computing produced the very situation that the tech humanists now consider dehumanising: a wilderness of screens where digital devices chase every last instant of our attention. To guide us out of that wilderness, tech humanists say we need more humanising. They believe we can use better design to make technology serve human nature rather than exploit and corrupt it. But this idea is drawn from the same tradition that created the world that tech humanists believe is distracting and damaging us.


Tech humanists say they want to align humanity and technology. But this project is based on a deep misunderstanding of the relationship between humanity and technology: namely, the fantasy that these two entities could ever exist in separation.

It is difficult to imagine human beings without technology. The story of our species began when we began to make tools. Homo habilis, the first members of our genus, left sharpened stones scattered across Africa. Their successors hit rocks against each other to make sparks, and thus fire. With fire you could cook meat and clear land for planting; with ash you could fertilise the soil; with smoke you could make signals. In flickering light, our ancestors painted animals on cave walls. The ancient tragedian Aeschylus recalled this era mythically: Prometheus, in stealing fire from the gods, “founded all the arts of men.”

All of which is to say: humanity and technology are not only entangled, they constantly change together. This is not just a metaphor. Recent research suggests that the human hand evolved to manipulate the stone tools that our ancestors used. The evolutionary scientist Mary Marzke shows that we developed “a unique pattern of muscle architecture and joint surface form and functions” for this purpose.

The ways our bodies and brains change in conjunction with the tools we make have long inspired anxieties that “we” are losing some essential qualities. For millennia, people have feared that new media were eroding the very powers that they promised to extend. In The Phaedrus, Socrates warned that writing on wax tablets would make people forgetful. If you could jot something down, you wouldn’t have to remember it. In the late middle ages, as a culture of copying manuscripts gave way to printed books, teachers warned that pupils would become careless, since they no longer had to transcribe what their teachers said.

Yet as we lose certain capacities, we gain new ones. People who used to navigate the seas by following stars can now program computers to steer container ships from afar. Your grandmother probably has better handwriting than you do – but you probably type faster.

The nature of human nature is that it changes. It cannot, therefore, serve as a stable basis for evaluating the impact of technology. Yet the assumption that it doesn’t change serves a useful purpose. Treating human nature as something static, pure and essential elevates the speaker into a position of power. Claiming to tell us who we are, they tell us how we should be.

Intentionally or not, this is what tech humanists are doing when they talk about technology as threatening human nature – as if human nature had stayed the same from the paleolithic era until the rollout of the iPhone. Holding humanity and technology separate clears the way for a small group of humans to determine the proper alignment between them. And while the tech humanists may believe they are acting in the common good, they themselves acknowledge they are doing so from above, as elites. “We have a moral responsibility to steer people’s thoughts ethically,” Tristan Harris has declared.

Harris and his fellow tech humanists also frequently invoke the language of public health. The Center for Humane Technology’s Roger McNamee has gone so far as to call public health “the root of the whole thing”, and Harris has compared using Snapchat to smoking cigarettes. The public-health framing casts the tech humanists in a paternalistic role. Resolving a public health crisis requires public health expertise. It also precludes the possibility of democratic debate. You don’t put the question of how to treat a disease up for a vote – you call a doctor.

This paternalism produces a central irony of tech humanism: the language that they use to describe users is often dehumanising. “Facebook appeals to your lizard brain – primarily fear and anger,” says McNamee. Harris echoes this sentiment: “Imagine you had an input cable,” he has said. “You’re trying to jack it into a human being. Do you want to jack it into their reptilian brain, or do you want to jack it into their more reflective self?”

The Center for Humane Technology’s website offers tips on how to build a more reflective and less reptilian relationship to your smartphone: “going greyscale” by setting your screen to black-and-white, turning off app notifications and charging your device outside your bedroom. It has also announced two major initiatives: a national campaign to raise awareness about technology’s harmful effects on young people’s “digital health and well-being”; and a “Ledger of Harms” – a website that will compile information about the health effects of different technologies in order to guide engineers in building “healthier” products.

These initiatives may help some people reduce their smartphone use – a reasonable personal goal. But there are some humans who may not share this goal, and there need not be anything unhealthy about that. Many people rely on the internet for solace and solidarity, especially those who feel marginalised. The kid with autism may stare at his screen when surrounded by people, because it lets him tolerate being surrounded by people. For him, constant use of technology may not be destructive at all, but in fact life-saving.

Pathologising certain potentially beneficial behaviours as “sick” isn’t the only problem with the Center for Humane Technology’s proposals. They also remain confined to the personal level, aiming to redesign how the individual user interacts with technology rather than tackling the industry’s structural failures. Tech humanism fails to address the root cause of the tech backlash: the fact that a small handful of corporations own our digital lives and strip-mine them for profit. This is a fundamentally political and collective issue. But by framing the problem in terms of health and humanity, and the solution in terms of design, the tech humanists personalise and depoliticise it.

This may be why their approach is so appealing to the tech industry. There is no reason to doubt the good intentions of tech humanists, who may genuinely want to address the problems fuelling the tech backlash. But they are handing the firms that caused those problems a valuable weapon. Far from challenging Silicon Valley, tech humanism offers Silicon Valley a useful way to pacify public concerns without surrendering any of its enormous wealth and power. By channelling popular anger at Big Tech into concerns about health and humanity, tech humanism gives corporate giants such as Facebook a way to avoid real democratic control. In a moment of danger, it may even help them protect their profits.


One can easily imagine a version of Facebook that embraces the principles of tech humanism while remaining a profitable and powerful monopoly. In fact, these principles could make Facebook even more profitable and powerful, by opening up new business opportunities. That seems to be exactly what Facebook has planned.

When Zuckerberg announced that Facebook would prioritise “time well spent” over total time spent, it came a couple of weeks before the company released its 2017 Q4 earnings. These reported that total time spent on the platform had dropped by around 5%, or about 50m hours per day. But, Zuckerberg said, this was by design: in particular, it was a response to tweaks to the News Feed that prioritised “meaningful” interactions with “friends” over consuming “public content” such as video and news. This would ensure that “Facebook isn’t just fun, but also good for people’s well-being”.

Zuckerberg said he expected those changes would continue to decrease total time spent – but “the time you do spend on Facebook will be more valuable”. This may describe what users find valuable – but it also refers to what Facebook finds valuable. In a recent interview, he said: “Over the long term, even if time spent goes down, if people are spending more time on Facebook actually building relationships with people they care about, then that’s going to build a stronger community and build a stronger business, regardless of what Wall Street thinks about it in the near term.”

Sheryl Sandberg has also stressed that the shift will create “more monetisation opportunities”. How? Everyone knows data is the lifeblood of Facebook – but not all data is created equal. One of the most valuable sources of data to Facebook is used to inform a metric called “coefficient”. This measures the strength of a connection between two users – Zuckerberg once called it “an index for each relationship”. Facebook records every interaction you have with another user – from liking a friend’s post or viewing their profile, to sending them a message. These activities provide Facebook with a sense of how close you are to another person, and different activities are weighted differently. Messaging, for instance, is considered the strongest signal. It’s reasonable to assume that you’re closer to somebody you exchange messages with than somebody whose post you once liked.

Why is coefficient so valuable? Because Facebook uses it to create a Facebook they think you will like: it guides algorithmic decisions about what content you see and the order in which you see it. It also helps improve ad targeting, by showing you ads for things liked by friends with whom you often interact. Advertisers can target the closest friends of the users who already like a product, on the assumption that close friends tend to like the same things.
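The description implies a simple weighted sum over interaction types, with messaging weighted most heavily. The toy sketch below makes that shape concrete; the interaction types come from the paragraph above, but the numeric weights are invented for illustration and bear no relation to Facebook’s real model.

```ts
// Toy sketch of a "coefficient"-style closeness score, as described in
// the text: every interaction between two users contributes, and
// messaging is the strongest signal. The weights are invented.
type Interaction = "message" | "comment" | "like" | "profile_view";

const WEIGHTS: Record<Interaction, number> = {
  message: 5.0,      // strongest signal, per the article
  comment: 2.0,
  like: 1.0,
  profile_view: 0.5,
};

function coefficient(interactions: Interaction[]): number {
  return interactions.reduce((score, i) => score + WEIGHTS[i], 0);
}

// Two users who message often score far higher than two who merely like.
console.log(coefficient(["message", "message", "like"])); // 11
console.log(coefficient(["like", "profile_view"]));       // 1.5
```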

Facebook CEO Mark Zuckerberg testifies before a joint hearing of the US Senate Commerce, Science and Transportation Committee earlier this year. Photograph: Jim Watson/AFP/Getty Images

So when Zuckerberg talks about wanting to increase “meaningful” interactions and building relationships, he is not succumbing to pressure to take better care of his users. Rather, emphasising time well spent means creating a Facebook that prioritises data-rich personal interactions that Facebook can use to make a more engaging platform. Rather than spending a lot of time doing things that Facebook doesn’t find valuable – such as watching viral videos – you can spend a bit less time, but spend it doing things that Facebook does find valuable.

In other words, “time well spent” means Facebook can monetise more efficiently. It can prioritise the intensity of data extraction over its extensiveness. This is a wise business move, disguised as a concession to critics. Shifting to this model not only sidesteps concerns about tech addiction – it also acknowledges certain basic limits to Facebook’s current growth model. There are only so many hours in the day. Facebook can’t keep prioritising total time spent – it has to extract more value from less time.

In many ways, this process recalls an earlier stage in the evolution of capitalism. In the 19th century, factory owners in England discovered they could only make so much money by extending the length of the working day. At some point, workers would die of exhaustion, or they would revolt, or they would push parliament to pass laws that limited their working hours. So industrialists had to find ways to make the time of the worker more valuable – to extract more money from each moment rather than adding more moments. They did this by making industrial production more efficient: developing new technologies and techniques that squeezed more value out of the worker and stretched that value further than ever before.

A similar situation confronts Facebook today. It has to make the attention of the user more valuable – and the language and concepts of tech humanism can help it do so. So far, the approach seems to be working. Despite the reported drop in total time spent, Facebook recently announced first-quarter 2018 revenues of $11.97bn (£8.7bn), smashing Wall Street estimates by nearly $600m.


Today’s tech humanists come from a tradition with deep roots in Silicon Valley. Like their predecessors, they believe that technology and humanity are distinct, but can be harmonised. This belief guided the generations who built the “humanised” machines that became the basis for the industry’s enormous power. Today it may provide Silicon Valley with a way to protect that power from a growing public backlash – and even deepen it by uncovering new opportunities for profit-making.

Fortunately, there is another way of thinking about how to live with technology – one that is both truer to the history of our species and useful for building a more democratic future. This tradition does not address “humanity” in the abstract, but as distinct human beings, whose capacities are shaped by the tools they use. It sees us as hybrids of animal and machine – as “cyborgs”, to quote the biologist and philosopher of science Donna Haraway.

To say that we’re all cyborgs is not to say that all technologies are good for us, or that we should embrace every new invention. But it does suggest that living well with technology can’t be a matter of making technology more “human”. This goal isn’t just impossible – it’s also dangerous, because it puts us at the mercy of experts who tell us how to be human. It cedes control of our technological future to those who believe they know what’s best for us because they understand the essential truths about our species.

The cyborg way of thinking, by contrast, tells us that our species is essentially technological. We change as we change our tools, and our tools change us. But even though our continuous co-evolution with our machines is inevitable, the way it unfolds is not. Rather, it is determined by who owns and runs those machines. It is a question of power.

Today, that power is wielded by corporations, which own our technology and run it for profit. The various scandals that have stoked the tech backlash all share a single source. Surveillance, fake news and the miserable working conditions in Amazon’s warehouses are profitable. If they were not, they would not exist. They are symptoms of a profound democratic deficit inflicted by a system that prioritises the wealth of the few over the needs and desires of the many.

There is an alternative. If being technological is a feature of being human, then the power to shape how we live with technology should be a fundamental human right. The decisions that most affect our technological lives are far too important to be left to Mark Zuckerberg, rich investors or a handful of “humane designers”. They should be made by everyone, together.

Rather than trying to humanise technology, then, we should be trying to democratise it. We should be demanding that society as a whole gets to decide how we live with technology – rather than the small group of people who have captured society’s wealth.

What does this mean in practice? First, it requires limiting and eroding Silicon Valley’s power. Antitrust laws and tax policy offer useful ways to claw back the fortunes Big Tech has built on common resources. After all, Silicon Valley wouldn’t exist without billions of dollars of public funding, not to mention the vast quantities of information that we all provide for free. Facebook’s market capitalisation is $500bn with 2.2 billion users – do the math to estimate how much the time you spend on Facebook is worth. You could apply the same logic to Google. There is no escape: whether or not you have an account, both platforms track you around the internet.
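That back-of-the-envelope sum is easy to run. Using only the two figures quoted above – and remembering that this is a crude illustration, not a valuation method – each account implies roughly $227 of market value:

    # Rough, illustrative arithmetic from the figures in the text:
    # a $500bn market capitalisation spread across 2.2 billion users.
    market_cap_usd = 500e9
    users = 2.2e9

    print(round(market_cap_usd / users, 2))  # ~227.27 dollars per account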

In addition to taxing and shrinking tech firms, democratic governments should be making rules about how those firms are allowed to behave – rules that restrict how they can collect and use our personal data, such as the General Data Protection Regulation coming into effect in the European Union later this month. But more robust regulation of Silicon Valley isn’t enough. We also need to pry the ownership of our digital infrastructure away from private firms.

This means developing publicly and co-operatively owned alternatives that empower workers, users and citizens to determine how they are run. These democratic digital structures can focus on serving personal and social needs rather than piling up profits for investors. One inspiring example is municipal broadband: a successful experiment in Chattanooga, Tennessee, has shown that publicly owned internet service providers can supply better service at lower cost than private firms. Other models of digital democracy might include a worker-owned Uber, a user-owned Facebook or a socially owned “smart city” of the kind being developed in Barcelona. Alternatively, we might demand that tech firms pay for the privilege of extracting our data, so that we can collectively benefit from a resource we collectively create.

More experimentation is needed, but democracy should be our guiding principle. The stakes are high. Never before have so many people been thinking about the problems produced by the tech industry and how to solve them. The tech backlash is an enormous opportunity – and one that may not come again for a long time.

The old techno-utopianism is crumbling. What will replace it? Silicon Valley says it wants to make the world a better place. Fulfilling this promise may require a new kind of disruption.

Main illustration by Lee Martin/Guardian Design


Shares in Snapchat owner plummet as redesign hits results | Technology



Shares in Snapchat’s parent company have hit a record low after its results revealed the cost of a backlash against a redesign of the social messaging app.

Snap’s share price fell 22% to $10.96 (£8.05) in early trading as investors reacted to ongoing concerns over its struggle to compete with Facebook and its subsidiary Instagram.

Snapchat started its first major redesign late last year and by February more than 1.2 million users had signed a petition calling for it to reverse the “annoying” changes. It capped a bad start to 2018 after a tweet from Kylie Jenner asking her 24 million followers “does anyone else not open Snapchat anymore?” proved a stock market kiss of death, wiping $1.3bn off the company’s value.

On Tuesday, the user backlash against Snapchat, known for its disappearing messages and photograph filters, showed up in the company’s first-quarter results, which missed targets: the service added just 4 million new users, little more than half the number forecast.

Snapchat also issued a growth warning, saying the fallout from the redesign would mean a substantial slowdown in revenue in the current quarter. Snapchat’s 27-year-old founder, Evan Spiegel, attempted to brush off the disaster, saying the redesign was necessary to broaden the app’s appeal to users and advertisers. Even taking Wednesday’s share slump into account, Snap is worth $17.6bn (£13bn).

However, analysts were not impressed. “It is not clear to us why the app redesign – the first product Snap ever tested at scale – was rolled out broadly, and we are even less clear on why it hasn’t been more aggressively rolled back already,” said Lloyd Walmsley, a Deutsche Bank analyst.

Snapchat reported 191 million daily active users in the first quarter, missing expectations of 194.15 million. Revenue came in at $230.7m, an increase of more than 50% year on year but below the $244.5m forecast.

Snapchat, which launched in 2011, has proved hugely popular with younger users, many of whom have defected from older social media platforms such as Facebook, and in the UK it is forecast to make more in ad revenue than Twitter next year.

However, Snapchat’s biggest threat is Facebook and its Instagram service, with Mark Zuckerberg’s social platforms frequently aping Snapchat’s innovations.

Analysts believe that for Snapchat to succeed against Facebook and Instagram it must appeal to a much wider range of users beyond its core youth fanbase.

“While the user base continues to be dominated by younger age groups, Snapchat’s full revenue potential will remain somewhat restricted,” said Bill Fisher, an analyst at eMarketer. “And with the financial muscle of Facebook behind Snapchat’s close competitor, Instagram, the company is going to have to work ever harder for those ad dollars.”




Facebook fires engineer accused of stalking, possibly by abusing data access | Technology



Facebook has fired a security engineer after he was accused of stalking women online possibly by abusing his “privileged access” to data, raising renewed concerns about users’ privacy at the social network.

The controversy, which came to light after the employee allegedly called himself a “professional stalker” in a message to a woman he met on Tinder, is particularly bad timing for Facebook, which announced this week that it is launching an online dating feature while it continues to battle a major privacy scandal in the US and the UK.

Facebook confirmed to the Guardian that the employee was terminated, but it did not provide any details on his position or the data he may have accessed, saying in a statement it was “investigating this as a matter of urgency”.

The allegations emerged on Sunday in tweets from Jackie Stokes, founder of the cybersecurity consultancy Spyglass Security, who said she learned that “a security engineer currently employed at Facebook is likely using privileged access to stalk women online”.

Jackie Stokes 🙋🏽 (@find_evil), 30 April 2018: “I really, really hope I’m wrong about this.” pic.twitter.com/NDkOptx8Hv

Stokes, who did not immediately respond to a request for comment, posted a screenshot of text messages in which the man said that he was “more than” a security analyst, writing: “I also try to figure out who hackers are in real life. So professional stalker … so out of habit have to say that you are hard to find lol”.

The woman reportedly replied: “Is that what you’re currently doing? Trying to internet stalk me?”

Stokes, who said she was not the recipient of the messages, later tweeted that “many Facebook employees” had reached out to her, and she praised them for “deft handling of a dicey issue during a time when words and actions matter more than ever”.

“It’s everyone’s issue when someone uses … possible privileged access to the biggest social media network of our time, and privilege of working in infosec [information security] … to lord it over potential partners,” Stokes tweeted.

Alex Stamos, Facebook’s chief security officer, who gave a speech about safety at the company’s annual developer conference on Tuesday, said in a statement: “It’s important that people’s information is kept secure and private when they use Facebook. It’s why we have strict policy controls and technical restrictions so employees only access the data they need to do their jobs – for example to fix bugs, manage customer support issues or respond to valid legal requests. Employees who abuse these controls will be fired.”

The controversy resembles a major scandal at Uber in 2016, when a former forensic investigator at the company testified that employees regularly abused the company’s “God view” feature to spy on the movements of high-profile politicians, celebrities, personal acquaintances and ex-partners.

In 2013, in the wake of the whistleblower Edward Snowden uncovering US mass surveillance operations, the National Security Agency also admitted that some of its analysts had abused government spy tools, including by targeting exes and spouses.

The news broke just as the Facebook CEO, Mark Zuckerberg, revealed that the site was launching a dating app for the social network meant to rival sites like Tinder and Match.com. In unveiling the new service in a speech at the San Jose conference, Zuckerberg said: “We’ve designed this with privacy and safety in mind from the beginning.”






EU: data-harvesting tech firms are ‘sweatshops of connected world’ | Technology



The European data protection supervisor has hit out at social media and tech firms over the constant stream of privacy policy emails in the run-up to GDPR, calling them the “sweatshops of the connected world”.

With the tough new General Data Protection Regulation coming into force on 25 May, companies around the world are having to ask their users to accept new privacy policies and data processing terms in order to continue using their services.

But Giovanni Buttarelli, the European data protection supervisor (EDPS), lambasted the often-hostile approach of the recent deluge of notifications.

“If this encounter seems a take-it-or-leave it proposition – with perhaps a hint of menace – then it is a travesty of at least the spirit of the new regulation, which aims to restore a sense of trust and control over what happens to our online lives,” said Buttarelli. “Consent cannot be freely given if the provision of a service is made conditional on processing personal data not necessary for the performance of a contract.”

“The most recent [Facebook] scandal has served to expose a broken and unbalanced ecosystem reliant on unscrupulous personal data collection and micro-targeting for whatever purposes promise to generate clicks and revenues.

“The digital information ecosystem farms people for their attention, ideas and data in exchange for so called ‘free’ services. Unlike their analogue equivalents, these sweatshops of the connected world extract more than one’s labour, and while clocking into the online factory is effortless it is often impossible to clock off.”

The European Union’s new, stronger, unified data protection law, the General Data Protection Regulation (GDPR), will come into force on 25 May 2018, after more than six years in the making.

GDPR will replace the current patchwork of national data protection laws, give data regulators greater powers to fine, offer companies a “one-stop shop” for operating across the whole of the EU, and create a new pan-European data regulator, the European Data Protection Board.

The new rules govern the processing and storage of EU citizens’ data – both data that people give to companies and data that companies observe about them – whether or not the company has operations in the EU. They state that data protection should be both by design and by default in any operation.

GDPR will refine and enshrine the “right to be forgotten” laws as the “right to erasure”, and give EU citizens the right to data portability, meaning they can take data from one organisation and give it to another. It will also bolster the requirement for explicit and informed consent before data is processed, and ensure that it can be withdrawn at any time.

To ensure companies comply, GDPR also gives data regulators the power to fine up to €20m or 4% of annual global turnover, whichever is greater – several orders of magnitude more than previously possible fines. Data breaches must be reported to a data regulator within 72 hours, and affected individuals must be notified unless the stolen data is unreadable, ie strongly encrypted.
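Because the ceiling is the greater of the two figures, the 4% test dominates for large firms. A toy calculation makes the point; the turnover figures below are invented for illustration, not any company’s real accounts:

    # GDPR's maximum fine: the greater of EUR 20m or 4% of annual
    # global turnover (the turnover inputs here are invented examples).
    def max_gdpr_fine(annual_global_turnover_eur):
        return max(20_000_000, 0.04 * annual_global_turnover_eur)

    print(max_gdpr_fine(100_000_000))     # small firm: the 20m floor applies
    print(max_gdpr_fine(40_000_000_000))  # large firm: 4% = 1.6bn euros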

While data protection and privacy have become hot-button issues, in part thanks to the Cambridge Analytica files, Buttarelli is concerned that they are simply being used as part of the “PR toolkit” of firms. He said there is “a growing gulf between hyperbole and reality, where controllers learn to talk a good game while continuing with the same old harmful habits”.

A new social media subgroup of data protection regulators will be convened in mid-May to tackle what Buttarelli called the “manipulative approaches” that must change with GDPR.

“Brilliant lawyers will always be able to fashion ingenious arguments to justify almost any practice. But with personal data processing we need to move to a different model,” said Buttarelli. “The old approach is broken and unsustainable – that will be, in my view, the abiding lesson of the Facebook/Cambridge Analytica case.”




Facebook announces dating app focused on ‘meaningful relationships’ | Technology



Facebook is launching a new dating app on the social media platform, its CEO, Mark Zuckerberg, announced at an annual developer conference on Tuesday, unveiling a feature designed to compete with popular services like Tinder.

Speaking in front of a packed crowd in San Jose, Zuckerberg described the new dating feature as a tool to build “real long-term relationships – not just hookups”.

“We want Facebook to be somewhere where you can start meaningful relationships,” he continued. “We’ve designed this with privacy and safety in mind from the beginning.”

The announcement sparked gasps from the crowd and seemed to attract the most interest from the audience during Zuckerberg’s short speech, which focused on the company’s widening privacy scandal, on new safeguards meant to protect users’ data, and on misinformation and fake news on the site.

Chris Cox, the chief product officer, said the dating feature would be “opt-in” and “safe” and that the company “took advantage of the unique properties of the platform”.

Cox showed a user’s hypothetical dating profile, which he said would be separate from an individual’s regular profile, accessed in a different section of the site. The dating feature would use only a first name and only be visible to those using the service, not an individual’s Facebook friends. The feature would not show up in the news feed, he added.

Cox said users of this feature could browse and “unlock” local events and message others planning to attend. If a potential date responded, the two would then connect via a text messaging feature that is not connected to WhatsApp or Facebook Messenger.

“We like this by the way because it mirrors the way people actually date, which is usually at events and institutions they’re connected to,” Cox said. “We hope this will help more folks meet and hopefully find partners.”

The sample profiles displayed at the conference resembled some basic features of Tinder.

Shares of Match, the company that owns Tinder, OkCupid and Match.com, fell by 21% after Zuckerberg announced the new feature, according to Bloomberg.

The CEO noted that one in three marriages in the US now starts online. He said couples who met on Facebook have repeatedly thanked him over the years.

Zuckerberg said: “These are some of the moments that I’m really proud of what we’re doing. I know that we’re making a positive difference in people’s lives.”

The announcement of the dating feature came after Zuckerberg acknowledged that it has been a particularly “intense” year for the company, following revelations that millions of Americans’ personal data was harvested from Facebook and improperly shared with the political consultancy Cambridge Analytica.






Amid privacy scandal, Facebook unveils tool that lets you clear browsing history | Technology



Mark Zuckerberg unveiled a new Facebook privacy control called “clear history” at the social media company’s annual developer conference, and admitted that he “didn’t have clear enough answers” about data control when he recently testified before Congress.

The CEO announced the new tool on Tuesday, describing it in a post as a “simple control to clear your browsing history on Facebook – what you’ve clicked on, websites you’ve visited, and so on”. The move comes at a time when Zuckerberg is battling some of the worst publicity his company has faced since it launched 14 years ago.

After reporting by the Observer and the Guardian in March revealed that millions of Americans’ personal data had been harvested from Facebook and improperly shared with the political consultancy Cambridge Analytica, the company has been consistently on the defensive – struggling through protracted government inquiries and hearings in the US and the UK, and forced to respond to renewed calls for strict regulation.

Last month, Zuckerberg survived a two-day grilling by Congress in Washington DC, remaining composed and in some cases cleverly deflecting lawmakers’ toughest questions about data collection. The CEO has also worked to overcome a viral #DeleteFacebook campaign, fueled by concerns about the social media company’s potential impacts on elections in the US and Europe and a steady stream of revelations about the controversial ways the company tracks its users.

In advance of his speech in a packed conference hall in San Jose, Zuckerberg wrote: “Once we roll out this update, you’ll be able to see information about the apps and websites you’ve interacted with, and you’ll be able to clear this information from your account. You’ll even be able to turn off having this information stored with your account.”

He added, “One thing I learned from my experience testifying in Congress is that I didn’t have clear enough answers to some of the questions about data. We’re working to make sure these controls are clear, and we will have more to come soon.”

Zuckerberg also cautioned users against clearing cookies in their browser, saying “it can make parts of your experience worse”, and adding, “Your Facebook won’t be as good while it relearns your preferences.”

Even though the company’s stock suffered in the wake of the recent privacy scandal, Facebook still posted record revenues for the first quarter of 2018, making $11.97bn in the first three months of the year.

In 2018, Zuckerberg pledged that his personal new year’s resolution – an annual tradition for the CEO – was to “fix” Facebook, an ambitious goal at the end of a year of relentless criticism surrounding the site’s role in spreading misinformation and having negative impacts on users’ mental health.

This year’s developer conference features a number of events that appear to emphasize Facebook’s positive influences on society, including sessions called “amplifying youth voices to influence policy” and “using technology to solve social and environmental issues”.


