
Why Silicon Valley can’t fix itself


Big Tech is sorry. After decades of rarely apologising for anything, Silicon Valley suddenly seems to be apologising for everything. They are sorry about the trolls. They are sorry about the bots. They are sorry about the fake news and the Russians, and the cartoons that are terrifying your kids on YouTube. But they are especially sorry about our brains.

Sean Parker, the former president of Facebook – who was played by Justin Timberlake in The Social Network – has publicly lamented the “unintended consequences” of the platform he helped create: “God only knows what it’s doing to our children’s brains.” Justin Rosenstein, an engineer who helped build Facebook’s “like” button and Gchat, regrets having contributed to technology that he now considers psychologically damaging, too. “Everyone is distracted,” Rosenstein says. “All of the time.”

Ever since the internet became widely used by the public in the 1990s, users have heard warnings that it is bad for us. In the early years, many commentators described cyberspace as a parallel universe that could swallow enthusiasts whole. The media fretted about kids talking to strangers and finding porn. A prominent 1998 study from Carnegie Mellon University claimed that spending time online made you lonely, depressed and antisocial.

In the mid-2000s, as the internet moved on to mobile devices, physical and virtual life began to merge. Bullish pundits celebrated the “cognitive surplus” unlocked by crowdsourcing and the tech-savvy campaigns of Barack Obama, the “internet president”. But, alongside these optimistic voices, darker warnings persisted. Nicholas Carr’s The Shallows (2010) argued that search engines were making people stupid, while Eli Pariser’s The Filter Bubble (2011) claimed algorithms made us insular by showing us only what we wanted to see. In Alone Together (2011) and Reclaiming Conversation (2015), Sherry Turkle warned that constant connectivity was making meaningful interaction impossible.

Still, inside the industry, techno-utopianism prevailed. Silicon Valley seemed to assume that the tools they were building were always forces for good – and that anyone who questioned them was a crank or a luddite. In the face of an anti-tech backlash that has surged since the 2016 election, however, this faith appears to be faltering. Prominent people in the industry are beginning to acknowledge that their products may have harmful effects.

Internet anxiety isn’t new. But never before have so many notable figures within the industry seemed so anxious about the world they have made. Parker, Rosenstein and the other insiders now talking about the harms of smartphones and social media belong to an informal yet influential current of tech critics emerging within Silicon Valley. You could call them the “tech humanists”. Amid rising public concern about the power of the industry, they argue that the primary problem with its products is that they threaten our health and our humanity.

It is clear that these products are designed to be maximally addictive, in order to harvest as much of our attention as they can. Tech humanists say this business model is both unhealthy and inhumane – that it damages our psychological well-being and conditions us to behave in ways that diminish our humanity. The main solution that they propose is better design. By redesigning technology to be less addictive and less manipulative, they believe we can make it healthier – we can realign technology with our humanity and build products that don’t “hijack” our minds.

The hub of the new tech humanism is the Center for Humane Technology in San Francisco. Founded earlier this year, the nonprofit has assembled an impressive roster of advisers, including investor Roger McNamee, Lyft president John Zimmer, and Rosenstein. But its most prominent spokesman is executive director Tristan Harris, a former “design ethicist” at Google who has been hailed by the Atlantic magazine as “the closest thing Silicon Valley has to a conscience”. Harris has spent years trying to persuade the industry of the dangers of tech addiction. In February, Pierre Omidyar, the billionaire founder of eBay, launched a related initiative: the Tech and Society Solutions Lab, which aims to “maximise the tech industry’s contributions to a healthy society”.

As suspicion of Silicon Valley grows, the tech humanists are making a bid to become tech’s loyal opposition. They are using their insider credentials to promote a particular diagnosis of where tech went wrong and of how to get it back on track. For this, they have been getting a lot of attention. As the backlash against tech has grown, so too has the appeal of techies repenting for their sins. The Center for Humane Technology has been profiled by – and praised in – the New York Times, the Atlantic, Wired and others.

But tech humanism’s influence cannot be measured solely by the positive media coverage it has received. The real reason tech humanism matters is that some of the most powerful people in the industry are starting to speak its idiom. Snap CEO Evan Spiegel has warned about social media’s role in encouraging “mindless scrambles for friends or unworthy distractions”, and Twitter boss Jack Dorsey recently claimed he wants to improve the platform’s “conversational health”.



Tristan Harris, founder of the Center for Humane Technology. Photograph: Robert Gumpert for the Guardian

Even Mark Zuckerberg, famous for encouraging his engineers to “move fast and break things”, seems to be taking a tech humanist turn. In January, he announced that Facebook had a new priority: maximising “time well spent” on the platform, rather than total time spent. By “time well spent”, Zuckerberg means time spent interacting with “friends” rather than businesses, brands or media sources. He said the News Feed algorithm was already prioritising these “more meaningful” activities.

Zuckerberg’s choice of words is significant: Time Well Spent is the name of the advocacy group that Harris led before co-founding the Center for Humane Technology. In April, Zuckerberg brought the phrase to Capitol Hill. When a photographer snapped a picture of the notes Zuckerberg used while testifying before the Senate, they included a discussion of Facebook’s new emphasis on “time well spent”, under the heading “wellbeing”.

This new concern for “wellbeing” may strike some observers as a welcome development. After years of ignoring their critics, industry leaders are finally acknowledging that problems exist. Tech humanists deserve credit for drawing attention to one of those problems – the manipulative design decisions made by Silicon Valley.

But these decisions are only symptoms of a larger issue: the fact that the digital infrastructures that increasingly shape our personal, social and civic lives are owned and controlled by a few billionaires. Because it ignores the question of power, the tech-humanist diagnosis is incomplete – and could even help the industry evade meaningful reform. Taken up by leaders such as Zuckerberg, tech humanism is likely to result in only superficial changes. These changes may soothe some of the popular anger directed towards the tech industry, but they will not address the origin of that anger. If anything, they will make Silicon Valley even more powerful.


The Center for Humane Technology argues that technology must be “aligned” with humanity – and that the best way to accomplish this is through better design. Their website features a section entitled The Way Forward. A familiar evolutionary image shows the silhouettes of several simians, rising from their crouches to become a man, who then turns back to contemplate his history.

“In the future, we will look back at today as a turning point towards humane design,” the header reads. In answer to the litany of problems caused by “technology that extracts attention and erodes society”, the text asserts that “humane design is the solution”. Drawing on the rhetoric of the “design thinking” philosophy that has long suffused Silicon Valley, the website explains that humane design “starts by understanding our most vulnerable human instincts so we can design compassionately”.

There is a good reason why the language of tech humanism is penetrating the upper echelons of the tech industry so easily: this language is not foreign to Silicon Valley. On the contrary, “humanising” technology has long been its central ambition and the source of its power. It was precisely by developing a “humanised” form of computing that entrepreneurs such as Steve Jobs brought computing into millions of users’ everyday lives. Their success turned the Bay Area tech industry into a global powerhouse – and produced the digitised world that today’s tech humanists now lament.

The story begins in the 1960s, when Silicon Valley was still a handful of electronics firms clustered among fruit orchards. Computers came in the form of mainframes then. These machines were big, expensive and difficult to use. Only corporations, universities and government agencies could afford them, and they were reserved for specialised tasks, such as calculating missile trajectories or credit scores.

Computing was industrial, in other words, not personal, and Silicon Valley remained dependent on a small number of big institutional clients. The practical danger that this dependency posed became clear in the early 1960s, when the US Department of Defense, by far the single biggest buyer of digital components, began cutting back on its purchases. But the fall in military procurement wasn’t the only mid-century crisis around computing.

Computers also had an image problem. The inaccessibility of mainframes made them easy to demonise. In these whirring hulks of digital machinery, many observers saw something inhuman, even evil. To antiwar activists, computers were weapons of the war machine that was killing thousands in Vietnam. To highbrow commentators such as the social critic Lewis Mumford, computers were instruments of a creeping technocracy that threatened to extinguish personal freedom.

But during the course of the 1960s and 70s, a series of experiments in northern California helped solve both problems. These experiments yielded breakthrough innovations like the graphical user interface, the mouse and the microprocessor. Computers became smaller, more usable and more interactive, reducing Silicon Valley’s reliance on a few large customers while giving digital technology a friendlier face.

Apple founder Steve Jobs ‘got the notion of tools for human use’. Photograph: Ted Thai/Polaris / eyevine

The pioneers who led this transformation believed they were making computing more human. They drew deeply from the counterculture of the period, and its fixation on developing “human” modes of living. They wanted their machines to be “extensions of man”, in the words of Marshall McLuhan, and to unlock “human potential” rather than repress it. At the centre of this ecosystem of hobbyists, hackers, hippies and professional engineers was Stewart Brand, famed entrepreneur of the counterculture and founder of the Whole Earth Catalog. In a famous 1972 article for Rolling Stone, Brand called for a new model of computing that “served human interest, not machine”.

Brand’s disciples answered this call by developing the technical innovations that transformed computers into the form we recognise today. They also promoted a new way of thinking about computers – not as impersonal slabs of machinery, but as tools for unleashing “human potential”.

No single figure contributed more to this transformation of computing than Steve Jobs, who was a fan of Brand and a reader of the Whole Earth Catalog. Jobs fulfilled Brand’s vision on a global scale, launching the mass personal computing era with the Macintosh in the mid-80s, and the mass smartphone era with the iPhone two decades later. Brand later acknowledged that Jobs embodied the Whole Earth Catalog ethos. “He got the notion of tools for human use,” Brand told Jobs’ biographer, Walter Isaacson.

Building those “tools for human use” turned out to be great for business. The impulse to humanise computing enabled Silicon Valley to enter every crevice of our lives. From phones to tablets to laptops, we are surrounded by devices that have fulfilled the demands of the counterculture for digital connectivity, interactivity and self-expression. Your iPhone responds to the slightest touch; you can look at photos of anyone you have ever known, and broadcast anything you want to all of them, at any moment.

In short, the effort to humanise computing produced the very situation that the tech humanists now consider dehumanising: a wilderness of screens where digital devices chase every last instant of our attention. To guide us out of that wilderness, tech humanists say we need more humanising. They believe we can use better design to make technology serve human nature rather than exploit and corrupt it. But this idea is drawn from the same tradition that created the world that tech humanists believe is distracting and damaging us.


Tech humanists say they want to align humanity and technology. But this project is based on a deep misunderstanding of the relationship between humanity and technology: namely, the fantasy that these two entities could ever exist in separation.

It is difficult to imagine human beings without technology. The story of our species began when we began to make tools. Homo habilis, the first members of our genus, left sharpened stones scattered across Africa. Their successors hit rocks against each other to make sparks, and thus fire. With fire you could cook meat and clear land for planting; with ash you could fertilise the soil; with smoke you could make signals. In flickering light, our ancestors painted animals on cave walls. The ancient tragedian Aeschylus recalled this era mythically: Prometheus, in stealing fire from the gods, “founded all the arts of men.”

All of which is to say: humanity and technology are not only entangled, they constantly change together. This is not just a metaphor. Recent research suggests that the human hand evolved to manipulate the stone tools that our ancestors used. The evolutionary scientist Mary Marzke shows that we developed “a unique pattern of muscle architecture and joint surface form and functions” for this purpose.

The ways our bodies and brains change in conjunction with the tools we make have long inspired anxieties that “we” are losing some essential qualities. For millennia, people have feared that new media were eroding the very powers that they promised to extend. In The Phaedrus, Socrates warned that writing on wax tablets would make people forgetful. If you could jot something down, you wouldn’t have to remember it. In the late middle ages, as a culture of copying manuscripts gave way to printed books, teachers warned that pupils would become careless, since they no longer had to transcribe what their teachers said.

Yet as we lose certain capacities, we gain new ones. People who used to navigate the seas by following stars can now program computers to steer container ships from afar. Your grandmother probably has better handwriting than you do – but you probably type faster.

The nature of human nature is that it changes. It cannot, therefore, serve as a stable basis for evaluating the impact of technology. Yet the assumption that it doesn’t change serves a useful purpose. Treating human nature as something static, pure and essential elevates the speaker into a position of power. Claiming to tell us who we are, they tell us how we should be.

Intentionally or not, this is what tech humanists are doing when they talk about technology as threatening human nature – as if human nature had stayed the same from the paleolithic era until the rollout of the iPhone. Holding humanity and technology separate clears the way for a small group of humans to determine the proper alignment between them. And while the tech humanists may believe they are acting in the common good, they themselves acknowledge they are doing so from above, as elites. “We have a moral responsibility to steer people’s thoughts ethically,” Tristan Harris has declared.

Harris and his fellow tech humanists also frequently invoke the language of public health. The Center for Humane Technology’s Roger McNamee has gone so far as to call public health “the root of the whole thing”, and Harris has compared using Snapchat to smoking cigarettes. The public-health framing casts the tech humanists in a paternalistic role. Resolving a public health crisis requires public health expertise. It also precludes the possibility of democratic debate. You don’t put the question of how to treat a disease up for a vote – you call a doctor.

This paternalism produces a central irony of tech humanism: the language that they use to describe users is often dehumanising. “Facebook appeals to your lizard brain – primarily fear and anger,” says McNamee. Harris echoes this sentiment: “Imagine you had an input cable,” he has said. “You’re trying to jack it into a human being. Do you want to jack it into their reptilian brain, or do you want to jack it into their more reflective self?”

The Center for Humane Technology’s website offers tips on how to build a more reflective and less reptilian relationship to your smartphone: “going greyscale” by setting your screen to black-and-white, turning off app notifications and charging your device outside your bedroom. It has also announced two major initiatives: a national campaign to raise awareness about technology’s harmful effects on young people’s “digital health and well-being”; and a “Ledger of Harms” – a website that will compile information about the health effects of different technologies in order to guide engineers in building “healthier” products.

These initiatives may help some people reduce their smartphone use – a reasonable personal goal. But not everyone shares this goal, and there need not be anything unhealthy about that. Many people rely on the internet for solace and solidarity, especially those who feel marginalised. The kid with autism may stare at his screen when surrounded by people, because it lets him tolerate being surrounded by people. For him, constant use of technology may not be destructive at all, but in fact life-saving.

Pathologising certain potentially beneficial behaviours as “sick” isn’t the only problem with the Center for Humane Technology’s proposals. They also remain confined to the personal level, aiming to redesign how the individual user interacts with technology rather than tackling the industry’s structural failures. Tech humanism fails to address the root cause of the tech backlash: the fact that a small handful of corporations own our digital lives and strip-mine them for profit. This is a fundamentally political and collective issue. But by framing the problem in terms of health and humanity, and the solution in terms of design, the tech humanists personalise and depoliticise it.

This may be why their approach is so appealing to the tech industry. There is no reason to doubt the good intentions of tech humanists, who may genuinely want to address the problems fuelling the tech backlash. But they are handing the firms that caused those problems a valuable weapon. Far from challenging Silicon Valley, tech humanism offers Silicon Valley a useful way to pacify public concerns without surrendering any of its enormous wealth and power. By channelling popular anger at Big Tech into concerns about health and humanity, tech humanism gives corporate giants such as Facebook a way to avoid real democratic control. In a moment of danger, it may even help them protect their profits.


One can easily imagine a version of Facebook that embraces the principles of tech humanism while remaining a profitable and powerful monopoly. In fact, these principles could make Facebook even more profitable and powerful, by opening up new business opportunities. That seems to be exactly what Facebook has planned.

Zuckerberg’s announcement that Facebook would prioritise “time well spent” over total time spent came a couple of weeks before the company released its 2017 Q4 earnings. These showed that total time spent on the platform had dropped by around 5%, or about 50m hours per day. But, Zuckerberg said, this was by design: in particular, it was in response to tweaks to the News Feed that prioritised “meaningful” interactions with “friends” over consuming “public content” like video and news. This would ensure that “Facebook isn’t just fun, but also good for people’s well-being”.

Zuckerberg said he expected those changes would continue to decrease total time spent – but “the time you do spend on Facebook will be more valuable”. This may describe what users find valuable – but it also refers to what Facebook finds valuable. In a recent interview, he said: “Over the long term, even if time spent goes down, if people are spending more time on Facebook actually building relationships with people they care about, then that’s going to build a stronger community and build a stronger business, regardless of what Wall Street thinks about it in the near term.”

Sheryl Sandberg has also stressed that the shift will create “more monetisation opportunities”. How? Everyone knows data is the lifeblood of Facebook – but not all data is created equal. One of the most valuable sources of data to Facebook is used to inform a metric called “coefficient”. This measures the strength of a connection between two users – Zuckerberg once called it “an index for each relationship”. Facebook records every interaction you have with another user – from liking a friend’s post or viewing their profile, to sending them a message. These activities provide Facebook with a sense of how close you are to another person, and different activities are weighted differently. Messaging, for instance, is considered the strongest signal. It’s reasonable to assume that you’re closer to somebody you exchange messages with than somebody whose post you once liked.

Why is coefficient so valuable? Because Facebook uses it to create a Facebook they think you will like: it guides algorithmic decisions about what content you see and the order in which you see it. It also helps improve ad targeting, by showing you ads for things liked by friends with whom you often interact. Advertisers can target the closest friends of the users who already like a product, on the assumption that close friends tend to like the same things.
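Facebook has never published the formula behind coefficient, but the description above implies its general shape: a weighted sum of interaction counts between two users, with data-rich signals such as messaging weighted most heavily. The sketch below is purely illustrative – the interaction types and weights are assumptions chosen only to show the idea, not Facebook’s actual values:

```python
# Illustrative sketch of a coefficient-style relationship score.
# Facebook has never published its formula; the interaction types and
# weights here are assumptions, used only to show a weighted sum in
# which messaging counts as the strongest signal of closeness.

INTERACTION_WEIGHTS = {
    "message": 1.0,       # assumed strongest signal
    "comment": 0.5,
    "like": 0.2,
    "profile_view": 0.1,
}

def relationship_score(interactions):
    """Return a weighted sum of interaction counts between two users.

    `interactions` maps an interaction type to how often it occurred,
    e.g. {"message": 12, "like": 40}.
    """
    return sum(
        INTERACTION_WEIGHTS.get(kind, 0.0) * count
        for kind, count in interactions.items()
    )

# Someone you message often scores far higher than someone
# whose post you once liked:
print(relationship_score({"message": 25, "comment": 4, "like": 10}))  # 29.0
print(relationship_score({"like": 1}))                                # 0.2
```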

Facebook CEO Mark Zuckerberg testifies before a joint hearing of the US Senate Commerce, Science and Transportation Committee. Photograph: Jim Watson/AFP/Getty Images

So when Zuckerberg talks about wanting to increase “meaningful” interactions and building relationships, he is not succumbing to pressure to take better care of his users. Rather, emphasising time well spent means creating a Facebook that prioritises data-rich personal interactions that Facebook can use to make a more engaging platform. Rather than spending a lot of time doing things that Facebook doesn’t find valuable – such as watching viral videos – you can spend a bit less time, but spend it doing things that Facebook does find valuable.

In other words, “time well spent” means Facebook can monetise more efficiently. It can prioritise the intensity of data extraction over its extensiveness. This is a wise business move, disguised as a concession to critics. Shifting to this model not only sidesteps concerns about tech addiction – it also acknowledges certain basic limits to Facebook’s current growth model. There are only so many hours in the day. Facebook can’t keep prioritising total time spent – it has to extract more value from less time.

In many ways, this process recalls an earlier stage in the evolution of capitalism. In the 19th century, factory owners in England discovered they could only make so much money by extending the length of the working day. At some point, workers would die of exhaustion, or they would revolt, or they would push parliament to pass laws that limited their working hours. So industrialists had to find ways to make the time of the worker more valuable – to extract more money from each moment rather than adding more moments. They did this by making industrial production more efficient: developing new technologies and techniques that squeezed more value out of the worker and stretched that value further than ever before.

A similar situation confronts Facebook today. They have to make the attention of the user more valuable – and the language and concepts of tech humanism can help them do it. So far, it seems to be working. Despite the reported drop in total time spent, Facebook recently announced huge 2018 Q1 earnings of $11.97bn (£8.7bn), smashing Wall Street estimates by nearly $600m.


Today’s tech humanists come from a tradition with deep roots in Silicon Valley. Like their predecessors, they believe that technology and humanity are distinct, but can be harmonised. This belief guided the generations who built the “humanised” machines that became the basis for the industry’s enormous power. Today it may provide Silicon Valley with a way to protect that power from a growing public backlash – and even deepen it by uncovering new opportunities for profit-making.

Fortunately, there is another way of thinking about how to live with technology – one that is both truer to the history of our species and useful for building a more democratic future. This tradition does not address “humanity” in the abstract, but as distinct human beings, whose capacities are shaped by the tools they use. It sees us as hybrids of animal and machine – as “cyborgs”, to quote the biologist and philosopher of science Donna Haraway.

To say that we’re all cyborgs is not to say that all technologies are good for us, or that we should embrace every new invention. But it does suggest that living well with technology can’t be a matter of making technology more “human”. This goal isn’t just impossible – it’s also dangerous, because it puts us at the mercy of experts who tell us how to be human. It cedes control of our technological future to those who believe they know what’s best for us because they understand the essential truths about our species.

The cyborg way of thinking, by contrast, tells us that our species is essentially technological. We change as we change our tools, and our tools change us. But even though our continuous co-evolution with our machines is inevitable, the way it unfolds is not. Rather, it is determined by who owns and runs those machines. It is a question of power.

Today, that power is wielded by corporations, which own our technology and run it for profit. The various scandals that have stoked the tech backlash all share a single source. Surveillance, fake news and the miserable working conditions in Amazon’s warehouses are profitable. If they were not, they would not exist. They are symptoms of a profound democratic deficit inflicted by a system that prioritises the wealth of the few over the needs and desires of the many.

There is an alternative. If being technological is a feature of being human, then the power to shape how we live with technology should be a fundamental human right. The decisions that most affect our technological lives are far too important to be left to Mark Zuckerberg, rich investors or a handful of “humane designers”. They should be made by everyone, together.

Rather than trying to humanise technology, then, we should be trying to democratise it. We should be demanding that society as a whole gets to decide how we live with technology – rather than the small group of people who have captured society’s wealth.

What does this mean in practice? First, it requires limiting and eroding Silicon Valley’s power. Antitrust laws and tax policy offer useful ways to claw back the fortunes Big Tech has built on common resources. After all, Silicon Valley wouldn’t exist without billions of dollars of public funding, not to mention the vast quantities of information that we all provide for free. Facebook’s market capitalisation is $500bn with 2.2 billion users – do the math to estimate how much the time you spend on Facebook is worth. You could apply the same logic to Google. There is no escape: whether or not you have an account, both platforms track you around the internet.
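Doing that math, on the crude assumption that market value divides evenly across accounts: $500bn ÷ 2.2 billion users ≈ $227 of market capitalisation per user – a rough proxy, at best, for what each person’s data and attention are worth to the company.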

In addition to taxing and shrinking tech firms, democratic governments should be making rules about how those firms are allowed to behave – rules that restrict how they can collect and use our personal data, for instance, like the General Data Protection Regulation coming into effect in the European Union later this month. But more robust regulation of Silicon Valley isn’t enough. We also need to pry the ownership of our digital infrastructure away from private firms.

This means developing publicly and co-operatively owned alternatives that empower workers, users and citizens to determine how they are run. These democratic digital structures can focus on serving personal and social needs rather than piling up profits for investors. One inspiring example is municipal broadband: a successful experiment in Chattanooga, Tennessee, has shown that publicly owned internet service providers can supply better service at lower cost than private firms. Other models of digital democracy might include a worker-owned Uber, a user-owned Facebook or a socially owned “smart city” of the kind being developed in Barcelona. Alternatively, we might demand that tech firms pay for the privilege of extracting our data, so that we can collectively benefit from a resource we collectively create.

More experimentation is needed, but democracy should be our guiding principle. The stakes are high. Never before have so many people been thinking about the problems produced by the tech industry and how to solve them. The tech backlash is an enormous opportunity – and one that may not come again for a long time.

The old techno-utopianism is crumbling. What will replace it? Silicon Valley says it wants to make the world a better place. Fulfilling this promise may require a new kind of disruption.

Main illustration by Lee Martin/Guardian Design






Facebook announces dating app focused on ‘meaningful relationships’


Facebook is launching a new dating app on the social media platform, its CEO, Mark Zuckerberg, announced at an annual developer conference on Tuesday, unveiling a feature designed to compete with popular services like Tinder.

Speaking in front of a packed crowd in San Jose, Zuckerberg described the new dating feature as a tool to build “real long-term relationships – not just hookups”.

“We want Facebook to be somewhere where you can start meaningful relationships,” he continued. “We’ve designed this with privacy and safety in mind from the beginning.”

The announcement sparked gasps from the crowd and seemed to attract the most interest from the audience during Zuckerberg’s short speech, which focused on the company’s widening privacy scandal, new safeguards meant to protect users’ data, and misinformation and fake news on the site.

Chris Cox, the chief product officer, said the dating feature would be “opt-in” and “safe” and that the company “took advantage of the unique properties of the platform”.

Cox showed a user’s hypothetical dating profile, which he said would be separate from an individual’s regular profile, accessed in a different section of the site. The dating feature would use only a first name and only be visible to those using the service, not an individual’s Facebook friends. The feature would not show up in the news feed, he added.

Cox said users of this feature could browse and “unlock” local events and message others planning to attend. If a potential date responded, the two would then connect via a text messaging feature that is not connected to WhatsApp or Facebook Messenger.

“We like this by the way because it mirrors the way people actually date, which is usually at events and institutions they’re connected to,” Cox said. “We hope this will help more folks meet and hopefully find partners.”

The sample profiles displayed at the conference resembled some basic features of Tinder.

Shares of Match, the company that owns Tinder, OkCupid and Match.com, fell by 21% after Zuckerberg announced the new feature, according to Bloomberg.

The CEO noted that one in three marriages in the US now start online. He said couples who met on Facebook have repeatedly thanked him over the years.

Zuckerberg said: “These are some of the moments that I’m really proud of what we’re doing. I know that we’re making a positive difference in people’s lives.”

The announcement of the dating feature came after Zuckerberg acknowledged that it has been a particularly “intense” year for the company, following revelations that millions of Americans’ personal data was harvested from Facebook and improperly shared with the political consultancy Cambridge Analytica.





Amid privacy scandal, Facebook unveils tool that lets you clear browsing history


Mark Zuckerberg unveiled a new Facebook privacy control called “clear history” at the social media company’s annual developer conference, and admitted that he “didn’t have clear enough answers” about data control when he recently testified before Congress.

The CEO announced the new tool on Tuesday, describing it in a post as a “simple control to clear your browsing history on Facebook – what you’ve clicked on, websites you’ve visited, and so on”. The move comes at a time when Zuckerberg is battling some of the worst publicity his company has faced since it launched 14 years ago.

After reporting by the Observer and the Guardian in March revealed that millions of Americans’ personal data was harvested from Facebook and improperly shared with the political consultancy Cambridge Analytica, the company has consistently been on the defensive – struggling through protracted government inquiries and hearings in the US and the UK, and forced to respond to renewed calls for strict regulations.

Last month, Zuckerberg survived a two-day grilling by Congress in Washington DC, remaining composed and in some cases cleverly deflecting lawmakers’ toughest questions about data collection. The CEO has also worked to overcome a viral #DeleteFacebook campaign, fueled by concerns about the social media company’s potential impacts on elections in the US and Europe and a steady stream of revelations about the controversial ways the company tracks its users.

In advance of his speech in a packed conference hall in San Jose, Zuckerberg wrote: “Once we roll out this update, you’ll be able to see information about the apps and websites you’ve interacted with, and you’ll be able to clear this information from your account. You’ll even be able to turn off having this information stored with your account.”

He added, “One thing I learned from my experience testifying in Congress is that I didn’t have clear enough answers to some of the questions about data. We’re working to make sure these controls are clear, and we will have more to come soon.”

Zuckerberg also cautioned users against clearing cookies in their browser, saying “it can make parts of your experience worse”, and adding, “Your Facebook won’t be as good while it relearns your preferences.”

Even though the company’s stock suffered in the wake of the recent privacy scandal, Facebook still posted record revenues for the first quarter of 2018, making $11.97bn in the first three months of the year.

In 2018, Zuckerberg pledged that his personal new year’s resolution – an annual tradition for the CEO – was to “fix” Facebook, an ambitious goal at the end of a year of relentless criticism surrounding the site’s role in spreading misinformation and having negative impacts on users’ mental health.

This year’s developer conference features a number of events that appear to emphasize Facebook’s positive influences on society, including sessions called “amplifying youth voices to influence policy” and “using technology to solve social and environmental issues”.



MPs threaten Mark Zuckerberg with summons over Facebook data


MPs have threatened to issue Mark Zuckerberg with a formal summons to appear in front of parliament when he next enters the UK, unless he voluntarily agrees to answer questions about the activities of his social network and the Cambridge Analytica scandal.

Damian Collins, the chair of the parliamentary committee that is investigating online disinformation, said he was unhappy with the information the company had provided and now wanted to hear evidence from the Facebook chief executive before parliament went into recess on 24 May.

Saturday 17 March

The Observer publishes online its first story on the Facebook and Cambridge Analytica scandal, written by Carole Cadwalladr and Emma Graham-Harrison.

Former Cambridge Analytica employee Christopher Wylie reveals how the firm used personal information taken in early 2014 to build a system that could profile individual US voters.

The data was collected through an app, built by academic Aleksandr Kogan, separately from his work at Cambridge University, through his company Global Science Research (GSR).

Sunday 18 March

As the Observer publishes its full interview with Wylie in the print edition, the fallout begins. US congressional investigators call for Cambridge Analytica boss Alexander Nix to testify again before their committee.

Monday 19 March

Channel 4 News airs the findings of an undercover investigation in which Cambridge Analytica executives boast of using honey traps, fake news campaigns and operations with ex-spies to swing election campaigns.

Tuesday 20 March

A former Facebook employee claims hundreds of millions of Facebook users may have had their private information harvested by companies using similar methods.

Wednesday 21 March

UK MPs summon Mark Zuckerberg to appear before a select committee investigating fake news, and accuse Facebook of misleading them at a previous hearing. 

Thursday 22 March

It emerges Facebook had previously provided Kogan with an anonymised, aggregate dataset of 57bn Facebook friendships. Zuckerberg breaks his silence to call the misuse of data a ‘breach of trust’.

Friday 23 March

Brittany Kaiser, formerly Cambridge Analytica’s business development director, reveals the blueprint for how CA claimed to have won the White House for Donald Trump by using Google, Snapchat, Twitter, Facebook and YouTube.



“It is worth noting that, while Mr Zuckerberg does not normally come under the jurisdiction of the UK parliament, he will do so the next time he enters the country,” Collins wrote in a public letter to Facebook. “We hope that he will respond positively to our request, but, if not, the committee will resolve to issue a formal summons for him to appear when he is next in the UK.”

Collins referred to an unconfirmed report by Politico that Zuckerberg planned to appear in front of the European parliament this month, suggesting it would be simple for the Facebook chief to extend his trip to attend a hearing in the UK.

The committee has repeatedly invited Zuckerberg to give evidence but Facebook has sent more junior executives to answer questions from MPs.

Facebook declined to comment on the possibility of a formal summons. In theory, Zuckerberg could be found in contempt of parliament if he refuses one.

When Rupert Murdoch and his son James resisted appearing in front of a select committee in 2011 it was speculated that potential punishments could include “fines and imprisonment”. In reality it is likely that, at worst, the punishment for ignoring such a summons would include an arcane process resulting in little more than a formal warning.

Collins said last week’s five-hour evidence session by Facebook’s chief technology officer, Mike Schroepfer, was unsatisfactory and his answers “lacked many of the important details” needed.

Collins’ committee formally issued a list of 39 supplementary questions they wanted answered following Schroepfer’s session, in which Facebook was labelled a “morality-free zone”.

Zuckerberg did make time to appear in front of the US Congress, where politicians were allocated five minutes each to ask questions. British select committee hearings allow politicians more time to ask follow-up questions, potentially making it a more testing experience.



Facebook posts record revenues for first quarter despite privacy scandal


Facebook’s data privacy problems have had little impact on its profitability as the company posted record revenues for the first quarter of 2018.

The company made $11.97bn in revenue in the first three months of the year, up 49% from the previous year, beating Wall Street estimates of $11.41bn.

“Despite facing important challenges, our community and business are off to a strong start in 2018,” said Mark Zuckerberg, Facebook’s founder and CEO, in a statement. “We are taking a broader view of our responsibility and investing to make sure our services are used for good. But we also need to keep building new tools to help people connect, strengthen our communities, and bring the world closer together.”

Facebook’s approach to privacy has come under intense scrutiny over the last few weeks, following the revelation that the personal data of millions of Americans was harvested from Facebook and improperly shared with the political consultancy Cambridge Analytica. Stoked by fears that the data may have been used to try to influence elections in the US and Europe, and by the discovery that Facebook collects far more data than the average person realises (including web browsing history and, in some cases, text messages), some users have started a #DeleteFacebook movement, including the co-founder of the Facebook-owned WhatsApp.

Facebook’s user figures indicate that the movement had little effect, with daily active users rising to 1.45bn (48 million more than the previous quarter) and monthly active users to 2.2bn. Both figures mark a 13% increase from the same quarter in 2017.

However, much of the momentum behind the movement came in April, when Zuckerberg delivered eight hours of testimony before Congress, after the quarter was over.

In the last few weeks, the social network has announced a slew of changes to its privacy tools and to the way that it collects and shares user data. It has also introduced a verification process for political advertisers and page administrators and plans to comply with Europe’s General Data Protection Regulation (GDPR). As if to demonstrate its continued commitment to transparency, this week the company published its content moderation guidelines – a year after the Guardian revealed the secret rules.

Facebook’s chief financial officer, Dave Wehner, warned that GDPR could lead to a fall in users and revenue in Europe, particularly as people start tightening up their accounts to prevent so much targeted advertising.

“While we don’t expect the changes will have a significant impact on ad revenue, any change in our ability for us and advertisers to use data can impact our optimisation potential at the margin,” he said.

In spite of pledging to roll out GDPR-compliant settings to all Facebook users, the chief operating officer, Sheryl Sandberg, said the settings “would not be exactly the same format”. Because of this, Facebook doesn’t expect there to be any impact on ad revenue outside of Europe, indicating that those settings would not make a difference to the way Facebook uses people’s personal data, which is the whole point of GDPR.

Sandberg rejected any suggestion that Facebook should diversify its business model away from micro-targeted advertising.

“We are proud of the ad model we’ve built,” she said. “It ensures people see more useful ads, allows businesses to grow, and keeps the service free. Advertising-supported businesses like Facebook equalise access and improve opportunity.”

Zuckerberg said that changes the company had made to the news feed to prioritise “meaningful interaction” rather than “passive consumption” were already having an effect – with a decline in people passively watching video.

“People want [Facebook] to be more about friends and family and less about content consumption,” he said.



Cambridge University rejected Facebook study over ‘deceptive’ privacy standards


A Cambridge University ethics panel rejected research by the academic at the centre of the Facebook data harvesting scandal over the social network’s “deceptive” approach to its users’ privacy, newly released documents reveal.

A 2015 proposal by Aleksandr Kogan, a member of the university’s psychology department, involved personal data from 250,000 Facebook users and their 54 million friends, which he had already gleaned via a personality quiz app in a commercial project funded by SCL, the parent company of Cambridge Analytica.

Separately, Kogan proposed an academic investigation on how Facebook likes are linked to “personality traits, socioeconomic status and physical environments”, according to an ethics application about the project released to the Guardian in response to a freedom of information request.

The documents shed new light on suggestions from the Facebook CEO, Mark Zuckerberg, that the university’s controls on research did not meet Facebook’s own standards. In testimony to the US Congress earlier this month, Zuckerberg said he was concerned over Cambridge’s approach, telling a hearing: “What we do need to understand is whether there is something bad going on at Cambridge University overall, that will require a stronger action from us.”

But in the newly published material, the university’s psychology research ethics committee says it found the Kogan proposal so “worrisome” that it took the “very rare” decision to reject the project.

The panel said Facebook’s approach to consent “falls far below the ethical expectations of the university”.


Correspondence around the decision was released hours before Kogan appeared before a Commons inquiry into fake news and misinformation. In written and oral evidence to the committee, Kogan insisted that all his academic work was reviewed and approved by the university. But he did not mention to the MPs the ethics committee’s rejection of his proposed research using the Facebook data in May 2015.

Explaining the decision, one member of the panel said the Facebook users involved had not given sufficient consent to allow the research to be conducted, or given a chance to withdraw from the project. The academic, whose name was redacted from the document, said: “Facebook’s privacy policy is not sufficient to address my concerns.”

Appealing against the panel’s rejection, a letter believed to be written by Kogan pointed out that “users’ social network data is already downloaded and used without their direct consent by thousands of companies who develop apps for Facebook”.

It added: “In fact, access to data by third parties for various purposes is fundamental to every app on Facebook; so users have already had their data downloaded and used by companies for private interest.”

Another panel member felt that information shared with Facebook friends should not be regarded as public data. In a response to Kogan’s appeal, the academic said: “Once you have persuaded someone to run your Facebook app, you can really only collect that subject’s data. What his or her friends have disclosed is by default disclosed to ‘friends’ only, that is, with an expectation of confidence.”

The ethics panel member added: “Facebook is rather deceptive on this and creates the appearance of a cosy and confidential peer group environment, as a means of gulling users into disclosing private information that they then sell to advertisers, but this doesn’t make it right to an ethical researcher to follow their lead.”

The academic also likened Facebook to a contagion. The letter sent in July 2015 said: “My view of Facebook is that it’s a bit like an infectious disease; you end up catching what your friends have. If there are bad apps, or malware, or even dodgy marketing offers, they get passed along through friendship networks. An ethical approach to using social networking data has to take account of this.”

Kogan accepted that he made mistakes in how the Facebook data was collected. He told Tuesday’s committee hearing: “Fundamentally I made a mistake, by not being critical about this. I should have got better advice on what is and isn’t appropriate.”

But asked if he accepted that he broke Facebook’s terms and conditions, Kogan said: “I do not. I would agree that my actions were inconsistent with the language of these documents, but that’s slightly different.”

Kogan collected Facebook data before the network changed its terms of service in 2014 to stop developers harvesting data via apps.

Facebook has banned Kogan from the network and insisted that he violated its platform policy by transferring data his app collected to Cambridge Analytica. The company has previously said it is “strongly committed to protecting people’s information”.

In an initial statement on Kogan’s research, Mark Zuckerberg said Facebook had already taken key steps to secure users’ data and said it would go further to prevent abuse. He added: “This was a breach of trust between Kogan, Cambridge Analytica and Facebook. But it was also a breach of trust between Facebook and the people who share their data with us and expect us to protect it.”

Kogan has been approached for comment.



Facebook admits tracking users and non-users off-site


Facebook has released more information on the social media platform’s tracking of users off-site, after its CEO, Mark Zuckerberg, failed to answer questions about the process from US politicians and as the company prepares to fight a lawsuit over facial recognition in California.

In a blog post, Facebook’s product management director, David Baser, wrote that the company tracked users and non-users across websites and apps for three main reasons: providing services directly, securing the company’s own site, and “improving our products and services”.

“When you visit a site or app that uses our services, we receive information even if you’re logged out or don’t have a Facebook account. This is because other apps and sites don’t know who is using Facebook,” Baser wrote.

“Whether it’s information from apps and websites, or information you share with other people on Facebook, we want to put you in control – and be transparent about what information Facebook has and how it is used.”

But the company’s transparency has still not extended to telling non-users what it knows about them – an issue Zuckerberg also faced questions over from Congress. Asked by Texas representative Gene Green whether all information Facebook holds about a user is in the file the company offers as part of its “download your data” feature, Zuckerberg had responded he believed that to be the case.

Privacy campaigner Paul-Olivier Dehaye disagreed, noting that, as a non-Facebook user, he had been unable to access personal data collected through the company’s off-site tracking systems. Following an official subject access request under EU law, he told MPs last month, Facebook had responded that it was unable to provide the information.

“They’re saying they’re so big the cost would be too large to provide me data,” he said. “They’re really arguing that they’re too big to comply with data protection law, the cost is too high, which is mind-boggling.”

Following Zuckerberg’s testimony, Dehaye made a complaint to the Irish data protection commissioner, noting the CEO “contradicted the pronouncements to me over past year by his own privacy Operations team”.

The Facebook blogpost comes as the company faces a class action lawsuit in California over its decision to launch a facial recognition feature for US users in 2011.

The “tag suggestions” feature involves Facebook running facial recognition tech on uploaded photos to match them with other users automatically. But a class action suit representing Facebook users in Illinois argues that the technology breaches the law in that state.

On Monday, US district judge James Donato ruled the suit could go ahead, representing all Illinois users “for whom Facebook created and stored a face template after 7 June 2011”.

The feature was turned off in the EU shortly after it launched, and Facebook committed in 2012 to delete all face templates by October that year, as part of a wide-ranging agreement with the Irish data protection commissioner.

Now, however, the company intends to relaunch facial recognition features across the EU, building on an approach that it trialled in America. Facebook users will see a splash screen that informs them that they are reviewing “an option for turning on face recognition”. They can then click “accept and continue”, giving Facebook consent to use their face templates, or click “manage data settings”, then “continue”, then check a box marked “don’t allow Facebook to recognise me in photos and videos” to opt out.

Facebook believes this model, which the company describes as providing explicit consent, will allow it to avoid the concerns that were previously raised by the Irish data protection commissioner.





I was one of the first people on Facebook. I shouldn’t have trusted Mark Zuckerberg


Fourteen years, two months, and eight days ago, I made a mistake. Like a lot of mistakes made at the age of 20 inside a college dorm room, it involved trusting a man I shouldn’t have, and it still affects me to this day.

No, Mark Zuckerberg didn’t give me herpes. But in the wake of the Cambridge Analytica revelations, I have been thinking back to my decision to sign up for thefacebook.com on the site’s fifth day in existence, and I am struck by the parallels between Zuckerberg’s creation and a pesky (if generally benign) virus. Facebook isn’t going to kill me, but it has wormed its way into all of my relationships, caused me to infect other people, and I will never, ever be fully rid of it.

Last week, Zuckerberg was called to answer for himself. Over the course of two days of questioning before Congress, Zuckerberg sought to assure the public that we, not he, are in “complete control” of our relationships with Facebook. He repeated this guarantee dozens of times, returning again and again to the idea that users can control their Facebook data.

But the Zuckerberg of 2018 sounds suspiciously like the “Mark E Zuckerberg ’06” who was interviewed by the Harvard Crimson on 9 February 2004 about his brand new website. It was this article that prompted my roommates and me to start entrusting a stranger behind a computer screen with the keys to our identities: names, birthdates, photographs, email addresses, and more.



Here’s what the original website for ‘The Facebook’ used to look like back in 2004. Photograph: web.archive.org

“There are pretty intensive privacy options,” he told the paper. “People have very good control over who can see their information.”

Fourteen years later, his message was much the same. “On Facebook, everything that you share there you have control over,” he told Senator Dean Heller just moments after failing to give a straight answer on whether Facebook has ever collected the contents of its users’ phone calls. “You can say I don’t want this information to be there. You have full access to understand all, every piece of information that Facebook might know about you, and you can get rid of all of it.”

Zuckerberg was lying then and he’s lying now. We do not have “complete control” and we never have, as evidenced by the fact that even people who never signed up for Facebook have “shadow profiles” created without their consent. He has been getting away with this same spin for 14 years, two months, and eight days. Watching him dissemble in front of Congress, I couldn’t help but see him as one of those fresh-faced boys at Harvard who transitioned seamlessly from their New England prep schools to the Ivy League, and excelled at maintaining steady eye contact with the professor while they opined about books they hadn’t read.

I can still remember our excitement and curiosity for the new website that promised to enhance and replace the physical facebooks that Harvard passed out to first-year students. Those thin, hardcover volumes were a frequent source of useful information and prurient entertainment. We used to pore over the book, trying to figure out the name of this guy from class, or that girl from Saturday night, judging the looks of other students and generally indulging in a kind of pre-cyber cyber-stalking: it was a way to learn things about other people without having to ask them directly.

Zuckerberg’s website broke the facebook out of its bindings. During those first weeks and months, we bore witness to Facebook’s power to reorient social interactions. With Facebook, you were friends, or not friends; in a relationship, single, or “it’s complicated”; popularity was easily quantifiable; those who chose not to sign up for Facebook were defining themselves as abstainers, whether they wanted to or not. All of the beautiful and painful mess of human interactions was reducible to a data point in the social graph.
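
That reduction is literal: in graph terms each person becomes a node, each relationship an edge or an enumerated label, and “popularity” a simple count. A toy model, with field names that are illustrative rather than Facebook’s schema:

# Toy model of the "social graph" reduction described above; not a real schema.
from dataclasses import dataclass, field
from enum import Enum

class RelationshipStatus(Enum):
    SINGLE = "single"
    IN_A_RELATIONSHIP = "in a relationship"
    ITS_COMPLICATED = "it's complicated"

@dataclass
class Person:
    name: str
    status: RelationshipStatus
    friends: set[str] = field(default_factory=set)  # friends, or not friends

    @property
    def popularity(self) -> int:
        # "Popularity was easily quantifiable."
        return len(self.friends)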

We embraced this recalibration of social relations without thinking about who or what was behind it. Judging strangers based on their facebook photo transitioned seamlessly into judging people based on their Facebook profile and Facebook habits. It’s embarrassing now to remember my own decision, born of a hefty sense of my own too-coolness, that I would only ever respond to other people’s friend requests, and never send any myself, as if this were a meaningful form of self-definition.

I’d like to think that I spared a thought for the motivations of the man behind the computer screen, but I’m sure I didn’t. Even if I had thought to assign a word, let alone a value, to the idea that I should maintain control over the pieces of information by which others would come to know and judge me – “privacy”, I think we call this – I probably would have been taken in by Zuckerberg’s assurances in that first Crimson article that his website was perfectly safe.


Mark Zuckerberg in 2010. ‘The truth is that Facebook’s great value has come from making the rest of us lose control.’ Photograph: Justin Sullivan/Getty Images

The truth is that Facebook’s great value has come from making the rest of us lose control. Yes, we can decide what photos and status updates and biographical details we plug into Facebook’s gaping maw. But the most valuable insights have been gleaned from the things we didn’t even realize we were giving away.

Facebook knows what I read on the internet, where I want to go on vacation, how late I stay up at night, whose posts I scroll quickly by, and whose posts I pause to linger over. It knows that I took reporting trips to Montana and Seattle and San Diego, despite the fact that I have never allowed it to track me by GPS. It knows my father’s cell phone number, despite the fact that he has never signed up for its service, because I was stupid enough to share my contacts with it once, several years ago.

It knows all of these things that are, in my opinion, none of its goddamn business.
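
The contact-sharing example is the mechanism behind those “shadow profiles”: a single user’s address-book upload is enough to attach a non-user’s name and number to a record that person never created. An illustrative sketch, with hypothetical names and structure:

# Illustrative only: how one contact upload seeds data on a non-user.
from collections import defaultdict

# phone number -> facts accumulated about whoever owns it,
# whether or not that person ever signed up
shadow_profiles: defaultdict[str, set[str]] = defaultdict(set)

def ingest_contacts(uploader: str, contacts: dict[str, str]) -> None:
    """Record every name/number pair from one user's address book."""
    for name, phone in contacts.items():
        shadow_profiles[phone].add(f"name:{name}")
        shadow_profiles[phone].add(f"knows:{uploader}")

# A single upload, years ago, is all it takes.
ingest_contacts("author", {"Dad": "+1-555-0100"})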

If I’ve learned one thing from Mark Zuckerberg it’s that the most valuable knowledge about another person comes from learning things about them that they wouldn’t tell you themselves.

So here’s what I know about Mark Zuckerberg. During those first few weeks of Facebook’s existence, while he was assuring his fellow college students that we could trust him with our identities, he had a private conversation on instant messenger with a friend. That conversation was subsequently leaked and published by Silicon Alley Insider. It is as follows:

ZUCK: yea so if you ever need info about anyone at harvard

ZUCK: just ask

ZUCK: i have over 4000 emails, pictures, addresses, sns

FRIEND: what!? how’d you manage that one?

ZUCK: people just submitted it

ZUCK: i don’t know why

ZUCK: they “trust me”

ZUCK: dumb fucks

In the intervening years, I’ve learned that Zuckerberg values his own privacy so much that he has security guards watching his trash, that he bought four houses surrounding his own house to avoid having neighbors, that he sued hundreds of Hawaiians to sever their claim to tiny plots of land within his massive Kauai estate, and that he secretly built tools to prevent further private messages from coming back to haunt him.

What I haven’t learned, or seen any sign of, is that he has changed his opinion of the intelligence of his users. It’s Zuckerberg’s world, and we’re all just a bunch of dumb fucks living in it.


Facebook paid $7.3m for Mark Zuckerberg’s security last year | Technology



Facebook increased its spending on security for Mark Zuckerberg by 50% last year, the company has disclosed, paying more than $7.3m (£5.1m) to protect its top executive.

The security funds were required “due to specific threats to his safety arising directly as a result of his position as our founder, chairman, and CEO,” the under-fire social media company said in a new filing to US regulators.

Facebook also said it spent more than $1.5m (£1m) on Zuckerberg’s travel aboard private jets in 2017, meaning that in all the company has spent about $20m on security and private planes for Zuckerberg since 2015.

The company pays for security measures at Zuckerberg’s home in California, according to the filing, as well as for the bodyguards who accompany the 33-year-old on his travels outside Facebook’s headquarters.

“We require these security measures for the company’s benefit because of the importance of Mr Zuckerberg to Facebook, and we believe that the costs of this overall security program are appropriate and necessary,” the company said.

Zuckerberg has come under sharp criticism for his company’s role in the spread of disinformation and political propaganda online. Last week he faced 10 hours of questions from US congressional committees but escaped largely unscathed.

The hearings were prompted by the disclosure by the Observer that tens of millions of Facebook users had their data harvested by a researcher working with Cambridge Analytica, a political consultancy hired by Donald Trump’s 2016 presidential campaign.

The filing to the Securities and Exchange Commission (SEC) said that Zuckerberg continued to take a token $1 (70p) annual salary for his work at Facebook, which he founded as a student at Harvard University.

Zuckerberg’s fortune, which fluctuates with Facebook’s stock price, has recently totalled about $70bn (£49bn). He consistently appears in lists of the world’s top ten richest people.

About $2.7m (£1.9m) was spent in 2017 on security for Sheryl Sandberg, Facebook’s chief operating officer, according to the company. That total was only slightly higher than Sandberg’s security bill in 2016.

Sandberg, who has assisted Zuckerberg on an apology tour since news broke of the exploitation of users’ data, received a total pay package last year of more than $22.5m (£15.8m) in salary, bonuses and company stock.

Facebook said in the filing that the spending totals cited for Zuckerberg’s use of private planes “include passenger fees, fuel, crew, and catering costs”.

Zuckerberg raised alarms last week by stating during the congressional hearings that Facebook collects “data of people who have not signed up for Facebook,” apparently for security reasons.

No further information was offered on this disclosure, prompting calls from privacy campaigners for Facebook to explain how it gathers data on non-users and what information is retained by the company.

Zuckerberg also warned last week of an online propaganda “arms race” with Russia and said that fighting interference in elections around the world was now his top priority as Facebook’s boss.

Earlier this month the company said it had discovered and removed 273 accounts and pages that were being run on its platforms by the so-called Internet Research Agency (IRA), a notorious Russian government “troll farm”.

“The IRA has repeatedly used complex networks of inauthentic accounts to deceive and manipulate people who use Facebook, including before, during and after the 2016 US presidential elections,” Alex Stamos, Facebook’s chief security officer, said in a blog post announcing the action.

Prosecutors working for Robert Mueller, the special counsel, have charged Russian operatives with using social media platforms including Facebook to pose as American activists and sow political discord during the 2016 campaign.


Five questions Mark Zuckerberg should be asked by Congress | Technology



1) You’re abusing your power as a global monopoly. Why shouldn’t we break you up?

Zuckerberg made the rookie error of leaving his notes out, which an AP reporter promptly snapped. One section said: “Break up FB? US tech companies key asset for America; break up strengthens Chinese companies.” Really? That’s the best you’ve got? The senators need to drive this one home hard.

2) You keep talking about Aleksandr Kogan, who harvested data for Cambridge Analytica. What about his business partner Joseph Chancellor, a Facebook employee?

You haven’t mentioned Chancellor. Why? He went to work for you before the story first broke in December 2015. Does his involvement not concern you? Who knew what and when?



3) Why won’t you come to Britain, and why won’t you answer questions put to you by the UK parliament?

Please ask this, America. We’ve helped you get him before Congress. Now we need your help getting answers on what happened during the EU referendum. On what possible grounds can you justify refusing an official request to appear before MPs? (Also, why won’t you entertain interview requests from the Guardian and Observer?)

4) Why didn’t you suspend Cambridge Analytica from Facebook when you first found out it had misappropriated data in December 2015?

Last week you said it was because Cambridge Analytica didn’t use Facebook at the time, which is not true. On Tuesday you repeated it. Then after the break, you said you “mis-spoke”. So, answer the question: why didn’t you suspend the company? Why didn’t you inform users their data had been taken without their consent? Why did you – even after you knew what Cambridge Analytica had done with your data – embed your employees with the company during the Trump campaign?

5) Does Facebook track users around the web when they are not logged in? Does it track people between devices?

On Tuesday you said you didn’t know. Are you sure? Have you had a chance to catch up on your reading?
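
For readers wondering how logged-out tracking is even possible: any page that embeds a platform’s widget or invisible “pixel” makes the visitor’s browser request a resource from the platform’s servers, sending along any cookie it holds for that domain plus the address of the page being read. A generic server-side sketch of the idea, with a hypothetical endpoint that is not Facebook’s code:

# Generic sketch of third-party tracking via an embedded pixel; hypothetical.
import http.server

class TrackingPixel(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        # The browser volunteers these headers on every embed request,
        # logged in or not: a cookie for this domain (if one exists)
        # and the URL of the page that embedded the pixel.
        cookie = self.headers.get("Cookie", "no-cookie")
        referer = self.headers.get("Referer", "unknown-page")
        print(f"visitor {cookie} was reading {referer}")
        self.send_response(200)
        self.send_header("Content-Type", "image/gif")
        self.end_headers()

# To observe it locally:
# http.server.HTTPServer(("", 8000), TrackingPixel).serve_forever()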
