
Silicon Valley


Google’s robot assistant now makes eerily lifelike phone calls for you | Technology



Google’s virtual assistant can now make phone calls on your behalf to schedule appointments, book restaurant tables and check holiday opening hours.

The robotic assistant speaks with a strikingly natural pattern, complete with hesitations and affirmations such as “er” and “mmm-hmm”, making it extremely difficult to distinguish from a human caller.

The unsettling feature, which will be available to the public later this year, is enabled by a technology called Google Duplex, which can carry out “real world” tasks on the phone, without the other person realising they are talking to a machine. The assistant refers to the person’s calendar to find a suitable time slot and then notifies the user when an appointment is scheduled.
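
Google has not published how Duplex works internally, but the calendar step described here – scanning the user’s schedule for an open slot – is easy to picture. A minimal sketch, with invented data structures purely for illustration:

```python
# Illustrative only: Duplex's real scheduling logic is not public.
from datetime import datetime, timedelta

def first_free_slot(busy, day_start, day_end, duration=timedelta(hours=1)):
    """Return the start of the first gap of `duration` between busy blocks."""
    cursor = day_start
    for start, end in sorted(busy):
        if cursor + duration <= start:  # a big enough gap before this appointment
            return cursor
        cursor = max(cursor, end)       # otherwise skip past the busy block
    return cursor if cursor + duration <= day_end else None

# One existing appointment from noon to 1pm on 3 May
busy = [(datetime(2018, 5, 3, 12), datetime(2018, 5, 3, 13))]
print(first_free_slot(busy, datetime(2018, 5, 3, 10), datetime(2018, 5, 3, 18)))
# 2018-05-03 10:00:00 – an opening like the 10am-12pm window offered in the demo
```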

Google’s CEO, Sundar Pichai, demonstrated the capability on stage at the Shoreline Amphitheater during the company’s annual developer conference, I/O. He played a recording of Google Assistant calling and interacting with someone at a hair salon to make an appointment.

A phone call to a hairstylist.

When the salon picks up the phone, a female computer-generated voice says she’s calling to arrange a haircut for “a client” on 3 May. The salon employee says “give me one second”, to which the robot replies “mmm-hmm” – a response that triggered a wave of laughter in the 7,000-strong audience.

“What time are you looking for?” the salon employee asks. “At 12pm,” replies the robot. The salon doesn’t have an opening then, so the robot suggests a window between 10am and 12pm, before confirming the booking and notifying its human master.

Pichai showed a second demo, one of “many examples where the call doesn’t go as expected”, in which a male-sounding virtual assistant tries to reserve a table at a restaurant but is told that he doesn’t need a booking if there are only four people in his party. The robot appears to navigate the confusing conversation with ease.

The virtual assistant calls a restaurant.

From the onstage demonstrations, it seemed like a significant upgrade from the automated phone systems most people have interacted with. The natural interaction is enabled by advancements in automatic speech recognition, text-to-speech synthesis and an understanding of how humans pace their conversations.

Pichai said the tool was useful for the 60% of small businesses in the US that don’t already have an online booking system. “We think AI can help,” he said.

The Duplex system can also call a company to ask about hours of operation during a holiday, and then make that information available online through Google, reducing the volume of similar calls a business might receive.

“Businesses can operate as they always have. There’s no learning curve or changes to make to benefit from this technology,” principal engineer Yaniv Leviathan and vice-president of engineering Yossi Matias wrote in a blogpost about the technology.

During the demonstrations, the virtual assistant did not identify itself and instead appeared to deceive the human on the other end of the line. However, in the blogpost, the company indicated that might change.

“It’s important to us that users and businesses have a good experience with this service, and transparency is a key part of that. We want to be clear about the intent of the call so businesses understand the context. We’ll be experimenting with the right approach over the coming months.”




Twitter urges all users to change their password after bug discovered | Technology



Twitter has urged its 336 million users to change their passwords after the company discovered a bug that stored passwords in plain text in an internal system.

The company said it had fixed the problem and had seen “no indication of breach or misuse”, but it suggested users consider changing their password on Twitter and on all services where they have used the same password “as a precaution”.

“We are very sorry this happened,” said Twitter’s chief technology officer, Parag Agrawal, in a blogpost. “We recognise and appreciate the trust you place in us, and are committed to earning that trust every day.”

Twitter Support (@TwitterSupport) tweeted on 3 May 2018: “We recently found a bug that stored passwords unmasked in an internal log. We fixed the bug and have no indication of a breach or misuse by anyone. As a precaution, consider changing your password on all services where you’ve used this password. https://t.co/RyEDvQOTaZ”

Companies with good security practices typically store user passwords in a form that cannot be read back. In Twitter’s case, passwords are masked through a process called hashing, which runs the actual password through a one-way function and stores only the resulting scrambled string of numbers and letters in the company’s system.

“This allows our systems to validate your account credentials without revealing your password,” said Agrawal. “This is an industry standard.”

“Due to a bug, passwords were written to an internal log before completing the hashing process. We found this error ourselves, removed the passwords, and are implementing plans to prevent this bug from happening again.”
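
The article does not detail Twitter’s scheme beyond calling it an industry standard, but the flow Agrawal describes – store only a salted hash, never the password, and compare hashes at login – looks roughly like this sketch using the common bcrypt library (not Twitter’s actual code):

```python
# A minimal sketch of salted password hashing; not Twitter's actual code.
import bcrypt  # pip install bcrypt

password = b"correct horse battery staple"

# At signup: hash with a fresh random salt, and store only the hash.
hashed = bcrypt.hashpw(password, bcrypt.gensalt())

# At login: re-hash the submitted password and compare against the stored hash,
# validating credentials without ever reading the original password back.
print(bcrypt.checkpw(password, hashed))        # True
print(bcrypt.checkpw(b"wrong guess", hashed))  # False
```

The bug Agrawal describes sat one step earlier in this flow: passwords were written to a log before the hashing step ran.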

Agrawal advises people to change their passwords, enable two-factor authentication on their Twitter account and use a password manager to create strong, unique passwords on every service they use.
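
The last piece of that advice is easy to approximate in code: a password manager’s core job is generating a strong, unique secret per service. A sketch using Python’s standard secrets module (illustrative, not any particular product):

```python
# Generate a strong, unique password per service; illustrative sketch only.
import secrets
import string

def generate_password(length=20):
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

# One password per service, so a leak at one site exposes nothing else.
for service in ("twitter", "email", "bank"):
    print(service, generate_password())
```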






Why Silicon Valley can’t fix itself | News



Big Tech is sorry. After decades of rarely apologising for anything, Silicon Valley suddenly seems to be apologising for everything. They are sorry about the trolls. They are sorry about the bots. They are sorry about the fake news and the Russians, and the cartoons that are terrifying your kids on YouTube. But they are especially sorry about our brains.

Sean Parker, the former president of Facebook – who was played by Justin Timberlake in The Social Network – has publicly lamented the “unintended consequences” of the platform he helped create: “God only knows what it’s doing to our children’s brains.” Justin Rosenstein, an engineer who helped build Facebook’s “like” button and Gchat, regrets having contributed to technology that he now considers psychologically damaging, too. “Everyone is distracted,” Rosenstein says. “All of the time.”

Ever since the internet became widely used by the public in the 1990s, users have heard warnings that it is bad for us. In the early years, many commentators described cyberspace as a parallel universe that could swallow enthusiasts whole. The media fretted about kids talking to strangers and finding porn. A prominent 1998 study from Carnegie Mellon University claimed that spending time online made you lonely, depressed and antisocial.

In the mid-2000s, as the internet moved on to mobile devices, physical and virtual life began to merge. Bullish pundits celebrated the “cognitive surplus” unlocked by crowdsourcing and the tech-savvy campaigns of Barack Obama, the “internet president”. But, alongside these optimistic voices, darker warnings persisted. Nicholas Carr’s The Shallows (2010) argued that search engines were making people stupid, while Eli Pariser’s The Filter Bubble (2011) claimed algorithms made us insular by showing us only what we wanted to see. In Alone Together (2011) and Reclaiming Conversation (2015), Sherry Turkle warned that constant connectivity was making meaningful interaction impossible.

Still, inside the industry, techno-utopianism prevailed. Silicon Valley seemed to assume that the tools they were building were always forces for good – and that anyone who questioned them was a crank or a luddite. In the face of an anti-tech backlash that has surged since the 2016 election, however, this faith appears to be faltering. Prominent people in the industry are beginning to acknowledge that their products may have harmful effects.

Internet anxiety isn’t new. But never before have so many notable figures within the industry seemed so anxious about the world they have made. Parker, Rosenstein and the other insiders now talking about the harms of smartphones and social media belong to an informal yet influential current of tech critics emerging within Silicon Valley. You could call them the “tech humanists”. Amid rising public concern about the power of the industry, they argue that the primary problem with its products is that they threaten our health and our humanity.

It is clear that these products are designed to be maximally addictive, in order to harvest as much of our attention as they can. Tech humanists say this business model is both unhealthy and inhumane – that it damages our psychological well-being and conditions us to behave in ways that diminish our humanity. The main solution that they propose is better design. By redesigning technology to be less addictive and less manipulative, they believe we can make it healthier – we can realign technology with our humanity and build products that don’t “hijack” our minds.

The hub of the new tech humanism is the Center for Humane Technology in San Francisco. Founded earlier this year, the nonprofit has assembled an impressive roster of advisers, including investor Roger McNamee, Lyft president John Zimmer, and Rosenstein. But its most prominent spokesman is executive director Tristan Harris, a former “design ethicist” at Google who has been hailed by the Atlantic magazine as “the closest thing Silicon Valley has to a conscience”. Harris has spent years trying to persuade the industry of the dangers of tech addiction. In February, Pierre Omidyar, the billionaire founder of eBay, launched a related initiative: the Tech and Society Solutions Lab, which aims to “maximise the tech industry’s contributions to a healthy society”.

As suspicion of Silicon Valley grows, the tech humanists are making a bid to become tech’s loyal opposition. They are using their insider credentials to promote a particular diagnosis of where tech went wrong and of how to get it back on track. For this, they have been getting a lot of attention. As the backlash against tech has grown, so too has the appeal of techies repenting for their sins. The Center for Humane Technology has been profiled – and praised – by the New York Times, the Atlantic, Wired and others.

But tech humanism’s influence cannot be measured solely by the positive media coverage it has received. The real reason tech humanism matters is because some of the most powerful people in the industry are starting to speak its idiom. Snap CEO Evan Spiegel has warned about social media’s role in encouraging “mindless scrambles for friends or unworthy distractions”, and Twitter boss Jack Dorsey recently claimed he wants to improve the platform’s “conversational health”.



Tristan Harris, founder of the Center for Humane Technology. Photograph: Robert Gumpert for the Guardian

Even Mark Zuckerberg, famous for encouraging his engineers to “move fast and break things”, seems to be taking a tech humanist turn. In January, he announced that Facebook had a new priority: maximising “time well spent” on the platform, rather than total time spent. By “time well spent”, Zuckerberg means time spent interacting with “friends” rather than businesses, brands or media sources. He said the News Feed algorithm was already prioritising these “more meaningful” activities.

Zuckerberg’s choice of words is significant: Time Well Spent is the name of the advocacy group that Harris led before co-founding the Center for Humane Technology. In April, Zuckerberg brought the phrase to Capitol Hill. When a photographer snapped a picture of the notes Zuckerberg used while testifying before the Senate, they included a discussion of Facebook’s new emphasis on “time well spent”, under the heading “wellbeing”.

This new concern for “wellbeing” may strike some observers as a welcome development. After years of ignoring their critics, industry leaders are finally acknowledging that problems exist. Tech humanists deserve credit for drawing attention to one of those problems – the manipulative design decisions made by Silicon Valley.

But these decisions are only symptoms of a larger issue: the fact that the digital infrastructures that increasingly shape our personal, social and civic lives are owned and controlled by a few billionaires. Because it ignores the question of power, the tech-humanist diagnosis is incomplete – and could even help the industry evade meaningful reform. Taken up by leaders such as Zuckerberg, tech humanism is likely to result in only superficial changes. These changes may soothe some of the popular anger directed towards the tech industry, but they will not address the origin of that anger. If anything, they will make Silicon Valley even more powerful.


The Center for Humane Technology argues that technology must be “aligned” with humanity – and that the best way to accomplish this is through better design. Their website features a section entitled The Way Forward. A familiar evolutionary image shows the silhouettes of several simians, rising from their crouches to become a man, who then turns back to contemplate his history.

“In the future, we will look back at today as a turning point towards humane design,” the header reads. To the litany of problems caused by “technology that extracts attention and erodes society”, the text offers a remedy: “humane design is the solution”. Drawing on the rhetoric of the “design thinking” philosophy that has long suffused Silicon Valley, the website explains that humane design “starts by understanding our most vulnerable human instincts so we can design compassionately”.

There is a good reason why the language of tech humanism is penetrating the upper echelons of the tech industry so easily: this language is not foreign to Silicon Valley. On the contrary, “humanising” technology has long been its central ambition and the source of its power. It was precisely by developing a “humanised” form of computing that entrepreneurs such as Steve Jobs brought computing into millions of users’ everyday lives. Their success turned the Bay Area tech industry into a global powerhouse – and produced the digitised world that today’s tech humanists now lament.

The story begins in the 1960s, when Silicon Valley was still a handful of electronics firms clustered among fruit orchards. Computers came in the form of mainframes then. These machines were big, expensive and difficult to use. Only corporations, universities and government agencies could afford them, and they were reserved for specialised tasks, such as calculating missile trajectories or credit scores.

Computing was industrial, in other words, not personal, and Silicon Valley remained dependent on a small number of big institutional clients. The practical danger that this dependency posed became clear in the early 1960s, when the US Department of Defense, by far the single biggest buyer of digital components, began cutting back on its purchases. But the fall in military procurement wasn’t the only mid-century crisis around computing.

Computers also had an image problem. The inaccessibility of mainframes made them easy to demonise. In these whirring hulks of digital machinery, many observers saw something inhuman, even evil. To antiwar activists, computers were weapons of the war machine that was killing thousands in Vietnam. To highbrow commentators such as the social critic Lewis Mumford, computers were instruments of a creeping technocracy that threatened to extinguish personal freedom.

But during the course of the 1960s and 70s, a series of experiments in northern California helped solve both problems. These experiments yielded breakthrough innovations like the graphical user interface, the mouse and the microprocessor. Computers became smaller, more usable and more interactive, reducing Silicon Valley’s reliance on a few large customers while giving digital technology a friendlier face.




Apple founder Steve Jobs ‘got the notion of tools for human use’. Photograph: Ted Thai/Polaris / eyevine

The pioneers who led this transformation believed they were making computing more human. They drew deeply from the counterculture of the period, and its fixation on developing “human” modes of living. They wanted their machines to be “extensions of man”, in the words of Marshall McLuhan, and to unlock “human potential” rather than repress it. At the centre of this ecosystem of hobbyists, hackers, hippies and professional engineers was Stewart Brand, famed entrepreneur of the counterculture and founder of the Whole Earth Catalog. In a famous 1972 article for Rolling Stone, Brand called for a new model of computing that “served human interest, not machine”.

Brand’s disciples answered this call by developing the technical innovations that transformed computers into the form we recognise today. They also promoted a new way of thinking about computers – not as impersonal slabs of machinery, but as tools for unleashing “human potential”.

No single figure contributed more to this transformation of computing than Steve Jobs, who was a fan of Brand and a reader of the Whole Earth Catalog. Jobs fulfilled Brand’s vision on a global scale, launching the mass personal computing era with the Macintosh in the mid-80s, and the mass smartphone era with the iPhone two decades later. Brand later acknowledged that Jobs embodied the Whole Earth Catalog ethos. “He got the notion of tools for human use,” Brand told Jobs’ biographer, Walter Isaacson.

Building those “tools for human use” turned out to be great for business. The impulse to humanise computing enabled Silicon Valley to enter every crevice of our lives. From phones to tablets to laptops, we are surrounded by devices that have fulfilled the demands of the counterculture for digital connectivity, interactivity and self-expression. Your iPhone responds to the slightest touch; you can look at photos of anyone you have ever known, and broadcast anything you want to all of them, at any moment.

In short, the effort to humanise computing produced the very situation that the tech humanists now consider dehumanising: a wilderness of screens where digital devices chase every last instant of our attention. To guide us out of that wilderness, tech humanists say we need more humanising. They believe we can use better design to make technology serve human nature rather than exploit and corrupt it. But this idea is drawn from the same tradition that created the world that tech humanists believe is distracting and damaging us.


Tech humanists say they want to align humanity and technology. But this project is based on a deep misunderstanding of the relationship between humanity and technology: namely, the fantasy that these two entities could ever exist in separation.

It is difficult to imagine human beings without technology. The story of our species began when we began to make tools. Homo habilis, the first members of our genus, left sharpened stones scattered across Africa. Their successors hit rocks against each other to make sparks, and thus fire. With fire you could cook meat and clear land for planting; with ash you could fertilise the soil; with smoke you could make signals. In flickering light, our ancestors painted animals on cave walls. The ancient tragedian Aeschylus recalled this era mythically: Prometheus, in stealing fire from the gods, “founded all the arts of men.”

All of which is to say: humanity and technology are not only entangled, they constantly change together. This is not just a metaphor. Recent research suggests that the human hand evolved to manipulate the stone tools that our ancestors used. The evolutionary scientist Mary Marzke shows that we developed “a unique pattern of muscle architecture and joint surface form and functions” for this purpose.

The ways our bodies and brains change in conjunction with the tools we make have long inspired anxieties that “we” are losing some essential qualities. For millennia, people have feared that new media were eroding the very powers that they promised to extend. In the Phaedrus, Socrates warned that writing would make people forgetful. If you could jot something down, you wouldn’t have to remember it. In the late middle ages, as a culture of copying manuscripts gave way to printed books, teachers warned that pupils would become careless, since they no longer had to transcribe what their teachers said.

Yet as we lose certain capacities, we gain new ones. People who used to navigate the seas by following stars can now program computers to steer container ships from afar. Your grandmother probably has better handwriting than you do – but you probably type faster.

The nature of human nature is that it changes. It cannot, therefore, serve as a stable basis for evaluating the impact of technology. Yet the assumption that it doesn’t change serves a useful purpose. Treating human nature as something static, pure and essential elevates the speaker into a position of power. Claiming to tell us who we are, they tell us how we should be.

Intentionally or not, this is what tech humanists are doing when they talk about technology as threatening human nature – as if human nature had stayed the same from the paleolithic era until the rollout of the iPhone. Holding humanity and technology separate clears the way for a small group of humans to determine the proper alignment between them. And while the tech humanists may believe they are acting in the common good, they themselves acknowledge they are doing so from above, as elites. “We have a moral responsibility to steer people’s thoughts ethically,” Tristan Harris has declared.

Harris and his fellow tech humanists also frequently invoke the language of public health. The Center for Humane Technology’s Roger McNamee has gone so far as to call public health “the root of the whole thing”, and Harris has compared using Snapchat to smoking cigarettes. The public-health framing casts the tech humanists in a paternalistic role. Resolving a public health crisis requires public health expertise. It also precludes the possibility of democratic debate. You don’t put the question of how to treat a disease up for a vote – you call a doctor.

This paternalism produces a central irony of tech humanism: the language that they use to describe users is often dehumanising. “Facebook appeals to your lizard brain – primarily fear and anger,” says McNamee. Harris echoes this sentiment: “Imagine you had an input cable,” he has said. “You’re trying to jack it into a human being. Do you want to jack it into their reptilian brain, or do you want to jack it into their more reflective self?”

The Center for Humane Technology’s website offers tips on how to build a more reflective and less reptilian relationship to your smartphone: “going greyscale” by setting your screen to black-and-white, turning off app notifications and charging your device outside your bedroom. It has also announced two major initiatives: a national campaign to raise awareness about technology’s harmful effects on young people’s “digital health and well-being”; and a “Ledger of Harms” – a website that will compile information about the health effects of different technologies in order to guide engineers in building “healthier” products.

These initiatives may help some people reduce their smartphone use – a reasonable personal goal. But there are some humans who may not share this goal, and there need not be anything unhealthy about that. Many people rely on the internet for solace and solidarity, especially those who feel marginalised. The kid with autism may stare at his screen when surrounded by people, because it lets him tolerate being surrounded by people. For him, constant use of technology may not be destructive at all, but in fact life-saving.

Pathologising certain potentially beneficial behaviours as “sick” isn’t the only problem with the Center for Humane Technology’s proposals. They also remain confined to the personal level, aiming to redesign how the individual user interacts with technology rather than tackling the industry’s structural failures. Tech humanism fails to address the root cause of the tech backlash: the fact that a small handful of corporations own our digital lives and strip-mine them for profit. This is a fundamentally political and collective issue. But by framing the problem in terms of health and humanity, and the solution in terms of design, the tech humanists personalise and depoliticise it.

This may be why their approach is so appealing to the tech industry. There is no reason to doubt the good intentions of tech humanists, who may genuinely want to address the problems fuelling the tech backlash. But they are handing the firms that caused those problems a valuable weapon. Far from challenging Silicon Valley, tech humanism offers Silicon Valley a useful way to pacify public concerns without surrendering any of its enormous wealth and power. By channelling popular anger at Big Tech into concerns about health and humanity, tech humanism gives corporate giants such as Facebook a way to avoid real democratic control. In a moment of danger, it may even help them protect their profits.


One can easily imagine a version of Facebook that embraces the principles of tech humanism while remaining a profitable and powerful monopoly. In fact, these principles could make Facebook even more profitable and powerful, by opening up new business opportunities. That seems to be exactly what Facebook has planned.

Zuckerberg’s announcement that Facebook would prioritise “time well spent” over total time spent came a couple of weeks before the company released its Q4 2017 earnings. These reported that total time spent on the platform had dropped by around 5%, or about 50m hours per day – implying a total of roughly a billion hours of daily use. But, Zuckerberg said, this was by design: in particular, it was in response to tweaks to the News Feed that prioritised “meaningful” interactions with “friends” rather than consuming “public content” like video and news. This would ensure that “Facebook isn’t just fun, but also good for people’s well-being”.

Zuckerberg said he expected those changes would continue to decrease total time spent – but “the time you do spend on Facebook will be more valuable”. This may describe what users find valuable – but it also refers to what Facebook finds valuable. In a recent interview, he said: “Over the long term, even if time spent goes down, if people are spending more time on Facebook actually building relationships with people they care about, then that’s going to build a stronger community and build a stronger business, regardless of what Wall Street thinks about it in the near term.”

Sheryl Sandberg has also stressed that the shift will create “more monetisation opportunities”. How? Everyone knows data is the lifeblood of Facebook – but not all data is created equal. One of the most valuable sources of data to Facebook is used to inform a metric called “coefficient”. This measures the strength of a connection between two users – Zuckerberg once called it “an index for each relationship”. Facebook records every interaction you have with another user – from liking a friend’s post or viewing their profile, to sending them a message. These activities provide Facebook with a sense of how close you are to another person, and different activities are weighted differently. Messaging, for instance, is considered the strongest signal. It’s reasonable to assume that you’re closer to somebody you exchange messages with than somebody whose post you once liked.
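
Facebook’s actual model is not public, but a coefficient-style score is easy to sketch: sum a user’s interactions with a friend, weighted by type, with messaging weighted most heavily. The weights and interaction names below are invented purely for illustration:

```python
# Toy "coefficient" score; Facebook's real weights and features are not public.
WEIGHTS = {"message": 5.0, "comment": 3.0, "like": 1.0, "profile_view": 0.5}

def coefficient(interactions):
    """interactions: list of (interaction_type, count) pairs for one friend."""
    return sum(WEIGHTS.get(kind, 0.0) * count for kind, count in interactions)

print(coefficient([("message", 12), ("like", 3)]))      # 63.0 - a close friend
print(coefficient([("like", 1), ("profile_view", 2)]))  # 2.0  - a distant contact
```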

Why is coefficient so valuable? Because Facebook uses it to create a Facebook they think you will like: it guides algorithmic decisions about what content you see and the order in which you see it. It also helps improve ad targeting, by showing you ads for things liked by friends with whom you often interact. Advertisers can target the closest friends of the users who already like a product, on the assumption that close friends tend to like the same things.




Facebook CEO Mark Zuckerberg testifies before the US Senate last month. Photograph: Jim Watson/AFP/Getty Images

So when Zuckerberg talks about wanting to increase “meaningful” interactions and building relationships, he is not succumbing to pressure to take better care of his users. Rather, emphasising time well spent means creating a Facebook that prioritises data-rich personal interactions that Facebook can use to make a more engaging platform. Rather than spending a lot of time doing things that Facebook doesn’t find valuable – such as watching viral videos – you can spend a bit less time, but spend it doing things that Facebook does find valuable.

In other words, “time well spent” means Facebook can monetise more efficiently. It can prioritise the intensity of data extraction over its extensiveness. This is a wise business move, disguised as a concession to critics. Shifting to this model not only sidesteps concerns about tech addiction – it also acknowledges certain basic limits to Facebook’s current growth model. There are only so many hours in the day. Facebook can’t keep prioritising total time spent – it has to extract more value from less time.

In many ways, this process recalls an earlier stage in the evolution of capitalism. In the 19th century, factory owners in England discovered they could only make so much money by extending the length of the working day. At some point, workers would die of exhaustion, or they would revolt, or they would push parliament to pass laws that limited their working hours. So industrialists had to find ways to make the time of the worker more valuable – to extract more money from each moment rather than adding more moments. They did this by making industrial production more efficient: developing new technologies and techniques that squeezed more value out of the worker and stretched that value further than ever before.

A similar situation confronts Facebook today. They have to make the attention of the user more valuable – and the language and concepts of tech humanism can help them do it. So far, it seems to be working. Despite the reported drop in total time spent, Facebook recently announced Q1 2018 revenue of $11.97bn (£8.7bn), smashing Wall Street estimates by nearly $600m.


Today’s tech humanists come from a tradition with deep roots in Silicon Valley. Like their predecessors, they believe that technology and humanity are distinct, but can be harmonised. This belief guided the generations who built the “humanised” machines that became the basis for the industry’s enormous power. Today it may provide Silicon Valley with a way to protect that power from a growing public backlash – and even deepen it by uncovering new opportunities for profit-making.

Fortunately, there is another way of thinking about how to live with technology – one that is both truer to the history of our species and useful for building a more democratic future. This tradition does not address “humanity” in the abstract, but as distinct human beings, whose capacities are shaped by the tools they use. It sees us as hybrids of animal and machine – as “cyborgs”, to quote the biologist and philosopher of science Donna Haraway.

To say that we’re all cyborgs is not to say that all technologies are good for us, or that we should embrace every new invention. But it does suggest that living well with technology can’t be a matter of making technology more “human”. This goal isn’t just impossible – it’s also dangerous, because it puts us at the mercy of experts who tell us how to be human. It cedes control of our technological future to those who believe they know what’s best for us because they understand the essential truths about our species.

The cyborg way of thinking, by contrast, tells us that our species is essentially technological. We change as we change our tools, and our tools change us. But even though our continuous co-evolution with our machines is inevitable, the way it unfolds is not. Rather, it is determined by who owns and runs those machines. It is a question of power.

Today, that power is wielded by corporations, which own our technology and run it for profit. The various scandals that have stoked the tech backlash all share a single source. Surveillance, fake news and the miserable working conditions in Amazon’s warehouses are profitable. If they were not, they would not exist. They are symptoms of a profound democratic deficit inflicted by a system that prioritises the wealth of the few over the needs and desires of the many.

There is an alternative. If being technological is a feature of being human, then the power to shape how we live with technology should be a fundamental human right. The decisions that most affect our technological lives are far too important to be left to Mark Zuckerberg, rich investors or a handful of “humane designers”. They should be made by everyone, together.

Rather than trying to humanise technology, then, we should be trying to democratise it. We should be demanding that society as a whole gets to decide how we live with technology – rather than the small group of people who have captured society’s wealth.

What does this mean in practice? First, it requires limiting and eroding Silicon Valley’s power. Antitrust laws and tax policy offer useful ways to claw back the fortunes Big Tech has built on common resources. After all, Silicon Valley wouldn’t exist without billions of dollars of public funding, not to mention the vast quantities of information that we all provide for free. Facebook’s market capitalisation is $500bn with 2.2 billion users – do the math to estimate how much the time you spend on Facebook is worth. You could apply the same logic to Google. There is no escape: whether or not you have an account, both platforms track you around the internet.
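
The “do the math” invitation above works out as follows, using only the figures in that paragraph:

```python
# The back-of-envelope valuation the paragraph invites.
market_cap = 500e9          # Facebook's market capitalisation, in dollars
users = 2.2e9               # its user count at the time
print(market_cap / users)   # ~ $227 of market value per user
```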

In addition to taxing and shrinking tech firms, democratic governments should be making rules about how those firms are allowed to behave – rules that restrict how they can collect and use our personal data, for instance, like the General Data Protection Regulation coming into effect in the European Union later this month. But more robust regulation of Silicon Valley isn’t enough. We also need to pry the ownership of our digital infrastructure away from private firms.

This means developing publicly and co-operatively owned alternatives that empower workers, users and citizens to determine how they are run. These democratic digital structures can focus on serving personal and social needs rather than piling up profits for investors. One inspiring example is municipal broadband: a successful experiment in Chattanooga, Tennessee, has shown that publicly owned internet service providers can supply better service at lower cost than private firms. Other models of digital democracy might include a worker-owned Uber, a user-owned Facebook or a socially owned “smart city” of the kind being developed in Barcelona. Alternatively, we might demand that tech firms pay for the privilege of extracting our data, so that we can collectively benefit from a resource we collectively create.

More experimentation is needed, but democracy should be our guiding principle. The stakes are high. Never before have so many people been thinking about the problems produced by the tech industry and how to solve them. The tech backlash is an enormous opportunity – and one that may not come again for a long time.

The old techno-utopianism is crumbling. What will replace it? Silicon Valley says it wants to make the world a better place. Fulfilling this promise may require a new kind of disruption.

Main illustration by Lee Martin/Guardian Design







First robot delivery drivers start work at Silicon Valley campus | Cities



If you work in an office park, or study at a campus university, robotic delivery drivers could be coming your way, following the first-ever commercial deployment of the technology.

Starship Technologies, an autonomous delivery startup created in 2014 by two Skype co-founders, has been in public testing mode in 20 countries around the world since 2015. Now the company says it is ready for its first “major commercial rollout”.

Employees of the financial software company Intuit in Mountain View, California, will be able to order breakfast, lunch and coffee from their staff cafeteria and have it delivered to any point in the company’s Silicon Valley campus by one of Starship’s 10kg six-wheeled autonomous robots.

“You place your order, it’s one click, then you drop a pin where you want the robot to meet you,” says Starship co-founder Janus Friis. “We’ve seen huge demand for breakfast. For some reason people just don’t want to wait – they want to go straight to work and avoid the queue in the early hours of the day.”

Starship is proposing campus expansion as a middle ground between its tests in urban areas, where the company’s robots have generally been accompanied by human handlers, and a full rollout across a city or suburb.

“A campus is just like a residential neighbourhood,” Friis says. “It’s pretty good for the early stages of the rollout, because they’re well laid out and well planned, so they work well for driving, and that’s why we’ve decided to launch and scale now.”



An Intuit employee gets his delivery from a Starship robot. Photograph: Gustavo Fernandez/Gustavo Fernandez Photography

Starship is now on the lookout for other campuses across western Europe and the US where it can deploy the robots.

“We’ve reached the level of scale where we can deploy this widely,” says Ahti Heinla, the company’s CEO and co-founder. “This is not just robots, there’s all sorts of infrastructure around it that comes with it, you know: the service, the tools, the housing for the robots. We deliver them in pods that integrate them into the environment. So basically there’s a whole system with corporate and academic campuses that we’re starting to deploy.”

Separately, Friis says, Starship is trialling its city deployments in a suburb of Milton Keynes – the site of most of its activity in the UK – and will be launching another trial in San Jose, California, in a few weeks.

In the little over a year since the company made its world-first delivery – of lamb and falafel from a Turkish restaurant in London – it has amassed a global fleet of 150 robots carrying out daily drop-offs in eight cities in the US, UK, Estonia and Germany.

Thus far its deliveries have been mostly of food and parcels through corporate partnerships such as Just Eat, Domino’s Pizza, Hermes and Postmates in the US, though it launched a trial “plug and play” service for small businesses in Milton Keynes earlier this month.

The new commercial rollout in Silicon Valley will bring its delivery robots into contact with more people, and the company hopes these continued interactions will quell some concerns voiced about the rise of automation.

Heinla says most of the response from the public has been positive, contrasting Starship’s offering – squat, wheeled robots rolling along the pavements and co-existing with pedestrians – with “dystopian” efforts from companies such as Amazon and Google to introduce flying delivery drones.

“It would be a rather dystopian future to imagine thousands – tens of thousands – of buzzing drones delivering things to people in an urban environment: that’s not a nice future.

“But these quiet robots that just gently navigate the streets, and cause no inconvenience to people and deliver a great new service, it is – from the reception we get from people – it is a very positive thing.”







WhatsApp CEO Jan Koum quits over privacy disagreements with Facebook | Technology



The chief executive and co-founder of WhatsApp, the Facebook-owned messaging app, is leaving the company over disagreements about privacy and encryption.

Jan Koum will also step down from Facebook’s board of directors, a role he negotiated when WhatsApp was acquired by Facebook for $19bn in 2014, according to the Washington Post.

“It’s been almost a decade since Brian [Acton] and I started WhatsApp, and it’s been an amazing journey with some of the best people. But it is time for me to move on,” wrote Koum on his Facebook profile.

“I’m taking some time off to do things I enjoy outside of technology, such as collecting rare air-cooled Porsches, working on my cars and playing ultimate frisbee. And I’ll still be cheering WhatsApp on – just from the outside.”

Koum and his co-founder Brian Acton developed WhatsApp with a focus on user privacy and a disdain for advertising. When it was bought by Facebook, he promised users that these values wouldn’t be compromised.

“You can still count on absolutely no ads interrupting your communication. There would have been no partnership between our two companies if we had to compromise on the core principles that will always define our company, our vision and our product,” he said at the time.

However, Facebook has been under pressure to make money out of the free, encrypted messaging service, which now has 1.5 billion monthly users, and has been taking steps that have chipped away at some of WhatsApp’s values.

In 2016, WhatsApp announced it would start sharing some user data, including phone numbers, with Facebook – a move that was deeply unpopular among European regulators, who ordered Facebook to stop collecting data from WhatsApp users and fined the company.

Since then WhatsApp has started building and testing free tools to help businesses use WhatsApp to reach their customers, with a view to charging businesses for them later.


Five key moments from Mark Zuckerberg’s testimony – video

In the wake of the Cambridge Analytica scandal, Facebook’s privacy practices have come under the microscope, with momentum gathering behind the #DeleteFacebook movement.

Acton, who left Facebook in September 2017, was one of the most prominent figures to declare their breakup with the social network, posting to Twitter in March: “It is time. #deletefacebook.”

The first comment to appear below Koum’s Facebook post is a carefully worded statement from Facebook’s CEO Mark Zuckerberg.

“I’m grateful for everything you’ve done to help connect the world, and for everything you’ve taught me, including about encryption and its ability to take power from centralised systems and put it back in people’s hands. Those values will always be at the heart of WhatsApp.”

A spokeswoman for WhatsApp declined to comment but pointed to Koum and Zuckerberg’s posts on Facebook.




Who ruined San Francisco? How its scooter wars sparked a blame game | US news



The cold war between San Francisco and the tech industry erupted into open hostilities again this month, when the overnight arrival of hundreds of motorized scooters across the city’s streetscape reignited tensions between the techies and the tech-nots.

The dockless electric scooters, which were distributed around San Francisco by three competing startups just as the city was preparing to pass legislation to regulate them, have become the latest symbol of competing visions for city living. To critics of the tech industry, they represent everything that is wrong with the “move fast and break things” ethos. To tech evangelists, they are further proof that it’s better to ask for forgiveness than permission.

But in a city whose streets are often littered with the symptoms of a housing affordability crisis and gaping wealth inequality – homeless encampments, human waste and discarded needles are impossible for the affluent to ignore – the scooter wars have evolved into a contentious debate over who is most to blame for making San Francisco so unlivable and who should be trusted to fix it.

“Why is it that we can have emergency action on scooters but we have needles on the sidewalk?” asked Sam Altman, president of the startup incubator Y Combinator. “The existing civic institutions that are supposed to make life better and more affordable and easier in the city have not done a good job.”

Antonio Garcia Martinez, a former Facebook product manager who has lived in San Francisco on and off for 20 years, expressed a similar sentiment on Twitter, where he described the city’s response to “streets covered in human shit, drug needles, and broken glass from an epidemic of property crime” as “meh”, while its response to “startups us[ing] private money for a citywide experiment in personal mobility” was “THIS OUTRAGE CANNOT STAND”.

When another user pointed out that the city spends hundreds of millions of dollars on services for the homeless and police, Martinez shot back: “In Silicon Valley we like to measure performance via outputs, not inputs.”

A year after tech’s favorite bad bro, Travis Kalanick, was ousted from his perch atop Uber, it’s no longer de rigueur for tech entrepreneurs to flash their wealth around or offer products designed exclusively to solve the problems plaguing the 1%. Instead, startups are making the case for their existence using the language of Bay Area liberalism.

“Not everyone can afford their own electric scooter,” Travis VanderZanden, the ex-Uber executive and current CEO of the scooter startup Bird, told the New York Times. “We shouldn’t discriminate against people that are renting versus owning.”

Altman, too, cast the scooter companies as a necessary corrective to the high cost of living in San Francisco. “We have an affordability crisis beyond anything I’d imagined, and when people see a startup that is trying to help people afford to live farther out, and it would really help people, and then they see that get taken away, I think people respond very badly,” he said.

Of course, the other side of the debate also claims the moral high ground. Advocacy groups for pedestrians, seniors and disabled people turned out in force at a city hearing to speak about the importance of keeping sidewalks clear for vulnerable populations. And anti-gentrification and affordable housing activists have long harbored animosity toward the affluent tech workers displacing lower-income renters from the city’s limited housing stock.



‘Poverty and inequity don’t get solved by an algorithm,’ says a city official. Photograph: Alamy Stock Photo

“Sometimes a thing like a scooter or a bus becomes a symbol of something that people feel, and the feeling right now in the Bay Area is that it’s becoming a playground for wealthy tech companies at the expense of the majority of us,” said the local activist Max Alper. “The problem is when we have a system that is built for the enjoyment of the few while public transportation and public good is being cut.”

Alper participated in the first Google bus blockade, in December 2013, where he impersonated a tech worker in an act of “political theater” that was captured on video. His performance of entitlement – shouting “This is a city for the right people who can afford it” – was not actually that far-fetched. In 2016, a startup entrepreneur published an open letter to the mayor that read in part: “The wealthy working people have earned their right to live in the city … I shouldn’t have to see the pain, struggle, and despair of homeless people to and from my way to work every day.”

Alper has adopted a small act of protest against the scooters, which he explained by saying: “When I was growing up my mom always told me that if there is trash in public, it’s all of our responsibility to clean it up.”

He’s not the only person to move a scooter off the sidewalk and into a garbage bin. Others have thrown unattended scooters into trees or the San Francisco Bay or vandalized them with profane stickers – “Hey dumb fuck get off the sidewalk” – and even feces.

Liz Henry, a disabled longtime San Francisco resident and senior release manager at Mozilla, expressed some frustration at “people invoking disability and poverty as reasons not to do the scooters”.

A co-founder of the first women’s hackerspace in San Francisco, Henry also was understanding of the mindset that leads tech entrepreneurs to behave in the way they do, which she described as a “mentality of approaching social problems as well as technical problems by just trying things out, seeing the possibilities, rapidly trying and discarding solutions”. Still, she acknowledged, when techies try to iterate in the real world, “it doesn’t necessarily come out the same way as if it’s software” because the negative outcomes aren’t just buggy code.

To Aaron Peskin, a city supervisor who introduced the scooter legislation, the idea that the city government is neglecting its other responsibilities to attack a startup is absurd.

“My colleagues and I can rub our bellies and pat our heads at the same time,” he said. Peskin said that, despite the current flare-up, relations between tech companies and the city have actually been better in recent months, and that the resources expended on serious social issues like homelessness and street cleaning exceed those spent on scooter regulation by “a factor of thousands to one”.

And Peskin is not particularly persuaded by the idea that the tech world can or should come up with better ideas to solve San Francisco’s many challenges.

“Poverty and inequity don’t get solved by an algorithm. They got solved by sharing,” he said.






Google to improve YouTube Kids app to let parents control what children watch | Technology



Google is updating its YouTube Kids app to improve control over the videos and channels that can be watched by children.

YouTube Kids is a separate app for smartphones and tablets that provides access to a subset of the videos available on the main site. There have been 70bn video views since the app, which is used by 11m families, launched in 2015. In the app’s biggest change yet, Google is giving parents much greater control over what their children can find and watch.

In controls rolling out later this year, parents will be able to manually approve individual videos or channels that their children can access through the app, giving them the ability to pre-vet and handpick a collection of videos to ensure they are appropriate.

Google is also creating collections of videos and channels from trusted partners such as Sesame Workshop and its own YouTube Kids team, so that parents can select only the collections or topics they would like their children to have access to.

James Beser, product director for YouTube Kids, said: “From collections of channels from trusted partners to enabling parents to select each video and channel themselves, we’re putting parents in the driver’s seat like never before.”



YouTube Kids settings. Photograph: Google

In addition, when parents turn off the search function within the app, the firm will limit the app’s video recommendations to channels that have been verified by the YouTube Kids team. Previously, recommendations could include content from any video or channel accessible through YouTube Kids, not just those verified by Google’s team.
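
Google hasn’t said how this filter is implemented, but the behaviour described – with search off, recommend only from verified channels – amounts to a simple allowlist. A sketch with hypothetical field names, not Google’s actual code:

```python
# Hypothetical allowlist filter; not Google's actual implementation.
VERIFIED_CHANNELS = {"Sesame Workshop", "YouTube Kids"}

def recommendations(candidates, search_enabled):
    """With search off, keep only videos from verified channels."""
    if search_enabled:
        return candidates
    return [v for v in candidates if v["channel"] in VERIFIED_CHANNELS]

videos = [{"title": "Elmo's Song", "channel": "Sesame Workshop"},
          {"title": "Mystery upload", "channel": "Unverified Channel"}]
print(recommendations(videos, search_enabled=False))
# -> only the Sesame Workshop video survives the filter
```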

Google is also rolling out signed-in profiles for YouTube Kids across Europe, the Middle East and Asia this week, which means parents can create individual profiles for each of their children on shared devices.

Beser said that the current version of the app will still be available for those who are content to use it, but that Google was continuing to “fine tune” its filters for the more open selection of content, encouraging parents to block and flag content they do not see as appropriate.

Josh Golin, executive director of the Campaign for a Commercial Free Childhood, saw the move as broadly positive, but said that more parental controls do not absolve Google of its responsibility to keep inappropriate content out of the YouTube Kids app.

“It’s been clear for a long time now that algorithmic filtering on YouTube Kids doesn’t work and giving parents the ability to select more human-reviewed content is an improvement,” Golin said. “But let’s not forget that most kids are watching the main YouTube platform, where they are not only exposed regularly to inappropriate recommendations and content, but also ensnared by Google’s troubling data collection practices.”




YouTube under fire for censoring video exposing conspiracy theorist Alex Jones | Technology



YouTube’s algorithm has long promoted videos attacking gun violence victims, allowing the rightwing conspiracy theorist Alex Jones to build a massive audience. But when a not-for-profit recently exposed Jones’ most offensive viral content in a compilation on YouTube, the site was much less supportive – instead deleting the footage from the platform, accusing it of “harassment and bullying”.

Media Matters, a leftwing watchdog, last week posted a compilation of clips of Jones spreading falsehoods about the 2012 Sandy Hook elementary school massacre – a newsworthy video of evidence, given that the victims’ families have filed a defamation lawsuit against the Infowars host. But YouTube, for reasons it has yet to explain, removed the video three days after it was published, a move that once again benefited Jones, who is now arguing that the defamation suit has defamed him.

The video was censored for several days, but reinstated on Monday after the Guardian’s inquiry and a backlash on social media. Still, the case offered yet another stark illustration of the way tech companies and social media algorithms have failed to distinguish between fake news and legitimate content – while continuing to provide a powerful platform for the most repugnant views and dangerous propaganda.

“This just shows the capriciousness and arbitrariness by which they are enforcing these standards,” said Angelo Carusone, the president of Media Matters.

Jones, who has faced numerous lawsuits accusing him of spreading harmful misinformation, has skyrocketed to international fame by fueling a range of conspiracy theories that suggest high-profile mass shootings in America may have been “false flags” or hoaxes, in which the government and “crisis actors” staged the tragedies to push new gun laws.

YouTube, which is owned by Google, has been instrumental in promoting Jones’ channel to new followers, helping him garner millions of views to false news that has had devastating real-world consequences.

Shooting victims’ families and survivors have faced widespread abuse and harassment, with some conspiracy theorists facing arrest and prison for their death threats and attacks. One fake news story about Hillary Clinton, widely promoted by Jones and Infowars, led to a shooting in a Washington DC restaurant.

Media Matters’ seven-minute video – titled “What Alex Jones said about the Sandy Hook shooting” – offered examples of the host sharing blatantly false information, calling the massacre that killed 20 children an “inside job” and “completely fake”, one time saying, “it just pretty much didn’t happen.”



Francine Wheeler, the mother of Ben Wheeler, 6, who was killed at Sandy Hook, at a recent gun violence forum held in Washington. Photograph: UPI / Barcroft Images

Those quotes are critical now that Jones, in the face of litigation, is alleging that the families’ lawsuit has accused him of making statements he never actually said.

“There was an actual news component to this,” said Carusone, noting that the high-profile student activists from Parkland, Florida had also been sharing the video.

YouTube told Media Matters the video was “flagged for review” and that the company determined it violated “harassment and bullying” policies. Media Matters also received a “strike”, a preliminary penalty that can lead to an account being shut down altogether.

YouTube declined to answer questions about the video and has not explained whether Jones or his associates flagged the video or if the company considered the rightwing commentator to be a harassment victim.

After Media Matters wrote about the censorship and the Guardian inquired, YouTube reinstated the video. A spokesperson said in an email: “With the massive volume of videos on our site, sometimes we make the wrong call. When it’s brought to our attention that a video or channel has been removed mistakenly, we act quickly to reinstate it.”

Media Matters noted that dozens of Sandy Hook videos were still permitted on Jones’ YouTube page, including some the watchdog had used in the compilation. Videos labeled “Crisis Actors Used at Sandy Hook!”, “Sandy Hook was a Total False Flag!” and “Retired FBI Agent Investigates Sandy Hook: MEGA MASSIVE COVER UP” all remained live on Jones’ channel on Monday.

Jones’ videos have had a long-term impact, Carusone noted. “Years later, parents of these students are still receiving attacks and threats.”

The incident has also served as a reminder that tech companies like Google and Facebook have, in the face of pressure from conservatives, worked to show they are unbiased, Carusone said.

“They are extraordinarily sensitive to their rightwing critics.”






Will Democrats be bold and pledge to break up tech monopolies? | Ross Barkan | Opinion



For those Democrats who dream of being president, it’s no longer safe to play it safe. We live in a dangerous, unstable time in a democracy that is far from healthy. Many of the forces corroding it precede Donald Trump, whatever some progressives may tell you – this entire century, so far, has been a misery for many Americans.

Four corporations dominate American life. They have the wealth of nations. They have generated unfathomable revenue, created some jobs and destroyed many more. Their control of the economy is total.

They are Google, Amazon, Apple and Facebook. Unless we do something, their power will remain limitless.

Any Democrat running for president who claims to be a progressive should put trust-busting at the top of their agenda. Socialist or capitalist, big government or small, the priority should be the same: to ensure the people who consume goods and create goods are not exploited.

All four corporate behemoths are too large. These monopolies fuel staggering inequality and stifle the kind of economic growth that used to be more evenly distributed. Profits are immense and gains for actual workers are small – these corporations do not generate employment, let alone unionized employment, on the scale of earlier revolutionary giants.

Amazon is worth twice as much as Walmart, is run by a founder who owns one of the largest newspapers in America, holds cities hostage in the bidding for its new headquarters, and generates 30% of all online and offline retail sales growth nationwide.

All brick-and-mortar retail combined cannot compete with Amazon. After the offshoring of major industries, smaller towns and cities only had retail jobs, some of which remained unionized, to fall back on. Now that final safety net has been lit on fire. Amazon, a virulently anti-union corporation, pays good wages to a select number of highly educated workers and stashes the rest in warehouses, where they face grueling conditions for little pay.

Google controls more than 70% of the search advertising market. Together with Facebook, it has effectively gobbled up all online advertising. News publishers are at their mercy. Local news outlets, which depend on advertising to keep the lights on, are on the verge of extinction.

Facebook is now in the news for letting the personal data it has harvested and profited from be siphoned off, stripping away what little privacy we had left. But this was a problem long before Cambridge Analytica, fake news and Trump’s ascendance.

For most Americans, their entire awareness of current events is shaped, one way or another, by Facebook and Google. These corporations unite as our sole digital consciousness, determining what we see and when we see it, with hidden algorithms beyond the reach of any regulator.

We consume this corporate-curated information on Apple smartphones. With a market cap of nearly $900bn, Apple is the world’s most valuable company. Its phones, manufactured under oppressive conditions in China, devour the attention span of millions.

When so much wealth and power is concentrated in the hands of a handful of men, we no longer have a functioning, fair democracy. We have an oligarchy where the whims of a few moguls of world-historical wealth – Bezos, Zuckerberg, Cook, Brin and Page – determine the fate of the national and world economies, the information we consume, and the range of opportunities available to us.

We know Trump, despite his obsession with Amazon’s Jeff Bezos, has no serious interest in shattering these monopolies. For him, this is simply personal: one billionaire besting him, by remarkable lengths, in a test of net worth.

Our only hope is a new president who has the guts and vision to confront the behemoths in our midst and restore some sense of fairness to the country. All Democrats fighting for the nomination must promise to break up these unprecedented monopolies. Otherwise, their progressivism is hollow.

We live in a new gilded age, as dramatically unequal as the first one, and only swift, radical action will turn the tide. In the wake of such tech dominance, no one can compete: not the small business owner, the news publisher, the grocery store worker, the musician, the retailer, the entrepreneur, or anyone else.

The Big Four own the present. They don’t have to own the future too.

  • Ross Barkan is a journalist and candidate for the New York state senate




How Europe’s ‘breakthrough’ privacy law takes on Facebook and Google | Technology



Despite the political theatre of Mark Zuckerberg’s congressional interrogations last week, Facebook’s business model isn’t at any real risk from regulators in the US. In Europe, however, the looming General Data Protection Regulation will give people better privacy protections and force companies including Facebook to make sweeping changes to the way they collect data and consent from users – with huge fines for those who don’t comply.

“It’s changing the balance of power from the giant digital marketing companies to focus on the needs of individuals and democratic society,” said Jeffrey Chester, founder of the Center for Digital Democracy. “That’s an incredible breakthrough.”

Here’s a simple guide to the new rules.

What is GDPR?

It is a regulation that requires companies to protect the personal data and privacy of residents of EU countries. It replaces an outdated data protection directive from 1995 and restricts the way businesses collect, store and export people’s personal data.

“Consumers have been abused,” said David Carroll, an associate professor at Parsons School of Design in New York. “Marketers have succeeded in making people feel powerless and resigned to getting the short end of the bargain. GDPR gives consumers the chance to renegotiate that very unfair deal.”

Does it only affect European companies?

No. It applies to all companies that process the personal data of people residing in the European Union.

What counts as personal data?

Any information related to a person that can be used to identify them, including their name, photo, email address, IP address, bank details, posts on a social networking site, medical information, biometric data and sexual orientation.

What new rights do people get?

Under GDPR, people get expanded rights to obtain the data that a company has collected about them for free through a “data subject request”. People will also have the “right to be forgotten”, which means companies must delete someone’s data if they withdraw their consent for it to be held. Companies will only be able to collect data if there’s a specific business purpose for it, rather than collecting extra information at the point of sign-up just in case.

“It makes companies become much more thoughtful and rigorous about the data they collect and what they use it for,” Carroll said.
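
For readers who think in code, here is a minimal sketch of what those obligations might look like inside a service. Everything in it – the USER_DATA store and the collect, handle_access_request and handle_erasure_request functions – is a hypothetical illustration, not part of the regulation or of any real compliance library:

```python
# Hypothetical sketch of GDPR-style data subject rights; all names are invented.
from datetime import datetime, timezone

USER_DATA = {}  # user_id -> {field: {"value", "purpose", "collected_at"}}


def collect(user_id: str, field: str, value, purpose: str) -> None:
    """Purpose limitation: only store data tied to a specific, declared purpose."""
    if not purpose:
        raise ValueError("no declared purpose, no collection")
    USER_DATA.setdefault(user_id, {})[field] = {
        "value": value,
        "purpose": purpose,
        "collected_at": datetime.now(timezone.utc).isoformat(),
    }


def handle_access_request(user_id: str) -> dict:
    """Data subject request: hand over everything held about a person, for free."""
    return USER_DATA.get(user_id, {})


def handle_erasure_request(user_id: str) -> bool:
    """Right to be forgotten: delete a person's data once consent is withdrawn."""
    return USER_DATA.pop(user_id, None) is not None
```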

Companies will have to replace long terms and conditions filled with legalese with simple-to-digest consent requests. It must be as easy to withdraw consent as to give it. Finally, if a company has a data breach, it must inform users within 72 hours.
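
Again purely as an illustrative sketch – give_consent, withdraw_consent and notification_deadline are invented names, not a real API – the symmetry of consent and the 72-hour breach clock might be modelled like this:

```python
# Hypothetical sketch of consent symmetry and breach notification timing.
from datetime import datetime, timedelta, timezone

CONSENTS = {}  # (user_id, purpose) -> timestamp of consent, or None once withdrawn


def give_consent(user_id: str, purpose: str) -> None:
    CONSENTS[(user_id, purpose)] = datetime.now(timezone.utc)


def withdraw_consent(user_id: str, purpose: str) -> None:
    # One call, mirroring give_consent: withdrawing must be as easy as giving.
    CONSENTS[(user_id, purpose)] = None


def notification_deadline(breach_detected_at: datetime) -> datetime:
    # Users and regulators must be informed within 72 hours of a breach.
    return breach_detected_at + timedelta(hours=72)
```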

“What makes this a potential game changer is the amount of power it places into the hands of the public,” said attorney Jason Straight, who is chief privacy officer at legal services company UnitedLex.

What about people outside of Europe?

Although it only applies to residents of the EU, the new rules will probably put pressure on companies to offer further protections for the rest of their users. Facebook, for example, has pledged to offer GDPR privacy controls globally.

“This will be good for everyone,” said Kris Lahiri, co-founder at the cloud-sharing company Egnyte, pointing out that global customers will demand the same rights as their European counterparts.

Which companies have the most work to do?

The big data-hungry technology platforms such as Amazon, Google and Facebook, and advertising technology companies such as Criteo, whose technology powers those ads featuring products you’ve browsed online that follow you around the internet.

What is Facebook doing to comply?

Facebook has said it would follow GDPR “in spirit”, but its actions tell a different story. On Wednesday Reuters reported that the company would change its terms of service so that its 1.5 billion non-European users would no longer be covered by the privacy law. Until now, all users outside of the US and Canada have been governed by terms of service agreed with the company’s international headquarters in Ireland. Since any user data processed in Ireland will soon fall under GDPR, Facebook is changing the agreement so users in Africa, Asia, Australia and Latin America are governed by more lenient US privacy laws.

Where it needs to comply with GDPR, Facebook seems to have focused its efforts on getting user consent for its data collection practices (including facial biometric data) rather than reducing the data it collects. It has developed a sequence of consent requests that explicitly outline how each type of data will be used. However, as TechCrunch highlighted, the company has designed these requests in a way that makes it harder to opt out than opt in.

What about startups that don’t have the same resources?

Complying with GDPR may be a little onerous for companies that don’t have the engineering resources of Facebook or Google. According to a PwC survey, 68% of US companies expect to spend between $1m and $10m to comply with GDPR.

And there’s another way they’ll get stung: GDPR consultants charging enormous fees for patchy advice.

What are the penalties for companies that don’t comply?

Companies can be fined up to €20m or 4% of annual global turnover, whichever is higher – but how painful the penalties prove will come down to how regulators in individual countries choose to enforce the law.
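
For a rough sense of the arithmetic, here is a two-line sketch with invented revenue figures:

```python
def max_gdpr_fine(annual_global_turnover_eur: float) -> float:
    # Ceiling: €20m or 4% of worldwide annual turnover, whichever is higher.
    return max(20_000_000.0, 0.04 * annual_global_turnover_eur)

print(max_gdpr_fine(100_000_000))    # 20000000.0 – the €20m prong dominates for smaller firms
print(max_gdpr_fine(5_000_000_000))  # 200000000.0 – 4% of turnover dominates for the giants
```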

When does it come into effect?

25 May 2018. That’s too soon for some: “There’s a panic mode setting in as everyone is getting closer to this deadline,” said Lahiri.


