Amazon threatens to move jobs out of Seattle over new tax | Technology

Amazon has threatened to move jobs out of its hometown of Seattle after the city council introduced a new tax to try to address the homelessness crisis.

The world’s second-biggest company has warned that the “hostile” tax, which will charge firms $275 per worker a year to fund homelessness outreach services and affordable housing, “forces us to question our growth here”.

Amazon, which is Seattle’s biggest private sector employer with more than 40,000 staff in the city, had halted construction work on a 17-storey office tower in protest against the tax.

Pressure from Amazon and other big employers, including Starbucks and Expedia, forced councillors to reduce the tax from an initial proposal of $500 per worker. The tax will only affect companies with revenue of more than $20m a year.

The tax is expected to raise between $45m and $49m a year, of which about $10m would come from Amazon.
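The reported figures can be roughly cross-checked. The sketch below is a back-of-the-envelope calculation only, assuming Amazon's taxable headcount matches its reported 40,000-plus Seattle staff:

```python
# Back-of-the-envelope check of the reported Seattle head-tax figures.
# Assumption: headcounts are the approximate figures reported above.
RATE = 275  # dollars per employee per year

amazon_staff = 40_000  # "more than 40,000 staff in the city"
print(f"Amazon's annual bill: ${amazon_staff * RATE / 1e6:.1f}m")
# in the same ballpark as the roughly $10m reported

# Implied citywide taxable headcount from the $45m-$49m projection
for total in (45e6, 49e6):
    print(f"${total / 1e6:.0f}m implies {total / RATE:,.0f} taxable employees")
```

At $275 per head, Amazon's stated workforce yields about $11m a year, consistent with the roughly $10m estimate given that only Seattle-based staff above the revenue threshold count.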

The company said it would restart building work on the tower but may sublease another new office block to reduce its tax bill.

“We are disappointed by today’s city council decision to introduce a tax on jobs,” said Drew Herdener, an Amazon vice-president. “We remain very apprehensive about the future created by the council’s hostile approach and rhetoric toward larger businesses, which forces us to question our growth here.”

Amazon’s chief executive, Jeff Bezos, is the world’s richest man with a $133bn fortune.

Campaigners said the company should be forced to take financial responsibility for Seattle’s cost of living, which has forced many families on to the streets. There are almost 12,000 homeless people in the Seattle region, the third-highest rate per capita in the US. Last year 169 homeless people died in Seattle. The city declared a state of emergency over homelessness in late 2015.

Before the council vote on Monday, more than 100 people marched through Amazon’s campus and held a rally outside the company’s new spherical greenhouses, some holding signs saying “Tax Amazon”.

Seattle councillor Teresa Mosqueda said: “People are dying on the doorsteps of prosperity. This is the richest city in the state and in a state that has the most regressive tax system in the country.”

The vote was passed unanimously, with several council members saying they were voting reluctantly in favour of the lower rate for the tax after Seattle’s mayor, Jenny Durkan, threatened to veto a higher rate.

“This was a tough debate. Not just here at city hall, but all across this city,” Durkan said. “No one is saying that this will solve everything, but it will make a meaningful difference. This legislation will help us address our homelessness crisis without jeopardising critical jobs.”

Politicians from 50 other US cities wrote an open letter to Seattle council in a show of solidarity with the councillors’ attempt to tackle Amazon’s impact on the city.

“By threatening Seattle over this tax, Amazon is sending a message to all of our cities: we play by our own rules,” the letter said.

Starbucks had also fought against the tax, with its public affairs chief, John Kelly, accusing the city of continuing to “spend without reforming and fail without accountability, while ignoring the plight of hundreds of children sleeping outside”.

He added: “If they cannot provide a warm meal and safe bed to a five-year-old child, no one believes they will be able to make housing affordable or address opiate addiction.”

Marilyn Strickland, the head of Seattle’s chamber of commerce, voiced business leaders’ opposition to the tax. “Taxing jobs will not fix our region’s housing and homelessness problems,” she said.


Uber to allow sexual assault and harassment victims to sue company | Technology

Uber’s ride-hailing service will give its US passengers and drivers more leeway to pursue claims of sexual misconduct, its latest attempt to reverse its reputation for brushing aside bad behaviour.

The shift announced on Tuesday will allow riders and drivers to file allegations of rape, sexual assault and harassment in courts and mediation instead of being locked into an arbitration hearing.

The San Francisco company is also scrapping a policy requiring all settlements of sexual misconduct to be kept confidential, giving victims the choice of whether they want to make their experience public.

The new rules mark a conciliatory step made by the Uber chief executive, Dara Khosrowshahi. He was hired last August amid a wave of revelations and allegations about rampant sexual harassment in its workforce, a cover-up of a massive data breach, dirty tricks and stolen trade secrets.

Khosrowshahi has launched a campaign to “do the right thing” to repair the damage left by Uber’s previous regime and lure back alienated riders who defected to rivals such as Lyft.

The changes governing sexual misconduct come a month after Uber announced it will do criminal background checks on its US drivers annually and add a 911 button for summoning help in emergencies. They are an effort to reassure its riders and address concerns that it had not done enough to keep criminals from using its service to prey on potential victims.

Giving victims of sexual assault or perceived sexual harassment more options sends an important message that Uber is taking the issue more seriously, said Kristen Houser, a spokeswoman for Raliance, a coalition of groups working with Uber to prevent sexual abuse on its service.

It may also spur more complaints. Houser said riders may now be more emboldened to report inappropriate behaviour, such as when a driver asks them out for a date.

“You want people to report lower-level infractions so you can nip them in the bud before they become bigger problems,” she said.

By the end of the year, Uber will also start to publicly report incidents of alleged sexual misconduct in hopes of establishing more transparency about the issue throughout the ride-hailing and traditional taxi industries.

“We think the numbers are going to be disturbing,” said Tony West, a former government prosecutor during the Obama administration who became Uber’s chief legal officer after Khosrowshahi took over.


Tesla driver says car was in autopilot when it crashed at 60mph | Technology

The driver of a Tesla car that failed to stop at a red light and collided with a firetruck told investigators that the vehicle was operating on “autopilot” mode when it crashed, police said.

A Tesla Model S was traveling at 60mph when it collided with the emergency vehicle in South Jordan, Utah, on Friday, causing minor injuries to both drivers, officials said on Monday. The Tesla driver’s claim that the car was using the autopilot technology has raised fresh questions about the electric car company’s semi-autonomous system, which is supposed to assist drivers in navigating the road.

The exact cause of the crash, which left the driver with a broken ankle, remains unknown, with Tesla saying it did not yet have the car’s data and could not comment on whether autopilot was engaged. South Jordan police also said the 28-year-old driver “admitted that she was looking at her phone prior to the collision” and that witnesses said the car did not brake or take any action to avoid the crash.

“As a reminder for drivers of semi-autonomous vehicles, it is the driver’s responsibility to stay alert, drive safely, and be in control of the vehicle at all times,” the police department said in a statement.

The scene of the crash in Utah. Photograph: Courtesy of the South Jordan police department

While driverless technology is expected to make the roads significantly safer by reducing human error and crashes, companies like Tesla are currently in a transition period that some experts say has created unique risks. That’s because semi-autonomous features, research has shown, can lull drivers into a false sense of security and make it hard for them to remain alert and intervene as needed.

Tesla has faced backlash for its decision to brand the technology “autopilot”, given that drivers are still expected not to rely on the feature to keep them safe.

After a Tesla autopilot crash in March resulted in the driver’s death, the company issued a series of lengthy statements blaming the victim for “not paying attention”.

On Monday, Tesla’s CEO Elon Musk complained about an article on the Utah crash, writing on Twitter: “It’s super messed up that a Tesla crash resulting in a broken ankle is front page news and the ~40,000 people who died in US auto accidents alone in past year get almost no coverage.”

He also wrote that it was “actually amazing” the collision at 60mph only resulted in a broken ankle: “An impact at that speed usually results in severe injury or death.”

Musk has on numerous occasions forcefully chastised journalists investigating Tesla crashes, arguing that the unflattering news coverage was dissuading people from using the technology and thus “killing people” in the process. After Tesla recently labeled an award-winning news outlet an “extremist organization”, some critics compared the company’s hyperbolic denouncements of the press to the anti-media strategy of president Donald Trump.


Golden State Killer: the end of DNA privacy? Chips with Everything podcast | Technology

A former police officer called Joseph James DeAngelo was arrested in April in connection with a series of murders, rapes and burglaries attributed to an unknown assailant known as the Golden State Killer.

This 40-year-old cold case was reopened after investigators acquired a discarded DNA sample and uploaded it to an “undercover profile” on a genealogy website called GEDmatch. Through this, they were able to find distant relatives and eventually narrow down their search to match descriptions of the killer obtained throughout the investigation.

But what about the innocent people who sent off their DNA to a genealogy website in hopes of tracing their ancestry, only to end up part of a criminal investigation? Our DNA is one of the most inherently personal things we have, but this case raises questions about its privacy. If we spit into a test tube and send it off to a website for analysis, who owns that information? Who has access to it? And what can it be used for?

To try to answer some of these questions, Jordan Erica Webber talks to Prof Charles Tumosa of the University of Baltimore, Prof Denise Syndercombe-Court of King’s College London and Lee Rainie of the Pew Research Center.


Google’s ‘deceitful’ AI assistant to identify itself as a robot during calls | Technology

Google’s AI assistant will identify itself as a robot when calling up businesses on behalf of human users, the company has confirmed, following accusations that the technology was deceitful and unethical.

The feature, called Google Duplex, was demonstrated at the company’s I/O developers’ conference on Tuesday. It is not yet a finished product, but in the two demos played for the assembled crowd, it still managed to be eerily lifelike as it made bookings at a hair salon and a restaurant.

But the demonstrations sparked concern that the company was misleading those on the other end of the conversation into thinking they were dealing with another human, not a machine. The generated voice not only sounds extremely natural, but also inserts lifelike pauses, um-ing and ah-ing, and even responds with a wordless “mmm-hmm” when asked by the salon worker to “give me one second”.

Social media theorist Zeynep Tufekci was one of many concerned by the demo. On 9 May 2018 she tweeted: “Google Assistant making calls pretending to be human not only without disclosing that it’s a bot, but adding ‘ummm’ and ‘aaah’ to deceive the human on the other end, with the room cheering it… horrifying. Silicon Valley is ethically lost, rudderless and has not learned a thing.”

In its initial blogpost announcing the tech, Google said: “It’s important to us that users and businesses have a good experience with this service, and transparency is a key part of that. We want to be clear about the intent of the call so businesses understand the context. We’ll be experimenting with the right approach over the coming months.”

In a statement to the Verge, the company confirmed that this will include explicitly letting people know they’re interacting with a machine: “We understand and value the discussion around Google Duplex — as we’ve said from the beginning, transparency in the technology is important,” a Google spokesperson said. “We are designing this feature with disclosure built-in, and we’ll make sure the system is appropriately identified. What we showed at I/O was an early technology demo, and we look forward to incorporating feedback as we develop this into a product.”

Google’s hope with Duplex is that it will enable a range of interactions with businesses that only have a phone connection, interactions previously limited to those with more hi-tech set-ups. The company envisages being able to call businesses to ask about opening hours then post the information on Google; allowing users to make a reservation even when a business is closed, by scheduling the Duplex call for when doors open; and solving accessibility problems by, for instance, letting hearing-impaired users book over the phone, or enabling phone bookings across a language barrier.

The company said it would begin testing Duplex more widely “this summer … to help users make restaurant reservations, schedule hair salon appointments, and get holiday hours over the phone”.


Fake terror attacks: why are the frightening pranks going viral? | Technology

It sounds like another terrifying story of insurgent terrorism in the Middle East: on Tuesday, men dressed in the black garb of the Islamic State stormed through a mall in Iran, brandishing swords and guns, shouting “Allahu Akbar”. Shoppers reportedly fled the scene in fear.

It was reminiscent of the 2017 Tehran Isis attacks in which 17 people were killed. Except that the mall “attack” was actually a Punk’d-style prank. The weapons were fake, and the presumed terrorists were actually actors. The whole incident was a piece of viral marketing for a film called Damascus Time, about an Iranian father and son who are kidnapped by Isis. Some shoppers worked out what was going on and filmed the stunt on camera phones, but others can be heard screaming in terror.

The film’s director has since apologised – he said he had not been expecting one of the actors to arrive on horseback – but he is far from the first person to pull this kind of stunt. He was just following a tradition of prank terror plots begun by American teenagers.

There are so many videos of fake terrorist atrocities that you can watch entire compilations of unsuspecting members of the public, running, screaming and vomiting in fear. Most of them have been created by young western YouTube stars, many with millions of subscribers. They tend to involve someone dressed in stereotypical Arabic clothing, dropping a package at the feet of some strangers and running away. In one clip, people drinking on a boat all jump into the sea after a bag is thrown aboard. In another, laughing emojis flash on the screen when a man urinates on himself in fear after being surprised in a public restroom.

A “prank” video of terror attacks by Joey Salads has been viewed 3.3m times.

Joey Salads, a YouTuber with 2 million subscribers, has become notorious for these kinds of pranks. He tries to couch his videos as “social experiments”, claiming to compare the reaction to a man shouting “Allahu Akbar” as he drops a steel box on the floor with that to a man in western dress saying “praise Jesus”. Unsurprisingly, people are more distressed by the former prank, but the videos say less about Islamophobia than they do about the wild west of YouTube content, where pranksters seem able to get away with almost anything, with little interference from the site.

Last year, the British YouTuber Arya Mosallah, who had 650,000 subscribers, apologised after he made prank videos in which he approached strangers for a conversation and then threw liquid in their faces and ran away, leading them to believe they were victims of an acid attack, common in Britain at the time. Re-uploaded versions of the video can still be viewed on YouTube.

Some of these pranks seem too horrifying to be real, and in some cases they aren’t. Sam Pepper, a YouTuber with 2.3 million subscribers, apologised for faking a prank in which he appeared to kill someone’s best friend in front of him – admitting everyone in the video knew what was happening. In his apology, he said the pressure in the pranking community to make new videos led him to fake some of his content – a very odd version of peer pressure.

YouTube has said videos like Pepper’s do not violate its community guidelines, and the site rarely removes prank videos. In most cases it is more likely that the police will get involved than online moderators. Australian pranksters the Jalal Brothers were arrested by anti-terror police after they faked a series of terror attacks, including aiming a fake AK-47 at a small child. They later admitted that that video was entirely staged, but the police had not been aware.

Despite the dangers and clear distress involved, new videos are emerging all the time. At this point, many people are more likely to be caught up in a faked YouTube prank than an actual terror attack.


Apple, Microsoft and Uber test drones approved but Amazon left out in cold | Technology

Apple, Intel, Microsoft and Uber will soon start flying drones for a range of tasks including food and package delivery, digital mapping and surveillance as part of 10 pilot programmes approved on Wednesday by the US government.

The drone-testing projects have been given waivers for regulations that currently ban such operations in the US, and will be used to help the Federal Aviation Administration (FAA) draw up suitable laws to govern the use of unmanned aerial vehicles (UAVs) for myriad tasks.

“The enthusiastic response to our request for applications demonstrated the many innovative technological and operational solutions already on the horizon,” said US transportation secretary Elaine Chao.

Apple will be using drones to capture images of North Carolina with the state’s Department of Transportation. Uber is working on air-taxi technology and will deliver food by drone in San Diego, California, because “we need flying burgers”, said the company’s chief executive, Dara Khosrowshahi.

Others, including the startup Flirtey, which made the first approved drone delivery in the US in a 2015 test, will be using UAVs to deliver medical supplies to heart attack victims in Nevada, track mosquitoes in Florida and develop other new uses.

FedEx will use drones to inspect aircraft at its Tennessee hub and for some package deliveries between the airport and other Memphis locations. Virginia Tech said that it would explore emergency management, package delivery and infrastructure inspection by drone, partnering with Alphabet’s Project Wing, AT&T, Intel, Airbus and Dominion Energy.

Notable absentees from the approved list of 10 pilots were Amazon, which applied for a project to deliver goods within New York City, and the world’s largest non-military drone manufacturer, DJI.

Chao said dozens more projects could be approved in coming months, either with new waivers or under existing rules. Officials cited the rigour of the selection process when asked about Amazon’s rejection, but deputy transportation secretary Jeff Rosen said there were “no losers”.

Amazon said the fate of its applications was unfortunate but that it remained focused on developing safe drone operations. The company has worked with the FAA on policy before, and has tested its drone technology around the world, including in the UK.

The Unmanned Aircraft Systems Integration Pilot Program was launched last year by President Trump after the US fell behind in drone experimentation.

A total of 149 bids were drawn from locales looking to host flights at night, flights over people and other drone operations that are currently prohibited under US regulations. The winners are expected to gain a head start in the billions of dollars and tens of thousands of jobs the young industry expects to create.

Flying taxis are the most high profile of the current drone development projects, with Google co-founder Larry Page’s Kitty Hawk unveiling its designs in March and Uber holding a conference for its Elevate programme this week. But drones are being tested by a broad range of companies for purposes ranging from package delivery to crop inspection.

The current legislation lags behind the technology in many countries, including the US and the UK, with most novel uses ruled out by regulations that prohibit the flying of drones over people and out of the line of sight.

The FAA is seeking to allow drones to fly over people and to remotely identify and track unmanned aerial vehicles while they are in flight, with two new regulations awaiting formal proposal and approval by the Trump administration – a process that could take months.


Santa Clara Principles could help tech firms with self-regulation | Technology

Social networks should publish the number of posts they remove, provide detailed information for users whose content is deleted about why, and offer the chance to appeal against enforcement efforts, according to a groundbreaking effort to provide a set of principles for large-scale moderation of online content.

The Santa Clara Principles, agreed at a conference in the Californian city this week, were proposed by a group of academics and non-profit organisations including the Electronic Frontier Foundation, the ACLU and the Center for Democracy and Technology.

They are intended to provide a guiding light for tech companies keen on self-regulation, akin to similar sets of principles established by other industries – most famously the Asilomar principles, drawn up in 1975 to regulate genetic engineering.

The principles are made up of three key recommendations: Numbers, Notice, and Appeal. “Companies should publish the numbers of posts removed and accounts permanently or temporarily suspended due to violations of their content guidelines,” the first principle advises.

Of the major content sites, only YouTube currently provides such a report, and in less detail than the principle recommends: it calls for information including the number of posts and accounts flagged and suspended, broken down by category of rule violated, format of content and location, among other things. YouTube’s content moderation transparency report revealed the company removed 8.3m videos in the first quarter of 2018.

The second principle, Notice, recommends that “companies should provide notice to each user whose content is taken down or account is suspended about the reason for the removal or suspension.

“In general, companies should provide detailed guidance to the community about what content is prohibited, including examples of permissible and impermissible content and the guidelines used by reviewers.” Many companies keep such detailed guidelines secret, arguing that explaining the law lets users find loopholes they can abuse.

In 2017, the Guardian published Facebook’s community moderation guidelines, revealing some examples of how the company draws the line on sex, violence and hate speech. Last month, almost a year later, Facebook finally decided to publish the documents itself. Mark Zuckerberg said the publication was a step towards his goal “to develop a more democratic and independent system for determining Facebook’s community standards”.

Finally, the principles call for a right to appeal. “Companies should provide a meaningful opportunity for timely appeal of any content removal or account suspension.” Most companies allow for some sort of appeal, in principle, although many users report little success in overturning incorrect decisions in practice.

Instead, observers have noted that the press has increasingly become an independent ombudsman for large content companies, with many of the most flagrant mistakes only being overturned when journalists highlight them. Twitter, for example, “is slow or unresponsive to harassment reports until they’re picked up by the media,” according to Buzzfeed writer Charlie Warzel.

Facebook’s Zuckerberg has said he wants a more explicit appeals process. “Over the long term, what I’d really like to get to is an independent appeal,” he said, in an interview with Vox. “So maybe folks at Facebook make the first decision based on the community standards that are outlined, and then people can get a second opinion.

“You can imagine some sort of structure, almost like a supreme court, that is made up of independent folks who don’t work for Facebook, who ultimately make the final judgment call on what should be acceptable speech in a community that reflects the social norms and values of people all around the world.”

None of Facebook, Google or Twitter commented for this article.


Google’s robot assistant now makes eerily lifelike phone calls for you | Technology

Google’s virtual assistant can now make phone calls on your behalf to schedule appointments, make reservations in restaurants and get holiday hours.

The robotic assistant uses a very natural speech pattern that includes hesitations and affirmations such as “er” and “mmm-hmm” so that it is extremely difficult to distinguish from an actual human phone call.

The unsettling feature, which will be available to the public later this year, is enabled by a technology called Google Duplex, which can carry out “real world” tasks on the phone, without the other person realising they are talking to a machine. The assistant refers to the person’s calendar to find a suitable time slot and then notifies the user when an appointment is scheduled.

Google’s CEO, Sundar Pichai, demonstrated the capability on stage at the Shoreline Amphitheater during the company’s annual developer conference, I/O. He played a recording of Google Assistant calling and interacting with someone at a hair salon to make an appointment.

A phone call to a hairstylist.

When the salon picks up the phone, a female computer-generated voice says she’s calling to arrange a haircut for “a client” on 3 May. The salon employee says “give me one second”, to which the robot replies “mmm-hmm” – a response that triggered a wave of laughter in the 7,000-strong audience.

“What time are you looking for?” the salon employee asks. “At 12pm,” replies the robot. The salon doesn’t have an opening then, so the robot suggests a window between 10am and 12pm, before confirming the booking and notifying its human master.

Pichai showed a second demo, one of “many examples where the call doesn’t go as expected”, in which a male-sounding virtual assistant tries to reserve a table at a restaurant but is told that he doesn’t need a booking if there are only four people in his party. The robot appears to navigate the confusing conversation with ease.

The virtual assistant calls a restaurant.

From the onstage demonstrations, it seemed like a significant upgrade from the automated phone systems most people have interacted with. The natural interaction is enabled by advancements in automatic speech recognition, text-to-speech synthesis and an understanding of how humans pace their conversations.

Pichai said the tool was useful for the 60% of small businesses in the US that don’t already have an online booking system. “We think AI can help,” he said.

The Duplex system can also call a company to ask about hours of operation during a holiday, and then make that information available online with Google, reducing the volume of similar calls a business might receive.

“Businesses can operate as they always have. There’s no learning curve or changes to make to benefit from this technology,” the principal engineer, Yaniv Leviathan, and the vice-president of engineering, Yossi Matias, wrote in a blogpost about the technology.

During the demonstrations, the virtual assistant did not identify itself and instead appeared to deceive the human at the end of the line. However, in the blogpost, the company indicated that might change.

“It’s important to us that users and businesses have a good experience with this service, and transparency is a key part of that. We want to be clear about the intent of the call so businesses understand the context. We’ll be experimenting with the right approach over the coming months.”


Google promises better verification of political ad buyers in US | Technology

Google says it will do a better job of verifying the identity of political ad buyers in the US by requiring a copy of a government-issued ID and other information.

In a blogpost, Google executive Kent Walker said the company would also require the disclosure of who is paying for the ad.

He also repeated a pledge he made in November to create a library of such ads, searchable by anyone, by this summer: “We’ll also release a new Transparency Report specifically focused on election ads. This Report will describe who is buying election-related ads on our platforms and how much money is being spent. We’re also building a searchable library for election ads, where anyone can find which election ads were purchased on Google and who paid for them.”

Google’s blogpost stops short of declaring support for the Honest Ads Act, a bill that would impose disclosure requirements on online ads, similar to what is required for television and other media. Facebook and Twitter support that bill.

Google says applications under the new system will open by the end of May, with approval taking up to five days.
