Autonomous car innovations: from jam busters to cures for queasiness | Technology

Insurers at the wheel

An Oxford University startup, Oxbotica, proposes to solve the problem of liability in collisions involving autonomous vehicles by giving insurers access to the vast amounts of data the car generates – even letting them take control of a car in real time if the system detects a dangerous situation.

Technology could solve the unexplained traffic jam. Photograph: Alamy

Ending random jams

A recent paper published in Transportation Research found that autonomous cars could bring an end to “phantom” traffic jams – congestion with no obvious cause. These jams arise when one driver’s unexpected behaviour (most often braking) is copied and exaggerated by the following vehicles. The study demonstrated that networked cars were able to slow more gently and so avoid creating jams.
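The mechanism is easy to see in a toy car-following model. The sketch below is illustrative only – the reaction gains are assumptions, not figures from the paper – but it shows how over-braking amplifies a small speed dip as it passes down a column of cars, while gentler, networked braking damps it out.

```python
def propagate_braking(initial_dip_mph, reaction_gain, n_followers):
    """Propagate a leader's speed dip down a column of cars.

    Each follower copies the speed drop of the car ahead, scaled by
    reaction_gain: a gain above 1 models human over-braking, a gain
    below 1 models a networked car that slows more gently.
    (Toy model; the gain values used below are assumptions.)
    """
    dips = [initial_dip_mph]
    for _ in range(n_followers):
        dips.append(dips[-1] * reaction_gain)
    return dips

human = propagate_braking(5.0, 1.2, 10)  # over-reaction: dip grows car by car
auto = propagate_braking(5.0, 0.8, 10)   # gentle braking: dip fades away
print(f"human drivers: 5mph dip becomes {human[-1]:.1f}mph")
print(f"networked cars: 5mph dip becomes {auto[-1]:.1f}mph")
```

With a gain above 1, ten cars back the dip has grown several-fold into stop-start traffic; below 1 it dies out after a few cars, which is the gentler behaviour the networked vehicles in the study exhibited.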

Deepmind claims to have come up with an AI program that mimics the brain’s ‘neural GPS’ system. Photograph: Alamy

Self-learning brains

A recent study published in Nature from Google-backed AI company Deepmind claims to have developed an AI program that resembles the neural GPS system found inside the brain. At present, its algorithm can only work in mazes but it plans to test it in more “challenging environments”.

A startup is developing shock absorbers to combat travel sickness. Photograph: Alamy

No more motion sickness

Driving helps mitigate motion sickness by making us engaged with the experience of movement. But passengers in an autonomous car will find it hard to anticipate movement and could feel queasy. Boston startup ClearMotion is working on shock absorbers that will counter the feeling of movement – thereby, it hopes, reducing the need for sick bags.

BMW is developing a self-driving version of one of its luxury saloons. Photograph: Stefan Wermuth/Reuters

Robot taxis

Earlier this month, BMW demonstrated a self-driving 7 Series that pedestrians could hail and direct to their destination via a tablet. In a rather analogue touch, passengers were also allowed to honk the car horn to alert pedestrians and stray dogs to their self-driving presence.


Google’s ‘deceitful’ AI assistant to identify itself as a robot during calls | Technology

Google’s AI assistant will identify itself as a robot when calling up businesses on behalf of human users, the company has confirmed, following accusations that the technology was deceitful and unethical.

The feature, called Google Duplex, was demonstrated at the company’s I/O developers’ conference on Tuesday. It is not yet a finished product, but in the two demos played for the assembled crowd, it still managed to be eerily lifelike as it made bookings at a hair salon and a restaurant.

But the demonstrations sparked concern that the company was misleading those on the other end of the conversation into thinking they were dealing with another human, not a machine. The generated voice not only sounds extremely natural, but also inserts lifelike pauses, um-ing and ah-ing, and even responds with a wordless “mmm-hmm” when asked by the salon worker to “give me one second”.

Social media theorist Zeynep Tufekci was one of many concerned by the demo. On 9 May 2018 she tweeted: “Google Assistant making calls pretending to be human not only without disclosing that it’s a bot, but adding ‘ummm’ and ‘aaah’ to deceive the human on the other end with the room cheering it… horrifying. Silicon Valley is ethically lost, rudderless and has not learned a thing.”

In its initial blogpost announcing the tech, Google said: “It’s important to us that users and businesses have a good experience with this service, and transparency is a key part of that. We want to be clear about the intent of the call so businesses understand the context. We’ll be experimenting with the right approach over the coming months.”

In a statement to The Verge, the company confirmed that this will include explicitly letting people know they’re interacting with a machine: “We understand and value the discussion around Google Duplex — as we’ve said from the beginning, transparency in the technology is important,” a Google spokesperson said. “We are designing this feature with disclosure built-in, and we’ll make sure the system is appropriately identified. What we showed at I/O was an early technology demo, and we look forward to incorporating feedback as we develop this into a product.”

Google’s hope with Duplex is that it will enable a range of interactions with businesses that only have a phone connection – interactions previously limited to those with more hi-tech set-ups. The company envisages calling businesses to ask about opening hours and then posting the information on Google; letting users make a reservation even when a business is closed, by scheduling the Duplex call for when doors open; and solving accessibility problems by, for instance, letting hearing-impaired users book over the phone, or enabling phone bookings across a language barrier.

The company said it would begin testing Duplex more widely “this summer … to help users make restaurant reservations, schedule hair salon appointments, and get holiday hours over the phone”.


Google’s robot assistant now makes eerily lifelike phone calls for you | Technology

Google’s virtual assistant can now make phone calls on your behalf to schedule appointments, make reservations in restaurants and get holiday hours.

The robotic assistant uses a very natural speech pattern that includes hesitations and affirmations such as “er” and “mmm-hmm” so that it is extremely difficult to distinguish from an actual human phone call.

The unsettling feature, which will be available to the public later this year, is enabled by a technology called Google Duplex, which can carry out “real world” tasks on the phone, without the other person realising they are talking to a machine. The assistant refers to the person’s calendar to find a suitable time slot and then notifies the user when an appointment is scheduled.

Google’s CEO, Sundar Pichai, demonstrated the capability on stage at the Shoreline Amphitheater during the company’s annual developer conference, I/O. He played a recording of Google Assistant calling and interacting with someone at a hair salon to make an appointment.

A phone call to a hairstylist.

When the salon picks up the phone, a female computer-generated voice says she’s calling to arrange a haircut for “a client” on 3 May. The salon employee says “give me one second”, to which the robot replies “mmm-hmm” – a response that triggered a wave of laughter in the 7,000-strong audience.

“What time are you looking for?” the salon employee asks. “At 12pm,” replies the robot. The salon doesn’t have an opening then, so the robot suggests a window between 10am and 12pm, before confirming the booking and notifying its human master.

Pichai showed a second demo, one of “many examples where the call doesn’t go as expected”, in which a male-sounding virtual assistant tries to reserve a table at a restaurant but is told that he doesn’t need a booking if there are only four people in his party. The robot appears to navigate the confusing conversation with ease.

The virtual assistant calls a restaurant.

From the onstage demonstrations, it seemed like a significant upgrade from the automated phone systems most people have interacted with. The natural interaction is enabled by advancements in automatic speech recognition, text-to-speech synthesis and an understanding of how humans pace their conversations.

Pichai said the tool was useful for the 60% of small businesses in the US that don’t already have an online booking system. “We think AI can help,” he said.

The Duplex system can also call a company to ask about hours of operation during a holiday, and then make that information available online with Google, reducing the volume of similar calls a business might receive.

“Businesses can operate as they always have. There’s no learning curve or changes to make to benefit from this technology,” Yaniv Leviathan, principal engineer, and Yossi Matias, vice-president of engineering, wrote in a blogpost about the technology.

During the demonstrations, the virtual assistant did not identify itself and instead appeared to deceive the human at the other end of the line. However, in the blogpost, the company indicated that might change.

“It’s important to us that users and businesses have a good experience with this service, and transparency is a key part of that. We want to be clear about the intent of the call so businesses understand the context. We’ll be experimenting with the right approach over the coming months.”


No death and an enhanced life: Is the future transhuman? | Technology

The aims of the transhumanist movement are summed up by Mark O’Connell in his book To Be a Machine, which last week won the Wellcome Book prize. “It is their belief that we can and should eradicate ageing as a cause of death; that we can and should use technology to augment our bodies and our minds; that we can and should merge with machines, remaking ourselves, finally, in the image of our own higher ideals.”

The idea of technologically enhancing our bodies is not new. But the extent to which transhumanists take the concept is. In the past, we made devices such as wooden legs, hearing aids, spectacles and false teeth. In future, we might use implants to augment our senses so we can detect infrared or ultraviolet radiation directly, or boost our cognitive processes by connecting ourselves to memory chips. Ultimately, by merging man and machine, science could produce humans with vastly increased intelligence, strength and lifespans – near embodiments of gods.

Is that a desirable goal? Advocates of transhumanism believe there are spectacular rewards to be reaped from going beyond the natural barriers and limitations that constitute an ordinary human being. But to do so would raise a host of ethical problems and dilemmas. As O’Connell’s book indicates, the ambitions of transhumanism are now rising up our intellectual agenda. But this is a debate that is only just beginning.

There is no doubt that human enhancement is becoming more and more sophisticated – as will be demonstrated at the exhibition The Future Starts Here which opens at the V&A museum in London this week. Items on display will include “powered clothing” made by the US company Seismic. Worn under regular clothes, these suits mimic the biomechanics of the human body and give users – typically older people – discreet strength when getting out of a chair, climbing stairs or standing for long periods.

In many cases these technological or medical advances are made to help the injured, sick or elderly but are then adopted by the healthy or young to boost their lifestyle or performance. The drug erythropoietin (EPO) increases red blood cell production in patients with severe anaemia but has also been taken up as an illicit performance booster by some athletes to improve their bloodstream’s ability to carry oxygen to their muscles.

And that is just the start, say experts. “We are now approaching the time when, for some kinds of track sports such as the 100-metre sprint, athletes who run on carbon-fibre blades will be able to outperform those who run on natural legs,” says Blay Whitby, an artificial intelligence expert at Sussex University.

The question is: when the technology reaches this level, will it be ethical to allow surgeons to replace someone’s limbs with carbon-fibre blades just so they can win gold medals? Whitby is sure many athletes will seek such surgery. “However, if such an operation came before any ethics committee that I was involved with, I would have none of it. It is a repulsive idea – to remove a healthy limb for transient gain.”

Scientists think there will come a point when athletes with carbon blades will be able to out-run able-bodied rivals. Photograph: Alexandre Loureiro/Getty Images

Not everyone in the field agrees with this view, however. Cybernetics expert Kevin Warwick, of Coventry University, sees no problem in approving the removal of natural limbs and their replacement with artificial blades. “What is wrong with replacing imperfect bits of your body with artificial parts that will allow you to perform better – or which might allow you to live longer?” he says.

Warwick is a cybernetics enthusiast who, over the years, has had several different electronic devices implanted into his body. “One allowed me to experience ultrasonic inputs. It gave me a bat sense, as it were. I also interfaced my nervous system with my computer so that I could control a robot hand and experience what it was touching. I did that when I was in New York, but the hand was in a lab in England.”

Such interventions enhance the human condition, Warwick insists, and indicate the kind of future humans might have when technology augments performance and the senses. Some might consider this unethical. But even doubters such as Whitby acknowledge the issues are complex. “Is it ethical to take two girls under the age of five and train them to play tennis every day of their lives until they have the musculature and skeletons of world champions?” he asks. From this perspective the use of implants or drugs to achieve the same goal does not look so deplorable.

This last point is a particular issue for those concerned with the transhumanist movement. They believe that modern technology ultimately offers humans the chance to live for aeons, unshackled – as they would be – from the frailties of the human body. Failing organs would be replaced by longer-lasting high-tech versions, just as carbon-fibre blades could replace the flesh, blood and bone of natural limbs. Thus humanity would transform “our frail version 1.0 human bodies into a far more durable and capable 2.0 counterpart”, as one group has put it.

However, the technology needed to achieve these goals relies on as yet unrealised developments in genetic engineering, nanotechnology and many other sciences, and may take many decades to reach fruition. As a result, many advocates – such as the US inventor and entrepreneur Ray Kurzweil, nanotechnology pioneer Eric Drexler and PayPal founder and venture capitalist Peter Thiel – have backed the idea of having their bodies stored in liquid nitrogen and cryogenically preserved until medical science has reached the stage when they can be revived and their resurrected bodies augmented and enhanced.

Four such cryogenic facilities have now been constructed: three in the US and one in Russia. The largest is the Alcor Life Extension Foundation in Arizona whose refrigerators store more than 100 bodies (nevertheless referred to as “patients” by staff) in the hope of their subsequent thawing and physiological resurrection. It is “a place built to house the corpses of optimists”, as O’Connell says in To Be a Machine.

The Alcor Life Extension Foundation where ‘patients’ are cryogenically stored in the hope of future revival. Photograph: Alamy

Not everyone is convinced about the feasibility of such technology or about its desirability. “I was once interviewed by a group of cryonic enthusiasts – based in California – called the society for the abolition of involuntary death,” recalls the Astronomer Royal Martin Rees. “I told them I’d rather end my days in an English churchyard than a Californian refrigerator. They derided me as a deathist – really old-fashioned.”

For his part, Rees believes that those who choose to freeze themselves in the hope of being eventually thawed out would be burdening future generations expected to care for these newly defrosted individuals. “It is not clear how much consideration they would deserve,” Rees adds.

Ultimately, adherents of transhumanism envisage a day when humans will free themselves of all corporeal restraints. Kurzweil and his followers believe this turning point will be reached around the year 2030, when biotechnology will enable a union between humans and genuinely intelligent computers and AI systems. The resulting human-machine mind will become free to roam a universe of its own creation, uploading itself at will on to a “suitably powerful computational substrate”. We will become gods, or more likely “star children” similar to the one at the end of 2001: A Space Odyssey.

These are remote and, for many people, very fanciful goals. And the fact that much of the impetus for establishing such extreme forms of transhuman technology comes from California and Silicon Valley is not lost on critics. Tesla and SpaceX founder Elon Musk, the entrepreneur who wants to send the human race to Mars, also believes that to avoid becoming redundant in the face of the development of artificial intelligence, humans must merge with machines to enhance our own intellect.

This is a part of the world where the culture of youth is followed with fanatical intensity and where ageing is feared more acutely than anywhere else on the planet. Hence the overpowering urge to try to use technology to overcome its effects.

It is also one of the world’s richest regions, and many of those who question the values of the transhuman movement warn that it risks creating technologies that will only deepen the gulfs in an already divided society, where only some people will be able to afford to become enhanced while many others lose out.

The position is summed up by Whitby. “History is littered with the evil consequences of one group of humans believing they are superior to another group of humans,” he said. “Unfortunately in the case of enhanced humans they will be genuinely superior. We need to think about the implications before it is too late.”

For their part, transhumanists argue that the costs of enhancement will inevitably plummet and point to the example of the mobile phone, which was once so expensive only the very richest could afford one, but which today is a universal gadget owned by virtually every member of society. Such ubiquity will become a feature of technologies for augmenting men and women, advocates insist.

Many of these issues seem remote, but experts warn that the implications involved need to be debated as a matter of urgency. An example is provided by the artificial hand being developed by Newcastle University. Current prosthetic limbs are limited by their speed of response. But project leader Kianoush Nazarpour believes it will soon be possible to create bionic hands that can assess an object and instantly decide what kind of grip it should adopt.

“It will be of enormous benefit, but its use raises all sorts of issues. Who will own it: the wearer or the NHS? And if it is used to carry out a crime, who ultimately will be responsible for its control? We are not thinking about these concerns and that is a worry.”

The position is summed up by bioethicist professor Andy Miah of Salford University.

“Transhumanism is valuable and interesting philosophically because it gets us to think differently about the range of things that humans might be able to do – but also because it gets us to think critically about some of those limitations that we think are there but can in fact be overcome,” he says. “We are talking about the future of our species, after all.”

Body count

The artificial limbs of Luke Skywalker and the Six Million Dollar Man are works of fiction. In reality, bionic limbs have suffered from multiple problems: becoming rigid mid-action, for example. But new generations of sensors are now making it possible for artificial legs and arms to behave in much more complex, human-like ways.

The light that is visible to humans excludes both infrared and ultraviolet radiation. However, researchers are working on ways of extending the wavelengths of radiation that we can detect, allowing us to see more of the world – and in a different light. Ideas like these are particularly popular with military researchers trying to create cyborg soldiers.

Powered suits or exoskeletons are wearable mobile machines that allow people to move their limbs with increased strength and endurance. Several versions are being developed by the US army, while medical researchers are working on easy-to-wear versions that would be able to help people with severe medical conditions or who have lost limbs to move about naturally.

Transhumanists envisage the day when memory chips and neural pathways are actually embedded into people’s brains, thus bypassing the need to use external devices such as computers in order to access data and to make complicated calculations. The line between humanity and machines will become increasingly blurred.

Robotic exoskeletons such as this one can help people who have suffered spinal injuries. Photograph: Alamy


Google’s Talk to Books: is the future of AI just a rambling pub bore? | Technology

I’m confuddled by Google’s new search feature. Talk to Books lets you ask whole questions, and pulls up a list of responses from 100,000 books in the Google Books database. In theory, this is incredible. We know the internet is a trash can of terrible people and misinformation – a kitten clutching a cat-print edition of Mein Kampf and aggressively selling you maxi dresses. But books! Books are portals to the greatest minds of recorded time. Why ask Jeeves when you can ask PG Wodehouse? Or Montaigne or Tolstoy or Woolf? They’re sure to know more than Reddit. And at times, the tool does seem strangely wise.

What’s the best way to live, I ask. “There is no one best way to live a life … There are penalties and compensations for being ‘good’ as well as for being ‘bad’,” I read. The advice comes from Robert K Greenleaf’s Servant Leadership: A Journey Into The Nature of Legitimate Power and Greatness. Intriguing. I go a little more abstract. Who’s to blame, I wonder, and get an apt response from The Solitude of Prime Numbers by Paolo Giordano: “To blame for what? His father asked, bewildered and slightly annoyed.” Touché. Will machines be kind to us? “We’ll probably never want to deal with machines that are too much like us,” says John McCarthy, from Formalising Common Sense.

But there’s also something disorienting in the experiment, in which answers are given to slightly different questions than the one I ask. Sometimes they’re dry and literal. “What is death?” generates passages from medical textbooks, but nothing more spiritual. “Do you know the way to San Jose?” pulls up a Los Angeles hiking guide, which advises me to take the 118 Freeway West, then park in a dirt lot to the right. But it doesn’t ask where I’m coming from. I’m not sure I know where it’s coming from either – frequently, it sounds quite mad. “How deep is your love?” yields Charles Darwin’s thoughts on the evolutionary reduction of crest in Silk fowl. (Probably good the Bee Gees cut that verse.)

Regular, full-fat Google became our default search engine by offering relevant results, in a clear interface. Talk to Books, on the other hand, is an experimental AI: it parses whole sentences to offer results based on an understanding of their possible meaning, rather than merely functional keywords. One question I ask leads to excerpts from John Le Carré’s The Night Manager, a Spanish-American short story anthology, and most thrilling of all, Advanced Well Completion Engineering. It’s not exactly useful, but maybe it’s not trying to be – according to the site’s front page, it’s a creativity tool, hence the unusual connections. Or perhaps, it’s just not very good.
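The retrieval idea – score whole passages against the whole question, rather than matching individual keywords – can be sketched in a few lines. Google’s system uses learned neural sentence embeddings; the toy version below substitutes simple word-count vectors and cosine similarity just to show the mechanics, with three made-up passages standing in for the 100,000 books.

```python
from collections import Counter
from math import sqrt

def embed(sentence):
    # Stand-in for a learned sentence embedding: a bag-of-words
    # count vector. The real system uses a neural encoder instead.
    return Counter(sentence.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# A three-"book" corpus, loosely echoing the answers quoted above.
passages = [
    "there is no one best way to live a life",
    "take the 118 freeway west then park in a dirt lot",
    "we will probably never want machines too much like us",
]

def talk_to_books(question):
    # Rank every passage against the whole question; return the best match.
    return max(passages, key=lambda p: cosine(embed(question), embed(p)))

print(talk_to_books("what is the best way to live"))
```

Even this crude version surfaces the Greenleaf-style answer to “what is the best way to live” – and, like the real thing, it will cheerfully return its least-bad passage when nothing in the corpus actually fits, which is where the pub-bore quality comes from.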

It is refreshing, though, to step away from personalising algorithms and data tracking, which have given us a curiously claustrophobic internet, closer to a hall of mirrors. We’ve built a servile model of technology, and invariably imagine its flipside – the single-minded, aggressive robot uprising. Talk to Books presents an alternative vision of AI: the rambling pub bore. Semi-coherent, undeniably literate, accidentally profound. It suggests the future of the internet may be far stranger than predicted. What if a vast intelligence, that contains the sum of human thought patterns, and can predict exactly what we want, becomes bored of our questions? What if, knowing too much, it simply starts to lose its mind?

Barry and Paul Chuckle. Photograph: BBC Pictures Archives

Can the Chuckle Brothers survive our modern age?

The Chuckle Brothers are to return to our screens, after a 10-year absence. The main takeaway from the news is, surely: only 10 years? I’m used to internet memes reminding me things are depressingly older than I remember. We’re closer to the year 2036 than we are to Jennifer Lopez’s Grammy dress! Rupert Grint is turning 30! The sandwich you enjoyed this morning? You actually ate that on a Pict settlement in the Dark Ages! You’re going to die very soon LOL!

Even in my youth, I recall Chucklevision’s premise being flimsy: two naive painters and decorators, who seemed to be 100 but were probably no older than Rupert Grint is now, displaying a formulaic incompetence at their job. (Sadly, as an adult, I can relate.)

What can this pair of mustachioed buffoons possibly offer a new Channel 5 audience, raised on Towie and Tinder and terrorism? We no longer live in innocent times. No one chuckles any more. Tinchy Stryder summed up Paul and Barry’s chances back in 2014, singing “Fuck all that me to you to me to you stuff” on his diss track, To Me To You (Bruv). And that was a voluntary collaboration with the pair, for charity.

I’m worried they’ll be eaten alive. Or succumb to dispiriting, channel-mandated storylines about leaking client nudes by accident, winding up on a reality dating show, or being caught up in riots, and throwing their ladder into the window of a Footlocker. Hang on, what am I talking about? These storylines sound amazing. I’ve changed my mind. Roll on the Chuckle Brothers reboot!

Kids v Olympic athletes: there’s only one winner

It’s been officially confirmed that children have greater energy levels, metabolic capacity and recovery rates than endurance athletes. The next step is obvious. A no-holds-barred Olympics, in which six-year-olds compete against sports professionals, who are encouraged to take all the steroids and stem cells they can handle, to level the field. Imagine the potential of cross-country tag, or a violent-reprisals version of Simon says, or test-match length ring a ring o’ roses, played to the point of collapse. Show me one exhausted parent who doesn’t think this is at least worth a try.


The two-pizza rule and the secret of Amazon’s success | Technology

In the early days of Amazon, Jeff Bezos instituted a rule: every internal team should be small enough that it can be fed with two pizzas. The goal wasn’t to cut down on the catering bill. It was, like almost everything Amazon does, focused on two aims: efficiency and scalability. The former is obvious. A smaller team spends less time managing timetables and keeping people up to date, and more time doing what needs to be done. But it’s the latter that really matters for Amazon.

The thing about having lots of small teams is that they all need to be able to work together, and to be able to access the common resources of the company, in order to achieve their larger goals.

That’s what turns the company into, in the words of Benedict Evans, of venture capital firm Andreessen Horowitz, “a machine that makes the machine”.

“You can add new product lines without adding new internal structure or direct reports, and you can add them without meetings and projects and process in the logistics and e-commerce platforms,” Evans notes. “You don’t need to fly to Seattle and schedule a bunch of meetings to get people to implement support for launching makeup in Italy, or persuade anyone to add things to their roadmap.”

Amazon is good at being an e-commerce company that sells things, but what it’s great at is making new e-commerce companies that sell new things.

The company calls this approach its “flywheel”: it takes the scale that can smother a typical multinational, and uses it to provide an ever-increasing momentum backing up its entire business. The faster the flywheel spins, and the heavier it is, the harder it is for anyone else to stop it.

Amazon’s distribution centre in Phoenix, Arizona. Photograph: Ralph Freso/Reuters

Perhaps the best example of that approach in action is the birth and growth of AWS (Amazon Web Services). That’s the division of Amazon that provides cloud computing services, both internally and for other companies – including competitors to Amazon in other areas (both Netflix and Tesco use the platform, for instance, despite Amazon also selling streaming video and groceries).

It started, like so many things at Amazon, with an edict from the top. Every team, Bezos ordered, should begin to work with each other only in a structured, systematic way. If an advertising team needed some data on shoe sales to decide how best to spend their resources, they could not email analytics and ask for it; they needed to go to the analytics dashboard themselves and get it. If that dashboard didn’t exist, it needed to be created. And that approach needed to cover everything.

From there, it was almost an afterthought to take the obvious next step, and let others use the same technology that Amazon made available internally.

Those humble beginnings spawned a beast. The business is now 10% of Amazon’s overall revenue, making so much money that financial regulations forced the company to report it as a top-level division in its own right: Amazon divides its company into “US and Canada”, “International”, and “AWS”.

AWS is large enough that it is dealt with on the same tier as the entire rest of the world. AWS is large enough that Netflix, a company that accounts for around a third of all internet traffic in North America, is just another customer.

AWS is large enough that in 2016 the company released the “Snowmobile”, a literal truck for moving data. The companies that work with AWS move so much information around that sometimes the internet simply cannot cope. So now, if you want to upload a lot of data to Amazon’s cloud, the company will drive a truck to your office, fill it with data, then drive it back. If you need to upload 100 petabytes – that’s roughly 5m movies in 4k with surround sound – it turns out there’s no quicker way to do it than driving it down the freeway at 75mph.
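The arithmetic behind that claim is worth a back-of-the-envelope check. Assuming a dedicated 10Gbps line (an assumption for illustration; real link speeds vary), moving 100 petabytes over the network takes years, while a truck drive takes days:

```python
PETABYTE = 10**15                 # bytes (decimal convention)
data_bits = 100 * PETABYTE * 8    # 100PB expressed as bits

link_bps = 10 * 10**9             # assumed dedicated 10Gbps line
transfer_days = data_bits / link_bps / 86_400
print(f"over the wire: {transfer_days:.0f} days")  # ≈ 926 days, about 2.5 years
```

Even a coast-to-coast drive of a few days beats that by two orders of magnitude – the old engineering adage about never underestimating the bandwidth of a truck full of hard drives, made into a product.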

While AWS saw Amazon open up its internal technology to external customers, another part of the company does the same trick with Amazon’s actual website.

Amazon Marketplace launched in 2000, allowing third-party sellers to put their own wares up on the site. The feature has expanded over the years to become a major plank in the company’s quest to be the “everything store” – the one destination on the internet where you can buy anything in existence.

Marketplace goes one better than the two-pizza rule, allowing Amazon to expand into new sectors without hiring a single extra employee.

The variety of things sold on Amazon is now so huge that its internal computer scientists faced a problem. “E-commerce companies such as Amazon … process billions of orders every year,” a team of Amazon researchers wrote. “However, these orders represent only a small fraction of all plausible orders.” The solution? Train an artificial intelligence purely to generate plausible fake orders, to better guess how to market brand-new products.

Amazon reports the revenue it makes from Marketplace as around 20% of the company’s total income. But that metric, which only counts the fees paid to the company by third-party sellers, understates the colossal scale of the business. “Marketplace is now around half of the total volume of goods sold through Amazon,” estimates Andreessen Horowitz’s Evans. “In other words, Marketplace means that Amazon handles (but does not, incidentally, itself set prices for) double the share of e-commerce that it reports as revenue.”

Increasingly, then, Amazon resembles less a big-box retailer such as Tesco or Walmart, hovering at the edge of town sucking up commerce and killing the local high street, and more a shopping mall: independent retailers can exist, and even make a tidy living, but only if they get a slot in the mall itself – and if they always remember that the real moneymaker is the landlord.

Since 2014, Amazon has added a third flywheel to its business: artificial intelligence. The company has always been near the leading edge of the industry, most obviously in its neural-network-powered recommendation algorithms. But, until recently, that approach was scattershot, segmented, and hardly world-class (think of the last time you bought something on Amazon only to have it recommended to you for weeks afterwards. “You like duvets? Why not buy 10 more?”).


An Amazon Echo Dot. Photograph: Elaine Thompson/AP

That changed when the company decided to build the hardware that would become the Echo. In classic Amazon fashion, it started at the end and worked backwards from there, writing a “press release” for the notional future product and then trying to figure out what expertise needed to be developed – or bought – to make it. Need a personal assistant? Buy Cambridge-based True Knowledge, once working on a Siri competitor, and you get Alexa. Require far-field voice recognition, letting you hear people on the other side of the room? Start working on that now, because no one’s really cracked the problem.

Institutionally, the bulk of the Alexa AI team still sits under AWS, using its infrastructure and offering another tranche of digital services to third parties who want to build speech control into their devices. But the economies of scale that come to AI are unique. There’s the value to the data, of course: the more people use an Echo, the more speech samples it has to train with, and so the better the Echo becomes. And beyond that, machine learning technologies are so fundamental, and so general purpose, that every advancement Amazon makes ricochets throughout its business, increasing efficiency, opening up new fields, and suggesting further avenues of research.

But nothing lasts forever, and even Amazon has its weak points. The two-pizza rule, for instance, may be a good strategy for building an infinitely expandable company, but it doesn’t lend itself to a pleasant, stress-free working environment.

Amazon has long faced criticism over its treatment of warehouse workers: as with many companies in its sector, huge valuations and high-tech aspirations sit uneasily alongside the low-paid, low-skilled work that makes the company tick over.

Where Amazon differs from companies such as Deliveroo, Apple and Facebook is that the highly skilled employees sitting in the headquarters have almost as many complaints.

A New York Times exposé from 2015 described employees crying at their desks and suffering near-breakdowns from the pressure they were put under. The company’s rapid employee turnover is legendary, with insiders describing a sort of Forth Bridge of technical debt: someone leaves, and someone else has to rewrite all their code to make it understandable to the people still there – but by the time the rewrite is finished, the person doing the rewriting has also left, requiring someone else to start the whole process again.

But the one thing that’s been true from day one is that Jeff Bezos sits at the top of the food chain, with direct control of a $740bn (£530bn) business matched by few other bosses. It would take a bold gambler to bet against the company right now.

Artificial intelligence, robots and a human touch | Letters | Technology


Elon Musk’s comment that humans are underrated (Humans replace robots at flagging Tesla plant, 17 April) doesn’t come as much of a surprise, even though his company is at the forefront of the technological revolution. Across industries, CEOs are wrestling with the balance between humans and increasingly cost-effective and advanced robots and artificial intelligence. However, as Mr Musk has discovered, the complexity of getting a machine to cover every possibility results in a large web of interconnected elements that can overcomplicate the underlying problem. This is why so many organisations fail when they try to automate everything they do. Three key mistakes I see time and again in these situations are missing the data basics, applying the wrong strategy, and losing the human touch.

There are some clear cases where automation works well: low-value, high-repetition tasks, or even complex ones where additional data will give a better outcome – for example, using medical-grade scanners on mechanical components to identify faults not visible to the human eye. But humans are better at reacting to unlikely, extreme or unpredictable edge cases – for example, being aware that a music festival has relocated and extra cider needs to go to stores near the new venue rather than the previous location.

Regardless of industry, it’s only by maintaining a human touch – thinking and seeing the bigger picture – that automation and AI can add the most value to businesses.
Deborah O’Neill
Partner, Oliver Wyman

The House of Lords report (Cambridge Analytica scandal ‘highlights need for AI regulation’, 16 March) outlining the UK’s potential to be a global leader in artificial intelligence – and its calls for governmental support of businesses in the field, and for education to equip people to work alongside AI in the jobs of the future – should be welcomed for two reasons. First, it recognises the potential of UK-based AI companies to benefit the economy. Supporting these fast-growing companies to ensure that they continue to scale – and eventually exit – here should be a strategic priority, particularly at a time when a new generation of fast-growth providers, such as Benevolent AI in life sciences and ThoughtRiver in legal tech, is emerging to build on an impressive track record of AI innovation in the UK, from Alan Turing to DeepMind.

Second, it acknowledges that AI can contribute significantly to businesses’ competitive advantage – a view that too few UK businesses seem to appreciate at a time when media coverage of the topic is dominated by scaremongering about job losses, security threats, ethics and bias. It’s refreshing to see a more positive narrative about AI and the workplace starting to emerge. What we now need to see from the business world is openness to the opportunities that AI creates – continuing, and expanding on, the positivity of this report – and leadership in sharing successes in this area that others can learn from.
Matt Meyer
CEO, Taylor Vinters

The announcement from the House of Lords that Britain must “lead the way” on the regulation of artificial intelligence (AI) highlights the current climate of concern around the ways AI could impact society – in particular, fears of weaponised AI being used by militaries, and other unethical uses. But there are many other applications where “ethical” AI is crucial – in making accurate medical diagnoses, for example.

There is no doubt AI will transform how society operates, and that there is a need to safeguard against improper use. However, creating ethical AI algorithms will take more than just an announcement. It will require far greater collaboration between governments, industry and technology experts. By working with those who understand AI, regulators can put in place standards that protect us while ensuring AI can augment humans safely, so that we can still reap its full potential.
Dr Nick Lynch
The Pistoia Alliance

Join the debate – email [email protected]



Elon Musk drafts in humans after robots slow down Tesla Model 3 production | Technology


Elon Musk has admitted that automation has been holding back Tesla’s Model 3 production and that humans, rather than machines, were the answer.

The electric car maker’s chief executive said that one of the reasons Tesla has struggled to reach promised production volumes was because of the company’s “excessive automation”.

Asked whether robots had slowed down production, rather than speeding it up, during a tour around Tesla’s factory by CBS, Musk replied: “Yes, they did … We had this crazy, complex network of conveyor belts … And it was not working, so we got rid of that whole thing.”

“Yes, excessive automation at Tesla was a mistake. To be precise, my mistake. Humans are underrated,” Musk added later.

The South African-born computer programmer and businessman is worth about $21bn (£15bn) today. He was catapulted into the ranks of the super-rich by the sale of PayPal to eBay, which netted him $165m.

In 2002, he used $100m to found SpaceX, which aims to cut the cost of space travel through technology such as reusable rockets. One of Musk’s ultimate goals is to pioneer efforts to colonise Mars.

Musk became a major investor in electric car company Tesla in 2004 and took over the reins in 2008. Tesla has focused on building a vehicle with mass market appeal. Despite low sales, its stock market value overtook Ford last year.

Another ambitious Musk project is the Hyperloop, his vision of a super-fast transport system using pods in low-pressure tubes to whisk passengers between major US cities, such as LA and San Francisco, at near-supersonic speed. He has called the idea a “cross between a Concorde and a railgun and an air hockey table”. Critics say it is too impractical and expensive.

More recent ideas include OpenAI, a not-for-profit firm researching artificial intelligence, and Neuralink, a company exploring ways to connect the human brain with AI.

Photograph: Peter Parks/AFP

Caught in what Musk has called “manufacturing hell”, the electric car firm failed to hit its weekly production target of 2,500 Model 3 vehicles in the first quarter of 2018, fuelling doubts within the industry that Tesla will be able to hit its 5,000-a-week target in three months’ time.

The significant production shortfall has delayed crucial customer deliveries. Musk said he was forced to take direct control of the production line at the beginning of April, resorting to pulling all-nighters and sleeping at the factory.

“We were able to unlock some of the critical things that were holding us back from reaching 2,000 cars a week. But since then, we’ve continued to do 2,000 cars a week,” he said.

At the same time, Tesla is facing negative publicity over a fatal crash involving one of its Model X SUVs that was driving in the firm’s Autopilot mode, openly feuding with the US National Transportation Safety Board after releasing details of the crash while the investigation was ongoing.


Cambridge Analytica scandal ‘highlights need for AI regulation’ | Technology


Britain needs to lead the way on artificial intelligence regulation, in order to prevent companies such as Cambridge Analytica setting precedents for dangerous and unethical use of the technology, the head of the House of Lords select committee on AI has warned.

The Cambridge Analytica scandal, Lord Clement-Jones said, reinforced the committee’s findings, released on Monday in the report “AI in the UK: ready, willing and able?”

“These principles do come to life a little bit when you think about the Cambridge Analytica situation,” he told the Guardian. “Whether or not the data analytics they carried out was actually using AI … It gives an example of where it’s important that we do have strong intelligibility of what the hell is going on with our data.”

Clement-Jones added: “With the whole business in [the US] Congress and Cambridge Analytica, the political climate in the west now is much riper in terms of people agreeing to … a more public response to the ethics and so on involved. It isn’t just going to be left to Silicon Valley to decide the principles.”

At the core of the committee’s recommendations are five ethical principles which, it says, should be applied across sectors, nationally and internationally:

  • Artificial intelligence should be developed for the common good and benefit of humanity.
  • Artificial intelligence should operate on principles of intelligibility and fairness.
  • Artificial intelligence should not be used to diminish the data rights or privacy of individuals, families or communities.
  • All citizens should have the right to be educated to enable them to flourish mentally, emotionally and economically alongside artificial intelligence.
  • The autonomous power to hurt, destroy or deceive human beings should never be vested in artificial intelligence.

The goal is not to write the principles directly into legislation, Clement-Jones said, but rather to have them as a broad guiding beacon for AI regulation. “For instance, in the financial services area it would be the Financial Conduct Authority” that actually applied the principles, “and they would be looking at how insurance companies use algorithms to assess your premiums, how banks assess people for mortgages, and so on and so forth.

“Basically, these regulators have to make the connection with the ethics, and this is the way we think they should do it,” Clement-Jones said. “Of course, if in due course people are not observing these ethical principles and the regulator thinks that their powers are inadequate, then there may be a time down the track that we need to rethink this.”

In a wide-ranging report, the committee has identified a number of threats that mismanagement of AI could bring to Britain. One concern is of the creation of “data monopolies”, large multinational companies – generally American or Chinese, with Facebook, Google and Tencent all named as examples – with such a grip on the collection of data that they can build better AI than anyone else, enhancing their grip on the data sources and creating a virtuous cycle that renders smaller companies and nations unable to compete.

The report stops short of calling for active enforcement to prevent the creation of data monopolies, but does explicitly recommend that the Competition and Markets Authority “review proactively the use and potential monopolisation of data by the big technology companies operating in the UK”.

Clement-Jones said: “We want there to be an open market in AI, basically, and if all that happens is we get five or six major AI systems and you have to belong to one of them in order to survive in the modern world, well, that would be something that we don’t want to see.”


Killer robots: pressure builds for ban as governments meet | Technology


They will be “weapons of terror, used by terrorists and rogue states against civilian populations. Unlike human soldiers, they will follow any orders however evil,” says Toby Walsh, professor of artificial intelligence at the University of New South Wales, Australia.

“These will be weapons of mass destruction. One programmer and a 3D printer can do what previously took an army of people. They will industrialise war, changing the speed and duration of how we can fight. They will be able to kill 24-7 and they will kill faster than humans can act to defend themselves.”

Governments are meeting at the UN in Geneva on Monday for the fifth time to discuss whether and how to regulate lethal autonomous weapons systems (Laws). Also known as killer robots, these AI-powered ships, tanks, planes and guns could fight the wars of the future without any human intervention.

The Campaign to Stop Killer Robots, backed by Tesla’s Elon Musk and Alphabet’s Mustafa Suleyman, is calling for a preemptive ban on Laws, since the window of opportunity for credible preventative action is fast closing and an arms race is already in full swing.

“In 2015, we warned that there would be an arms race to develop lethal autonomous weapons,” says Walsh. “We can see that race has started. In every theatre of war – in the air, on the sea, under the sea and on the land – there are prototype autonomous weapons under development.”

Ahead of the meeting, the US has argued that rather than trying to “stigmatise or ban” Laws, innovation should be encouraged, and that use of the technology could actually reduce the risk of civilian casualties in war. Experts, however, fear the systems will not be able to distinguish between combatants and civilians and act proportionally to the threat.

France and Germany have been accused of shying away from tough rules by activists who say Europe should be leading the charge for a ban. A group of non-aligned states, led by Venezuela, is calling for the negotiation of a new international law to regulate or ban killer robots. The group seeks general agreement from states that “all weapons, including those with autonomous functions, must remain under the direct control and supervision of humans at all times”.

The new global arms race

The US launched an autonomous ship, Sea Hunter, on 7 April 2016. Photograph: Steve Dipaola/Reuters

Fully autonomous weapons do not yet exist, but high-ranking military officials have said the use of robots will be widespread in warfare in a matter of years. At least 381 partly autonomous weapon and military robotics systems have been deployed or are under development in 12 states, including China, France, Israel, the UK and the US.

Automatic systems, such as Israel’s Iron Dome and mechanised sentries in the Korean demilitarised zone, have already been deployed but cannot act fully autonomously. Research by the International Data Corporation suggests global spending on robotics will double from $91.5bn in 2016 to $188bn in 2020, bringing full autonomy closer to realisation.
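Those two IDC data points imply annual growth of roughly 20%. A quick sketch of the arithmetic – the spending figures come from the forecast quoted above; the compound-growth formula is standard:

```python
# Implied compound annual growth rate (CAGR) of global robotics spending,
# from the IDC forecast cited above: $91.5bn in 2016 -> $188bn in 2020.

spend_2016 = 91.5   # $bn
spend_2020 = 188.0  # $bn
years = 2020 - 2016

cagr = (spend_2020 / spend_2016) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # Implied CAGR: 19.7%
```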

The US, the frontrunner in the research and development of Laws, has cited autonomy as a cornerstone of its plan to modernise its army and ensure its strategic superiority across the globe. This has caused other major military powers to increase their investment in AI and robotics, as well as in autonomy, according to the Stockholm International Peace Research Institute.

“We very much acknowledge that we’re in a competition with countries like China and Russia,” says Steven Walker, director of the US Defense Advanced Research Projects Agency, which develops emerging technologies and whose 2018 budget was increased by 27% last year.


The US X-47B unmanned autonomous aircraft. Photograph: Rex Features

The US is currently working on the prototype for a tail-less, unmanned X-47B aircraft, which will be able to land and take off in extreme weather conditions and refuel in mid-air. The country has also completed testing of an autonomous anti-submarine vessel, Sea Hunter, that can stay at sea for months without a single person onboard and is able to sink other submarines and ships. A 6,000kg autonomous tank, Crusher, is capable of navigating incredibly difficult terrain and is advertised as being able to “tackle almost any mission imaginable”.

The UK is developing its own unmanned vehicles, which could be weaponised in the future. Taranis, an unmanned combat aerial vehicle named after the Celtic god of thunder, can avoid radar detection and fly in autonomous mode.

Russia, meanwhile, is amassing an arsenal of unmanned vehicles, both in the air and on the ground; commentators say the country sees this as a way to compensate for its conventional military inferiority compared with the US. “Whoever leads in AI will rule the world,” said Vladimir Putin, the recently re-elected Russian president, last year. “Artificial intelligence is the future, not only for Russia, but for all humankind.”


Russia’s Armata T-14 battle tank. Photograph: Mikhail Metzel/Tass/PA Images

Russia has developed a robot tank, Nerehta, which can be fitted with a machine gun or a grenade launcher, while its semi-autonomous tank, the T-14, will soon be fully autonomous. Kalashnikov, the Russian arms manufacturer, has developed a fully automated, high-calibre gun that uses artificial neural networks to choose targets.

China has various similar semi-autonomous tanks and is developing aircraft and seaborne swarms, but information on these projects is tightly guarded. “As people are still preparing for a high-tech war, the old and new are becoming intertwined to become a new form of hidden complex ‘hybrid war’,” wrote Wang Weixing, a Chinese military research director, last year.

“Unmanned combat is gradually emerging. While people have their heads buried in the sand trying to close the gap with the world’s military powers in terms of traditional weapons, technology-driven ‘light warfare’ is about to take the stage.”

Pandora’s box

According to the Campaign to Stop Killer Robots, these systems threaten to become the “third revolution in warfare”, after the invention of gunpowder and nuclear bombs – and once Pandora’s box is opened, it will be difficult to close.

“Inanimate machines cannot understand or respect the value of life, yet they would have the power to determine when to take it away,” says Mary Wareham, the campaign coordinator. “Our campaign believes that machines should never be permitted to take human life on the battlefield or in policing, border control, or any circumstances.”

Supporters of a ban say fully autonomous weapons are unlikely to be able to fully comply with the complex and subjective rules of international humanitarian and human rights law, which require human understanding and judgment as well as compassion.

Pointing to the 1997 ban on landmines, now one of the most widely accepted treaties in international law, and the ban on cluster munitions, which has 120 signatories, Wareham says: “History shows how responsible governments have found it necessary in the past to supplement the limits already provided in the international legal framework due to the significant threat posed to civilians.”

It is believed that the weaponisation of artificial intelligence could bring the world closer to apocalypse than ever before. “Imagine swarms of autonomous tanks and jet fighters meeting on a border and one of them fires in error or because it has been hacked,” says Noel Sharkey, professor of artificial intelligence and robots at the University of Sheffield, who first wrote about the reality of robot war in 2007.

“This could automatically invoke a battle that no human could understand or untangle. It is not even possible for us to know how the systems would interact in conflict. It could all be over in minutes with mass devastation and loss of life.”
