
Tim Berners-Lee on 30 years of the world wide web: ‘We can get the web we want’ | Technology



Thirty years ago, Tim Berners-Lee, then a fellow at the physics research laboratory Cern on the French-Swiss border, sent his boss a document labelled Information Management: A Proposal. The memo suggested a system with which physicists at the centre could share “general information about accelerators and experiments”.

“Many of the discussions of the future at Cern and the LHC era end with the question: ‘Yes, but how will we ever keep track of such a large project?’” wrote Berners-Lee. “This proposal provides an answer to such questions.”

His solution was a system called, initially, Mesh. It would combine hypertext, a nascent technology that allowed human-readable documents to be linked together, with a distributed architecture that would see those documents stored on multiple servers, controlled by different people, and interconnected.

It didn’t really go anywhere. Berners-Lee’s boss, Mike Sendall, took the memo and jotted down a note on top: “Vague but exciting …” But that was it. It took another year, until 1990, for Berners-Lee to start actually writing code. In that time, the project had taken on a new name. Berners-Lee now called it the World Wide Web.

Thirty years on, and Berners-Lee’s invention has more than justified the lofty goals implied by its name. But with that scale has come a host of troubles, ones that he could never have predicted when he was building a system for sharing data about physics experiments.

Some are simple enough. “Every time I hear that somebody has managed to acquire the [domain] name of their new enterprise for $50,000 (£38,500) instead of $500, I sigh, and feel that money’s not going to a good cause,” Berners-Lee tells me when we speak on the eve of the anniversary.



Berners-Lee demonstrating the world wide web to delegates at the Hypertext 1991 conference in San Antonio, Texas. Photograph: 1994-2017 CERN

It is a minor regret, but one he has had for years about the way he decided to “bootstrap” the web up to something that could handle a lot of users very quickly: by building on the pre-existing service for assigning internet addresses, the domain name system (DNS), he gave up the chance to build something better. “You wanted a name for your website, you’d go and ask [American computer scientist] Jon Postel, you know, back in the day, and he would give you a name.

“At the time that seemed like a good idea, but it relied on it being managed benevolently.” Today, that benevolent management is no longer something that can be assumed. “There are plenty of domain names to go around, but the way people have invested, in buying up domains that they think entrepreneurs or organisations will use – even trying to build AI that would guess what names people will want for their organisations, grabbing the domain name and then selling it to them for a ridiculous amount of money – that’s a breakage.”

It sounds minor, but the problems with DNS can stand in for a whole host of difficulties the web has faced as it has grown. A quick fix, built to let something scale up rapidly, that turns out to provide perverse incentives once it is used by millions of people and is so embedded that it is nearly impossible to change course.

But nearly impossible is not actually impossible. That is the thrust of the message Berners-Lee is aiming to spread. Every year, on the anniversary of his creation, he publishes an open letter on his vision for the future of the web. This year’s letter, given the importance of the anniversary, is broader in scope than most – and expresses a rare level of concern about the direction in which the web is moving.

“While the web has created opportunity, given marginalised groups a voice and made our daily lives easier,” he writes, “it has also created opportunity for scammers, given a voice to those who spread hatred and made all kinds of crime easier to commit.

“It’s understandable that many people feel afraid and unsure if the web is really a force for good. But given how much the web has changed in the past 30 years, it would be defeatist and unimaginative to assume that the web as we know it can’t be changed for the better in the next 30. If we give up on building a better web now, then the web will not have failed us. We will have failed the web.”

Berners-Lee breaks down the problems the web now faces into three categories. The first is what occupies most of the column inches in the press, but is the least intrinsic to the technology itself: “deliberate, malicious intent, such as state-sponsored hacking and attacks, criminal behaviour and online harassment”.

He believes this makes the system fragile. “It’s amazing how clever people can be, but when you build a new system it is very, very hard to imagine the ways in which it can be attacked.”

At the same time, while criminal intentions may be the scariest for many, they aren’t new to the web. They are “impossible to eradicate completely”, he writes, but can be controlled with “both laws and code to minimise this behaviour, just as we have always done offline”.




Berners-Lee in 1998. Photograph: Elise Amendola/AP

More concerning are the other two sources of dysfunction affecting the web. The second is when a system built on top of Berners-Lee’s creation introduces “perverse incentives” that encourage others to sacrifice users’ interests, “such as ad-based revenue models that commercially reward clickbait and the viral spread of misinformation”. And the third is more diffuse still: those systems and services that, thoughtfully and benevolently created, still result in negative outcomes, “such as the outraged and polarised tone and quality of online discourse”.

The problem is that it is hard to tell what the outcomes of a system you build are going to be. “Given there are more webpages than there are neurons in your brain, it’s a complicated thing. You build Reddit, and people on it behave in a particular way. For a while they all behave in a very positive, constructive way. And then you find a subreddit in which they behave in a nasty way.

“Or, for example, when you build a system such as Twitter, it becomes wildly, wildly effective. And when the ‘Arab Spring’ – I will never say that without the quotes – happens, you’re tempted to claim that Twitter is a great force for good because it allowed people to react against the oppressive regime.

“But then pretty soon people are contacting you about cyberbullying and saying their lives are miserable on Twitter because of the way that works. And then another few iterations of the Earth going around the sun, and you find that the oppressive regimes are using social networks in order to spy on and crack down on dissidents before the dissidents could even get round to organising.”

In conclusion, he says, “You can’t generalise. You can’t say, you know, social networks tend to be bad, tend to be nasty.”

For a creation entering its fourth decade, we still know remarkably little about how the web works. The technical details, sure: they are all laid out there, in that initial document presented to Cern, and in the many updates that Berners-Lee, and the World Wide Web Consortium he founded to succeed him, have approved.

But the social dynamics built on top of that technical underpinning are changing so rapidly and are so unstable that every year we need to reassess its legacy. “Are we now in a stable position where we can look back and decide this is the legacy of the web? Nooooope,” he says, with a chuckle. Which means we are running a never-ending race, trying to work out the effects of new platforms and systems even as competitors launch their eventual replacements.



Sir Tim Berners-Lee: how the web went from idea to reality

Berners-Lee’s solution is radical: a sort of refoundation of the web, creating a fresh set of rules, both legal and technical, to unite the world behind a process that can avoid some of the missteps of the past 30 years.

Calling it the “contract for the web”, he first suggested it last November at the Web Summit in Lisbon. “At pivotal moments,” he says, “generations before us have stepped up to work together for a better future. With the Universal Declaration of Human Rights, diverse groups of people have been able to agree on essential principles. With the Law of the Sea and the Outer Space Treaty, we have preserved new frontiers for the common good. Now too, as the web reshapes our world, we have a responsibility to make sure it is recognised as a human right and built for the public good.”

This is a push for legislation, yes. “Governments must translate laws and regulations for the digital age. They must ensure markets remain competitive, innovative and open. And they have a responsibility to protect people’s rights and freedoms online.”

But it is equally important, he says, for companies to join in and for the big tech firms to do more to ensure their pursuit of short-term profit is not at the expense of human rights, democracy, scientific fact or public safety. “This year, we’ve seen a number of tech employees stand up and demand better business practices. We need to encourage that spirit.”

But even if we could fix the web, might it be too late for that to fix the world? Berners-Lee’s invention has waxed and waned in its role in the wider digital society. For years, the web was the internet, with only a tiny portion of hardcore nerds doing anything online that wasn’t mediated through a webpage.

But in the past decade, that trend has reversed: the rise of the app economy fundamentally bypasses the web and all the principles associated with it: openness, interoperability and ease of access. In theory, any webpage should be accessible from any device with a web browser, be that an iPhone, a Windows PC or an internet-enabled fridge. The same is not true for content and services locked inside apps, where the distributor has absolute power over where and how users can interact with their platforms.

In fact, the day before my conversation with Berners-Lee, Facebook boss Mark Zuckerberg published his own letter on the future of the internet, describing his goal of reshaping Facebook into a “privacy-focused social network”. It had a radically different set of aims: pulling users into a fundamentally closed network, where not only can you get in touch with Facebook users solely from other Facebook products, but even the idea of accessing core swathes of Facebook’s platform from a web browser is deprioritised, in favour of the extreme privacy provided by universal end-to-end encryption.

For Berners-Lee, these shifts are concerning, but represent the strengths as well as the weaknesses of his creation. “The crucial thing is the URL. The crucial thing is that you can link to anything.




This is for Everyone seen during the opening ceremony of the London Olympics in 2012, a nod to Berners-Lee’s creation. Photograph: Martin Rickett/PA

“The web platform [the bundle of technologies that underpin the web] is always, at every moment, getting more and more powerful. The good news is that because the web platform is so powerful, a lot of the apps which are actually built, are built using the web platform and then cranked out using the various frameworks which allow you to generate an app or something from it.” Many of the installable applications that run on smartphones and tablets work in this way, with the app acting as little more than a wrapper for a web page.

“So there’s web technology inside, but what we’re saying is if, from the user’s point of view, there’s no URL, then we’ve lost.”

In some cases, that battle really has been lost. Apple runs an entire media operation inside its app store that can’t be read in normal browsers, and has a news app that spits out links that do not open if Apple News has been uninstalled.

But in many more, the same viral mechanics that allow platforms to grow to a scale that allows them to consider breaking from the web ultimately keep them tied to the openness that the platform embodies. Facebook posts still have permanent links buried in the system, as do tweets and Instagrams. Even the hot new thing, viral video app TikTok, lets users send URLs to each other: how else to encourage new users to hop on board?

It may be too glib to say, as the early Netscape executive Ram Shriram once did, that “open always wins out” – tech is littered with examples where a closed technology was the ultimate victor – but the web’s greatest strength over the past 30 years has always been the ability of anyone to build anything on top of it, without needing permission from Berners-Lee or anyone else.

But for that freedom to stick around for another 30 years – long enough to get the 50% of the world that isn’t online connected, long enough to see the next generation of startups grow to maturity – it requires others to join Berners-Lee in the fight. “The web is for everyone,” he says, “and collectively we hold the power to change it. It won’t be easy. But if we dream a little and work a lot, we can get the web we want.”




Jetpacks: why aren’t we all flying to work? | Technology



Those of a certain age may remember the opening ceremony of the 1984 Olympic Games in Los Angeles. As Rafer Johnson lit the eternal flame, a man strapped into a rocket-propelled backpack launched himself across the arena above the ticker tape and balloons, landing gracefully on the track before a TV audience of 2.5 billion.

It was a moment of triumph seeming to herald a new age in which, finally, teased for decades by Buck Rogers’ “degravity belt” and King of the Rocketmen, we’d all soon be fizzing off to work with our own personal jetpacks. Even Isaac Asimov confidently predicted that by the turn of the century, they would be “as common as a bicycle”.

So what happened? In 2018, shouldn’t we all be flying to work?

So-called “rocketmen” were not entirely the work of fiction. Russian pilot Aleksandr Andreyev had been working on one as early as 1919. Germany’s second world war rocket whiz Wernher von Braun allegedly worked on a “jet vest” for the US army after the war, which later became Project Grasshopper, aiming to build a “jump belt”.

All these attempts fizzled out due to lack of funds. It wasn’t until engineer Wendell Moore’s Bell Rocket Belt was first tested in 1960 that the world witnessed a working jetpack – propelled by a hydrogen peroxide rocket rather than a jet engine.

The US military commissioned Moore and John K Hulbert – a gas turbine specialist – to work on the Jet Belt, or “man rocket”, for possible military use. Moore’s first problem was fuel – anything capable of producing enough thrust burned up in a flash.

Moore hit upon using hydrogen peroxide, a compound commonly used as bleach, as a fuel. Two cylinders of peroxide were attached to a fibreglass frame, alongside a third containing nitrogen gas to pressurise them. Forced on to a catalyst, the peroxide decomposes into superheated steam, shooting through twin nozzles at 700C.

Thrust sorted, they soon encountered the human body’s natural resistance to aerial navigation. The device, which used directional thrusters controlled by hand-operated levers, was extremely tricky to stabilise.

Undaunted, Moore flew the first flights himself, but in February 1961 the belt swerved like an unattended firehose, snapped its tether and Moore fell 2.5 metres, breaking his kneecap.

After 36 tethered flights in a hangar, the untethered belt was finally flown outside by Harold “Hal” Graham, a 27-year-old Bell test engineer with no previous flight experience. Amid a cacophony of steam, Graham flew for just 13 seconds, covering over 34 metres (112 feet) at an altitude of 18 inches.

During the first public display at Fort Eustis, Virginia, on 8 June 1961, Graham lifted himself to around 4.5 metres, dangled around for 15 seconds and landed, offering a salute.

It was a big hit with the public. Graham piloted the device all around the world to great acclaim, but after landing on his head from 6.7 metres at a demo in Florida, he retired.

Graham handed over the reins to his friend Bill Suitor, who proved adept at flying the machine. Again, the public loved it, but the US army, which was paying for it, was disappointed.

The belt weighed 56.7kg with fuel, consumed 19 litres of expensive hydrogen peroxide during its 30-second flight and required a platoon of service personnel to attend to it. It flew neither high enough nor low enough to be at a safe height, and it was difficult to fly. In the opinion of the military, the Bell Rocket Belt was more a spectacular toy than an effective means of transport, so it withdrew funding.

But by then the idea had caught on. Jetpack enthusiast and engineer Nelson Tyler approached Suitor with his own belt. Suitor flew the Tyler belt to much acclaim at exhibitions across the US, culminating in the triumphant flight at the Olympics in 1984.

Stunt pilot Kinnie Gibson was next to fly Tyler’s belt, making himself a millionaire flying as the Rocketman until his death – from natural causes – in 2015. But Gibson’s success is partly why you haven’t come to work on a rocket pack this morning.

By 1990, the 90% pure hydrogen peroxide he needed was becoming too expensive. Gibson tried his engines with 88% peroxide and a catalyst to compensate, but the pack malfunctioned, smashing up his knee. He sued the chemical company over the malfunction – and won – leading to companies refusing to make the 90% peroxide you would need for that homebuilt rocket belt.

Then, in 1992, came the RB2000, a project based on the Bell model, built by Brad Barker, Larry Stanley and Joe Wright. If ever there was a moral reason not to build a rocket belt, this story – told in The Rocketbelt Caper by Paul Brown, and involving lump hammers, lawsuits, drug smuggling and murder – is it.

Fast-forward to the 2010s and jetpacks have become a reality again, if not quite in the form of the personal backpack we thought we would all be dangling from. Water-propelled Hydrolift/Jetlev devices have become a commonplace exotic seaside pursuit, while jetpack enthusiasts build untethered versions for themselves, usually similar to the designs of Wendell Moore at Bell. The big difficulty here is still the scarcity of hydrogen peroxide.

Astro Teller, inventor, tech whiz and head of Google’s research laboratory Google X, says even Google has looked into jetpacks but, at a quarter of a mile to the gallon and with a motor as loud as a Harley-Davidson, decided they weren’t practical.

More hopeful is the offering from Jetpack Aviation, which specialises in personal vertical takeoff and landing devices. It demoed its JB10 in Monaco (two minutes aloft) and in London in October (four minutes aloft) last year to some acclaim. The brain power behind it? One Nelson Tyler, who aims to release an electric version in 2019 … yours for £200k.



‘Wiltshire’s Iron Man’ in test flight

Then there’s Swiss ex-military and commercial pilot Yves “JetMan” Rossy, who straps two 8ft carbon wings and four small kerosene jet engines to his body. Online footage shows Rossy confidently executing loop-the-loops and roaring over the Grand Canyon strapped into his device. Drawback: one must throw oneself out of a plane, which makes for a rather awkward commute. Cost: £190k.

Richard Browning, meanwhile, is a British inventor dubbed “Wiltshire’s Iron Man”, whose Daedalus personal flightsuit sees him strap jet engines to his back and hands.

Perhaps the most promising development, which is already on the market, is that made by Tecnologia Aeroespacial Mexicana. It’s already sold four of its Tecaeromex Rocket Belts, and a helicopter version is in the works. You can even train to fly one on a JetLev.

We might soon be able to fly about strapped into jetpacks, but whether we should is another matter. It’s hard to disagree with original rocketman Suitor, who said, after his experiences with Barker and co: “I hope they never become popular. Nobody would be safe.

“You’d have people falling out of the air like unwanted Santa Clauses. I’ve had several close shaves myself and almost sliced myself up like a big soft slice of silky cheese. Could you imagine every idiot who could afford one flying about?”




Golden State Killer: the end of DNA privacy? Chips with Everything podcast | Technology



Subscribe and review: Acast, Apple, Spotify, Soundcloud, Audioboom, Mixcloud. Join the discussion on Facebook, Twitter or email us at chipspodcast[email protected]

A former police officer called Joseph James DeAngelo was arrested in April in connection with a series of murders, rapes and burglaries attributed to an unknown assailant known as the Golden State Killer.

This 40-year-old cold case was reopened after investigators acquired a discarded DNA sample and uploaded it to an “undercover profile” on a genealogy website called GEDmatch. Through this, they were able to find distant relatives and eventually narrow down their search to match descriptions of the killer obtained throughout the investigation.

But what about the innocent people who sent off their DNA to a genealogy website in the hope of tracing their ancestry, only to end up becoming part of a criminal investigation? Our DNA is one of the most inherently personal things we have, and this case raises questions about its privacy. If we spit into a test tube and send it off to a website for analysis, who owns that information? Who has access to it? And what can it be used for?

To try to answer some of these questions, Jordan Erica Webber talks to Prof Charles Tumosa of the University of Baltimore, Prof Denise Syndercombe-Court of King’s College London and Lee Rainie of the Pew Research Center.






Wikipedia: the most cited authors revealed to be three Australian scientists | Technology



An academic paper on global climate zones written by three Australians more than a decade ago has been named the most cited source on Wikipedia, having been referenced more than 2.8m times.

But the authors of the paper, who are still good friends, had no idea about the wider impact of their work until recently.

The paper, published in 2007 in the journal Hydrology and Earth System Sciences, used contemporary data to update a widely used model for classifying the world’s climates.

Known as the Köppen Climate Classification System, the model was first published by climatologist Wladimir Köppen in 1884, but it had not been comprehensively updated for decades.

The lead author of the paper is Dr Murray Peel, a senior lecturer in the department of infrastructure engineering at the University of Melbourne, and he co-authored the updated climate map with geography professor Brian Finlayson and engineering professor Thomas McMahon, both now retired.

“We are amazed, absolutely amazed at the number of citations,” Finlayson told Guardian Australia from his home in Melbourne. “We are not so much amazed at the fact it’s been cited as we are about the number of people who have cited it.

“It’s pleasing that research you’ve done is something other people are finding useful.”

The trio knew their paper had an impact in academic circles and in scientific literature, with the Köppen Climate Classification System used by researchers in a range of fields including geology, sociology, public health and climatology.

But Finlayson said they were unaware of the more widespread success until a journalist from Wired contacted them about the results of an analysis by Wikipedia of the top 10 sources by citation across every Wikipedia language. All 10 were reference books or scientific articles. The updated world map of the Köppen-Geiger climate classification boasted 2,830,341 citations, easily surpassing the source at No 2, a paper published in the Journal of Physical Chemistry that had 21,350 citations.

Finlayson said the popularity of the paper emphasised the importance of open science, which is the concept that data and findings should be openly and freely available so that others can use and benefit from them. Wikipedia operates on a similar concept, and credible citations are crucial to the encyclopaedia’s reliability.

“The journal we originally published the paper in is free and open access, and we chose the journal for that reason,” he said. “People noticed and said, ‘Hey, we have an updated climate map, we’ll use that’, and then it spread.”

At the time, open access journals were rare.

“I have always been a supporter of open science,” Finlayson said. “Research is no good to anyone locked in a cupboard, or published in a journal you have to pay a lot of money to access.”

He said he first began working with paper co-author McMahon in 1981, and that they got to know Murray, who is “a fair bit younger,” when he became one of their PhD students.

“He did his PhD on global hydrology and kept working with us in that area over the years, and we are all still very good friends and kept publishing together,” Finlayson, now 73, said. “We agree on most of the serious things and then every now and then we have differences of opinion. So we talk about it, and then we set out to test who is right, and write a paper on the results.

“If you want to form an academic group of people who work together well, the fact that they’re friends helps a lot. You’re not concerned about things like someone getting more kudos than you are.”




Celebrity species: from the DiCaprio water beetle to Obama spiders | Technology



Leonardo DiCaprio

A new species of water beetle, discovered by scientists in Borneo, has been named after the Oscar-winning star of The Revenant. With its partially retractable head and slightly protruding eyes, Grouvellinus leonardodicaprioi was not named for its resemblance to the 43-year-old actor and environmentalist but because the scientists “wanted to highlight that even the smallest creature is important”.




Scaptia beyonceae. Photograph: Bryan Lessard/CSIRO

Beyoncé

“It was the unique dense golden hairs on the fly’s abdomen that led me to name this fly in honour of the performer,” said Australian scientist Bryan Lessard upon the naming of Scaptia beyonceae, a rare species of horse fly found in Queensland. Australia’s science agency CSIRO contacted Beyoncé but, unsurprisingly, never heard back.




Mad about Madagascar: John Cleese and a woolly lemur.

John Cleese

The Monty Python actor, on the other hand, was thrilled to have a woolly lemur named after him by a team of scientists from Zurich University in 2005. “I’m absurdly fond of the little creatures,” said Cleese, who made a documentary about lemurs in Madagascar in 1998, and who now lives on there, in name at least, in the Bemaraha woolly lemur (Avahi cleesei).




Scales for the chief: Barack Obama receives a photograph of the Tosanoides obama reef fish from ocean explorer Sylvia Earle last year. Photograph: Brian Skerry/National Geographic

Barack Obama

A dozen species have been named after the 44th US president, including a species of lichen, two spiders, a Cuban bee, an extinct lizard and – most picturesque of all – a coral reef fish that goes by the name Tosanoides obama and can be found swimming around the Papahānaumokuākea marine national monument in Obama’s native Hawaii.



Agra katewinsletae. Photograph: Karolyn Darrow/Courtesy of National Museum of Natural History

Kate Winslet

DiCaprio’s co-star in Titanic has also had a beetle named after her, though some 11,000 miles of ocean divides the two species. Agra katewinsletae was discovered in Costa Rica by entomologist Terry Erwin, who explained: “Her character did not go down with the ship, but we will not be able to say the same for this elegant canopy species if all the rainforest is converted to pastures.”




No death and an enhanced life: Is the future transhuman? | Technology



The aims of the transhumanist movement are summed up by Mark O’Connell in his book To Be a Machine, which last week won the Wellcome Book prize. “It is their belief that we can and should eradicate ageing as a cause of death; that we can and should use technology to augment our bodies and our minds; that we can and should merge with machines, remaking ourselves, finally, in the image of our own higher ideals.”

The idea of technologically enhancing our bodies is not new. But the extent to which transhumanists take the concept is. In the past, we made devices such as wooden legs, hearing aids, spectacles and false teeth. In future, we might use implants to augment our senses so we can detect infrared or ultraviolet radiation directly or boost our cognitive processes by connecting ourselves to memory chips. Ultimately, by merging man and machine, science will produce humans who have vastly increased intelligence, strength, and lifespans; a near embodiment of gods.

Is that a desirable goal? Advocates of transhumanism believe there are spectacular rewards to be reaped from going beyond the natural barriers and limitations that constitute an ordinary human being. But to do so would raise a host of ethical problems and dilemmas. As O’Connell’s book indicates, the ambitions of transhumanism are now rising up our intellectual agenda. But this is a debate that is only just beginning.

There is no doubt that human enhancement is becoming more and more sophisticated – as will be demonstrated at the exhibition The Future Starts Here, which opens at the V&A museum in London this week. Items on display will include “powered clothing” made by the US company Seismic. Worn under regular clothes, these suits mimic the biomechanics of the human body and give users – typically older people – discreet strength when getting out of a chair or climbing stairs, or standing for long periods.

In many cases these technological or medical advances are made to help the injured, sick or elderly but are then adopted by the healthy or young to boost their lifestyle or performance. The drug erythropoietin (EPO) increases red blood cell production in patients with severe anaemia but has also been taken up as an illicit performance booster by some athletes to improve their bloodstream’s ability to carry oxygen to their muscles.

And that is just the start, say experts. “We are now approaching the time when, for some kinds of track sports such as the 100-metre sprint, athletes who run on carbon-fibre blades will be able to outperform those who run on natural legs,” says Blay Whitby, an artificial intelligence expert at Sussex University.

The question is: when the technology reaches this level, will it be ethical to allow surgeons to replace someone’s limbs with carbon-fibre blades just so they can win gold medals? Whitby is sure many athletes will seek such surgery. “However, if such an operation came before any ethics committee that I was involved with, I would have none of it. It is a repulsive idea – to remove a healthy limb for transient gain.”



Scientists think there will come a point when athletes with carbon blades will be able to out-run able-bodied rivals. Photograph: Alexandre Loureiro/Getty Images

Not everyone in the field agrees with this view, however. Cybernetics expert Kevin Warwick, of Coventry University, sees no problem in approving the removal of natural limbs and their replacement with artificial blades. “What is wrong with replacing imperfect bits of your body with artificial parts that will allow you to perform better – or which might allow you to live longer?” he says.

Warwick is a cybernetics enthusiast who, over the years, has had several different electronic devices implanted into his body. “One allowed me to experience ultrasonic inputs. It gave me a bat sense, as it were. I also interfaced my nervous system with my computer so that I could control a robot hand and experience what it was touching. I did that when I was in New York, but the hand was in a lab in England.”

Such interventions enhance the human condition, Warwick insists, and indicate the kind of future humans might have when technology augments performance and the senses. Some might consider this unethical. But even doubters such as Whitby acknowledge the issues are complex. “Is it ethical to take two girls under the age of five and train them to play tennis every day of their lives until they have the musculature and skeletons of world champions?” he asks. From this perspective the use of implants or drugs to achieve the same goal does not look so deplorable.

This last point is a particular issue for those concerned with the transhumanist movement. They believe that modern technology ultimately offers humans the chance to live for aeons, unshackled – as they would be – from the frailties of the human body. Failing organs would be replaced by longer-lasting high-tech versions just as carbon-fibre blades could replace the flesh, blood and bone of natural limbs. Thus humanity would transform “our frail version 1.0 human bodies into a far more durable and capable 2.0 counterpart,” as one group has put it.

However, the technology needed to achieve these goals relies on as yet unrealised developments in genetic engineering, nanotechnology and many other sciences and may take many decades to reach fruition. As a result, many advocates – such as the US inventor and entrepreneur Ray Kurzweil, nanotechnology pioneer Eric Drexler and PayPal founder and venture capitalist Peter Thiel – have backed the idea of having their bodies stored in liquid nitrogen and cryogenically preserved until medical science has reached the stage when they can be revived and their resurrected bodies augmented and enhanced.

Four such cryogenic facilities have now been constructed: three in the US and one in Russia. The largest is the Alcor Life Extension Foundation in Arizona whose refrigerators store more than 100 bodies (nevertheless referred to as “patients” by staff) in the hope of their subsequent thawing and physiological resurrection. It is “a place built to house the corpses of optimists”, as O’Connell says in To Be a Machine.




The Alcor Life Extension Foundation where ‘patients’ are cryogenically stored in the hope of future revival. Photograph: Alamy

Not everyone is convinced about the feasibility of such technology or about its desirability. “I was once interviewed by a group of cryonic enthusiasts – based in California – called the society for the abolition of involuntary death,” recalls the Astronomer Royal Martin Rees. “I told them I’d rather end my days in an English churchyard than a Californian refrigerator. They derided me as a deathist – really old-fashioned.”

For his part, Rees believes that those who choose to freeze themselves in the hope of being eventually thawed out would be burdening future generations expected to care for these newly defrosted individuals. “It is not clear how much consideration they would deserve,” Rees adds.

Ultimately, adherents of transhumanism envisage a day when humans will free themselves of all corporeal restraints. Kurzweil and his followers believe this turning point will be reached around the year 2030, when biotechnology will enable a union between humans and genuinely intelligent computers and AI systems. The resulting human-machine mind will become free to roam a universe of its own creation, uploading itself at will on to a “suitably powerful computational substrate”. We will become gods, or more likely “star children” similar to the one at the end of 2001: A Space Odyssey.

These are remote and, for many people, very fanciful goals. And the fact that much of the impetus for establishing such extreme forms of transhuman technology comes from California and Silicon Valley is not lost on critics. Tesla and SpaceX founder Elon Musk, the entrepreneur who wants to send the human race to Mars, also believes that to avoid becoming redundant in the face of the development of artificial intelligence, humans must merge with machines to enhance our own intellect.

This is a part of the world where the culture of youth is followed with fanatical intensity and where ageing is feared more acutely than anywhere else on the planet. Hence the overpowering urge to try to use technology to overcome its effects.

It is also one of the world’s richest regions, and many of those who question the values of the transhuman movement warn it risks creating technologies that will only create deeper gulfs in an already divided society where only some people will be able to afford to become enhanced while many others lose out.

The position is summed up by Whitby. “History is littered with the evil consequences of one group of humans believing they are superior to another group of humans,” he said. “Unfortunately in the case of enhanced humans they will be genuinely superior. We need to think about the implications before it is too late.”

For their part, transhumanists argue that the costs of enhancement will inevitably plummet and point to the example of the mobile phone, which was once so expensive only the very richest could afford one, but which today is a universal gadget owned by virtually every member of society. Such ubiquity will become a feature of technologies for augmenting men and women, advocates insist.

Many of these issues seem remote, but experts warn that the implications involved need to be debated as a matter of urgency. An example is provided by the artificial hand being developed by Newcastle University. Current prosthetic limbs are limited by their speed of response. But project leader Kianoush Nazarpour believes it will soon be possible to create bionic hands that can assess an object and instantly decide what kind of grip it should adopt.

“It will be of enormous benefit, but its use raises all sorts of issues. Who will own it: the wearer or the NHS? And if it is used to carry out a crime, who ultimately will be responsible for its control? We are not thinking about these concerns and that is a worry.”

The position is summed up by bioethicist professor Andy Miah of Salford University.

“Transhumanism is valuable and interesting philosophically because it gets us to think differently about the range of things that humans might be able to do – but also because it gets us to think critically about some of those limitations that we think are there but can in fact be overcome,” he says. “We are talking about the future of our species, after all.”

Body count

Limbs
The artificial limbs of Luke Skywalker and the Six Million Dollar Man are works of fiction. In reality, bionic limbs have suffered from multiple problems: becoming rigid mid-action, for example. But new generations of sensors are now making it possible for artificial legs and arms to behave in much more complex, human-like ways.

Senses
The light that is visible to humans excludes both infrared and ultra-violet radiation. However, researchers are working on ways of extending the wavelengths of radiation that we can detect, allowing us to see more of the world – and in a different light. Ideas like these are particularly popular with military researchers trying to create cyborg soldiers.

Power
Powered suits or exoskeletons are wearable mobile machines that allow people to move their limbs with increased strength and endurance. Several versions are being developed by the US army, while medical researchers are working on easy-to-wear versions that would be able to help people with severe medical conditions or who have lost limbs to move about naturally.

Brains
Transhumanists envisage the day when memory chips and neural pathways are actually embedded into people’s brains, thus bypassing the need to use external devices such as computers in order to access data and to make complicated calculations. The line between humanity and machines will become increasingly blurred.




Robotic exoskeletons such as this one can help people who have suffered spinal injuries. Photograph: Alamy




Killer robots: pressure builds for ban as governments meet | Technology



They will be “weapons of terror, used by terrorists and rogue states against civilian populations. Unlike human soldiers, they will follow any orders however evil,” says Toby Walsh, professor of artificial intelligence at the University of New South Wales, Australia.

“These will be weapons of mass destruction. One programmer and a 3D printer can do what previously took an army of people. They will industrialise war, changing the speed and duration of how we can fight. They will be able to kill 24-7 and they will kill faster than humans can act to defend themselves.”

Governments are meeting at the UN in Geneva on Monday for the fifth time to discuss whether and how to regulate lethal autonomous weapons systems (Laws). Also known as killer robots, these AI-powered ships, tanks, planes and guns could fight the wars of the future without any human intervention.

The Campaign to Stop Killer Robots, backed by Tesla’s Elon Musk and Alphabet’s Mustafa Suleyman, is calling for a preemptive ban on Laws, since the window of opportunity for credible preventative action is fast closing and an arms race is already in full swing.

“In 2015, we warned that there would be an arms race to develop lethal autonomous weapons,” says Walsh. “We can see that race has started. In every theatre of war – in the air, on the sea, under the sea and on the land – there are prototype autonomous weapons under development.”

Ahead of the meeting, the US has argued that rather than trying to “stigmatise or ban” Laws, innovation should be encouraged, and that use of the technology could actually reduce the risk of civilian casualties in war. Experts, however, fear the systems will not be able to distinguish between combatants and civilians and act proportionally to the threat.

France and Germany have been accused of shying away from tough rules by activists who say Europe should be leading the charge for a ban. A group of non-aligned states, led by Venezuela, is calling for the negotiation of a new international law to regulate or ban killer robots. The group seeks general agreement from states that “all weapons, including those with autonomous functions, must remain under the direct control and supervision of humans at all times”.

The new global arms race



The US launched an autonomous ship, Sea Hunter, on 7 April 2016. Photograph: Steve Dipaola/Reuters

Fully autonomous weapons do not yet exist, but high-ranking military officials have said the use of robots will be widespread in warfare in a matter of years. At least 381 partly autonomous weapon and military robotics systems have been deployed or are under development in 12 states, including China, France, Israel, the UK and the US.

Automatic systems, such as Israel’s Iron Dome and mechanised sentries in the Korean demilitarised zone, have already been deployed but cannot act fully autonomously. Research by the International Data Corporation suggests global spending on robotics will more than double, from $91.5bn in 2016 to $188bn in 2020, bringing full autonomy closer to realisation.

The US, the frontrunner in the research and development of Laws, has cited autonomy as a cornerstone of its plan to modernise its army and ensure its strategic superiority across the globe. This has caused other major military powers to increase their investment in AI and robotics, as well as in autonomy, according to the Stockholm International Peace Research Institute.

“We very much acknowledge that we’re in a competition with countries like China and Russia,” says Steven Walker, director of the US Defense Advanced Research Projects Agency, which develops emerging technologies and whose 2018 budget was increased by 27% last year.




The US X-47B unmanned autonomous aircraft. Photograph: Rex Features

The US is currently working on the prototype for a tail-less, unmanned X-47B aircraft, which will be able to land and take off in extreme weather conditions and refuel in mid-air. The country has also completed testing of an autonomous anti-submarine vessel, Sea Hunter, that can stay at sea for months without a single person onboard and is able to sink other submarines and ships. A 6,000kg autonomous tank, Crusher, is capable of navigating incredibly difficult terrain and is advertised as being able to “tackle almost any mission imaginable”.

The UK is developing its own unmanned vehicles, which could be weaponised in the future. Taranis, an unmanned aerial combat vehicle drone named after the Celtic god of thunder, can avoid radar detection and fly in autonomous mode.

Russia, meanwhile, is amassing an arsenal of unmanned vehicles, both in the air and on the ground; commentators say the country sees this as a way to compensate for its conventional military inferiority compared with the US. “Whoever leads in AI will rule the world,” said Vladimir Putin, the recently re-elected Russian president, last year. “Artificial intelligence is the future, not only for Russia, but for all humankind.”




Russia’s Armata T-14 battle tank. Photograph: Mikhail Metzel/Tass/PA Images

Russia has developed a robot tank, Nerehta, which can be fitted with a machine gun or a grenade launcher, while its semi-autonomous tank, the T-14, will soon be fully autonomous. Kalashnikov, the Russian arms manufacturer, has developed a fully automated, high-calibre gun that uses artificial neural networks to choose targets.

China has various similar semi-autonomous tanks and is developing aircraft and seaborne swarms, but information on these projects is tightly guarded. “As people are still preparing for a high-tech war, the old and new are becoming intertwined to become a new form of hidden complex ‘hybrid war’,” wrote Wang Weixing, a Chinese military research director, last year.

“Unmanned combat is gradually emerging. While people have their heads buried in the sand trying to close the gap with the world’s military powers in terms of traditional weapons, technology-driven ‘light warfare’ is about to take the stage.”

Pandora’s box

According to the Campaign to Stop Killer Robots, these systems threaten to become the “third revolution in warfare”, after the invention of gunpowder and nuclear bombs, and once the Pandora’s box is opened it will be difficult to close.

“Inanimate machines cannot understand or respect the value of life, yet they would have the power to determine when to take it away,” says Mary Wareham, the campaign coordinator. “Our campaign believes that machines should never be permitted to take human life on the battlefield or in policing, border control, or any circumstances.”

Supporters of a ban say fully autonomous weapons are unlikely to be able to fully comply with the complex and subjective rules of international humanitarian and human rights law, which require human understanding and judgment as well as compassion.

Pointing to the 1997 ban on landmines, now one of the most widely accepted treaties in international law, and the ban on cluster munitions, which has 120 signatories, Wareham says: “History shows how responsible governments have found it necessary in the past to supplement the limits already provided in the international legal framework due to the significant threat posed to civilians.”

It is believed that the weaponisation of artificial intelligence could bring the world closer to apocalypse than ever before. “Imagine swarms of autonomous tanks and jet fighters meeting on a border and one of them fires in error or because it has been hacked,” says Noel Sharkey, professor of artificial intelligence and robots at the University of Sheffield, who first wrote about the reality of robot war in 2007.

“This could automatically invoke a battle that no human could understand or untangle. It is not even possible for us to know how the systems would interact in conflict. It could all be over in minutes with mass devastation and loss of life.”




Bio bots: robots that mimic animal physiology | Technology



Last week, Nasa announced that it is developing robotic bees to gather information about areas of Mars that wouldn’t be accessible to a Mars rover. The bots could detect, for example, methane, a possible sign of life.



Dog lead: the canine-inspired SpotMini. Photograph: Boston Dynamics

Boston Dynamics’ latest robot resembles a dog with an arm where its head should be. It recently demonstrated it can use the arm for the complex (in robot terms) action of opening a door, despite the intervention of a man with a hockey stick.




Flexible friend: the aquatic MantaDroid. Photograph: National University of Singapore

Designed by Singaporean researchers, this bot swims like a manta ray. Its fins are flexible, giving it the ability to glide through turbulent seas. The team hopes that the bot could prove useful for underwater searches and gathering marine data.




Twist in the tale: Snakebot has been used on earthquake sites. Photograph: Carnegie Mellon University

Researchers at Carnegie Mellon University’s biorobotics lab have designed a series of non-lethal reptilian robots. Snakebots have been used to search sewers and earthquake sites and by surgeons to explore normally inaccessible sites.



Soft launch: Harvard’s Octobot. Photograph: Harvard Wyss Institute

Like its inspiration the octopus, this bot from Harvard’s Wyss Institute doesn’t feature any solid components. Underneath its silicone skin, chemical reactions between 3D-printed chambers power the pneumatic movement of its tentacles.




Apple poaches Google’s AI chief in push to save Siri | Technology



Apple has poached Google’s AI chief, John Giannandrea, to run its machine learning and AI operations, in the clearest sign yet that the iPhone creator is attempting to fix the problems that saw its early lead in the field crumble.

Scottish-born Giannandrea, who joined Google in 2010 after his startup, Metaweb, was acquired, has led the search firm’s push to become market leader in AI and machine learning. Under his command, Google Brain, the company’s main AI research team, has rebuilt the technology that underpins some of Google’s landmark products, including search, translation and voice recognition.

He also led Google into its position today, where it battles with Amazon for technological supremacy in the field of voice controlled assistants. That role was once held by Apple, whose Siri technology introduced the feature to many, but which failed to capitalise on the lead.

In March, technology site The Information detailed seven years of infighting within the Siri team at Apple, with multiple attempts to reorganise the basic technology that underpins the feature falling prey to internal politics which limited attempts to improve the overall product.

Siri’s problems came to a head in February, when the HomePod – Apple’s attempt to compete with Amazon’s Echo and Google’s Home smart speakers – received reviews which praised it for its audio quality even as they damned it for its poor AI.

In recent weeks, however, Apple has accelerated hiring for Siri, peaking with 161 openings posted in one day in March – and now Giannandrea’s hiring, first reported by the New York Times.

Beyond talent, though, the company has another issue to overcome: persuading customers that it can build an acceptable set of services without taking the same data-heavy approach favoured by Amazon and Google.

Where major technology firms have increased their acquisition of customer data in recent years, arguing that large datasets are crucial for training effective personalised AI, Apple has moved in the opposite direction, altering its technologies to gather less personal data about users than it used to.

It believes its users understand the value of privacy, and will accept a certain amount of friction in exchange for keeping their secrets secret. And the differing approach has offered chief executive Tim Cook the chance to hit out at competitors. Speaking last week, he said: “We could make a ton of money if we monetised our customers, if our customers were our product … We’ve elected not to do that. We’re not going to traffic in your personal life. Privacy to us is a human right, a civil liberty.”




Being a driverless car passenger proves ‘unsettling and extraordinary’ | Technology



How many people does it take to drive a driverless car? Five: a safety driver behind the wheel, an operator to program the route, and three engineers monitoring it in another car behind.

It is, to be fair, barely even a prototype. The autonomous car unveiled in Milton Keynes last week is bleeding-edge engineering, Britain’s entry in a global race to get the first driverless car on the road.

The converted Range Rover Sport can steer itself, speed up and slow down, stop at red lights and move off when they turn green. It can even cope with roundabouts, a fundamental skill in Milton Keynes. The five operators are there to examine every nuance of the car’s reaction to the ever-changing road conditions – cyclists, pedestrians and other drivers, and the weather, to name a few.

The public demonstration of the car by UK Autodrive, a consortium led by engineering company Arup, supported by Jaguar Land Rover, Ford and Tata Motors, should have been a celebratory milestone for British motor manufacturing.

Yet growing excitement about self-driving cars was shattered by the death of Elaine Herzberg in Tempe, Arizona, last week. She was hit by an autonomous Uber that apparently did not detect the 49-year-old when she wheeled her bicycle across the road at night.

“It’s dreadful,” says Tim Armitage, the project director for UK Autodrive. “It’s dreadful for the person involved – everyone involved. And it shows just how important it is to make sure that what we are demonstrating today is safe and that we don’t oversell the technology.”

UK Autodrive’s engineering efforts are focused on safety, but that is not the only concern. The government has invested £250m into several major research projects, involving at least 1,000 people, Armitage estimates. By 2035 the Department for Transport expects the industry to be worth £50bn to the economy – about a third of all UK manufacturing, although the current motor industry contributes about £58bn. Still, there’s a lot riding on the success of self-driving cars.

The government strategy for getting there is all about research. By dangling substantial grants, it has managed to corral car companies, universities and other interests into working together – a contrast to the US, for example, where Uber, Google’s Waymo and Toyota are entirely competitive.

In the UK there are 15 government-sponsored projects led by four major consortiums like UK Autodrive which are coming at the problem from different angles – predicting the behaviour of other drivers, cyclists and pedestrians, understanding the needs of elderly or disabled drivers, and the challenges of motorways.

All the research returns to the same theme: how to make roads safe while allowing people to travel where they need to.

As with any technology, there are glitches. On the demonstration drive in Milton Keynes, we have a near-miss when the car lurches forward at a junction in the car park. The safety driver, Jim O’Donoghue, saves us from an impending collision with a parked car by grabbing the wheel. Afterwards he’s not quite sure if he realised the car was going wrong or if it warned him it couldn’t cope. “I’ve been driving it for several weeks, so I’m really tuned in to it,” he says, underlining the unconscious, near-visceral familiarity that people have with technology they use regularly. Driver behaviour and human-machine interaction are all important elements of the research.

Being a passenger in this Range Rover is like being driven by a clumsy taxi driver. Acceleration feels aggressive – from the traffic lights, it’s foot-to-the-floor until we hit 30 miles per hour. Braking starts rather later than I would like.

Once up to speed, it drives extraordinarily close to the kerb. The side sensors can judge the distance much more precisely than a human. It all adds up to an unsettling experience that feels entirely unlike riding any other vehicle. But the car, it must be emphasised, is still being built – sensors and controls are adjusted on a daily basis.

These cars are the most eye-catching part of the research, with the aim to design a vehicle that can cope with any situation more safely than a human driver. The other, more immediate focus is on connected cars – using mobile phone technology to allow every vehicle on the road to talk to each other. Another demonstration shows how drivers in standard cars can be given information, such as whether cars in traffic ahead are braking sharply, or if an ambulance is approaching.

Possibly the most compelling project is on finding parking spaces. Cars with visual sensors can detect parking spaces and share the information with a network of connected cars, and inform the driver where the closest spot is. “About 30% of traffic is people driving round looking for parking spaces,” Armitage says. “In the future we might need far less car parking. Imagine what you could do with the space.”

The research invites questions that are much more fundamental than simply how to build a safe, self-driving car to replicate the UK’s fleet of driver-owned internal combustion vehicles. Instead of owning or leasing one vehicle for all your needs, people could access different types of vehicle for different journeys.

Another fundamental question is how much UK roads might change to enable self-driving cars to work more effectively. Cars could connect with each other, but also with road furniture like traffic lights, prompting drivers to slow down ahead of red lights. The growing infrastructure around electric cars shows how quickly this could happen. Arup laid electric cables and charging infrastructure in Milton Keynes when the council decided it would try to become the centre of electric cars in the UK. “When they first arrived, there were complaints from drivers of internal combustion engine cars that too many parking spaces were given over to charging points. Now they’re all full.”

One of the projects closest to completion involves self-driving pods. The Gateway consortium on the Greenwich peninsula, London, has been operating four driverless pods in pedestrian areas, examining how members of the public react. There are similar pods in Milton Keynes. “We’re hoping that will be ready before the end of the year,” Armitage says. “Singapore is very interested in what we’re doing. They want autonomous buses because they can’t recruit enough drivers.” It’s a potential export opportunity that would establish the UK’s self-driving credentials on the world stage. So while this Range Rover may not be fit to drive unsupervised yet, we are likely to see more autonomous vehicles operating very soon.


