Look ma, no hands

There’s a two-bar music sting, then a friendly female voice says: “Auto driving.” And we are.

Welcome to the future, where it’s look ma, no hands on the wheel, no feet on the pedals and, if you’re brave enough, no need for eyes on the road.

Google’s all-seeing, self-driving technology is taking care of it.

If you believe Dmitri Dolgov, the 36-year-old Russian-born doctor of computer science, who is cradling his laptop in the passenger seat, his baby will do a better job than you ever could.

We are gliding through the leafy streets of Mountain View, California, and the car is doing the driving. There’s plenty of traffic, plenty of roadworks, cyclists, pedestrians, unpredictable drivers and, duly noted, even a fire engine, siren wailing. The car decides it won’t impact our “path” as it’s on the other side of the road, so we continue.

This plain vanilla Lexus RX wouldn’t be out of place at school drop-off with a soccer mom at the wheel — except for the mast protruding from the roof and the radar in the front grille. It can see through fog and rain better than the human eye and right now it is doing a better job of flowing with the traffic than any cabdriver you’ve ever ridden with. A gentle lift of the throttle here, a dab of the brakes there when a white car cuts across lanes 100 yards up the road. “Crosswalk ahead,” says our friendly Siri-clone. Indeed there is, but we can’t see it yet. Or the cyclist just over the crest, beyond our field of view.

Dmitri can. He is scanning his laptop, which is alive with colored wire-frame boxes representing the world around us in real-time 3D. Cooling fans hum from the cargo area, which houses all the computers. It is strictly a no-look zone, as he makes quite clear before we take off. No cameras inside the car. Google doesn’t want to tip its hand to the automotive industry, which at this stage hasn’t worked out if Google founder Sergey Brin’s team is friend or foe. A licensor of this technology maybe, or a new rival in a showroom. He isn’t saying.

I am one of only a handful of outsiders permitted to ride in what promises to be a revolution in personal mobility — the autonomous, or driverless, car. And I am bowled over by what it is doing, seamlessly, in real-world traffic. So far, the Google fleet has logged over 600,000 miles of real-world driving, with a further 1.8 million on the simulator. This is no one-off prototype. There are 24 other plain vanilla Lexus RX models circulating around Silicon Valley and as far afield as Reno, Nevada.

What we are looking at is the most impressive piece of technology this humble motoring journalist has seen in 30 years, and it is coming, ready or not, with enormous promise for road trauma reduction, productivity improvements so great they could lift GDP and, of course, new mobility options for the disabled. To say nothing of traffic congestion, parking and the design of megacities.

With this technology, you can safely text, talk, check the emails, surf the web, do the conference call or even watch a movie while the car takes you to your destination. No need to find a parking spot either.

Dmitri is Google’s lead software engineer on the project, with more degrees than a compass, including applied physics and maths, and a PhD specializing in artificial intelligence. He came to Google six years ago from Stanford University via the US Defense Advanced Research Projects Agency (DARPA) and its challenge to develop autonomous, robot-like vehicles for use in urban environments, a.k.a. military drones for the road.

So, a red traffic signal ahead is easy stuff. The Google car gently squeezes the brakes and comes to a halt.

“It does not anticipate the light cycle,” says Dmitri. “It reads the state of the traffic signal. It knows how long it will be yellow and will decide, like a human driver, whether to go through based on the traffic situation, speed and how long the light has been yellow. Or to stop.” And it pauses, ever so briefly, when we get the green.

“We know statistically most accidents when people run a red light happen in the first 2 seconds of light transition. So we will not enter an intersection for the first 1.5 seconds after it. It’s one of the many things we build into the car to be a defensive driver. We stay out of people’s blind spots, we see a bike or a pedestrian on the side of the road and we nudge over so we give them a bit more room,” says the T-shirt-and-jeans Dr Dolgov.
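As a rough sketch only, and emphatically not Google’s code, the green-light dwell rule he describes could be captured in a few lines like these (the function and state names are invented for illustration; the 1.5-second figure is his):

```python
# Illustrative sketch of the defensive-driving rule described above, not
# Google's implementation. Even on a fresh green, the car holds back for a
# short dwell period, because most red-light-running crashes happen in the
# first couple of seconds after the lights change.

GREEN_DWELL_SECONDS = 1.5  # the figure quoted by Dolgov


def may_enter_intersection(light_state: str, seconds_since_change: float) -> bool:
    """Allow entry only on a green that has been green long enough."""
    if light_state != "green":
        return False
    return seconds_since_change >= GREEN_DWELL_SECONDS
```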

When he says the car can “see”, or offers the matter-of-fact descriptor “the car realized it was OK to continue”, he is describing human abilities rendered real by artificial intelligence.

He points to the laptop as we glide through Silicon Valley’s back streets. “Here is another car tracking in and we know we will have to share the same lane in the future. We have a way to predict based on probability what objects are going to do, so you saw our car start reacting to it pretty early. We slowed well before it was in our lane.”
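In rough terms, and purely as a hypothetical sketch rather than the production system, the idea is to sum the probability of any predicted trajectory that ends up in the car’s lane and start easing off as soon as that risk crosses a threshold:

```python
from dataclasses import dataclass

# Hypothetical sketch of the probability-weighted prediction described above;
# the data structure and threshold are invented for illustration.

@dataclass
class PredictedPath:
    enters_our_lane: bool   # does this trajectory cross into our lane?
    probability: float      # how likely the tracker thinks it is

MERGE_RISK_THRESHOLD = 0.3  # hypothetical tuning value


def should_slow_early(paths: list[PredictedPath]) -> bool:
    """Begin slowing as soon as the combined merge risk is non-trivial."""
    merge_risk = sum(p.probability for p in paths if p.enters_our_lane)
    return merge_risk >= MERGE_RISK_THRESHOLD
```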

The impressive thing is just how routine it all feels — inputs to brakes, throttle and steering are so linear and deft they could belong to an expert driver. There’s none of the binary on/off jerkiness common in so many simulators. The on-board technology is a dazzling combination of sciences. Spinning around on top of the roof mast is a laser rangefinder made by Velodyne, with a 360-degree field of view and 64 laser beams that cycle around 10 times a second. This Lidar technology gives a 3D view of everything around the car. “It also gives you intensity data in the infra-red spectrum. This is very useful if you look at the road. You can see where the lane markers are,” says Dmitri. The disc below contains cameras that see and classify the world around it — what is a bicycle, what is a roadworks traffic cone, what is a car.
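To make the intensity point concrete, here is a toy example, with an assumed data layout and made-up thresholds, of how bright infrared returns near road level could be sifted out of a point cloud as lane-marker candidates:

```python
import numpy as np

# Toy illustration of using Lidar intensity to find lane markers: paint
# reflects far more strongly in the infrared than bare asphalt, so bright
# returns near road level are good lane-marker candidates. The height and
# reflectivity cut-offs below are invented for illustration.

def lane_marker_points(points: np.ndarray, intensity: np.ndarray) -> np.ndarray:
    """points: (N, 3) xyz in metres, vehicle frame; intensity: (N,) returns."""
    near_ground = np.abs(points[:, 2]) < 0.2       # returns close to road level
    bright = intensity > 0.7 * intensity.max()     # strong infrared reflection
    return points[near_ground & bright]
```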

Below the cameras is a Google-developed laser rangefinder that has a much longer range and much higher resolution.

“It helps find small obstacles on the road and helps with classification. It has high vertical resolution so it helps us tell a pedestrian from a tree. We get very nice 3D data from it,” says the science man. And then there’s the grille-mounted radar. That it all works is beyond dispute, but it’s ugly and expensive — at least for now.
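A crude illustration of that classification idea, with invented features and cut-offs rather than anything Google has disclosed, might look at nothing more than the height and footprint of a cluster of returns:

```python
import numpy as np

# Crude sketch of shape-based classification: with high vertical resolution,
# the silhouette of a cluster of returns helps separate a pedestrian from a
# tree. Real systems use far richer features; these cut-offs are made up.

def classify_cluster(points: np.ndarray) -> str:
    """points: (N, 3) xyz returns belonging to one tracked object."""
    height = points[:, 2].max() - points[:, 2].min()
    footprint = max(np.ptp(points[:, 0]), np.ptp(points[:, 1]))
    if height < 2.2 and footprint < 1.0:
        return "pedestrian"   # short, narrow column of returns
    if height > 3.0 and footprint > 2.0:
        return "tree"         # tall, with a wide canopy
    return "unknown"
```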

“Once you move from experimental hardware to investment, the cost is not prohibitive. Look at the radars. When we started they were military, huge and expensive, multiple millions of dollars. Now that there is a market for them in the car industry, they are a couple of hundred bucks. There is nothing fundamentally challenging there. You have pretty good cameras in your cell phones. These cameras are more expensive, but if there is a market for them it will be a similar story. Right now the laser on this roof is about $75,000, but they only make a couple of hundred a year. Very small market, very specialized application, but with scale we can take it from there…”

Hiding the various bits of technology in a more aesthetically pleasing form is the next step, some of it evident already with Google’s smaller Shout prototype, a whimsical egg-shaped vehicle which dispenses with a steering wheel altogether. That one is further in the future and not yet allowed on the road. As we glide back to Google’s discreet autonomous HQ, the man who writes the software responds to the obvious question. How soon will we be driving — or not driving — this stuff?

“Sergey says within a number of years that you can count on one hand, you will see this technology deployed in one form or another. I don’t know how many years it is going to take, but I think we will have autonomous cars driving everywhere.”

He is low key but very passionate about the benefits.

“This is one of the most important things technology will do in the next 10 years. Self-driving for us is a safety, efficiency and comfort thing, but for many others it will be an enabler. People with disabilities, the elderly: this will give them mobility and independence.”

The checklist of advantages includes the big safety fillip of cutting the 80+ percent of accidents attributed to human error. Factor in huge improvements in productivity as the mobile office becomes truly mobile. A teleconference on the drive home will be no problem.

Dr Lawrence Burns of the University of Michigan, a former director of Columbia University’s Sustainable Mobility program and a former General Motors research chief, believes autonomous driving is inevitable, highly beneficial and will change the way we live, but will face a fierce baptism from the naysayers and vested interests. Detroit and Silicon Valley are separated by more than geography.

Individual car ownership levels will be called into question when people work out that 90% of a vehicle’s total cradle-to-crusher hours are spent stationary, chewing its head off in depreciation, insurance, registration and maintenance. Dr Burns wouldn’t be investing in parking garages either. The autonomous car can do a pick-up and drop-off at the destination, then return to base, saving a lot of frustration and opening up prime downtown real estate for other things. But there are obstacles, and potentially the largest is not the technology but the psychology.

Testing by Google and by institutions as diverse as Stanford University’s Automotive Innovation Center and Mercedes-Benz Research and Development has reached the same conclusion. Within a very short period of experiencing autonomous driving, drivers lose the plot.

Most prototypes in circulation now are semi-autonomous, meaning the driver toggles autonomous mode on and off, rather like cruise control. As a failsafe, if the artificial intelligence on board decides it can’t do the job with 100% safety, it hands back control with an audible warning and a few seconds’ advance notice. This is called the transition, and it is a thorny problem.
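As a bare-bones sketch of that failsafe (the timings and confidence threshold here are invented, not Google’s), the transition can be thought of as a small state machine that warns first and only then drops to manual:

```python
import enum

# Bare-bones sketch of the handover failsafe described above. The confidence
# floor and the length of the warning period are invented for illustration.

class Mode(enum.Enum):
    AUTONOMOUS = "autonomous"
    HANDOVER_PENDING = "handover_pending"
    MANUAL = "manual"

HANDOVER_NOTICE_SECONDS = 4.0   # "a few seconds' advance notice"
CONFIDENCE_FLOOR = 0.99         # hand back if safety can no longer be guaranteed


def step(mode: Mode, confidence: float, warned_at: float | None, now: float):
    """Advance the mode one tick; returns (new_mode, warned_at)."""
    if mode is Mode.AUTONOMOUS and confidence < CONFIDENCE_FLOOR:
        print("Please take control of the vehicle.")   # audible warning to the driver
        return Mode.HANDOVER_PENDING, now
    if mode is Mode.HANDOVER_PENDING and now - warned_at >= HANDOVER_NOTICE_SECONDS:
        return Mode.MANUAL, None                        # driver is expected to have taken over
    return mode, warned_at
```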

Giving up control turns out to be the lesser issue: drivers switch readily into passenger mode and relax. In doing so they lose situational awareness. When the car needs to hand back control to the driver, as it will in all but fully autonomous operations, we can’t seem to cope quickly enough.

“If you have an autonomous car, but not completely autonomous, that transition is very tricky,” says Dr Dolgov. “You are checking your emails on the way to work and the car needs you to retake control. This transition (looking up, understanding the context of the scene and assuming control) can be very challenging for the first three to four seconds.”

Legislation and legal liability will be a bureaucratic battlefield in every US state. It’s this stuff that will delay the arrival of the driverless car, not the technology. As the Google RX announces it is handing back manual control, we enter the HQ car park with a sense of inevitability about it all.

“There are some acceptance challenges but the benefits are so clear we will get over those hurdles,” says Dmitri Dolgov.

“Some of the stuff we hear is: ‘I love driving, don’t take that away from me!’ Nobody is going to take that away, but it is a rare occasion that you actually have a nice car on a nice road. Most of the time you spend stuck in traffic, wasting time.”

So what keeps this 36-year-old whizz awake at night? “Pure excitement. This is awesome.”