Future of driving

Why experts believe cheaper, better lidar is right around the corner

Lidar used to cost $75,000. Experts expect this to fall to less than $100.

Timothy B. Lee
Illustration: A car in traffic uses lidar to create awareness of other vehicles
Credit: Aurich / Getty

On November 3, 2007, six vehicles made history by successfully navigating a simulated urban environment—and complying with California traffic laws—without a driver behind the wheel. Five of the six were sporting a revolutionary new type of lidar sensor that had recently been introduced by an audio equipment maker called Velodyne.

A decade later, Velodyne's lidar continues to be a crucial technology for self-driving cars. Prices are coming down, but lidar sensors are still fairly expensive. Velodyne and a swarm of startups are trying to change that.

In this article, we'll take a deep dive into lidar technology. We'll explain how the technology works and the challenges technologists face as they try to build lidar sensors that meet the demanding requirements for commercial self-driving cars.

Some experts believe the key to building lidar that costs hundreds of dollars instead of thousands is to abandon Velodyne's mechanical design—where a laser physically spins around 360 degrees, several times per second—in favor of a solid-state design that has few if any moving parts. That could make the units simpler, cheaper, and much easier to mass-produce.

Nobody knows how long it will take to build cost-effective automotive-grade lidar. But all of the experts we talked to were optimistic. They pointed to the many previous generations of technology—from handheld calculators to antilock brakes—that became radically cheaper as they were manufactured at scale. Lidar appears to be on a similar trajectory, suggesting that in the long run, lidar costs won't be a barrier to mainstream adoption of self-driving cars.

An unlikely lidar pioneer

Velodyne's 64-channel lidar was a breakthrough for driverless cars. Credit: Steve Jurvetson


Scientists have been using laser light to measure distances since the 1960s, when a team from MIT precisely measured the distance to the moon by bouncing laser light off of it. But the story of lidar for self-driving cars starts with entrepreneur and inventor David Hall.

In the early 2000s, Hall was the founder and CEO of Velodyne, a successful audio equipment maker. Hall was also a robotics enthusiast.

"We were appearing on BattleBots and Robot Wars around that time, which was mostly an excuse for advertising our Velodyne loudspeaker," Hall told Wired in a recent interview.

So when DARPA, the military research agency that birthed the Internet, announced a robot car race called the Grand Challenge, Hall decided to enter. For the 2004 competition, Hall and his brother built a robot truck guided by a pair of cameras. That approach didn't work well enough to finish the race—but neither did anyone else's robot.

For the second DARPA race in 2005, the Hall brothers abandoned cameras and focused on lidar instead. Other teams were also using lidar, but lidar sensors on the market were primitive. A popular lidar at the time was the SICK LMS-291. This was a two-dimensional lidar system, meaning it could only scan a single horizontal slice of the world.

That helped cars detect objects like walls and tree trunks that went straight up from the ground. But it could get teams in trouble if they encountered obstacles—like railroad crossing arms—with more irregular shapes. And it was useless if cars wanted to actually recognize objects—like distinguishing a pedestrian from a street sign, for example—rather than just avoid obstacles.

The three-dimensional point cloud captured by a Velodyne 64-laser lidar (left) is far richer than the point clouds captured by two-dimensional lidars like the SICK 200-series (right). Credit: Velodyne / Gerry

So the Hall brothers developed a new type of lidar. They mounted a stack of 64 lasers onto a rotating gimbal that spun around 360 degrees. This allowed the unit to collect a truly three-dimensional view of the world around the vehicle.
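Conceptually, each return from a spinning unit like this is a distance measurement tagged with two angles: the fixed elevation angle of whichever laser fired, and the head's current rotation angle. Turning those into a 3D point is simple trigonometry. Here's a minimal sketch; the function name and angle conventions are illustrative, not Velodyne's actual data format:

```python
import math

def polar_to_cartesian(range_m, azimuth_deg, elevation_deg):
    """Convert one lidar return to a 3D point.

    range_m: measured distance to the target, in meters
    azimuth_deg: rotation angle of the spinning head
    elevation_deg: fixed vertical angle of the individual laser
    """
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = range_m * math.cos(el) * math.cos(az)
    y = range_m * math.cos(el) * math.sin(az)
    z = range_m * math.sin(el)
    return (x, y, z)

# A return from a laser aimed 2 degrees below horizontal, with the head
# rotated 90 degrees, hitting something 20 meters away:
x, y, z = polar_to_cartesian(20.0, 90.0, -2.0)
```

Repeating this for 64 lasers, hundreds of azimuth steps per revolution, and several revolutions per second is what produces the dense point clouds in the image above.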

The Hall brothers didn't win the 2005 race, but their superior lidar sensor attracted interest from other teams. By the time of DARPA's third and final race in 2007, Velodyne had begun manufacturing and selling lidar units. Hall didn't enter the 2007 race, but most of the top teams—including five of the six that finished—were sporting Velodyne lidar.

Why lidar is essential for self-driving cars

An Uber driverless Ford Fusion drives in Pittsburgh, Pennsylvania, with a spinning lidar on top. Credit: Jeff Swensen/Getty Images

Velodyne has dominated the market for self-driving car lidar ever since. Google hired DARPA Grand Challenge veterans to run its self-driving car program, and they put Velodyne units on the early Google cars. Other companies working on self-driving cars bought Velodyne gear, too.

Today, most self-driving cars rely on a trio of sensor types: cameras, radar, and lidar. Each has its own strengths and weaknesses. Cameras capture high-resolution color images, but they can't measure distances with any precision, and they're even worse at estimating the velocity of distant objects.

Radar can measure both distance and velocity, and automotive radars have gotten a lot more affordable in recent years. "Radar is good when you're close to the vehicle," says Craig Glennie, a lidar expert at the University of Houston. "But because radar uses radio waves, they're not good at mapping fine details at large distances."

Lidar offers the best of both worlds. Like radar, lidar scanners can measure distances with high accuracy. Some lidar sensors can even measure velocity. Lidar also offers higher resolution than radar. That makes lidar better at detecting smaller objects and at figuring out whether an object on the side of the road is a pedestrian, a motorcycle, or a stray pile of garbage.

And unlike cameras, lidar works about equally well in any lighting condition.

The big downside of lidar is that it's expensive. Velodyne's original 64-laser lidar cost a whopping $75,000. More recently, Velodyne has begun offering smaller and cheaper models with 32 and 16 lasers. Velodyne is advertising a $7,999 price for a 16-laser model introduced in 2014. Velodyne recently announced a new 128-laser unit, though it has been tight-lipped about pricing.

Last year, Velodyne announced an order from Ford for a new solid-state lidar design. Velodyne said that it had "set target pricing of less than $500 per unit in automotive mass production quantities." But it didn't say how many units Ford was buying, how much Ford was actually paying, or how soon Velodyne expected to reach mass-market scales and price points.

Velodyne can't afford to rest on its laurels. The company is about to face a lot of competition from rivals building lidar systems with very different designs.

The rise of solid-state lidar

Carmakers expect automotive components to last for hundreds of thousands of miles over bumpy roads and a wide range of temperatures. It's more challenging to make a system cheap and reliable if it has moving parts, as Velodyne's spinning lidar units do.

So a lot of experts believe the key to making lidar a mainstream technology is to shift toward solid-state designs with no moving parts. That requires some mechanism for directing laser light in different directions without mechanically moving the laser unit itself.

Researchers have developed three major ways of doing this:

Microelectromechanical systems (MEMS) use a tiny mirror—millimeters across—to steer a fixed laser beam in different directions. Such a tiny mirror has a low moment of inertia, allowing it to move very quickly—quickly enough to trace out a two-dimensional scanning pattern in a fraction of a second.
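One common way for a two-axis MEMS mirror to cover its field of view is to oscillate each axis at a different frequency, so the beam traces a Lissajous pattern over the scene. This sketch illustrates the idea; the frequencies and scan amplitudes are illustrative values, not taken from any particular product:

```python
import math

def mems_scan_angles(t, fx=2000.0, fy=300.0, amp_x=25.0, amp_y=10.0):
    """Mirror deflection (in degrees) at time t for a two-axis MEMS mirror.

    The two axes oscillate at different frequencies (fx, fy, in Hz), so
    over time the beam traces a Lissajous pattern that fills the field
    of view. amp_x and amp_y are the half-angles of the scan in degrees.
    """
    theta_x = amp_x * math.sin(2 * math.pi * fx * t)
    theta_y = amp_y * math.sin(2 * math.pi * fy * t)
    return theta_x, theta_y

# Sample the beam direction at 100 kHz over one slow-axis period (1/300 s):
points = [mems_scan_angles(i / 100000.0) for i in range(334)]
```

Because the mirror's deflection is a simple function of time, the controller always knows which direction each laser pulse went, which is what lets the sensor assign an angle to every range measurement.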

Two leading startups working on MEMS lidar sensors are Luminar and Innoviz, according to Sam Abuelsamid, an analyst at Navigant Research. Chipmaker Infineon recently acquired Innoluce, a startup with MEMS expertise.

Abuelsamid told Ars that one advantage of the MEMS approach is that a lidar sensor can dynamically adjust its scan pattern to focus on objects of particular interest, directing more fine-grained laser pulses in the direction of a small or distant object to better identify it—something that's not possible with a conventional mechanical laser scanner.

Phased arrays use a row of emitters that can change the direction of a laser beam by adjusting the relative phase of the signal from one emitter to the next. If the emitters all emit light in sync, the resulting beam will point straight ahead. But if emitters on the left-hand side have a phase slightly behind the emitters on the right, the beam will point toward the left—and vice versa. An illustration from Wikipedia shows how this works:

Credit: Afraz

Phased arrays allow non-mechanical beam steering in one dimension. To steer the beam in the second dimension, these systems typically use a grating array that works like a prism, changing the direction of light based on its frequency.
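For a uniform one-dimensional array, the steering angle follows directly from the phase step between adjacent emitters. Here's a sketch of that standard relationship; the 905-nanometer wavelength and two-micron emitter spacing below are illustrative numbers, not drawn from any particular product:

```python
import math

def steering_angle_deg(phase_step_rad, wavelength_m, spacing_m):
    """Beam direction for a uniform linear phased array.

    phase_step_rad: phase difference between adjacent emitters (radians)
    wavelength_m: laser wavelength (e.g. 905e-9 for 905 nm)
    spacing_m: distance between adjacent emitters
    Returns the steering angle away from straight ahead, in degrees.
    """
    sin_theta = phase_step_rad * wavelength_m / (2 * math.pi * spacing_m)
    return math.degrees(math.asin(sin_theta))

# Emitters all in phase: the beam points straight ahead (0 degrees).
straight = steering_angle_deg(0.0, 905e-9, 2e-6)
# A quarter-wave phase step between neighbors steers the beam off-axis:
steered = steering_angle_deg(math.pi / 2, 905e-9, 2e-6)
```

Since the phase step is set electronically, the beam can be redirected without any part of the device physically moving.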

At this point, phased-array lidars are mostly still in laboratories. "I'd say phased array is a cool thing for the future," says Alex Lidow, the CEO of Efficient Power Conversion, which makes integrated circuits that are incorporated in a number of lidar products. "Today it's spinning disk or MEMS, and spinning disk is by far the dominant."

Quanergy is one startup reportedly working on phased-array lidar. A key technical advisor to Strobe, the lidar startup GM acquired in October, has done research focused on phased-array lidar systems—suggesting Strobe may be working on phased-array technology.

Flash lidar dispenses with the scanning approach altogether and operates more like a camera. A laser beam is diffused so it illuminates an entire scene in a single flash. Then a grid of tiny sensors captures the light as it bounces back from various directions.

One big advantage of this approach is that it captures the entire scene in a single instant, avoiding the complexities that occur when an object—or the lidar unit itself—moves while a scan is in progress. But it also has some significant disadvantages.

"The larger the pixel, the more signal you have," Sanjiv Singh, a robotics expert at Carnegie Mellon, told Ars. Shrinking photodetectors down enough to squeeze thousands of them into a single array will produce a noisier sensor. "You get this precipitous drop in accuracy."

Range is a key limitation for automotive lidar


What this means in practice is that flash lidar isn't well suited for long-range detection. And that's significant because experts believe that fully self-driving cars will need lidar capable of detecting objects 200 to 300 meters away.

Jim McBride, a Ford executive who organized a team for the 2005 DARPA Grand Challenge, explained why in a September interview with Ars.

McBride said to imagine a self-driving car that wants to merge into traffic moving at highway speeds. "Traffic is moving probably 60 miles per hour, roughly 30 meters a second," McBride told Ars. "Most cars will take 6 to 10 seconds to get up to highway speed." So a car will want to be able to see cars that are 6 to 10 seconds—or 180 to 300 meters—away.

Bouncing a laser off an object 300 meters away and then detecting the reflection isn't easy for any lidar system. Lidar makers are exploring a few different approaches for extending range.

Most lasers on lidar sensors today operate in the near-infrared range—905 nanometers is a popular wavelength. Because that's close to the wavelength of visible light (red light starts around 780 nanometers), it focuses onto the retina just as visible light does, and too much of it can damage people's eyes by frying the retina's sensitive light detectors. For this reason, the power level of 905-nanometer lasers is strictly regulated.

So one alternative approach is to use another wavelength that doesn't create a risk of eye damage. Luminar, for example, is developing a lidar product that uses 1,550nm lasers. Because this is far outside the visible light range, it's much safer for people's eyes. As IEEE Spectrum explains it, "the interior of the eye—the lens, the cornea, and the watery fluid inside the eyeball—becomes less transparent at longer wavelengths." The energy from a 1,550nm laser can't reach the retina, so the eye safety concerns are much less serious.

This allows 1,550nm lidars to use much higher power levels—IEEE Spectrum says 40 times as much—which naturally makes it easier to detect laser pulses when they bounce off distant objects. The downside here is that 1,550nm lasers and detectors aren't cheap because they require more exotic materials to manufacture.
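As a rough back-of-the-envelope check of what that power headroom buys (a simplification that ignores atmospheric absorption and detector differences between the two wavelengths): for a diffuse target, the returned signal falls off roughly with the square of the range, so maximum detection range scales with the square root of transmitted power.

```python
import math

# Returned signal power from a diffuse target falls off roughly as 1/R^2,
# so maximum range scales as the square root of transmitted power.
power_ratio = 40.0  # 1,550 nm eye-safety limit vs. 905 nm, per IEEE Spectrum
range_multiplier = math.sqrt(power_ratio)
print(round(range_multiplier, 1))  # roughly 6.3x the range
```

That crude estimate suggests why 1,550nm designs are attractive for the 200-to-300-meter ranges discussed above, even with their higher component costs.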

Another way to improve the range of lidar units is to increase the sensitivity of the detectors. Argo AI, Ford's self-driving car unit, recently acquired Princeton Lightwave, a lidar company that uses highly sensitive detectors known as single-photon avalanche diodes. As the name suggests, these detectors are sensitive enough to be triggered by a single photon of light at the appropriate frequency.

These highly sensitive detectors have been used in military and surveying applications for a while. Princeton announced last year that it was working on bringing the technology to the automotive market.

Time-of-flight versus frequency modulation

GM's Cruise subsidiary recently acquired Strobe, a company that seems to be developing a lidar sensor based on continuous-wave frequency modulation technology. Credit: GM

A third design choice facing companies developing next-generation lidars is how to measure the time it takes a laser beam to reach its target and bounce back, and therefore the distance to the target. Most lidars today use a straightforward time-of-flight approach: they transmit a very short pulse and then use a super-accurate clock to measure how long it takes for the pulse to bounce back.
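The arithmetic behind time-of-flight ranging is simple; what's hard is the timing precision it demands. A sketch:

```python
# Time-of-flight ranging: a pulse travels to the target and back, so the
# distance is half the round-trip time multiplied by the speed of light.
SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def distance_from_round_trip(seconds):
    return SPEED_OF_LIGHT * seconds / 2

# A 2-microsecond round trip corresponds to a target roughly 300 meters
# away. Note the precision required: one nanosecond of timing error is
# about 15 centimeters of range error.
d = distance_from_round_trip(2e-6)
```

That nanosecond-scale timing requirement is one reason time-of-flight lidars need fast, expensive electronics.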

Some companies are working on a more complex approach called continuous-wave frequency modulation (CWFM). As the name suggests, this approach sends out a continuous laser beam whose frequency steadily rises. The light is split into two beams: one travels to the target and bounces back, and is then recombined with the other beam.

Because the two beams have traveled different distances by the time they're recombined, they have different frequencies. This produces an interference pattern with a beat frequency that depends on how far the first beam traveled.

This approach has several advantages.

CWFM lidar sensors "are pretty much totally immune to background light," says Paul Suni, a researcher at Lockheed Martin who has worked on the technology. A conventional time-of-flight lidar can get confused if there are other light sources transmitting at the same frequency. A CWFM system is more robust; Suni says it can continue functioning even in the face of glare from the Sun.

This will be particularly important in a future where every car has several lidar sensors. With so many lasers bouncing around, it would be easy for conventional time-of-flight lidar sensors to get confused.

Another big advantage: CWFM lidar can detect the velocity of objects as well as their distance. Suni explained to Ars how this works: "If you then have relative motion between your sensor and a car driving down the street, then you will also get a doppler shift of the signal as it reflects off the moving target. If you only measure one frequency, you can't tell if it was because it was at a given range, or if it was moving."

To distinguish the two, Suni says, CWFM lidars use an "upchirp followed by a downchirp." Measuring first with a rising frequency and then with a falling one reverses the sign of the distance-frequency relationship, while the velocity-frequency relationship is the same for both measurements. Then it's a simple matter of algebra to figure out both distance and velocity simultaneously.
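Given the two beat frequencies, that algebra looks like the following sketch. It follows common textbook conventions for frequency-modulated ranging; the sign convention and the function name are illustrative, not drawn from any particular product:

```python
def range_and_velocity(f_beat_up, f_beat_down, chirp_slope_hz_per_s,
                       wavelength_m):
    """Recover range and radial velocity from up- and down-chirp beats.

    Convention used here: an approaching target lowers the up-chirp
    beat frequency and raises the down-chirp beat frequency, so summing
    the beats cancels the Doppler term and differencing isolates it.
    """
    c = 299_792_458.0
    f_range = (f_beat_up + f_beat_down) / 2    # distance term survives
    f_doppler = (f_beat_down - f_beat_up) / 2  # motion term survives
    distance_m = c * f_range / (2 * chirp_slope_hz_per_s)
    velocity_m_s = f_doppler * wavelength_m / 2  # positive = approaching
    return distance_m, velocity_m_s
```

The sum of the two beat frequencies depends only on range, and their difference depends only on velocity, so one up/down chirp pair yields both quantities at once.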

That seems to be the approach taken by the lidar startup Aeva, which was recently covered by The New York Times. It may also be the approach of Strobe, the lidar startup that was recently acquired by General Motors.

Cheaper lidar is coming

An indium phosphide wafer contains photonic integrated circuits. Scientists are developing the capability to fit all the components of a lidar sensor onto a single chip. Credit: JonathanMarks

Companies are experimenting with a lot of different designs for next-generation lidar, and experts we talked to couldn't say which of these designs would be successful. But everyone we talked to was confident we'd see substantial price declines in the next few years.

Experts told us there are a lot of historical examples where previously expensive hardware became affordable—and then downright cheap—as it was manufactured at larger and larger volumes.

Lidow points to the falling cost of anti-lock braking systems, which automatically pump the brakes to prevent skidding, as a model. "I started working with GM on this concept in 1979," Lidow said.

Early versions of the technology cost $2,000 per wheel. That was far too expensive for the consumer market, but it was well within the budgets of airlines. As the price came down, trucking companies started putting them on 18-wheelers.

"City buses started buying them," Lidow told Ars. "Then Cadillac offered it as a high-end option."

Today, Lidow said, ABS hardware costs around $5 per wheel.

"I have lived through this evolution," Lidow added. "Car companies have a way of grinding every bit of cost down. There's nothing in a lidar system that is more complicated than those first ABS systems."

Lidow predicts that in the long run, lidar sensors could cost as little as $10 each.

Carnegie Mellon's Singh also sees big price drops ahead. "In the early going, when calculators first came out, they were $1,000 each," Singh told Ars. "Then we started building millions of them." Companies figured out how to fit all of the necessary electronics onto a single chip, and costs plunged.

A number of researchers, including some sponsored by DARPA, are working on techniques for squeezing all the components for a lidar sensor onto a single chip. They've been able to draw on innovations pioneered in the communications market, because fiber optic chipmakers are trying to solve a similar problem.

"Our lidar chips are produced on 300-millimeter wafers, making their potential production cost on the order of $10 each at production volumes of millions of units per year," MIT researchers Chris Poulton and Michael Watts wrote last year. Their chip uses optical phased arrays for beam steering, avoiding the need for mechanical parts.

Needless to say, they haven't started producing the chips in volumes of millions of units per year, and it's going to take a lot of work to reach that milestone. And Abuelsamid, the Navigant analyst, warned that it might take time for the promising advances in the lab to make their way into consumer products.

"Everyone's struggling with trying to produce low-cost solid-state lidar sensors that can replicate the performance of what companies like Velodyne have done with their more complex mechanical scanning systems," he told Ars. "There are a lot of sensitive components in there. You hit a pothole, those things tend not to stand up so well for that."

Beyond those scaling challenges, solid-state lidar units have another big disadvantage: their field of view is limited.

"Nobody's been able to scan more than 50 degrees" with phased-array systems, says Ford's McBride. "If they're scanned by micromirrors, it could be 30 to 60 degrees."

This means it will take between six and 12 solid-state lidar sensors to replicate the 360-degree visibility achieved by a single spinning rooftop lidar. So even if spinning mechanical lidar is more expensive per unit, mechanical lidars could still be more affordable overall if one mechanical unit can replace several solid-state units.

Most of the experts we talked to viewed solid-state lidar as the future. But Lidow told Ars that we shouldn't count spinning mechanical lidar sensors out yet.

"A spinning disk is not expensive," he said. After all, there are plenty of spinning mechanical components in a car that last for hundreds of thousands of miles—and they cost a lot less than today's lidar units.

The bottom line is that while bringing lidar costs down will take a significant amount of difficult engineering work, there don't seem to be any fundamental barriers to bringing the cost of high-quality lidar down below $1,000—and eventually below $100.

That means the technology—and ultimately, self-driving vehicles that depend on lidar—should be well within reach for ordinary consumers. For years, pundits have touted cost as a major barrier to mainstream adoption of self-driving cars, with a $75,000 Velodyne lidar as the most expensive item on the equipment list.

But this fundamentally misunderstands the situation. Experimental, low-volume hardware for cutting-edge technology is almost always expensive. It's through the process of mass manufacturing and iterative improvement that companies learn to make it cheaper. Right now, lidar technology is at the very beginning of that curve—where antilock brakes were in the early 1980s.

There's now a massive amount of venture capital being invested in making better, cheaper lidar units. It's impossible to predict whether it will take five, 10, or 15 years to bring the cost down from thousands to tens of dollars. But there's no reason to think lidar sensors are different from other technologies that started out expensive but steadily got cheaper and better over time.


Timothy B. Lee is a senior reporter covering tech policy and the future of transportation. He lives in Washington, DC.