Self-driving cars are revolutionizing society. Such cars, also called driverless or autonomous cars, use a variety of technologies to analyze their environment, allowing them to drive with minimal or no human input. For example, by using sensors, cameras, and radar to scan the immediate vicinity, a self-driving car can detect a car nearby, a person crossing the street, or anything else that would otherwise require a human driver's attention.
Behind the scenes, driverless cars function like other forms of AI. Take facial recognition, for instance: for it to work properly, engineers need to expose it to millions of faces so that it can 'learn' to differentiate between them. The same thinking applies to driverless cars; engineers expose them to millions of situations and scenarios so they can 'understand' what to do and when. In other words, they teach these cars how to react appropriately in varying scenarios.
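The learn-from-examples idea above can be sketched in a few lines. This is a toy illustration only, not how a real driving system works: a 1-nearest-neighbor "policy" that, given a new scenario, copies the action of the most similar labeled example. All the scenario features and actions here are invented for illustration.

```python
import math

# Hypothetical labeled scenarios:
# (distance_to_obstacle_m, closing_speed_mps) -> action taken by a human demonstrator
examples = [
    ((50.0, 0.0), "cruise"),
    ((20.0, 5.0), "slow_down"),
    ((5.0, 8.0), "brake_hard"),
    ((30.0, -2.0), "cruise"),
]

def decide(scenario):
    """Return the action of the most similar labeled scenario (1-nearest neighbor)."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    _, action = min(examples, key=lambda ex: dist(ex[0], scenario))
    return action

print(decide((6.0, 7.0)))   # closest example is (5.0, 8.0) -> "brake_hard"
```

Real systems use far richer sensor inputs and learned models rather than a handful of hand-labeled points, but the principle is the same: more labeled situations mean better-informed decisions in new ones.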
The Current Self-Driving Landscape
We’re at the beginning of the revolution, metaphorically dipping our toes into the driverless-car waters. But how did we get here? How did we arrive at a point where cars can (almost) drive without human input? Incrementally. Before we delve into that, let’s review the levels of driving automation defined by the Society of Automotive Engineers (SAE):
- Level 0 – No Automation: The full-time performance by the human driver of all aspects of the dynamic driving task, even when enhanced by warning or intervention systems.
- Level 1 – Driver Assistance: The driving mode-specific execution by a driver assistance system of either steering or acceleration/deceleration using information about the driving environment and with the expectation that the human driver performs all remaining aspects of the dynamic driving task.
- Level 2 – Partial Automation: The driving mode-specific execution by one or more driver assistance systems of both steering and acceleration/deceleration using information about the driving environment and with the expectation that the human driver performs all remaining aspects of the dynamic driving task.
- Level 3 – Conditional Automation: The driving mode-specific performance by an Automated Driving System of all aspects of the dynamic driving task with the expectation that the human driver will respond appropriately to a request to intervene.
- Level 4 – High Automation: The driving mode-specific performance by an Automated Driving System of all aspects of the dynamic driving task, even if a human driver does not respond appropriately to a request to intervene.
- Level 5 – Full Automation: The full-time performance by an Automated Driving System of all aspects of the dynamic driving task under all roadway and environmental conditions that can be managed by a human driver.
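The key distinction across the six levels above is who monitors the environment and who serves as the fallback when something goes wrong. A small sketch can make that explicit; the enum names and helper functions here are my own shorthand, not SAE terminology.

```python
from enum import IntEnum

class SAELevel(IntEnum):
    NO_AUTOMATION = 0
    DRIVER_ASSISTANCE = 1
    PARTIAL_AUTOMATION = 2
    CONDITIONAL_AUTOMATION = 3
    HIGH_AUTOMATION = 4
    FULL_AUTOMATION = 5

def human_must_monitor(level: SAELevel) -> bool:
    # Through Level 2, the human driver monitors the driving environment;
    # from Level 3 up, the automated system performs the dynamic driving task.
    return level <= SAELevel.PARTIAL_AUTOMATION

def human_is_fallback(level: SAELevel) -> bool:
    # At Level 3 the human must still respond to a request to intervene;
    # Levels 4 and 5 handle the fallback themselves.
    return level <= SAELevel.CONDITIONAL_AUTOMATION

print(human_must_monitor(SAELevel.PARTIAL_AUTOMATION))      # True
print(human_is_fallback(SAELevel.HIGH_AUTOMATION))          # False
```

This is why Level 4 is the threshold for multitasking behind the wheel: it is the first level at which the human is neither the monitor nor the fallback.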
As of today, Tesla’s Autopilot system sits at Level 2. Once we reach Level 4, people will be able to multitask while they travel, which we’ll get into in a bit. For now, let’s look at how we got to Level 2.
How We’ve Arrived at Level 2 Automation
Level 2 is partial automation characterized by a human driver with machine assistance. What assistance, you ask? Well, we have:
- Blind-spot warnings that use radar or ultrasonic sensors to detect objects you would encounter if you switched lanes.
- Lane-departure warning/assistance that uses cameras to detect lane markings and will either flash a warning light or take full control when the car starts to drift into other lanes.
- Cruise control that allows a car to drive at a set speed or the speed of the car directly ahead.
- Parking assistance that detects objects and nearby cars for automatic parking (in parallel parking situations).
- Front collision warning/assistance that detects objects in front of the car and will flash a warning light or actually brake the car to avoid a crash.
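The last feature in the list above, front collision warning/assistance, is commonly described in terms of time-to-collision: how many seconds until impact at the current closing speed. Here is a minimal sketch of that decision logic; the thresholds are made-up illustrative values, not figures from any production system.

```python
def forward_collision_action(distance_m: float, closing_speed_mps: float,
                             warn_ttc_s: float = 2.5, brake_ttc_s: float = 1.0) -> str:
    """Decide whether to warn the driver or brake, based on time-to-collision (TTC)."""
    if closing_speed_mps <= 0:
        return "none"                       # the gap is steady or growing
    ttc = distance_m / closing_speed_mps    # seconds until impact at current speeds
    if ttc <= brake_ttc_s:
        return "brake"                      # too late for the driver to react
    if ttc <= warn_ttc_s:
        return "warn"                       # flash a warning light
    return "none"

print(forward_collision_action(30.0, 15.0))   # TTC = 2.0 s -> "warn"
print(forward_collision_action(8.0, 10.0))    # TTC = 0.8 s -> "brake"
```

The other assistance features follow the same pattern: a sensor measurement, a simple model of the situation, and a threshold that triggers either a warning or direct intervention.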
Soon (maybe not within the next five years, but soon) we’ll reach Level 3, then Level 4, and finally, Level 5.
The Many Advantages of Driverless Cars
We’re already seeing positive impacts of self-driving cars. As a matter of fact, a study by the Virginia Tech Transportation Institute found that self-driving cars were involved in fewer accidents than their human-controlled counterparts, with 3.2 accidents per million miles, compared to 4.2 for human drivers.
Most of the benefits we’ll gain from driverless cars begin to appear once we reach Level 4 automation. For example, productivity will drastically increase. Whereas we currently need to focus on the road while we drive, once we reach Level 4 we’ll be able to work while the car drives. A lawyer can go over briefs, a doctor can check up on patients or look over X-rays, students can study, and a lot more.
Autonomous driving also opens the road to the elderly and to those with disabilities or impairments, since they won’t have to actually drive. Roads will also be much safer once human error is replaced with less frequent computer error.
With computers in charge, transportation as a whole will be more efficient, because the car can choose the most fuel-efficient route. In a future where all cars are ‘connected’ with each other, they will also be able to maneuver and weave through traffic with machine precision. As for the environment: with safer cars, manufacturers won’t have to equip vehicles with as much safety equipment, making them lighter and more fuel-efficient and leading to fewer emissions.
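Choosing the most fuel-efficient route is, at heart, a shortest-path problem where each road segment is weighted by estimated fuel use instead of distance. A minimal sketch using Dijkstra's algorithm; the road network and fuel figures below are entirely hypothetical.

```python
import heapq

def cheapest_route(graph, start, goal):
    """Dijkstra's algorithm over edges weighted by fuel cost."""
    queue = [(0.0, start, [start])]   # (total fuel so far, node, path taken)
    best = {}
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in best and best[node] <= cost:
            continue                  # already reached this node more cheaply
        best[node] = cost
        for nbr, fuel in graph.get(node, {}).items():
            heapq.heappush(queue, (cost + fuel, nbr, path + [nbr]))
    return float("inf"), []

roads = {  # hypothetical network: fuel cost in liters per segment
    "home": {"highway": 1.2, "city": 0.8},
    "highway": {"office": 0.9},
    "city": {"office": 1.6},
}
cost, path = cheapest_route(roads, "home", "office")
print(path)   # the highway route wins despite the longer first segment
```

A human driver might default to the shorter city route; a computer evaluating the whole trip picks the cheaper total, which is exactly the kind of efficiency gain described above.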
Driverless cars are poised to make a big impact. While we’d like to think that this impact will be positive, there’s no real way of knowing until we actually get there. As a cautionary tale, imagine being able to nap or partake in other activities while your car drives itself. This may prompt some people to take more road trips, or simply travel more, because they don’t actually have to drive. While we expect self-driving cars to be good for the environment, in this scenario the opposite may actually happen.