Self-driving cars come closer to being a reality every day. Many vehicles already have autonomous features, but several challenges remain.
Cybersecurity shortcomings are among the most concerning, and a recent experiment dubbed “MadRadar” heightens these worries.
How Does the MadRadar Hack Work?
Researchers at Duke University demonstrated MadRadar in January 2024 before detailing it at the Network and Distributed System Security Symposium in February. The attack targets driverless vehicles’ radar, making them detect incoming obstacles that aren’t actually there.
First, the system analyzes a car’s radar signal to determine its parameters, such as the operating frequency or signal intervals. It does this within a quarter of a second. From there, it can interfere with radar waves to jeopardize the vehicle’s navigation.
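The parameter-estimation step can be illustrated with a toy sketch. This is a hypothetical illustration, not the researchers' actual code: given timestamps of observed radar chirps, an attacker estimates the pulse repetition interval, then times a counterfeit echo so a phantom object appears at a chosen range.

```python
C = 3e8  # speed of light in m/s

def estimate_pri(chirp_times):
    """Estimate the pulse repetition interval from observed chirp start times."""
    gaps = [b - a for a, b in zip(chirp_times, chirp_times[1:])]
    return sum(gaps) / len(gaps)

def phantom_echo_delay(target_range_m):
    """Round-trip delay that would place a fake obstacle at target_range_m."""
    return 2 * target_range_m / C

# Hypothetical victim radar with chirps observed ~50 microseconds apart
chirps = [0.0, 50e-6, 100e-6, 150e-6]
pri = estimate_pri(chirps)
delay = phantom_echo_delay(30.0)   # fake a car 30 m ahead
next_tx = chirps[-1] + pri         # predicted start of the next chirp
spoof_time = next_tx + delay       # transmit the fake echo at this moment
```

The point of the sketch is only the timing logic: once the signal's cadence is known, a spoofed return arriving after the right delay is indistinguishable from a genuine reflection.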
One sample attack modified the radar signal to make the car’s self-driving system think there was a vehicle in front of it. Another did the inverse, masking the presence of a real obstacle. A third fooled the car into thinking a nearby driver was moving in a way they weren’t.
Any of these autonomous car hacks would have significant safety implications in the real world. They could cause a driverless vehicle to brake unnecessarily, veer off course to avoid something that isn’t there or fail to stop for something in front of it.
Other Autonomous Car Hacks
MadRadar is not the first demonstration of how hackers could target self-driving systems. In an infamous 2015 experiment, researchers remotely cut a Jeep's transmission on the highway after accessing its controls through the infotainment platform. While the Jeep was not autonomous, the experiment highlighted the dangers of connected cars as a whole.
In 2020, security researchers fooled a Tesla into accelerating to 85 mph by using black tape to alter a 35 mph speed limit sign. A human driver would immediately recognize the altered sign and know the limit could not be that high, but the car sped up.
Researchers recently used lasers to interfere with lidar systems in autonomous cars. Similarly to MadRadar, this hack caused the vehicles to think there was an obstacle in the way when, in reality, there was nothing. Notably, newer lidar equipment resisted the attack, but not every car will use the latest technology.
What Can Automakers and Drivers Do to Stay Safe?
Autonomous car hacks are concerning, considering self-driving vehicles could account for one-quarter of all miles driven by 2030. The companies making these cars and the drivers operating them should keep a few things in mind to prevent worst-case scenarios.
Embrace a Mixed Approach to Autonomous Driving
One of the most important security measures is to avoid reliance on a single navigation system. In the MadRadar and lidar examples, each hack targets just one technology: radar or lidar. Having vehicles combine inputs from a diverse range of sensors will provide added resilience against such exploits.
The underlying infrastructure is already there. Even driverless models from 2016 featured as many as six different technologies to navigate safely. While many vehicles use each sensor for a different purpose, training self-driving algorithms to consider all inputs when making decisions will reduce the risks if one is jeopardized.
A laser might make an obstacle appear on lidar, but the same wouldn’t show up on radar. Similarly, radar interference wouldn’t affect machine vision. Balancing all these factors will require more complex machine learning models, but the added security benefits are worth it.
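This cross-checking idea can be sketched in a few lines. The example below is a deliberately simplified assumption, not production fusion logic: an obstacle is accepted only if a majority of independent sensors report it, so a spoofed reading on a single sensor is outvoted.

```python
def fused_obstacle_detected(sensor_reports):
    """Majority vote over independent sensors (e.g. radar, lidar, camera).

    sensor_reports maps sensor name -> bool (obstacle seen?).
    A spoofed reading on one sensor is outvoted by the others.
    """
    votes = sum(sensor_reports.values())
    return votes > len(sensor_reports) / 2

# MadRadar-style phantom: radar reports an obstacle, lidar and camera disagree
reports = {"radar": True, "lidar": False, "camera": False}
print(fused_obstacle_detected(reports))  # -> False: the phantom is rejected
```

Real fusion systems weigh confidence scores and sensor reliability rather than taking a flat vote, but the principle is the same: no single compromised input should be able to dictate the vehicle's behavior.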
Implement Active Cybersecurity Controls
Automakers must equip vehicles with active cybersecurity defenses as autonomous car hacks become an increasingly relevant threat. Tighter access controls and fail-safes are not enough; threats this serious require proactive protections.
Continuous monitoring is essential. A vehicle that can recognize potential interference with its self-driving system can respond by alerting the driver or by weighing other inputs more heavily, avoiding a mistake based on one compromised sensor.
Like mixed navigation approaches, these defenses mean added complexity and IT infrastructure within a vehicle. As a result, self-driving cars could become less affordable, but costs will come down over time, and they’re certainly justified in light of the hazards hacking poses.
Stay Alert
Finally, drivers must avoid the temptation to rely entirely on autonomous features. MadRadar and similar experiments show automotive cybersecurity has not advanced enough to assume self-driving systems are foolproof. These attacks won't work if humans step in and take over manually, but doing so requires paying attention to how the vehicle is behaving.
The National Highway Traffic Safety Administration states drivers must remain fully engaged even at the highest levels of autonomous driving. As the technology improves, consumers should stick to that practice. Staying engaged is both a matter of legality in the event of a traffic incident and a crucial safety measure.
Likewise, automakers should stay up to date with emerging security trends. Further adaptation and new protections will likely be necessary as researchers discover novel threats. Constant vigilance is the only way to remain secure.
Autonomous Cars Need Better Security Measures
MadRadar is not the first demonstration of an autonomous car hack, but it shows the issue remains unresolved. Self-driving vehicles have a long way to go before they're reliable enough for everyday use, and cybersecurity is a critical part of that advancement.
Automakers should partner with security specialists to improve their built-in defenses and make autonomous features more resilient. While such steps may be complicated, they’re necessary in light of the potential risks.
The opinions expressed in this post belong to the individual contributors and do not necessarily reflect the views of Information Security Buzz.