Self-Driving Cars Can Be Tricked Into Misreading Street Signs

This is the age of smart gadgets and self-operating machines, but such devices bring serious drawbacks along with their benefits, and self-driving technology is no exception. Although the technology matures with every passing year, its steadily growing list of vulnerabilities and loopholes is a real concern for automotive experts, the public, and governments.

A team of researchers at the University of Washington has identified a serious flaw in connected-car technology, one dangerous enough to make remote car hacking look like a minor problem by comparison. According to the team, defaced street signs are the bigger threat because the attack is highly practical.

Altering street signs with strategically placed stickers to throw off an autonomous vehicle’s image-recognition system turns out to be surprisingly easy. If attackers know how a vehicle classifies the objects it sees, such as images of signs, they can design stickers that deceive the vehicle into reading the defaced sign as something else entirely.
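The idea can be illustrated with a toy example. The sketch below is not the researchers’ actual method; it uses a made-up four-"pixel" image and a hand-picked linear classifier purely to show the principle: a small, targeted nudge in the direction of the score gradient pushes an input across the decision boundary, much as stickers nudge a real classifier.

```python
# Toy illustration (not the researchers' actual attack): a linear
# classifier scores a tiny "sign" image; a small perturbation in the
# direction of the score gradient flips its decision.

def score(weights, pixels):
    """Dot-product score: positive => 'stop', negative => 'speed limit'."""
    return sum(w * p for w, p in zip(weights, pixels))

def gradient_sign_perturb(weights, pixels, eps):
    """Shift each pixel by eps against the sign of its weight
    (the gradient of the score with respect to that pixel)."""
    return [p - eps * (1 if w > 0 else -1) for w, p in zip(weights, pixels)]

# A 4-"pixel" stop-sign image the classifier gets right (illustrative values).
weights = [0.9, -0.2, 0.5, -0.4]
image   = [0.8,  0.1, 0.6,  0.2]

print(score(weights, image) > 0)   # True: read as 'stop'
adv = gradient_sign_perturb(weights, image, eps=0.5)
print(score(weights, adv) > 0)     # False: now misread as 'speed limit'
```

Real image classifiers are deep networks rather than linear models, but the same gradient-guided search is what lets attackers compute which patches of a sign to cover.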

For example, with simple image manipulation rather than any deep technical access, attackers can make the car believe that a stop sign is a speed limit sign. The danger is evident: attackers could create chaos on the streets using stickers printed at home. If street signs are altered this way, vehicles will be misled and crashes will follow.

The method is similar to the way the Galaxy S8’s iris scanner was fooled by a photograph: the image-recognition system in a connected vehicle can be confused and forced to misread a sticker-covered street sign. To validate their findings, the researchers demonstrated two ways of tricking the system.

First, they placed love/hate stickers over a Stop sign, successfully tricking the car into reading it as a speed limit sign. Second, they concealed a Right Turn sign with grey stickers to manipulate the vehicle’s algorithm into classifying it as a Stop or Added Lane sign. Although tricksters need to understand an automated vehicle’s algorithm before launching such attacks, the threat remains real and alarming.

The team has suggested ways to thwart the threat: contextual information can be used to verify whether a sign is genuine. Local governments could also print signs on anti-stick material, or mount them out of the reach of attackers and pranksters.

Contextual data would help because it lets the vehicle consult location-based information and confirm what the upcoming sign should say. Carmakers can also add failsafes, such as multiple lidar sensors and cameras, so that a single misread does not decide the car’s behavior. Taking these precautions matters, because otherwise the sign-reading abilities of self-driving cars will remain questionable.
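The contextual check could work along these lines. The sketch below is purely hypothetical: the map table, the grid-snapping scheme, and the `verify_sign` function are all illustrative assumptions, not any carmaker’s real API.

```python
# Hypothetical sketch of the suggested contextual check: compare what the
# camera claims to see against a map of signs known to exist near the
# car's GPS position, and override the camera on a mismatch.

# Assumed map data (illustrative only): sign type keyed by a coarse
# 0.01-degree location grid cell.
KNOWN_SIGNS = {
    (47.65, -122.30): "stop",
    (47.66, -122.31): "speed_limit_45",
}

def grid_cell(lat, lon):
    """Snap GPS coordinates to a 0.01-degree grid cell."""
    return (round(lat, 2), round(lon, 2))

def verify_sign(camera_reading, lat, lon):
    """Trust the camera only when it agrees with location-based data."""
    expected = KNOWN_SIGNS.get(grid_cell(lat, lon))
    if expected is None:
        return camera_reading   # no map data: fall back to the camera
    if camera_reading == expected:
        return camera_reading   # readings agree
    return expected             # mismatch: prefer the map and raise an alert

# A defaced stop sign misread as a speed limit sign is overridden:
print(verify_sign("speed_limit_45", 47.651, -122.299))  # -> stop
```

A production system would of course need a trustworthy, up-to-date sign database and a policy for cases where the map itself is stale, but the cross-check captures the researchers’ suggestion.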

Via: Car and Driver, Wired
