This is another attack that convinces the AI to ignore road signs:
Because CMOS cameras capture an image one line at a time (a rolling shutter), rapidly flashing LEDs can be used to vary the color each line records. For example, the shade of red on a stop sign could look different on each line depending on the time between the diode flash and the line capture.
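A minimal simulation of that rolling-shutter effect, with purely illustrative sensor and LED parameters (none are taken from the paper), shows how a uniformly red sign comes out striped:

```python
import numpy as np

# Hypothetical parameters: a 480-row sensor reading out one line every
# 30 microseconds, and an attacker's LED strobing at 2 kHz.
ROWS = 480
LINE_TIME = 30e-6      # seconds between successive row captures
EXPOSURE = 100e-6      # per-row exposure window
LED_FREQ = 2000.0      # attacker LED flicker frequency (Hz)
LED_DUTY = 0.5         # fraction of each cycle the LED is on

def led_on_fraction(t_start, t_end, freq=LED_FREQ, duty=LED_DUTY, n=200):
    """Approximate fraction of [t_start, t_end] during which the LED is lit."""
    t = np.linspace(t_start, t_end, n)
    phase = (t * freq) % 1.0
    return np.mean(phase < duty)

def captured_red_per_row():
    """Red intensity seen by each sensor row of a uniformly lit red sign.

    Because rows expose at staggered times (rolling shutter), each row
    integrates a different slice of the LED flicker, so a solid-red sign
    is captured as mismatched horizontal stripes.
    """
    return np.array([
        led_on_fraction(r * LINE_TIME, r * LINE_TIME + EXPOSURE)
        for r in range(ROWS)
    ])

rows = captured_red_per_row()
# Adjacent rows see noticeably different shades of red.
print(rows.min(), rows.max())
```

With a 100 microsecond exposure against a 500 microsecond LED period, some rows fall entirely inside the LED's on-phase and others entirely outside it, so the per-row intensity swings across the full range rather than averaging out.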
The result is the camera capturing an image full of lines that don't quite match each other. The image is cropped and sent to the classifier, usually based on deep neural networks, for interpretation. Because it's full of mismatched lines, the classifier doesn't recognize the image as a traffic sign.
So far, all of this has been demonstrated before.
Yet these researchers not only produced the light distortion, they sustained it, extending the duration of the interference. This meant an unrecognizable image wasn't just a single anomaly among many accurate frames, but rather a constantly unrecognizable stream the classifier couldn't assess, which makes it a serious security concern.
[…]
The researchers developed two versions of a stable attack. The first was GhostStripe1, which is not targeted and does not require access to the vehicle, we're told. It employs a vehicle tracker to monitor the victim's real-time location and dynamically adjust the LED flickering accordingly.
GhostStripe2 is targeted and does require access to the vehicle, which could perhaps be covertly done by a hacker while the vehicle is undergoing maintenance. It involves placing a transducer on the power wire of the camera to detect framing moments and refine timing control.
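To illustrate the idea behind that timing refinement, here is a hedged sketch of how frame-start timestamps (such as those a power-line transducer might reveal) could be used to phase-lock the LED strobe to the camera. The function names and fitting approach are hypothetical, not taken from the paper:

```python
import numpy as np

def estimate_frame_clock(frame_starts):
    """Fit frame period and phase from noisy frame-start timestamps
    via a least-squares line through (frame index, timestamp)."""
    ts = np.asarray(frame_starts)
    idx = np.arange(len(ts))
    period, phase = np.polyfit(idx, ts, 1)  # slope = period, intercept = phase
    return period, phase

def next_flash_time(now, period, phase, offset):
    """Time of the next LED flash, `offset` seconds into a frame."""
    k = np.ceil((now - phase - offset) / period)
    return phase + k * period + offset

# Example: a 30 fps camera observed with slight timing jitter.
rng = np.random.default_rng(0)
starts = np.arange(60) / 30.0 + rng.normal(0, 1e-4, 60)
period, phase = estimate_frame_clock(starts)
t = next_flash_time(2.0, period, phase, 0.005)
print(period, t)
```

The fit averages out the per-frame jitter, so the attacker's strobe schedule stays aligned with the sensor's readout even as individual measurements wobble.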
Research paper.