People have already succeeded in generating adversarial images that confuse neural networks.
Until networks become robust enough that those techniques stop working, using them is probably the better option: gradient-based methods perturb the image automatically toward a targeted misclassification, rather than requiring you to hand-tune a large number of parameters without understanding what effect each one will have.
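For concreteness, here is a minimal sketch of one such technique, the targeted Fast Gradient Sign Method (Goodfellow et al., 2014). It assumes a differentiable PyTorch classifier `model` taking image batches with pixel values in [0, 1]; the function name and `epsilon` default are illustrative, not from any particular library:

```python
import torch
import torch.nn.functional as F

def fgsm_targeted(model, image, target_class, epsilon=0.01):
    """Targeted FGSM: nudge `image` toward being classified as
    `target_class` by stepping against the gradient of the loss
    with respect to the input pixels."""
    image = image.clone().detach().requires_grad_(True)
    logits = model(image)
    # Cross-entropy is low when the model predicts the *target*
    # class, so we subtract the gradient sign (descend) to push
    # the prediction toward that class.
    loss = F.cross_entropy(logits, target_class)
    loss.backward()
    perturbed = image - epsilon * image.grad.sign()
    # Keep pixel values in the valid [0, 1] range.
    return perturbed.clamp(0.0, 1.0).detach()

# Usage (hypothetical): adv = fgsm_targeted(model, x, torch.tensor([7]))
```

The point is that the perturbation direction comes automatically from the network's own gradients; you only choose a target class and a step size `epsilon`, instead of guessing at pixel-level changes by hand.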