An adversarial attack on a machine learning model is the deliberate manipulation of input data to make the model produce incorrect or unintended output, typically via perturbations small enough to be imperceptible to a human.
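To make this concrete, below is a minimal sketch of one classic attack of this kind, the Fast Gradient Sign Method (FGSM), which nudges each input value in the direction that increases the model's loss. The toy classifier, random input, label, and `epsilon` value are illustrative assumptions for this sketch, not details from the original text.

```python
import torch
import torch.nn as nn

def fgsm_attack(model: nn.Module, x: torch.Tensor, label: torch.Tensor,
                epsilon: float = 0.03) -> torch.Tensor:
    """Return a copy of x perturbed to increase the model's loss (FGSM)."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), label)
    loss.backward()
    # Step each input value by +/- epsilon along the sign of the gradient,
    # then clamp so the result stays a valid input in [0, 1].
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

# Hypothetical usage with a toy classifier on a random stand-in "image".
if __name__ == "__main__":
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
    model.eval()
    x = torch.rand(1, 1, 28, 28)   # stand-in input
    label = torch.tensor([3])      # its assumed true class
    x_adv = fgsm_attack(model, x, label)
    print("prediction before:", model(x).argmax(dim=1).item())
    print("prediction after: ", model(x_adv).argmax(dim=1).item())
```

The key point the sketch illustrates is that the perturbation is not random noise: it is computed from the model's own gradients, which is what makes a change of only `epsilon` per input value so effective at flipping predictions.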