
More robust neural networks: a method for not fooling artificial intelligence

16 October 2020
A joint study by the University of Trieste, SISSA and the University of Oxford has traced the origin of the fragility with which these algorithms classify objects, and has found a way to remedy it

A modification imperceptible to the human eye can deceive sophisticated artificial intelligence systems, forcing them into classification mistakes that a human being would never make, such as confusing a bus with an ostrich. This is a significant safety issue when deep learning (deep neural networks) is applied to technologies such as self-driving cars.
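The kind of deception described above can be sketched with a toy example. The snippet below is not the study's method: it uses a made-up linear classifier and a fast-gradient-sign-style perturbation (all values are illustrative) to show how a small, bounded change to an input can flip a model's decision.

```python
import numpy as np

# Hypothetical toy linear classifier: score = w @ x + b, class 1 if score > 0.
# Weights, bias, and input are invented for illustration only.
w = np.array([1.0, -2.0, 0.5])
b = 0.1

x = np.array([0.9, 0.2, -0.3])   # an input the model places in class 1
score = float(w @ x + b)         # positive, so class 1

# Fast-gradient-sign-style step: for a linear model the gradient of the
# score with respect to x is just w, so we nudge each component of x by
# at most epsilon in the direction that lowers the score.
epsilon = 0.5
x_adv = x - epsilon * np.sign(w)
score_adv = float(w @ x_adv + b)  # now negative: the decision flips
```

Even though every component of `x_adv` differs from `x` by at most `epsilon`, the classification changes; deep networks exhibit the same fragility with perturbations far too small for a human to notice.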
A team of researchers and professors from the University of Trieste, the Scuola Internazionale Superiore di Studi Avanzati (SISSA) and the University of Oxford has nevertheless demonstrated a new way to make these neural networks more robust and harder to deceive.