ART 360: Defending AI models against adversarial attacks
October 8, 2019
                                
                                
Adversarial samples are inputs to deep neural networks (DNNs) that an adversary has tampered with in order to cause misclassifications. It is surprisingly easy to create adversarial samples and surprisingly difficult to defend DNNs against them. In this talk, I will review the state of the art and recent progress in understanding adversarial samples and in developing DNNs that are robust against them. I will then give a perspective on the potential threats that adversarial samples pose to security-critical applications of DNNs. Finally, I will show how researchers and developers can experiment with adversarial attacks and defences using ART 360, the open-source Adversarial Robustness Toolbox: https://github.com/IBM/adversarial-robustness-toolbox.
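
As a concrete illustration of the last point, here is a minimal sketch of crafting adversarial samples with ART. It assumes a PyTorch model and the module layout of recent ART releases (art.estimators.classification, art.attacks.evasion); older releases expose similar classes under art.classifiers and art.attacks, so paths may need adjusting. The toy model and the random stand-in data are placeholders for a trained classifier and real test inputs.

    import numpy as np
    import torch
    import torch.nn as nn

    from art.estimators.classification import PyTorchClassifier
    from art.attacks.evasion import FastGradientMethod

    # A toy MNIST-shaped model; any trained PyTorch classifier works here.
    model = nn.Sequential(
        nn.Flatten(),
        nn.Linear(28 * 28, 128),
        nn.ReLU(),
        nn.Linear(128, 10),
    )

    # Wrap the model so ART can compute gradients with respect to the input.
    classifier = PyTorchClassifier(
        model=model,
        loss=nn.CrossEntropyLoss(),
        optimizer=torch.optim.Adam(model.parameters(), lr=1e-3),
        input_shape=(1, 28, 28),
        nb_classes=10,
        clip_values=(0.0, 1.0),
    )

    # Craft adversarial samples with the Fast Gradient (Sign) Method.
    x_test = np.random.rand(8, 1, 28, 28).astype(np.float32)  # stand-in for real test data
    attack = FastGradientMethod(estimator=classifier, eps=0.1)
    x_adv = attack.generate(x=x_test)

    # Compare predictions on clean versus perturbed inputs.
    clean_preds = classifier.predict(x_test).argmax(axis=1)
    adv_preds = classifier.predict(x_adv).argmax(axis=1)
    print("flipped predictions:", int((clean_preds != adv_preds).sum()), "of", len(x_test))

Defences plug into the same wrapper: for instance, ART ships an AdversarialTrainer (under art.defences) that takes the wrapped classifier and one or more attacks and performs adversarial training, one of the robustness techniques the talk covers.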