Tricking Neural Networks: Exploring Adversarial Attacks
Large Language Models are pretty cool, but we need to be aware of how they can be compromised.
I'll show how neural networks are vulnerable to attack by walking through an example of an adversarial attack on a deep learning model in Natural Language Processing (NLP).
We'll explore the mechanisms behind these attacks, and you'll come away with a new way to think about the security of deep learning models.
An understanding of deep learning is required.