This repository is primarily maintained by Omar Santos (@santosomar) and...
🐢 Open-Source Evaluation & Testing for LLMs and ML models
Build accurate and secure AI applications to unlock value faster
RuLES: a benchmark for evaluating rule-following in language models
A curated list of academic events on AI Security & Privacy
The official implementation of the CCS'23 paper, Narcissus clean-label b...
Code for "Adversarial attack by dropping information." (ICCV 2021)
Train AI (Keras + Tensorflow) to defend apps with Django REST Framework ...
Performing website vulnerability scanning using OpenAI technologie
pytorch implementation of Parametric Noise Injection for adversarial def...
[IJCAI 2024] Imperio is an LLM-powered backdoor attack. It allows the ad...
🚗 A repository for documenting and exploring the world of autonomous d...
Website Prompt Injection is a concept that allows for the injection of p...