| Aspect | Traditional Cybersecurity | AI Security |
| --- | --- | --- |
| Nature | Concerns around data, systems, and networks | Concerns around the misuse or manipulation of AI models and data |
| Typical Threats | Phishing, malware, DDoS attacks, unauthorized access | Adversarial attacks, model inversion, data poisoning | 
| Attack Vector | Exploiting software/hardware vulnerabilities, human error, or system misconfigurations | Manipulating input data, exploiting model behavior, corrupting training data | 
| Primary Objective | Data theft, system disruption, unauthorized access | Model manipulation, knowledge extraction, degraded performance | 
| Risk Drivers | Outdated systems, weak passwords, unpatched software | Insufficient model robustness, over-reliance on AI decisions, lack of interpretability | 
| Mitigation Strategies | Firewalls, patches, MFA, encryption | Robust model training, input validation, model monitoring | 
| Impact of Breach | Data loss, financial implications, reputational damage | Erroneous AI decisions, loss of trust in AI systems, model retraining costs | 
| Unique Concerns | Physical access, insider threats, third-party vendors | Transfer attacks (adversarial inputs crafted against one model that also fool another), model stealing, hyperparameter tuning attacks |
| Evolution Pace | Constantly evolving with technological advancements | Rapidly growing with advancements in AI and machine learning research | 
| Main Targets | Databases, networks, applications, user accounts | Pre-trained models, AI APIs, training datasets, inference outputs |
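
To make the AI-security column concrete, the sketch below shows an adversarial (evasion) attack of the kind listed under "Typical Threats": the fast gradient sign method (FGSM) applied to a hand-rolled logistic regression using only NumPy. The model weights, input, and perturbation budget are illustrative assumptions, not from the source; the point is that a small, bounded change to the input flips a confident prediction without touching the model itself.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative fixed logistic-regression "model" (weights are assumptions).
w = np.array([2.0, -1.0])
b = 0.5

def predict(x):
    """Probability that x belongs to class 1."""
    return sigmoid(w @ x + b)

def fgsm(x, y_true, eps):
    """Fast Gradient Sign Method: nudge x in the direction that
    increases the loss. For logistic regression with cross-entropy
    loss, the input gradient is dL/dx = (p - y) * w.
    """
    p = predict(x)
    grad_x = (p - y_true) * w
    return x + eps * np.sign(grad_x)

x = np.array([1.0, 1.0])            # clean input, true label 1
p_clean = predict(x)                # model is confident: p > 0.5
x_adv = fgsm(x, y_true=1.0, eps=0.8)
p_adv = predict(x_adv)              # prediction flips: p < 0.5
```

The perturbation is bounded (each feature moves by at most `eps`), which is what makes such attacks hard to spot and motivates the "robust model training" and "input validation" mitigations in the table.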