Skip to content

Is half your workforce breaking AI policy? | The AI Insider Threat Report

Read Now
Uncategorized
18 Oct 2024

Purple Teaming

Purple Teaming

Purple Teaming

A threat simulation scenario used as a security testing/assessment process; it involves having a group of security personnel (the "blue team") respond to a simulated adversarial attack perpetrated by a different group of security personnel (the "red team") that is intended to penetrate or corrupt the target, such as artificial intelligence systems or models; it is used to identify security risks and vulnerabilities, test the agility and response of the security personnel, and assess the strength of system defenses See Red Teaming and Blue Teaming

To learn more about our Inference Platform arrange a callback.

Latest Posts

Blog

CalypsoAI Achieves SOC 2 Certification

News

CalypsoAI’s Insider AI Threat Report: 52% of U.S. Employees Are Willing to Break Policy to Use AI

News

Beyond Human Hackers: Agentic AI Becomes the Primary Threat Actor