Uncategorized
18 Oct 2024

Blue Teaming

A defensive security testing and assessment process in which a group of people (the "blue team") responds to a simulated adversarial attack intended to penetrate or corrupt a target, such as an artificial intelligence system or model. Blue teaming is used to identify security risks and vulnerabilities and to test the strength of defenses, encompassing both human and technological intelligence. See Red Teaming and Purple Teaming.
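As a minimal illustration of the kind of defensive control a blue team might validate during such an exercise, the sketch below screens incoming prompts against known adversarial patterns before they reach a model. The pattern list and function name are hypothetical, chosen only for this example; a real defense would be far more sophisticated.

```python
import re

# Hypothetical list of known prompt-injection patterns a blue team
# might test defenses against during a simulated attack.
SUSPICIOUS_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"disregard .*system prompt",
    r"reveal .*system prompt",
]

def flag_prompt(prompt: str) -> bool:
    """Return True if the prompt matches a known adversarial pattern."""
    text = prompt.lower()
    return any(re.search(pattern, text) for pattern in SUSPICIOUS_PATTERNS)
```

In a blue-team exercise, the red team would attempt to slip adversarial prompts past checks like this one, and the blue team would measure detection rates and tune the defense accordingly.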

To learn more about our Inference Platform, arrange a callback.
