
Trustworthy AI: Building Security Into AI Systems
Description
This IDC Perspective provides an overview of the security challenges in protecting AI systems, the threat landscape, and the security baselines needed to build trust into AI. "As enterprises embark on major AI-powered digital transformation, security risks to AI systems are set to grow. With traditional approaches to security insufficient to provide complete protection, embedding appropriate security controls into AI needs to become an integral component of an organization's security risk management program," said Ralf Helkenberg, research manager, European Privacy and Data Security.
Table of Contents
9 Pages
- Executive Snapshot
- Situation Overview
  - The Emerging Security Risk to AI Systems
  - AI in Cybersecurity
  - AI Life Cycle and Model Security
  - Types of AI Model Threats
    - AI Model Reconnaissance
    - Poisoning Attack
    - Evasion Attack
    - Prompt Injection Attack
    - Supply Chain Attack
    - Privacy Attacks
    - Model Replication
    - Model Exfiltration
- Advice for the Technology Buyer
  - Defense Against AI Security Threats
    - 1. Identify: Assess AI Security Risk and Posture
      - AI Asset Mapping
      - Use-Case–Based Risk Assessment
    - 2. Protect: Implement Safeguarding Measures
      - Security Awareness
      - Model Safeguards
      - Security by Design
    - 3. Detect: Enable Timely Discovery of AI Security Events
      - Security Monitoring
    - 4. Respond: Prepare for AI Security Incidents
      - Attack Response Plans
- Learn More
  - Related Research
  - Synopsis