Your Users Are Pasting Sensitive Data Into AI. Stop Them.

Screen prompts, uploads, and RAG documents before sensitive data reaches your model. Deterministic detection in milliseconds—not probabilistic guessing.

JS SDK · WordPress Plugin · REST API · Self-Hosted · Real-Time

Data You Never Collect Can't Be Breached

Users paste SSNs into support tickets. Upload spreadsheets with customer PII. Submit prompts with confidential data. Once it hits your system, you own the liability.

Support & Helpdesk

Customers paste account numbers, SSNs, and medical info into tickets constantly. Every message is a potential compliance violation.

AI & LLM Applications

Users paste confidential data into AI prompts. Your RAG pipeline ingests PII. Once it's in the model, it's retrievable.

File Uploads

Resumes with DOB and SSN. Contracts with confidential terms. Spreadsheets with customer data. All landing in your storage.

How It Works

One scanning engine. Multiple integration points.

1

User Submits Content

Form submission, file upload, or AI prompt—any user input triggers a scan.

2

Real-Time Scan

150+ classifiers check for SSNs, credit cards, PHI, credentials, and more. Milliseconds, not seconds.

3

Take Action

Block the submission, warn the user, or redact the finding before it ever reaches your system.

// JavaScript SDK Example
InspectShield.scan(formData)
  .then(result => {
    if (result.hasSensitiveData) {
      // Block the submission and show the user what was flagged
      showWarning(result.findings);
    } else {
      // No findings: safe to submit
      submitForm(formData);
    }
  })
  .catch(() => {
    // Fail closed: never submit unscanned data
    showWarning([]);
  });

Integration Options

Same scanning engine, multiple entry points

JavaScript SDK

Drop into any web form, chat UI, or file upload. Scans client-side before data leaves the browser.

WordPress Plugin

One-click install. Protects Contact Form 7, Gravity Forms, WooCommerce, and any form plugin.

REST API

Server-side scanning for RAG pipelines, backend uploads, webhooks, and data ingestion workflows.

Self-Hosted Docker

Run the scanning engine in your environment. Zero data egress. Full control.

Built for the AI Era

Every AI application needs a data firewall. Screen prompts before they hit the LLM.

Deterministic, Not Probabilistic

LLMs guess whether something is sensitive. Pattern matching knows. When compliance is on the line, "80% confident" isn't good enough.

LLM: "This might be a social security number"
Inspect-Shield: "This IS SSN: 123-45-6789"
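In code, deterministic detection is just pattern matching plus validity rules. A minimal sketch, assuming nothing about Inspect-Shield's internals (the regex and the area/group/serial rules below are illustrative, not the product's actual SSN classifier):

```javascript
// Illustrative SSN classifier: a deterministic pattern plus validity rules.
// (Simplified; a production classifier is more thorough.)
function detectSSN(text) {
  const pattern = /\b(\d{3})-(\d{2})-(\d{4})\b/g;
  const findings = [];
  for (const match of text.matchAll(pattern)) {
    const [full, area, group, serial] = match;
    // Real SSNs never use area 000, 666, or 900-999,
    // and never have an all-zero group or serial.
    const invalid =
      area === "000" || area === "666" || Number(area) >= 900 ||
      group === "00" || serial === "0000";
    if (!invalid) findings.push({ type: "SSN", value: full, index: match.index });
  }
  return findings;
}

// Either it matches or it doesn't; there is no confidence score to interpret.
console.log(detectSSN("My SSN is 123-45-6789")); // one exact finding
console.log(detectSSN("Order #999-99-0000"));    // → []
```

The same either/or property is what makes the audit trail below possible: every finding is an exact value at an exact offset, not a probability.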

Save AI Cycles (and Cost)

LLMs are slow and expensive for document inspection. Pre-screen with fast, cheap classifiers. Only send clean content to the model.

Pattern matching: ~5ms, $0.00001
LLM inference: ~2000ms, $0.01+
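That pre-screening gate can be sketched like this. `scanText` and `callLLM` are illustrative placeholders for your scanner and model client, not real Inspect-Shield or LLM-provider APIs:

```javascript
// Cheap deterministic pre-screen in front of an expensive LLM call.
// The two patterns are illustrative stand-ins for a full classifier set.
const PATTERNS = [
  { type: "SSN", re: /\b\d{3}-\d{2}-\d{4}\b/ },
  { type: "CreditCard", re: /\b(?:\d[ -]?){13,16}\b/ },
];

function scanText(text) {
  return PATTERNS.filter(p => p.re.test(text)).map(p => p.type);
}

async function screenedCompletion(prompt, callLLM) {
  const findings = scanText(prompt); // microseconds, effectively free
  if (findings.length > 0) {
    // Never spend LLM latency or tokens on a tainted prompt.
    return { blocked: true, findings };
  }
  return { blocked: false, answer: await callLLM(prompt) }; // seconds, metered
}
```

Only clean prompts ever incur the slow, metered call; tainted ones are rejected before a single token is billed.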

RAG Pipeline Hygiene

Don't let PII into your vector database. Once it's embedded, it's retrievable. Screen documents before ingestion—not after your chatbot surfaces a customer's SSN.
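A sketch of that ingestion gate, assuming a simple quarantine workflow: `hasSensitiveData` stands in for the real scanner, and the two regexes are illustrative only:

```javascript
// Screen documents before chunking/embedding so nothing sensitive
// ever becomes a retrievable vector. The two regexes are illustrative
// stand-ins for a full classifier set.
const SENSITIVE = [/\b\d{3}-\d{2}-\d{4}\b/, /\bpassword\s*[:=]/i];
const hasSensitiveData = text => SENSITIVE.some(re => re.test(text));

function partitionForIngestion(docs) {
  const clean = [];
  const quarantined = [];
  for (const doc of docs) {
    (hasSensitiveData(doc.text) ? quarantined : clean).push(doc);
  }
  // Embed only `clean`; route `quarantined` to human review or redaction.
  return { clean, quarantined };
}
```

The design choice here is to quarantine rather than silently drop, so flagged documents can be redacted and re-ingested instead of vanishing from the corpus.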

Audit Trail

"What sensitive data did users attempt to send to our AI?" Deterministic scanning gives you logs and evidence. LLM-based detection gives you... confidence scores.

Who Uses Inspect-Shield

Any application that accepts user input

AI/LLM Applications

ChatGPT-style interfaces, copilots, and AI assistants. Screen prompts before they hit the model.

Support & Helpdesk

Zendesk, Intercom, Freshdesk, or custom ticketing. Stop customers from oversharing.

HR & Recruiting

Job applications, employee portals, benefits enrollment. Resumes are PII goldmines.

Healthcare Portals

Patient intake forms, document uploads, messaging. HIPAA compliance at the front door.

Financial Services

Account applications, document submission, customer communication. Every form is a risk.

SaaS Platforms

Any app with file uploads, comments, or user-generated content. Limit your liability.

150+ Classifiers, Zero Configuration

All classifiers run simultaneously. No picking and choosing. No missed detections.

SSN · Credit Card · Bank Account · Driver's License · Passport · Medical Record # · API Keys · Passwords · Email · Phone · Address · Date of Birth · IBAN · NHS Number · + 135 more

Frequently Asked Questions

How is this different from using an LLM to detect sensitive data?

LLMs use probabilistic detection—they guess. Inspect-Shield uses deterministic pattern matching—it knows. When compliance is on the line, you need certainty. Plus, pattern matching is 100x faster and cheaper than running every input through an LLM.

What integration options are available?

Three options: (1) JavaScript SDK that drops into any web form or AI chat interface, (2) WordPress plugin for instant protection, (3) REST API for server-side scanning of RAG pipelines, backend uploads, or any data flow.

Can I self-host Inspect-Shield?

Yes. The scanning engine runs as a Docker container in your environment. Your data never leaves your infrastructure. We also offer a hosted API if you prefer not to manage infrastructure.

What happens when sensitive data is detected?

You choose: Block the submission entirely, warn the user and let them decide, or automatically redact the sensitive content before it reaches your system. All configurable per-classifier.
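The redact option can be sketched as a typed-token replacement. The two classifiers and the `[REDACTED:TYPE]` token format are illustrative, not Inspect-Shield's actual output:

```javascript
// Illustrative redaction: replace each finding with a typed token so
// downstream systems keep context without ever seeing the raw value.
const CLASSIFIERS = [
  { type: "SSN", re: /\b\d{3}-\d{2}-\d{4}\b/g },
  { type: "EMAIL", re: /\b[\w.+-]+@[\w-]+\.[\w.]+\b/g },
];

function redact(text) {
  let out = text;
  for (const { type, re } of CLASSIFIERS) {
    out = out.replace(re, `[REDACTED:${type}]`);
  }
  return out;
}

console.log(redact("Reach me at jane@example.com, SSN 123-45-6789"));
// → "Reach me at [REDACTED:EMAIL], SSN [REDACTED:SSN]"
```

Typed tokens preserve the shape of the message for support agents and models while the raw values never land in storage or logs.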

How fast is the scanning?

Milliseconds. Pattern matching is nearly instantaneous compared to LLM inference. Users won't notice any delay when submitting forms or uploading files.

What types of sensitive data can Inspect-Shield detect?

150+ classifiers covering SSNs, credit cards, bank accounts, driver's licenses, passports, medical record numbers, API keys, passwords, and more. Covers HIPAA, PCI, GDPR, CCPA, and other regulatory requirements.

Stop Collecting Data You Don't Need

The cheapest breach to remediate is the one that never happened.
Get early access to Inspect-Shield.