Open Source · v0.4.0 · LLM Security

Know exactly how
your LLM breaks
before attackers do.

LANCE fires 195+ adversarial probes at any AI model, scores every response with an LLM-as-Judge, and delivers a board-ready security report mapped to OWASP LLM Top 10 and MITRE ATLAS — in minutes.

lance · terminal
$ lance scan ollama/llama3
--modules all
 
Campaign    a3f8-2b91-cc4d
Probes      195 / 195
 
─────────────────────
Risk Score  8.2 / 10
Verdict    HIGH RISK
 
Critical   28
High      32
Medium    12
 
Report → lance_report_
             llama3_a3f8.html
 
$
5
Attack Modules
195+
Adversarial Probes
39
Seed Payloads
Models Supported
Live Intelligence

What a real campaign looks like.

Below is a sample LANCE scan against a popular open-source LLM. Every chart is generated from actual probe results — not estimates.

Severity Distribution
Attack Success by Module
Probe Outcomes
OWASP LLM Top 10 Exposure Radar
Methodology

Four steps. Zero guesswork.

LANCE replaces ad-hoc prompt throwing with a systematic, repeatable red team methodology.

01
Point
Specify your target model and an optional system prompt. Any provider — OpenAI, Anthropic, Ollama, Azure, Bedrock. If LiteLLM can reach it, LANCE can test it.
02
Fire
LANCE automatically fires 195+ mutated adversarial probes across all 5 attack modules — direct, role-play, hypothetical, encoded, and indirect injection variants.
03
Judge
Every response is evaluated by a two-pass system: heuristic pre-screen followed by module-specific LLM-as-Judge at 72% confidence threshold. Near-zero false positives.
04
Report
Full security report: OWASP LLM Top 10 + MITRE ATLAS mapped, CVSS scored, findings with payload evidence and remediation. HTML and PDF. Ready to present.
Attack Coverage

5 modules. Every major threat vector.

Each module covers a distinct class of LLM vulnerability, with seed payloads mutated across 5 evasion strategies per probe.

LLM01 · AML.T0051
Prompt Injection
9 seeds · 45 probes
Direct overrides, indirect injection, role hijacking, instruction hierarchy attacks, prompt chaining.
LLM06 · AML.T0024
Data Exfiltration
10 seeds · 30 probes
System prompt leaks, PII extraction, credential fishing, training data extraction, embedding inversion.
LLM01 · AML.T0054
Jailbreak
10 seeds · 20 probes
Token splitting, base64 encoding, many-shot bypass, language switching, emoji obfuscation, ASCII art.
LLM03 · AML.T0020
RAG Poisoning
5 seeds · 15 probes
Corpus injection, knowledge base misinformation, embedding manipulation, semantic collision attacks.
LLM04 · AML.T0016
Model DoS
5 seeds · 10 probes
Recursive prompt expansion, context window exhaustion, resource amplification, loop injection.
Framework Coverage

Industry-standard mapping. Out of the box.

Every finding is automatically mapped to OWASP LLM Top 10 and MITRE ATLAS — so your report speaks the language your board and auditors understand.

OWASP LLM Top 10 2025 Edition
LLM01Prompt Injection
LLM03Training Data Poisoning
LLM04Model Denial of Service
LLM06Sensitive Info Disclosure
LLM07Plugin Design Flaws
MITRE ATLAS v4.5
AML.T0051LLM Prompt Injection
AML.T0054Jailbreak ML Model
AML.T0024Exfiltration via Inference
AML.T0020Poison Training Data
AML.T0016Obtain Capabilities
Risk Scoring

CVSS-aligned severity. Every finding.

LANCE scores every confirmed finding on a CVSS-aligned 0–10 scale with four severity tiers, so risk is immediately actionable.

Critical
CVSS 9.0 – 10.0
System prompt leak, credential extraction, full instruction override
High
CVSS 7.0 – 8.9
Role confusion, jailbreak token, virtualization escape, language bypass
Medium
CVSS 4.0 – 6.9
Many-shot bypass, leetspeak obfuscation, ASCII art smuggling
Low
CVSS 0 – 3.9
Minor information leakage, non-exploitable policy drift
Early Access · Open Source · Free

Precision strikes.
Zero false claim.

LANCE is built for security teams who don't have time to wonder if their LLM is safe. Get early access and be first to know when we go public.

lance.iosec.in  ·  iosec.in  ·  v0.4.0