PreBreach Docs

Understanding Findings

Learn how to interpret vulnerability findings, severity levels, CVSS scores, validation consensus, and AI remediation prompts in PreBreach reports.

Each vulnerability discovered during a PreBreach scan is recorded as a finding. A finding is a structured record that includes everything you need to assess the risk, verify the issue, and fix it. This page explains every component of a finding and how to use that information to prioritize remediation.

Severity Levels

Every finding is assigned one of five severity levels. Severity determines how the finding affects your security grade and where it should fall in your remediation queue.

  • Critical -- Exploitable vulnerabilities that allow full system compromise, data exfiltration, or remote code execution. Immediate action required. Any critical finding caps your grade at F.
  • High -- Serious vulnerabilities that could lead to significant data exposure or privilege escalation. Any high finding caps your grade at D or lower.
  • Medium -- Vulnerabilities that require specific conditions to exploit but still pose meaningful risk. Examples include CSRF, insecure configurations, and missing security headers.
  • Low -- Minor issues with limited exploitability or impact. Typically informational leaks, verbose error messages, or suboptimal configurations.
  • Info -- Observations that are not vulnerabilities but provide useful context. Examples include detected software versions, open ports, and technology fingerprints. Info findings do not affect your security grade.
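The grade-capping rules above can be sketched as a small helper. This is an illustrative reading of the rules, not the actual PreBreach scoring implementation; the function name and grade scale are assumptions.

```python
# Hypothetical sketch of the severity grade caps described above;
# not the real PreBreach scoring code.

GRADE_ORDER = ["A", "B", "C", "D", "F"]  # best to worst

def capped_grade(base_grade: str, severities: list[str]) -> str:
    """Apply severity caps: any critical caps the grade at F, any high at D."""
    cap = "A"  # no cap by default
    if "critical" in severities:
        cap = "F"
    elif "high" in severities:
        cap = "D"
    # the final grade is the worse of the base grade and the cap
    return max(base_grade, cap, key=GRADE_ORDER.index)
```

For example, a scan that would otherwise earn a B but contains one high finding comes out as `capped_grade("B", ["high", "medium"])`, which is "D".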

CVSS v4.0 Scores

PreBreach scores each finding using the Common Vulnerability Scoring System version 4.0. CVSS provides a standardized numeric score from 0.0 to 10.0 that quantifies the technical severity of a vulnerability independent of your specific environment.

The CVSS vector string is included with each finding so your team can review the exact parameters used in the calculation. Key metrics include Attack Vector, Attack Complexity, Privileges Required, User Interaction, and Impact on Confidentiality, Integrity, and Availability.
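A CVSS v4.0 vector string is a `CVSS:4.0` prefix followed by slash-separated `metric:value` pairs, so it is straightforward to split into its component metrics for review. The vector below is an illustrative example, not taken from a real finding.

```python
# Minimal sketch: splitting a CVSS v4.0 vector string into its metrics.

def parse_cvss_vector(vector: str) -> dict[str, str]:
    prefix, _, metrics = vector.partition("/")
    if prefix != "CVSS:4.0":
        raise ValueError(f"unexpected CVSS version prefix: {prefix}")
    return dict(part.split(":", 1) for part in metrics.split("/"))

vector = "CVSS:4.0/AV:N/AC:L/AT:N/PR:N/UI:N/VC:H/VI:H/VA:H/SC:N/SI:N/SA:N"
metrics = parse_cvss_vector(vector)
# metrics["AV"] is the Attack Vector ("N" = network);
# metrics["VC"] is the impact on Confidentiality ("H" = high)
```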

CVSS scores complement PreBreach severity levels but are not identical. A finding's severity is determined by combining the CVSS score with business context and exploitability observed during the scan.

CWE Identifiers

Each finding maps to one or more Common Weakness Enumeration (CWE) identifiers. CWE IDs classify the underlying software weakness that caused the vulnerability, such as:

  • CWE-79 -- Cross-site Scripting (XSS)
  • CWE-89 -- SQL Injection
  • CWE-287 -- Improper Authentication
  • CWE-862 -- Missing Authorization

CWE mappings help your team search internal knowledge bases, reference external advisories, and track recurring weakness patterns across scans.

OWASP Top 10 Mapping

Every finding is mapped to the relevant OWASP Top 10 (2021) category. This mapping makes it straightforward to align PreBreach results with industry-standard risk taxonomies used in compliance frameworks, security policies, and audit reports.

For example, a broken access control finding maps to A01:2021, while an injection vulnerability maps to A03:2021.

Confidence Scores

Each finding includes a confidence score ranging from 0 to 100. This score represents how certain the AI agents are that the finding is a true positive.

  • 90 - 100 -- Very high confidence. Strong evidence and consistent validation across models.
  • 70 - 89 -- High confidence. Solid evidence with minor ambiguity in one validation dimension.
  • 50 - 69 -- Moderate confidence. The finding is likely valid but may benefit from manual review.
  • Below 50 -- Low confidence. Consider this a potential lead rather than a confirmed vulnerability.

Higher confidence scores generally correlate with confirmed validation consensus. Findings below 50 are rare in final reports because the validation pipeline filters out most uncertain results.
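The confidence bands above map directly to threshold checks, for example when bucketing findings for triage. This is a sketch of the documented ranges; the band names are illustrative.

```python
# Sketch: mapping a confidence score to the bands documented above.

def confidence_band(score: int) -> str:
    if score >= 90:
        return "very high"
    if score >= 70:
        return "high"
    if score >= 50:
        return "moderate"
    return "low"  # treat as a lead, not a confirmed vulnerability
```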

Validation Consensus

PreBreach uses a multi-model validation pipeline where Claude and GPT independently assess each finding. The consensus result appears on every finding as one of three statuses:

  • Confirmed -- Both models agreed the finding is a true positive. High reliability.
  • Needs Review -- The models disagreed (one flagged it as a true positive, the other as a false positive). Manual verification is recommended.
  • Rejected -- Both models agreed the finding is a false positive. Rejected findings are hidden by default but can be revealed using the report filter controls.

This dual-model approach keeps the false positive rate below 5%, so the findings you see are overwhelmingly real issues.
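The three consensus statuses follow mechanically from the two per-model verdicts. The sketch below shows that rule; the actual PreBreach pipeline may combine additional signals.

```python
# Sketch of the dual-model consensus rule described above.
# Each verdict is a boolean: True = true positive, False = false positive.

def consensus(claude_verdict: bool, gpt_verdict: bool) -> str:
    if claude_verdict and gpt_verdict:
        return "confirmed"
    if not claude_verdict and not gpt_verdict:
        return "rejected"
    return "needs_review"  # the models disagreed; verify manually
```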

Evidence

Each finding includes evidence that demonstrates the vulnerability. Evidence types vary by finding category but typically include:

Request and Response Pairs

The exact HTTP requests sent by the agent and the server responses received. Headers, payloads, and status codes are preserved so you can reproduce the issue manually or in tools like Burp Suite.
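Because headers, payloads, and status codes are preserved, a stored request can be replayed from the command line. The sketch below turns a request pair into a curl command; the evidence field names and the example request are illustrative, not the PreBreach export schema.

```python
# Sketch: converting a stored request pair into a curl command for
# manual reproduction. Field names here are hypothetical.
import shlex

def to_curl(method: str, url: str, headers: dict[str, str], body=None) -> str:
    parts = ["curl", "-X", method, shlex.quote(url)]
    for name, value in headers.items():
        parts += ["-H", shlex.quote(f"{name}: {value}")]
    if body is not None:
        parts += ["--data", shlex.quote(body)]
    return " ".join(parts)

cmd = to_curl(
    "POST",
    "https://example.com/login",
    {"Content-Type": "application/x-www-form-urlencoded"},
    "user=admin'--",
)
```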

Screenshots

For client-side and visual vulnerabilities, PreBreach captures browser screenshots showing the exploited state. Screenshots are especially useful for XSS, UI redressing, and content injection findings.

Proof-of-Concept Steps

A step-by-step reproduction guide that describes how to trigger the vulnerability. PoC steps are written for a technical audience and assume familiarity with common security testing tools.

Remediation Guidance

Every finding includes a human-readable remediation description that explains what to fix and why. Remediation steps are specific to the technology stack detected during the scan -- for example, a missing CSRF token finding for a Next.js application references the framework's built-in protection mechanisms.

AI Fix Prompts

In addition to the written guidance, each finding provides an AI remediation prompt -- a pre-crafted instruction you can copy and paste directly into AI-powered coding tools such as Cursor, Bolt, or any LLM-based assistant. The prompt includes the vulnerability context, affected code location (when detected), and a clear fix instruction.

To use an AI fix prompt:

  1. Click the Copy Prompt button on the finding detail card.
  2. Open your AI coding tool (Cursor, Bolt, or similar).
  3. Paste the prompt and let the model generate a fix.
  4. Review and apply the suggested changes.

Prioritizing Findings

When triaging a report with multiple findings, use the following prioritization strategy:

  1. Start with critical and high severity findings. These have the greatest impact on your security grade and represent the most exploitable risks.
  2. Focus on confirmed findings first. Confirmed validation consensus means both AI models agree the issue is real.
  3. Review Needs Review findings next. These warrant manual verification before deciding whether to remediate or dismiss.
  4. Use CVSS scores to break ties. When two findings share the same severity level, the higher CVSS score indicates greater technical risk.
  5. Address medium and low findings in subsequent cycles. These are important for defense-in-depth but are less likely to be actively exploited.
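One reading of the triage order above is a composite sort key: severity first, then consensus status, then CVSS as a tiebreaker. The field names below are illustrative, not the PreBreach report schema.

```python
# Sketch: the prioritization strategy above expressed as a sort key.
# Lower tuple values sort first; field names are hypothetical.

SEVERITY_RANK = {"critical": 0, "high": 1, "medium": 2, "low": 3, "info": 4}
CONSENSUS_RANK = {"confirmed": 0, "needs_review": 1, "rejected": 2}

def triage_key(finding: dict) -> tuple:
    return (
        SEVERITY_RANK[finding["severity"]],
        CONSENSUS_RANK[finding["consensus"]],
        -finding["cvss"],  # higher CVSS first within the same tier
    )

findings = [
    {"severity": "high", "consensus": "confirmed", "cvss": 7.1},
    {"severity": "critical", "consensus": "needs_review", "cvss": 9.0},
    {"severity": "high", "consensus": "confirmed", "cvss": 8.2},
]
ordered = sorted(findings, key=triage_key)
```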

Consistent triage across scan cycles will improve your security grade over time and reduce your attack surface incrementally.
