feat: add honeypot detection to reduce false positives (Fixes #6403)

/claim #6403


Problem

Certain hosts (commonly observed via Shodan or internet-wide scans) intentionally return static or misleading responses that satisfy matchers across many unrelated templates.

This results in:

  • Dozens of false vulnerability findings on a single host
  • Conflicting technology detections (e.g., Cisco + Apache + PHP + Tomcat)
  • Increased noise and slower triage for users

These hosts effectively behave as honeypots or sinkholes, polluting scan results.


Proposed changes

This PR adds a non-breaking honeypot detection mechanism to Nuclei to reduce false positives caused by hosts that intentionally match a large number of unrelated templates.

To address this, the change introduces a lightweight, conservative post-processing detector that analyzes per-host match patterns during result aggregation and flags hosts whose results are likely unreliable, using multiple independent signals.

Key properties:

  • Runs only in post-processing/reporting
  • No scan-time impact
  • No template logic changes
  • Default behavior is warn-only (all findings are preserved)

Detection Signals

A host is flagged only when multiple independent signals align:

  • High template count: a large number of unique templates matched on a single host
  • Category diversity: matches span many unrelated technology tags/categories
  • Response reuse: the majority of templates return identical HTTP response bodies
  • Technology conflicts: mutually exclusive technologies detected together
Detection is conservative: a host is flagged only when at least three of the four signals are present.
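The "at least three of four signals" rule above could be sketched roughly as follows. Note that the types, field names, and thresholds here are illustrative assumptions, not the PR's actual code:

```go
package main

import "fmt"

// hostMetrics aggregates per-host match statistics collected during
// result aggregation (hypothetical shape, for illustration only).
type hostMetrics struct {
	uniqueTemplates int             // distinct template IDs matched
	categories      map[string]bool // normalized technology tags seen
	bodyReuseRatio  float64         // fraction of matches with identical bodies
	hasTechConflict bool            // mutually exclusive technologies detected
}

// Example thresholds; conservative values chosen for illustration.
const (
	templateThreshold = 20
	categoryThreshold = 5
	reuseThreshold    = 0.6
	signalsRequired   = 3
)

// isLikelyHoneypot flags a host only when at least three of the
// four independent signals align.
func isLikelyHoneypot(m hostMetrics) bool {
	signals := 0
	if m.uniqueTemplates >= templateThreshold {
		signals++
	}
	if len(m.categories) >= categoryThreshold {
		signals++
	}
	if m.bodyReuseRatio >= reuseThreshold {
		signals++
	}
	if m.hasTechConflict {
		signals++
	}
	return signals >= signalsRequired
}

func main() {
	suspicious := hostMetrics{
		uniqueTemplates: 41,
		categories: map[string]bool{
			"cisco": true, "fortinet": true, "apache": true,
			"php": true, "tomcat": true, "mysql": true,
		},
		bodyReuseRatio:  0.85,
		hasTechConflict: true,
	}
	normal := hostMetrics{uniqueTemplates: 3, bodyReuseRatio: 0.1}
	fmt.Println(isLikelyHoneypot(suspicious)) // true: all four signals align
	fmt.Println(isLikelyHoneypot(normal))     // false: no signals present
}
```

Requiring multiple signals rather than any single one is what keeps a legitimately multi-vulnerable host (high template count, single category) from being flagged.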


Design Rationale

  • Post-processing avoids altering scan behavior
  • Conservative thresholds minimize false positives
  • Warn-only default ensures no breaking changes
  • Detection logic is isolated and thread-safe

Before vs After

Before

example.com

  • Apache CVE
  • Cisco CVE
  • Fortinet CVE
  • PHP CVE
  • Tomcat CVE
  • MySQL CVE

After

When a host exhibits honeypot-like behavior, Nuclei emits a clear warning while preserving all findings.

[HONEYPOT WARNING]
Host: http://example.com
Matched 41 templates across 9 unrelated categories.
Results may be unreliable.

All findings are still emitted. Users are explicitly informed that results from this host may be unreliable, enabling informed triage decisions.


Usage Examples

# Default behavior (honeypot detection disabled)
nuclei -u target.com
# Enable honeypot detection (warn-only, non-breaking)
nuclei -u target.com --honeypot-detect
# Enable detection and tag results for downstream filtering
nuclei -u target.com --honeypot-detect --honeypot-mode tag

When tagging mode is enabled, affected results include metadata that can be filtered in JSON output:

{
  "host": "example.com",
  "metadata": {
    "honeypot": true
  }
}

Tests and Validation

Unit tests were added to verify the following scenarios:

  • Normal vulnerable hosts are not flagged
  • High match count within a single category is not flagged
  • Mixed categories with high response reuse are flagged
  • CDN/WAF-like edge cases are not flagged
  • Detection-disabled behavior produces no warnings
  • Tagging mode correctly annotates result metadata

All tests are deterministic and do not rely on live scanning.


Implementation Notes

  • Dedicated honeypot detection logic with per-host metrics
  • Tracks unique template IDs to avoid inflated detection scores
  • Normalizes tags before conflict analysis
  • Detection runs during result aggregation only
  • Thread-safe design with no impact on scan-time performance
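The thread-safe, deduplicating aggregation described in these notes might look roughly like this. Again, the structure, names, and normalization rule are assumptions for illustration, not the PR's actual code:

```go
package main

import (
	"fmt"
	"strings"
	"sync"
)

// tracker records per-host metrics behind a mutex so that concurrent
// result-aggregation goroutines can update it safely.
type tracker struct {
	mu        sync.Mutex
	templates map[string]map[string]bool // host -> set of unique template IDs
	tags      map[string]map[string]bool // host -> set of normalized tags
}

func newTracker() *tracker {
	return &tracker{
		templates: make(map[string]map[string]bool),
		tags:      make(map[string]map[string]bool),
	}
}

// normalizeTag lower-cases and trims a tag before conflict analysis,
// so "Apache " and "apache" count once.
func normalizeTag(tag string) string {
	return strings.ToLower(strings.TrimSpace(tag))
}

// Record tracks a match, deduplicating template IDs to avoid
// inflated detection scores.
func (t *tracker) Record(host, templateID string, tagList []string) {
	t.mu.Lock()
	defer t.mu.Unlock()
	if t.templates[host] == nil {
		t.templates[host] = make(map[string]bool)
		t.tags[host] = make(map[string]bool)
	}
	t.templates[host][templateID] = true
	for _, tag := range tagList {
		t.tags[host][normalizeTag(tag)] = true
	}
}

// UniqueTemplates returns the deduplicated template count for a host.
func (t *tracker) UniqueTemplates(host string) int {
	t.mu.Lock()
	defer t.mu.Unlock()
	return len(t.templates[host])
}

func main() {
	tr := newTracker()
	var wg sync.WaitGroup
	for i := 0; i < 10; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			// the same template reported repeatedly still counts once
			tr.Record("example.com", "CVE-2021-0001", []string{"Apache", "apache "})
		}()
	}
	wg.Wait()
	fmt.Println(tr.UniqueTemplates("example.com")) // 1: duplicates collapse
}
```

Holding the lock only around map updates keeps the tracker out of the request path, which is what preserves scan-time performance.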

Checklist

  • Pull request is created against the dev branch
  • All checks passed (lint, unit/integration/regression tests, etc.) with my changes
  • I have added tests that prove my fix is effective or that my feature works
  • I have added necessary documentation (if appropriate)

Claim

Bounty: $250, sponsored by ProjectDiscovery (@projectdiscovery). Status: pending, submitted February 03, 2026.

Contributor: Hardik Taneja (@Hardik-Taneja), 100%.