Platform
VerifiedSignal turns uploads and URLs into structured scorecards—fallacies, factuality, authorship, provenance, and more—with human-in-the-loop review and streaming progress.
Overview
Scores are designed to be inspected: fallacies link to passages, factuality carries rationale, and collections reveal trends over time.
Measurement framework
Every document is evaluated through the same lenses so teams can compare sources, time periods, and workflows consistently.
- Fallacy detection. Measures manipulative reasoning. Indicators: per-fallacy breakdown (ad hominem, straw man, false dichotomy, slippery slope) mapped to specific text triggers.
- Factuality. Measures the reliability of claims. Indicators: internal consistency, citation signals, and cross-referenced factual claims with rationale.
- AI authorship. Measures machine-authorship likelihood. Indicators: multi-model assessment of linguistic patterns versus known LLM signatures, including a specific model guess.
- Scientific rigor. Measures adherence to the scientific method. Indicators: unfalsifiable claims, anecdotal evidence, appeal to nature, absence of peer review.
- Intent & genre. Measures what a document intends to be. Indicators: separates reported fact from narrative; satire detection; speculation presented as journalism.
- Provenance. Measures document origin history. Indicators: domain reputation, WHOIS history, canonical URL verification, archive.org presence.
- Semantic search. Measures deep content relevance. Indicators: vector similarity (kNN) and hybrid retrieval across large collections.
- Trend analytics. Measures aggregate quality shifts. Indicators: trend dashboards for factuality and fallacy frequency across sources over time.
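To make the dimensions concrete, here is a minimal sketch of what a structured scorecard could look like. Every type and field name below is an illustrative assumption, not the platform's published schema; only the dimensions and indicators come from the list above.

```typescript
// Illustrative scorecard shape; field names are assumptions, not the real schema.
type FallacyType = "ad_hominem" | "straw_man" | "false_dichotomy" | "slippery_slope";

interface Scorecard {
  documentId: string;
  fallacies: {
    score: number;
    instances: { type: FallacyType; passage: string }[]; // each fallacy linked to its text trigger
  };
  factuality: { score: number; rationale: string };            // claims carry rationale
  aiAuthorship: { probability: number; modelGuess?: string };  // includes a specific model guess
  scientificRigor: { score: number; flags: string[] };         // e.g. "unfalsifiable_claim"
  intentGenre: { reportedFactRatio: number; satire: boolean; speculationAsJournalism: boolean };
  provenance: { domainReputation: number; whoisAgeDays: number; archived: boolean };
  semanticNeighbors: { documentId: string; similarity: number }[]; // kNN over the collection
  trendKey: string;                                            // bucket key for trend dashboards
}
```

Because fallacy instances carry their triggering passage, a reviewer can move straight from a score to the evidence behind it.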
How it works
The reference workflow keeps provenance checks upstream of scoring, then makes results durable in collections for comparison.
Step 1
Upload PDF, Word, or HTML—or provide a URL. The system fetches, extracts text, and cleans content automatically.
Step 2
Verify source provenance—publication dates, author identity, and domain history—before analysis begins.
Step 3
A coordinated set of models scores the document across all eight intelligence dimensions.
Step 4
Save documents to collections for side-by-side comparison and trend visualization.
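As a sketch of that ordering, the four steps might compose like this. The function names (ingest, verifyProvenance, scoreAllDimensions, saveToCollection) are hypothetical stand-ins, chosen only to show that provenance runs upstream of scoring and that results persist to a collection.

```typescript
// Hypothetical stand-ins for the platform's real calls; names are assumptions.
declare function ingest(src: string): Promise<{ id: string; text: string }>;
declare function verifyProvenance(doc: { id: string }): Promise<{ ok: boolean; notes: string[] }>;
declare function scoreAllDimensions(doc: { id: string }): Promise<Record<string, number>>;
declare function saveToCollection(collectionId: string, docId: string, scores: Record<string, number>): Promise<void>;

// Reference workflow: provenance checks run before any scoring; results are
// persisted to a collection so they stay comparable over time.
async function analyze(src: string, collectionId: string) {
  const doc = await ingest(src);                        // Step 1: fetch, extract, clean
  const provenance = await verifyProvenance(doc);       // Step 2: dates, author, domain history
  if (!provenance.ok) return { status: "needs-review" as const, provenance };
  const scores = await scoreAllDimensions(doc);         // Step 3: all eight dimensions
  await saveToCollection(collectionId, doc.id, scores); // Step 4: durable and comparable
  return { status: "scored" as const, scores };
}
```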
Experience
Side-by-side review, suspicious-document surfacing, and SSE-driven scorecards keep operators oriented during long analyses.
- Side-by-side human-in-the-loop verification lets users correct low-confidence extractions with context.
- Suspicious-document logic surfaces items with high AI probability (e.g., >0.7) and low factuality (e.g., <0.4).
- Scorecards stream progress via Server-Sent Events (SSE) during analysis; the sketch after this list shows both the stream and the suspicious-document check.
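A client might consume that stream as follows. The endpoint path, event names, and payload fields are assumptions; only the SSE mechanism and the 0.7 / 0.4 thresholds come from the product copy.

```typescript
// Hypothetical SSE consumer; endpoint and event payloads are assumptions.
declare const docId: string;                // hypothetical document handle
declare function flagForReview(id: string): void;

const AI_PROBABILITY_MAX = 0.7;             // above this: likely machine-authored
const FACTUALITY_MIN = 0.4;                 // below this: unreliable claims

const stream = new EventSource(`/api/documents/${docId}/analysis/stream`);

stream.addEventListener("progress", (e) => {
  const { stage, pct } = JSON.parse((e as MessageEvent).data);
  console.log(`${stage}: ${pct}%`);         // keeps operators oriented mid-analysis
});

stream.addEventListener("complete", (e) => {
  const { aiProbability, factuality } = JSON.parse((e as MessageEvent).data);
  // Suspicious-document logic: high AI probability AND low factuality.
  if (aiProbability > AI_PROBABILITY_MAX && factuality < FACTUALITY_MIN) {
    flagForReview(docId);                   // surface for human-in-the-loop review
  }
  stream.close();
});
```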
Commercial packaging
Pricing in the product reference is illustrative—confirm current plans with the team during onboarding.
| Plan | Price (monthly) | Document limits | Feature inclusions |
|---|---|---|---|
| Reader | Free | 50 documents / month | All 8 dimensions, 3 collections, 7-day history. |
| Analyst | $29 / month | 1,000 documents / month | Unlimited collections, trend dashboards, semantic search, CSV/JSON export. |
| Team | $99 / month | 10,000 documents / month | 10 seats, shared collections, API access, SSO/SAML, self-host (Docker). |
Governance & reliability
Canonical records live in PostgreSQL; derived search can be rebuilt. Failure modes—from malformed JSON to Redis outages—have explicit mitigations in the engineering reference.
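Because the search layer is derived, a rebuild might look like the sketch below. The table and column names ("documents", "id", "text") and the indexDocument callback are assumptions; only the canonical-vs-derived split comes from the text.

```typescript
// Hypothetical rebuild job: the derived search index is disposable because
// canonical records live in PostgreSQL.
import { Client } from "pg";

async function rebuildSearchIndex(
  indexDocument: (row: { id: string; text: string }) => Promise<void>
): Promise<void> {
  const db = new Client({ connectionString: process.env.DATABASE_URL });
  await db.connect();
  try {
    let lastId = "";                        // keyset pagination cursor
    for (;;) {
      const { rows } = await db.query(
        "SELECT id, text FROM documents WHERE id > $1 ORDER BY id LIMIT 500",
        [lastId]
      );
      if (rows.length === 0) break;
      for (const row of rows) await indexDocument(row); // re-derive vector/hybrid entries
      lastId = rows[rows.length - 1].id;
    }
  } finally {
    await db.end();                         // release the connection even on failure
  }
}
```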
See Technology for pipeline stages, SSE contracts, and deployment targets.
Walk through the dimensions, review UI patterns, and see how collections analytics would surface in your org.