Import
Upload a file, paste text, or use Data Room to process multiple documents.
Guide
SafeDoc secures external AI usage with a simple process: anonymize, analyze, de‑anonymize.
A controllable flow: import → detect → replace → review → export (with optional restore).
Single document, pasted text, or multi-document processing with Data Room.
Choose a replacement type and a protection level (N1 → N2 today, N3 on the roadmap for Q3 2026) based on risk.
Copy-ready text + mapping (if reversible) + residual risk indicators.
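For intuition, here is a minimal sketch of the detect and replace steps in that flow. The regex detector, token format, and function names are stand-ins for illustration, not SafeDoc's actual engine:

```python
import re

# Stand-in detector: real detection covers far more than email addresses.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.\w+")

def detect(text: str) -> list[str]:
    """Detect: find candidate PII spans (here, emails only)."""
    return EMAIL.findall(text)

def replace(text: str, entities: list[str]) -> tuple[str, dict[str, str]]:
    """Replace: swap each entity for a numbered token, keeping the mapping
    that makes the optional restore step possible."""
    mapping: dict[str, str] = {}
    for i, entity in enumerate(dict.fromkeys(entities), start=1):
        token = f"[EMAIL_{i}]"
        mapping[token] = entity
        text = text.replace(entity, token)
    return text, mapping

doc = "Contact alice@example.com or legal@acme.io about the contract."
anonymized, mapping = replace(doc, detect(doc))
print(anonymized)  # Contact [EMAIL_1] or [EMAIL_2] about the contract.
print(mapping)     # {'[EMAIL_1]': 'alice@example.com', '[EMAIL_2]': 'legal@acme.io'}
```

The review and export steps then operate on this output and its mapping.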
Re-inject values locally using mapping_xxx.json and a tokenized AI answer.
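A minimal local restore sketch, assuming mapping_xxx.json is a flat token-to-value object; the actual file layout may differ:

```python
import json

def deanonymize(ai_answer: str, mapping_path: str) -> str:
    """Re-inject original values into a tokenized AI answer, entirely locally."""
    with open(mapping_path, encoding="utf-8") as f:
        mapping = json.load(f)  # assumed shape: {"[PERSON_1]": "Alice Martin", ...}
    for token, value in mapping.items():
        ai_answer = ai_answer.replace(token, value)
    return ai_answer

print(deanonymize("Dear [PERSON_1], the hearing is on [DATE_1].",
                  "mapping_xxx.json"))
```

Nothing leaves your machine at this step; the mapping file alone is enough to restore the text.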
Multi-document analysis with consistent pseudonyms across the whole batch.
With pseudonymization, the same entity keeps the same pseudonym across documents.
Great for due diligence, contracts, and multi-part case files.
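One way such consistency can work, sketched with a stubbed detector and invented class names: a single registry is shared across the batch, so every document gets the same token for the same entity.

```python
class PseudonymRegistry:
    """Shared across the whole batch so pseudonyms stay consistent."""

    def __init__(self) -> None:
        self.by_entity: dict[str, str] = {}   # entity -> token
        self.counters: dict[str, int] = {}    # kind -> last index used

    def token_for(self, entity: str, kind: str) -> str:
        if entity not in self.by_entity:
            self.counters[kind] = self.counters.get(kind, 0) + 1
            self.by_entity[entity] = f"[{kind}_{self.counters[kind]}]"
        return self.by_entity[entity]

registry = PseudonymRegistry()
batch = ["Alice Martin signed the lease.", "Rent is owed to Alice Martin."]
for doc in batch:
    # Stub: pretend the detector flagged "Alice Martin" as a PERSON.
    print(doc.replace("Alice Martin", registry.token_for("Alice Martin", "PERSON")))
# Both documents print [PERSON_1] for the same person.
```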
Generic tokens: [PERSON], [LOCATION]…
When you don’t need to keep links between occurrences.
Numbered, consistent tokens: [PERSON_1], [ORG_2].
Best for preserving narrative consistency across a document set.
Readable replacements (invented names, addresses) for natural-sounding text.
Useful for review and presentation while masking real values.
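To make the three replacement types concrete, here is a side-by-side sketch on one stubbed detector result. The fake-value pool is invented for illustration:

```python
text = "Alice Martin met Bob Stone. Alice Martin signed."
found = ["Alice Martin", "Bob Stone"]        # stubbed detector output
FAKE_NAMES = ["Jean Dupont", "Marie Curie"]  # illustrative pool only

# Generic tokens: no distinction between different people.
generic = text
for name in found:
    generic = generic.replace(name, "[PERSON]")

# Numbered tokens: each distinct person gets a stable index.
numbered = text
for i, name in enumerate(found, start=1):
    numbered = numbered.replace(name, f"[PERSON_{i}]")

# Realistic replacements: invented but natural-looking values.
realistic = text
for name, fake in zip(found, FAKE_NAMES):
    realistic = realistic.replace(name, fake)

print(generic)    # [PERSON] met [PERSON]. [PERSON] signed.
print(numbered)   # [PERSON_1] met [PERSON_2]. [PERSON_1] signed.
print(realistic)  # Jean Dupont met Marie Curie. Jean Dupont signed.
```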
Higher levels reduce re-identification through context (dates, amounts, locations, writing style).
PII tokenization / basic cleanup.
Stronger generalization and contextual reduction.
Additional protections. Roadmap Q3 2026.
Goal: keep useful meaning while reducing identifiability.
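The exact transformations at each level are SafeDoc's own; this sketch only illustrates the idea that a higher level generalizes quasi-identifiers (dates, amounts) rather than just tokenizing direct ones. All patterns here are simplified assumptions:

```python
import re

def n1_style(text: str) -> str:
    """N1-style pass: tokenize direct identifiers (here, emails only)."""
    return re.sub(r"[\w.+-]+@[\w-]+\.\w+", "[EMAIL]", text)

def n2_style(text: str) -> str:
    """N2-style pass: also generalize quasi-identifiers like dates and amounts."""
    text = n1_style(text)
    text = re.sub(r"\b\d{2}/\d{2}/(\d{4})\b", r"\1", text)        # 14/03/2024 -> 2024
    text = re.sub(r"\b\d[\d,]*\.?\d*\s?EUR\b", "[AMOUNT]", text)  # 12,500 EUR -> [AMOUNT]
    return text

msg = "Paid 12,500 EUR on 14/03/2024; contact x@y.com."
print(n1_style(msg))  # Paid 12,500 EUR on 14/03/2024; contact [EMAIL].
print(n2_style(msg))  # Paid [AMOUNT] on 2024; contact [EMAIL].
```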
Review, filter, and adjust what must be masked.
Copy the anonymized result and export mapping/report if needed.
Automated detection can produce false positives and false negatives. Indicators (residual scan / leakage score) help assess risk, but don’t replace a final review.
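A toy version of such an indicator, assuming nothing more than a pattern pass; the real residual scan and leakage score are richer, and neither replaces a human review:

```python
import re

PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.\w+"),
    "phone": re.compile(r"\+?\d[\d .-]{7,}\d"),
    "iban":  re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{10,30}\b"),
}

def residual_scan(text: str) -> dict[str, int]:
    """Count pattern hits that survived anonymization."""
    return {name: len(p.findall(text)) for name, p in PATTERNS.items()}

hits = residual_scan("Send it to [PERSON_1] at bob@mail.com.")
print(hits)                                  # {'email': 1, 'phone': 0, 'iban': 0}
print("review needed:", any(hits.values()))  # review needed: True
```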
Paste the AI answer containing tokens, then import your mapping to re-inject values locally.
Ready to try it on a real document?