Step 01: What you submit
A PDF or DOCX of the paper, methodology document, or pre-registration plan you want reviewed.
- Format: PDF or DOCX. Plain text and Markdown accepted for short pre-registration filings.
- Size: under 50 MB per file. For larger files or zipped portfolios, contact us first.
- Content: the engine evaluates the methodology section most directly: cohort definition, hypothesis, falsification criteria, and analysis plan.
- No PHI unless redacted. Identifiable patient data must be removed or pseudonymized before submission. We do not accept raw clinical data.
- Optional fields on the submission form: primary condition, age stratum, OSF cross-reference URL, medRxiv preprint URL.
You also confirm two checkboxes: that you have the rights to submit, and whether you want the rating made public.
Step 02: What the engine does
The submission goes to an automated falsification engine. The engine has no human reviewer in the loop.
- Parse. The engine extracts the methodology, claims, and any cited data sources from your document.
- 8-rule pre-registration check. The eight rules: filed before analysis; hypothesis stated explicitly; cohort definition complete; outcome variables defined; pre-specified analysis plan; falsification criteria; time-stamped immutability; deviations flagged. The engine reports pass / fail / not applicable for each rule.
- Gate-level falsification tests. When a gate is implementable for your domain (e.g., care-process leakage detection on a clinical AI claim), the engine runs the gate and reports its result.
- Literature consistency check. Where the engine has access to a comparable corpus, it cross-references claimed effect sizes against meta-analyses and primary literature.
- Generate rule-by-rule feedback. The output includes the rating, the per-rule outcome, and a short justification for each rule.
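The per-rule outcome described above can be sketched as a small report structure. The rule keys and report schema here are illustrative assumptions, not the engine's published format.

```python
# A sketch of the rule-by-rule report shape, assuming the eight rules
# named above and the three stated outcomes. Keys are illustrative.
RULES = [
    "filed_before_analysis",
    "hypothesis_stated_explicitly",
    "cohort_definition_complete",
    "outcome_variables_defined",
    "pre_specified_analysis_plan",
    "falsification_criteria",
    "time_stamped_immutability",
    "deviations_flagged",
]
OUTCOMES = {"pass", "fail", "not_applicable"}

def summarize(report: dict[str, str]) -> str:
    """Collapse a rule-by-rule report into a one-line summary."""
    assert set(report) == set(RULES) and set(report.values()) <= OUTCOMES
    counts = {o: sum(v == o for v in report.values()) for o in OUTCOMES}
    return f"{counts['pass']} pass / {counts['fail']} fail / {counts['not_applicable']} n/a"

example = dict.fromkeys(RULES, "pass") | {"deviations_flagged": "not_applicable"}
print(summarize(example))  # 7 pass / 0 fail / 1 n/a
```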
The engine produces the same output for the same input. Different submitter, same paper, same rating. Same paper, different day, same rating, as long as the protocol version hasn't changed.
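The determinism claim amounts to this: the rating is a pure function of the document and the protocol version, and of nothing else. The toy scoring below is a stand-in to illustrate the property, not the engine's real logic.

```python
# Toy deterministic rating: the score depends only on (document bytes,
# protocol version). Submitter identity and date never enter the function.
import hashlib

def rate(document: bytes, protocol_version: str) -> float:
    """Stand-in score in [0, 10]; same inputs always yield the same score."""
    digest = hashlib.sha256(document + protocol_version.encode()).digest()
    return round(digest[0] * 10 / 255, 1)

paper = b"methodology text"
assert rate(paper, "v1.2") == rate(paper, "v1.2")  # same day or not
# A protocol-version bump may change the rating; nothing else can.
```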
Step 03: What you get back
An emailed report with the engine's rating and rule-by-rule feedback.
- Free pre-registration check: rating + per-rule feedback within 5–10 business days.
- Paid tiers (Evidence Integrity Score, Adversarial Validation, Partnership): faster turnaround per the engagement scope.
- Permanent verifiable URL for each rating record: rocsitediscovery.com/preregistration/<short_id>/
- Cryptographic evidence hash stamped on the rating: SHA-256 over the methodology text plus any uploaded manuscript, plus the protocol version used.
- Right of reply. If you believe the rating is wrong, the contact path is Section 5 of the Terms of Service.
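An evidence hash of this kind could be computed as below. The inputs (methodology text, manuscript bytes, protocol version) come from the description above, but the field order and length-prefixed canonicalization are assumptions; the service does not publish its exact scheme on this page.

```python
# A sketch of a SHA-256 evidence hash over the three stated inputs.
# Length-prefixing each part avoids ambiguity between concatenations.
import hashlib

def evidence_hash(methodology_text: str, manuscript: bytes, protocol_version: str) -> str:
    h = hashlib.sha256()
    for part in (methodology_text.encode("utf-8"), manuscript,
                 protocol_version.encode("utf-8")):
        h.update(len(part).to_bytes(8, "big"))  # 8-byte big-endian length prefix
        h.update(part)
    return h.hexdigest()

print(evidence_hash("Cohort: adults, pre-specified plan.", b"%PDF-1.7", "protocol-v1.2"))
```

Anyone holding the same three inputs can recompute the digest and compare it against the stamped value, which is what makes the rating record verifiable.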
Step 04: What the rating means
Every rating is a decimal score between 0 and 10. The integer floor matters more than the digit after the point.
What the rating does not tell you: whether the science is correct, whether conclusions generalize, whether the work is novel, whether peer reviewers will agree, or whether the data was collected ethically. See Methodology Limitations for the full list of what an automated rating can and cannot tell you.
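The "integer floor matters more" point, in code: compare ratings by their integer part first. A sketch of that reading, not the service's own comparison logic.

```python
# Two ratings with the same integer floor sit in the same band;
# crossing a whole number is the meaningful change.
import math

def rating_band(rating: float) -> int:
    """The band a rating falls in: 7.1 and 7.9 share a band, 6.9 does not."""
    return math.floor(rating)

assert rating_band(7.1) == rating_band(7.9) == 7
assert rating_band(6.9) < rating_band(7.1)
```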
Step 05: What it costs
The pre-registration check is genuinely free for academic submissions. There is no upgrade gate hidden behind it. If you want a stronger rating record for FDA, journal, or investor purposes, the Evidence Integrity Score tier is the path. Full service tier descriptions →