Evaluate
For editors, analysts, engineers, and researchers
Audit reasoning in one pass
Spot correct, weak, and flawed content with clear status icons.

When to use this
- Quickly check facts without slowing reviews
- Catch logic gaps before final drafts
- Judge reasoning or code quality faster
Quick guide
1. Highlight the section you want to stress-test
2. Press ⌘⇧M → Evaluate
3. Scan the icon-tagged findings and status tally

Pro tips
- Run Summary first for the quick gist
- Pair with Distill to confirm evidence coverage
- Use Explain when terminology feels opaque
- Turn Evaluate findings into Quiz cards for spaced review
Why evaluation matters
When you're feeding research memos, code snippets, or opinion pieces into Criticly, you need to know whether the content is trustworthy before you act on it. Evaluate inspects correctness and reasoning in context, calling out what holds up and what needs fixing so you can ship with confidence.
Common use cases
- Fact-checking: Verify data points, citations, and historical claims without leaving the document.
- Logic review: Stress-test strategy docs or essays for hidden assumptions and weak inference chains.
- Code QA: Catch syntax issues, deprecated APIs, and insecure patterns early in the review cycle.
- Editorial triage: Summarize the reliability of incoming drafts for stakeholders before handing off.
How Evaluate structures a review
Every run follows the same lightweight report structure:
- Overview — a single sentence judging overall reliability.
- Findings — bullet points marked with ✅, ⚠️, or ❌, each naming the issue type and bolding the risky phrase so you can jump back to the source.
- Status Tally — a count of how many items were sound, questionable, or flawed so status is obvious to collaborators.
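Put together, a run over a short passage might look something like this (an illustrative mock-up, not captured app output):

```
Overview: Mostly sound, but one statistic needs a source.

Findings:
✅ Logic — the cost comparison follows from the cited figures.
⚠️ Evidence — **"adoption doubled in 2023"** has no citation.
❌ Fact — **"released in 2019"** conflicts with the product's 2021 launch.

Status Tally: 1 sound · 1 questionable · 1 flawed
```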
Evaluate adapts its checks to the content you select:
- Factual or scientific text gets verified against established knowledge and flagged when evidence is missing.
- Arguments or reasoning are inspected for fallacies, unsupported leaps, and unstated assumptions.
- Code blocks are reviewed for syntax correctness, outdated methods, and performance or security pitfalls.
- Opinion pieces are graded on internal consistency, with subjective claims clearly marked.
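To make the code check concrete, here is a generic illustration of the kind of pitfall such a review pass flags, a string-formatted SQL query, alongside the fix it would suggest. The snippet is a hypothetical example written for this page, not Criticly's own analysis or output; the table and function names are invented.

```python
import sqlite3

# ❌ Flawed: interpolating user input into SQL invites injection.
def find_user_unsafe(conn, name):
    return conn.execute(
        f"SELECT id FROM users WHERE name = '{name}'"  # would be flagged as insecure
    ).fetchone()

# ✅ Sound: parameterized queries let the driver escape input safely.
def find_user_safe(conn, name):
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (name,)
    ).fetchone()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'ada')")
print(find_user_safe(conn, "ada"))  # (1,)
```

Highlighting both versions together gives Evaluate the full code path, so the finding can name the risky line and the safer pattern in one pass.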
Pairing with Explain, Summary, Distill, and Quiz
- Start with Summary when you need the core idea before diving into issues. It gives the quick brief that focuses your fact-checking.
- Run Explain if terminology or concepts are unclear. Understanding the premise makes Evaluate's findings easier to act on.
- Use Distill to map the argument's structure. Once you see the Core Idea, Key Evidence, and Essential Logic, Evaluate highlights exactly where that structure breaks.
- Convert Evaluate's ⚠️ or ❌ findings into Quiz cards so you and your teammates remember the fixes and the rationale behind them.
Best practices
- Highlight complete sections so Evaluate can judge the full argument or code path.
- Look for clusters of ⚠️ and ❌ icons in the tally; they show where to focus edits first.
- Rewrite flagged passages, then re-run Evaluate to confirm the issue count drops.
- Share the Overview sentence in your docs or PRs to give reviewers a quick reliability readout.
Ready to try it yourself?
Download Criticly and start using this workflow today on your Mac.
Download Now
