Every product we recommend goes through the same evaluation framework. This page is the framework, kept short on purpose so you can hold us to it.
The five dimensions
1. Functionality – Does the product do what it claims, with the inputs and edge cases a real user faces?
2. Performance – Speed, resource usage, and stability across at least one full work session.
3. Usability – Time-to-first-success for a new user, clarity of the interface, friction in the daily flow.
4. Value – Price-to-capability ratio compared with the closest competitors at the same tier.
5. Trust – Privacy posture, security history, vendor stability, support responsiveness.
The test environment
Software is evaluated on the platforms it claims to support. Where applicable, we test on a current macOS release, a current Windows release, and at least one mobile target. Hardware specs and OS versions are recorded with each review.
Sample sourcing
We prefer to buy products at retail. When a vendor provides a review license or sample, we say so on the page. A free sample never buys a positive verdict, and we return or discard hardware samples after the review window.
Scoring
We use a 0 to 10 scale, calibrated against products we have previously reviewed in the same category. A 7 is solid and recommendable. A 9 is genuinely best-in-class. A 10 is reserved for products that change the category. Re-tests can move scores up or down.
Update cadence
Reviews are revisited when the product ships a major version, when the price changes meaningfully, or when reader feedback exposes an issue we missed. The "Last updated" date at the bottom of every review tells you when the verdict was last verified.
Last updated: 2026-04-26