Most academic search tools rely on semantic similarity (e.g., keyword or conceptual overlap) to find and rank papers matching your query. EvidenceSeek takes a different approach: it ranks papers by their estimated evidential relevance to your query, that is, how much weight a study merits when evaluating a hypothesis. EvidenceSeek does this in two steps. First, it determines what kinds of evidence would be most relevant for addressing your query. (For example, RCTs would be weighted more heavily than cohort studies for questions about clinical interventions.) Then it evaluates each retrieved study on its experimental design, results, sample size, and other factors, computing a relevance score against those query-specific criteria.
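To make the two-step idea concrete, here is a minimal Python sketch of one way such scoring could work. Everything in it (the `Study` fields, the weight tables, the scoring formula) is a hypothetical illustration under assumed names, not EvidenceSeek's actual model.

```python
from dataclasses import dataclass

@dataclass
class Study:
    design: str          # e.g. "rct", "cohort", "case_report"
    sample_size: int
    effect_reported: bool  # whether the study reports a measured effect

# Step 1 (illustrative): map a query type to design weights.
# A clinical-intervention query weights RCTs above cohort studies.
DESIGN_WEIGHTS = {
    "clinical_intervention": {"rct": 1.0, "cohort": 0.6, "case_report": 0.2},
    "etiology":              {"cohort": 1.0, "rct": 0.7, "case_report": 0.3},
}

def score_study(study: Study, query_type: str) -> float:
    """Step 2 (illustrative): combine the query-specific design weight
    with crude proxies for sample size and reported results."""
    design_w = DESIGN_WEIGHTS[query_type].get(study.design, 0.1)
    size_w = min(study.sample_size / 1000, 1.0)  # saturate at n = 1000
    result_w = 1.0 if study.effect_reported else 0.5
    return design_w * (0.6 + 0.2 * size_w + 0.2 * result_w)

# Example: an RCT with n = 400 scored for a clinical-intervention query
print(score_study(Study("rct", 400, True), "clinical_intervention"))  # 0.88
```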
For each study, EvidenceSeek explains why it received its score. Review the ranked papers, then export comprehensive reports to share or reference later.
EvidenceSeek was created by Michael W. Begun.
If you have suggestions for improving EvidenceSeek, I would love to hear them. Share your feedback →
EvidenceSeek retrieves literature with the help of these public APIs:
Background images and animations for this website were created using Grainrad.
View some of my other scientific projects on GitHub →