How it Works

Most academic search tools rely on semantic similarity (e.g., keywords, conceptual overlap) to find and rank papers based on your query. EvidenceSeek takes a different approach. It ranks papers by their estimated evidential relevance to your query: how much weight a study merits when evaluating a hypothesis. EvidenceSeek does this in two steps. First, it determines what kinds of evidence would be most relevant for addressing your query. (For example, RCTs would be weighted more heavily than cohort studies for questions about clinical interventions.) Then it evaluates each retrieved study on its experimental design, results, sample size, and other factors, computing a relevance score against those query-specific criteria.

Example Hypothesis
"Psilocybin-assisted therapy reduces depressive symptoms in treatment-resistant major depression"
Criterion            Study A                  Study B
Design               Double-blind RCT         Single-arm open-label
Directness           Psilocybin + therapy     Psilocybin only
Population Match     TRD patients             Mixed MDD
Sample Size          n=233                    n=24
Outcome Match        MADRS score              MADRS score
Overall Relevance    High                     Moderate
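
To make the scoring step concrete, here is a minimal sketch in Python of how a weighted, query-specific relevance score could be computed. It is an illustration only, not EvidenceSeek's implementation: the criteria weights and per-criterion scores are hypothetical numbers chosen to mirror the example above.

```python
# Simplified illustration of query-specific relevance scoring.
# All weights and scores are hypothetical, not EvidenceSeek's actual values.

# Step 1: the query determines which evidence criteria matter and how much.
# For a clinical-intervention hypothesis, study design might dominate.
criteria_weights = {
    "design": 0.30,
    "directness": 0.25,
    "population_match": 0.20,
    "outcome_match": 0.15,
    "sample_size": 0.10,
}

# Step 2: each retrieved study is scored (0 to 1) on every criterion.
studies = {
    "Study A (double-blind RCT, n=233)": {
        "design": 0.95, "directness": 0.90, "population_match": 0.90,
        "outcome_match": 0.95, "sample_size": 0.85,
    },
    "Study B (single-arm open-label, n=24)": {
        "design": 0.40, "directness": 0.55, "population_match": 0.60,
        "outcome_match": 0.95, "sample_size": 0.30,
    },
}

def relevance_score(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted average of per-criterion scores (weights sum to 1)."""
    return sum(weights[c] * scores[c] for c in weights)

for name, scores in studies.items():
    print(f"{name}: {relevance_score(scores, criteria_weights):.2f}")
```

With these placeholder numbers, the double-blind RCT scores about 0.92 and the open-label study about 0.55, matching the High and Moderate ratings in the example.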

Explore Your Results

For each study, EvidenceSeek explains how its relevance score was determined. Review the ranked papers, then export comprehensive reports to share or reference later.

  • Evidence Reports — Ranked summaries in PDF or Markdown
  • Reference Lists — BibTeX and RIS (a rough sketch of both export formats appears below)
Each report opens with an overall summary, followed by the papers ranked by relevance; every entry includes the paper's details, its relevance assessment, and its abstract.
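
As a rough illustration of what these exports might contain, the sketch below renders a single ranked paper as a Markdown report entry and a BibTeX record. The paper, field names, and formatting are placeholders, not EvidenceSeek's actual schema or output.

```python
# Hypothetical illustration of turning one ranked result into export snippets.
# Every field below is a placeholder, not real study data.

paper = {
    "title": "Example trial of psilocybin-assisted therapy",  # placeholder
    "authors": ["A. Author", "B. Author"],                     # placeholder
    "year": 2024,                                              # placeholder
    "doi": "10.0000/placeholder",                              # placeholder
    "relevance": "High",
    "assessment": "Double-blind RCT directly testing the hypothesis.",
}

def to_markdown(p: dict) -> str:
    """One entry of a Markdown evidence report."""
    return (
        f"## {p['title']} ({p['year']})\n"
        f"*Relevance: {p['relevance']}*\n\n"
        f"{p['assessment']}\n"
    )

def to_bibtex(p: dict) -> str:
    """One entry of a BibTeX reference list."""
    return (
        "@article{placeholder2024,\n"
        f"  title  = {{{p['title']}}},\n"
        f"  author = {{{' and '.join(p['authors'])}}},\n"
        f"  year   = {{{p['year']}}},\n"
        f"  doi    = {{{p['doi']}}}\n"
        "}\n"
    )

print(to_markdown(paper))
print(to_bibtex(paper))
```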

About

EvidenceSeek was created by Michael W. Begun.

EvidenceSeek retrieves literature with the help of these public APIs (a minimal query example appears after the list):

  • OpenAlex — Open catalog of scholarly works
  • PubMed — Biomedical literature database
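
Both sources expose open HTTP endpoints. As an illustration of what a retrieval query could look like (not EvidenceSeek's actual pipeline), here is a minimal Python sketch against OpenAlex's public works endpoint; the search term and printed fields are only examples.

```python
# Minimal sketch of querying OpenAlex's public works endpoint.
# Illustration only; the search term is an example, not a fixed query.
import requests

resp = requests.get(
    "https://api.openalex.org/works",
    params={
        "search": "psilocybin treatment-resistant depression",  # example query
        "per-page": 5,
    },
    timeout=30,
)
resp.raise_for_status()

# Each result is a scholarly work with bibliographic metadata.
for work in resp.json()["results"]:
    print(work["publication_year"], work["display_name"], work.get("doi"))
```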

Background images and animations for this website were created using Grainrad.

View some of my other scientific projects on GitHub →