About PubMed Search Tester

How it works

A common method for developing a PubMed search strategy for a new hedge or systematic review is to build a validation set against which different variants can be tested. A search developer collates citations from existing reviews (or even hand-searches core journals in the field) to compile a list of "known good" citations. Armed with this list, she can test successive iterations of a search strategy and see which ones perform best at picking up relevant citations (those on the "good" list) while rejecting everything else. This is an effective approach, but constructing the initial validation list often takes considerable time and resources.

PubMed Search Tester is designed to streamline this process. Start with an initial search that you think will yield some useful citations. The application then presents you with a randomly selected list of items from your results, which you sort into "good" (relevant) and "bad" (irrelevant) piles. Once you have sorted enough items (you decide when to stop), you'll have your own validation set of 10, 20 or 40 items. You can then try different iterations of your search strategy, and PubMed Search Tester will automatically test each one against your good set and your bad set, as sketched below. Keep going until you are satisfied with the sensitivity and precision of your search.
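To make the testing step concrete, here is a minimal sketch of how a candidate search might be scored against the saved piles. This is illustrative plain JavaScript under the assumption that the app tracks PMIDs for each pile; the function and variable names are hypothetical, not the application's actual source.

    // Minimal sketch, assuming we have the PMIDs retrieved by a
    // candidate search plus the PMIDs of the "good" and "bad" piles.
    // All names here are illustrative, not the app's actual code.
    function scoreSearch(retrievedPmids, goodPmids, badPmids) {
      const retrieved = new Set(retrievedPmids);
      const goodFound = goodPmids.filter((id) => retrieved.has(id)).length;
      const badFound = badPmids.filter((id) => retrieved.has(id)).length;
      return { goodFound, badFound };
    }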

Calculating search effectiveness

Sensitivity and precision for a given search are calculated using the formulas set out by Agoritsas et al.:

\[\text{Sensitivity} = \frac{\text{Good items found}}{\text{All good items}}.\]

\[\text{Precision} = \frac{\text{Good items found}}{\text{Good items found} + \text{Bad items found}}.\]
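Applied to the counts from the scoring sketch above, these formulas translate directly into code. A hedged sketch, with illustrative names and simple guards against division by zero:

    // Hypothetical helper applying the formulas above; allGood is the
    // total size of the "good" validation set. The guards avoid
    // dividing by zero when a search retrieves no validation items.
    function effectiveness(goodFound, badFound, allGood) {
      const sensitivity = allGood > 0 ? goodFound / allGood : 0;
      const retrievedValidation = goodFound + badFound;
      const precision =
        retrievedValidation > 0 ? goodFound / retrievedValidation : 0;
      return { sensitivity, precision };
    }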

Technology and credits

This application is made with JavaScript/jQuery and runs in your browser.

PubMed is queried (and results are retrieved) using the National Center for Biotechnology Information's publicly accessible E-utilities API.
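For illustration, a query against the ESearch endpoint might look like the sketch below. The endpoint and parameters are part of NCBI's documented E-utilities interface; the surrounding function is an assumption, not the application's actual code.

    // Illustrative ESearch call; resolves to an array of PMID strings.
    async function searchPubMed(term, retmax = 100) {
      const url = new URL(
        "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"
      );
      url.searchParams.set("db", "pubmed");
      url.searchParams.set("term", term);
      url.searchParams.set("retmax", String(retmax));
      url.searchParams.set("retmode", "json");
      const response = await fetch(url);
      const data = await response.json();
      return data.esearchresult.idlist;
    }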

Responsive design made less painful with Bootstrap.