Living systematic review

The term living systematic review was coined in a 2014 paper by Prof. Julian Elliott and others. However, the concept was arguably conceived by Sir Iain Chalmers, founder of the Cochrane Collaboration, in a 1986 letter in The Lancet.

A living systematic review is a continuously updated online publication and repository of evidence surrounding a research question or topic. When new relevant research becomes available, it is ideally incorporated into the review immediately, and if the results of a new trial change the conclusion, an update is published. Living systematic reviews are most common in medicine, but in principle the approach could extend to other scientific fields, such as climate science.

When is it useful?

Living systematic reviews can help build clinicians' trust in guidelines and make guideline development and maintenance more transparent and evidence-based. They are particularly useful for research topics that see many new publications each month and where new insights can significantly affect patient outcomes. One example is the living systematic review of COVID-19 drug treatments and the associated living WHO guidelines.

What makes it possible?

Living systematic reviews are made possible by collaboration in large teams of researchers. Crowdsourcing is another tool researchers use to keep living systematic reviews up to date: it involves the general public in completing specific tasks within the systematic review process. One example is the Cochrane Crowd initiative, where volunteers take on assignments such as identifying randomized controlled trials.

Machine learning is another tool researchers can use to make conducting a living systematic review more feasible. Generic machine learning models can identify records that match specific inclusion or exclusion criteria during screening. One example is the RobotReviewer model for identifying randomized controlled trials (RCTs), which we have implemented in our tool.
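
As a rough illustration, the sketch below shows how such a generic screening classifier could work, using a TF-IDF plus logistic regression pipeline. The training abstracts, labels, and scoring are invented for the example; RobotReviewer's actual model and interface differ.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: abstracts already labelled as RCT (1) or not (0).
train_texts = [
    "Patients were randomly assigned to treatment or placebo.",
    "This double-blind randomized trial compared two doses.",
    "We present a narrative review of recent literature.",
    "A retrospective cohort study of hospital records.",
]
train_labels = [1, 1, 0, 0]

# TF-IDF features + logistic regression: a simple, common baseline.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

# Score unscreened records; flag likely RCTs for human review rather
# than auto-including them.
new_texts = [
    "Participants were randomized to intervention or control arms.",
    "An opinion piece on vaccine policy.",
]
for text, p in zip(new_texts, model.predict_proba(new_texts)[:, 1]):
    print(f"P(RCT) = {p:.2f}  ->  {text[:50]}")
```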

Another possibility is to train a review-specific machine learning model. Using active learning, an algorithm is trained to predict which records are likely to be relevant based on the reviewer's previous inclusion and exclusion decisions, and it produces a ranking of the articles it considers most relevant. We are also researching machine learning for data extraction and are developing models that predict which sentences contain relevant population or outcome data.
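
A minimal sketch of such an active learning loop is shown below, again with a TF-IDF plus logistic regression model. The record texts, starting labels, and simulated reviewer decisions are all made up for illustration; this is not the actual implementation of any particular tool.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Toy pool of records; in practice these are titles/abstracts.
pool = [
    "Remdesivir trial in hospitalized COVID-19 patients",
    "Corticosteroids and mortality in severe COVID-19",
    "Economic impact of lockdowns on small businesses",
    "Dexamethasone randomized evaluation in COVID-19",
    "Museum attendance trends during the pandemic",
]
labels = {0: 1, 2: 0}  # records already screened: 1 = include, 0 = exclude

vec = TfidfVectorizer()
X = vec.fit_transform(pool)

while len(labels) < len(pool):  # loop until the whole pool is screened
    seen = sorted(labels)
    clf = LogisticRegression().fit(X[seen], [labels[i] for i in seen])

    # Rank unscreened records by predicted relevance (certainty-based
    # sampling: show the reviewer the most-likely-relevant record first).
    unseen = [i for i in range(len(pool)) if i not in labels]
    probs = clf.predict_proba(X[unseen])[:, 1]
    best = unseen[int(np.argmax(probs))]
    print(f"Next to screen: {pool[best]!r} (P(relevant) = {probs.max():.2f})")

    # The reviewer's decision would be recorded here; we simulate it.
    labels[best] = 1 if "COVID-19" in pool[best] else 0
```

Each new decision retrains the model, so the ranking keeps improving as screening proceeds, which is what makes the approach practical for reviews that receive a steady stream of new records.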