Systematic review software
Systematic review software helps you complete a systematic review faster and more efficiently. It lets researchers organize record screening and data extraction, create multiple reviews for different projects, and collaborate in large teams.
We use an application programming interface (API) integration to automate searches in, for example, PubMed. After deciding on a search strategy, you can record search queries and their execution time. When new records become available in a particular database, they are automatically added to the record screening queue. Unfortunately, not all bibliographic databases have made an API available to third parties yet. During subsequent living systematic review updates, you can change the search strategy; you will get an alert when previously included records fall outside a narrowed search. Users can import records from bibliographic databases like Embase, Scopus, and bioRxiv.
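As a minimal sketch of what such an API integration could look like, the function below builds a search URL for NCBI's public E-utilities endpoint, which backs PubMed searches. The function name, the date restriction, and the example query are illustrative; the date filter mimics fetching only records added since the last recorded search execution.

```python
from urllib.parse import urlencode

# NCBI's E-utilities esearch endpoint, the public API behind PubMed searches.
EUTILS_BASE = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def build_pubmed_query(term, min_date=None):
    """Build an esearch URL; min_date (YYYY/MM/DD) restricts the search
    to records added since the last search execution."""
    params = {"db": "pubmed", "term": term, "retmode": "json", "retmax": 100}
    if min_date:
        # Restrict by Entrez date so only newly added records are returned.
        params["datetype"] = "edat"
        params["mindate"] = min_date
        params["maxdate"] = "3000"
    return EUTILS_BASE + "?" + urlencode(params)

url = build_pubmed_query('"systematic review"[Title]', min_date="2023/01/01")
```

Fetching `url` (e.g. with `urllib.request.urlopen`) returns a JSON list of matching PubMed IDs, which a screening tool could then add to its queue.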
We have developed a web application for title-and-abstract screening and full-text screening of research articles. Currently, the default setting is double-blind screening; we are adding an option to resolve conflicts through a third reviewer, as well as a single-reviewer mode in the screening settings. We provide a dashboard with an overview of reviewing progress and a flowchart of inclusion and exclusion decisions. One of the core principles built into our tool is transparency in the decisions users make during screening and data extraction. Users can add notes, and we are developing settings that let users control who can view these notes.
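The conflict-resolution step in double-blind screening boils down to finding records where the two blinded reviewers disagree. A toy sketch, with illustrative record IDs and field names (not the application's actual data model):

```python
# Decisions from two blinded reviewers; in double-blind screening neither
# sees the other's choice until both have voted.
decisions = {
    "rec_001": {"reviewer_a": "include", "reviewer_b": "include"},
    "rec_002": {"reviewer_a": "include", "reviewer_b": "exclude"},
    "rec_003": {"reviewer_a": "exclude", "reviewer_b": "exclude"},
}

def find_conflicts(decisions):
    """Return record IDs where the reviewers disagree, i.e. the records
    a third reviewer would need to resolve."""
    return [rid for rid, d in decisions.items()
            if d["reviewer_a"] != d["reviewer_b"]]

conflicts = find_conflicts(decisions)  # ["rec_002"]
```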
Reviewers can use machine learning to assist in screening and data extraction. Different machine learning algorithms have been implemented for systematic review (semi-)automation, for example convolutional neural networks, long short-term memory networks, support vector machines, or combinations of these. There are roughly two deployment approaches: generic machine learning and review-specific machine learning. In the generic approach, a model is trained to perform a specific task applicable across many systematic reviews; such models are often trained to apply specific exclusion criteria, like detecting randomized controlled trials. In the review-specific approach, a model is trained to make inclusion and exclusion decisions based on a human's previous decisions within that particular review. Active learning is used to update the ranking of the studies most likely to be included. These prioritization algorithms help reviewers screen faster by queuing the records most likely to be included first, until the remaining records are almost certainly not relevant.
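To make the review-specific re-ranking loop concrete, here is a deliberately simplified sketch: it scores each unscreened title by word overlap (Jaccard similarity) with the vocabulary of titles the reviewer has already included, and sorts the queue so the most "include-like" records come first. A real tool would use a trained classifier (SVM, LSTM, ...) that active learning retrains after each human decision; the scoring function and example titles here are illustrative only.

```python
def tokens(text):
    """Lowercased bag of words for a title."""
    return set(text.lower().split())

def rank_unscreened(included_titles, unscreened):
    """Order unscreened titles so the most 'include-like' come first,
    based on word overlap with previously included titles."""
    included_vocab = set().union(*(tokens(t) for t in included_titles))
    def score(title):
        t = tokens(title)
        return len(t & included_vocab) / len(t | included_vocab)
    return sorted(unscreened, key=score, reverse=True)

included = ["deep learning for medical imaging"]
queue = ["survey of crop rotation methods",
         "medical imaging with deep neural networks"]
ranked = rank_unscreened(included, queue)
# The imaging paper is queued before the unrelated agronomy paper.
```

After each new human decision, the "model" (here just the included vocabulary) is updated and the remaining queue is re-ranked, which is the essence of active-learning prioritization.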
One of the current constraints in automating literature screening is the availability of training data. We are working on enabling easy sharing of datasets under different licenses. Users can configure the settings for the machine learning models they want to use, and can evaluate and validate these models with the ML evidence dashboard.
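A sketch of the kind of metrics such an evidence dashboard could report when validating a screening model against human labels. Recall matters most in screening, since a missed relevant study is costlier than an extra record to read; the labels below are illustrative.

```python
def screening_metrics(y_true, y_pred):
    """Precision and recall for binary include(1)/exclude(0) labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Human labels vs. model predictions on a held-out validation split.
precision, recall = screening_metrics([1, 1, 0, 0, 1], [1, 0, 0, 1, 1])
```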
Google Chrome extension
The days of manually uploading PDFs to your screening application are behind us. You can use the Pitts Google Chrome extension to access and upload your records directly via your university library.
We have developed a data extraction interface. You can create your own data extraction form with population characteristics, outcomes, and other custom data items. Annotating text, highlighting sentences, and adding notes are optional. Users can also perform a risk-of-bias assessment using a variety of in-application tools, such as RoB 2.0.
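To illustrate what a custom extraction form might contain, here is a hypothetical form definition; the section names, field names, and types are invented for this sketch and are not the application's actual schema.

```python
# Hypothetical data extraction form: population characteristics, outcomes,
# and custom items, each section a list of typed fields.
extraction_form = {
    "population": [
        {"name": "sample_size", "type": "integer"},
        {"name": "mean_age", "type": "number", "unit": "years"},
    ],
    "outcomes": [
        {"name": "primary_outcome", "type": "text"},
        {"name": "effect_size", "type": "number"},
    ],
    "custom": [
        {"name": "funding_source", "type": "text", "notes_allowed": True},
    ],
}

def field_names(form):
    """Flatten the form into the column names of the exported spreadsheet."""
    return [f["name"] for section in form.values() for f in section]
```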
Output and settings
The final output users can download is a spreadsheet in .csv or .xlsx format. You can manually configure the data extraction forms according to your extraction needs. You can use the settings to configure user roles, such as editor and reviewer, and to assign tasks like screening and data extraction. There are also settings for crowdsourcing.
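A minimal sketch of producing the .csv output described above, using Python's standard csv module; the column names and rows are illustrative, and in practice the rows would come from the completed extraction forms and be written to a file rather than a string.

```python
import csv
import io

# Illustrative extraction results, one row per record.
rows = [
    {"record_id": "rec_001", "sample_size": 120, "decision": "include"},
    {"record_id": "rec_002", "sample_size": 45, "decision": "exclude"},
]

def to_csv(rows):
    """Serialize extraction rows to CSV text, header row first."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=list(rows[0]))
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

csv_text = to_csv(rows)
```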