Keywords
Visualization, machine learning, citations, FAIR, data sharing
The National Institutes of Health (NIH) Common Fund’s Stimulating Peripheral Activity to Relieve Conditions (SPARC) program aims to transform our understanding of nerve-organ interactions, with the intent of advancing bioelectronic medicine towards treatments that change lives.1 The SPARC program takes a Findable, Accessible, Interoperable, and Reusable (FAIR)-first approach to its datasets, protocols, and publications, enabling the data to be easily reused by research communities worldwide. The SPARC data portal serves as the gateway to access fully curated datasets at any time.2 Through the portal, researchers can search for data from real-world experiments to verify or corroborate studies in device development. The data generated by the SPARC program may also prove useful outside its original field of study, showcasing the benefits of multi-disciplinary data generation and sharing.3
All SPARC datasets are curated by the researchers according to the SPARC Data Standards (SDS), a data and metadata structure derived from the Brain Imaging Data Structure (BIDS).4 Several resources are available to SPARC researchers for making their data FAIR, including the cloud data platform Pennsieve, the curated vocabulary selector and annotation platform SciCrunch, the open-source computational modeling platform o2S2PARC, the online microscopy image viewer Biolucida, and the data curation software SODA.4–6 Submitted datasets also undergo an extensive curation process in which teams from the SPARC Data Resource Center (DRC) examine the data and work with the researchers to ensure all aspects of the FAIR data principles are followed.4,6,7 Once these datasets are made public, access to them is provided through the Pennsieve Discover service and sparc.science, the official access point of the SPARC Portal.8
While tools like these simplify the submission and curation of data, one of the greater benefits of the FAIR guidelines is that data can be reused in other studies by researchers around the world. However, a researcher who has submitted a dataset may not be aware when their data is reused, since current citation indexing tools, such as Google Scholar, do not account for datasets. To address this shortcoming, we developed SPARClink during the 2021 SPARC FAIR Codeathon (July 12-26, 2021),9 a system that queries all external publications using open-source tools and platforms and builds a database and visualizations of citations that showcase the impact of the SPARC consortium. Here, we define impact as the frequency of citations of SPARC-funded resources. By using citations as the key measure, SPARClink provides a method for showcasing the reuse of generated data and the benefits that FAIR data generation practices have on the wider scientific community. A visual representation of data reuse allows both researchers and the general public to see the benefits of FAIR data and the immediate utilization of publicly funded datasets in advancing the field of bioelectronic medicine.
Our solution can broadly be divided into four steps. The first step is the backend extraction of data using various application programming interfaces (APIs). The second is storing the extracted data in a real-time database. The third uses machine learning to improve the user experience through context-sensitive word clouds and smart keyword searches in the portal. The final step creates an engaging, interactive visualization through which users of the SPARClink system can explore the extracted data. A visual representation of this workflow is shown in Figure 1.
We retrieved dataset information directly from the Pennsieve data storage platform by querying the Pennsieve API for all publicly available SPARC datasets.10 The protocols stored on Protocols.io under the SPARC group were queried in the same manner.11 A list of public and published DOIs was created in our database, with additional information about the study authors and descriptions.
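As a rough illustration of this harvesting step, the sketch below pages through the publicly published datasets exposed by the Pennsieve Discover API. The endpoint path and response field names are assumptions based on the public Discover API rather than the exact SPARClink code; the SPARC protocols.io group is queried analogously through the protocols.io v3 REST API.

```python
import requests

DISCOVER_URL = "https://api.pennsieve.io/discover/datasets"  # assumed endpoint

def fetch_public_datasets(page_size=25):
    """Page through every publicly published dataset on Pennsieve Discover."""
    offset, datasets = 0, []
    while True:
        resp = requests.get(DISCOVER_URL,
                            params={"limit": page_size, "offset": offset})
        resp.raise_for_status()
        body = resp.json()
        datasets.extend(body.get("datasets", []))
        offset += page_size
        if offset >= body.get("totalCount", 0):
            return datasets

for ds in fetch_public_datasets():
    print(ds.get("doi"), ds.get("name"))  # illustrative response fields
```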
We used NIH RePORTER to retrieve data about papers published with SPARC funding. Research articles that reference or mention these datasets, protocols, and publications were queried from the NCBI repositories (PubMed and PubMed Central) using the search endpoint of their Python API.12 Figure 2 shows the overall flow of data between the APIs and resources queried. The NIH RePORTER API takes as input the project number (also known as the award number) of the NIH funding associated with a SPARC dataset (provided by the author as additional metadata required when publishing a dataset) and returns details including a study identifier, the name of the organization that received funding, the organization's country, the amount of funding received, and keywords describing the project topic. The NCBI API takes a PubMed Central article identifier and retrieves information such as the article name, journal name, year of publication, and authors.
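A minimal sketch of these two lookups is shown below, assuming the RePORTER v2 search endpoint and the NCBI E-utilities esearch endpoint; the example award number and DOI are hypothetical, not values taken from the SPARClink code.

```python
import requests

REPORTER_URL = "https://api.reporter.nih.gov/v2/projects/search"
ESEARCH_URL = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def reporter_project(award_number):
    """Look up an NIH award by its project number via the RePORTER v2 API."""
    payload = {"criteria": {"project_nums": [award_number]}, "limit": 1}
    return requests.post(REPORTER_URL, json=payload).json().get("results", [])

def pmc_ids_mentioning(identifier):
    """Search PubMed Central for articles whose text mentions an identifier,
    such as a dataset DOI or a protocol URL."""
    params = {"db": "pmc", "term": identifier, "retmode": "json"}
    body = requests.get(ESEARCH_URL, params=params).json()
    return body["esearchresult"]["idlist"]

print(reporter_project("OT2OD025340"))           # hypothetical award number
print(pmc_ids_mentioning("10.26275/duz8-mq3n"))  # hypothetical dataset DOI
```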
We used Google's Firebase Realtime Database to store all the information retrieved via the NIH RePORTER system. The data was stored in JSON format with read access available to anyone via a dedicated URL. The data was split into four sections labeled Awards, Datasets, Publications, and Protocols. Every entry in the database was given a unique identifier, and these identifiers were used to link entries to one another, forming a relational structure. The links represent citations or the use of resources within other publications. All publications in the database were uniquely identified as either SPARC-funded publications or non-SPARC publications (external publications that cite SPARC datasets and publications).
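Because the database offers public read access over a dedicated URL, its contents can be fetched with plain HTTP requests. The sketch below illustrates this, with a hypothetical database URL and illustrative key names standing in for the actual schema; the real links are provided later in the article.

```python
import requests

DB_URL = "https://sparclink-example.firebaseio.com"  # hypothetical project URL

# Firebase's REST interface returns any node as JSON when ".json" is appended.
publications = requests.get(f"{DB_URL}/Publications.json").json() or {}
for pub_id, pub in publications.items():
    # Each entry carries its unique identifier plus the identifiers of the
    # SPARC resources it cites; those links form the edges of the citation graph.
    print(pub_id, pub.get("title"), pub.get("cites", []))  # illustrative keys
```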
The front-end demo of the SPARClink web page uses Vue.js to create a functional prototype of the SPARClink system. An interactive force-based undirected graph visualization was created using the D3.js JavaScript library. We chose this representation because it offers an intuitively understandable way of showing the connected nature of citations and data reuse. The website itself is hosted on Vercel as a static front end.13 On the webpage, the visualizations can be filtered by key terms or resource type to get a better understanding of the resources created through the SPARC program. A screenshot of the webpage is shown in Figure 3.
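For context, a D3.js force simulation consumes a node-link structure like the illustrative one below, shown here as the Python dictionary the backend would serialize to JSON; the field names and identifiers are assumptions, not the exact SPARClink schema.

```python
# Illustrative node-link payload for the D3.js force-directed visualization.
graph = {
    "nodes": [
        {"id": "dataset:60", "type": "dataset"},
        {"id": "protocol:bzknx4w", "type": "protocol"},
        {"id": "pmc:PMC8000000", "type": "publication", "sparc_funded": False},
    ],
    "edges": [
        # An edge records that one resource cites or reuses another.
        {"source": "pmc:PMC8000000", "target": "dataset:60"},
        {"source": "dataset:60", "target": "protocol:bzknx4w"},
    ],
}
```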
To provide additional functionality in the front-end demo of SPARClink, we used machine learning algorithms to enhance the user experience. We called this component of the SPARClink project the Machine Learning Data Indexing Engine.
We used the SymSpell spelling-correction algorithm, trained on a vocabulary built from the SPARClink database.14 We used delete-only edit candidate generation to produce different combinations of spelling errors, and both character-level and word-level embeddings to recommend the most probable correct spelling. The output of the spell correction was used to generate a sentence-level embedding, which was then compared with the embeddings of the descriptions of the items in the dataset. This yielded a ranking of all items in the dataset by their similarity to the search string, of which the top 10 were shown on the front end.
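One way to realize this spell correction is with the symspellpy implementation of SymSpell, sketched below under that assumption; the vocabulary words and frequencies are illustrative stand-ins.

```python
from symspellpy import SymSpell, Verbosity

sym_spell = SymSpell(max_dictionary_edit_distance=2, prefix_length=7)

# Build the dictionary from the SPARClink vocabulary (word -> frequency).
for word, count in {"vagus": 42, "stomach": 17, "microscopy": 9}.items():
    sym_spell.create_dictionary_entry(word, count)

# Delete-only candidate generation returns the most probable corrections.
for suggestion in sym_spell.lookup("vagsu", Verbosity.CLOSEST,
                                   max_edit_distance=2):
    print(suggestion.term, suggestion.distance, suggestion.count)
```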
This module was also used to generate keywords using the keyBERT pretrained model.15 It generated the top 50 keywords associated with the whole document and used the maximal marginal relevance (MMR) algorithm to pick keywords that are more distant from one another in the embedding space, ensuring diversity among the chosen keywords.16
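A minimal sketch of this extraction step follows; the embedding model name, the diversity value, and the example corpus are assumptions, while the top-50 and MMR settings follow the description above.

```python
from keybert import KeyBERT

# Illustrative corpus standing in for the indexed SPARC resource descriptions.
descriptions = [
    "Vagal nerve recordings from the rat stomach under electrical stimulation.",
    "High-resolution microscopy of enteric neurons in the mouse colon.",
]

kw_model = KeyBERT("all-MiniLM-L6-v2")  # assumed embedding model
keywords = kw_model.extract_keywords(
    " ".join(descriptions),
    top_n=50,       # the top 50 keywords described above
    use_mmr=True,   # maximal marginal relevance favours mutually distant picks
    diversity=0.7,  # assumed diversity setting
)
print(keywords[:5])  # (keyword, relevance score) pairs
```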
The engine also contains algorithms that learn vector embeddings of the descriptors of the elements in the SPARClink database. Based on these embeddings, the algorithms compute the similarity between the vector representation of each word in the vocabulary and the vector representing the whole dataset, finding keywords that describe the resource. A word cloud is generated from the relevance of these results to further enhance the user experience.
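The sketch below illustrates this word-cloud generation, assuming sentence-transformers for the embeddings and the wordcloud package for rendering; both package choices, the model name, and the example texts are assumptions.

```python
from sentence_transformers import SentenceTransformer, util
from wordcloud import WordCloud

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model
vocabulary = ["vagus", "stomach", "neuron", "microscopy"]  # illustrative
dataset_text = "Vagal nerve recordings from the rat stomach."  # illustrative

# Similarity of each vocabulary word to the whole-dataset embedding determines
# how prominently the word appears in the cloud.
doc_vec = model.encode(dataset_text, convert_to_tensor=True)
word_vecs = model.encode(vocabulary, convert_to_tensor=True)
scores = util.cos_sim(word_vecs, doc_vec).squeeze(1).tolist()

# WordCloud expects positive weights, so clamp any non-positive similarities.
weights = {w: max(s, 1e-3) for w, s in zip(vocabulary, scores)}
WordCloud(width=600, height=400).generate_from_frequencies(weights).to_file(
    "wordcloud.png"
)
```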
Using SPARClink, researchers can aggregate all the resources created through the SPARC program and quantify their impact. The visualization created by the SPARClink system is shown in Figure 4. Each node in the undirected graph represents a unique SPARC resource (publication, protocol, or dataset), and each edge represents a citation or reference found by SPARClink. A well-connected graph of datasets and publications was observed, but a significant number of protocols remained disconnected from the rest of the resources despite being pulled from the SPARC protocols.io group. This may reflect protocols that are published on protocols.io but whose associated datasets have not yet been made public.
The word map generated from the main dataset visualizations is shown in Figure 5. The size of a word relative to its neighbors corresponds to the frequency and significance of that word within all the searchable metadata we have indexed. Selecting any word in this map automatically filters the SPARClink visualizations. Applying a keyword filter to the graph also displays the top-ranking items for that keyword on the side of the page as a scrollable list, as seen in Figure 6. Both the word map and the top-ranked recommendations update continuously as new input terms are entered on the SPARClink webpage.
Using FAIR standards can greatly improve the use of data across multiple disciplines and potentially lead to new and exciting discoveries in biomedical science. The benefits of employing the FAIR data principles for data generation, curation, and sharing can, however, be hard to quantify for researchers and members of the general public. With a system like SPARClink, researchers at all levels can get up-to-date feedback on the use of their data and on the advantages that the FAIR standards bring to efforts to advance biomedical science. In this work, we developed such a tool for the SPARC program to enable quantification of the reuse of FAIR SPARC resources (datasets, manuscripts, and protocols).
The primary challenge in accomplishing this task is that SPARC datasets and protocols are typically not referenced in the bibliography of research manuscripts, as is common practice for journal articles. Instead, SPARC dataset and protocol identifiers or URLs are only mentioned in the text or in the supplementary materials, which makes querying this information challenging. Furthermore, datasets created in the SPARC program can be embargoed for up to 12 months to give researchers enough time to document and publish their findings, whereas protocols are made public immediately, since protocols.io does not offer an embargo option. This likely contributes to the sparseness of the graph, and we expect its connectedness to improve over time.
In the future, we plan to add Google Scholar as an additional resource for data extraction, which should further improve the connectedness of the extracted data network. Additional filtering functions and performance improvements for very large numbers of nodes are also planned. Currently, the tool is hosted on an independent webpage, but we also aim to integrate it directly into the SPARC portal so that visitors can conveniently visualize the reuse and impact of the different SPARC-generated resources.
At the time of publication, the SPARClink visualizations can be found at https://sparclink.vercel.app and are expected to remain online going forward. The backend system that queries all the publications is currently paused due to a lack of system resources. The SPARClink code has been developed so that anyone can fork the repository from GitHub and run a local version of the project; instructions for running the modules locally are available in the GitHub repository. The database of currently extracted citation data can be queried via REST using the links provided below. The machine learning data indexing engine is hosted on a web server provided by pythonanywhere.com and is publicly accessible via its API endpoints; this module can also be run locally.
Source code available from: https://github.com/fairdataihub/SPARClink
Archived source code as at time of publication: https://doi.org/10.5281/zenodo.5550844
License: MIT
David Nickerson confirms that the author has an appropriate level of expertise to conduct this research and confirms that the submission is of an acceptable scientific standard. David Nickerson declares they were an organizer of the Hackathon in which the work described in this paper was performed. Affiliation: Auckland Bioengineering Institute, University of Auckland, New Zealand.
We would like to thank the NIH Common Fund’s SPARC Program and the organizers of the 2021 SPARC FAIR Codeathon for their support during the development of this project.
The three invited reviewers' responses to the standard peer review questions are summarized below.

| Reviewer question | Reviewer 1 | Reviewer 2 | Reviewer 3 |
|---|---|---|---|
| Is the rationale for developing the new software tool clearly explained? | Yes | Yes | Yes |
| Is the description of the software tool technically sound? | Yes | Yes | Yes |
| Are sufficient details of the code, methods and analysis (if applicable) provided to allow replication of the software development and its use by others? | Yes | Yes | Partly |
| Is sufficient information provided to allow interpretation of the expected output datasets and any results generated using the tool? | Partly | Partly | Yes |
| Are the conclusions about the tool and its performance adequately supported by the findings presented in the article? | Partly | Partly | Partly |

Reviewer Expertise: Reviewer 1: Epidemiology, FAIR principles; Reviewer 2: Neuroplasticity, data sharing; Reviewer 3: Machine learning and bioinformatics.

Competing Interests: No competing interests were disclosed by any reviewer.