Keywords
Open Science, Open Access, Open Data, Open Education, Open Evaluation, Open Methods, Open Participation, Open Policies, Open Software, Open Tools
This article is included in the Research on Research, Policy & Culture gateway.
We have now added a column to the data set with each participant's discipline (according to the Frascati Manual classification). Previously, the data set contained discipline information only at the cluster (broad classification) level. We have also provided more information on the generation of the 13 items for the open science practices survey; in particular, we now describe the interview process in more detail.
See the author's detailed response to the review by Lauren Cadwallader
See the author's detailed response to the review by Suzanne Dumouchel
“Open science” is not a new term in the repertoire of academic disciplines; it has developed over the last several decades. However, what researchers understand by open science, and which open science practices they refer to in particular, varies greatly between research projects. In some research projects, open science practices align closely with the research paradigm’s philosophy of science (e.g., preregistration in critical rationalism); in others, they are at odds with it (e.g., replicability in constructivism). Accordingly, open science practices vary in their degree of implementation and stage of development. The purpose of this data collection was to reveal the extent to which subcommunities have emerged that share a common profile of implementing open science practices, and how these profiles relate to research paradigms and disciplines.
The study was conducted as a cross-sectional survey using the survey tool formr (Arslan, Walther & Tata, 2020). To achieve high ecological validity, the survey asked about the applicability of open science practices in a concrete research project rather than over a time period (e.g., the past year). We focused on completed research projects because participants could draw on experiences from all phases of a project, which allowed informed assessments of the factors that influenced project decisions. Participants were recruited via the online access panel provider prolific.co, using Prolific’s built-in filter to target researchers (“Industry Role = Researcher”). In the study description on Prolific, we merely indicated that the study addressed practices in research projects; any mention of open science practices was avoided to reduce selection bias in the sample:
“In this study, you will indicate whether 13 practices are potentially applicable to a research project you were conducting. The survey contains only 16 items in total.
We are looking for participants who have conducted a research project associated with one of the disciplines
• Natural sciences
• Engineering and technology
• Medical and health sciences
• Agricultural and veterinary sciences
• Social sciences
• Humanities and the arts”
Participants took on average 3.98 minutes (median 3.43) to complete the survey and received USD 0.85 as compensation (approx. USD 12.81 per hour on average). The first session started on August 20, 2021, and the last on March 16, 2022.
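As a quick check, the stated hourly rate follows directly from the compensation and the average completion time:

$$\frac{\text{USD}\ 0.85}{3.98\ \text{min}} \times 60\ \frac{\text{min}}{\text{h}} \approx \text{USD}\ 12.81\ \text{per hour}$$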
We aimed for a sample distributed across all research disciplines and therefore drew on the classification of research fields from the OECD Frascati Manual (OECD, 2015). The Frascati Manual is an internationally acknowledged standard, developed by the OECD, for the methodology of collecting and using research and development statistics; as such, it is a natural choice for defining a taxonomy of research disciplines. For each “broad classification” in the manual (natural sciences; engineering and technology; medical and health sciences; agricultural and veterinary sciences; social sciences; humanities and the arts) we aimed for n = 50 participants, which would have led to a total sample of N = 300 (the limit of the allocated financial resources). As soon as 50 participants from a broad classification had finished the survey, access to the survey closed for further participants from that classification (“cell closed”). Two cells exceeded the stopping rule (natural sciences, social sciences) because a cell only closed after the last participant from that cell had finished the survey, while further participants from that cell were still able to begin it (see Table 1). In addition, we were only able to recruit 42 participants for the agricultural and veterinary sciences despite several postings on Prolific. This is perhaps unsurprising, since the agricultural and veterinary sciences constitute a narrower field than the other broad classifications; it is also why data collection stretched over a longer period. Participants from the other broad classifications were recruited within a few weeks of the start of data collection, whereas repeated invitations to researchers from the agricultural and veterinary sciences yielded only a few additional participants (see codebook).
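To make the stopping rule concrete, here is a minimal sketch of the cell logic in R; it is an illustration under our own naming (`quota`, `completions`, `cell_is_open`), not the actual formr/Prolific implementation:

```r
# Illustrative sketch of the quota ("cell") stopping rule; not the actual
# formr/Prolific code. A cell stops admitting new sessions once 50
# participants from that broad classification have *finished* the survey.
quota <- 50

cell_is_open <- function(broad_field, completions) {
  # completions: named integer vector of finished surveys per broad classification
  completions[broad_field] < quota
}

completions <- c("Natural sciences" = 50, "Social sciences" = 48)
cell_is_open("Social sciences", completions)  # TRUE: new sessions may still start

# Overshoot (as in Table 1) arises because closure is only triggered by a
# *completed* survey: sessions that began before the 50th completion still
# count toward the cell when they finish.
```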
On the first page, participants agreed to the declaration of consent.
On the second page, they indicated the discipline of the research project they would answer the following questions about: “Discipline. On the next page you will answer questions regarding a previous research project. To which discipline is this research project most closely related?” In a dropdown menu, participants chose from all 42 second-level classifications of the OECD Frascati Manual (OECD, 2015). By using the second-level classifications, we tried to avoid inconsistent assignments to the broad classifications by the participants. After that, an attention check item was displayed (see below).
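To illustrate how second-level answers aggregate to the broad classifications, here is a hedged R sketch; the mapping excerpt names real Frascati fields, but the data frame and column names are hypothetical:

```r
# Hypothetical excerpt of the mapping from Frascati second-level
# classifications to broad classifications (OECD, 2015); only 6 of the
# 42 second-level fields are shown.
frascati_broad <- c(
  "Mathematics"                       = "Natural sciences",
  "Civil engineering"                 = "Engineering and technology",
  "Clinical medicine"                 = "Medical and health sciences",
  "Animal and dairy science"          = "Agricultural and veterinary sciences",
  "Psychology and cognitive sciences" = "Social sciences",
  "History and archaeology"           = "Humanities and the arts"
)

# Toy example; 'discipline' is an assumed column name, not necessarily
# the one used in the published data set.
survey <- data.frame(discipline = c("Clinical medicine", "Mathematics"))
survey$broad <- unname(frascati_broad[survey$discipline])
survey
```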
The third page gave a quick instruction on how to answer the items on the following page: “When answering the items on the next page, please think of a research project of yours that you have already completed. Regardless of whether you actually applied the practices in this research project: Which of the practices would have been potentially applicable, given all the characteristics and circumstances of the project? This includes both scientific and practical considerations in conducting the study.”
At the top of the fourth page, the following question was displayed: “To what extent are the following behaviors applicable in your research project?”, followed by the 13 items on open science practices (for item labels, see Table 2). The practices were derived and synthesized through a combined top-down and bottom-up approach: top-down from the FOSTER taxonomy of open science, and bottom-up from nine additional expert interviews across different disciplines. Combining both approaches exposed mutual blind spots and helped ensure that the breadth of open science practices is reflected in the survey.

The FOSTER taxonomy is, to our knowledge, the only taxonomy of open science. It was created as part of FOSTER Plus, an EU-funded project on open science whose goals explicitly covered the generation of high-quality training resources, including the taxonomy we use.

For the bottom-up approach, we interviewed nine open science experts, recruited from an open science fellows program in which they served as mentors. Following a theoretical sampling approach, we recruited mentors from a variety of disciplines (e.g., sociology, computer science, sinology) who applied different research paradigms (qualitative, quantitative, mixed methods, theoretical). In a focused interview, interviewees were given a narrative prompt to retrospectively consider open science practices in their field: “Please recall one of your most recently completed research projects. Thinking about the entire span of the project, from the initial idea to the completion of the project, what aspects of open science do you consider significant and how can they be exemplified in research projects?” The interviewer then asked follow-up questions about other practices (“Are there other aspects of Open Science that you consider significant in your research projects (i.e., potentially others as well)? If so, how could these be implemented?”) and, where needed, to clarify individual practices mentioned.

Two trained coders transcribed the interviews and segmented the material around each open science practice mentioned; disagreements were resolved through discussion throughout the coding process. The coders then conducted a qualitative content analysis of the segmented material, abstracting the practices named by the interviewees in two stages to an equivalent level of abstraction. The resulting practices were finally compared and synthesized with the FOSTER taxonomy, yielding the 13 items on open science practices.
At the bottom of the page, we assessed the research paradigm the project was situated in: “Research paradigm. What was the project’s primary research interest and design?”, with the single-choice answer categories “mainly qualitative empirical”, “mainly quantitative empirical”, “explicitly mixed-methodological (equally qualitative and quantitative empirical)”, and “nonempirical”.
For details on items and item statistics, see the codebook (created with the R package codebook; Arslan, 2019) in the Extended data (Schneider, 2022).
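For readers who want to reproduce such a report, the following is a minimal sketch of the typical codebook workflow; the file name and column name are placeholders, and this is not the authors' actual script:

```r
# Minimal sketch of generating a codebook report with the 'codebook'
# R package (Arslan, 2019); not the authors' actual script.
library(codebook)

survey <- read.csv("survey_data.csv")  # placeholder file name

# Variable labels attached as attributes show up in the generated report;
# 'paradigm' is an assumed column name.
attr(survey$paradigm, "label") <- "Research paradigm of the project"

# codebook() is meant to be knitted inside an R Markdown document (yielding
# an HTML report such as codebook.html); codebook_table() returns the
# item-level metadata table directly.
codebook_table(survey)
```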
Participants had to pass an attention check at the beginning of the survey in order to complete the remaining questions. The attention check read as follows:
“Please read the following scenario briefly and answer a question about it:
A famine has broken out in your village. You and some others have been chosen to leave the village and search for food. It begins to rain heavily and soon there will be flooding. Participants in studies like this are sometimes not very attentive. We have included this question here to check if you have actually read the scenario. If you read this, leave the following question unanswered just click next.
According to the scenario, would it be appropriate to take the raft and leave the others behind?”
This was followed by a seven-point Likert scale with the anchors “absolutely no” and “absolutely yes”. The attention check was considered “passed” if nothing was marked on the scale (i.e., an NA value on this item). Overall, 20 otherwise eligible participants failed the attention check and were excluded; they are not included in the data set available for download (Schneider, 2022). After failing the attention check, they jumped to the end of the survey and therefore did not complete the 13 items on the open science practices.
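Applied to the raw responses, this exclusion rule amounts to a one-line filter. A sketch in R, where the column name `attention_check` is an assumption:

```r
# Sketch of the exclusion rule; 'attention_check' is an assumed column name.
raw_data <- data.frame(attention_check = c(NA, 4, NA, 7))  # toy example

# The check is passed only if the Likert item was left unanswered (NA).
passed <- is.na(raw_data$attention_check)
survey <- raw_data[passed, , drop = FALSE]  # published data contain only passers
```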
As a limitation regarding data validation, it should be noted that we did not target a representative sample of researchers across disciplines. For the data set, it was important that we had variance in the backgrounds of the researchers. Any analyses comparing disciplines should therefore be interpreted with caution.
The present data collection received approval from the ethics committee of the Faculty of Economics and Social Sciences at the University of Tübingen (no approval number). Participants agreed to the consent details printed below before beginning the survey.
In the future, the data will be analyzed to determine whether there are distinct communities in the application of open science practices and to what extent the practice profiles of these communities resemble or differ from one another. Are there open science practices that all communities share? Are there practices for which differences between communities are particularly strong? In addition, the role of research disciplines and research paradigms will be explored.
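The analysis method is not specified here; one conceivable approach, sketched purely as an illustration with hypothetical item names (`osp_1` to `osp_13`) and an arbitrary number of clusters, is to cluster participants by their 13-item applicability profiles:

```r
# Purely illustrative: k-means clustering of participants by their 13
# practice ratings. Item names 'osp_1'..'osp_13' and k = 3 are assumptions.
set.seed(42)
survey <- as.data.frame(matrix(sample(1:7, 50 * 13, replace = TRUE),
                               ncol = 13,
                               dimnames = list(NULL, paste0("osp_", 1:13))))

km <- kmeans(survey, centers = 3)  # candidate "communities"

# Mean practice profile per candidate community
aggregate(survey, by = list(community = km$cluster), FUN = mean)
```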
Zenodo: Applicability of open science practices to completed research projects from different disciplines and research paradigms. https://doi.org/10.5281/zenodo.6834569 (Schneider, 2022).
This project contains the following underlying data:
Zenodo: Applicability of open science practices to completed research projects from different disciplines and research paradigms. https://doi.org/10.5281/zenodo.6834569 (Schneider, 2022).
This project contains the following extended data:
• codebook.html (codebook report of survey and its items)
• STROBE-checklist-v4-cross-sectional.pdf (STROBE Statement: Checklist of items that should be included in reports of cross-sectional studies)
• Consent Statement.pdf (Consent Statement: Details of the Consent Statement the participants agreed to)
Data are available under the terms of the Creative Commons Attribution 4.0 International license (CC-BY 4.0).
We thank the nine interviewees from the pilot study. We acknowledge support from the Open Access Publishing Fund of the University of Tübingen.
Is the rationale for creating the dataset(s) clearly described?
Yes
Are the protocols appropriate and is the work technically sound?
Yes
Are sufficient details of methods and materials provided to allow replication by others?
Yes
Are the datasets clearly presented in a useable and accessible format?
Yes
Competing Interests: No competing interests were disclosed.
Reviewer Expertise: educational psychology
Is the rationale for creating the dataset(s) clearly described?
Yes
Are the protocols appropriate and is the work technically sound?
Yes
Are sufficient details of methods and materials provided to allow replication by others?
Yes
Are the datasets clearly presented in a useable and accessible format?
Partly
Competing Interests: No competing interests were disclosed.
Reviewer Expertise: Electronic engineering and telecommunication, engineering education, scientific policies
Is the rationale for creating the dataset(s) clearly described?
Yes
Are the protocols appropriate and is the work technically sound?
Yes
Are sufficient details of methods and materials provided to allow replication by others?
Partly
Are the datasets clearly presented in a useable and accessible format?
Partly
Competing Interests: No competing interests were disclosed.
Reviewer Expertise: Scholarly communications, publishing, open science.
Is the rationale for creating the dataset(s) clearly described?
Yes
Are the protocols appropriate and is the work technically sound?
Yes
Are sufficient details of methods and materials provided to allow replication by others?
Partly
Are the datasets clearly presented in a useable and accessible format?
Yes
Competing Interests: No competing interests were disclosed.
Reviewer Expertise: research infrastructures, social sciences and humanities, FAIR data, EOSC, Open Science
Alongside their report, reviewers assign a status to the article:

| | Invited Reviewer 1 | Invited Reviewer 2 | Invited Reviewer 3 | Invited Reviewer 4 |
|---|---|---|---|---|
| Version 2 (revision), 20 Jul 22 | read | read | read | |
| Version 1, 11 Apr 22 | read | read | | |