Keywords
Open Science, scientific publishing, trust, open access, peer review, governance, gatekeeping
This article is included in the Research on Research, Policy & Culture gateway.
Scientific publishing is a critical part of scientific enquiry; individual excellence is often measured by the number of publications, and the journals in which they appear carry enormous weight. Open Science practices, such as open access, open review, random gatekeeping and shared governance, are implemented in various journals and publishing platforms, providing alternative ways of publishing. But how trusted are these practices?
We have created a theoretical framework for trust in the context of academic publishing and investigated to what extent Dutch researchers find these ways of publishing trustworthy. We have performed a survey to compare the trustworthiness of novel and classical ways of publishing and conducted multiple interviews to figure out why scientists find certain publishing practices more attractive than others.
In the academic publishing system, we have identified various interdependent relationships between stakeholders that involve a certain level of uncertainty; therefore, we can assume that trust plays a relevant role here. Based on the literature review and interview results, trustworthiness turned out to be one of the most important factors in choosing journals in which to read about relevant scientific discoveries and to publish papers. The survey results suggest that some aspects of open publishing, such as open access, open peer review and shared governance, are well known, widely accepted and trusted amongst the participants, while others, like participatory peer review or random gatekeeping, are less known. In these cases, many researchers voiced concerns about the competence of randomly assigned gatekeepers and of reviewers coming from the wider community.
Our results highlight a shift in social norms within the Dutch scientific community, formulating critical voices towards the profit-oriented nature of classical scientific publishing and highlighting the importance of open access to scientific results, supporting open peer review and publishers with shared governance.
In 2021, UNESCO created a Recommendation on Open Science, which defines Open Science as “a set of principles and practices that aim to make scientific research from all fields accessible to everyone for the benefit of scientists and society as a whole” (UNESCO).
The Open Science movement focuses on, among other things, the accessibility of scientific results in the form of publications, data, source materials and digital representations (Bartling & Friesike, 2014). Free access to the generated scientific knowledge is claimed to be the key to innovation and human development (Cribb & Sari, 2010; Phelps, Fox, & Marincola, 2012), a democratic right of every taxpayer by making publicly funded research publicly accessible (Phelps et al., 2012), and a means to increase the replicability and reliability of scientific records and reduce publication bias (Reynolds, 2016). Sharing scientific results can happen in various ways, ranging from scientific publications and conference presentations to seminar discussions, digital conversations, mail exchanges, blogs and social media posts. Some scholars have even suggested sharing timestamped and referrable ideas and negative or preliminary scientific data (Bartling & Fecher, 2016). Still, researchers are often afraid to share their unpublished results or ideas for fear of being scooped, not being credited for ideas, public humiliation, risk to reputation, reduced scientific quality, career compromise, backlash from senior figures, or simply being different (Gomes et al., 2022). Therefore, scientific results are still disseminated mainly through scientific journals and books (Schimanski & Alperin, 2018).
Publishing practices and policies enormously shape and determine the other phases of the scientific life cycle through the reward structure coupled with publications. Although there are initiatives to move away from the impact factor and H-index (as recommended by the San Francisco Declaration on Research Assessment, DORA), in most countries, in one way or another, the publication list is still one of the most important factors in determining the academic impact of a scientist, leaving leadership, vision, teaching qualities, teamwork and collaboration out of the measurement of excellence, for example when evaluating grant applications (Borycz et al., 2023; Khan, Almoli, Franco, & Moher, 2022; McKiernan et al., 2019; Morton, Ranson, & El-Boghdadly, 2023). The consequences are the well-described publish-or-perish phenomenon (Lee, 2014), the unequal distribution of resources towards institutions and researchers that are already successful (Bezuidenhout & Chakauya, 2018; Onie, 2020), and a reluctance to invest in risky but innovative projects or to finance novice researchers with original ideas (Stephan, 1996).
One of the most frequently raised issues with the current publishing system is the subscription-based business model, in which the authors and the peer reviewers do not receive any monetary incentives for their intellectual work, while the readers have to subscribe to the journal or pay for each article individually (Tennant et al., 2017). Making the final version of scientific publications permanently and freely accessible to everyone (open access) can be achieved by journals in several ways. One often-used form of open access involves article processing charges (APCs), which are paid by authors prior to publication, often from research grants (Owens, 2003). This type of open access, called gold open access, has become one of the most widespread manifestations of open science. At the end of 2021, the global open access level for published research reached 49%, most of which were gold open access publications (Neylon & Huang, 2022). The introduction of APCs reinforced a different type of publication bias: authors from less-resourced fields, countries, or organisations publish less this way (Borrego, 2023; Klebel & Ross-Hellauer, 2023; Nabyonga-Orem, Asamani, Nyirenda, & Abimbola, 2020). Various organisations support full open access to research publications funded by public or private grants (“cOAlition S”) through non-profit forms of academic publishing, known as diamond open access (Fuchs & Sandoval, 2013).
Publishing preprints, self-archiving (also called green open access), and making datasets available and reusable in repositories (Ware & Mabe, 2015) are aimed at solving some other issues related to the current mainstream way of scholarly publishing, such as not acknowledging peer reviewers, the lack of reproducibility (van Rossum, 2017), the low incentives to publish negative results, and the lack of reusability of published work (Niya et al., 2019). Several other approaches aim to change these deeper aspects of academic publishing, providing alternative ways of peer review, such as publishing the peer review together with the article (public review or open peer review) or inviting a broader group from the scientific community to provide feedback to the authors (participatory review or open participation peer review) (Ross-Hellauer, 2017; Walker & Rocha da Silva, 2015), using random gatekeepers instead of editors (Tennant et al., 2017), or trying out shared governance over private companies or decentralised infrastructure (Tenorio-Fornés, Tirador, Sánchez-Ruiz, & Hassan, 2021).
Different journals and publishing platforms have various combinations of these solutions, applying incremental changes in their practices or introducing revolutionary new ways of academic publishing. Of course, the number of publications appearing on these platforms and in these journals provides us with some information about how many scientists are eager to change their publishing practices, but we do not know for sure what scientists still publishing in “traditional” journals think about these “new” practices.
At the same time, although there are criticisms of the current mainstream way of publishing, it is still widely accepted and trusted by the broad scientific community (Tenopir, 2014). Trust in the classical way of publishing might even be strengthened by the spread of predatory journals, which exploit the publish-or-perish principle underlying the current academic system and deceive potential authors into paying to publish their results in journals that do not provide traditional peer review and editing (Shrestha, 2021).
We were therefore interested in the extent to which scientists trust journals and publishing platforms that implement aspects of open science. We first created a theoretical framework to define and conceptualise trust in this context, then investigated the academic publishing system from the perspective of this framework to answer the sub-question: What kind of role does trust play in academic publishing? Then, we asked researchers at one TU Delft faculty to generate ideas and come up with possible solutions to these issues of academic publishing. We have used the theoretical framework to dissect trust into smaller and more tangible concepts, such as benevolence, competence, integrity and predictability, and we investigated to what extent researchers and PhD candidates working in Dutch research institutes trust aspects of new publishing practices such as open access, open review, random gatekeeping and shared governance.
A mixed-methods strategy was chosen to answer the research question: To what extent do researchers trust new initiatives in academic publishing? First, we performed desk research to create a framework for trust and investigated the various trust relationships in academic publishing. Then, we invited several TU Delft-based researchers to generate potential solutions for some issues related to academic publishing. Based on the themes that emerged from this session and the theoretical framework, we created a quantitative online survey to get a general picture of to what extent researchers would trust various aspects of academic publishing. This was followed by qualitative semi-structured interviews with researchers from various Dutch universities to gain a deeper understanding of the reasons why they trust or distrust several aspects of open science.
The literature study, the ideation session, and interview series B were conducted by a researcher with an engineering and social scientific educational background together with a researcher with a design research background. The survey and interview series A were performed by a team of Master students following the Research Methods course of the Communication Design for Innovation Master track at TU Delft, under the supervision of a researcher with a social scientific research background.
Because our research question focused on researchers involved in the academic publishing system, on both the user and production sides, we decided to include researchers with publishing, peer reviewing and editing experience. To focus the research on our local environment, we decided to invite Dutch researchers.
In total, sixty researchers filled out the survey completely. They represented social sciences, humanities, natural sciences, and engineering disciplines. More than half of the respondents were at the beginning of their career; 20% had ten to nineteen years of work experience, while 25% had worked in science for over 20 years. Most participants (95%) had already published scientific publications. Two-thirds of the participants had fewer than 24 articles published, while almost 14% had more than 75 publications. 78% of the participants had performed a peer review before, and 20% had been involved in the peer review of more than 75 papers.
Ten researchers from one TU Delft faculty were asked to participate in the ideation workshop. The group consisted of PhD students and assistant, associate, and full professors with various levels of expertise in publishing and peer reviewing. Four participants from various Dutch universities were interviewed in interview series A, and eight researchers from TU Delft in interview series B. They worked at various faculties and held assistant, associate or full professor positions.
All parts of the study, including the informed consent forms, the survey, the interviews and the idea generation session, were approved by the Human Research Ethics Committee (HREC) of TU Delft prior to the given part of the research (Letter of approval #2196, signed on the 12th of May 2022 by the Chair HREC TU Delft, and Letter of approval #3180, signed on the 19th of May 2023 by the Chair HREC, TU Delft). Participants were fully informed about the research, including the purpose of the study and how their responses would be used. Any potential risks or benefits of participating were also explained clearly, and it was emphasised that participation was voluntary and that participants could withdraw from the study until data collection was finished. A formal consent form, previously approved by the ethical committee, was provided, which all participants were required to sign before the idea generation session and interviews or, in the case of online interviews, verbally agree to (the verbal consent was recorded and stored as an audio file). Both types of informed consent (written and verbal) are accepted standard methods for obtaining consent for surveys and interviews and were approved by the ethical committee. Because participants in the online interviews were not physically present, we asked for verbal consent so that they did not have to send signed consent forms via email, which would have introduced an additional risk of data leakage. In the case of verbal consent, the audio recordings were kept in a separate secure folder. The survey started with a short introduction to the research, explaining its aim, the duration of filling in the survey, the confidential nature of the data collection, the data management process, the potential risks of participating, the voluntary nature of participation and the contact information of the responsible researcher.
After this introduction, all participants had to agree to these conditions; otherwise, their data were not collected. Please find the exact introduction page in the survey document (Kalmar, 2024b). All collected data were stored securely in Microsoft Teams with two-factor authentication, and only group members and the responsible researcher had access. In addition, all personally identifiable data, such as names, IP addresses and response dates, were removed from the dataset during the data cleaning and analysis stages.
Desk research
For the theoretical framework, we performed the first round of literature search with trust-related terms in Google Scholar because of its broad coverage of reports and books (Martín-Martín, Orduna-Malea, Thelwall, & Delgado-López-Cózar, 2019). After the initial search, we used the results to refine the search queries and snowball additional sources. Search terms included trust and trustworthiness, distrust, mistrust and untrust, trustworthiness, vulnerability, risk, threat, uncertainty, dependence, and control. From the included sources, clusters of similar information were formed, and these clusters were enriched with additional information from subsequent literature searches (e.g., by doing an in-depth search into the meaning of uncertainty, which is one of the trust conditions).
The second round of literature review was performed to link the theory in the framework with actual examples from the academic publishing world. For this, we used the Web of Science. The following search terms were used in various combinations: goals, dependence and uncertainty; academic, scholar or scientific publishing; and finally, readers (audience, public), authors (scientists, researchers), editors, peer reviewers (referees), librarians (libraries, universities) and publishers. We selected articles whose abstracts appeared relevant, read them in full and summarised them.
Ideation session
Ten TU Delft researchers were invited to brainstorm on possible solutions for problems in academic publishing related to the duration of the manuscript processing, the professionalism in peer review and the power of researchers relative to major publishers. During the first phase, the workshop organisers presented the problems and shared some insights related to the issues. The researchers were asked to generate as many ideas as possible. By rotation, all researchers came across all three problem areas, and each of them went through two iterations of idea generation. During the second phase, the researchers were invited to collect all the ideas, discuss them, put them into various clusters, and then rank the categories based on their importance.
Survey
We created a cross-sectional quantitative online survey using Qualtrics. It contained 47 questions and took approximately 20 to 30 minutes to complete. Before distribution, a pilot survey was conducted with colleagues, and based on their feedback and comments, the survey was adjusted.
The participants were asked to rate their trust in various aspects of academic publishing, such as open access, gatekeeping, peer review, and governance. They had to indicate to what extent they agreed with statements related to general trust and to the trust concepts of competence, predictability, benevolence and integrity (see Table 1 for their definitions). In the case of access, only integrity and benevolence were used. The questions in this part of the survey were answered on a five-point Likert scale ranging from Totally disagree to Totally agree, with an additional Not Applicable (NA) option.
Participants were recruited via email and newsletters from university-specific graduate schools and open science institutions in the Netherlands. The survey was conducted online from May 9th to June 5th 2023, and 91 responses were received, of which 60 were complete and subject to further analysis.
The questionnaire text, including the questions, is made available for further use at the DANS data repository (Kalmar, 2024b).
Interviews
Interview series A: Four semi-structured interviews were conducted with survey respondents who indicated interest in participating. The interviews were conducted to understand better why some participants trust specific options for the publishing attributes. These interviews started with an introduction to the topic, the research, and the research goal. Each interview took about 30 minutes and consisted of several questions about the interviewee’s opinions on academic publishing, as well as their opinions regarding trust. There were no questions related to the specific answers they gave in the survey, but they were encouraged to give reasons why they trust certain aspects more. Interviews were conducted online using Microsoft Teams. Draft transcripts were provided directly from the app and manually cleaned and coded later. Video recordings were securely stored on the university-provided server and were deleted after transcription. The full protocol for the interview series A is made available in the DANS repository.
Interview series B: Furthermore, we performed semi-structured interviews with eight researchers from TU Delft (who did not participate in the survey research) to ask about their publishing preferences. These interviews lasted approximately an hour and were performed using Microsoft Teams. The full protocol for the interview series B is made available in the DANS repository.
Ideation session
The papers with the generated ideas were collected and photographed, and the categories were then analysed manually.
Survey
The questionnaire data were first cleaned manually; unfinished responses were excluded from the analysis. All personal information (such as IP addresses and email addresses) was then deleted from the answers. The remaining data were subject to descriptive analysis using SPSS. The Totally Agree and Agree Likert-scale answers were merged into a Positive Answer category, while the Totally Disagree and Disagree options were merged into a Negative Answer category. Then, using the Wilcoxon signed-rank test, we performed statistical analysis on the dataset to determine which options for each publishing attribute the participants trusted more.
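As an illustration of these analysis steps, the following sketch reproduces the Likert-category merging and the paired Wilcoxon comparison in Python with pandas and SciPy. The column names and ratings are hypothetical, invented for the example; the actual analysis of the survey data was performed in SPSS.

```python
# Sketch of the Likert merging and paired comparison described above.
# Column names and ratings are hypothetical; the original analysis used SPSS.
import pandas as pd
from scipy.stats import wilcoxon

# Merge the five Likert options into Positive/Neutral/Negative categories
# for the descriptive counts.
merge_map = {
    "Totally disagree": "Negative", "Disagree": "Negative",
    "Neutral": "Neutral",
    "Agree": "Positive", "Totally agree": "Positive",
}
answers = pd.Series(["Agree", "Totally agree", "Disagree", "Neutral", "Agree"])
counts = answers.map(merge_map).value_counts()

# Hypothetical paired ratings (1-5): each participant rated their trust in two
# options of the same publishing attribute, e.g. open vs. closed peer review.
ratings = pd.DataFrame({
    "open_review":   [5, 4, 4, 3, 5, 2, 4, 5, 3, 4],
    "closed_review": [3, 4, 2, 3, 4, 2, 3, 4, 3, 3],
})

# Wilcoxon signed-rank test on the paired ratings; pairs with a zero
# difference are dropped, as in the classic formulation of the test.
stat, p = wilcoxon(ratings["open_review"], ratings["closed_review"])
print(f"W = {stat}, p = {p:.3f}")
```

The Wilcoxon signed-rank test is a natural choice here because Likert responses are ordinal and the two trust ratings come from the same participant, so a paired, non-parametric comparison is required.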
The anonymised dataset collected via the survey is made available for further use at the DANS data repository (Kalmar, 2024a).
Interviews
All interviews were transcribed and subjected to thematic analysis. Provisional and open coding were used in the first round of the analysis using ATLAS.ti, and codes were grouped into code families in the second round (Wertz, 2014).
This article uses the following definition of trust: “trust is an attitude that one takes to the trustworthiness of another; in turn, the other’s trustworthiness is a property that they have” (O’Hara, 2012). Trust, from this perspective, is a relationship between trustors (the trusting persons or entities) and the trustees (the trusted persons or entities) (Nooteboom & Six, 2003). Furthermore, trust is needed when the trustors are dependent upon the trustees, the trustees’ resources or their access to resources for the realisation of the trustors’ goals (Davis & Cobb, 2010) and there is uncertainty about whether the trustees will or can fulfil the trustors’ expectations (Gambetta, 2000).
The trustors decide to trust the trustees directly or via a mediator, also called a trust broker (Nooteboom & Six, 2003). Direct trustees can be actors but also inanimate or intangible objects like data, information, knowledge (Hardwig, 1991), processes (Eshuis & Van Woerkum, 2003), code, quasi-entities and algorithmic authorities (Teng, 2021). The act of trusting can be a conscious decision or based on non-rational clues, such as previous experiences, emotions or moods. The trustors act on their beliefs and/or decisions by following through with trust-informed risk-taking behaviours (Dietz & Den Hartog, 2006). Trustors often consider the trustee’s benevolence, competence or ability, integrity and predictability when determining the trustee’s trustworthiness. Table 1 contains the working definitions of these various trust elements.
In any academic publishing system, trustors and trustees may include (but are not limited to) readers, authors, peer reviewers, editors, publishers and librarians. Researchers can take on almost all trustor and trustee roles. Trustees may also come in the form of proxy trust metrics (such as impact factors) and research outputs, such as peer-reviewed scientific articles, conference proceedings, preprints, monographs, edited volumes, books, non-peer-reviewed grey literature, magazines and trade journals, reference works, blogs and social media posts (Nicholas et al., 2015). In academic publishing, the role of a third-party trustee is, amongst others, fulfilled by publishing groups, including publishers and journals. Publishers try to establish trust through their services by date-stamping or priority via registration, quality-stamping (certification) through peer review, recording the final, definitive and authorised versions of papers, archiving them and disseminating these to targeted scholarly audiences (Hummels & Roosendaal, 2001).
Because the academic impact of researchers is still often measured by citation indexes (Ravenscroft, Liakata, Clare, & Duma, 2017; Tenopir, 2014) and by altmetrics, such as the number of tweets and blogs mentioning a published paper and the number of times it is bookmarked (Williams, 2017), authors of scientific publications depend upon readers downloading and citing their articles. When choosing a journal to publish in, authors often have their audience in mind (Tenopir, 2014). In the last couple of decades, it has become more and more important to publish papers that attract “enough” attention (Van Noorden, 2017). Therefore, researchers in some fields depend on prestigious publishers (through prestigious journals) (Larivière, Haustein, & Mongeon, 2015); however, the trustworthiness of these journals has also been questioned (Brembs, 2018). Because these prestigious journals are so sought after, it is highly uncertain whether authors can land their papers in them; rejection rates range from 80% to 95% (Khadilkar, 2018). Due to this, most researchers aim not for these journals but for those that promise better chances of being published. For some researchers, the citation count and the likelihood of getting published are more important than the trustworthiness of the journal because of the pressure to move forward in their academic career (Tenopir, 2014).
Open access can increase the audience reach of a paper (Taylor & Francis, 2019) and the number of citations (Huang et al., 2024; McKiernan et al., 2016), and could therefore be a preferred way of publishing. Nevertheless, some early studies investigating trust in researchers’ citing behaviour found that open access journals were perceived as less trustworthy than closed access journals (Watkinson et al., 2016). The reasons for distrusting open access journals are manifold, but doubts about their quality due to the spread of predatory journals are one of the major causes (Editage, 2018; Watkinson et al., 2016).
Readers are dependent on authors to gather, process and present information in a scientifically sound way. “Whenever the relevant evidence becomes too extensive or too complex for any one person to gather it all […] one can have sufficient evidence only through testimony” (Hardwig, 1991). According to the Enago Academy’s report, 500 to 600 academic papers are retracted yearly because of inconsistencies caused by honest errors, biases or deliberate scientific misconduct (EnagoAcademy, 2021). Readers often trust articles recommended by colleagues (posted on social media or blogs). Familiar journals are also considered trustworthy, and in the case of unfamiliar journals, readers award more credit to peer-reviewed journals and journals with higher impact factors. Some readers check the trustworthiness of papers published in unknown journals by critically evaluating the abstract, the methods, the data published in the article and the references (Tenopir, 2014).
Due to their role in quality control, editors are often called the gatekeepers of academic publishing, although calling them custodians may be more appropriate (Starfield & Paltridge, 2019). Readers depend on editors’ judgment to reject low-quality papers and ensure that the accepted papers comply with specific standards. In addition, readers depend on editors to correct already published work, release an expression of concern about it, or retract it (Vaught, Jordan, & Bastian, 2017) when the paper contains questionable results.
Editorial boards have a huge responsibility to judge the quality and integrity of the scientific research reported in the manuscripts. Unfortunately, editorial boards were found to have extremely low levels of diversity. Only 20% of editorial board members were women, 1% were underrepresented minorities, and more than 25% of the editors came from less than 5% of the research institutes (Liu, Rahwan, & AlShebli, 2023; Newhouse & Brandeau, 2021).
The gatekeeping activity of editors is often outsourced to or shared with peer reviewers, creating a dependent relationship between editors and reviewers (Jackson et al., 2021). Finding peer reviewers to provide feedback on particular papers is often challenging for editors, which makes them more dependent on authors: although controversial, journals frequently ask authors to suggest peer reviewers when they submit their manuscripts (EnagoAcademy, 2021). However, this also opens the door to peer review circles and citation rings, in which peer reviewers provide favourable comments to each other, and authors write their own peer reviews (Ferguson, Marcus, & Oransky, 2014). Such fake peer reviews accounted for 15% of retracted papers (Kulkarni, 2016).
Authors are dependent on the editors to publish their papers. At the same time, they cannot be confident about the fairness of the decision-making process because editors bring subjectivity to the process by making the final decision and choosing peer reviewers (Primack et al., 2019). In addition, authors may be uncertain about what exactly is happening to their manuscripts because journals/editors do not always communicate transparently during the manuscript-handling process (Taylor & Francis, 2015). Authors, in general, want high-quality feedback on their manuscripts, feedback that is specific, actionable, digestible, reasonable, respectful and consistent (Gerwing et al., 2020; Lanier, 2021; Omer & Abdularhim, 2017; Resnik & Elmore, 2016). They also want the contents of their manuscripts to be handled with discretion because “being scooped to a discovery is a scientist’s worst nightmare” (Callaway, 2019). Finally, authors may be uncertain whether peer reviewers act in their interests. In several instances, reviewers have been shown to be biased against papers from certain groups of people (female, non-native English authors or junior researchers, authors from specific geographical locations or low-prestige institutions) (Amano et al., 2023; Fox, Meyer, & Aimé, 2023; Liu et al., 2023; Nakamura et al., 2021; Taylor & Francis, 2015; Walker & Rocha da Silva, 2015). Even knowing the flaws of the system, peer review was found to be trusted by readers and authors to provide authority, quality and reliability (Watkinson et al., 2016).
In 2020, there were roughly 600 publishing platforms and journals that had implemented various versions of the open peer review system (Wolfram, Wang, Hembree, & Park, 2020). Open peer review reveals the content of the peer review and/or the identity of the referees before or after the publication of the manuscript, and/or opens up who can review or comment on the publication (Ross-Hellauer, Deppe, & Schmidt, 2017). This system aims to provide more transparency and enrich the scientific record while honouring peer reviewers, and to increase review quality and integrity by reducing reviewer bias and decreasing manuscript handling time (Tennant et al., 2017). Open peer review requires and builds on mutual trust between authors and reviewers (Schmidt, Ross-Hellauer, van Edig, & Moylan, 2018). Therefore, it is crucial for the wide application of open peer review practices that researchers trust these processes.
Figure 1 depicts the theoretical framework of trust, and the various stakeholders found relevant related to trust in academic publishing.
During the idea generation session, ten researchers from TU Delft were asked to generate solutions for three areas of academic publishing: the duration of manuscript processing, professionalism in peer review, and the power of researchers relative to major publishers. They clustered the solutions into categories, and we identified the following aspects of the publishing process as most important for TU Delft researchers: open access practices, new governance models, and different proposals for editorial control of the peer-review process and for reviewer support.
Open access
Several researchers suggested reforming the system by making all publications open access. Until that happens, they suggested that pirate sites, which illegally provide open access to articles otherwise behind a paywall, should be accepted. Some participants suggested creating a campaign to collectively publish in smaller open access journals as a protest against profit-oriented publishers, or even banning publishers that still apply the subscription model. Several people suggested promoting the publishing of preprints.
Peer review
Researchers proposed multiple directions to improve the peer review system: public review (publishing the review together with the publication, or in a separate journal edition), an online one-on-one session in which authors and reviewers discuss the written review in detail, publishing rebuttals to reviews publicly, and a rating or evaluation system for peer reviewers based on the quality of their feedback. Some researchers brought up the idea of integrating an automatic step into the peer review, for example an AI-based pre-review.
Gatekeeping
Researchers thought that publishers or platforms need to invest in hiring more people to do peer reviews, provide (sensitivity) training for peer reviewers, change the editorial boards more frequently and set up and clearly communicate concrete deadlines for the manuscript processing. They also came up with ideas to filter out inappropriate reviews.
Governance
The participants of the idea generation session suggested the wider introduction of the shared ownership structure in publishers and publishing platforms, creating more university-based journals, and having more state-owned or crowdfunded publishing platforms.
Reward and recognition
Besides improving the publishing process, we received suggestions of intervention through recognition and reward mechanisms, like reputation scores (both positive and negative) and punishment mechanisms for breaches of research integrity. Lastly, we received suggestions on diversifying the paper size and format to ease the publishing process.
Our literature study showed that trust relationships and interdependencies exist between the various stakeholders of academic publishing. Based on the categories that emerged during the idea selection stage of the ideation session, we decided to focus on the following aspects of open publishing during the rest of the research: open access, peer review, gatekeeping, and governance. To investigate how researchers at various Dutch universities trust these aspects of open publishing, we sent out a survey and conducted interviews. In the survey and interviews, we did not differentiate between the roles researchers can take in the academic publishing system. They were all readers and authors of academic publications; most were peer reviewers, and some were part of editorial teams of academic journals.
In the interviews, it became evident that trust plays an essential role in choosing a journal to publish in. Trust is based on various aspects: previous experiences with publishing and with the journal, covering manuscript processing time, review quality, acceptance within the peer community, and some measure representing the quality and trustworthiness of the journal.
For some scientists, the impact factor is the measure of quality and trustworthiness. Participant B8 mentioned that the journal’s impact factor is getting increasingly important due to the spread of predatory journals. The same interviewee also highlighted that prestigious journals with high impact factors are often unreachable, so many authors do not even target these journals with their papers. So even though it is important for them to reach a wide audience, it is often more important to be realistic and aim for journals that would publish their papers with more certainty. “I am not sure if I ever published in a journal that had an impact factor larger than 10, to be honest. So these are also journals that I believe my more experienced colleagues fear, you know, because it is difficult to get in, right? Moreover, at the end of the day, you would like to, if you work for half a year on a publication, you would like to get it published. So they have very high criteria, and even if it is a good manuscript and an interesting topic, it might still not be as good as other submitted ones, right?”
Others mentioned that the impact factor is not the most important aspect when they choose where to publish, but it can happen that the chosen journals have, at the same time, a high citation index. “I find multidisciplinary journals and open access journals more interesting than five or six years ago. So, in the past, I liked a journal that I used to read very often. So, there were two or three journals that I knew were the articles that I was usually reading for my work. Then, these were usually very specific domain journals. And in these last years, I found more often that these open access multidisciplinarity journals publish much more interesting articles and they also have a higher citation index.” (Participant B5)
Manuscript processing time was also mentioned as an essential aspect when choosing a journal. Various interviewees (Participants B5 and B7) mentioned that when the review process of manuscripts is delayed, the authors are worried that their research findings would lose their newsworthiness or that they cannot add details of the already completed research when requested by the reviewers.
Trust in open access
As open access is a widely used practice, we expected that researchers participating in our study would be aware of and have experience with publishing in such journals. To our surprise, the majority of the researchers not only knew about open access publishing but preferred open access to closed access. Two-thirds of the survey participants agreed or strongly agreed with “I believe free open access can be trusted”. At the same time, this number was 37% for the same statement about access behind a paywall. Researchers also stated that the open access process has a significantly higher level of integrity and demonstrates a significantly more benevolent attitude towards the scientific community when compared with the subscription-based model (Figure 2; Table 2 shows the results of the Wilcoxon signed-rank test).
| | trust | integrity | benevolence |
|---|---|---|---|
| Z | -3.552b | -5.027b | -5.896b |
| Asymp. Sig. (2-tailed) | <.001 | <.001 | <.001 |
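To illustrate the statistic reported above, the following is a minimal sketch of a paired Wilcoxon signed-rank test with the normal approximation that SPSS labels “Z” and “Asymp. Sig.”. The Likert ratings below are hypothetical, not the study’s data; the study used SPSS, and the approximation is rough for small samples.

```python
# Minimal sketch of the Wilcoxon signed-rank test used for the paired
# trust ratings. Data are hypothetical; the study itself used SPSS.
import math

def wilcoxon_signed_rank(x, y):
    """Return (W, z, two-sided p) via the normal approximation."""
    diffs = [a - b for a, b in zip(x, y) if a != b]  # drop zero differences
    n = len(diffs)
    abs_sorted = sorted(abs(d) for d in diffs)
    def rank(v):
        # Average rank for tied absolute differences.
        lo = abs_sorted.index(v) + 1
        hi = lo + abs_sorted.count(v) - 1
        return (lo + hi) / 2
    w_plus = sum(rank(abs(d)) for d in diffs if d > 0)
    w_minus = sum(rank(abs(d)) for d in diffs if d < 0)
    w = min(w_plus, w_minus)  # test statistic W
    mean = n * (n + 1) / 4
    sd = math.sqrt(n * (n + 1) * (2 * n + 1) / 24)
    z = (w - mean) / sd  # "Z" in SPSS output
    # Two-sided p from the standard normal CDF ("Asymp. Sig.").
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return w, z, p

# Hypothetical paired Likert ratings (1-5): open access vs. paywalled.
open_access = [5, 4, 4, 5, 3, 4, 5, 4, 3, 5, 4, 4]
paywalled   = [3, 3, 4, 2, 3, 3, 4, 2, 3, 3, 2, 4]
w, z, p = wilcoxon_signed_rank(open_access, paywalled)
print(w, round(z, 3), round(p, 4))
```

Free tools such as PSPP or SciPy’s `scipy.stats.wilcoxon` compute the same statistic, with exact p-values for small samples.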
Several interviewees mentioned their moral concerns about the publishing system using paywalls and highlighted that they choose open access over closed publishing based on their values and beliefs. They mentioned that the profit-oriented model of publishers conflicted with the interests of the scientific community. “It is still strange to me that you give an article to a journal without getting paid, and they make money from it, while science should serve society and should not be behind some paywall.” (Participant A4) Another participant said that free access to all scientific publications in a given domain is a prerequisite to scientific integrity: “In my opinion, if I do not have access to all journals concerning my field, then I cannot provide integrity.” (Participant A1)
Several interviewed researchers mentioned that the profit-oriented system blocks knowledge sharing within the universities (students do not have access to important current findings) and outside the walls of knowledge institutions, increasing the gap between science and society. One researcher mentioned that open access publishing allows sharing of the results with research participants (Participant A2). At the same time, another interviewee highlighted the importance of open access in science reaching decision-makers and having a broader impact by being implemented in policies (Participant B6).
Several interviewees mentioned that their universities have an open access publishing policy (Interviewees A2 and B4), and the universities provide financial support to pay for the APC. Another participant mentioned that the grant provider subsidising their research project directed them to publish the research results in open access journals. However, for this latter participant, the compulsory nature of open access publication did not conflict with their values. “Even if I was able to publish, like, not Open access, I still think science should be free to everybody. Absolutely.” (Participant A1)
In the interviews, when asked about open access publishing, participants often mentioned other open science practices, such as sharing open code via GitHub, using open access data for their research and archiving data in repositories, often required by the grant institutions. In other cases, they reflected on the trend they experienced in the last years, moving towards more open science. “I have been in the business, so to say, for ten years; I have noticed that journals are becoming more and more willing to provide means to share open data with the public. For instance, via repositories and hyperlinks. Again, there is a trade-off. Because, as you might understand, becoming open also poses a threat in the sense that if everything is open and not well managed, all data becomes open.” (Participant B5)
One interviewee mentioned predatory journals taking advantage of scientists’ trust in open access journals. They mentioned that they try to avoid publishing in journals that directly approach them, and that having selection criteria can help avoid falling prey to these journals (Participant A2).
Peer review
In general, researchers stated that they trusted the classical closed peer review system. They found this peer review system competent in identifying errors or inconsistencies in papers, and they scored its integrity high. It is noteworthy that the closed peer review system did not score high on predictability and benevolence (Figures 3 and 4).
One interviewee mentioned that the point of having a peer review is to have a discussion among peers to help the advancement of scientific knowledge, but the current system is selecting out those who disagree with the reviewers’ view. “I expect feedback from my peers because we want to build these ideas together. I am not a genius. And I do not think that knowledge is built by monolithic discourses or research; it is built together. To build that together, we definitely have to disagree on some points … But I think the peer review process now forgets a little bit about this, and it is more focused on how many publications I need to have per year, and people tend to look for the easy path there. They are like, avoid the person that is more critical with your approach.” (Participant B6)
As a significant critique, researchers mentioned in the interviews that they found the current closed peer review method too slow. “For example, an article I wrote two years ago is still not published. So I think that having more reviews is not always good. If you research very fast changing subjects, it can lead to frustrations.” (Participant A3) “Umm, so this is my personal frustration; it frustrates me quite a lot because I do realise I would like my results to be out there quicker than they do now.” (Participant A1)
In the survey results, we found no significant difference between how participants trusted the closed and the public peer review options, except in one aspect: predictability. Researchers trusted public peer review as much as closed review, but found the public method significantly more predictable than the closed one (Figure 3, Table 3).
| | trust | competency | predictability | integrity | benevolence |
|---|---|---|---|---|---|
| Z | -.652b | -.537b | -2.953c | -.502b | -.516c |
| Asymp. Sig. (2-tailed) | .515 | .591 | .003 | .616 | .606 |
In the interviews, participants highlighted the benefits of public review. Interviewee A1 acknowledged that they had experience only with a closed peer review system, which they trusted, but they found the idea of public peer review convincing and beneficial. “I think it is also good for the reviewers to know that at the end of the day, their names get out there, you know, for bad and for good. So I think if you know your name is going to be on something, you put more effort into it.” (Participant A1) Participant A4 could back this up; based on their experience, people are more constructive if they know that their peer review is published. Participant A2 highlighted the advantages of the public review method, especially for small, specialised scientific disciplines. They mentioned that they felt that the same people get the peer reviewer role when they want to publish, and they are asked to give peer reviews to the same people. “If the same person is always reviewing my work, they might just as well be co-authors. So I think maybe public review is better because then even the editorial team can look at who has reviewed previous work from the same author and decide; maybe we do not invite the same person over and over and over because we want different perspectives as well.” (Participant A2)
Researchers reported that they trusted participatory peer review significantly less than the public peer review system. They thought that a system that recruits peer reviewers from a broader scientific community would be less competent, predictable, and benevolent, and have less integrity (Figure 4, Table 4). Participant A4 mentioned that the chance of finding someone knowledgeable in the field among randomly assigned reviewers is probably low. Hence, the competence of these reviewers is probably lower, and therefore the integrity of such a peer review system is also lower.
| | trust | competency | predictability | integrity | benevolence |
|---|---|---|---|---|---|
| Z | -4.133b | -2.758b | -1.877b | -2.824b | -2.643b |
| Asymp. Sig. (2-tailed) | <.001 | .006 | .060 | .005 | .008 |
Gatekeeping
Survey participants thought editorial teams are more competent, predictable, and benevolent, and have more integrity, than randomly assigned gatekeepers. Of these aspects, the differences in trust, competency, and predictability were significant (Figure 5, Table 5).
| | trust | competency | predictability | integrity | benevolence |
|---|---|---|---|---|---|
| Z | -2.784b | -4.878b | -4.549b | -1.758b | -.645b |
| Asymp. Sig. (2-tailed) | .005 | <.001 | <.001 | .079 | .519 |
During the interviews, we gained some insights into why participants found the randomly assigned gatekeepers less trustworthy. Participants A1 and A2 were concerned about the random gatekeepers’ experience, background and motivation, expecting that the random choice would not consider scientific domains and disciplines. Participant A1 mentioned that they were part of the editorial team of one journal at the time of the interview, representing the editorial team’s perspective as well. “As a part of an editorial team, I would go for the first one [editorial board] because I think at least as a part of the editorial team of a journal, I do care for the journal. So, I try to act in the best interest of the journal when I approach it. From the randomly assigned people from the peer community, you might meet somebody interested, but you might as well meet somebody who is just doing it for the sake of putting another item on their CV. And they are just really not putting effort into the selection they make. So I would go for the first one.” (Participant A1)
Participant A1 also mentioned that a positive aspect of the current system is that authors can contact the editorial team and exchange ideas with them directly. Participant A2, although concerned about the experience of the randomly assigned gatekeepers, admitted that the randomly assigned gatekeeper system might be better because it can overcome favouritism if the gatekeepers come from discipline-specific communities. Participant A4 mentioned that this is not an easy question. They mentioned that editorial teams probably have more specific knowledge to make a solid decision based on their expertise. On the other hand, they highlighted that having the same editorial team for too long is not good. “If you let an editorial team sit for too long, you are more likely to get that kind of favouritism, and they will mainly pass on articles from people they know.” (Participant A4)
Governance
Researchers trusted publishing companies with shared ownership policies more than publishers owned by private companies and investors. Researchers stated that publishers with a shared governance model, which includes the scientific community in decision-making, are more predictable, benevolent, and competent, and have more integrity. The differences between these two models were significant in all trust aspects (Figure 6, Table 6).
| | trust | competency | predictability | integrity | benevolence |
|---|---|---|---|---|---|
| Z | -5.954b | -2.486b | -2.081b | -6.159b | -5.891b |
| Asymp. Sig. (2-tailed) | <.001 | .013 | .037 | <.001 | <.001 |
During the interviews, one researcher highlighted the conflict between the concept of science as a community-driven endeavour and a single company-owned publisher, which defines which articles should be published. “I do not think it is appropriate for a scientific journal to belong to an individual or several individuals, mostly because I do not think science is a concept that goes with a company or private ownership.” (Participant A2) Another interviewee was more concerned about the profit-oriented nature of privately owned publishers: “For a company-owned structure, the more articles you accept, the more money you raise. And I think that is a dangerous trend.” (Participant A4) The same person mentioned that universities should play a much more critical role in publishing than they currently do.
One interviewee shared a potentially problematic issue related to shared ownership: reaching an agreement between the various perspectives and voices of the scientific community. “I think it would be complicated to combine all of these agendas and backgrounds and go towards a specific goal.” (Participant A4).
Journals and publishing platforms implementing various open science principles are considered alternatives to classical academic publishing. We were interested in figuring out to what extent Dutch researchers trust these new initiatives, especially given their concerns about predatory journals. We investigated various aspects of open science, namely open access, open peer review, random gatekeeping, and shared governance.
Based on the literature review and our results, trustworthiness is the most important factor in choosing journals in which to read relevant scientific discoveries; it is also important in choosing journals in which to publish, although authors may have other priorities, too. Researchers at the beginning of their careers indicated more often that they rely on the impact factor of journals to gauge their trustworthiness, while experienced researchers based their decisions on where to publish on their familiarity with the journal, the acceptance of the journal within the peer community, and their experience with the manuscript handling process and peer review services.
Most researchers we interviewed mentioned that the quality of peer review of a given journal is crucial in deciding where to publish. This is in line with previous studies highlighting that authors might be willing to wait longer for better-quality feedback (Nickerson, 2005). This, however, contradicts the authors’ wish for a faster manuscript turnover time. Authors are afraid that their findings will lose their urgency, that their results will become outdated, or that they will lose track of the already finished and written study. Moreover, the reward system within academia pushes researchers to publish a certain number of papers within a set time frame (Bilalli, Munir, & Abelló, 2021). Some interviewees found it difficult to balance the certainty of getting published in the short term against the journal’s reach. The trustworthiness of the journals could take a back seat in this decision.
Based on previous research results, we expected to see a difference between early career researchers and tenured scientists because the institutionalisation process was suggested to fade the discontent with the current mainstream publication system (van Dijk & van Zelst, 2020). Nevertheless, we did not see any significant difference between the trust levels of researchers with various years of work experience.
Some aspects of open publishing, such as open access, were well-known and widely accepted amongst the participants, while others, like participatory peer review or random gatekeeping, were less known. In these cases, many researchers formulated concerns about the competence of the randomly assigned gatekeeper and of reviewers coming from the wider community. Some of the researchers admitted that they had no experience with, e.g., open peer review, although they found this type of publishing benevolent and appreciated its added value. Some participants distrusted various aspects of open science because they found it hard to discriminate between serious open science publishers and predatory journals. This is a valid problem; multiple publications that draw attention to predatory journals do not discriminate between predatory and legitimate open access journals (Beall, 2013; Krawczyk & Kulczycki, 2021; Maurer et al., 2021; Torres, 2022).
In contrast to previous studies (Editage, 2018; Watkinson et al., 2016), researchers participating in this study trusted open access journals more than the ones using paywalls. When we asked why, they mainly claimed that profit-oriented publishers no longer represent the scientific community’s interests. Researchers listed various reasons why open access publishing is important. The most frequently named one was that this way, students and non-scientific audiences could access these publications. Previous research showed that this is indeed the most commonly used argument for publishing in open access journals: increased audience size and increased impact of research findings (Taylor & Francis, 2019). Other arguments for open access mentioned in the interviews, such as the policies of the universities and funding organisations for publishing open access, also overlap with previous findings (Taylor & Francis, 2019). Other previous results were not confirmed or discussed by our study participants. Participants did not discuss receiving a higher number of citations by publishing in open access journals (McKiernan et al., 2016) or retaining copyright via open access journals (compared to closed journals that demand the transferal of the rights over the published work) (Watt & Sever, 2004).
Although our participants trusted open access journals, several interviewees drew attention to the fast review and editing processes some of these journals implement, which they did not appreciate. Doubts about quality are among the most often mentioned counterarguments against open access publishing found in the literature as well, next to the inability or unwillingness to pay the APC, the claim that open access journals are not prestigious, and finding the embargo policy of more prominent journals an acceptable solution (Editage, 2018; Köster et al., 2021).
One participant mentioned the trend of open publishing in a broader sense, for example, publishing code and data, and also highlighted the importance of finding a balance between privacy and openness.
The participants of this study trusted the closed peer review system; they thought it was a competent system for quality control. It also scored high on integrity, while its benevolence and predictability were rated lower. Based on the questionnaire results, researchers trusted open-review journals as much as journals that used closed peer review. This contradicts previous research results. A study from 2008 showed that 80% of authors preferred double-blind and 52% single-blind closed peer review, compared to 27% of authors who would have chosen open peer review (Ware, 2008), although the acceptance of open review might have changed since then. Still, a small study from 2020 showed that researchers are hesitant to publish in journals with open peer review because, among other reasons, they were afraid that reviewers would self-censor their reviews (Besançon, Rönnberg, Löwgren, Tennant, & Cooper, 2020). The survey results suggested that researchers find open-review journals more predictable, while in the interviews other aspects were also mentioned, such as that open peer review can lead to more constructive feedback and advance transparency. These aspects were also mentioned in previous studies (Ross-Hellauer, Deppe, & Schmidt, 2017).
Compared to the open and closed peer review options, participatory peer review was not found to be trustworthy. Participants thought that having peer reviews from the extended peer community would be less competent, predictable, benevolent, and have less integrity. Similarly, randomly assigned gatekeepers were also found to be less competent, predictable, benevolent, and have less integrity than editorial boards. Researchers were concerned about the experience, background and especially the motivations of these randomly selected gatekeepers.
In contrast with participatory peer review and randomly assigned gatekeepers, publishers with shared ownership were found to be more trustworthy than journals or publishing platforms owned by private companies. The relationship between closed publishing and company-owned publishers was highlighted and highly distrusted. The reasons behind the distrust were mostly moral, claiming that the concept of science does not fit private ownership and the profit orientation.
In this study, we investigated a limited set of publishing practices and excluded collaborative writing, open infrastructures, and preprints due to limited time and resources. Further studies are suggested to investigate these other aspects of Open Science.
Furthermore, we only included researchers working at Dutch universities, which might limit the generalisability of the results.
The academic publication system is changing. Various scientists have formulated issues with classical publication methods, such as the generated scientific knowledge not reaching a wide enough audience, the inherent biases generated by governance and gatekeeping, or the publish-or-perish phenomenon, just to name a few. Open Science aims to provide various solutions for some of these problems.
In the academic publishing system, multiple actors are in (inter-)dependent relationships with each other. In almost all situations, there is a high level of uncertainty about whether the actors can reach their goals via the other interaction partners. Therefore, we can assume that trust relationships are present in the context of academic publishing.
We have created a theoretical framework to investigate trust in the context of academic publishing. Using competence, benevolence, integrity, and predictability to make the construct of trust more tangible, we could find nuances in which aspects of current and new publishing solutions are important for researchers and which ones are more trusted.
Based on a questionnaire and twelve interviews, we saw that Dutch researchers trust and accept the new publishing concepts of open access, open peer review, and shared ownership of publishers. Participatory or extended peer review and randomly assigned gatekeepers instead of editorial boards were found to be less competent and, therefore, less trustworthy. These results indicate changes in the social norms within the Dutch scientific community, especially related to the profit-oriented nature of scientific publishing and the importance of open access to scientific results.
Based on our results, we can draw a picture of an ideal publisher for the current Dutch researchers who would like to publish their scientific results. If one would create a new publishing platform based on the needs, social norms and values of the Dutch researchers, this publisher
• should have a non-profit ownership model, controlled by a community of experts and/or governed by universities
• should publish open access papers, preprints and peer reviews, with considerations on the publication size and format
• should have some degree of gatekeeping, with a strong peer review process, ensuring quality control mechanisms of the reviews by potentially introducing meta-reviews and automation for speeding up the process
• should keep a reputation-like score that rewards quality peer reviews and discourages the misuse of the system.
The results of our interviews also indicate that the successful implementation of Open Science practices requires a fundamental system change in the research life cycle because academic publishing is still an important part of assessing scientists’ excellence. We strongly believe that changes in the academic ecosystem are much needed. We also strongly believe that these changes need to be co-designed with all stakeholders to realise the transition from the current practices to Open Science. We need to take scientists, but also publishers, librarians, decision-makers, societal stakeholders, and patent and grant officers along the way to create the solutions together. Otherwise, situations could occur in which Open Science-based solutions are implemented while various stakeholders do not trust some aspects of these.
DANS: Trust in Open Publishing Practices – ideation session. https://doi.org/10.17026/SS/XX1LUW (Kalmar, 2024g).
This project contains the ideation session.pdf file, which describes the ideation session, the results collected during it and the analysis of the results. It is licensed under the CC_BY-NC-ND-4.0. The ideation session document does not contain any personal or sensitive information, but due to the agreement signed in the informed consent forms, which were reviewed and accepted by the TUD HREC committee, stating that the anonymised results can be shared by the responsible researcher upon request, access to the document is restricted. If you wish to access the transcripts, please get in touch with the corresponding author of this article via email: e.kalmar-1@tudelft.nl. Access will only be granted for research purposes, for research projects that are relevant to this topic.
DANS: Trust in Open Publishing Practices - interview series B. https://doi.org/10.17026/SS/BUV2QO. (Kalmar, 2024c).
This project contains the interview transcript files (interview 1B.pdf, interview 2b.pdf, interview 3b.pdf, interview 4b.pdf, interview 5B.pdf, interview 6b.pdf, interview 7b.pdf and interview 8B.pdf ) under the license CC_BY-NC-ND-4.0. The interviews do not contain any personal or sensitive information, but due to the agreement signed in the informed consent forms, which were reviewed and accepted by the TUD HREC committee, stating that the anonymised transcripts can be shared by the responsible researcher upon request, access to the transcripts is restricted. If you wish to access the transcripts, please get in touch with the corresponding author of this article, via email: e.kalmar-1@tudelft.nl. Access will only be granted for research purposes, for research projects that are relevant to this topic.
DANS: Trust in Open Publishing Practices - interview series A. https://doi.org/10.17026/SS/P1XSH0 (Kalmar, 2024d).
This project contains the interview transcript files (interview 1 transcript.pdf, interview 2 transcript.pdf, interview 3 transcript.pdf and interview 4 transcript.pdf) under the license CC_BY-NC-ND-4.0. The interviews do not contain any personal information, but due to the agreement signed in the informed consent form, access to the transcripts is restricted. If you wish to access the transcripts, please get in touch with the corresponding author of this article via email: e.kalmar-1@tudelft.nl. Access will only be granted for research purposes, for research projects that are relevant to this topic.
DANS: Trust in Open Publishing Practices. https://doi.org/10.17026/SS/SOAFPP (Kalmar, 2024a).
The project contains the following underlying data:
DANS: Trust in Open Publishing Practices - questionnaire. https://doi.org/10.17026/SS/V51YDC (Kalmar, 2024b).
The project contains the following extended data:
DANS: Trust in Open Publishing Practices - interview protocol A. https://doi.org/10.17026/SS/TCB3ZD (Kalmar, 2024e).
The project contains the following extended data:
DANS: Trust in Open Publishing Practices - interview protocol B. https://doi.org/10.17026/SS/YAIGE5 (Kalmar, 2024f).
The project contains the following extended data:
Data are available under the terms of the Creative Commons Attribution 4.0 International license (CC-BY 4.0).
The guidelines from Standards for Reporting Qualitative Research (O’Brien, 2014) were applied in this report.
There are multiple free survey platforms, but there are few GDPR-compliant free alternatives to Qualtrics. Zoho Survey may be a trustworthy option, although its free version offers limited functionality. A free alternative to SPSS is PSPP.
Is the work clearly and accurately presented and does it cite the current literature? Partly

Is the study design appropriate and is the work technically sound? No

Are sufficient details of methods and analysis provided to allow replication by others? Partly

If applicable, is the statistical analysis and its interpretation appropriate? Partly

Are all the source data underlying the results available to ensure full reproducibility? Yes

Are the conclusions drawn adequately supported by the results? Partly
Competing Interests: No competing interests were disclosed.
Reviewer Expertise: Special Education
Is the work clearly and accurately presented and does it cite the current literature? Partly

Is the study design appropriate and is the work technically sound? Partly

Are sufficient details of methods and analysis provided to allow replication by others? Partly

If applicable, is the statistical analysis and its interpretation appropriate? Partly

Are all the source data underlying the results available to ensure full reproducibility? Partly

Are the conclusions drawn adequately supported by the results? Partly
Competing Interests: No competing interests were disclosed.
Reviewer Expertise: Scholar of research integrity and open science. Expert in epidemiological research methods, also versed in survey methods, qualitative research and biostatistics.
Alongside their report, reviewers assign a status to the article:
| | Invited Reviewer 1 | Invited Reviewer 2 |
| --- | --- | --- |
| Version 1 (30 Jul 24) | read | read |