The disconnect between researcher ambitions and reality in achieving impact in the Earth & Environmental Sciences – author survey

Background: There is an increasing desire for research to provide solutions to the grand challenges facing our global society, such as those expressed in the UN SDGs ("real-world impact"). Herein, we undertook an author survey to understand how this desire influenced the choice of research topic, choice of journal, and preferred type of impact. Methods: We conducted a survey of authors who had published in >100 of our Earth & Environmental Science journals. The survey was sent to just under 60,000 authors and we received 2,695 responses (4% response rate). Results: Respondents indicated that the majority of their research (74%) is currently concerned with addressing urgent global needs, whilst 90% of respondents indicated that their work either currently contributed to meeting real-world problems or that it would be a priority for them in the future; however, the impetus for this research focus seems to be altruistic researcher desire, rather than incentives or support from publishers, funders, or their institutions. Indeed, when contextualised within existing reward and incentive structures, respondents indicated that citations or downloads were more important to them than contributing to tackling real-world problems. Conclusions: At present, the laudable and necessary ambition of researchers in the Earth & Environmental Sciences to contribute to the tackling of real-world problems, such as those included in the UN SDGs, seems to be being lost amidst the realities of being a researcher, owing to the prioritisation of other forms of impact, such as citations and downloads.


Introduction
Although one of the fundamental tenets of the research endeavour is exploration and curiosity, calls for nations to increase R&D spending have been allied with an expectation for research to have "impact"1. Impact has many forms: some are easily quantifiable (for example, every dollar invested in the Human Genome Project returned $141 to the US economy2), whereas others are harder to quantify and often tie into complex interdisciplinary issues requiring long-term commitment and investment, with the UN's Sustainable Development Goals (SDGs) and the missions of Horizon Europe serving as prime examples3,4. This drive for research to solve 'the Grand Challenges of our time'5 has acquired increased urgency during the Covid-19 pandemic, but has long been discussed in the context of global problems, including climate change, food security, and an ageing global population. Where once it was claimed that we no longer needed experts6, trust in research and researchers to provide solutions to real-world problems is currently high, with seven in ten people commenting before the Covid-19 pandemic that science benefits them7, and trust in advice from 'qualified scientists and researchers' growing even faster in the face of the pandemic8,9.
How do these aspirations manifest at the level of those carrying out research? Primarily, it is through research-assessment mechanisms linked to institutional or grant funding, with citations being the primary currency of success or progress. The UK's Research Excellence Framework (REF)10 exercise has previously been linked to the Pathways to Impact initiative11, with contributors to the REF expected to provide impact case studies alongside their submissions that showcase their social or economic impact. It can be challenging to see how answering one research question can create a chain reaction that resonates at a much higher level, still less how interrelated research questions contribute to solutions to these complex and interdisciplinary issues.
The role that academic publishing plays in the advancement of research is often understood to comprise validation (through the peer-review process), publication (participation in the scholarly record), curation (preservation of the work to ensure its availability in perpetuity), and dissemination (to relevant communities). However, it is becoming increasingly apparent that the value of a research journal is much broader than this, additionally fostering collaboration, network-building (both within core and adjacent fields), and career development12.
Therefore, it is essential that the mechanisms and drivers that collectively influence where an author chooses to publish support their ability to publish in the journals that are most relevant to their work; that is, where their research is most likely to be found, read, cited, and iterated upon by those working in the same and adjacent disciplines, as well as by those working in policy-making, lobbying, or advisory capacities. However, such drivers and pressures, both personal and external, are varied and nuanced, as are our authors' expectations for what impact their work might have once it has been published.
It is in this context that we undertook the 2020 Impact Assessment of Earth & Environmental Sciences Research: Author Survey. The survey was designed to achieve three main aims. To understand:
• what drives our communities to choose the topics that they research;
• what drives our communities to choose the journals that they publish in; and
• what type(s) of impact they are most looking for from their work.
We investigated what benefits publishing in our journals could confer on both the research and the authors following publication, and we looked at the extent to which global challenges, such as those expressed by the UN SDGs, were shaping researcher ambitions. This report describes the results that we obtained and considers how the frameworks that are currently in place in academic publishing, such as those around securing research funding, researcher assessment, and career progression, are shaping the decisions of our researchers, either in support of, or in opposition to, these ambitions.

Methods
In Spring 2020, Taylor & Francis surveyed authors from across our Earth & Environmental Sciences portfolio. The survey (see Extended data13), hosted on Alchemer (formerly SurveyGizmo), was emailed to authors using Salesforce Marketing Cloud. It was sent to just under 60,000 authors and received 2,695 responses (4% response rate).
The survey comprised 23 questions:
• Section A (Q1–2): multiple-choice questions to clarify the article that the survey responses related to;
• Section B (Q3–4): multiple-choice questions, with the option of prose responses, relating to the choice of journal;
• Section C (Q5–10): multiple-choice questions, with the option of prose responses, relating to the downstream value of publishing the article for both the work and the author;
• Section D (Q11–20): largely multiple-choice questions, with the option of prose responses, relating to the impact of the work, the motivation for undertaking the work, and the ability of the work to tackle real-world problems and influence policy change (questions 13, 15, 18, and 20 were solely prose responses); and
• Section E (Q21–23): demographic questions.
A confidentiality and privacy statement was provided on the first page of the survey, which outlined how the data would be used. Consent to participate in the survey was implied by the authors who clicked through to complete the questionnaire after reading this statement and the instructions given in the invitation email. The data are fully anonymised and no sensitive personal data regarding the respondents were collected. To protect the anonymity of the respondents, all prose responses to the free-text questions (questions 13, 15, 18, and 20) have been omitted from the shared dataset. Written informed consent was not sought due to the low-risk nature of the research.
The survey responses include authors from 102 journals in the Earth & Environmental Sciences portfolio, and the geographical distribution of responses was similar to that of authors in the portfolio. Therefore, we can be reasonably confident that our responses are representative of Taylor & Francis authors in our Earth & Environmental Sciences journals.

Data analysis
Confidence intervals have been calculated for certain parts of our analysis where we are comparing groups of different sizes within the survey, and we are only reporting on differences that are statistically significant. Microsoft Excel was used to prepare the tables and charts. Confidence intervals were calculated using the Creative Research Systems sample size calculator14.
A note about error bars and statistical significance. The country-comparison charts presented in this report include error bars, which plot the confidence intervals for the percentages shown. When making comparisons, error bars are useful as a visual means of demonstrating the range that likely contains the true overall value for each country in the chart. If the error bars for two or more countries overlap, we should be cautious about making substantive conclusions about any differences, because they may not be statistically significant. Therefore, only clearly statistically significant differences are included in the comparisons presented herein.
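The interval and overlap logic described in this note can be sketched in a few lines of Python. This is a minimal illustration only: it assumes the standard normal approximation for a proportion at 95% confidence, and the subgroup sizes in the comparison are hypothetical, as the per-group counts are not given in this report (the confidence intervals themselves were produced with the Creative Research Systems calculator, so exact figures may differ slightly).

```python
import math

def proportion_ci(p: float, n: int, z: float = 1.96) -> tuple[float, float]:
    """Confidence interval for a survey proportion p observed in n responses,
    using the normal approximation: margin = z * sqrt(p * (1 - p) / n)."""
    margin = z * math.sqrt(p * (1 - p) / n)
    return (p - margin, p + margin)

def overlaps(a: tuple[float, float], b: tuple[float, float]) -> bool:
    """True if two intervals overlap, in which case the difference between
    the two groups may not be statistically significant."""
    return a[0] <= b[1] and b[0] <= a[1]

# A result of 74% from all 2,695 respondents carries a margin of error of
# roughly +/- 1.7 percentage points at 95% confidence.
ci_all = proportion_ci(0.74, 2695)

# Comparing two subgroups of different (hypothetical) sizes, as in the
# country and age-group charts: the difference is treated as meaningful
# only when the intervals are clearly separated.
ci_a = proportion_ci(0.76, 1800)  # e.g. a larger subgroup
ci_b = proportion_ci(0.70, 800)   # e.g. a smaller subgroup
significant = not overlaps(ci_a, ci_b)
```

With these example figures, the overall 74% result carries a margin of error of about ±1.7 percentage points, and the two hypothetical subgroup intervals are clearly separated.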

Results
SDG-relevance of Earth and environment research – the current picture
Why do researchers undertake the research that they do? It is a fundamental question and the answer is multifaceted, varying by career stage, geography, and subject discipline. However, the publication of the SDGs by the UN in September 2015, with the stated aim of providing "a shared blueprint for peace and prosperity for people and the planet, now and into the future"3, allows us an opportunity to frame the question in such a way that gets to the core of what researchers hope to achieve through their work, that is: "do researchers study topics that contribute, either directly or indirectly, to the tackling of real-world problems?".
The contribution of research. Given that the SDGs comprise such urgent needs as Clean Water and Sanitation (SDG 6) and Climate Action (SDG 13), and tackle threats to Life on Land (SDG 15) and Life below Water (SDG 14), one might readily anticipate that a high proportion of research in the Earth and Environmental Sciences would have a part to play in meeting the needs that they express. Indeed, 74% of respondents indicated that their research contributed (directly or indirectly) to the tackling of real-world problems, such as those expressed by the UN SDGs (Figure 1). Furthermore, overall, 90% of respondents indicated that their work either currently contributed to meeting real-world problems or that it would be a priority for them in the future. Therefore, we might infer that, at least in the Earth & Environmental Sciences, it is a strong research imperative for our authors that their work contributes to the tackling of real-world problems.
Such a high percentage aligns with the voices of our journal editors, who, in contributing to our recent publication "Sustainable Development Goals in the Earth and Environmental Sciences"15, expounded the variety, breadth, and richness of research that their journals and subject areas have to offer in tackling the challenges laid out in the SDGs.
In our survey, whilst younger researchers were slightly more likely to undertake this type of research (76% of respondents aged under 50 answered "Yes" compared with 70% of respondents aged 50 or older, with non-overlapping confidence intervals), the difference was not very pronounced, thus suggesting that this is a multi-generational aspiration, rather than one solely driven by early-career researchers.

Why is addressing real-world challenges a research priority for our authors?
To understand a bit more about the motivating factors that sit behind the decision of our researchers to investigate topics that have application to real-world problems, we asked "Why have you chosen to undertake research that contributes to these topics?" (Figure 2). The responses to this question presented a clear split between internal drivers, namely personal interest (62%) and the desire to contribute to addressing real-world problems (78%), and external drivers, such as encouragement from a university, other collaborators, or improved opportunities to secure research funding, with internal drivers and aspirations being the greater motivators.
We find it surprising that funders (15%) and institutions (16%) were only narrowly more influential than coincidence (14%) in prompting research that is skewed towards meeting these global challenges.

Ambitions vs reality
We saw the greatest gap between aspiration and reality when we asked what types of impact were most important to respondents, with a maximum of three selections. The most-preferred type of impact was citations from within the same field (69%), ahead of contribution to the advancement of research (53%), contribution to tackling 'real-world' problems, such as those expressed by the UN SDGs (21%), and input into policy decision-making (19%; Figure 3).
Interestingly, having seen the strong desire of authors to undertake research with real-world application in the earlier questions, when compared with other types of impact, contributing to the tackling of real-world problems dropped to fifth in the list (21%), behind citations from within the same field (69%) and from adjacent/other fields (25%), and achieving a large readership (34%).
We note that some respondents may have felt that citations were a necessary step towards contributing to the advancement of research or tackling real-world problems, through knowledge sharing and discussion, as the reasons for these selections were not probed further. However, as the question asked what the most important types of impact were to the researcher ("to you"), we think this is unlikely to be a significant line of thought.
Input into policy decision-making (19%), "where the rubber meets the road" for much of the national-scale change that is required to meet the needs captured by the UN SDGs, placed further down the list, on par with forming new collaborations (19%). Only attention from the press (3%) and attention on social media (3%) ranked lower.
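Because respondents could select up to three types of impact, the percentages reported in this section are per-respondent shares of a multi-select question and therefore sum to more than 100%. A minimal sketch of how such responses are tallied, using made-up answers rather than the actual survey data:

```python
from collections import Counter

# Hypothetical multi-select answers: each respondent picks up to three
# impact types (invented examples, not the survey dataset).
responses = [
    {"citations (same field)", "advancement of research", "readership"},
    {"citations (same field)", "real-world problems"},
    {"citations (same field)", "policy input", "readership"},
    {"advancement of research"},
]

# Count how many respondents selected each option, then express each count
# as a percentage of respondents (not of total selections).
counts = Counter(choice for r in responses for choice in r)
percentages = {k: 100 * v / len(responses) for k, v in counts.items()}
```

Because each respondent contributes to several options, the percentages can legitimately total well over 100%, which is why the figures quoted above do not sum to 100.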
The role of the journal
Feedback from respondents has indicated a tension between two key points: the aspiration of researchers to contribute to the tackling of real-world problems with their work, and a focus on citations as the key measure of impact. What role might the choice of a journal have to play in serving either of these aims? To investigate this further, we asked all of the respondents why they submitted their paper to the journal their work was published in, and also whether this journal was their first choice (Figure 4).
The predominant factor in determining the selection of a journal was its relevance to the author's research, with 64% of respondents indicating that this was one of the most-influential factors underpinning their choice of journal. Interestingly, reaching a broader non-academic audience (which we might link to the desire to contribute to resolving real-world challenges) came quite far down the list, with only 6% of respondents noting that this was an influential factor in their choice of journal.
On asking our authors whether the journal that they published in was their first choice, 75% indicated that it was, whilst 19% indicated that it was their second choice and 6% indicated that they had submitted their paper to more than one other journal before publishing it.
In this context, it is perhaps unsurprising to see that a journal having an Impact Factor was important to 42% of respondents. Interestingly, however, only 8% indicated that they chose the journal because it had the highest available Impact Factor, thus indicating that the presence of an Impact Factor was more important than the score itself. Many institutions, policy-makers, and funders are keen to reduce emphasis on the Impact Factor as part of research assessment practices16, so there is perhaps a misalignment between the priorities of researchers and those of their institutions and funders.
What wider value can research journals offer to our academic communities?
To investigate if (and if so, how effectively) publishers support the essential profile-raising and network-building needs of our research communities, we asked survey respondents about the opportunities for network-building and recognition that publishing in the journal had afforded them since they published their work.
When asked if the publication of their article led to the formation of new connections with a range of different groups, almost half of respondents (46%) indicated that the publication of their article led to their forming new connections with researchers/research groups in their own country, whilst 35% formed new connections with researchers/research groups from other countries (Figure 5). From this feedback, we infer that publications and journals have a vital and valuable role in facilitating global knowledge sharing within subject communities.
Survey feedback highlighted the positive influence that journals and publishers can have in forming new academic collaborations. As shown in Figure 6, one third of respondents (33%) indicated that, since the publication of their article, they had been approached regarding a potential research collaboration, with two thirds of these respondents noting that publication of their work influenced or greatly influenced this approach (Figure 7). In this regard, publishers and journals play a vital matchmaking role in linking together researchers for further collaboration. It is important to note, however, that this "matchmaking" effect does not appear to be as profound in non-academic contexts, with only 7% of respondents noting that they were approached about non-academic collaborations after the publication of their work (however, 48% of these respondents did note that their work influenced or greatly influenced this approach). Furthermore, only 10% of respondents formed new connections with practitioners, consultancies, or other non-academic communities as a result of publishing their work; only 8% formed new connections with governmental/intergovernmental policy-making bodies; and fewer still formed new connections with industry (7%) or funding bodies (5%).
Comparing priorities across different geographies
Based on survey feedback, authors based in different regions appear to place different emphases on the criteria that shape their choice of journal, and on their views around impact.
United States. Fewer respondents based in the United States indicated that having an Impact Factor was an important criterion in determining their choice of journal compared to the overall average (23% vs 42%). This same lower emphasis on the Impact Factor is seen in our post-publication author survey, which is sent to authors in all subject areas and all geographies, with respondents from the US rating this as a less-important factor in determining their choice of journal compared with the global average (Figure 8)17.
Instead, US-based authors placed more value on real-world types of impact than the global average, with a higher proportion indicating that contribution to tackling big real-world problems, such as those expressed by the UN SDGs, was one of the most-important types of impact to them (29%), and a much-higher proportion indicating that having an input into policy decision-making was important to them (34% vs 19% overall; Figure 9).
Recommendation from a colleague (39% vs 26% overall) and the journal's capacity to reach a broader non-academic audience (13% vs 7% overall) were also deemed to be much more important factors as a means of identifying a suitable journal for US-based respondents, compared to the global average (Figure 10).

China.
Responses from researchers based in China largely reflected the overall results, both in the most-important types of impact to them and the most-important factors that influence their choice of journal. As in other territories, respondents from China indicated that receiving citations from within the same field was one of the most-important types of impact for them (72%), followed by contribution to the advancement of research (49%) and readership/downloads (33%).
However, one noticeable distinction was the relative unimportance of having a real-world impact in terms of contribution to tackling real-world problems (10% vs 21% overall; Figure 11) and input into policy decision making (8% vs 19% overall), perhaps because China-based respondents were less likely to have collaborations with groups who were involved in SDG-related activities (8% vs 16% overall; Figure 12).
Conversely, for China-based researchers, the relevance of a journal to their work was a much-more-important consideration (83%) compared to the global average (65%) and compared to authors based in the US (59%) and Europe (49%), whilst whether the journal had an Impact Factor was as important to Chinese respondents (44%) as the overall average (42%; Figure 13).

UK and Europe.
Respondents from the UK and Europe closely followed the global averages for both the types of impact that were most important and the most-important factors in determining the choice of journal.
Respondents from the UK and Europe indicated that receiving citations from within the same field was the most-important type of impact to them (73%), followed by contribution to the advancement of research (50%) and readership/downloads (33%). Where respondents based in the UK and Europe differed was in the prospect of forming new collaborations (24%), which they considered to be a more-important type of impact than did respondents from India (16%) and China (14%; Figure 14). Interestingly, and perhaps related to the premium placed on network-building outside of their own subject communities, only 49% of respondents from the UK and Europe said that their work was published in the most-relevant journal, much lower than all other territories (65% average; Figure 15).

India.
Respondents from India again closely followed the global averages in terms of the types of impact that were most important and the most-important factors in determining the choice of journal. However, there were two distinct points of divergence.
Compared to the global average, respondents from India placed significantly greater importance on the relevance of the journal for their work (87% vs 65% overall), comparable to respondents from China (83%) and significantly higher than respondents from the US (59%) and the UK and Europe (49%). Similarly, respondents from India placed much greater importance on the journal's capability to reach their community (30%) compared to respondents from China (12%), the US (15%), and UK and Europe (18%), as well as to the overall average (19%; Figure 16).
Perhaps most importantly, respondents from India indicated that a journal's capacity to raise their profile was much more important to them (31%) than respondents from the other territories that we considered (UK and Europe 16%, China 14%, US 12%; Figure 17).
The role of the publisher in driving real-world impact
Academic publishing is multifaceted, with a range of different stakeholders located all around the globe, across both the private and public sectors. We asked participants the following question: how could journals or publishers help research to influence the response to real-world problems? Answers were provided as free text and clustered around four main improvements to mechanisms: access, accessibility, communication of outcomes, and timeliness.

1. Improve access to the latest research, in particular to non-academic/policy-maker audiences, as well as to the underlying code/data.

"Engage closer with non-governmental organisations (environmental and social) – provide greater access to these organisations that are fundamental to achieving the SDGs but do not have the financial resources to enjoy access/membership of the Journals."
"Provide access to interesting real-world data sets"
"Publish code and data along with papers; special issues focused on practical applications"

In order to engage a non-academic audience, our respondents' views are clear: policy-makers, industry, and the wider public must have access to the original research, both the underlying data and the conclusions. In this regard, greater support for open access publication models18 across all key stakeholders is an important step to take to allow non-academic readers to engage with the latest research.

2. Improve accessibility of research by changing the language, style, and format of publications to serve a non-academic audience.

"Prepare readers' digest versions of relevant articles, in multiple languages."
"Provide a policy-type document for research papers that tackle real-world problems. Original research paper may be difficult to read by policy-makers."
"Publish an e-digest of abstracts indexed by problem area. Send it to NGOs and managers in government agencies so they can quickly find articles that are relevant for their issues."
"Increased use of executive summaries from research papers that are accessible to a broader audience than academia"
"provide support producing infographics and sharing research to non-academic audiences"

To help realise the potential reach, impact, and policy application of research, respondents noted that research outcomes should be presented in a format, style, and language that is accessible and comprehensible to a non-academic audience. Whilst the research article well serves the research community, its structure, tone, and length may create barriers for non-academic readers, who are often looking for evidence pertaining to their particular point of need and may be put off from drawing out points of relevance from a full research paper.

3. Improve communication links to raise the visibility of research implications on policy and real-world issues.

"Making more publicity to the 'non-scientific world' of the issues that are published in the journals"
"be present at policy events"
"Special editions and workshops (can be via Zoom) to bring people together."
"Share published papers on social media and create TV shows where scientists engage on current issues."
"Connections with academic media outlets, like the Conversation etc."
"They should announce research grants related to real world problems"

Authors and publishers need to maximise the opportunity to bring the latest research into the public consciousness, with the aim of cultivating a culture that drives policy change. Respondents noted that non-academic summaries, workshops, and discussion forums could directly engage with policy-makers right at the point of need. However, as noted by one respondent, it is also important for publishers to "be present" where appropriate at policy events and to advocate for the value of the research that they publish on behalf of their authors.
4. Better support the publication of research on areas of particular relevance to live policy issues.
"Seek out authors who are also practitioners."
"By opening spaces for discussion among different actors (policy-makers, civil society and academia) and societal sector."
"Be willing to publish applied work, not just academic studies."
"encourage and publish more transdisciplinary research"
"By staying focused on their journals' scope which should be specific to these real-world problems"
"By planning special issues which focus on research that are in response to real-world problems. When doing so, ensuring that enough time is given for research in this area to be specifically conducted, and not expecting that data is already available to be tailored into a paper that addresses these issues."
"By considering articles that address real world problems, even if they are not considered 'high impact' or 'potentially citable'."

These comments collectively strike right at the heart of the purpose of research journals.

Discussion
The impact gap
Overall, 90% of respondents indicated that their work either currently contributed (directly or indirectly) to meeting real-world problems or that it would be a focus for them in the future. However, when asked about forms of impact, citations were viewed as more important than advancing research or contributing to the SDGs.
We might infer that this focus on citations as the key form of impact is driven by the mechanisms that underpin research assessment. There are calls to move away from a focus on citations and publication venue, and to judge work based on its own merits19. However, changes to institutional assessment mechanisms, and to academic culture itself, are slow to take effect. This focus on citations may be compounded by the ways in which impact is judged at a national level, with judgements around research 'excellence' by country often based on comparing field-weighted citations from one country to another20.

Is this an issue worth addressing?
The answer to this question must be "yes": for the research first, and then for the researcher, institution, funder, and publisher. If the measures that exist within academic publishing remain unchanged, continuing to prioritise other metrics and outcomes, we risk devaluing the necessary application of original research to addressing our global challenges. It is then a slippery slope from devaluing to deprioritising to not doing at all, and the devaluing of important, consequential research today will likely lead to less of it in the years to come, at a time when our global society's needs require more such research, not less.
There is also a cultural issue. If research as a whole does not pursue greater public engagement and support the tackling of our global challenges, the research community risks appearing elitist and out-of-touch with the public consciousness21.
How might we bridge the gap?
There is clearly a high level of engagement by the respondents in contributing to the thinking around real-world problems. However, at the heart of academic research there are seemingly competing interests, which push back on our authors' desire to tackle the key challenges affecting our society, and instead pull authors towards pursuing volume of output and accruing citations. Therefore, given this apparent disconnect between the ambitions of researchers to address real-world problems with their work and the realities that drive their choice of journal and preferred type of impact, we next turn to what might be done to bridge this gap.
Seeing the wood and the trees
There seems to be a knowledge gap for our researchers in understanding and communicating the link between their individual, highly focused projects and wider live policy issues, such as those expressed by the SDGs. Respondents encouraged publishers to facilitate the connection of an individual output to a real-world challenge, such as the SDGs, through the publication of summary research conclusions in approachable language ("lay summaries", "policy highlights", or similar), either alongside or as part of the research article, and to work with researchers and institutions to explore new and alternative ways of disseminating such summaries to the general public or a policy audience.
Authors should be encouraged to articulate how their work contributes to real-world challenges, such as how the work aligns with a particular SDG (goal or target). An example of this is the European Commission's Horizon Results Platform, which allows authors to identify their work with particular SDGs and to flag research outcomes as "Claiming significant policy influence"22. Publishers and journal editors should be proactive in this process through the curation of special issues that are directly linked to live policy-relevant issues, such as those expressed in the Horizon Europe missions, to encourage greater consideration of policy priorities by our researchers.
In addition, a focus on interdisciplinarity and the policy relevance of work should be encouraged, as our respondents have indicated that these are essential to ensuring that research has a real-world impact.
To support this work, we will work with our editors and society partners to introduce policy-impact statements widely across our Earth & Environmental Sciences journals over the next 12 months. We hope that this approach will facilitate the contextualisation of new research within larger global needs and aid in the impact of that work on policy decision-making. We will additionally hold a cross-stakeholder discussion forum in 2021 to explore other appropriate structural ways of clarifying policy relevance (such as to particular SDGs) at an article level. We will also publish cross-portfolio special issues on policy-relevant topics, beginning with the policy priorities expressed in the European Commission's Horizon Europe missions 23 , to be published on World Earth Day 2022. By doing this, we aim to encourage the greater consideration of policy priorities by our researchers and to support the publication of such research within our journals. Finally, we will continue to transition our Earth & Environmental Sciences journals onto more-open data-sharing policies to support the availability, reuse, and citation of codes and data 24 .
Institutions could consider developing training programs for their researchers to help them think through how a specialised piece of work can have wider implications, such that it can be incorporated into policy change, and how researchers can communicate and demonstrate the relevance of their work to those discussions.
Building a pipeline to policy
Comments from respondents suggest that changes to traditional mechanisms around engagement, knowledge transfer, and research assessment are required to create better links and lines-of-sight between research endeavour and policy development. Furthermore, several respondents also felt that addressing real-world problems with their work was not enough in itself to contribute to change; rather, action by policy-makers was a decisive factor in whether their research would have such an influence.
To enthuse researchers with the ambition of addressing real-world impact, they need to become more cognisant of how their research is incorporated into policy advice and decision-making. When asked whether, since the publication of their work and to the best of their knowledge, their article had been used or referenced in a non-academic output, the largest group of respondents (32%) answered "don't know", and only 10% suggested that their work had been used in a policy document.
At present, authors cannot easily track, and are often unaware of, whether or how their work is used outside of other research articles. Therefore, there needs to be a much more robust feedback mechanism from governments, NGOs, lobbyists, and advisory groups to the academic community, e.g. through the expansion of tools such as Altmetric 25 , if researchers are going to feel sufficiently equipped to engage in policy advocacy and to feel that non-academic outputs, such as those suggested above, would be valued and acted upon. Much has been done to support the translation of research conclusions into language suitable for a non-academic audience, through platforms such as Kudos 28 , or through social media and academic news services such as EurekAlert! 29 and The Conversation 30 . However, such activity has yet to become common practice. The translation of research results into "policymaking language" might be achieved through the publication of accompanying abstracts for a non-academic audience, or of policy implications/highlights for each new piece of work. Ensuring that there are channels for research to reach policy-makers is another critical activity that publishers should consider. Publishers might synthesise and facilitate meta-comparisons of similar research outcomes to support a streamlined evidence-based policymaking process. New products or services, such as TrendMD 31 , could be created to assist in this translation and dissemination activity, as well as in the presentation of relevant research to policy-makers and their advisory groups. These services will, however, require financial support; additionally, researchers need to be incentivised and positively rewarded for making these connections. Such activities must be supported by training and incentives from those host institutions keen to ensure that their research outputs have a real-world impact.
To support this activity, publishers and funders should facilitate interoperable standards and persistent identifiers to ensure that all of the outcomes are linked, and that policy decisions are informed by a rich and networked research base.
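As a small illustration of the persistent-identifier plumbing this implies, the sketch below checks the general shape of a DOI and builds its resolver link. The regular expression is a common heuristic that covers most modern DOIs rather than a formal grammar, and the function names are ours, not part of any standard.

```python
import re

# Heuristic pattern for modern DOIs: "10." + a 4-9 digit registrant code,
# a slash, then a suffix with no whitespace. This matches the vast majority
# of DOIs in circulation but is not an official specification.
DOI_PATTERN = re.compile(r"^10\.\d{4,9}/\S+$")

def is_plausible_doi(doi: str) -> bool:
    """Cheap syntactic check before handing a DOI to a resolver or metadata service."""
    return bool(DOI_PATTERN.match(doi))

def resolver_url(doi: str) -> str:
    """Persistent link: the doi.org resolver redirects to the current landing page."""
    if not is_plausible_doi(doi):
        raise ValueError(f"does not look like a DOI: {doi!r}")
    return f"https://doi.org/{doi}"

print(resolver_url("10.1038/171737a0"))  # → https://doi.org/10.1038/171737a0
```

The value of the persistent layer is that the resolver URL stays stable even when the publisher's landing page moves, which is what allows policy documents and research outputs to remain linked over time.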
Most broadly, there may be a need to extend the "research cycle" to incorporate a policy dimension into the initial stage of developing the research question, thereby ensuring that researchers review funding calls in their area and study the research agenda of their governments and policymakers for activities relevant to their field. The European Geosciences Union (EGU) have published a helpful graphic, which outlines the interplay between policy and the research cycle 32 . Greater adoption of this "policy cycle" approach, including the formalisation of prompts at the dissemination stage of the research cycle to direct relevant outcomes to policy-makers, could help to influence sharing behaviours, and the expansion of a researcher's existing network into the policy arena.
Building links and amplifying networks
Publishers might typically express the value that they provide to the academic community through the oversight of rigorous peer-review processes to validate research conclusions, and through the curation, dissemination, and preservation in perpetuity of the academic record. However, based on the responses to the questions "Did the publication of your article lead to you forming new connections with any of the following?" and "Have any of the following happened in your career since the publication of your article?", it is clear that publishers also play a hugely important role in facilitating new collaborations between researchers, often triggered by research publications, with collaboration being a critical component of research success 33 .
In addition, through the promotion and dissemination of their research, publishers have the capability to foster meaningful engagement with the general public through press/news media coverage or mentions on social media. Publishers serve an important role as bridge-builders between communities both within and outside academia, and should ensure that they continue in this function.
Respondents raised concerns around working in small silos, with limited personal networks and small spheres of influence, particularly in the policy space, which held back the impact of their research in tackling real-world problems. Our survey highlights some areas where publishers should consider how to better connect researchers with the wider non-academic community to help their work resonate outside academic circles, and to have real-world impact, including network-extending activities with non-academic communities, specifically policy-makers and industry.
We suggest that universities give thought to helping their researchers to better grasp the mechanisms for how their research can influence policy, and what steps they can personally take to advocate for the uptake of their conclusions into the policy debate. Likewise, we encourage funders, universities, and publishers to work with governments and other pertinent stakeholders to develop a more robust feedback mechanism to the academic community so that researchers can understand how their work is used in decision-making and can feel equipped to advocate for a real-world impact with their work.
Changing the academic currency: research assessment reform
Our feedback suggests that research assessment practices should be reviewed and revised to ensure that policymakers, institutions, and funders capitalise on the aspirations of researchers. Our authors expressed a desire to conduct research that helps to tackle global needs, and most are already actively achieving this with their research. However, current assessment frameworks continue to focus on and reward citations and publication in highly ranked journals at the expense of opportunities for impact or other forms of output. As part of our ongoing training and support for our authors and editors, we highlight the role of research metrics in measuring performance, but also their limitations, and note that such tools need to be used appropriately, as part of a "basket of metrics", and not in place of a qualitative review of individual outputs 37,38 . We are also investing in diversifying research outputs, including providing support for non-traditional formats, such as data notes and software tool articles 39 . We have committed to making a broad set of metrics available across journals that provide a richer overview of their performance, linked with guidance to contextualise these data.
We suggest that institutions reconsider the performance-assessment frameworks that they use, instead placing greater value on a broader range of research outputs, such as patents, case studies, and engagement with secondary education. Likewise, we suggest that funders continue to facilitate, through the grant applications that they support, the pursuit of research with clearly defined opportunities for policy-relevant outcomes or other tangible real-world impact, such as the Gates Foundation's support of the Grand Challenges for Global Health (GCGH) initiative 40,41 .

Conclusion
Following a survey of >2,500 researchers who had published in our Earth & Environmental Sciences journals portfolio, we found that a majority of respondents (90%) indicated that their work either currently contributed to meeting real-world problems or that it would become a priority in the future, thus suggesting that, as one might anticipate, the tackling of real-world challenges is a significant research priority in the Earth & Environmental Sciences.
Whilst it is very encouraging to see that the majority of research in the subject area is concerned (directly or indirectly) with addressing our global needs, the impetus seems to be altruistic researcher desire, rather than incentives or support from publishers, funders, or institutions. As a result, it seems that this laudable ambition is being lost amidst the realities of being a researcher, where success is predominantly measured by citations and publication venue.
Therefore, herein, we have used survey responses to consider what opportunities we have as a research community to collectively assist researchers in having the real-world impact with their work that they (and we) would like it to have. Accompanying this report is a set of suggestions for the wider community, which have been drawn out of the conclusions from this work, along with a series of commitments which we as Taylor & Francis will take to play our part in addressing some of these issues.
We welcome feedback from the community and opportunities for collaboration, and we anticipate that these recommendations will be further refined as we implement our commitments and undergo further consultation.

Data availability
Underlying data

https://editorresources.taylorandfrancis.com/understanding-research-metrics/.
39. https://f1000research.com/articles/9-657.

Research & Analytics, Taylor & Francis: Taylor-and-Francis_Impact-Assessment-of-Earth-and-Environmental-Sciences-Research-Author-
There is a fundamental conflict between the aspiration of "maximising the capability of research to achieve" and focusing efforts on addressing urgent needs, which remains unresolved in the article. Throughout, these are conflated and confused. I would consider "capability" a most relevant keyword in this context.
Several references are ill-chosen for supporting the specific point that the authors try to make. Specifically, the Lisbon strategy (reference 1) has the declared aim "to make Europe the most competitive and dynamic knowledge-based economy in the world, capable of sustainable economic growth with more and better jobs and greater social cohesion", but it does not include an explicit call for research to have "impact". Moreover, the report of reference 21 states that "People are broadly split on whether the UK invests too much in long-term R&D rather than solving issues that matter now (33% agree vs. 35% disagree)". The link in reference 20 does not work.
It is rather unusual for me to defend Michael Gove, but "we no longer need experts" is a (popular) misquotation by omission. He stated: "I think the people of this country have had enough of experts with organisations with acronyms saying that they know what is best and getting it consistently wrong.", and the latter part of the statement is much relevant for scientists engaging in public debate. It is important to know how to build and maintain trust.
While the authors elaborate on the term "impact", it remains problematic and likely to be understood in various ways. One could challenge the statement about quantification in economic terms being straightforward, and in particular question whether benefits should be evaluated in such a way. For example, people dying early could be economically beneficial, but would that be societally desired?
The "missions" approach in Horizon Europe is somewhat controversial. Notably, the recent 2020 Euroscience Open Forum (ESOF) included a session "Does science for missions undermine the missions of science?". Likewise, the authors state that the drive to solve "the Grand Challenges of our time" has acquired increased urgency during the COVID-19 pandemic, but one could also argue that this prominently reveals a potential flaw of focusing on identified challenges, which is in neglecting those strands of research that are most suitable to provide the basis for the next challenges that we are to encounter (e.g. https://www.statnews.com/2020/02/10/fluctuating-funding-and-flagging-interest-hurt-coronavirus-research/ and https://www.nytimes.com/2021/04/08/health/coronavirus-mrna-kariko.html).

The authors refer to citations as the primary currency of success or progress, but it might be worth keeping in mind that it is a widespread myth that this applies universally. In particular, the quoted UK Research Excellence Framework (REF) is not based on citation counts. Moreover, there are significant differences between "impact" in the REF and "Pathways to impact" in the context of funding applications to UK Research Councils. Both are distinct from the "Pathways to Impact initiative" (reference 11). I appreciate the authors mentioning a "chain reaction" emerging from original research. Could one elaborate on what would determine the value of research?
With regard to the role of academic publishing, I note that the International Science Council has recently published an insightful report "Opening the record of science: making scholarly publishing work for science in the digital era." 1

The definition of the three main aims of the survey does not specify what group of people "our communities" refers to. I am far less confident than the authors about the respondents being "representative". I would expect that those who just care about their bibliometric profile and other similar performance indicators are not inclined to spend any time responding, which would result in the respondents being more engaged for the scientific community and the wider society.
I found the "note about error bars and statistical significance" almost entirely stating trivialities, whereas the authors do not provide the crucial information of what the quoted "error bars" actually refer to and what they mean, which leaves me unable to interpret them.
There are some substantial weaknesses with the survey questions and the provided answer options. It is somewhat confusing that in some cases respondents were able to pick any number of answers from a list, whereas in others the number of choices was limited. This poses some difficulties for the interpretation of the results and the limitations to answer options should be mentioned clearly in the respective figure and/or captions. It would also be useful if the authors referred to the question numbers.
My main concern with regard to the survey is about Q14 and Q16, which refer to "real-world problems". I think that it is an unfortunate choice that the authors put these central rather than referring to Q11 in conjunction with Q3 on addressing the question of why researchers undertake the research that they do. While the term "real-world problem" carries a polemic tone suggesting that academics might be detached from reality, "directly or indirectly contributing" is remarkably fuzzy. It is not clear to me what Q14 and Q16 are actually able to capture, and I feel that answering with "yes", "no", or "Don't know" is mostly a matter of interpretation of the question. I could make a case for my research falling into either of these categories, depending on what point of view I assume. In fact, "don't know" appears to be a good option given that for some research the connection to "real-world problems" is not immediately apparent and the connection might only be built in the future. Apparently, a substantial number of respondents chose that option. I also note that Q16 refers to "priority" whereas Q14 does not. I did not see the authors comment on the lower numbers for an affirmative response on Q16 as compared to Q14.
I also wonder how many of the respondents are familiar with what the UN SDGs are, or are willing to look them up before they answer the question. I note that SDG 8 explicitly recognises creativity and innovation as drivers of economic growth, which aligns fundamental research, not directly targeted at specific challenges, with the UN SDGs.
Something that puzzles me is that 38% of the respondents did not choose the answer "I find researching these topics interesting". Why do they do research that they are not interested in?
It would seem to me that the survey reveals another "gap" than the one the authors claim.
A key gap appears to be in how the research actually materialises into something useful, with only about 20% of the survey respondents stating that "input into policy decision-making" or "contribution to tackling big real-world problems, such as those expressed by the UN SDGs" was amongst the most important forms of "impact" (although they were limited to 3 answers). In contrast, the authors elaborate on the point that respondents ranked formal recognition over making a contribution and state that we risk devaluing the necessary application of original research to addressing our global challenges by prioritising other metrics and outcomes. However, their study does not provide evidence for that. If the research of the respondents is oriented towards "real-world problems", the underlying motivation is not the relevant issue. Unfortunately, the authors do not elaborate on to what extent "contribution to the advancement of research" is aligned with "contribution to tackling real-world problems" and/or "input into policy decision-making", or rather not. It is the more unfortunate that respondents were restricted to a maximum of 3 answers for Q11 rather than being able to state where each of them ranks in priority.
On the question of why authors chose to submit their manuscript to the specific journal, the top chosen answer is pretty much an umbrella category that encompasses more than half of the other answer options, which are more specific on what "most relevant" means.
The authors should define what they consider "Europe", e.g. if respondents stated that they are located in Turkey, have their answers been included or not? To my knowledge, the UK is in Europe.
The mentioned effort on narrowing the science-policy gap is laudable, but can we expect to get researchers on board?
The authors mention "traditional" mechanisms around engagement, knowledge transfer, and research assessment, but in particular with respect to the latter, there are only fashions, but no tradition. A tradition only gets established once something is passed on from generation to generation, while we saw substantial changes on shorter time-scales. Notably, the h-index was not invented before 2005.

Andrew Kelly, Taylor & Francis Group, Abingdon, UK
Responses have been added in-line below the reviewer's comments and are shown in italics.
Reviewer's comments: I am having some difficulties with the policy framing of the article, and it does not become obvious what point exactly the authors intend to make. A few statements don't appear to match up.
The article touches on many topics, but I feel that none of them are discussed to a sufficient extent. It initially centres around 3 key questions that are to be addressed by a survey of authors, but towards the end it morphs into an essay on developing an environment that supports translating research into policy, which is not much underpinned by the survey data. Consequently, the article feels like two that are loosely connected. Moreover, it sometimes reads like a policy statement and advertisement by Taylor & Francis rather than a research article. This impression is strengthened by the lack of research articles amongst the references in conjunction with the authors not adequately positioning their findings in the context of other research on the topic.
We agree that the article needed to be more focused and has been reframed around the results of the author survey, rather than the non-citation value of academic research. The policy framing has been removed.
There is a fundamental conflict between the aspiration of "maximising the capability of research to achieve" and focusing efforts on addressing urgent needs, which remains unresolved in the article. Throughout, these are conflated and confused. I would consider "capability" a most relevant keyword in this context.
Several references are ill-chosen for supporting the specific point that the authors try to make. Specifically, the Lisbon strategy (reference 1) has the declared aim "to make Europe the most competitive and dynamic knowledge-based economy in the world, capable of sustainable economic growth with more and better jobs and greater social cohesion", but it does not include an explicit call for research to have "impact". Moreover, the report of reference 21 states that "People are broadly split on whether the UK invests too much in long-term R&D rather than solving issues that matter now (33% agree vs. 35% disagree)". The link in reference 20 does not work.
It is rather unusual for me to defend Michael Gove, but "we no longer need experts" is a (popular) misquotation by omission. He stated: "I think the people of this country have had enough of experts with organisations with acronyms saying that they know what is best and getting it consistently wrong.", and the latter part of the statement is much relevant for scientists engaging in public debate. It is important to know how to build and maintain trust.
Yes, this section was initially reframed to focus on the increased public trust in science/research during the pandemic, but has now been removed owing to the tighter focus and the references have been updated.
While the authors elaborate on the term "impact", it remains problematic and likely to be understood in various ways. One could challenge the statement about quantification in economic terms being straightforward, and in particular question whether benefits should be evaluated in such a way. For example, people dying early could be economically beneficial, but would that be societally desired?
We agree that the difficulty in qualifying impact itself is part of the challenge, whilst quantification isn't necessarily a good thing. We have reframed around the mobilisation/transfer of knowledge.
The "missions" approach in Horizon Europe is somewhat controversial. Notably, the recent 2020 Euroscience Open Forum (ESOF) included a session "Does science for missions undermine the missions of science?". Likewise, the authors state that the drive to solve "the Grand Challenges of our time" has acquired increased urgency during the COVID-19 pandemic, but one could also argue that this prominently reveals a potential flaw of focusing on identified challenges, which is in neglecting those strands of research that are most suitable to provide the basis for the next challenges that we are to encounter (e.g. https://www.statnews.com/2020/02/10/fluctuating-funding-and-flagging-interest-hurt-coronavirus-research/ and https://www.nytimes.com/2021/04/08/health/coronavirus-mrna-kariko.html).

Yes, we agree, although the missions concept is gaining traction worldwide, potentially at the expense of curiosity-driven research. The discursive elements around Horizon Europe have been removed as part of the tightening of the article.
The authors refer to citations as the primary currency of success or progress, but it might be worth keeping in mind that it is a widespread myth that this applies universally. In particular, the quoted UK Research Excellence Framework (REF) is not based on citation counts. Moreover, there are significant differences between "impact" in the REF and "Pathways to impact" in the context of funding applications to UK Research Councils. Both are distinct from the "Pathways to Impact initiative" (reference 11). I appreciate the authors mentioning a "chain reaction" emerging from original research. Could one elaborate on what would determine the value of research?
With regard to the role of academic publishing, I note that the International Science Council has recently published an insightful report "Opening the record of science: making scholarly publishing work for science in the digital era." 1

Thank you for sharing the reference. Our experience suggests that this is the case, along with other examples, such as Horizon Europe's business case, which comments on increased citations comparatively, but this has been modified or removed in the Introduction.
The definition of the three main aims of the survey does not specify what group of people "our communities" refers to. I am far less confident than the authors about the respondents being "representative". I would expect that those who just care about their bibliometric profile and other similar performance indicators are not inclined to spend any time responding, which would result in the respondents being more engaged for the scientific community and the wider society.
We also agree that survey sampling tends to lead to some degree of self-selection, which may emphasise some biases. However, we were satisfied that the total number of responses across a wide range of journals and the geographical alignment of the respondents with the journals' author base allowed us to have reasonable confidence in the representative nature of the results.
I found the "note about error bars and statistical significance" almost entirely stating trivialities, whereas the authors do not provide the crucial information of what the quoted "error bars" actually refer to and what they mean, which leave me unable to interpret them.
The error bars plot the confidence intervals for the percentages shown. If the error bars for two or more countries overlap, we have been cautious about making any substantive conclusions, because they may not be statistically significant, and only clearly statistically significant differences are discussed in our comparisons.
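The procedure described in this response can be made concrete with a standard normal-approximation confidence interval for a survey proportion. The respondent count (2,695) and the 32% figure come from the article itself, but the sketch and its helper names are illustrative, not the authors' actual analysis code.

```python
import math

def proportion_ci(p_hat: float, n: int, z: float = 1.96):
    """Normal-approximation (Wald) confidence interval for a proportion.

    p_hat: observed proportion (e.g. 0.32 for 32% of respondents)
    n:     number of respondents the percentage is based on
    z:     critical value (1.96 gives a 95% interval)
    """
    se = math.sqrt(p_hat * (1 - p_hat) / n)  # standard error of the proportion
    return (p_hat - z * se, p_hat + z * se)

def intervals_overlap(ci_a, ci_b) -> bool:
    """Conservative check: if two intervals overlap, treat the difference with caution."""
    return ci_a[0] <= ci_b[1] and ci_b[0] <= ci_a[1]

# Example: 32% "don't know" out of 2,695 respondents
low, high = proportion_ci(0.32, 2695)
print(f"95% CI: {low:.3f} - {high:.3f}")  # → 95% CI: 0.302 - 0.338
```

The overlap check mirrors the cautious rule stated above: overlapping intervals do not prove the two percentages are equal, but non-overlap at the 95% level is a reasonable screen for discussing a difference as substantive.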
There are some substantial weaknesses with the survey questions and the provided answer options. It is somewhat confusing that in some cases respondents were able to pick any number of answers from a list, whereas in others the number of choices was limited. This poses some difficulties for the interpretation of the results and the limitations to answer options should be mentioned clearly in the respective figure and/or captions. It would also be useful if the authors referred to the question numbers.
The format of the question was selected according to the purpose of the question and the number of perceived answers. The phrasing of the questions was appropriate for the settings that were used; that is, the questions that presented a limited number of options used wording that emphasised priority, whereas the questions that allowed for an unlimited number of selections used wording that emphasised relevancy.
With regard to the question that allowed a maximum of three responses, this was labelled clearly within the paper, both in the text and in the chart labels.
My main concern with regard to the survey is about Q14 and Q16, which refer to "real-world problems". I think that it is an unfortunate choice that the authors put these central rather than referring to Q11 in conjunction with Q3 on addressing the question of why researchers undertake the research that they do. While the term "real-world problem" carries a polemic tone suggesting that academics might be detached from reality, "directly or indirectly contributing" is remarkably fuzzy. It is not clear to me what Q14 and Q16 are actually able to capture, and I feel that answering with "yes", "no", or "Don't know" is mostly a matter of interpretation of the question. I could make a case for my research falling into either of these categories, depending on what point of view I assume. In fact, "don't know" appears to be a good option given that for some research the connection to "real-world problems" is not immediately apparent and the connection might only be built in the future. Apparently, a substantial number of respondents chose that option. I also note that Q16 refers to "priority" whereas Q14 does not. I did not see the authors comment on the lower numbers for an affirmative response on Q16 as compared to Q14.
We used the phrase "real-world problems" as we felt it was commonly used parlance in aggregating topics such as the SDGs, but agree that there is some subjectivity there and there is a need for education of researchers in contextualising their work. We have included a brief section to introduce the terms below the note on error bars.
I also wonder how many of the respondents are familiar with what the UN SDGs are, or are willing to look at up before they answer the question. I note that SDG 8 explicitly recognises creativity and innovation as drivers of economic growth, which aligns fundamental research, not directly targeted at specific challenges, with the UN SDGs.
Whilst we did not probe the degree of familiarity of the respondents with the UN SDGs, or the scope of individual Goals or Targets, we believe that, especially in the subject areas covered by the survey, it is reasonable to assume that most respondents are broadly familiar with the priorities of the SDGs and that respondents who felt they were not sufficiently familiar with the SDGs to answer the question would have answered "don't know".
Something that puzzles me is that 38% of the respondents did not choose the answer "I find researching these topics interesting". Why do they do research that they are not interested in?
It would seem to me that the survey reveals another "gap" than the one the authors claim.
A key gap appears to be in how the research actually materialises into something useful, with only about 20% of the survey respondents stating that "input into policy decision-making" or "contribution to tackling big real-world problems, such as those expressed by the UN SDGs" was amongst the most important forms of "impact" (although they were limited to 3 answers).
Thank you for sharing this observation, which has been included in the revision.
In contrast, the authors elaborate on the point that respondents ranked formal recognition over making a contribution, and state that we risk devaluing the necessary application of original research to addressing our global challenges by prioritising other metrics and outcomes. However, their study does not provide evidence for that. If the research of the respondents is oriented towards "real-world problems", the underlying motivation is not the relevant issue. Unfortunately, the authors do not elaborate on the extent to which "contribution to the advancement of research" is aligned with "contribution to tackling real-world problems" and/or "input into policy decision-making". It is all the more unfortunate that respondents were restricted to a maximum of 3 answers for Q11, rather than being able to state where each of them ranks in priority.
We have added a new Venn diagram (Figure 5), which looks at the overlap of responses to three of the key options from question 11. The percentages are based on the total number of respondents selecting at least one of these three options. We chose to limit the number of responses to Q11 because we felt that, whilst it would have been worthwhile asking respondents to rank all of the answers, it would have been a much larger undertaking for respondents to rank or rate nine answers. Additionally, asking respondents to rank all of the answers would not have allowed them to rank items as equally important/unimportant, or to leave some items unranked.
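The region percentages behind a Venn diagram of this kind can be computed directly from the raw multi-select responses. A minimal sketch, using hypothetical respondent data and shortened option labels (not the survey's actual wording or numbers):

```python
from itertools import combinations

# Hypothetical multi-select responses to a question like Q11: each
# respondent chose up to three options; we examine three of interest.
options = ["advancement of research", "tackling real-world problems", "policy input"]
responses = [
    {"advancement of research", "tackling real-world problems"},
    {"advancement of research"},
    {"policy input", "tackling real-world problems"},
    {"advancement of research", "policy input", "tackling real-world problems"},
    {"citations"},  # chose none of the three options of interest
]

# Denominator: respondents selecting at least one of the three options.
base = [r for r in responses if r & set(options)]

def pct(subset):
    """Share of the base choosing exactly this subset of the three options."""
    n = sum(1 for r in base if r & set(options) == set(subset))
    return round(100 * n / len(base), 1)

# One Venn region per non-empty subset of the three options.
for k in (1, 2, 3):
    for combo in combinations(options, k):
        print(combo, pct(combo))
```

The key design choice is the denominator: percentages are shares of respondents who picked at least one of the three options, mirroring the basis described above, so the seven Venn regions sum to 100%.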
On the question of why authors chose to submit their manuscript to the specific journal, the top answer is essentially an umbrella category that encompasses more than half of the other answer options, which are more specific about what "relevant" means.
The authors should define what they consider "Europe"; e.g., if respondents stated that they are located in Turkey, were their answers included or not? To my knowledge, the UK is in Europe. The mentioned effort on narrowing the science-policy gap is laudable, but can we expect to get the researchers on board?
A mapped list of countries to the regions that were used in the analysis has been added to the FigShare deposit.
If applicable, is the statistical analysis and its interpretation appropriate?
Yes

Are all the source data underlying the results available to ensure full reproducibility?
Yes

Are the conclusions drawn adequately supported by the results?
Partly
Competing Interests: As stated in the review, I'm working on a similar survey (advising) that is completed but not yet published.

Reviewer's comments: This is a report of an interesting survey getting at a major question related to understanding motivations of researchers in conducting and publishing research, and ultimately how to align incentives to support, recognize, and reward better research and activities of scholars aimed at addressing societal challenges and working with communities.

Major points:
1) The main improvement needed for this paper is placing it in context of other work and author surveys. There are many related and similar surveys and analyses, done by publishers, societies, funders, and scholars, and none (not exaggerating; none) are cited or mentioned. Much of the findings here regarding citations and priorities around publishing, open access, and more, have been covered in other recent author surveys, in this general discipline and other disciplines. This context is essential for this paper to be considered scholarly (and published in a scholarly journal).
I've reviewed a lot of papers over the years, and this is the first submitted to a leading journal where I've seen such a lack of referencing. This might be acceptable for a report by a publisher (cf. Elsevier's recent gender analysis, self-published, also completely without references) but not a submission to a scholarly journal. Most of the references are just to websites, not any formal survey results or scholarly research on these topics (there's a lot even in the past few years). Such comparisons would also strengthen some of the conclusions.
Just a note that JpGU and AGU have conducted a somewhat similar survey of their members. The results are not published yet but were presented in this session: https://agu.confex.com/agu/fm20/meetingapp.cgi/Session/105702 at the recent AGU Fall Meeting (see presentation starting at about 40 minutes; registration is required). I've been involved in helping this survey.
Thank you, and we acknowledge the limitations of our introduction. As part of the refocusing of the article, we have pared back the introduction and included additional referencing to support the discussion.
Overall these results (and others AGU has conducted but not published) are similar to the results given in the later questions here regarding selecting journals, citations, etc. However, on the motivation for research (the first question in this survey, and the one that sets the main stage for discussion), the AGU-JpGU wording was different, but a large number of respondents, well beyond a majority, indicated that their primary motivation for research was around basic "discovery" or "elaboration/synthesis" rather than "responding to responsibility of society", JpGU members even more so. In this survey, unlike the T&F one, there was a large age-related difference between early-career and later-career respondents (early-career researchers were more focused on topics related to societal impact). I suspect that the populations of respondents overlap heavily in the two surveys. Recognizing that the AGU-JpGU results are not yet fully analyzed or published, I'm just raising this to bring caution to over-interpreting the first question of this survey as worded. This question is, however, the most interesting one to explore and provides much of the interesting novelty here. Just be cautious in interpreting the answers.
Thank you for raising this and we agree that it would be interesting to investigate further.
One test would also be to simply score recent publications (outputs) as to whether they align with the results; that is, do most of the outputs directly or indirectly support SDGs, for example? My sense is that, in the Earth and space sciences, many indirectly do, but that the path is long and I'm not sure 75% would without quite a stretch.
This was a very interesting suggestion. We have used Dimensions SDG category data (new Figure 2) to analyse the quantitative alignment of research published in the same set of journals with the SDGs and compared this with the authors' qualitative responses.
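An alignment check of this kind amounts to tallying SDG labels over a corpus of outputs. A sketch under the assumption of a pre-exported list of records, each carrying the SDG categories assigned by a classifier such as Dimensions' (the field names, DOIs, and counts here are hypothetical, for illustration only):

```python
from collections import Counter

# Hypothetical export: one record per article, with any SDG categories
# assigned to it (an empty list means no SDG alignment was detected).
records = [
    {"doi": "10.1234/a", "sdg_categories": ["SDG 13 Climate Action"]},
    {"doi": "10.1234/b", "sdg_categories": ["SDG 14 Life Below Water",
                                            "SDG 13 Climate Action"]},
    {"doi": "10.1234/c", "sdg_categories": []},
    {"doi": "10.1234/d", "sdg_categories": ["SDG 6 Clean Water and Sanitation"]},
]

# Share of outputs with at least one SDG category, comparable with the
# self-reported figure from the survey.
aligned = sum(1 for r in records if r["sdg_categories"])
share = 100 * aligned / len(records)
print(f"{share:.0f}% of outputs carry at least one SDG category")

# Which SDGs dominate the corpus.
counts = Counter(c for r in records for c in r["sdg_categories"])
for sdg, n in counts.most_common():
    print(sdg, n)
```

Note that such a comparison is only as good as the classifier: an automated SDG tagger will mostly capture direct alignment, whereas the survey question also allowed for indirect contributions.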
2) As the authors note there is a disconnect between authors reporting that they are working (directly or indirectly) on societal relevant topics vs. the "impact" (that is, citations) that they are seeking in their work. Further exploration is needed on whether the respondents misinterpreted the first question or if it was worded so vaguely ("indirectly") as to be meaningless. Note that much "basic" research in the Earth, environmental, and space science has widespread indirect impacts.
Much real-time "basic-science" data about the Earth is used in the GPS system, weather prediction, and other applications. For examples, see this discussion that I was involved with: https://eos.org/editors-vox/earth-and-space-science-for-the-benefit-of-humanity and the linked papers. Indeed, many grant applications require a statement regarding impacts. Similarly, results for the question on the impact expected by the authors are used in comparison. I also wonder if the wording and the reality of the scope of published papers drove this response (that is, many have an indirect vs. direct impact) and the response was viewed as a direct impact.
A comment has been added on this in the revised submission to note further research would be useful to better understand the motivations and responses.
3) The authors list a number of actions T&F are taking or should take. Interestingly, T&F has not signed DORA; now that Springer Nature and Elsevier have signed (whatever one thinks of that), Wiley and T&F are the major publishers who have not (many individual society journals published with Wiley have). Perhaps the authors could indicate why or why not that would be appropriate and how to leverage that impact. Here's a recent editorial from a T&F publication: https://www.tandfonline.com/doi/full/10.1080/10919392.2018.1522774

Pleasingly, Taylor & Francis has since signed DORA, but no comment has been made in the article, as the policy-related points have been removed.

4) The authors indicate what some stakeholders, especially publishers, might do. In the Earth, environmental, and space sciences, there are several leading global societies. These are not mentioned. What is their role? Many have missions aligned with providing benefits to society and many are involved in science communication, policy, outreach, and training/mentoring (more so than most commercial publishers and indeed universities). Indeed, this might be an argument to focus on publishing with a society versus a commercial title, where these resources are more directly leveraged.
This was an oversight from the previous submission. As we have removed the policy discussion, we haven't elaborated on this further, but we acknowledge the mission focus of many of the leading societies and their importance in shaping the behaviours of researchers in their communities.

Other items:
The authors argue that it is surprising that JIF is important to researchers but that they don't always/regularly choose the highest JIF journals when submitting. This is because researchers know rejection rates and do optimization around likelihood of success (or they don't want to waste their time, which is also important).
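The optimisation described here can be made concrete with a toy expected-value calculation; the journal names, acceptance probabilities, and cost weights below are all hypothetical, and the model is only a sketch of the trade-off, not a claim about real venues:

```python
# Toy model of journal choice: an author trades off prestige (JIF) against
# acceptance probability and the time lost waiting for a decision.
journals = {
    "flagship": {"jif": 15.0, "p_accept": 0.08, "months_to_decision": 4},
    "solid field journal": {"jif": 4.0, "p_accept": 0.45, "months_to_decision": 2},
    "sound-science venue": {"jif": 2.5, "p_accept": 0.70, "months_to_decision": 1},
}

MONTH_COST = 0.5  # hypothetical penalty per month spent waiting

def expected_value(j):
    """Expected payoff of a single submission: prestige if accepted,
    minus the time cost of the decision either way."""
    return j["jif"] * j["p_accept"] - MONTH_COST * j["months_to_decision"]

for name, j in journals.items():
    print(f"{name}: EV = {expected_value(j):.2f}")

best = max(journals, key=lambda name: expected_value(journals[name]))
print("best single-shot choice:", best)
```

Under these made-up numbers, the low-rejection venue wins even with a far lower JIF, which is consistent with the observation that authors who value JIF do not always submit to the highest-JIF journal.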
We agree that likelihood of success is one of the main drivers in the decision-making of authors when selecting a journal, and have included a paragraph on whether the article was finally published in the first/second/third-or-more choice. We also suggest that speed of publication/time to first decision and the journal's relevance to the community are other important drivers, in addition to acceptance rate and Impact Factor.

Thank you for your careful and thorough reviews. We have been working through the comments and plan to submit a revised version in due course.
One of the major points of concern, which you both raised, was that the paper appeared in some aspects closer to a report/white paper than a research article, in particular in terms of the literature review and the discussion around the uptake of research into policy decision-making. We found these comments especially useful and have been discussing how best to tackle them.
After discussion with the editorial team, we propose to remove the discussion section and the policy-related introductory paragraphs, moving them into a separate non-peer-reviewed piece, where they may be better suited. We plan to refocus the article as a much shorter communication of the results of the survey, as the main source of novelty within the work. We would make that clear in a revised title and abstract; a briefer introduction with a narrower scope, reviewing other similarly positioned surveys; and an abbreviated conclusion. We will also address the other comments with respect to the survey and analysis.
Before substantially revising the article in this way, we would like to take the opportunity afforded by the open-peer-review process to ask what you thought of this approach. We would appreciate your comments and thank you again for your feedback to date.