Keywords
self-report data, data studies, data governance, pandemic, pandemic ethics, data collection, big data, data self-reporting
As the first wave of the COVID-19 pandemic washed through Europe in early 2020, governments desperate to get a grip, and researchers and companies hoping to make a direct impact on the issue, all identified the use of self-report data (data that are reported by the participants themselves) as one of the key weapons digitalised societies have for making sense of a health phenomenon that was poorly understood, widely distributed, and urgently concerning.
Data had to be collected for at least a few different purposes, including symptom tracking and illness management; contact tracing (at both national and private levels); and test result reporting. These purposes could be met in various combinations by many different offerings, and governments endorsed a spectrum of technical methods for collection, with different legal requirements and implications tied to the use or non-use of these apps (Kitchin, 2020). Data also had to be collected through different levels of direct involvement on the part of the participant. Some data collection is performed automatically by the app once the participant has set it up; other data can only be produced if the user responds to specific questions.
The enthusiastic ‘technological solutionism’ (Morozov, 2014) with which some governments, including the UK’s, brandished a mobile application as a godsend for keeping the virus at bay can be traced back to their desperation to make their promises believable, the sacrifices digestible, and the social order stable at a time when the sky was falling. While uncritical ‘there’s an app for that’ arguments are easily dismissed, the more modest hypothesis, that self-report data can be a valuable resource in specific circumstances, is not. Data self-reporting for the study of health phenomena is a strategy that clinical researchers have been using for decades: well before the rise of the internet and personal computing, patients were asked to self-report symptoms and experiences through paper-based questionnaires. The spectrum of patient experience is, of course, enormously varied: the same approach might yield highly reliable results when patients self-report about one medical condition, while patients suffering from another condition might not be able to report good quality data.
The reasons for citizens to become data contributors can be many; the general (im)mobilisation that a pandemic can bring to a population is an extraordinary force for project enrolment. Contact tracing apps risk reinforcing social injustices, better protecting those who are already better off while interfering with the livelihoods of more precarious, frontline workers (Ada Lovelace Institute and The Health Foundation, 2021). It is inadequate, as Lucivero and colleagues remind us, that contact tracing apps are imposed on citizens on the grounds of a false dichotomy between individual privacy and public safety (Lucivero et al., 2021).
Outside of pandemic times, data self-reporting has often been associated with individual and collective patient empowerment, and with patient movements. In ‘normal’ times, it has long been clear that patients who are able to engage in large-scale, distributed data collection exercises are better able to advance demands and push for a reorganisation of the research and policy agenda concerning their condition (Epstein, 1996; Rabeharisoa, Moreira, and Akrich, 2014). Data self-reporting becomes a way, then, to put things into motion ‘from the bottom up.’
However, data self-reporting is also the basis on which tech giants have built empires of surveillance, now caught in the public eye for their potential to manipulate and interfere with the lives of individuals, communities and countries. A great deal of criticism has been levelled at projects and rhetorics of web-based participation originating in the environs of the global tech industry, and the disclosure of personal information, traits and behaviour through automated or manual self-reporting is seen by many as a driver of dis-empowerment and a new colonialism (Zuboff, 2019; H. Ekbia and Nardi, 2014). The pandemic has created opportunities for corporate technology platforms to further penetrate public health systems (Lucivero et al., 2020; Tempini et al., 2022), e.g., the co-development by Google and Apple of contact tracing methods to be used by the National Health Service (NHS) in its contact tracing app; leading to accusations of ‘covidwashing’ (Kitchin, 2020); worries of data function creep, where the platform monopolists are able to directly or indirectly make use of the data for private initiatives (Lanzing, 2021); and, ultimately, the contribution to a general trend where collective dependence on the proprietary technology platforms of monopolists deprives the public of the ability to imagine and develop a future without them (Sharon, 2020).
This article relates both to the exceptional pandemic times and the dynamics of accelerated technology adoption and intensified data collection that they have generated; and to the ‘state of play’ in normal times, when individuals have been involved in voluntary and involuntary contributions of data to a range of projects, from consumer-grade e-commerce technologies to ‘cognitive surplus’ (Shirky, 2010) collective projects. In highly digitalised societies, the topic of the ethics and governance of self-report data is inevitably inexhaustible. But as the pandemic and the responses to it have exacerbated such ethical challenges, renewed calls have been made for evidence to be submitted to policy-makers. This brief offers a panoramic discussion of the key issues and articulates recommendations for self-reporting project decision-makers that are based on some of the latest innovations in data governance. The research that underpins this brief was funded by the AHRC under the grant ‘UK Ethics Accelerator: Coordinating and Mobilising Ethics Research Excellence to Inform Key Challenges in a Pandemic Crisis (AH/V013947/1)’. The early two-part report resulting from it informs this policy brief; it was made available on Zenodo (Tempini, 2022a, 2022b).
This section outlines the ethical issues that have emerged from the pandemic experience and from the decades preceding it, during which web technology became mainstream. The second section takes stock of innovations that can help improve best practices with a view to tackling the challenges outlined in the first section.
Data have a tendency to function creep: they often go on to be reused for purposes other than those initially envisaged. The technology used to organise data collection tends to function creep too. This is true in both private and public sector projects. It is worrying because, if data are not risk-managed in the best way, disclosures and leaks that lead to individual and group harm are more likely.
In the private sector, the emergence of surveillance capitalism (Zuboff, 2019), and the lively debate discussing it, has highlighted the deceitful and pernicious intrusions of privacy, and the subsequent behavioural manipulations, from which US tech giants have built their empires: an almost two-decade-long thread of surreptitious projects to take ever more, and more diverse, data about users; flanked by an assertive and crafted rhetoric on the value of sharing, emancipation, community, and peer-to-peer entrepreneurship; which increased the ability of these giants to predict, modify, and generate human behaviour to the point that legitimate concerns have been raised as to the effects of big tech services on just about anything social, including politics, markets, and mental health. The cost of surveillance capitalism, as Zuboff observes, is human futures. The model has been imitated by companies throughout the sector to the point that both practitioners and the public, in different ways, fail to imagine how tech can be built without relying on the sale of user profiles and behavioural modification. Innovative organisations focusing on privacy-first products are struggling to be seen and to break through the current market chokehold. For this reason, the offer by Google and Apple to collaborate in developing an interoperable framework for contact tracing met a mixed reception. In a time of great emergency, it was congenial for national COVID-19 response efforts to develop their apps on top of this tech giant-concerted framework. The UK tried to independently develop the same functionality for its NHS contact tracing app, only to backtrack. The tech giants were praised for their munificence and sense of civic responsibility. The framework also employs gold standard privacy-preserving techniques approved by privacy experts. It was easy to see, however, how self-serving the move could be in white-washing the giants’ reputations from the very practice that had tarnished them – individual-level, comprehensive digital surveillance (Kitchin, 2020). The offer to help fits a history of manipulative attitudes towards public discourse.
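To give a sense of what ‘privacy-preserving’ means in this context, the sketch below illustrates, in simplified Python, the general idea behind decentralised exposure notification designs of the kind the Google/Apple framework drew on: devices broadcast frequently rotating random identifiers, store the identifiers they hear nearby, and match them locally against identifiers published by users who report a positive test. This is a minimal, illustrative sketch of the design principle only, not the actual Google/Apple API; all names and details are assumptions made for the sake of the example.

```python
import os

def new_ephemeral_id() -> bytes:
    """Fresh random identifier, rotated frequently (e.g. every 15 minutes)
    so that observers cannot link broadcasts over time to a single device."""
    return os.urandom(16)

class Device:
    def __init__(self):
        self.my_ids = []         # identifiers this device has broadcast
        self.heard_ids = set()   # identifiers heard from nearby devices over Bluetooth

    def broadcast(self) -> bytes:
        eph = new_ephemeral_id()
        self.my_ids.append(eph)
        return eph

    def record_contact(self, eph: bytes) -> None:
        self.heard_ids.add(eph)

    def report_positive(self) -> list:
        # Only the device's own past identifiers are uploaded on a positive test;
        # no location data and no list of contacts leaves the phone.
        return list(self.my_ids)

    def check_exposure(self, published_ids: list) -> bool:
        # Matching against published identifiers happens locally, on the device.
        return any(eph in self.heard_ids for eph in published_ids)

# Illustrative usage: Alice and Bob come into proximity; Alice later tests positive.
alice, bob = Device(), Device()
bob.record_contact(alice.broadcast())
published = alice.report_positive()      # shared via a public server
print(bob.check_exposure(published))     # True: Bob is notified of exposure
```

The key design choice illustrated here is that neither location data nor contact lists ever leave the device; only the unlinkable identifiers of a user who chooses to report a positive test are shared.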
The legitimisation of surveillance technology, methods, and frameworks as a response to the pandemic emergency has been such that even organisations known to the public for nothing but scandals have had a chance at ‘covidwashing’. For instance, Israel’s contact-tracing app was developed by NSO Group, a secretive spyware organisation notorious for its services to authoritarian governments across the world (Kitchin, 2020). Similarly, Palantir and Experian, other actors of the digital economy with questionable ethical reputations, have also taken part. Observing how easily techniques from the most controversial commercial surveillance practices can be transferred all the way to state infrastructure for pandemic response should make us ask what we are dealing with. For companies such as Palantir, the opportunity was not only reputational, but also an opportunity to turn private technology into state infrastructure, with the commercial benefits that can ensue in the long run (Tempini et al., 2022). The means are the translation, adaptation, and re-deployment of technology, methods, and resources used in population surveillance. The aim is to become indispensable infrastructure for public health response and to prepare state administration processes for further penetration through new drives of technologically, and organisationally, compatible systems provision. Case in point: after providing an analytics infrastructure crucial to the coordination of the covid response, Palantir now looks set to win an unprecedentedly valuable and wide-ranging contract for the provision of future NHS digital capabilities (Financial Times, 2022), extending from the infrastructure already provided during the pandemic. The strategy is right out of the infrastructure studies playbook.
Despite the scandals revealed by a number of whistleblowers and leaks (e.g., Snowden on the PRISM program (Wikipedia, 2022a)), the reaction over recent years has been mixed and, so far, shy of introducing game-changing protections; such is the attraction that the public sector feels towards these new sources. The single most impactful law recently issued on the topic, the European Union (EU) and UK General Data Protection Regulation (GDPR), has had an uncertain track record as a sensitising device, as broad patterns of use have not shifted. It provided great flexibility in implementation. For technology providers, the opportunity lies in implementation fragmentation: if every provider implements the regulation in a slightly different way (as in the myriad different forms of ‘cookie’ language, explanation, and notification interface), the user’s burden in expressing and articulating privacy preferences is at its highest. It is possible that it is rather a string of publicised scandals, such as the Facebook/Cambridge Analytica one, involving the threat of election manipulation and disinformation warfare, that has finally started to bite.
Researchers, working with Internet data and beyond, should also question and account for the ways in which they might be benefiting from such widely criticised ways of generating data, and the ways in which their own practice can relate to function creep issues: “Similarly, harvesting sensitive information from public–private environments such as social media may raise ethical issues, especially for research involving vulnerable populations who may have limited understanding of the implications of disclosing personal information on these platforms” (Clark et al., 2019).
In the public sector, large trends in networking and ubiquitous computing have brought about an explosion of digital watching, made possible by a highly granular fabric of recording devices and a network of interoperable databases that are increasingly interconnected for fast, increasingly real-time, networked access. From CCTV cameras, in whose ubiquitous deployment and acceptance the UK has led the way, to a panoply of digital listening devices, huge amounts of data can be made available to various agencies, from law enforcement to public administration. The value of data and the pressures to fuel the growth of a competitive digital economy have created incentives for governments to enable regulated access to previously unavailable data, even down to the individual level. The unveiling in Spring 2021 of plans by the UK Government for the regulated access and re-use of NHS England and Wales data (see General Practice Data for Planning and Research (GPDPR) reporting; (Machirori and Patel, 2021)) is the latest of a series of attempts to inject these data into the digital and research sector economies (Vezyridis and Timmons, 2017), and signals the determination on the part of policy makers to find the ‘right’ conditions that will make this digestible to the public. Neoliberal politics and the New Public Management ideology of public administration have captured the imagination of policy makers in matters of technology innovation and management. They have created social, political and financial pressures on state institutions to try to generate value out of any viable public asset. But the protracted occupation by the private sector of public infrastructure provision has the effect of stifling imagination and discussion of alternative ways to develop public technology (Sharon, 2020; Tempini et al., 2022). For a long time, scholarship has further pointed out how technology infrastructure and standards have an inertia that makes radical revision as difficult as function creep is easy (Hanseth and Lyytinen, 2010). Once important social and organisational interdependencies have developed around information and communications technology (ICT) that serves as infrastructure, it is very difficult to unseat it. From the care.data debacle (REF) to the latest GPDPR move, NHS data have been one of the fields in which the pressure to data function creep is highest, raising the appeal for government of trying to avoid the turbulence of inevitable questioning by getting plans past scrutiny as quickly and silently as possible. As the Ada Lovelace Institute observes, this fits a pattern of a “decide, announce and defend” approach to the public, focused on its persuasion instead of its involvement (Machirori and Patel, 2021; Ada Lovelace Institute, 2020). It is a self-defeating strategy: each time, the plans have been halted by public debate that is only in part focused on the merits and demerits of the envisioned ways of using data and doing research with them, and is focused, for the rest, on public issues of governance, accountability, and trust. The GPDPR roll-out has now been indefinitely frozen (O’Donovan, 2021), as the size of deliberate opt-outs from a distrusting public was concerning. The pandemic has offered a few tell-tale signs of the issues and risks of data function creep.
The UK government attempted to extend pandemic emergency regulations that had loosened constraints on access to sensitive health data (Tempini et al., 2022; O’Donovan, 2021), but its failure to attend to public trust led to a further setback (Wilson, 2021; Bharti et al., 2021).
The public deserves much better, more responsible approaches to data governance, accountability and the control of function creep. The public should be involved in the deliberation and articulation of the values and aims embedded in projects and infrastructures using data about it (Bharti et al., 2021). Projects should make gaining public trust a paramount concern. At a minimum, Kitchin recommends that: “Citizens should know precisely what the app seeks to achieve and what will happen with their data. There should also be safeguards to stop control creep and the technology being repurposed for general or national security, predictive policing or other governance or commercial purposes” (Kitchin, 2020). And project managers should be concerned about the cumulative effects that straining public trust with exploitative or unaccountable data management can have in the long term. In pandemic times, such risks are accelerated, as function creep can ensue very quickly. An example is the COVID-19 Symptom Study, a data self-reporting app launched by a private company, ZOE (ZOE, 2022), and academics at the start of the first wave. Its funding for the ensuing two years was quickly secured by the government. Only a few months later, when the research team started publishing promising results on the power of self-report symptom data to predict infections, the authors observed the trend for the ZOE data to be “increasingly being linked to the public health response within the National Health Service (NHS) in the United Kingdom” (Drew et al., 2020). In the immediate term, this could be seen as a good example of collective mobilisation and ingenuity in the face of exceptional challenge. But it raises concerns over what this shift could mean for the livelihoods of individuals (See Box 1).
The ZOE COVID-19 Study is a self-report study of COVID-19 symptoms and patient experience centred around a smartphone app developed by a company, ZOE, and a team of nutrition and epidemiology researchers at King’s College London (KCL). The app itself is an example of the speed at which existing technology, code and methods can be redeployed for new purposes. As the first wave hit the UK, the company refactored the app it had been developing for the study of nutrition into an app that could be used by covid patients to record their symptoms on a daily basis, for months, so as to make it possible to estimate the dynamics of the disease as it swept through society. ZOE and the KCL team led by Tim Spector are serial data self-report study leaders, having previously been known for the British Gut project, where they asked participants to provide funding, samples and data in exchange for personalised information and advice on gut health. The participation of contributors was already limited to the completion of a few well circumscribed tasks, with no direct input in project management and day-to-day operations; but it was still taken to be one of a crop of web-based projects that were breaking boundaries towards science democratisation (Del Savio, Prainsack, and Buyx, 2016).

When ZOE COVID launched it was quickly downloaded by a great number of people, thanks to the extraordinary momentum granted by the pandemic emergency (Varsavsky et al., 2021; Drew et al., 2020). It was also adopted in other countries (Kennedy et al., 2021). Contributors input demographic data upon registration, and are then asked to submit a daily report of symptoms and covid test results. The data could then be used by researchers wanting to study covid patient experience, and to estimate its demographic and geographic distribution as well as movements in case counts. Studies reported that the self-reported data could be used to produce estimates that closely track those gained from best methods, leading researchers to conclude that data self-report can be a valuable complementary resource for policy makers and public health officers coordinating emergency response. They should be particularly useful, they add, for understanding the situation in regions where best-method estimates are not viable due to a lack of testing capacity.

The success of the ZOE COVID-19 app in providing a resource of good quality data was outstanding, but there remain open questions as to how feasible (and desirable) an initiative of such proportions would be outside pandemic times. The initiative commanded such extraordinary participation thanks to the extraordinary context in which many people found themselves; it benefited from the lockdowns keeping the population at home, and from constant media coverage that kept the issue relevant and in focus for the many users the app needs to keep reporting daily even when they feel well (to provide a baseline of negative symptomatology). It also benefited from sudden financial backing from the government, as the app was, in the researchers’ own words, “increasingly being linked to the public health response within the NHS” (Drew et al., 2020). As researchers observed (Varsavsky et al., 2021, p5), relatively high numbers of weekly active users are needed to confidently detect relatively small rises in case counts.

However, there are various limitations, and experts have been split over how much confidence can be put in these data to guide emergency response.
Study researchers observe how the self-selected sample of respondents provides a biased and non-representative sample of the population, with a trend towards greater representation of the less vulnerable. This fits the experience of self-reporting initiatives in general. Events like the ‘pingdemic’, which unexpectedly required so many citizens to self-isolate after technology had determined they had been exposed to a positive carrier, could also suddenly pull the rug from under such initiatives. Other interventions can interfere too: vaccination, or being clear of covid after infection, can provide a sense of security that makes some contributors less interested. The same ZOE technology applied in Sweden, a country that had famously resisted the introduction of social distancing measures and lockdowns, saw a lower uptake, again biased in favour of better-served areas clustered around the universities running the study (Kennedy et al., 2021). This is potentially a warning sign for an approach that is recommended as a valuable complement to the management of the emergency in the less well-resourced regions of a country. People who have worse access to testing might be disadvantaged twice, if the surrogate measures also risk being worse.

Worries of function creep and surveillance would not be easily assuaged by sudden partnerships with state institutions (Drew et al., 2020). Emergency response is a highly dynamic situation, and even when the consequences of measurement do not fall directly on specific individuals, when public health officials are trying to coordinate a response which might include various measures affecting people’s livelihoods, we are already in a situation where self-report data have been repurposed and have become what they were not initially meant to be, e.g., a tool for population management. Experts warned about the potential emergence of incentives to lie; about worries of being reported to the police and, in general, about the risks of contributing data to the app; about the potential consequences of these worries for the quality of data collection and of other related activities such as self-administered tests; and about the consequences of the latter for the quality of policy making.
These are not only ethical concerns. They often become epistemological if people start to take countermeasures, which in turn affect the quality and reliability of the data (Bharti et al., 2021). As almost any health researcher knows, opt-outs are not randomly distributed. Just like many other individual preferences and social facts, they introduce bias in experimental and observational research (Teira 2013). When sizeable, they quickly lead to issues of statistical representativeness of the remaining sample. Thus, Bharti et al. (2021) stress how public involvement in the articulation of the values and aims of a data project is not only politically beneficial but also leads to better outcomes. The GPDPR is a case in point, as the value of the proposition was tarnished by huge spikes in opt-out rates (O’Donovan, 2021). And as the controversies around the ‘pingdemic’ demonstrated, allowing an app to report information can give a third party the power to interfere with our everyday life at a later point, on terms one might not be happy with. Perhaps ironically, many deleted the contact-tracing app once it started doing what it was supposed to do and ‘pinged’ them out of circulation. Contact-tracing apps are designed to discipline and reshape spatial movements and social connections (Kitchin, 2020), but they have an uneven impact. Essential workers are likely to face higher costs of ‘protecting others’ policies if the government does not provide any other forms of support. The complex dynamics of social systems (Wilson, 2021) mean that these workers will have very high incentives to flout unfair rules. Kitchin notes that data function creep can also be indirect. Location data that are automatically generated for public health apps by a smartphone operating system’s location services can be shared by the operating system with other apps that also use location data, with the result of deepening the surveillance of a user within the ecosystem of data brokers and surveillance capitalism.
Research with Internet data has been raising ethical concerns related to consent and the distribution of harms and benefits for quite some time. Data science and artificial intelligence (AI) methods often rely on the use of large amounts of real-world data to train machines, and these data have often been taken from where they are most easily available, e.g., much of the Internet. Internet users directly generate data as they post, record, and interact over platforms, but data are also continuously generated by the underpinning technological infrastructures as a matter of mere technical operation. The whole array of data generation and storage instances is impossible to keep track of (Clark et al., 2019). Intense debates have been generated over the ethical standards that new forms of digital research should adhere to (Petermann et al., 2022), including questions over the double standards that publicly vs privately sponsored research follow. Questions of consent will be important for self-report data collection initiatives. If data can be reused multiple times, a whole host of questions is raised as to the appropriate mechanisms for granting consent for uses other than those originally envisaged at the point of data generation.
Calls for ‘blanket’ consent that pre-emptively authorises a broad spectrum of re-use (and the milder version of ‘broad’ consent) have been heavily criticised for their vulnerability to misuse, capture and manipulation. Also heavily undermined have been individualistic arguments that envision each research participant deciding on the use of their own data as the most desirable ideal of data governance – it is not only impractical but also unethical to dump all the responsibility onto isolated individuals. Organisational arrangements that require more complex governance procedures and accountability structures have been better received.
Self-report data, especially when health-related, are often very sensitive, giving rise to the potential for direct and indirect harm from misuse, manipulation, and profiling. It is thus important for contributors to be able to keep track of, and access, contextually relevant information as to what is done with their data, and what is envisioned for the future. They should also be made aware that, because ethical standards and guidelines as to what counts as an admissible re-use of individual data change over time and across research domains and institutions, parties interested in accessing and using their data might not always think that contributors have a right to informed consent when:
▪ the data are shared in a public setting seemingly without any expectation of privacy (for instance, tweets from public accounts)
▪ there are no expectations that participants could be directly harmed by the research (for instance, through anonymisation of the data at the point of collection)
▪ they are working with a private sector company that has served legally-compliant notifications – for instance, a GDPR-compliant fair processing notice can pre-empt, at once, a whole host of uses and projects the company will undertake with the data.
Given the higher exposure to diverse and distributed expectations and engagements, high standards should be required of initiatives relying on self-report data. To be meaningful, questions of consent should be revisited at major project milestones or updates. Participants should be actively invited to consider whether changing situations suit them, and should not be expected to keep track of and make sense of change by themselves. For instance, if the governance of a project changes due to a change in ownership or management, self-report data should not be considered a conventional asset that can be sold on. Instead, they should be treated as objects that bind different people together in an economy of relational ethics (Prainsack, 2019a, 2019b; Birhane, 2021). Data generation and access should be minimised as much as possible. Many of these questions require functioning ethical oversight.
Self-report data collection initiatives can achieve impressive feats thanks to the economics of crowdsourcing, where a large number of contributors each invest only a small portion of their time to a great cumulative effect. However, it is not easy to run these initiatives over a long time, and how to do so will be particularly crucial for those initiatives whose data collection becomes more valuable the longer and more regularly it has taken place, e.g., data about health events and symptoms used for understanding the spread of disease.
The emergency of the pandemic, with the emotional response and mobilisation it generated, along with the lockdowns and the way many people redirected their attention and free time to activities that could be completed at home, created one of the easiest scenarios for a self-report data collection project to succeed. Many people wanted to understand and learn about COVID-19 and needed reassurance and explanations about their daily experience. Many wanted to help. It was easier to accept intrusions of privacy, the boundaries and consequences of which were not fully clear. And still, even in times of pandemic, several challenges arose regarding the motivation and distribution of participation. The UK ‘pingdemic’, and the consequent wave of users deleting the contact tracing app from their phones, spoke of an unresolved tension over the expectations and boundaries of the data reporting that contributors are voluntarily enabling; a tension that was only exacerbated by the experimental nature of the methods involved.
A rich literature on participation in data self-reporting offers many points of concern to keep in mind for future self-reporting projects. The seminal ‘ladder of participation’ published by Sherry R. Arnstein (Arnstein, 1969) has since served to highlight how nuanced the concept of participation can be, how open to manipulation and, as a result, how misleading appearances can be. An extensive literature has followed to question the concept of participation further. Self-reporting projects distributed over the web are often opaque as to their inner, complex workings, while being very public about the moral economy that they want to draw on to mobilise support, with frequent calls to share for the common good, for altruism, emancipation, and the empowerment of oneself and one’s kin. On close scrutiny, participatory practices can turn out to be empty, tokenistic, or extractive ‘crowdsourcing’. Kelty and colleagues (Kelty et al., 2015) point out that participation on the web would be better understood not as a linear spectrum along one axis, but as something multi-dimensional. In this way, the limited openness to participation of most web projects is easier to observe, with most projects restricting participation to one or a few dimensions.
It is crucial that self-report data collection projects respect the effort and investment put in by the contributors who make them possible, by relating to them as invaluable partners and project stakeholders. Extractive models popular in Silicon Valley, which promote the value of sharing and promise empowerment and emancipation only to extract information from individuals while excluding them from the benefits (H. R. Ekbia, 2016) and governance of a project, will come under fire and will be increasingly unpalatable, given the recent mood change in public sphere discussions on these matters. Participation in health research involves a particular kind of labour that turns one’s body into something readable and available to observation and cognition (Brives, 2013; Milne, 2018; Cooper, 2012); and even in the case of unambiguously commercial projects where crowds are called on to contribute cognitive labour in exchange for financial remuneration, as in Amazon’s Mechanical Turk, strong arguments have been raised that point to the exploitation that these models build on (H. Ekbia and Nardi, 2014; Irani and Silberman, 2013; Nardi, 2015).
It is beyond dispute that many projects have exploited contributors, who are asked to volunteer time and information while being kept out of any relevant sense of ownership of the results. In light of much of the literature, it is clear that it is difficult to develop a self-report data collection project without controversy. The asymmetries of projects with a very large, distributed base of contributors and a very small team of managers and developers make for very sensitive politics of contribution and participation. Managers of data self-reporting projects should avoid exploiting a common double standard whereby the efforts of the contributor base are celebrated and recognised with the language of empowerment and ‘bottom-up’, or ‘patient-led’, research while, at the same time, individual contributors are excluded from formal recognition when the research is published in peer-reviewed outlets. Only a few projects have tried to recognise the individual contributors that a crowd is made of through authorship credits, and this may not always be possible or desirable. There are many other ways to formally recognise the contribution of the public in these projects besides scientific credit. Direct involvement in governance and management can be an alternative, as can formal and public ways to gather and respond to requests, motions and value demands.
It is important, and even more so once out of pandemic times, for self-report data collection projects to consider very seriously how they can best and actively engage in transparent communication about all aspects of the project, including decision structures and finalities (See Box 2). The projects that do not cause controversy are likely to be those that engage in a sustained dialogue with their base of contributors; explain key changes in the governance and direction of the project; and demonstrate accountability to the contributor base at least as keenly as they do towards other stakeholders such as sponsors, third party researchers, or regulators.
HEAL-COVID is a study of post-hospitalisation COVID-19 symptoms and long-term outcomes. At the time of writing, it reports the enrolment of over 1,000 patients who had been hospitalised with covid. Patients are asked to self-report data through a questionnaire that was designed over only a few weeks by an interdisciplinary team of specialists, including specialists of patient participation. The urgency of the pandemic, with its hospitalisation surges threatening NHS standards of care, had a capacity for accelerating collaboration and overcoming challenges, such as questionnaire copyright licensing, that in normal times can hold up the development of an experiment for much longer. Some licenses were obtained with a very quick turnaround. To find at record speed the patient representatives who could offer review and support in research design, the team was able to make use of Cambridge University hospital’s group. Patient involvement in questionnaire design focused the team’s efforts on designing a minimalist questionnaire with a minimum number of items – it was important not only to ask patients for feedback on questionnaire and workflow drafts, but also to have an open space where patients could ask the questions they had come up with.

Experts point to contributor burden as both an ethical and an epistemological challenge. Wasting the time and energy of patients who are already challenged by a debilitating disease, many of whom have long covid, should be avoided. But it is also an epistemological challenge because over-burdening contributors diminishes the quality of responses, especially when many patients are suffering from fatigue over a long time. This meant various trade-offs in deciding what symptoms patients should be asked about. What is important to patients from an experiential point of view might differ from symptoms that could be warning signs of dangerous complications. Tests that are very valuable but very difficult to self-report well (e.g., a ‘six-minute walk test’) could be weighed against other less informative symptoms that are more intuitive and familiar to patients, such as mood changes. From a self-reporting perspective, reporting on symptoms that are important to patients and that patients can at the same time report on reliably was a key principle of questionnaire drafting.

As they designed the study at the beginning of the pandemic, researchers had to make assumptions about the kind of patient that would be most likely to participate in the experiment. Experts note how the composition of the team, and the ecology of actors around them (funders, academics), was not representative of society’s demography. All things being equal, this can make the research more vulnerable to biases and assumptions baked into the research design and questionnaires. They also learned that the infection moved through society in unpredictable and ever-changing ways. While the first wave saw ethnic minorities and essential workers such as taxi drivers disproportionately hospitalised, by the time the third wave was sending patients into hospitals, they were disproportionately the unvaccinated.
The consequences of these shifts for the validity of the questionnaire, and of how its items had been selected, were unclear. It also becomes more difficult to elaborate on, and take action from, the feedback about the study that is gathered from the participants; this too will come with hidden biases and assumptions, as researchers collect information only from the self-selected few who accepted to participate. Experts found it much more challenging to know about those who are not participating in the study – more likely to be vaccine and covid sceptics. It is unclear what the consequences of collecting worse data about specific social groups are. It is clear that much depends on how consequential the actions that can be taken from the collected data are – a reason for caution when imagining the use of self-reported data in policy-making and crisis response.
Data can be used for purposes other than those initially envisaged in many different ways. Contributors themselves, once engaged and active in the data collection, can repurpose or imitate the methods and inclusive rhetoric of the projects they joined, and extend the aims of the data collection and networking taking place on a self-report data collection system in order to organise research on matters of their own interest.
There is a long history of this kind of initiative in health research, with perhaps the most historically significant example captured by the pioneering work of Steven Epstein (Epstein, 1996), describing how, in the midst of the US AIDS epidemic, activists of the AIDS Coalition to Unleash Power (ACT-UP) (Wikipedia, 2022b), aiming to influence research on HIV and extend access to experimental therapies, harnessed existing networks of civil rights activists to coordinate independent research, protest and sabotage.
Since the rise of the web, techniques and methods for organising independent research that exploits available resources, data, and technology have grown more sophisticated. Examples have multiplied as we saw the emergence of self-report data collection beyond the domain of scientific research and into the domains of civic and political action, where data have been used by activists to mobilise evidence in support of their claims (Milan and Velden, 2016; Bruno, Didier, and Vitale, 2014); as well as artistic performances and other individual applications of data collection on the conduct of one’s life, in the culture of the Quantified Self movement (Sharon, 2017). In the field of health research, the self-organisation of data collection through web networks and technology has been called ‘patient-led research’ (Vayena and Tasioulas, 2013) in the wake of some celebrated examples (Paul Wicks et al., 2008; Paul Wicks et al., 2011) that excited commentators about the social web’s potential in scientific production.
Exciting as it might seem for ordinary individuals to take knowledge matters into their own hands, there are a number of thorny ethical issues associated with distributed contributors’ ability to organise data collection on the side (Vayena et al., 2015; Tempini and Teira, 2019; P. Wicks, Vaughan, and Heywood, 2014; Ledford, 2018). It is clear that people enrolled in these initiatives often interpret the data thus generated in order to make life decisions with potentially large consequences. For instance, patients might make changes in their treatment course, or take uncontrolled chemicals. And when contributors are many, different aims, expectations, levels of engagement and degrees of literacy on the subject matter are at play. Some will be much more vulnerable than others. They have many questions that they might seek to satisfy with what they find, whether the answer is provided through state-of-the-art methods and project-level supervision or not.
These projects completely lack ethical supervision, let alone supervision of the sort that has long been required of publicly funded university research. No one will have a view as to who these people might be and how they should be supported. What these initiatives risk creating, then, is something short of well-developed solidarity: rather, an ephemeral alliance and confluence of interests between individuals who might then find themselves facing the consequences alone. The regulatory trend towards allowing patients more freedom to experiment with treatment courses means that the risk calculus is becoming more complex just as patients gain more freedom to do it themselves (Carrieri, Peccatori, and Boniolo, 2018; Navarro, Tempini, and Teira, 2021; Tempini and Teira, 2020).
Pandemic times are only likely to exacerbate the problem. With COVID-19 we have seen the emergence, in social media and the public sphere, of a number of theories, since debunked, as to the causes of the illness, and of more or less implausible treatments that are within easy reach of any citizen with a curious mind and misplaced scepticism. Many people are willing to shape actions and medical decisions based on information they find more or less casually. They are mostly on their own in evaluating the risks and kinds of harm that could arise. Spontaneous and ephemeral instances of self-report data collection are likely to compound this kind of issue, by buying credibility, and time, for implausible theories and methodologies. In several examples, patients have eventually self-harmed (Paul Wicks, Heywood, and Vaughan, 2012).
Self-harm is not the only relevant ethical issue. Another important issue is the potential for contributor-led initiatives to interfere with other, better designed clinical research, such as clinical trials, when participants decide to tamper with the experimental protocol by carrying out their own actions alongside it. The result could be the creation of noise, and the jeopardising or slowing down of the best research. These issues can create a rather uncomfortable situation for the managers and developers of the technology platforms these initiatives rely on. A further issue for them is that liabilities and best courses of action can be worryingly unclear.
Given that contributors can invest a significant amount of resources labouring to collect data (Milne, 2018; Cooper, 2012), develop expectations, and can face the risk of undesirable consequences of participating in the data collection, it is really important that the data are valued and used. As a great amount of literature has made obvious, much of the value that has been ascribed to new kinds of data-intensive data collection, including the collection of self-report data, is due to the assumed possibility of reusing the same data multiple times for different purposes. Given that data can acquire new value when they are linked or juxtaposed with other data and questions, their value could be renewed as many times as the people having access to them believe it is worthwhile to do so.
While the investment on the part of an individual contributor can be variable and small, the cumulative investment asked of society, or a relevant group within it, can be considerable; the more so the more contributors a project is able to attract. A self-report data collection project, therefore, will face demands of accountability as to the ways in which the data have been put to use; and whether the amount of effort it commanded was worth the good it generates.
If little is done with datasets that people might reasonably expect more could have been done with, a tension can arise. Scholarship on data self-reporting has thus questioned specific projects that harness the rhetoric of empowerment and participation but might not live up to expectations (Prainsack, 2017; Tempini and Del Savio, 2019; Sharon, 2016) (see Box 3); and has asked whether, if those who directly contribute are excluded from governance decisions about the data, the incentive to maximise the use of the data might be weaker (Birhane, 2021; Prainsack, 2019b; Ernst Hafen, 2019; Tempini and Del Savio, 2019). Expert practitioners involved in health data self-reporting studies often stress the duty that a project has to do justice to the participants’ burden.
PatientsLikeMe (PLM) is a social media network and platform of online communities centred on the patient experience. It gathers hundreds of thousands of patients suffering from thousands of different conditions. The patients gather to socialise, learn from one another and about their condition, and participate in health research. The platform, unlike most social media, is free of ads. The for-profit company sells research services and access to pseudonymised patient data. Its researchers have published many peer-reviewed publications, some very celebrated for the way they leveraged the web to produce scientific knowledge faster and more cheaply than traditional methods. They made PLM one of the most hyped and promising social media in the health space. PLM promised a revolution in health research and care, a model of patient empowerment that would allow patients to rewrite the research agenda of the pharmaceutical sector, democratising health. This was hoped for especially in respect to ‘orphan diseases’ – conditions that, usually because of low patient numbers, fail to attract the level of investment and research necessary to develop treatments that make an impact. PLM was started as a community for ALS patients by a family affected by the condition, who had been at the centre of some of the most dynamic activism and research around the disease – called ‘guerrilla scientists’, their early efforts had already attracted, before the founding of PLM, the attention of a Pulitzer-winning journalist (Weiner, 2004) and of documentarists who took their story to Sundance (So Much, So Fast, 2006).

The reality of everyday operations at PLM was more challenging than many commentators excited about what the social web could mean for knowledge production might have assumed. The collection of self-reported data from such a disparate base of contributors threw up many data quality issues related to the effort to bridge between the world of patient experience, knowledge, language, aims and expectations, and the world of standardised scientific observations, recording and communication, of data structures and taxonomies (Arnott-Smith and Wicks, 2008; Tempini, 2015; Paul Wicks et al., 2010; Frost and Massagli, 2008).

The need to collect data that would be worthwhile, at once, for patients and their personal sense of biographical trajectory, and for third parties interested in learning about them for socialisation, scientific or business purposes, led to conflicting demands. At play were different definitions of what is a valuable direction for the platform and what is necessary burden and attrition (Tempini, 2017). The platform was committed both to a bottom-up revolution of the health research industry and to the development of a viable, self-sustaining business model in a highly competitive and dynamic health industry, where executive board members and venture capitalists have a way of focusing one’s mind. Its monopolistic control over the self-report data, with little direct participation by patients in day-to-day operations and decisions, created an unresolvable tension (Tempini and Del Savio, 2019), and ambiguity towards some genuinely spontaneous patient-led initiatives that threatened to perturb the overall design (Tempini and Teira, 2019).
There are various reasons why a dataset might not be widely re-used. For instance, in the case of health data, their sensitive nature and the high risk of misinterpretation mean that not everybody should be given access. Also, the data that are actually collected might not be as good and reliable as initially expected by the researchers who designed the exercise. Much depends on the everyday practical circumstances of a self-selected group of contributors, with different expectations, hopes and levels of literacy.
Health data can be particularly expensive to keep and govern. They require complex infrastructures to protect them from unauthorised access. This means keeping up with continuously changing security standards and a quickly evolving risk and threat landscape, while at the same time maintaining the knowledge that is required to ensure that the data remain valuable and of high quality, and that their peculiarities and qualities are well understood by those who directly re-use them (Demir and Murtagh, 2013; Tempini and Leonelli, 2018). In this respect, self-report data pose more challenges because they are often made available with a structural lack of contextual information and awareness of the specific situations in which they were generated by unknown, distributed volunteer contributors. Also, many data self-reporting projects, especially when funded through public research funds, can run into issues of long-term maintenance and sustainability; it is important that plans are made early on as to how the data will be managed in the long run and how continuous funding and support for data governance functions will be secured.
At any rate, data self-reporting projects should attend to questions of contributor benefit: reflecting on how to feed back relevant and valuable information learned from the research done with the data, a task that becomes more difficult the wider and more diverse the contributor base is; and actively rejecting forces pulling the relationship with contributors towards an extractive state, where contributors keep being asked to contribute data while a clear sense of benefit and utilisation is lost. This attitude requires active vigilance on the part of project managers and a willingness to challenge abnormal power asymmetries, especially when the project is driven by private entities. Powerful tech giants have entrenched their economic might, to the detriment of the public good, on extractive relationships where ever more data are continuously collected in order to predict and generate behaviour.
This section takes stock of innovations that can help improve best practices with a view to tackling the challenges outlined in Part I. The spectrum of self-reporting is extremely diverse and, as such, impossible to govern through a one-size-fits-all approach; projects can be scientific or not, can raise important privacy concerns, and can be open to organised manipulation and poor governance of risks and harms; best practices and guidelines must be adapted to the local setting and remain open to improvement. The following are three areas of best practice that have been developed to help manage the ethical issues generated by data self-reporting projects. They are applicable to many different domains and institutional settings, and they should be seen as a complementary set of recommendations. They are all concerned with securing the representation of heterogeneous concerns, stakes and forms of knowledge in the day-to-day governance and decision-making of self-report data projects. This is a crucial step for a project to control for the key issues discussed in the previous section.
Managers and developers of data self-reporting projects should consider what measures they can take to ensure data governance is accountable, inclusive, and competent. Projects where data are extracted from a contributor base to then be shifted and shared according to the judgement of a self-selected few are more likely to enter controversy or make poor choices from an ethical point of view. These are the governance approaches privileged by private companies and tech monopolists who intend to control the way in which commercial value can be created from contributed data, because they concentrate decision power in the hands of an allegiant few.
Arguments in favour of individualised, distributed control of data, resonating with recent visions of a decentralised web and decentralised organisational governance, are problematic because of the burden and risks they place on each individual deciding for themselves. They imagine technological frameworks that could make it possible for data to be locked and unlocked by the individual they refer to, thus giving, in principle, total control over the data back to them. The real world might differ. Few have the expertise and time necessary to scrutinise the proposals to use their data that might be put before them. Most are likely to take decisions that are against their best interests or principles because of poor or rushed judgement.
In recent years there has been a great deal of innovation in data governance frameworks that deliver a form of collective control over the projects, uses and aims the data are put to. These are the data trust (Delacroix and Lawrence, 2019) and data cooperative (Ernst Hafen, 2019; E. Hafen, Kossmann, and Brand, 2014) approaches. They have been applied with particular promise in domains such as health data, where data are sensitive, requiring tight scrutiny, and highly valuable and desired by researchers and businesses alike.
These approaches reject the assumption that each individual is capable of, and best positioned for, deciding which projects are worth being given access to their data. They instead elaborate on forms of delegation whereby a number of research participants are appointed as delegates or trustees with data governance responsibilities and are charged with day-to-day decision making and governance. Depending on the setup, members can exercise their own decision over whether to participate in one project or another more or less often. These approaches can be used to take over and collectively govern and mobilise existing datasets that were already generated for other purposes, for instance, receiving healthcare; but they can also be used to start and coordinate the collection of new data from the outset, in a ‘bottom-up’ fashion where the project is managed by the participants. They are intended to afford better inclusivity, accountability and competent data management. Different individuals should be able to seek different levels of involvement according to their interest.
Not all self-report collection projects need to be governed through a bottom-up approach such as data cooperatives or trusts; in many cases, data governance might be better managed through review boards composed of independent experts (see, for instance, section 1.3) who might or might not be participating in the project themselves. The value of representative, bottom-up data governance approaches lies nonetheless in their potential for project management to better articulate the aims, values and directions of a project in a way that better reflects the views and goals of its pool of participants, enacting the sort of relational ethics (Birhane, 2021) in which experiential expertise is valued alongside professional expertise.
Data governance includes the assessment of the risks and benefits of ways of working with data, to the best of the knowledge and assumptions held by those involved in decision-making. Assessing the risks and benefits of different ways of using data necessarily involves making many assumptions about future developments and events. The exercise is uncertain, and even the most suitable experts can make mistakes. This is true even when data are governed through a collective governance framework such as a data trust or cooperative. Harms and benefits are very likely to be distributed unevenly, and collective governance will not in itself make things right or fair if mistaken assessments lead to harms and benefits that are unfairly distributed in ways that could have been mitigated.
As Barbara Prainsack argues (Prainsack, 2019a, 2019b), only when collective governance frameworks are combined with a commitment to the principle of social solidarity can they truly fulfil their revolutionary potential as an alternative to the monopolistic, extractive models pioneered by tech giants. To achieve this, data self-reporting projects need to accept the possibility of harm and prepare for its mitigation. With her colleagues (Prainsack and Buyx, 2017; McMahon, Buyx, and Prainsack, 2020), she calls for the creation of harm mitigation bodies charged with operating harm mitigation measures. These, they suggest, could include the issuance of apologies, the stipulation of amendments to avoid the recurrence of harm, and financial support in the gravest cases.
Through a combination of collective governance (see 1.1) and harm mitigation features, self-reporting projects can aspire to implement a data donation economy that maximises data’s potential to be used for the common good.
Data donation has been used as a buzzword, for broadly mimetic purposes, by some exploitative projects, but it is not a fuzzy concept: there are conditions that must be met for a project to be based on data donation. Barbara Prainsack emphasises that for practices of data reporting and sharing to qualify as based on data donation, they need to exhibit relationality, indirect reciprocity and multiplicity. Relationality requires that the two parties (giver and receiver of data) mutually acknowledge the act of donation; this means that the data receiver (the self-report data project manager) honours the work that the data donors (research participants) have done by “systematically considering the needs and interests of data donors and their significant others” (Prainsack, 2019a: 14). This can be reflected in the ways in which a project is organised to register, reflect and enact the views of its contributors; relational ethics and bottom-up data governance, for instance, can go some way in this direction. But for the data donation economy to ensure that reciprocity is indirect, and thus shared across the membership, harm mitigation measures are a necessary instrument. It is through this combination of arrangements that data multiplicity (Prainsack’s way of pointing to the ability of data to be reused multiple times) can serve the common good. Data collected through mainstream, status quo, commercial data governance practices (for instance, data collected through a user-generated data platform and used by a Silicon Valley giant for undisclosed commercial purposes) might still have multiple uses, i.e., multiplicity, but the economy of such a project would not be based on donation, because the data are not received as a gift and respected with the ethics that gifts command.
A crucially important function that the officers of a data trust, data cooperative or other data governance organisation should provide is the oversight of ethical risks. At any rate, a data self-reporting project should implement an ethical risk oversight function that is adequate to its size and ambition. A project that aims to create a data resource that can be used time and again, and that is widely trusted by internal and external stakeholders, is highly likely to need a formal oversight structure with clear governance procedures.
The time-tested standard model in this domain is the Research Ethics Committee (REC), implemented for decades by universities in the UK, the US (where they are known as institutional review boards, or IRBs), and the rest of the world. RECs normally require researchers who intend to embark on new research to submit a research ethics application. This is usually centred on a formal document (e.g., an application form) that provides key information about the project’s aims, methods and resources, together with an evaluation of ethical risks and a specification of the measures planned to control for them. The application is reviewed by experts serving on the committee, who should be able to judge whether the project is responsibly conceived and whether the ethical risks are acceptably addressed, i.e., whether the control measures are fit for purpose. If the application is rejected, the research cannot start.
Institutions can design, and have designed, research ethics oversight committees and processes differently to adapt to local conditions, but the above tend to be the main features, and this model has been adopted outside academia in the public and private sectors as more organisations have sought to control the risks of their research and development (R&D) activities (including reputational risk, through the accountability that a formal process affords). This means that there is no off-the-shelf approach to research ethics oversight, though various guidelines exist.
The need for up-to-date guidelines and models has recently become apparent, as recent developments in research methodology are challenging the research ethics status quo. A report by the Ada Lovelace Institute, the University of Exeter Institute for Data Science and Artificial Intelligence, and the Alan Turing Institute (Petermann et al., 2022) highlights the new challenges and the possible solutions that institutions should consider in order to keep their research ethics oversight function fit for purpose (also explored, among others, by Ferretti et al., 2021; Jordan, 2019; Basl and Sandler, 2019; Clark et al., 2019). While RECs have long focused on the ethical harms and risks that research could pose for the participants in a study (a focus resulting from the origins of research ethics in medical research and the design of clinical experiments), new kinds of research employing cutting-edge data science and artificial intelligence methods are raising new ethical issues. The innovations engendered by data self-reporting projects coordinated over the web are part and parcel of these changes that need to be confronted.
Of growing concern have been the long-term consequences of research that might be developing methods and technology that can be easily re-deployed (recall ‘function creep’) across institutions, jurisdictions, countries and populations, and the disparate, unequal social impacts these might have; along with a redefinition of issues of privacy, confidentiality and consent at a time when researchers can source individual-level data from the web or other easily available resources without asking the individuals from whom these data originated.
These issues are complex and their governance cannot be a box-ticking exercise. Methods that can be seen as the gold standard in tackling some problems still require close and situated scrutiny. For instance, anonymisation can be a strong solution for granting privacy protection to individuals whose data, widely available on the web, are collected without their explicit informed consent. However, it is widely understood that the strength of anonymisation is always relative to the various other data sources a researcher can access, which can lead to re-identification. And even when anonymisation is considered satisfactory for the purpose of privacy protection, in itself it offers nothing to protect individuals from the eventual unfair impacts of the outcomes of research once these are operationalised and deployed in new systems shaping individual movements, interactions, behaviours and access to services.
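To make the re-identification point concrete, the following is a minimal, hypothetical sketch of a linkage attack: an ‘anonymised’ self-report release is joined with an auxiliary source on shared quasi-identifiers. All dataset names, columns and values are invented for illustration and do not describe any project discussed in this brief.

```python
# Hypothetical illustration of a linkage ("re-identification") attack.
# All data below are invented; the point is that quasi-identifiers shared
# across sources can undo anonymisation.
import pandas as pd

# An "anonymised" release: direct identifiers removed, quasi-identifiers kept.
anonymised = pd.DataFrame({
    "postcode": ["EX4 4QJ", "EX4 4QJ", "CF10 3AT"],
    "birth_year": [1984, 1990, 1984],
    "sex": ["F", "M", "F"],
    "reported_symptom": ["fatigue", "cough", "anosmia"],
})

# An auxiliary source an attacker might already hold (e.g. a public register).
auxiliary = pd.DataFrame({
    "name": ["A. Example", "B. Example"],
    "postcode": ["EX4 4QJ", "CF10 3AT"],
    "birth_year": [1984, 1984],
    "sex": ["F", "F"],
})

# Joining on the shared quasi-identifiers re-attaches names to "anonymous" rows.
linked = anonymised.merge(auxiliary, on=["postcode", "birth_year", "sex"])
print(linked[["name", "reported_symptom"]])
```

The sketch re-identifies two of the three contributors; how many can be re-identified in practice depends entirely on which auxiliary sources are available, which is why the strength of anonymisation can only ever be judged relative to them.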
In recent years, and in response to the emerging ethical challenges posed by the latest innovations in data science and AI, there has been growing demand for the active consideration of the broader downstream impacts of such research. Research communities and academic societies have started to require researchers to submit research ethics statements on the broad societal impacts of their endeavours (Petermann et al., 2022). Leading commercial technology organisations have started to experiment with the implementation of internal ethics review processes for their commercial R&D projects, in an effort to improve their reputation and accountability at a time when the public is questioning whether they can be trusted with running the socio-technical infrastructure of much public life. How far commercial organisations can be expected to follow ethical and business principles together, and to successfully map and manage the broad landscape of social groups, needs and interests, is very much an open question; criticisms of ‘stakeholder capitalism’ abound. But it can perhaps be argued that when self-report data power so many projects and systems across both the research and commercial worlds and underpin affairs in the public sphere, and when so much of public life relies on complex infrastructures built on methods and techniques of recent invention and never fully understood consequence, then attempts at translating into new contexts some of the ethical principles, expectations and methods developed to deal with the ethical issues arising from self-report data might be in order. The implementation of ethical oversight functions, which can take many forms, in new contexts is one such opportunity.
Questions around the redefinition of ethical standards and red lines are tightly linked to practical questions over the right training and composition of experts that RECs should seek when evaluating projects with strong interdisciplinary components, whose risks are inherently more difficult to assess (Petermann et al., 2022), and complex projects involving multiple staggered undertakings where one technological solution builds on the previous. Questions that are open to debate, and that will require careful consideration when implementing research ethics functions in organisations, also include the scope, duration and frequency of interactions between research ethics functions and the researchers leading a project. These questions are not merely operational but directly shape the quality of understanding and assessment that a research ethics function is able to deliver.
But at any rate, a form of research ethics oversight is needed in data self-reporting projects.
Here again, principles of bottom-up participation will be important. The world-leading public health databank SAIL (Ford et al., 2009), which has been managing dozens of highly sensitive, linked health datasets and making them available to hundreds of research projects, involves members of the public, together with domain experts, as reviewers of research ethics applications.
Bottom-up participation will improve the diversity of perspectives and the sensitivity needed to assess bias and unequal impacts, and it will be a required condition for the full implementation of collective governance approaches involving data management through trusts and cooperatives, and for the design and operation of harm mitigation measures. Birhane (Birhane, 2021) emphasises, with the concept of relational ethics, how the communities and groups that are on the receiving end of a project’s impacts should be included in its governance, because they have a key epistemic privilege about its social outcomes: decisions should be taken with and through them, rather than without them.
Important ethical questions on data self-reporting have arisen both before and during the pandemic. The pandemic saw various kinds of self-reporting apps taken up by extraordinary numbers of contributors and heightened our sensitivity to the issues that came forth then. But ‘normal times’ had already taught us much about issues of data governance, recognition and valuation, and it is important to try to address as many of these issues as possible at once, to best prepare for the time after the pandemic. This manuscript discusses some of the key issues and recommends some of the most promising innovations to tackle them and to help with issues of public trust (Bharti et al., 2021). It argues that flexible and sustained ethical oversight is key. Best practices must be adapted to the local setting and improved over time. This is achieved by acting proactively instead of reactively, and by avoiding simplistic solutions. And it is achieved by increasing diversity (of both background knowledge and experience) in ethical oversight and project management.
I would like to thank Paul Wicks, PPI for HEAL-COVID, who offered his time to discuss the topic and provided input as an expert practitioner. The research that underpins this brief was funded by the AHRC under the grant ‘UK Ethics Accelerator: Coordinating and Mobilising Ethics Research Excellence to Inform Key Challenges in a Pandemic Crisis (AH/V013947/1)’. The early two-part report resulting from that research informs this policy brief; it was made available on Zenodo (Tempini, 2022a, 2022b). I would like to thank the team at the UK Ethics Accelerator for their support and feedback on early versions of this manuscript.