Assessment of a demonstrator repository for individual clinical trial data built upon DSpace [version 2; peer review: 2 approved]

Background: Given the increasing number and heterogeneity of data repositories, an improvement and harmonisation of practice within repositories for clinical trial data is urgently needed. The objective of the study was to develop and evaluate a demonstrator repository, using a widely used repository system (DSpace), and then explore its suitability for providing access to individual participant data (IPD) from clinical research. Methods: After a study of the available options, DSpace (version 6.3) was selected as the software for developing a demonstrator implementation of a repository for clinical trial data. In total, 19 quality criteria were defined, using previous work assessing clinical data repositories as a guide, and the demonstrator implementation was then assessed with respect to those criteria. Results: Generally, the performance of the DSpace demonstrator repository in supporting sensitive personal data such as that from clinical trials was strong, with 14 requirements demonstrated (74%), including the necessary support for metadata and identifiers. Two requirements could not be demonstrated (the ability to include de-identification tools and the availability of a self-attestation system) and three requirements were only partially demonstrated (the ability to provide links to de-identification tools and requirements, the incorporation of a data transfer agreement in the system workflow, and the capability to offer managed access through application on a case-by-case basis). Conclusions: Technically, the system was able to support most of the pre-defined requirements, though there are areas where support could be improved. Of course, in a productive repository, appropriate policies and procedures would be needed to direct the use of the available technical features. A technical evaluation should therefore be complemented by an assessment of those policies and procedures.


Introduction
The sharing of clinical trial data still occurs mainly within a closed professional environment through direct and personal sharing, rather than via accessible data repositories. A multistakeholder taskforce addressing this problem recommended that data and documents from clinical trials available for sharing should be transferred to a suitable data repository to help ensure that the data objects are properly prepared, are available in the longer term, are stored securely and are subject to rigorous governance 1 . A recent study has shown that an increasing number of such repositories are available for sharing of individual participant data (IPD) from clinical studies 2 . There are many different types of repositories, however, such as generic repositories for all kinds of life-science data, repositories exclusively for clinical research data and specialised repositories with a specific focus, e.g. a single disease area, and major heterogeneity exists with respect to data-upload, data-handling, and data-access processes. This heterogeneity of repository types and features reflects both the different purposes and perspectives of repository founders, and the relative immaturity of repository data-sharing services. Given the lack of a consensus about the services required from a data repository, each organisation has implemented its own policies and systems to meet its own priorities. Greater harmonisation of practices within repositories, coupled with the implementation of quality criteria for repositories, may diminish the reluctance of many researchers to share the data from their studies, thus promoting data-sharing, discoverability, and re-use 3,4 .
In a consensus-building exercise, the necessity for compliance of repositories for clinical trial data and related data objects with quality criteria was emphasised 1 . The services any repository provides should conform to specified quality standards, to give its users confidence that their data and documents will be stored securely and in accordance with the specific data transfer agreements they have agreed. During the consensus exercise, the importance of getting consent for data archiving, sharing and re-use from research participants was stressed and formulated as one of the essential data sharing principles. This paper explores the suitability of a widely used data repository system, DSpace, for supporting the long-term management of IPD generated from clinical research while conforming to defined quality criteria. Though DSpace is a repository system used for open data, it is increasingly used also for restricted data access because it provides several built-in features that make it adaptable for restricted data sharing. The work was carried out as part of a broader set of activities aimed at developing mechanisms for the sharing of IPD from clinical research (https://www.corbel-project.eu/home.htm). It builds on previously published papers describing principles and practical recommendations for IPD sharing 1 , offering a detailed analysis of the processes involved in depositing, managing and sharing IPD 5 , and evaluating existing repositories for their suitability for the deposition of IPD, specifically for researchers in the non-commercial sector 2 . In the latter analysis, repositories were assessed against a set of quality criteria, referring to the processes of data upload, storage, de-identification, and quality controls, metadata, identifiers, flexibility of access and long-term preservation.
The aim of this paper is to describe the development of a demonstrator repository based on the DSpace system and assess it using a pre-defined set of quality criteria and requirements.
The reason for developing this repository was to explore further various technical and workflow issues around the long-term management of IPD, in practical terms, using a well-known repository system applied to IPD from clinical research. The demonstrator is intended as an illustrative example only and this paper deals only with technical aspects of the repository system, i.e. its evaluation as a suitable infrastructure. It is clear that many aspects of a repository's suitability for IPD are linked to the procedures and processes implemented by the institution hosting the repository. In other words, a strong technical infrastructure is a necessary but not sufficient indicator of quality.

Methods
Selection of DSpace as software for developing a demonstrator repository
Writing a bespoke repository system from scratch was seen as unrealistic, given resource constraints, and in any case less useful than using an existing system - one that would also be available to potential repository managers. A variety of systems were considered as the possible base system for the demonstrator repository (e.g. Figshare, DSpace). These and other systems were characterised with respect to the following standardised criteria 6 :
• Name of the system
• Contact
• Webpage of the system
• Level of usage (country)
• Short description of the system
• Type of activity the system is supporting

Amendments from Version 1
The process of the selection of DSpace as software for developing a demonstrator repository was described more clearly. The selection of the quality criteria for assessment of the repository and the reason for missing security features and encryption was better explained. The confusion over the metadata was clarified. In the section 'De-identification practices', a line was added in response to the reviewer's comment. In 'Formal contract regarding upload and storage' an explanation reflecting the comment of a reviewer was given. In the section 'Flexibility of access' the meaning of the term self-attestation has been clarified. The section about 'Long-term preservation and sustainability' has been renamed and rewritten. The reasoning for using only public data has been better explained. In the discussion, the two overarching principles FAIR and TRUST have been introduced. Three references have been added. In addition, some typos/mis-spellings were corrected and minor changes were made to improve the English.

• Modules/architecture/components included
• What data is stored with the system
• Research use cases/projects/studies the system is used for
A formal comparison between the systems was not made 6 , but DSpace was rated as the system with the greatest potential for a demonstrator repository, particularly in an academic context. DSpace was selected partly because it appears to be by far the most popular of the various repository systems, with 2884 users, 2204 of them listed as 'academic' (including the University of Cambridge, Yale, Duke University and the University of Edinburgh amongst many around the world; https://duraspace.org/registry/). Three of the 25 repositories for IPD from clinical trial data evaluated in a recent review are built upon DSpace (Dryad, DRUM, Edinburgh DataShare) 2 .
In addition, DSpace is an open source system and can be modified and extended by users. It claims about 100 contributors to the code base, with the Dryad repository, which runs on DSpace, being an example of how the system can be extended. It is possible to download and run a pre-configured 'out of the box' solution, but DSpace also claims to be fully modifiable, even though many of the modifications listed are relatively superficial (e.g. themes, screen configurations, search parameters). The system appears compliant with most of the relevant standards (e.g. the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH), developed for harvesting metadata descriptions from records), runs on a variety of operating systems and can use either Oracle or PostgreSQL as the back-end database store (https://duraspace.org/dspace/). There also appeared to be an active user group and comprehensive documentation, including a wiki (https://wiki.duraspace.org/display/DSPACE/). An alternative to DSpace would have been Invenio (https://invenio-software.org/), which delivers the repository units for Zenodo, OpenAIRE and CERN Open Data. Invenio appeared very focused on open data, however, while DSpace seemed to offer more possibilities for supporting managed access. Further details of the candidate systems considered are given in reference 6.
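As an illustration of the OAI-PMH interface mentioned above, a harvesting request is a plain HTTP GET with query parameters. The sketch below only builds such a request URL; the endpoint and repository address are hypothetical (the /oai/request path is the conventional DSpace location, treated here as an assumption):

```python
from urllib.parse import urlencode

def oai_pmh_url(base_url, verb="ListRecords", metadata_prefix="oai_dc", set_spec=None):
    """Build an OAI-PMH request URL; oai_dc (Dublin Core) is the format every endpoint must support."""
    params = {"verb": verb, "metadataPrefix": metadata_prefix}
    if set_spec is not None:
        params["set"] = set_spec  # restrict harvesting to one set (e.g. a collection)
    return base_url + "?" + urlencode(params)

# Hypothetical repository; DSpace instances conventionally expose OAI-PMH under /oai/request.
print(oai_pmh_url("http://repository.example.org/oai/request"))
```

A harvester would then issue this GET request and parse the XML response, following any `resumptionToken` returned for large result sets.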
Technical infrastructure for the demonstrator repository
A data repository was established between October 2018 and June 2019 within the Coordination Centre for Clinical Trials at the University of Düsseldorf, by BT (first author), using version 6.3 of DSpace. Additional software was installed to supplement DSpace functioning and to manage servers and common server functionality.
Full list of the software and hardware used for the repository installation, and details of the technical implementation of the demonstrator repository: DSpace is a framework comprising a considerable number of different software tools that must work together to achieve an efficient DSpace installation. Prerequisite software tools must be downloaded, installed, tested, configured and integrated with each other. In addition to DSpace itself, the following were installed:
• Ubuntu 16 and Ubuntu 18 (Linux operating system)
This mid-range system is capable of supporting an application with either a large number of items (roughly 50,000 files and associated metadata) or a large volume of activity (searches, accesses, downloads, etc.).
For testing, publicly available data and documents from clinical trials were uploaded to the demonstrator repository. The data used are displayed on the welcome page of the DSpace demonstrator repository (http://90.147.75.211:8080/xmlui/).

Quality criteria applied to the reference implementation
The quality criteria used for assessment were developed from an original collection of 34 attributes, themselves derived from previous work and discussion within CORBEL and the IMPACT Observatory project 2 . These criteria were meant to provide a broad characterisation of a repository and included aspects assessing both a repository's relative maturity and its suitability for clinical research data. From these criteria, 8 features were selected as being especially important for clinical researchers wishing to deposit individual participant data (IPD). They were used in a general evaluation of repositories 2 and were also applied to the DSpace implementation.
These 8 criteria, identified as being key to successful management of IPD, are listed below 2 :
1. Guidelines for data upload and storage
2. Support for data de-identification
3. Data quality controls
4. Contracts for upload and storage
5. Available provenance and accessibility metadata
6. Application of identifiers
7. Flexibility of access
8. Long-term preservation and sustainability
Other standards and criteria for trustworthy digital repositories have been developed and are being applied (e.g., the Data Seal of Approval and the International Council for Science World Data System) 7-10 . These criteria usually examine more generic repository features, for example the nature of the security measures in place, the use of encryption, the technical infrastructure, staff competence, etc. Because in this exercise we were not evaluating a repository, but focusing instead on a specific tool, one that would sit within a repository, we did not look at these more general criteria in detail. Of course, activities such as monitoring, reviewing and implementing security measures are very important, but we would see them mainly as the concern of the repository managing DSpace rather than of DSpace itself. The relationship between the eight criteria used here and other standards and criteria available for repositories is explored further in the Discussion section (see also Table 3).
Managing metadata (data about data) is a key requirement of any repository system, though there are two distinct forms of metadata to consider. To promote interoperability and retain meaning within interpretation and analysis, shared data should be, as far as possible, structured, described and formatted using widely recognised data and metadata standards (e.g. Clinical Data Interchange Standards Consortium (CDISC), Core Outcome Measures in Effectiveness Trials (COMET), Medical Dictionary for Regulatory Activities (MedDRA)) 1 . The metadata in this context is descriptive, detailing the contents of the data. A repository should be able to check that such metadata is available, ideally in one of a range of specified formats, and support its inclusion with the data (see the details for criterion 1), but the responsibility for providing it rests with the data generators. There is also a need, however, for provenance and accessibility metadata, which is used to make up a repository's catalogue of content, and which describes, for example, the nature and source of the data, its date(s), the authors, and - especially important with sensitive data that is likely to be under managed access - how the data can be accessed, including the details of any application procedure. Providing such metadata is the responsibility of the repository itself, although ideally it is done in close collaboration with the data generators. This type of metadata is the subject of criterion 5.
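To make the second kind of metadata concrete, the sketch below serialises a minimal Dublin Core record of the provenance-and-accessibility type (DSpace's internal metadata model is based on qualified Dublin Core). The field values are invented purely for illustration:

```python
import xml.etree.ElementTree as ET

DC_NS = "http://purl.org/dc/elements/1.1/"
ET.register_namespace("dc", DC_NS)

def dc_record(fields):
    """Serialise (element, value) pairs as a minimal Dublin Core fragment."""
    root = ET.Element("record")
    for name, value in fields:
        ET.SubElement(root, f"{{{DC_NS}}}{name}").text = value
    return ET.tostring(root, encoding="unicode")

# Invented values illustrating provenance plus accessibility information.
xml = dc_record([
    ("title", "Trial XYZ individual participant data"),
    ("creator", "Example Study Group"),
    ("date", "2019-06-01"),
    ("rights", "Managed access: application to a data access committee required"),
])
print(xml)
```

Note how the `rights` element can carry the access conditions, so that even data under managed access remains discoverable through the catalogue.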
In order to make the assessment of the criteria more operational, and to distinguish features of the system (technical features) from measures around the system (e.g. policies and procedures), the criteria were split into specific requirements. This was performed by the group of authors. Table 1 provides a detailed breakdown of the eight criteria in terms of their associated 'requirements' - i.e. the features one would normally expect to see implemented. 'System' features (i.e. those of the repository system and its technical features) are distinguished from 'Procedures' (i.e. those that are a function of the repository's policies and procedures).
For example, to support 'Guidelines for data upload and storage', the requirements for the repository could include: a) being able to support a wide variety of file and metadata types, b) providing easy-to-use mechanisms for the upload of files, including technical instructions, and c) providing rules and guidelines for data upload and storage (e.g. which formats or metadata schema to use and when). a) and b) are mainly aspects of the repository system and its technical features, whilst c) is more a function of the repository's policies and procedures.
In the context of this study it is important to stress that only the requirements labelled as 'system' attributes in Table 1 were evaluated (19 of 29, or 66%). Each of these system features was assessed and its level of fulfilment within DSpace classified as 'demonstrated', 'partially demonstrated' or 'not demonstrated'. The assessment of the requirements was performed by BT and based on publicly available information about DSpace (web pages, user manuals, Q&A pages, reports, etc.). DSpace was not contacted directly, but there was contact with the DSpace community: the Coordination Centre for Clinical Trials in Düsseldorf participated in a meeting of the German user community.

Results
The results are summarised in this section and in Table 2.
Guidelines for data upload and storage
DSpace exhibits a flexible approach to file storage by supporting a range of file types and metadata schemas (1a demonstrated). With a variety of tools available, along with detailed technical guidance, it also provides mechanisms for the upload of files, including instructions (1b demonstrated).

De-identification practices before upload
The DSpace system has no published requirements or guidelines relating to the de-identification of uploaded data. It is the submitter's responsibility to ensure that documents are consistent with current standards, guidelines and policies from official bodies and scientific organisations. The submitter is, however, able to use links to requirements, guidelines and/or tools, if these are established by the system's administrator (2a partially demonstrated). As far as we could tell, neither the DSpace repository system nor the user community has implemented de-identification tools or programs able to perform and document de-identification on an existing dataset (2b not demonstrated). Having said that, it is worth noting that, should such support tools be created, DSpace does provide a task management system (known as the 'Curation System') into which such tools could be integrated and configured.

Control of quality of data
The control of the quality of data is more a question of procedures and workflow around a repository than of technical features available in a particular system. Nevertheless, there are some technical features that could facilitate a quality control workflow. Some of these features are available within DSpace, usually as optional and configurable additions to the data upload process, but they are limited to a predefined review workflow. This covers a single-reviewer workflow, workflow steps defined per collection, and a score review workflow. This is certainly an important feature but does not correspond to a full quality-controlled process, which needs additional features such as monitoring and tracking of uploads, rejections and edits, reports about reviews in process and performed, etc. (3a partially demonstrated).

Formal contract regarding upload and storage
A formal data transfer contract, signed by the data generator and the repository administrator, should be a prerequisite for transferring clinical trial data to a repository, not least to clarify potential legal responsibilities under data protection legislation. At the end of the manual submission process in DSpace, the submitter (data generator) is asked to grant the repository service an appropriate distribution license (different licences can be made available to different user communities). The distribution license can be edited or customised; however, the platform does not directly support the incorporation of a formal data transfer agreement into the submission workflow (4a partially demonstrated).

Application of an identifier
DSpace uses the CNRI Handle System primarily to create a persistent identifier for every object (item, collection and community) stored in the system (6a demonstrated). DSpace also allows other persistent identifiers, such as digital object identifiers (DOIs), to be applied to data sets to improve discoverability and to allow correct citation; this operates in parallel to the Handle System (6b demonstrated). The DSpace system supports several common authentication systems, but web-based self-attestation is not supported (7b not demonstrated). In this context the term 'self-attestation' refers to a registration-like process in which the user first has to provide information about themselves, including their contact details, and give details of the purpose for which they intend to use the data, together with any other information required by the data managers. Email details would then normally be verified (by clicking on a validation link sent to the address provided) before access would be granted.
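Both identifier schemes resolve through public proxy services, which is what makes them persistent and citable. The sketch below simply constructs the resolvable URLs; the identifiers themselves are hypothetical (123456789 is the demonstration Handle prefix shipped with a default DSpace installation):

```python
def handle_url(prefix, suffix):
    """Resolve a CNRI Handle through the global hdl.handle.net proxy."""
    return f"https://hdl.handle.net/{prefix}/{suffix}"

def doi_url(doi):
    """Resolve a DOI through the doi.org proxy."""
    return f"https://doi.org/{doi}"

# Hypothetical identifiers; 123456789 is DSpace's default demonstration Handle prefix.
print(handle_url("123456789", "42"))
print(doi_url("10.1234/example.42"))
```

Because the proxy indirection decouples the citation from the repository's physical location, the identifier survives server moves, which is the point of using it in data citations.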

Flexibility of access
Resources can be made available only to certain 'privileged' users, and this functionality allows access through group membership to be implemented (7c demonstrated). The 'request a copy' functionality exists in DSpace to facilitate access in cases where uploaded content is not openly shared. With this feature, the data submitter or owner interacts directly with the requester on a case-by-case basis. More complex request evaluation processes, for example involving a data access committee, are not directly supported in DSpace, though they could in theory be integrated into any dialogue between the requester and the data submitter (7d demonstrated). The DSpace administrator can assign permissions to a privileged user at the item, community and collection level, allowing granular access to different parts of datasets and collections (7e demonstrated).
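The logic of such group-based resource policies can be sketched as follows. The data structures and names here are ours, chosen for illustration, and do not reflect DSpace's actual API:

```python
# Resource policies keyed by (object type, object id), each mapping to the
# set of groups granted read access; illustrative structures, not DSpace's API.
policies = {
    ("item", "trial-xyz-dataset"): {"trial-xyz-reviewers"},
    ("collection", "cardiology-trials"): {"cardiology-group", "admins"},
}

def can_read(user_groups, resource):
    """Grant access if any of the user's groups appears in the resource policy."""
    return bool(policies.get(resource, set()) & set(user_groups))

print(can_read(["trial-xyz-reviewers"], ("item", "trial-xyz-dataset")))  # True
print(can_read(["public"], ("item", "trial-xyz-dataset")))               # False
```

Keying policies at item, collection and community level is what yields the granular access described above: membership of a single group can open one dataset without exposing the rest of the collection.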

Long-term preservation and sustainability
These are two related issues, one dealing with the preservation of the data in the long term, the second with the sustainability of the repository itself. A repository's longevity will mainly be dependent on resourcing and institutional commitment, and given the inevitable uncertainties around both of these, a clear policy about what should happen to the data if a repository is closed would clearly be a requirement for most potential users. At a technical level, however, DSpace provides some support for long-term preservation mechanisms; e.g. checksums can be applied and verified on all items. It can also be integrated with the open source archiving system Archivematica, allowing the generation of system-independent Archival Information Packages (AIPs) 12 (8a, in so far as it is a technical issue, demonstrated). DSpace also claims to have implemented a strategic plan for sustainability. Because it uses open technology and has broad dissemination and usage, with a large user community and many diverse applications, the long-term availability and maintenance of the system can be expected, if not guaranteed (8b demonstrated).
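The checksum mechanism mentioned above amounts to a fixity check: a digest is stored at ingest and periodically re-computed and compared. The following is a generic sketch of that idea, not DSpace code (we believe DSpace's checksum checker uses MD5 by default, but treat that as an assumption):

```python
import hashlib
import os
import tempfile

def file_checksum(path, algorithm="md5", chunk_size=1 << 20):
    """Compute a checksum incrementally, so large bitstreams need not fit in memory."""
    h = hashlib.new(algorithm)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(path, expected, algorithm="md5"):
    """Re-compute the checksum and compare with the stored value (a fixity check)."""
    return file_checksum(path, algorithm) == expected

# Demo: record a checksum at ingest, then re-verify it later.
fd, path = tempfile.mkstemp()
os.write(fd, b"example bitstream")
os.close(fd)
stored = file_checksum(path)
ok = verify(path, stored)
os.remove(path)
print(ok)
```

A mismatch on re-verification signals silent corruption (bit rot) and would trigger restoration from a replica, which is why such checks belong in any long-term preservation workflow.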

Assessment of the demonstrator repository
The performance in supporting sensitive personal data such as that from clinical trials was strong, with 14 requirements demonstrated (74%). This included strong support for different file types and metadata systems, a range of access control systems, including embargoes and granular access management, an integrated persistent identifier scheme plus support for other identifiers such as DOIs, and good support for long-term data management.
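The headline figures can be reproduced with a simple tally, using the outcome counts reported in Table 2 (14 demonstrated, 3 partially demonstrated, 2 not demonstrated, i.e. 19 system requirements in total):

```python
from collections import Counter

# Outcomes of the 19 'system' requirements, as reported in Table 2.
outcomes = (["demonstrated"] * 14
            + ["partially demonstrated"] * 3
            + ["not demonstrated"] * 2)

tally = Counter(outcomes)
pct = round(100 * tally["demonstrated"] / len(outcomes))
print(tally["demonstrated"], pct)  # 14 requirements demonstrated, i.e. 74%
```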
Of the two areas that were not demonstrated at all, the first -the inability to incorporate de-identification tools in the submission workflow -is arguably an over ambitious requirement. Although general techniques certainly exist for de-identification this should normally be an exercise that is planned, documented and tested on a study-by-study basis, rather than an automatic process. Having links available to de-identification resources is probably a more realistic requirement.
The second missing requirement, the lack of a self-attestation system, is a feature that some data generators might want to use, as it requires much less administrative overhead than setting up access rights for groups and individuals. It would require an administrator to define the fields required for self-attestation and, like the current user registration process, it could be backed up by a system requiring confirmation of the email address given. Given the range of other access options available in DSpace it may not be a serious omission, but it is a missing feature that would be 'nice to have'.
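The email-confirmation step of such a self-attestation workflow could be sketched as below. The signing scheme and function names are our assumptions for illustration; nothing here describes an existing DSpace feature:

```python
import hashlib
import hmac
import secrets

SECRET = secrets.token_bytes(32)  # server-side signing key, kept private

def issue_token(email):
    """Sign the address so the validation link cannot be forged or reused elsewhere."""
    return hmac.new(SECRET, email.encode(), hashlib.sha256).hexdigest()

def confirm(email, token):
    """Only grant access if the token matches the address it was issued for."""
    return hmac.compare_digest(issue_token(email), token)

token = issue_token("researcher@example.org")
print(confirm("researcher@example.org", token))  # True
print(confirm("attacker@example.org", token))    # False
```

In a full workflow, the self-attested fields (identity, contact details, intended use) would be stored alongside the grant, giving the repository an audit trail without per-request administrative review.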
Of the three areas that were partially demonstrated, the need for repository managers to establish links to de-identification and other tools, rather than have them built into the system, may represent an additional task but it is one that should be relatively easy to do. It can also be argued that this approach is more flexible, and easier to keep up to date, than a set of links integrated into the system.
The second partially demonstrated area related to quality control. The submission workflow allows for up to three review stages, which is good, but few other elements of quality control and monitoring seemed to be built into the system. For repository managers handling sensitive data, it would be useful to have reports on upload and access or access request activity, and the ability to integrate checklists of required features or information (such as de-identification status, metadata completeness, access types allowed or identifiers applied), as might be applied during the review process, to tag on the data itself (i.e. within internal system metadata). This would allow the status of the data in the repository to be better monitored and potential issues with data quality and/or legal issues to be more quickly identified.
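The internal checklist tagging suggested here might look like the following sketch; the field names mirror those mentioned in the text but are illustrative only, not part of any existing system:

```python
# Internal checklist fields that a review step might be required to complete.
REQUIRED_CHECKS = ["de-identification status", "metadata completeness",
                   "access types allowed", "identifiers applied"]

def review_report(items):
    """Return the ids of items whose internal checklist is still incomplete."""
    return [item["id"] for item in items
            if not all(item.get("checks", {}).get(check) for check in REQUIRED_CHECKS)]

items = [
    {"id": "dataset-1", "checks": {check: True for check in REQUIRED_CHECKS}},
    {"id": "dataset-2", "checks": {"metadata completeness": True}},
]
print(review_report(items))  # ['dataset-2']
```

A periodic report of this kind would let repository managers see at a glance which deposits still lack, say, a recorded de-identification status, supporting the monitoring described above.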
The third partially demonstrated issue related to data transfer agreements, governing the terms of data upload and storage. Sensitive data requires more than a simple upload to a repository because, unless the data is fully anonymised, there are likely to be legal issues that need to be clarified, for instance exactly which institution is acting as the Data Controller, as that term is defined in the General Data Protection Regulation (GDPR).
(At the very least, the legal status of the data needs to be clear, i.e. does it fall under data protection legislation, and if so which, or is it exempt from such consideration because of the way it has been prepared.) In addition, there may be questions about who is responsible for versioning data if it is changed, for paying any associated costs, about the access management required, and who needs to review access requests if access is managed (etc.). These considerations go well beyond any general agreement whereby data generators simply grant the repository the right to make their data available under a selected licence -and for sensitive data they may need to be considered on a study by study basis.
It would therefore be very useful if -as a configured option -the system could enforce a clear check that such a data transfer agreement was in place, preferably with the date of its application. (At the moment that seems possible, but a rather complex workaround is required.) It would be even better if the system could also indicate where the data transfer agreement was stored and link to it, or even display its provisions within the system. Ideally, a mature system would even allow the agreement to be drafted and agreed within the system, as part of a private interchange between the data uploader and the repository.
Weaknesses of the study
A limitation of the study is that it focuses only on the 8 repository features defined in Banzi et al. 2 . Other quality features not considered here may also be very important, for example good data security. This study should therefore be seen as a starting point, which will need further extension, perhaps using alternative approaches and systems (see next section).
We focused on attributes that we thought were particularly important for clinical trial and similar data. Aspects of quality for data repositories that have been cited by other authors, but which have not been explicitly considered in our approach, include transparency and accountability (Burton et al. 8 ). Allowing access to data in a timely manner, including a proportionate review of the scientific rationale and without introducing unnecessary barriers, has been formulated by Hrynaszkiewicz et al. 7 . Science Europe supports the idea of a metadata repository, enabling referencing to related relevant information, such as other data and publications, and asks for support of data versioning 10 . Effective audits are proposed by Burton et al. 8 . The ICSU World Data System requires that the repository has adequate funding and a sufficient number of qualified staff, managed through a clear system of governance, to effectively carry out its mission, and that the repository enables users to discover the data and refer to them in a persistent way through proper citation 9 . The ICSU World Data System 9 also requires that the repository functions on well-supported operating systems and other core infrastructural software, and uses hardware and software technologies appropriate to the services it provides to its designated community. In addition, the technical infrastructure of the repository should provide for the protection of the facility and its data, products, services, and users 8 . The need to try and integrate these different approaches to assessing data repositories is discussed in the next section.
Another weakness of the study is that the assessment of the quality criteria is (necessarily) subjective -the criteria are not quantitative. In our approach, a rather simple scale based upon "demonstrated", "partially demonstrated" and "not demonstrated" was used. The definition of the different categories may not have been precise enough to give an accurate representation of the repository's functioning.
Finally, there may be an issue related to the sources and completeness of the information used. We only took publicly available information about DSpace into consideration (web pages, user manuals, Q&A pages, reports, etc.). We did not contact DSpace directly and were not in contact with their developers. We did, however, participate in a meeting of the German user community and had discussions with a DSpace user. It should be noted that transparency has been formulated as one of the main principles for trusted repositories: "In order to select the most appropriate repository for a particular use case, all potential users benefit from being able to easily find and access information on the scope, target user community, policies, and capabilities of the data repository." 13 . As a consequence, publicly available information should be sufficient for a basic assessment of a repository.
Approaches and systems for assessing the quality of repositories
There are overarching general principles that address aspects of data management and data repositories at a very high level. The FAIR principles state that data should be Findable, Accessible, Interoperable and Reusable 14 . The TRUST principles formulate guidance for digital repositories of research data, with a focus on Transparency, Responsibility, User focus, Sustainability and Technology 13 . Concrete guidelines, recommendations and best practice for data sharing and for trusted repositories should follow these principles and should provide concrete help for their implementation.
Different approaches have been used to assess the quality of repositories dedicated to data sharing, both of sensitive data and more generally, with different emphases laid upon different features. For instance, Hrynaszkiewicz et al. 7 proposed additional features for data repositories to better accommodate non-public clinical datasets, including Data Use Agreements, whilst Burton et al. 8 introduced the term "Data Safe Haven" for sensitive data, and provided 12 criteria that characterised such a haven.
The Core Trustworthy Data Repositories Requirements 9 are intended to reflect the characteristics of trustworthy repositories (for all types of data). All requirements are mandatory and are equally weighted, standalone items. Although some overlap is unavoidable, duplication of the evidence sought among requirements has been kept to a minimum where possible. The choices contained in the supplied checklists (e.g., repository type and curation level) are not considered to be comprehensive, and additional space is provided in all cases for the applicant to add 'other' (more idiosyncratic) information. This and any comments given may then be used to refine such lists in the future. The CoreTrustSeal Board offers all interested data repositories a core-level certification based on the DSA-WDS Core Trustworthy Data Repositories Requirements catalogue and procedures 9 .
One initiative of Science Europe 10 was to develop a set of core requirements for data management plans (DMPs), as well as a list of criteria for selecting trustworthy repositories where researchers can store their data for sharing. The different approaches are compared in Table 3. In light of the development of the European Open Science Cloud (EOSC) and the increasing pressure for data sharing, these requirements and criteria should help to harmonise rules on data management throughout Europe. This will aid researchers in complying with research data management requirements even when working with different research funders and research organisations.
In general, it may be necessary to distinguish more clearly between criteria that are properties of the underlying infrastructure (e.g. staff preparation, physical security, logical security, appropriate technology) and those that are more tightly coupled to a specific repository system. In fact, we would suggest that there are three (overlapping) 'layers' of attributes that need to be considered: those associated with the underlying organisational infrastructure, those linked to the repository's technical systems, and those derived from procedures and workflows. Future attempts to assess the quality of repositories should perhaps consider these layers more explicitly. In this study we focused on the 'system' attributes, but a broader description and assessment of a demonstrator repository should examine all three aspects, perhaps across each of the three main functional areas of a data repository, i.e. data upload, data storage and data access.
None of the approaches described above is sufficient to classify the quality of repositories for clinical trial data, as pointed out by Banzi et al. 2 . It may be that we need to differentiate criteria that should apply to all or most data repositories from those that only apply, or become more significant, in the context of particular types of data, such as IPD. A general assessment, and especially a general 'score', of repositories may therefore be less meaningful than an assessment for particular types of data or data usage. Despite these difficulties, we believe that it would be useful to try to achieve a consensus about what 'quality' means in terms of data repositories, in different contexts, both to support repository managers and to help guide and promote their use by researchers.

Conclusion
We assessed the suitability of DSpace to support a repository of sensitive data, such as that from clinical trials, using quality criteria that we had previously identified as critical to managing such data. Technically, the system was able to support most of the features required, including the necessary support for metadata and identifiers, though there are areas (for instance, explicit support for data transfer agreements) where support could be improved. Of course, in a productive repository, appropriate policies and procedures would be needed to direct the use of the available technical features. A technical evaluation should therefore be seen as indicating a system's potential, rather than as a definitive assessment of its suitability. DSpace clearly has considerable potential in this context and appears to be a suitable base for further exploration of the issues around storing sensitive data.
This work should stimulate discussion about the quality assessment and certification of repositories. The discussion is of particular importance for repository managers as well as for standardising organisations in the field (e.g. the Data Seal of Approval). Another target group is researchers willing to deposit data in a repository, who have an interest in the repository fulfilling defined quality criteria.

Data availability
All data underlying the results are available as part of the article and no additional source data are required.

Open Peer Review

… potentially sensitive clinical trial data. The assessment criteria used focussed on the "system" level, keeping the scope manageable, and map well onto more formal existing frameworks. The conclusions that DSpace is not a bad place to start (necessary but insufficient) are sound and offer a useful guide to people faced with similar challenges in enabling the sharing of sensitive data. I have a few specific observations around methods and analysis, noted below.
While geared more towards open data, the FAIR principles (https://www.force11.org/fairprinciples) are an increasingly important set of criteria for research data repos and complement some of the approaches in Table 3. Perhaps they could be added to the mapping?
There is no mention of encryption in the 8 assessment criteria, but encryption is hinted at in the software config ("PostgreSQL 9.5 (with pgcrypto installed) as the relational database back end"). For a repo system handling sensitive data, I'd like to see encryption at rest and encryption in flight as two additional criteria. Perhaps this is implicit in the experiment (the pgcrypto extension offers a tantalising hint!), and if so it's worth making it explicit. If encryption wasn't considered as a criterion, it's worth adding an explanation: certainly encrypting archive data is controversial (what if you lose the keys?), but an Internet-accessible database of sensitive data is a worrying thing to have exposed unencrypted.
General, automatic de-identification of data is hard, as I'm sure the authors are fully aware! While they do cover de-identification support (or rather, the lack of it) in DSpace, I wonder if they would like to comment on whether they would regard some form of basic personally-identifiable data quality checking as a "must" for repository systems dealing with sensitive data? (Looking for names, addresses, email addresses, etc. in submissions.) How easy would it be for an absentminded researcher to upload PII into DSpace and make it publicly readable by default? Should the assessment criteria be tighter here? Perhaps this is food for future work.
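The kind of basic check envisaged here could be as simple as a regex pass over submitted text, flagging obvious direct identifiers before anything is made publicly readable. A minimal Python sketch (the patterns and the `scan_for_pii` helper are purely illustrative, not part of DSpace; real de-identification needs far more than regexes):

```python
import re

# Illustrative patterns only: real PII detection must also handle names,
# addresses, dates, indirect identifiers and free-text context.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def scan_for_pii(text):
    """Return (kind, match) pairs for obvious PII found in submitted text."""
    hits = []
    for kind, pattern in PII_PATTERNS.items():
        for match in pattern.findall(text):
            hits.append((kind, match))
    return hits

sample = "Contact the PI at jane.doe@example.org or on +49 30 1234 5678."
print(scan_for_pii(sample))
```

Even such a crude scan, wired into the submission workflow, could catch the absent-minded upload of an email address or phone number; anything subtler would need dedicated de-identification tooling.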

If applicable, is the statistical analysis and its interpretation appropriate? Not applicable
Are all the source data underlying the results available to ensure full reproducibility? Yes

Are the conclusions drawn adequately supported by the results? Yes
Competing Interests: No competing interests were disclosed.

… review in the context of repository submission workflows, is also described. It also includes a summary of the technical requirements (software dependencies and deployment infrastructure), which can be useful to others evaluating the use of this repository platform for the storage and dissemination of research data.

Research methodology
Overall, the paper includes sufficient details about the methods and analysis undertaken. The authors have explored recent studies in the area, i.e. the suitability assessment presented builds upon a previous study looking at a range of existing repository platforms for sharing clinical trial data, and sensitive data more broadly. The results from that study are the basis for selecting the DSpace platform. In this respect, and although the authors include references to materials where the rationale for selecting this platform is presented, it would have been useful to include a summary table outlining key criteria and some details of the other platforms evaluated. The paper only mentions other platforms (e.g. Figshare, or Zenodo) in passing.
One strength of the paper is that the authors reflect on and present the perceived weaknesses of their study. However, and given the sensitive nature of the data underpinning clinical trials, I found it quite surprising that data security features were not included as part of the key criteria defined for this initial assessment, given that not meeting these criteria could impact the suitability of this platform for the archival of clinical data. The authors acknowledge this weakness of their study and state that criteria relating to data security should be considered in future extensions of the study. As part of future assessment, the authors should consider looking at robust security testing of the platform, such as performing penetration testing.
Another weakness of the study, even though the authors acknowledge it in the paper, is that they have only evaluated openly available documentation for the DSpace platform. Such documentation can often be incomplete in community-based projects, owing to potential lack of resources. More detail about why they took this approach would have been useful. Moreover, and given that DSpace is a very popular platform within the academic community as acknowledged in the paper, the authors could have informally contacted other institutions currently using the platform to find out more about their experiences of the platform when put to similar uses, and their opinion on the platform's strengths and weaknesses.

Content review
The paper reads very well, and the content structure is appropriate. The "Introduction" section sets the scene nicely and provides sufficient background information, with relevant and current literature references. One minor observation is that, when the authors introduce the work of a dedicated taskforce addressing the problem of current forms of sharing clinical data, and its proposal to use data repositories, there is no mention of the importance of gaining consent for data archiving, sharing and re-use from research participants. This is a key barrier to data sharing, and one that we encounter as providers of Research Data Management Services, when researchers wish to deposit their data with our Institutional Repository.
The "Methods" section is well developed: the "Technical infrastructure for the demonstrator repository" section provides useful details for those seeking to use similar platforms; and sufficient information is provided so that a similar assessment can be performed on other platforms, or for study replication (even though the analysis is partially qualitative). As mentioned earlier, it would have been useful to include a summary table outlining key criteria and some details of the other platforms evaluated for completeness.
In the "Assessment of quality criteria for the reference implementation", the paragraph beginning with "To promote interoperability …" is a bit unclear and contradictory. It mentions the importance of using metadata standards for describing, structuring and formatting content, which I agree is very important; but they have excluded them as part of the assessment criteria. In particular, the sentence "Here we focus on standards for metadata" is very confusing as the examples given earlier all refer to metadata standards. Is the sentence intended to mean that the study is only concerned with metadata standards and does not consider data format standards?
The "Results" section reads very well and is clear. The summary table together with the different criteria-based subsections include relevant, high-level information about the technical assessment that has been performed. With respect to requirement 2a around de-identification tools, perhaps it is worth mentioning that, although not specifically implemented by the community, the DSpace platform does have a mechanism / framework in place (i.e. curation system) that allows for easy integration of such tools within DSpace's standard submission workflows (see https://wiki.lyrasis.org/display/DSDOC6x/Curation+System).
It is mentioned in the section "Formal contract regarding upload and storage" that the implemented demonstrator does not provide support for constructing and editing the distribution licence. However, the distribution licence text can be edited or customised, as we have done in our Institutional DSpace repository instance. Perhaps what the authors mean instead is that the platform does not provide a user interface to do this easily.
The section about repository long-term preservation could have incorporated more detailed information about the DSpace platform's capabilities around content preservation, with references and links to relevant literature. For example, open source integrations of the DSpace platform with preservation systems exist, e.g. integration with Archivematica ( https://figshare.com/articles/Automating_OAIS_compliant_digital_preservation_using_Archivematica_and_DSpa ). The authors seem to conflate the platform's long-term availability, based on aspects such as technology sustainability plans or wide use, with its capabilities for preserving the repository content itself. The former is not directly related to preservation but to the long-term sustainability of the platform.
Lastly, a number of sections in the paper talk about self-attestation functions in the context of access to repository content (requirement 7b, web-based self-attestation of the user). I am not familiar with this term, and the general reader would benefit from a clearer definition of the term and such functions. I can only guess, based on context and my knowledge of repository platforms, that the authors mean the repository's ability to support user self-registration for access to repository content, or functions for only giving access to content once certain information about the user has been collected and verified. E.g. the repository can incorporate a form asking content requesters to supply information about the uses they will make of the data, the purpose of their research, contact information and/or an email address to be verified, etc. If this is the case, this should be made much more explicit in the paper.
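To make this guess concrete: a self-attestation step could be as simple as collecting and validating a structured statement from the requester before access is granted. A minimal Python sketch (the field names and the `validate_attestation` helper are hypothetical, not DSpace functionality):

```python
import re

# Hypothetical fields a self-attestation form might require.
REQUIRED_FIELDS = ("name", "email", "affiliation", "intended_use")
EMAIL_RE = re.compile(r"^[\w.+-]+@[\w-]+\.[\w.-]+$")

def validate_attestation(form):
    """Return a list of problems with a submitted self-attestation form;
    an empty list means the request can proceed to email verification."""
    problems = []
    for field in REQUIRED_FIELDS:
        if not form.get(field, "").strip():
            problems.append(f"missing field: {field}")
    email = form.get("email", "")
    if email and not EMAIL_RE.match(email):
        problems.append("email address looks invalid")
    if not form.get("agrees_to_terms", False):
        problems.append("terms of use not accepted")
    return problems

request = {
    "name": "A. Researcher",
    "email": "a.researcher@example.edu",
    "affiliation": "Example University",
    "intended_use": "Secondary analysis of trial endpoints",
    "agrees_to_terms": True,
}
print(validate_attestation(request))  # prints []
```

An empty problem list would then trigger the email-verification step; a production workflow would of course also record the attestation for audit purposes.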

Minor edits and structure comments
In the "Results" subsection of the abstract, the sentence "Two requirements could not be demonstrated (inability to incorporate de-identification tools in the submission workflow, lack of a self-attestation system) …" is not clear. It needs to be rephrased, e.g. "ability to incorporate …" and "support for self-attestation …". Otherwise it reads as though the things in parentheses are actually the requirements.
In the "Conclusions" subsection of the abstract, "productive repository" should read "production ready repository" or similar.
In the "Introduction" section, first sentence, "evironment" should read "environment". In Table 3, third row "Control of quality of data", C6 should read "Quality assurance" instead of "insurance". Also, Table 3 appears much earlier (page 7) than its reference within the paper (page 11). I found this quite confusing when reading the paper as it appeared straight after Table 2, and completely out of context. It would be much clearer if the table was moved closer to its reference in the text, towards the end of the paper.

Are sufficient details of methods and analysis provided to allow replication by others? Yes
If applicable, is the statistical analysis and its interpretation appropriate? Not applicable
Are all the source data underlying the results available to ensure full reproducibility? Yes

Are the conclusions drawn adequately supported by the results? Yes