Systematic Review

Performance measurement of university-industry collaboration in the technology transfer process: A systematic literature review

[version 1; peer review: 1 approved with reservations]
PUBLISHED 16 Jun 2022

Abstract

Background
To foster organizational improvement, performance must be assessed. This holds equally for university-industry collaboration (UIC) in the technology transfer process. However, such performance covers complex aspects, which makes it difficult to measure and indicates the need for a better understanding of the methods used. Therefore, this study presents a systematic literature review on the performance measurement of UIC in the technology transfer process that gives researchers a quick and easy overview of the literature, mainly regarding the methods used for performance measurement.
Methods
We used two major scientific databases, i.e., Scopus and Web of Science. We defined four groups of keywords to restrict the search criteria. We considered only articles published in the last decade, from 2010 to November 2021. The search procedure contained four phases following the PRISMA framework: (i) identification, (ii) screening, (iii) eligibility, and (iv) inclusion.
Results
The final screening process resulted in 24 articles that satisfied the criteria for inclusion in this review. The collected articles are categorized according to two classifications: the first concerns the type of collaboration, the second the methods used. We investigated three types of collaboration, i.e., at the level of the technology transfer office (TTO), academic (university) spin-offs, and joint research. The methods used for assessing performance range from qualitative and quantitative to mixed methods.
Conclusion
The literature review leads us to the following conclusions. First, most studies are conducted at the TTO level, which is anticipated since the TTO is the most common form of UIC. Second, data envelopment analysis remains the preferred method for measuring performance. This study also provides possible research directions that can help scholars uncover gaps in the literature.

Keywords

performance measurement, PRISMA framework, systematic literature review, technology transfer process, university-industry collaboration

Introduction

Policymakers and researchers have prioritized collaboration between academia and businesses since the implementation of the Bayh-Dole Act of 1980 (Miller et al., 2018). Universities have actively sought to improve their relationships with industry, driven by the increased demand to have a positive influence on society as well as by diminishing funding streams (Miller et al., 2014). This has further led to a growing push to build university-industry collaborations (UICs) to improve institutional innovation and economic competitiveness through information exchange across academic and commercial spheres (Perkmann et al., 2013). Universities now have additional obligations to assist their researchers in transforming knowledge into value in terms of socio-economic growth (Fayolle and Redford, 2014). This is indicated by the aggressive creation of relationships with business sector groups as part of the third-mission activities that help institutions connect with society (Rantala and Ukko, 2018). Moreover, universities are also seen as desirable partners in aiding industrial organizations’ innovation efforts (Mäkimattila et al., 2015).

In the technology transfer process, as in the UIC context, the university may supply ideas and expertise to the business, which the industry would then use and put into practice (Prabhu, 1999). There are seven phases of UIC in the technology transfer process: scientific discovery, dissemination of the invention, evaluation of the invention for patenting, registration of the patent, marketing/supply of the technology to companies or entrepreneurs, negotiation of the license, and formal (or informal) commercialization (Lopes et al., 2018). To foster organisational improvement, the performance of the collaboration must be assessed. Performance measurement is the process of measuring all achievable goals (Sutopo et al., 2019). Although the significance of performance measurement is not always easy to justify, measurement can help organisations discover their strengths and weaknesses as well as areas for improvement. However, the performance of the technology transfer process covers complex aspects, which makes it difficult to measure (Stankevičienė et al., 2017). This indicates a need for a better understanding of the methods used in the performance measurement of the technology transfer process in the UIC context.

The objective of this study is to systematically review the academic articles regarding performance measurement of UIC in the technology transfer process. We searched for articles in the Scopus and Web of Science databases following the PRISMA framework. The collected articles were then categorized according to two classifications: the first was the type of collaboration, while the second referred to the methods used for assessing the performance. This gives researchers a quick and easy overview of the literature on the level of analysis, the methods, and the selection of variables used for performance measurement. Based on the insights of the review, this study also provides some potential research directions that might help scholars uncover gaps in the literature.

The remainder of this paper is organized as follows. The methodology is presented in the next section. The third section presents the results of the search, including the classification of the collected articles. Possible research directions are offered in the Discussion. Finally, the last section concludes.

Methods

The steps taken to conduct this literature review are described in Figure 1. In the first step, the research objective was defined: this study aimed to systematically review the academic articles regarding performance measurement of UIC in the technology transfer process. To obtain widespread coverage of the literature, we used two major scientific databases, i.e., Scopus by Elsevier and Web of Science by Clarivate Analytics.


Figure 1. The research steps.

Step 2 was conducted to restrict the search criteria. As such, the relevant search keywords for the query process were determined. With the aid of Boolean operators, we used four groups of keywords as follows: (i) university-industry collaboration OR university-industry linkages OR university-industry relationship OR university-industry partnership OR university-industry alliance OR knowledge and technology exchange OR university-industry knowledge transfer OR university-industry technology transfer OR community knowledge transfer OR technology transfer office OR office of technology transfer OR knowledge transfer office OR technology transfer center OR technology licensing office; AND (ii) performance OR effectivity OR efficiency OR accomplishment OR achievement OR efficacy; AND (iii) assessment OR evaluation OR appraisal OR analysis OR rating OR measurement; AND (iv) research development OR invention OR scientific discovery OR initial development OR product development OR product manufacturing OR commercialization process OR commercialization OR transferring OR technology acquisition OR product manufacturing OR market development OR valley of death OR death valley OR Darwinian seas. Thus, articles which contain those keywords in the title, abstract, or keywords were extracted. We included several “variations” of UIC in the first group of keywords to get wider results of articles discussing UIC. While the second and third groups of keywords are self-explanatory, we added the context in the last group, i.e., stages in the technology transfer process.
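For illustration, the four keyword groups might be combined into a single Scopus-style query along the following lines (abbreviated with ellipses since the full lists are given above; the exact field codes and quoting used by the authors are not reported, so this is a sketch rather than the actual search string):

```
TITLE-ABS-KEY(
  ("university-industry collaboration" OR "university-industry linkages" OR ... OR "technology licensing office")
  AND (performance OR effectivity OR efficiency OR accomplishment OR achievement OR efficacy)
  AND (assessment OR evaluation OR appraisal OR analysis OR rating OR measurement)
  AND ("research development" OR invention OR "scientific discovery" OR ... OR "Darwinian seas")
)
```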

The next step was conducting the search. The search procedure contained four phases following the PRISMA framework (Page et al., 2021): (i) identification, (ii) screening, (iii) eligibility, and (iv) inclusion, as shown in Figure 2. PRISMA has potential benefits because complete reporting allows readers to assess the appropriateness of the methods and, subsequently, the trustworthiness of the findings. Using the previously mentioned inclusion criteria, the search yielded 264 articles (63 from Scopus and 201 from Web of Science). We then removed 22 duplicate articles. To ensure quality, the document type was restricted to (peer-reviewed) research articles published in journals, as these sources are the most useful for literature reviews (Saunders et al., 2012). Other types of documents, such as books or book chapters, conference proceedings, short communications, letters, and editorial materials, were therefore excluded. From a pragmatic point of view, only articles published in English were included. We considered only articles published in the last decade, from 2010 to November 2021, since including older literature would be misleading and irrelevant. In this way, 75 articles were excluded, leaving 167 articles to be further investigated. The titles and abstracts were then read to verify whether the articles were relevant to this study’s theme and objective. A second-round inspection was performed by carefully reading the full text of each article to assess its eligibility for inclusion in this study. Following this procedure, 49 articles did not meet the criteria and were not considered for final extraction. Note that the ineligible articles did not address the main topic of this research domain (they discussed, e.g., partner selection for UIC, barriers to UIC management, and the relationships within UIC, among other topics). Ultimately, the final screening process resulted in 24 articles that satisfied the criteria for inclusion in this review.
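The screening steps just described could be scripted along the following lines (a minimal Python sketch; the file names and the columns doi, document_type, language, and year are hypothetical, since the actual export formats of the Scopus and Web of Science files are not specified here):

```python
import pandas as pd

# merge the two database exports (shared as underlying data on Figshare)
scopus = pd.read_csv("dataset_scopus.csv")
wos = pd.read_csv("dataset_wos.csv")
records = pd.concat([scopus, wos], ignore_index=True)      # identification: 264 records

records = records.drop_duplicates(subset="doi")            # remove 22 duplicates
records = records[records["document_type"] == "Article"]   # peer-reviewed journal articles only
records = records[records["language"] == "English"]
records = records[records["year"].between(2010, 2021)]     # restrict to the last decade

print(len(records), "records proceed to title/abstract and full-text screening")
```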


Figure 2. The PRISMA framework.

In analyzing and presenting the 24 included articles, we used a qualitative approach to categorize them according to the type of collaboration and the methods of performance measurement.

Results and discussion

The final screening process resulted in 24 articles, published in 16 different journals (see Figure 3), that met the refinement criteria and became the object of this study. Figure 4 illustrates the number of articles published annually from 2010 to 2021. This section is devoted to classifying the articles. Each article was classified according to the type of collaboration and the methods of performance measurement. Classifying the articles extracted from the literature allows readers to see what has been studied and helps identify gaps in this research domain.


Figure 3. Number of articles per journal.


Figure 4. Number of articles per year.

Type of collaboration

We investigated three types of collaboration according to the insights of the literature review (see Table 1). Several elements of UIC performance can be influenced by the type of collaboration. This is shown by the fact that organizational structure has an impact on stakeholder relationships and information flow, as well as on how relational components allow knowledge sharing (Ribeiro and Nagano, 2021).

The first and most common type of collaboration is the technology transfer office (TTO). It is an organization which supports universities in dealing with their intellectual assets and transforming them to benefit society (Carlsson and Fridh, 2002). A TTO commonly offers support in the areas of intellectual property and entrepreneurship, creates relationships with industries and communities, enables the establishment of firms (or start-ups) from UIC, and generates net income from sponsored research, collaborating partners, or consulting opportunities. Note that there are several variations of the term “TTO” in the literature, including knowledge transfer office, university technology transfer office, office of technology licensing, and industrial liaison office (Brescia et al., 2016).

One of the most important mechanisms of the TTO, and the one expected to have the highest impact at the local level, is the formation of academic spin-offs (ASOs), also called university spin-offs (USOs). An ASO is a new firm formed to exploit a university’s core technology or technology-based idea (Smilor et al., 1990). The establishment of ASOs results from the technology transfer policy of a university. ASOs also take part in accelerating technological innovation and promoting economic development (Block et al., 2017; Guerrero et al., 2015; Visintin and Pittino, 2014). ASOs have distinguishing features compared to other start-ups, i.e., being created within a university and offering products resulting from university research. For these reasons, ASOs characteristically have huge potential in terms of research and innovation.

While ASOs might be considered as having the highest impact, joint research probably reflects the lowest quality of collaboration (Guimón, 2013). This type of collaboration does not create a new formal organization; it often takes the form of a temporary (short-term) contract or agreement between university and industry partners. It commonly involves on-demand problem solving with predefined results and tends to be expressed as consulting, contract research, and licensing. In addition, it might include joint supervision and postdoctoral or doctoral positions offered within an alliance (Perkmann et al., 2011; Seppo and Lilles, 2012), co-owned patents (Hong and Su, 2013; Lei et al., 2011), joint publications (Lundberg et al., 2006; Tijssen et al., 2009), joint public lectures, and joint training (Al-Ashaab et al., 2011; Ramos-Vielba et al., 2010).

Methods for performance measurement

The next classification relates to the methods used to measure the performance of the collaboration (see Table 2). One widely used measure of performance is efficiency (Cornali, 2012). In a broad view, efficiency refers to the ratio of output to input (Cooper et al., 2006). In this sense, efficiency describes the ability of the collaboration to generate output(s) with the available input(s). In the literature, efficiency is largely measured by the two most popular frontier methods: data envelopment analysis (DEA) and stochastic frontier analysis (SFA). Among the reviewed studies, DEA was preferred over SFA (DEA was applied in seven studies, while SFA was used in only two). Compared to SFA, DEA can manage multiple inputs and outputs more easily, and it makes no assumption about the functional form. However, DEA assumes that all deviations from the frontier (i.e., from the most efficient units) are due to inefficiency. This means DEA does not discriminate between inefficiency and statistical noise, and consequently it might overestimate the level of inefficiency. This drawback is avoided in SFA, which does discriminate between inefficiency and statistical noise. SFA also allows us to identify the effect of inputs on outputs, something we cannot observe in DEA, and it can be used in a panel data setting (Ulkhaq, 2021). Despite these benefits, studies applying SFA remain a minority compared to DEA; it therefore seems very fruitful to undertake more research on measuring the efficiency of the collaboration using SFA.

Table 2. Methods used for performance measurement.

Method used | Observed in
Data envelopment analysis | Curi et al. (2012), Fadeyi et al. (2019), Ho & Lee (2021), Lafuente & Berbegal-Mirabent (2019), Rossi (2018), Shi et al. (2020), Sutopo et al. (2019)
Stochastic frontier analysis | Bertoletti & Johnes (2021), Lee & Jung (2021)
Regression analysis | Caldera & Debande (2010), Conti & Gaule (2011), Hung et al. (2015)
Performance metrics | Albats et al. (2018), Gianiodis & Meek (2020), Tseng & Raudensky (2014), Weis et al. (2018)
Productivity measure | Lafuente & Berbegal-Mirabent (2019)
Multi-criteria decision making tools | Aragonés-Beltrán et al. (2017), Stankevičienė et al. (2017)
Other quantitative methods | Cartalos et al. (2018), Kireyeva et al. (2020), Venturini & Verbano (2017)
Qualitative method: best transfer practices | Resende et al. (2013)
Mixed method: spin-off lean acceleration | Iazzolino et al. (2020)
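To make the most frequently observed method concrete, the following is a minimal sketch of an input-oriented CCR DEA model solved as a linear program (the toy TTO data and variable names are illustrative assumptions, not figures from the reviewed studies):

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_input(X, Y, o):
    """Input-oriented CCR efficiency of DMU o.
    X: (n_dmu, n_inputs), Y: (n_dmu, n_outputs)."""
    n, m = X.shape
    s = Y.shape[1]
    # decision vector: [theta, lambda_1, ..., lambda_n]; minimize theta
    c = np.zeros(n + 1)
    c[0] = 1.0
    # input rows: sum_j lambda_j * x_ij - theta * x_io <= 0
    A_in = np.hstack([-X[o].reshape(m, 1), X.T])
    # output rows: -sum_j lambda_j * y_rj <= -y_ro
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([np.zeros(m), -Y[o]]),
                  bounds=[(None, None)] + [(0, None)] * n,
                  method="highs")
    return res.x[0]

# toy data: 5 TTOs, inputs = [staff, budget], output = [licence income]
X = np.array([[4, 120], [6, 150], [3, 90], [8, 200], [5, 110]], float)
Y = np.array([[60], [70], [50], [80], [75]], float)
scores = [dea_ccr_input(X, Y, o) for o in range(len(X))]
print(np.round(scores, 3))  # a score of 1.0 marks a DMU on the efficient frontier
```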

Regression analysis has been used as an alternative to the frontier methods, as observed in three studies. Although it cannot measure efficiency, it can be used to identify the determinants of UIC performance, with the outcome of the collaboration serving as the dependent variable. Caldera and Debande (2010) used five measures of performance, i.e., the number of research and development (R&D) contracts, R&D contract income, the number of spin-offs, licensing income, and the number of licensing agreements (the last two were also used by Conti and Gaule, 2011). A latent growth model (LGM), which is a regression-based model, was used by Hung et al. (2015) to assess the use of knowledge created by universities. The LGM consists of two sub-models: the level-1 model describes individual change over time, measured by the number of cumulative patent citations, while the level-2 model describes inter-university differences in citation growth. They showed how the influence of knowledge use for patented inventions is subject to research impact, UIC, and the university’s location.
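As an illustration of the regression approach, a minimal sketch using ordinary least squares might look as follows (all variable names and values are hypothetical; the cited studies use richer specifications):

```python
import pandas as pd
import statsmodels.api as sm

# hypothetical TTO-level data; variables and values are illustrative only
df = pd.DataFrame({
    "licensing_income": [120, 80, 200, 150, 90, 170, 60, 140],  # outcome (dependent)
    "rd_contracts":     [10, 6, 18, 12, 7, 15, 5, 11],          # candidate determinants
    "tto_staff":        [5, 3, 9, 6, 4, 8, 2, 6],
})

X = sm.add_constant(df[["rd_contracts", "tto_staff"]])   # add intercept
model = sm.OLS(df["licensing_income"], X).fit()
print(model.summary())   # coefficient signs/sizes indicate each determinant's association
```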

Performance metrics have also been proposed to evaluate the performance of the collaboration. Gianiodis and Meek (2020) proposed performance metrics to assess entrepreneurial education initiatives within the entrepreneurial university. They argued that a profit-oriented framework model (which commonly includes two performance metrics: revenues from licensing and other activities, and new ventures or start-ups) only favors elite universities and undervalues resource-constrained universities. Tseng and Raudensky (2014) used two normalized metrics, i.e., the overall performance metric (OPM) and the patenting control ratio (PCR), to assess the performance of the TTO activities of twenty major US universities. The OPM, which is developed based on outcome instead of process, is a combination of patents issued, disclosures submitted, patent applications filed, TTO revenue, the number of licenses agreed, and start-ups launched, each associated with a different weighting factor. The PCR, a dimensionless metric, is the number of patents granted normalized by the number of patent applications. Albats et al. (2018) presented specific key performance indicators (KPIs) of UIC. They broke down the UIC measures across the collaboration lifecycle, which implies four stages of UIC and its assessment: inputs, in-process activities, outputs, and impact. Weis et al. (2018) characterized the performance of research organizations across different steps of the technology transfer process. They proposed a commercialization pipeline that can be used to assess and compare relative levels of technology transfer activity at different institutions, and at different steps along the pipeline. They defined seven steps in the pipeline (i.e., research expenditure, invention disclosure, patent application, patent issued, licenses and options executed, start-up, and adjusted gross income), in which each step corresponds to a specific metric.
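A minimal sketch of the two metrics of Tseng and Raudensky (2014) is given below; the indicator values and equal weights are illustrative assumptions, since the original weighting and normalization scheme is not reproduced here:

```python
def patenting_control_ratio(patents_granted, patent_applications):
    """PCR: patents granted normalized by patent applications (dimensionless)."""
    return patents_granted / patent_applications

def overall_performance_metric(indicators, weights):
    """OPM: a weighted combination of outcome-based indicators."""
    return sum(weights[k] * indicators[k] for k in indicators)

indicators = {  # hypothetical outcomes for one TTO, already normalized to [0, 1]
    "patents_issued": 0.44, "disclosures_submitted": 0.60,
    "applications_filed": 0.52, "tto_revenue": 0.35,
    "licenses_agreed": 0.48, "startups_launched": 0.30,
}
weights = {k: 1 / len(indicators) for k in indicators}  # equal weights, assumed

print(f"OPM = {overall_performance_metric(indicators, weights):.3f}")
print(f"PCR = {patenting_control_ratio(40, 90):.3f}")   # 40 grants / 90 applications
```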

Lafuente and Berbegal-Mirabent (2019) investigated the productivity level of TTOs. They analyzed the productivity of TTOs in Spain from 2006 to 2011 by calculating the Malmquist index, a total factor productivity index. After the productivity level was identified, its relationship with aspiration performance was investigated using a random-effects model in a panel data setting. The results confirm that productivity is influenced by changes in the configuration of the TTO’s outcome portfolio.
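For reference, the output-oriented Malmquist productivity index between periods $t$ and $t+1$ is commonly written as the geometric mean of two distance-function ratios (a standard textbook form; the exact specification in the cited study may differ):

```latex
M_o\!\left(x^{t+1}, y^{t+1}, x^{t}, y^{t}\right)
  = \left[
      \frac{D_o^{t}\!\left(x^{t+1}, y^{t+1}\right)}{D_o^{t}\!\left(x^{t}, y^{t}\right)}
      \times
      \frac{D_o^{t+1}\!\left(x^{t+1}, y^{t+1}\right)}{D_o^{t+1}\!\left(x^{t}, y^{t}\right)}
    \right]^{1/2},
```

where $D_o^{s}(x, y)$ is the distance function evaluated against the period-$s$ frontier; values above one indicate productivity growth.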

Several studies employed multi-criteria decision making tools. Aragonés-Beltrán et al. (2017) applied the analytic network process (ANP) of Saaty (1996) to assess whether, and to what extent, TTO activities contribute to meeting the third-mission goals set by the university in its strategic plans. They implemented the ANP in the Universitat Politècnica de València (UPV) TTO and found that the TTO objective “facilitate the participation of the UPV in sponsored R&D and innovation (R&D&I) programmes” should be prioritized first, and that “support services to justify expenditure incurred in the development of subsidised R&D&I activities” obtained the highest weight among the TTO activities, meaning that this activity should be prioritized first. Stankevičienė et al. (2017) used the factor relationship (FARE) model to identify the relationships among the criteria that influence TTO performance and the weights of each criterion. After implementing FARE, the TOPSIS method of Hwang and Yoon (1981) was utilized to rank the seven biggest Lithuanian universities to find out how much, based on the chosen criteria, universities were influenced in their value creation process through TTO performance assessment.

Several scholars proposed other quantitative methods to evaluate the performance of the collaboration. Kireyeva et al. (2020) derived 11 success factors of scientific project implementation in TTOs, including finance, infrastructure, human resources, and communication systems, among others. Venturini and Verbano (2017) identified the resources used in the various stages of development of an ASO, i.e., opportunity recognition, entrepreneurial commitment, threshold of credibility, and threshold of sustainability (Vohora et al., 2004), as well as indicators for the performance evaluation, i.e., revenue, number of employees, number of patents, quality certification, R&D investments, total liabilities and equity, prizes and awards, and the return on equity index. Cartalos et al. (2018) derived three dimensions, each consisting of several assessment criteria. The dimensions are technology-innovation, market opportunities, and exploitation team. Each criterion in these dimensions is scored on a 4-level Likert-type scale (1: to a low/very low extent; 2: to some extent; 3: to a considerable extent; 4: to a high/very high extent), except for the criterion of technology maturity (of the technology-innovation dimension), which is marked using the technology readiness level scale (EARTO, 2014).

Apart from the quantitative approaches, although less popular, Resende et al. (2013) used a qualitative analysis tool, named best transfer practices, that can be utilized to improve the efficiency and effectiveness of a TTO. They used interviews, participative observation, document analysis, and a survey to investigate the performance of the TTO and where improvements can be made.

Iazzolino et al. (2020) proposed the spin-off lean acceleration method, which combines quantitative and qualitative aspects to assess ASOs. They argued that most traditional assessment methods are designed for firms which operate in an organized manner, whereas start-ups are far from organized. In particular, the features of ASOs are more multifaceted than those of a common start-up, including the stakeholders involved, barriers and drivers, and key success factors (Hossinger et al., 2020; Mathisen and Rasmussen, 2019). The starting point is that, before measuring performance quantitatively, the critical task for an ASO is to realize as quickly as possible whether the key risk areas are sufficiently controlled, which can be identified through qualitative analysis.

Possible research directions

Efficiency and effectiveness are two common terms widely used in assessing the performance of organisations (Cornali, 2012; Mouzas, 2006), and we therefore believe these terms are highly relevant to this study. One of the challenges in managing organisations is to balance efficiency with effectiveness. Many organisations are unsuccessful in doing this; instead, they deal with efficiency while neglecting effectiveness. Leaders of organisations often look for efficiency indicators, such as cost reduction, minimization of resource usage, and improvement of operational margins; however, those indicators are not measures of effectiveness (Ambler, 2003). The tendency to pursue efficiency may be attributed to the fact that business is more amenable to efficiency gains than to effectiveness improvements (Moran and Ghoshal, 1999). We confirm this argument with the fact that none of the performance measurement methods observed in this review was related to effectiveness (or impact) evaluation.

Although tightly related, these concepts are distinct from each other. The Britannica Dictionary defines effective as “producing a result that is wanted” and efficient as “capable of producing desired results without wasting resources”. Efficiency is quantitatively determined by the ratio of output(s) to total input(s) (Cooper et al., 2006). In the literature, evaluating effectiveness is often referred to as impact evaluation, which can be used to answer a specific question: “What is the impact of a program on an outcome of interest?” (Gertler et al., 2016); see Figure 5. The focus is only on the impact (not the resources), that is, the changes directly attributable to a program. This section is divided into two subsections, each discussing possible research directions related to the efficiency and effectiveness (or impact) concepts.


Figure 5. Efficiency versus effectiveness.

Source: Ulkhaq (2022).

Efficiency measurement

Even though there is abundant research in the field of efficiency measurement, efficiency is largely measured by frontier methods: parametric approaches, e.g., SFA, and non-parametric approaches, e.g., DEA. Despite the benefits shown previously, research applying SFA in this domain remains a minority compared to its non-parametric counterpart, DEA. It seems fruitful to undertake more research on measuring the efficiency of the collaboration using SFA. In addition, in some cases the aim of a study is not only to identify the level of inefficiency but also the factors that explain it, called the determinants of inefficiency. DEA cannot handle this issue, yet SFA can easily incorporate the determinants of inefficiency into the so-called heteroscedastic model. Moreover, SFA can model both cross-sectional and panel data, unlike DEA, which can only be applied to cross-sectional data. In this review, only two studies used SFA to assess efficiency: Bertoletti and Johnes (2021), who used cross-sectional data, and Lee and Jung (2021), who used panel data. Compared to cross-sectional data, the benefit of using panel (or longitudinal) data is that more information on inefficiency (as well as changes in efficiency) can be extracted, whereas cross-sectional data can only give a static picture of inefficiency.

Panel data also enable researchers to account for heterogeneity that may exist and to observe whether inefficiency is persistent over time (time-invariant) or time-varying. Persistent inefficiency is defined as a long-term or structural inability of an institution (in this case, the type of collaboration) to achieve the desired output. Time-varying inefficiency, on the other hand, refers to a short-run shortfall that can be removed swiftly without a huge structural change. Distinguishing between persistent and time-varying inefficiency is vital since they may have different policy implications (Lai and Kumbhakar, 2018). The most recent SFA model is the “four-component model”, proposed by Colombi et al. (2011), Kumbhakar et al. (2014), and Tsionas and Kumbhakar (2014). The model separates producer effects, random noise, and persistent and time-varying inefficiency as follows:

$$y_{it} = \alpha_0 + f(x_{it}; \beta) + \tau_i - \eta_i + v_{it} - u_{it}, \tag{1}$$

where $y_{it}$ is the output of producer $i$ ($i = 1, 2, \ldots, N$) at time $t$ ($t = 1, 2, \ldots, T$), $\alpha_0$ is an intercept, $f(x_{it}; \beta)$ is a function of the production technology, $x_{it}$ is a vector of inputs, $\beta$ is the corresponding vector of parameters to be estimated, $\tau_i$ is a random producer effect that portrays unobserved heterogeneity, $\eta_i$ is the non-negative persistent inefficiency, $v_{it}$ is random two-sided statistical noise, and $u_{it}$ is the non-negative time-varying inefficiency. Scholars have proposed several methods to estimate Equation (1); see Colombi et al. (2011), Kumbhakar et al. (2014), and Tsionas and Kumbhakar (2014).
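To make the composition of Equation (1) concrete, the following minimal simulation draws each of the four components separately (a linear technology and half-normal inefficiency terms are assumed purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)
N, T = 50, 10                      # producers and time periods
alpha0, beta = 1.0, 0.6            # assumed technology parameters

x = rng.normal(2.0, 0.5, (N, T))                 # log input
tau = rng.normal(0, 0.2, (N, 1))                 # producer effects (heterogeneity)
eta = np.abs(rng.normal(0, 0.3, (N, 1)))         # persistent inefficiency, half-normal
v = rng.normal(0, 0.1, (N, T))                   # two-sided statistical noise
u = np.abs(rng.normal(0, 0.2, (N, T)))           # time-varying inefficiency, half-normal

# Equation (1): y_it = a0 + f(x; b) + tau_i - eta_i + v_it - u_it
y = alpha0 + beta * x + tau - eta + v - u

print("mean persistent inefficiency: ", eta.mean().round(3))
print("mean time-varying inefficiency:", u.mean().round(3))
```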

Not a single article in this review used this model. Therefore, we suggest conducting research that measures the performance of UIC using the four-component SFA model. The determinants of inefficiency can then be investigated by applying the heteroscedastic model. Caudill and Ford (1993), Caudill et al. (1995), and Hadri (1999) proposed that the heteroscedasticity can be parameterized by a vector of observable variables and corresponding parameters. If $u_{it}$ is assumed to follow a half-normal distribution, then $\sigma_u^2$, i.e., the variance of $u_{it}$, is the (only) quantity to be parameterized. Further, the exponential function is used to ensure positivity. Therefore, the parameterization is as follows:

$$\sigma_u^2 = \exp\!\left(z_u^\top w_u\right), \tag{2}$$

where $z_u$ is a vector of the determinants of inefficiency and $w_u$ is the corresponding coefficient vector. The expected value of $u_{it}$ is now a function of $\sigma_u^2$:

$$E[u_{it}] = \sqrt{2/\pi}\,\sigma_u = \sqrt{2/\pi}\,\exp\!\left(\tfrac{1}{2} z_u^\top w_u\right). \tag{3}$$

How to estimate the marginal effect of $z_u$ on $E[u_{it}]$ under the half-normal assumption on $u_{it}$ can be seen in Kumbhakar et al. (2015).
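As a worked step, differentiating Equation (3) directly (our own derivation, not taken from the reviewed articles) gives the marginal effect of the $k$-th determinant:

```latex
\frac{\partial E[u_{it}]}{\partial z_{u,k}}
  = \sqrt{2/\pi}\,\exp\!\Big(\tfrac{1}{2} z_u^\top w_u\Big)\cdot \tfrac{1}{2}\, w_{u,k}
  = \tfrac{1}{2}\, w_{u,k}\, E[u_{it}],
```

so the sign of the coefficient $w_{u,k}$ directly gives the direction of the effect of that determinant on expected inefficiency.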

Impact evaluation

Impact evaluation seeks to evaluate the effect of a program on an outcome (Imbens and Rubin, 2008; Rubin, 1974). Mathematically, it can be written as:

$$\Delta = (Y \mid P = 1) - (Y \mid P = 0), \tag{4}$$
where $\Delta$ is the impact, $(Y \mid P = 1)$ is the outcome when a program ($P$) is present, and $(Y \mid P = 0)$ is the outcome when the program is absent. In the following, we give a simple example to demonstrate how impact evaluation might be performed in the context of measuring the performance of UIC. Suppose we want to evaluate the impact ($\Delta$) of a vocational training program ($P$) on the income of a TTO ($Y$). To evaluate the impact, we would need income data for two identical TTOs: the income of a TTO whose employees had participated in the vocational training program, $(Y \mid P = 1)$, and the income of the same TTO whose employees had not, $(Y \mid P = 0)$. The impact of $P$ on $Y$ is then the difference between the two, following Equation (4).

However, such an evaluation is unmanageable, since it is impossible to evaluate an identical unit in two different states at the same time. This is called the “counterfactual” problem. The key to addressing this problem is to shift from the individual level to the group level. From a statistical point of view, if the number of individuals in a group is large enough, the individuals are statistically undifferentiated from each other at the group level. To accommodate this, we form two groups: (i) the group that partakes in the program, known as the treatment group, and (ii) the control group, which does not participate in the program.

One of the challenges in conducting this research is to identify a control group and a treatment group that are statistically similar, on average, in the absence of the program. In the vocational training example, we have to find a group of TTOs that did not conduct the training (the control group) and another group of TTOs that did (the treatment group). We can then compare the two groups’ incomes to evaluate whether the training effectively affects income, as sketched below. This simple example can be expanded to real applications in the UIC context. Conducting a particular program in the UIC context and evaluating it to observe whether the outcome benefits from it is a very promising branch of this research domain. Readers are encouraged to see, for instance, Gertler et al. (2016), which discusses impact evaluation in more detail.
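A minimal sketch of such a group-level comparison is given below (the income figures and group sizes are fabricated purely for illustration; a real evaluation would also need to establish comparability of the groups, e.g., through randomization or matching):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# hypothetical annual licensing income (say, in thousand USD) for two
# statistically similar groups of TTOs; all numbers are illustrative
control = rng.normal(500, 80, 40)            # TTOs without the training program
treatment = rng.normal(540, 80, 40)          # TTOs whose staff took the training

delta = treatment.mean() - control.mean()    # Equation (4) at the group level
t_stat, p_value = stats.ttest_ind(treatment, control, equal_var=False)

print(f"estimated impact: {delta:.1f}")
print(f"Welch t-test: t = {t_stat:.2f}, p = {p_value:.3f}")
```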

Conclusions

This study has systematically reviewed literature (from the Scopus and Web of Science databases) on the performance measurement of UIC in the technology transfer process. It represents a unique opportunity to contribute to the literature by mapping articles systematically in this research domain. Through the PRISMA framework, the review collected 24 articles published in 16 different journals, which were thoroughly analyzed. The collected articles were categorized according to two classifications. The first was the type of collaboration, in which authors might conduct their studies at TTO, ASO, or joint-research level, while the second was the methods used for assessing the performance.

The literature review leads us to the following conclusions. First, most studies were conducted at the TTO level, which was anticipated since the TTO is the most common form of UIC. Second, DEA is still preferred over SFA for measuring efficiency. Although DEA has several benefits over SFA (see the previous section), it cannot identify the influence of inputs on outputs, does not distinguish inefficiency from statistical noise, and cannot analyze panel data. Therefore, this study explored the use of SFA to measure efficiency more deeply as one possible research direction. We suggest conducting research on measuring the performance of UIC by applying the four-component SFA model, the most recent SFA model. Another research direction lies in the field of impact evaluation. Since none of the articles in this review conducted impact evaluation research, we offered a simple example of how to conduct such an evaluation in the UIC context, which might be beneficial for future research.

This study also has several implications. It contributes to the research domain by mapping the types of collaboration in UIC and the methods used to measure UIC performance. As previously mentioned, it provides possible research directions that can help researchers find gaps in the literature. Another theoretical implication comes from the methodology used, as this study presents a robust and structured methodology using the PRISMA workflow in the area of UIC research.

One limitation of this study is the use of only the Scopus and Web of Science databases; we cannot claim to have covered all published articles in this research domain, since other databases, such as Google Scholar and EBSCO, could also be used. Another limitation is the inclusion of only journal articles as document types in the search; it is possible that other types of documents contain useful information. Finally, the search was guided by a set of keywords that gave us a certain level of confidence that we have synthesized an extensive knowledge base on this research domain; nevertheless, it is possible that relevant articles did not include the keywords used in this study in their title, abstract, or keywords.

Data availability

Underlying data

Figshare: Dataset for “Performance Measurement of University-Industry Collaboration in the Technology Transfer Process A Systematic Literature Review”, https://doi.org/10.6084/m9.figshare.19731553.v4

This project contains the following underlying data:

  • PRISMA_2020_checklist

  • Framework Prisma

  • Dataset Scopus

  • Dataset WoS

Reporting guidelines

Figshare: PRISMA_2020_checklist for “Performance Measurement of University-Industry Collaboration in the Technology Transfer Process A Systematic Literature Review”, https://doi.org/10.6084/m9.figshare.19731553.v4

Data are available under the terms of the Creative Commons Zero “No rights reserved” data waiver (CC0 1.0 Public domain dedication).
