Systematic Review

The Interview Trap: A systematic review on factors affecting the validity of Employment Interviews

[version 1; peer review: awaiting peer review]
PUBLISHED 09 Dec 2025

Abstract

This article examines factors that compromise the validity and reliability of employment interview selection processes, with a focus on the influence of interviewer bias, format, and decision-making processes. The purpose of the present Systematic Review (SR) is to determine how different types and formats of interview modalities affect the selection process, consolidate empirical findings regarding the influence of the most widespread interviewer biases, and identify bias mitigation strategies that can enhance the interview selection process. The methodology used PECO and PRISMA criteria to identify keywords, define search strings, and select 19 articles for full review, published between 2015 and 2025 in peer-reviewed journals. This time frame captures the most recent developments and relevant research in an under-researched area: the role of bias and interview format in employment selection processes. The results emphasize the importance of pre- and post-interview reflection, interviewer preparedness, and gathering applicant feedback to refine selection decisions. In conclusion, the interview process can be improved by offering interviewer refresher training, utilizing recorded interviews to minimize the effects of selective memory, and implementing bias-mitigation strategies, such as job analysis and interviewer self-awareness, to enhance the validity and reliability of employment selection interviews.

Keywords

Biases, Rapport, Validity, Reliability, Interview formats

Introduction

Interviewing candidates is not an exact science; it requires time and training, and involves more than simply evaluating a candidate’s hard and soft skills. The reviewed literature indicates that selecting the correct interview method and having an experienced interviewer are crucial for fully understanding the candidate’s potential for long-term adaptation to the team and company culture.

Job interviews are one of the most widely used and efficient tools for selecting the best candidate for a job.13 “An interview is a method of asking questions to gain qualitative and quantitative data”.4 At its core, the interview is used to gain insight into an applicant’s experiences, skills, motivations, and knowledge. A job interview can be stressful, nerve-wracking, frustrating, and exciting. Regardless of how many times a candidate has been invited to a face-to-face interview or, as is now the norm, an online interview, the interview should be an open, fair, and unbiased opportunity for the candidate to show their experience, passion, skills, and motivation for the job, and to build rapport while answering questions relevant to the position.

Building on this idea, the interview setting is not as simple as it looks. Several significant factors are involved in the interview process. There are two sides, the interviewer (or panel of interviewers) and the interviewee, and several methods of conducting the interview. The interview is short; the candidate does most of the talking, and even when the modality is face-to-face, there is not enough time to develop a close relationship. Part of the foundation is for the interviewer or panel to understand that the outcome of the interview selection entails expectations, beliefs, needs, intentions, and judgments that, in many ways, will influence the interaction between the parties involved and the final decision.5

Impressions and perceived similarities can influence a positive selection,6 and research suggests there is still an opportunity to examine issues with employee selection and the potential for negative behaviors.7 Structured interviews are designed to assess a wide range of job-related constructs, but validity can be affected unless the criteria for the interview are evaluated before the event occurs.8 Trained interviewers who are knowledgeable about professional standards and the legal implications of discrimination or bias become better equipped to combat biased thinking when provided with awareness training.2

While the interview remains an interaction, a social method of evaluating applicants in which data, words, and emotions are exchanged between an interviewer and an applicant, the goal is to offer a reliable, valid, and fair interview experience to both parties in order to assess the interviewee’s suitability for employment. Interviewers have the opportunity, time, and focus to leverage this two-way exchange to gather and corroborate additional sources of information rather than relying solely on the applicant’s interview package.9 There is still an ongoing debate about whether some interviewers or interview modalities are more effective than others.

This paper presents an overview of research in the field through a systematic review (SR) of factors such as interviewer biases and the impact of selecting the appropriate interview modality in the employment interview selection process. The paper is organized as follows. The introduction provides an overview of current studies on the most frequent biases that persist in the interview selection process, as well as the benefits of and differences between interview modalities. The methodology section explains the protocol applied and the process for extracting relevant data. The authors also provide a detailed description of the results of the data extraction process. The discussion section synthesizes the evidence to answer the research questions, addressing how the discussed factors can influence and put at risk the validity and reliability of the interview selection. Finally, the conclusion summarizes the main inferences drawn from the study.

The researchers conducted a literature search because, although several studies have been published since 2015, most meta-analyses have focused primarily on the validity and effectiveness of selection interviews. For instance, some studies evaluate the validity of specific interviews in relation to specific constructs.10 Another analysis corrects biases in previous reviews on the validity of interviews, confirming the usefulness of structured interviews, albeit with slightly lower validity than initially estimated.11 Furthermore, other research examines how prior work experience predicts performance and turnover, providing relevant evidence on job selection methods that include interviews.12

These studies did not explicitly analyze the intersection of different interviewer biases and interview modalities, nor did they offer recommendations for improving the quality, effectiveness, and experience of the interview selection process. The authors identify the need to combine various factors in the interview selection process.

Objectives

This SR will address two dimensions and aims to determine how the different types and formats of interview modalities can affect the selection process, consolidate empirical findings regarding the influence of the most widespread interviewer biases, and identify bias mitigation strategies that can add value to the interview selection process. By addressing these factors, our research paper aims to provide insights into how to improve the fairness and effectiveness of interviews, ultimately contributing to better hiring decisions.

We framed our research objectives into the following main research questions (RQs), each one aligned with one of the two core dimensions under review.

Accordingly, our first research question (RQ) focuses on the personal dimension by examining how interviewer biases influence decision-making and affect selection outcomes.

RQ1. To what extent do bias mitigation strategies (e.g., structured rubrics, interviewer training, building rapport, anonymized evaluations) improve the reliability and validity of selection decisions compared to traditional interview practices?

The researchers have organized this research question into the following sub-questions.

  • 1.1. What are the most widespread types of biases exhibited by interviewers during an interview selection process?

  • 1.2. How does interview training reduce the interviewer’s first impression bias?

  • 1.3. To what extent do standardized scoring systems and feedback mechanisms among interviewers (e.g., post-interview calibration sessions) improve interviewer ratings and interview reliability?

The second research question focuses on the technical dimension by analyzing the types and formats of interviews used in the selection processes.

RQ2. What is the effect of bias mitigation strategies (e.g., structured rubrics, interviewer training, rapport, anonymized evaluations) compared to traditional interview practices on perceived fairness and the reliability of hiring decisions?

The researchers have organized this research question into the following sub-questions.

  • 2.1. To what extent do structured interviews improve the predictive validity of hiring decisions compared to unstructured interviews?

  • 2.2. How does the selected modality (type or format of interview) compare in terms of validity and reliability?

  • 2.3. How does rapport-building enhance interview outcomes across various interview formats?

Methods and materials

This review was conducted in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) 2020 guidelines to ensure a systematic review of the literature.13 Compliance with PRISMA promotes transparency, clarity, and replicability in systematic reviews.

Eligibility criteria

Studies were selected according to the criteria outlined below:

Study design: This research employed the PECO model because the study is rooted in social science, not clinical intervention. The focus lies in examining exposure factors, specifically how different types of interviewer bias and interview modalities (e.g., in-person, video, or phone) are associated with variations in the interview selection process, rather than testing a specific intervention or treatment. Therefore, the PECO structure (Population, Exposure, Comparison, and Outcome) provides a more appropriate and flexible framework for guiding this systematic review. The PECO criteria were used to identify keywords and define search strings based on the research questions.

P (Population): Institutions (schools, hospitals, government) involved in employment selection.

E (Exposure): Interview modalities and interviewer bias types.

C (Comparison): Between interview formats or bias mitigation strategies.

O (Outcomes): Presence or reduction of bias, perceived fairness, validity, and reliability of interview outcomes.

Timing: Studies published between 2015 and 2025 were selected for inclusion, a window chosen to capture research from the years before and after COVID-19.

Setting: The search was restricted by document type to academic articles published during this period.

Language: To ensure consistency, clarity, and interpretation and minimize translation errors, this review included only peer-reviewed articles published in English. Using a single language enhances the accuracy of data analysis, particularly when examining complex constructs such as interviewer bias.

Information sources

To develop this systematic review, an exhaustive search was conducted in major scientific databases, including Web of Science and Scopus, to identify relevant and current studies on the topic. In addition, the search was supplemented with additional sources, including bibliographic references from selected articles of interest, to ensure broad and representative coverage of the available evidence. The search criteria included specific keywords: interview, employment, cognitive, biases, and stereotypes.

Search strategy

The structured search strategy responds to the questions raised in the investigation.

Identification

The following structured search string was used in Scopus:

The search strategy incorporated the following specific keywords: interview, employment, cognitive, biases, and stereotypes. These terms were selected for their direct relevance to the research objective and were combined using Boolean operators (e.g., AND, OR) to optimize the retrieval of pertinent literature.

TITLE-ABS-KEY (interview) AND TITLE-ABS-KEY (employment) AND TITLE-ABS-KEY (cognitive) AND (TITLE-ABS-KEY (biases) OR TITLE-ABS-KEY (stereotypes)) AND PUBYEAR > 2015 AND PUBYEAR < 2025 AND (LIMIT-TO (DOCTYPE, “ar”)) AND (LIMIT-TO (SUBJAREA, “PSYC”) OR LIMIT-TO (SUBJAREA, “SOCI”) OR LIMIT-TO (SUBJAREA, “ECON”) OR LIMIT-TO (SUBJAREA, “BUSI”)) AND (LIMIT-TO (LANGUAGE, “English”))

The following structured search string was used in Web of Science (WoS):

Interview (Topic) and employment (Topic) and cognitive biases (Topic) or stereotypes (Topic) and 2024 or 2023 or 2022 or 2021 or 2020 or 2019 or 2018 or 2017 or 2016 or 2015 (Publication Years) and Article (Document Types) and English (Languages).

The search criteria included the specific keywords within the specified range of years, articles in the English language, and peer-reviewed publications. In addition, the search was supplemented with further sources, including bibliographic references from selected articles of interest, to ensure broad and representative coverage of the available evidence, which enabled us to identify 20 additional studies. The inclusion of these articles was particularly relevant to our study, as the researchers sought a more comprehensive view of the factors selected for this review; it also captured studies that are not always covered or indexed by traditional databases. Table 1 lists the number of records identified in each case.

Table 1. Records obtained.

Criteria | Filters | Scopus | WoS | Supplementary articles
Restriction | Topic (title, abstract, author keywords) | 18 | 12.201 | 515
Period | 2015-2025 | 16 | 55.423 | 292
Subject area | Economics, Business, Psychology, and Social Issues | 16 | 19.049 | 20
Document type | Articles, books, chapters, and conference proceedings | 16 | 39.651 | 20
Language | English | 14 | 18.215 | 20
Interest | Most Cited (only WoS) | | 183 |
Total | | | | 217

Data management

The software used to manage the data and analyze the information in the articles was Mendeley together with Microsoft Excel. The researchers used Mendeley to manage the articles retrieved from the scientific databases, eliminate duplicate references, and classify the information from each article, highlighting it in a different color according to its category.

Excel was used to document and manage the data produced by following the PRISMA protocol. The workbook is made up of several tabs, each documenting one phase.

Selection process

This subsection comprises screening (inclusion/exclusion criteria), eligibility (inclusion/exclusion criteria), inclusion (quality assessment criteria), and the review and mapping protocol.

Screening: Duplicate references were eliminated using the Mendeley software. After eliminating duplicates, we performed an initial screening based on title, keywords, and abstract to eliminate irrelevant studies. In this step, 122 records were selected and 72 were excluded.

Eligibility: First, we included articles whose title contained the phrase “factors that compromise interview selection” (coded as 1 if present, 0 if absent). Second, we included articles whose abstract contained the phrase “biases in job interview selection” (also coded as 1/0). Third, we examined the full text to determine whether it addressed both the technical dimensions of the interview and the personal dimensions of the interviewers.

When the title and abstract alone did not provide enough information, we reviewed the entire content of the paper. To operationalize this decision, we applied the following logical function:

=IF(AND(TITLE=1; ABSTRACT=1; COUNTIF(ABSTRACT:ABSTRACT; 1) >= 1); "candidate article"; "no")

In other words, we considered an article a candidate if the specified keywords appeared in both the title and abstract, and if the abstract contained at least one occurrence of relevant content. Table 2 presents the detailed application of this logic.

Table 2. Number of selected papers.

Criteria | Papers
Articles selected | 122
Excluded articles | 72
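The screening rule described above can be sketched in Python (a minimal illustration of the spreadsheet logic; the function and argument names are hypothetical):

```python
def is_candidate(title_flag: int, abstract_flag: int, abstract_hits: int) -> str:
    """Mirror the spreadsheet rule: an article is a candidate only if the
    target phrase appears in the title (flag = 1), appears in the abstract
    (flag = 1), and the abstract contains at least one relevant occurrence."""
    if title_flag == 1 and abstract_flag == 1 and abstract_hits >= 1:
        return "candidate article"
    return "no"

# Phrase found in both title and abstract, with one relevant hit:
print(is_candidate(1, 1, 1))  # candidate article
# Phrase missing from the abstract, so the article is rejected:
print(is_candidate(1, 0, 3))  # no
```

The conjunction makes the rule conservative: a match in the title alone, or in the abstract alone, is never sufficient for an article to advance.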

Inclusion: To minimize study bias and maximize internal and external validity, the authors utilized the following categories.

Study design: Articles that demonstrate the relation between interview biases and interview format and type.

  • Have all research questions been addressed adequately?

  • Articles that address both the technical dimensions of the employment interview and the personal dimensions of the interviewers within the selection process.

System design: Articles that show both the technical dimension of the interview and the personal dimension of the interviewers in an employment interview selection process.

  • Is a structured interview selection format more accurate than an unstructured interview format?

  • Which interview characteristics should be present to avoid an incorrect interview decision?

  • Which types of interview bias are most salient during an interview, and does the type and format of the interview methodology used make a difference?

Quality assessment checklist: Table 3 shows the quality assessment score assigned to each question according to the level of detail provided regarding interview selection, biases, and the personal and technical dimensions of the interview.

Table 3. Quality assessment checklist.

Level | Description | Score
Yes | Information is explicitly defined/evaluated | 1
Partially | Information is implicit/stated | 0.5
No | Information is not inferable | 0

We included papers and classified them as “full-reading articles” in the subsequent stages if their total score was equal to or greater than four points. Table 4 presents the results.

Table 4. Full reading papers included.

Criteria | Papers
Full reading papers | 19
Excluded articles | 31
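The scoring rule behind Tables 3 and 4 can be expressed as a short Python sketch (illustrative only; the answer labels follow Table 3, and the four-point inclusion threshold follows the text above):

```python
# Per-question scores from the quality assessment checklist (Table 3).
SCORES = {"yes": 1.0, "partially": 0.5, "no": 0.0}

def total_score(answers):
    """Sum the score of each quality-assessment answer for one article."""
    return sum(SCORES[a.lower()] for a in answers)

def include_for_full_reading(answers, threshold=4.0):
    """An article becomes a 'full-reading article' when its total score
    is equal to or greater than four points."""
    return total_score(answers) >= threshold

# Four 'yes' answers and one 'partially' give 4.5 points, so the paper is kept:
print(include_for_full_reading(["yes", "yes", "partially", "yes", "yes"]))  # True
```

Because "partially" earns half a point, an article can reach the threshold through a mix of explicit and implicit coverage of the assessed topics.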

Review and Mapping Protocol: Figure 1 presents the PRISMA protocol phases and the evolution of the number of records in each one of them.


Figure 1. PRISMA protocol.

Summary of the PRISMA protocol stages used for the identification, screening, eligibility, and inclusion of articles.

The analysis and classification of the articles followed a bottom-up approach, carried out in three stages.

  • 1. Analysis: We highlighted text fragments that answered the questions in the Objective Section using different colors in Mendeley. This step supported detailed reading, deeper analysis, and classification.

  • 2. Classification: We defined label codes to assign representative meaning to the highlighted information.

  • 3. Extraction: We classified each text fragment highlighted in stage 1 according to the codes from stage 2. To manage the resulting information, we organized the data in a spreadsheet.

Data items

As part of the data extraction and synthesis process, Figure 2 provides the acronyms employed to categorize the information.


Figure 2. Acronyms to classify information.

Acronyms identified in the review and used to categorize the data, along with their corresponding frequencies.

To complement these results, Table 5 outlines the codes associated with each research question, along with the studies in which these codes were identified.

Table 5. Codes considered for each of the research questions.

Source | Data items: Acronyms to classify information
Adeoye-Olatunde, O. A., & Olenik, N. L. (2021). | Semi-Structured Interviews (SSTI), Video Recording (VR), Interviewer Training (IT)
Blackman, M. (2017). | Interview Selection (IS), RAM Model (RAMM)
Bergelson, I., Tracy, C., & Takacs, E. (2022). | Halo Bias (HLB), Horn Bias (HRB), Affinity Bias (AB), Confirmation Bias (CFTB), Confirmatory Biases (CTYB), First Impression (FI), Interview Selection (IS), Blinded Interviews (BI), Interview Stereotypes (IST), Interviewer Training (IT)
Buijsrogge, A., Duyck, W., & Derous, E. (2021). | First Impressions (FI), Implicit Bias (IB), Physical Appearance (PA), Rapport (RAP)
Dipboye, R. L. (2017). | Fairness (FNSS), Dual Process Approach (DPA), Prejudice: Physical Appearance (PA), Confirmatory Bias (CB), Structured Interviews (STI), Interviewer Rating Scores (IRS)
Florea et al. (2019). | First Impressions (FI), Interview Panel Type (IPT), Confirmation Bias (CB)
Frieder, R. E., Van Iddekinge, C. H., & Raymark, P. H. (2016). | Rapport (RAP), Cognitive Load Theory (CLT), Interviewer Rating Scores (IRS), Interview Structure Standardization (ISS), Interviewer Training (IT)
Hickman, L., Tay, L., & Woo, S. E. (2024). | Automated Video Interviews (AVI)
Jager et al. (2020). | Recall Bias (RB), Interviewer Bias (INTB)
Kausel, E. E., Culbertson, S. S., & Madrid, H. P. (2016). | Relational Demographic Theory (RDT), Overconfidence Bias (OCB)
Li, Y., & Wei, X. (2024). | Stigma Stereotypes (SS), Employment Bias (EB), First Impressions (FI), Interview Format (IF): Face-to-face
Morris, S. B., Daisley, R. L., Wheeler, M., & Boyer, P. (2015). | Structured Interviews (STI), Interviewer Training (IT), Interview Selection (IS), Single Interviewer (SI), Interview Panel Type (IPT), Validity and Reliability (V&R)
Nørskov et al. (2022). | First Impressions (FI), Implicit Bias (IB), Structured Interviews (STI), Interview Panel (IP), Fairness (FNSS), Automated Video Interviews (AVI)
Otugo, O., et al. (2021). | Implicit Bias (IB), Explicit Bias (EB), Virtual Interview Type (VIT)
Taherdoost, H. (2022). | Structured Interviews (STI), Non-Structured Interviews (NSTI), Interviewer Characteristics (IC), Interviewer Training (IT)
Vanderpal, G., & Brazie, R. (2022). | Interview Selection (IS), RAM Model (RAMM)
Wiersma, U. J. (2016, December). | First Impressions (FI), Mental Bias (MB), Halo Bias (HLB), Structured Interviews (STI), Non-Structured Interviews (NSTI)
Wingate, T. G., Rasheed, S., Risavy, S. D., & Robie, C. (2024). | Fairness (FNSS), Prejudice: Physical Appearance (PA), Salience Bias (SB), Rapport (RAP), Interviewer Training (IT), Interview Expectations (IE), Automated Video Interviews (AVI)
Woods et al. (2020). | First Impressions (FI), Video Interviews (VI), Interviewee Selection Expectation (ISE)

Interpretation of findings

This section addresses the research questions and the mapped sub-questions following the completion of the information analysis process. Table 6 presents definitions of biases stated in the literature that apply to the factors that can compromise an employment interview selection process.

Table 6. Definitions related to interview biases.

Paper | Definition
[Nørskov et al., 2022] | Implicit bias (IB) involves the unconscious, rapid, and automatic processing of information and can be in direct contradiction to consciously held values and beliefs of individuals.
[Bergelson et al., 2022] | Halo bias (HLB): Taking someone’s positive characteristic and ignoring any other information that may contradict this positive perception. Horn bias (HRB): Taking someone’s negative characteristic and ignoring any other information that may contradict this negative perception. Affinity bias (AB): Increased affinity with those who have shared experiences, such as hometown or education. Conformity bias (CTYB) occurs when the view of the majority can push one individual to also feel similarly about a candidate, regardless of whether this reflects their true feelings; it can occur when there are multiple interviewers on one panel.
[Florea et al., 2022] | Confirmation bias (CFTB), a phenomenon in which people seek out and overweight information consistent with their current beliefs.
[Kausel et al., 2016] | Overconfidence bias (OCB) is operationally defined as the subtraction of objective accuracy from subjective confidence. Defined as an unwarranted belief in the correctness of one’s answers.
[Florea et al., 2022] | First impression bias (FIB) causes a decision maker, assessing the outcomes of some process, to place undue weight on early experiences that contribute to an initial impression.
[Jager et al., 2020] | Selection bias (SB) implies that the relationship between exposure and outcome may differ in those who participate in the interview and those who do not. Recall bias (RB) is caused by differences in accuracy or completeness of recall to memory of past events or experiences. Interviewer bias (IB) has been defined as the systematic error due to interviewer’s (sub)conscious gathering of selective data, or their influencing of subject response.
[Li & Wei, 2024] | Employment bias (EB) refers to the unfair disadvantages or negative judgments faced by individuals who do not adhere to traditional gender expectations during employment decisions.
[Woods et al., 2020] | Explicit biases are conscious beliefs, attitudes, or prejudices that individuals are aware of and can deliberately report.
[Wingate et al., 2024] | Salience bias (SLB) denotes that such information tends to be “overprocessed” and weighted more strongly than information that evokes less attention.

The most widespread types of biases exhibited by interviewers during the interview selection process

This conceptual network illustrates the interrelationships between concepts in the field of interviews, biases, perceptions, and human interactions, as described in the reviewed articles. Each vertex (node) in Figure 3 represents a dimension, and each edge (the line connecting two nodes) indicates that the two dimensions co-occur in the same articles found in the SR.

The nodes correspond to the full range of evaluated dimensions, such as First Impressions, Rapport, Fairness (FNSS), and Physical Appearance. The research identifies and discusses these dimensions as key concepts that appear throughout the analyzed articles.

Edges are lines that connect two nodes and indicate the relationship between the analyzed dimensions. If two dimensions share an edge, it indicates that both coexist or relate to each other within the same article. For example, “First Impressions” (FI) has multiple connections with other dimensions such as “Rapport” (RAP), “Physical Appearance” (PA), and “Confirmatory Bias” (CB).

The connections indicate that some dimensions are more closely connected (have more edges), suggesting that they are mentioned together in more articles or are more closely related. Central to our understanding of the topics explored in these articles is the concept of first impressions (FI), which exhibits widespread interconnectedness and demonstrates its significant relevance across multiple research dimensions.

Dimensions such as the Halo Effect (HE), Overconfidence Bias (OCB), and Similarity Bias (SB) demonstrate reduced interconnectedness compared to other dimensions.

Regarding centrality, “First Impressions” (FI) is the most central dimension in the network, meaning it is likely the most recurrent concept in the articles, as it is connected to many other dimensions, demonstrating how the most connected dimensions (such as “FI”) tend to be closer to the center of the graph. This reflects the density of connections this dimension has with other dimensions.

In contrast, other dimensions, such as Mental Bias (MB) and the Diversity-Validity Dilemma (DVD), hold a more peripheral position, reflecting their less recurrent or relevant nature compared to First Impressions.

The most detached dimensions (less connected, such as the “Diversity-Validity Dilemma”) are located further from the center, reflecting their lower relevance or frequency in the articles.
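The co-occurrence logic behind this network can be sketched in plain Python: two dimensions are linked whenever they appear in the same article, and a node's degree (its number of distinct neighbours) serves as a simple centrality proxy. The article-to-dimension mapping below is illustrative only, not the review's actual extraction:

```python
from collections import defaultdict
from itertools import combinations

# Toy mapping of articles to the dimensions they mention (illustrative data).
articles = [
    {"FI", "RAP", "PA"},
    {"FI", "CB"},
    {"FI", "RAP", "IT"},
    {"MB", "DVD"},
]

# Build the undirected co-occurrence graph: an edge joins two dimensions
# that coexist in at least one article.
neighbours = defaultdict(set)
for dims in articles:
    for a, b in combinations(sorted(dims), 2):
        neighbours[a].add(b)
        neighbours[b].add(a)

# Degree centrality proxy: the number of distinct co-occurring dimensions.
degree = {node: len(adj) for node, adj in neighbours.items()}
most_central = max(degree, key=degree.get)
print(most_central, degree[most_central])  # FI 4
```

Here “FI” ends up most central because it co-occurs with the largest number of other dimensions, mirroring the pattern the review reports for first impressions, while poorly connected nodes such as “MB” and “DVD” sit at the periphery.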

The main findings in Figure 3 show that:


Figure 3. Key concepts interrelated with first impressions.

Key concepts interrelated with first impressions across the reviewed literature.

“First Impressions” (FI) is the most frequently mentioned acronym and has the highest number of connections, which suggests that it is a central concept in the reviewed articles.

Other dimensions, such as “Rapport” (RAP), “Confirmatory Bias” (CB), and “Physical Appearance” (PA), are also frequently mentioned in the articles, albeit to a lesser extent than “FI.” “Mental Bias” (MB) and “Diversity-Validity Dilemma” (DVD), among others, have fewer connections, implying that they are less frequent or less relevant compared to the more central acronyms.

Results

This SR aims to determine which factors can compromise the validity and reliability of a job selection interview, from understanding the different types and formats of interviews to the impact of interviewer biases, the selection process, and the interview expectations.

To this end, for the proposed SR, we translated our research goal into the following main research questions (RQs). The review will address two dimensions: a technical dimension (interview) in which interview types and formats will be discussed, and a personal dimension (interviewer) in which several of the most widespread biases and outcomes will be addressed.

Our first research question (RQ) focuses on the personal dimension by exploring how interviewer biases shape decision-making and influence selection outcomes.

RQ1. To what extent do bias mitigation strategies (e.g., structured rubrics, interviewer training, building rapport, anonymized evaluations) improve the reliability and validity of selection decisions compared to traditional interview practices?

To deepen this inquiry, we have structured the following sub-questions.

1.1. What are the most widespread types of biases exhibited by interviewers during an interview selection process?

There are multiple methods for assessing personnel selection efficiency, including cognitive tests, personality assessments, and the interview format.14 Although employment interviews remain a central component of selection processes, they often lack objectivity due to implicit biases rooted in rapid, unconscious processing, making them difficult to detect and correct.6 It does not take long to introduce a bias after meeting someone: empirical evidence shows that exposure as brief as a tenth of a second is sufficient for people to infer a specific trait from facial appearance.15,17 Interviewers can develop biases against applicants from several sources, including resumes, social media, feedback from current or former leaders (for internal candidates), and the common habit of reviewing applicant materials just before the interview.8

The related studies suggest that most interviewers exhibit elements of implicit bias, although it is not the only bias discussed in the literature. Implicit memory, which cognitive psychologists have researched closely, is an unconscious form of memory. “Implicit biases involve the unconscious, rapid, and automatic processing of information and can be in direct contradiction to consciously held values and beliefs of individuals”.6 When this implicit behavior appears in the interview selection process, the particularly damaging content of stereotypes can hinder the accuracy of a decision. While reducing bias in interviews is never easy, interviewers can minimize it by applying an objective selection approach. Research findings indicate that both explicit and implicit biases exist at multiple points throughout the interview process.16 Moreover, these biases are harmful and inherently unfair. The literature on implicit bias supports the view that when individuals are aware of their stereotypes, this proactive mindset can affect their social judgment and behavior. The main distinction between implicit and explicit biases lies in the level of awareness and control.18 Such bias influences interviewers’ evaluations, undermines fairness, and often results in negative candidate experiences and reactions. To ensure a fairer hiring process, interviewers must first acknowledge that implicit bias affects their decisions, an issue that continues to create disparities in employment selection.6

There are different possible sources of first impression bias. One is confirmation bias, a phenomenon in which people tend to seek out and overweight information consistent with their current beliefs.15,19 Confirmatory bias is more likely to occur when interviewers are unaware of it; it can influence behavior without altering information processing, or affect information processing without changing behavior.5 Other types of biases mentioned in the SR are defined as follows:

“Selection bias implies that the relationship between exposure and outcome may differ between participants in the study and those who do not participate. Because researchers typically do not know this relationship in non-participants, they can usually only hypothesize about the potential for selection bias”.20

Interviewers introduce interviewer bias when they (sub)consciously gather selective data or influence the subject’s responses. “Differences in the accuracy or completeness of recalling past events or experiences cause recall bias”.20 Being overly confident can lead to risky decisions and have numerous consequences for their outcomes. Research reveals a notable gap in understanding how interviewers process candidate information and perform when making selection decisions, with overconfidence, “an unwarranted belief in the correctness of one’s answers”,20 emerging as a key factor linked to this issue.

1.2. How does interview training impact the reduction of interviewers’ first impression bias?

First impressions remain a talking point when forming biases during a job interview.21 The systematic review has shown that, generally, information received first tends to outweigh information received later. First impressions have a lasting impact on perceptions and future behavior, and they are a significant factor in employment selection. If left unchecked, this tendency can undermine a sound decision and cloud the accuracy of a candidate’s selection. Furthermore, reviewing or becoming familiar with information about applicants before the interview can lead interviewers to form initial impressions of them.9 First impression bias occurs when decision-makers place disproportionate emphasis on initial experiences or early information when forming judgments about a process or individual. An explicit form of first impression bias is confirmation bias: individuals look for information that is consistent with their current beliefs.15

The potential for first impressions to color an interviewer’s judgment is a key consideration in the hiring process.21 The purpose of an interview is to gather comprehensive details without prejudice; when this objective is missed, interviewers may selectively gather or interpret information, leading to biased decisions.22 When the initial impression is highly positive, subsequent evaluations often become excessively optimistic; conversely, a negative first impression can lead to overly critical assessments.15

Training materials are valuable, yet another inconsistency lies in the interviewer’s experience, training, and familiarity with the interview guidelines. Some interviewers simply interview better than others, or, as Wingate et al.9 explain, while training itself may introduce bias, some interviewers are more susceptible to bias because of the training they have received, their experience, or their capacity to remain focused on the accuracy of the selection process. There is a tension between asking questions that could lead the interviewer into trouble and knowing when to use probing questions to add value.19 A trained interviewer who can ask insightful questions; gather, interpret, and evaluate complex cues; understand job competencies; and remain focused on accuracy and fairness plays a significant role in the selection and validity of the interview process.4 An array of interviewer factors should therefore be considered, including experience, training, and efficacy, since a skilled interviewer’s judgment is a defining feature of accurate decision-making.14

Research on interview training suggests that interviewers with more preparation ask more questions, which can prolong the interview while encouraging them to engage and behave differently than those with less training.23 Frieder et al. suggested limiting the number of candidates per interviewer to a narrow range of around four. Specifically, decision time increased as interviewers progressed through the first few applicants; however, after about four applicants, decision time reached an asymptote and then decreased as interviewers evaluated additional applicants. They explain this through cognitive load theory: people have an unlimited capacity for long-term memory but a relatively limited capacity for working memory, which may hinder the efficiency of gathering and processing information and lead to a less comprehensive evaluation.23

Interviewer training and preparation are positive strategies that yield benefits in both structured and unstructured interviews.5 While training offers an approach to recognizing and addressing implicit bias in the interviewer, one downside is that its effect is short-lived and does not completely eliminate unfairness.6 Another point is the relevance of training interviewers in self-awareness and self-regulation (key areas of emotional intelligence), which may improve the selection process. Interviewer accountability is an additional safeguard for fairness in the selection process. One final recommendation is for less experienced interviewers to review other interviews before conducting their next one. Despite training designed to avoid biases during the interview process, interviewers often find ways to judge others. While training may not eliminate biases, it can create awareness and provide opportunities for additional resources, such as educational videos, role-play, and self-assessments, thereby preventing premature and unfair judgments.16

1.3. To what extent do standardized scoring systems and feedback mechanisms among interviewers improve the reliability and validity of selection decisions?

Interviewers should benchmark what the role entails in terms of long-term performance.24 Nevertheless, interviewers often go into interviews without understanding what they will measure or look for. There is an ongoing debate about the validity, consistency, standardization, and reliability of interviewers’ ratings of interview answers. A lack of standardization creates room for illicit and inappropriate questions, whereas standardized questions are selected in advance and asked of all applicants without bias.19 The reliability and validity of the interview process depend on several critical factors, one of which is the interviewer’s ratings. Research on interview selection reveals that these ratings frequently reflect interviewer bias and can be influenced within minutes of the interview’s commencement.24

There is a notable difference between judgment in unstructured versus structured interviews. In the latter, interviewers provide numerical ratings on several dimensions or questions. The interviewee’s answers are “scored” using anchored scales and a rubric, with the interviewer rating each separate dimension or competency. The goal is to follow a standard process and established guidelines, using the rubric and per-answer scoring to help interviewers identify the criteria that distinguish a high-performing answer from a low-performing one, and to reach a consensus when comparing the applicant pool for a final selection decision. This evaluation process needs to be standardized to avoid snap decisions; interviewers’ “snap” decisions have produced persistent gaps and implications for all parties involved in the selection process.23 Interviewers’ ratings and final scores should represent a holistic judgment based on the entire interview rather than on first impressions, judgments, and assumptions, which may result in an inaccurate decision.21 In reality, interviewers may fail to reach a consensus before they finalize the selection decision. Disagreements can be attributed to differences in competency, interviewing experience, training, ideologies, or how evaluators organize and interpret the same information. When different interviewers evaluate candidates, these idiosyncrasies introduce variability into the ratings that is unrelated to the assessed competencies, thereby reducing their validity.14
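The rubric-and-consensus logic described above can be sketched as a small illustrative example. All candidate names, competency dimensions, and ratings below are hypothetical; a real structured-interview system would use the organization’s own anchored scales derived from a job analysis.

```python
# Illustrative sketch only: aggregating anchored-scale ratings (1-5) across
# competency dimensions and interviewers into a final structured-interview
# score. All names and numbers are hypothetical.
from statistics import mean

ratings = {
    "candidate_a": {
        "rater_1": {"communication": 4, "problem_solving": 5, "teamwork": 4},
        "rater_2": {"communication": 3, "problem_solving": 5, "teamwork": 4},
    },
    "candidate_b": {
        "rater_1": {"communication": 5, "problem_solving": 3, "teamwork": 3},
        "rater_2": {"communication": 4, "problem_solving": 2, "teamwork": 3},
    },
}

def overall_score(candidate: str) -> float:
    """Average each rater's dimension ratings, then average across raters."""
    per_rater = [mean(dims.values()) for dims in ratings[candidate].values()]
    return round(mean(per_rater), 2)

for cand in ratings:
    print(cand, overall_score(cand))
```

Averaging within each rater before averaging across raters keeps one rater’s extra notes or dimensions from dominating the final score, which mirrors the holistic-judgment goal the reviewed studies describe.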

When a selection procedure adopts a neutral stance, for example by focusing on assessing all applicable facts, it improves fairness perceptions related to that situation.6 It may also lead to a greater focus on objective criteria and knowledge about candidates, resulting in fairer selection decisions. Collaboration among interviewers is essential when the selected interview format involves a panel or multiple interviewers. A valid process for integrating, interpreting, and analyzing the interview content is crucial to ensuring the process’s validity. Seeking feedback is another key factor in the accuracy of the information collected; the overall impression of the interview process among the interviewers is where the value, validity, and reliability of the decision lie.5

Accordingly, the second research question focuses on the technical dimension by analyzing how various interview types and formats shape the structure and effectiveness of the selection process.

RQ2. How do organizations integrate essential components, such as feedback, rapport building with applicants, and collaboration among interviewers, to enhance the effectiveness of different interview modalities?

To guide this analysis, the researchers have developed the following sub-questions.

2.1. Do structured interviews improve the predictive validity of hiring decisions compared to unstructured interviews?

Based on the SR, there is broad consensus that a trained interviewer and a structured interview format give the interviewer and the interviewee greater reliability and validity in the interview selection process. In discussions of interviewers’ ratings, highly structured interviews involve asking the same questions in the same order, with no follow-ups or probes allowed.5 Structured interviews offer a variety of benefits: candidates are evaluated against the job for which they are applying, allowing the interviewer to identify the best candidate on the job’s key dimensions.17 A structured interview combined with interviewer rating calibration may reduce confirmatory bias, and a higher degree of structure offers greater validity to the interview process.15 Structuring the interview can help reduce biases, but it does not eliminate interviewer biases.9

Structured interviews offer the opportunity to reduce bias to a greater extent than unstructured interviews; furthermore, a structured interview can be perceived as more consistent and fairer.6 In this format or protocol, the interviewer selects and orders the questions, which are the same for all applicants.4 One limitation mentioned, however, is the lack of elaboration beyond the questions selected for the interview; in other words, the central issue is not the elaboration itself, but whether it is done similarly for all candidates.4 Another interesting perspective on this modality is that while applicants who engage in self-promotion during an interview may receive higher ratings from interviewers, this effect diminishes when interviewers use a structured interview format.26

2.2. How does the selected modality (type or format of interview) compare in terms of validity and reliability?

Structured and semi-structured interview formats each offer benefits by allowing the organization to understand the interview interaction and make selections effectively and fairly in distinctive ways.15 Semi-structured interviews offer a unique blend of techniques, combining mixed methods and flexibility while remaining focused on a validated, holistic assessment. Adeoye-Olatunde and Olenik1 note that a semi-structured interview is better suited when the interviewer seeks a unique perspective from the applicant rather than withholding the full purpose of the interview. The distinctiveness of this format lies in the fact that the interviewer still uses predetermined questions but is free to ask additional questions for clarity; it also, however, requires expertise from the interviewer.4

Regarding the interview type, meaning whether to utilize a single interviewer or a panel of interviewers, the SR associates a panel of interviewers with fairness. A basic premise is that interviewers who demonstrate care and promote applicants’ well-being and self-esteem foster a more ethical and fair perception of both the interview process and the organization itself.6 Panel interviews are likely to enhance interviewer accountability, which can positively impact the effectiveness of the employment interview.15 A single interviewer can conduct a face-to-face interview accurately, but only when that interviewer has a history of accurate judgments.14 Face-to-face interviews offer greater opportunities to understand the applicant, yet ensuring fairness and equity remains essential, as subjectivity can easily introduce decision bias.22

2.3. How does rapport building enhance interview outcomes across various interview formats by improving trust, communication, and the overall effectiveness of the interview process?

One area that remains open for discussion is the amount of conversation or rapport-building before the formal questions begin, which can vary significantly. Building rapport can ease any potential tension.17 Given the centrality of the pre-interview stage, the interviewer needs to establish a sense of connection with the interviewee to ease the conversation before the formal interview begins. Rapport allows the participants to feel attuned to each other, generates an atmosphere of openness, and creates a more natural flow of communication.

While rapport building may be limited in structured interviews, it remains an important element that should not be entirely excluded from the interview process.21 These researchers recommend incorporating rapport-building procedures, rather than eliminating rapport building, to avoid the urge to form early impressions. On the other hand, experienced interviewers and those who built rapport tended to make quicker decisions.23 The opportunity to build rapport is more prevalent in unstructured interviews because, in structured interviews, interviewers are instructed to stay on course with questions relevant to the job. These researchers caution that when building rapport in unstructured interviews, interviewers should receive training to use the interview structure and stay focused on job-related questions.

Observed risks to validity

The researchers suggested that a comprehensive systematic review would reveal that factors such as biases, lack of interviewer training, disparity in establishing rapport with applicants, inconsistency in interview ratings, and selecting the most appropriate interview format would adversely impact the validity and reliability of an employment interview selection decision.

An interview selection should account for several factors before the final decision.14 Throughout the SR, structured interviews were found to possess higher reliability and validity than other interview formats. As previously mentioned, the interview has several technical dimensions, and interviewers are increasingly exploring alternative formats in their quest for greater predictive validity and efficiency.24 There are three types of interviews: structured, semi-structured, and unstructured. One crucial argument for greater validity is the consistency of the interview’s structure, particularly in terms of questions and response evaluation. Unstructured interviews have been argued to offer lower criterion validity than estimates for highly structured interviews. By incorporating standard questions and coherent scoring guidelines to evaluate the applicant’s responses, a structured interview gains validity and reliability compared to an unstructured one.3 Extending the discussion on interview format selection, Florea et al.15 support the view that semi-structured interviews also offer high validity.

Other predictors include the choice between a panel of interviewers and a single interviewer. A panel will include individuals with diverse traits, personalities, backgrounds, and opinions that can counter or mitigate a one-sided effect on an interview decision.15 While some studies suggest that the length of the interview may generate fatigue on both sides, the interviewer and the applicant, other studies support the notion that the validity and reliability of the interview are not compromised by the first impression and early decision made by the interviewer.21 If the selected interview type involves multiple interviewers for multiple candidates, careful selection and training of these interviewers may augment the validity of the process; it will reduce the single judgments and biases linked with a single interviewer, resulting in a more reliable assessment.14

Building on the responsibility for accurate and unbiased interviewer ratings, having a consistent scoring system in which all applicants are assessed equally, and further understanding how interviewers’ decisions may vary across applicants or over a slate of interviews, may aid the mission to increase reliability and validity.23 Problems can arise from disagreement between raters and from a lack of consistency or inadequate training in the interview selection process. Acting as an interviewer is a significant responsibility, and an improper rating of an applicant’s response will compromise the validity of the selection. Hickman et al. highlight that one potential source of contamination is that interviewers’ ratings strongly influence hiring managers’ decisions.24 Scoring each answer helps the interviewer classify what constitutes a high- or low-performing answer, thereby bridging the gap in selecting the best applicant based on interview performance. They also align with the notion that training plays a significant role in minimizing biases and avoiding unduly unfavorable scores for an answer.24 When a response is scored unfairly, the validity and reliability of the interview’s results are more likely to be compromised.19
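The rater-consistency concern above can be made concrete with a short sketch: flagging competency dimensions where two raters diverge enough to warrant a calibration discussion before the final decision. The threshold, dimension names, and ratings are hypothetical and not drawn from the reviewed studies.

```python
# Illustrative sketch only: flag competency dimensions where interviewers'
# anchored-scale ratings (1-5) diverge enough to warrant a calibration
# discussion before finalizing the decision. All data here are hypothetical.
THRESHOLD = 2  # flag when the rating spread is 2 or more points

ratings = {  # dimension -> rating given by each interviewer
    "communication": {"rater_1": 4, "rater_2": 2},
    "job_knowledge": {"rater_1": 5, "rater_2": 4},
}

def flag_disagreements(ratings: dict, threshold: int = THRESHOLD) -> list:
    """Return dimensions whose max-min rating spread meets the threshold."""
    return [
        dim
        for dim, by_rater in ratings.items()
        if max(by_rater.values()) - min(by_rater.values()) >= threshold
    ]

print(flag_disagreements(ratings))  # → ['communication']
```

A lower threshold flags more dimensions for discussion; the right value is an organizational choice, traded off against the time available for calibration sessions.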

Interviewer training should be an integral part of the selection process, with refresher courses reinforcing the importance of gathering holistic information from each candidate.23 A lack of training, however, may prompt the interviewer to employ a different strategy for processing information when dealing with more advanced and complex questions. Training can also inadvertently lead to biases, as interviewers may become more aware of their flaws or biases. Wingate et al.9 found that younger interviewers were slightly more susceptible to bias-related characteristics than their older counterparts, and that trained interviewers were more prone to bias than those without formal training.

Discussion

In employment interview selection, there has been a long-standing interest in what makes a good interview selection decision. Prior studies investigating employment interview selection have examined various effects of biases, the structure of the interview, the mechanics of the relationship between the interviewer and interviewee, and other behaviors related to the accuracy of the selection decision. Therefore, the purpose of the present study was to investigate and provide a comprehensive overview of research in the field of interview selection, presenting a systematic review (SR) that focuses on factors such as interviewer biases and the impact of selecting the appropriate interview modality in the employment interview selection process.

The reality is that organizations miss significant opportunities when they decline to advance a candidate because, in the interviewer’s judgment, the candidate fails to meet specific selection criteria.7 Interviews also tend to focus on negative rather than positive information in order to eliminate applicants; by focusing on the negative, this information becomes more salient and is likely to weigh heavily during the selection process.9 Selecting personnel remains a challenge when predicting validity and assessing the impact on the decision process, as no interview is error-free. The dilemma appears when a decision must finally be made: when evidence is required to reach a verdict, interviewers often rely on subjective shortcuts, such as inferences and predispositions.15 The result is a missed opportunity to extract the information needed for an accurate decision.23 Although organizations today insist on improving employee selection through more standardized interview systems and can incorporate different interview types to minimize human errors, these errors still exist.

The findings from the document reveal that biases in employment interviews persist as a significant challenge in the pursuit of fair and effective hiring practices. The interview aims to provide the interviewer and candidate with a reliable, validated, fair, and dynamic experience. Based on the SR, the key to having an effective and successful discussion lies in recognizing bias, preparing for the interview process, and understanding what is at stake (measuring success). Managing the influence of biases during an interview promotes fairness and objectivity. It minimizes the discriminatory, unconscious, and rapid effects that tend to shape an interviewer’s judgments within seconds of contact.6 Biases can be persistent and automatic, undermining core principles.25 Preconceptions can surface from visual cues, such as stereotypes, or through habits like reviewing materials immediately before interviews.9 Interviewers are encouraged to direct their bias awareness toward recognizing the distinction between what they ought to know and what they actually know.21 Experience-based questions that draw on past situations tend to yield more valuable insights than those framed around hypothetical circumstances.5

As discussed in the studies on the interview selection process, one key insight is the importance of providing adequate training for interviewers. A well-trained interviewer who applies neutral observational skills, communicates without bias, and maintains a clear focus on job-related competencies can strengthen the validity of the selection process.4 While interview training does not eliminate interviewers’ bias, workshops and mock interviews are regarded as aids in gaining confidence, experience, and sufficient exposure. These tools thereby minimize the risk of pitfalls, such as asking leading questions, and make the best use of the interview process to reach a more accurate selection decision.1 One important area for further investigation is the time elapsed since interviewers last received training and the quality of that training. Research indicates that interviewer experience and training are associated with higher self-efficacy. However, in a study of 1,042 interviews, more than 69% of decisions were made within the first five minutes. This finding underscores that even seasoned interviewers can benefit from periodic refresher training.23 The goal of the training should not be to overlook the applicants’ characteristics, as doing so would render the selection process incomplete, reduce the credibility of the training, and introduce ethical challenges.9

Regardless of the interview format used, selecting the most appropriate one will influence validity and fairness. Unstructured interviews offer flexibility and an opportunity to ask questions outside the interview guide; however, their limitations include the time consumed and the potential introduction of biases, so these researchers recommend this format only for experienced interviewers.4 Unstructured interviews also offer lower criterion validity than highly structured formats and create challenges, such as inconsistent questioning and the difficulty of fairly comparing candidates’ strengths, which limits their effectiveness in producing reliable hiring outcomes.17 The interview, though imperfect, can play a crucial role in improving the validity, consistency, and reliability of interviewer evaluations through standardization and the use of structured interviews accompanied by scoring rubrics and calibration between interviewers.23 Another valuable interview format is the semi-structured interview, which is considered ideal when seeking nuanced insights from candidates, as it strikes a balance between flexibility and consistency.1

This interview format has been shown to demonstrate a high degree of validity, reinforcing its effectiveness in evaluating candidates.15 However, it requires a high level of expertise from the interviewer to be applied effectively.4 Whether a single interviewer or a panel of interviewers is involved in the selection process, fact-based criteria, calibration sessions, and feedback mechanisms can improve fairness perceptions, reduce variance unrelated to candidate competence, and help foster alignment among panel members.5,6 One potential method is to conduct a job analysis for the position being filled, as this helps outline how well each applicant aligns with the requirements and their likelihood of becoming a successful performer.9 Panel interviews may further promote fairness by increasing accountability and minimizing individual bias.6 In contrast, a single-interviewer model relies heavily on that individual’s judgment and experience, potentially magnifying biases unless carefully managed.14

“Rapport building” refers to a brief, casual conversation aimed at easing an applicant’s nerves and creating a temporary connection; although subtle, it remains a powerful element in the dynamics of an interview.2 Interviewers often assess interview success based on how well they establish this superficial relationship, which depends on being perceived as warm, professional, and knowledgeable. With this in mind, an interview can influence an applicant’s behavior, cognitive thinking, and overall performance: it is not merely an exchange of questions and answers but may also involve multiple interviewers with different personalities, recalling prior job experiences, and trying to impress the interviewer.24 Arguably, it is practical not to overlook the interviewer’s experience, skills, and characteristics because, one way or another, they will impact the results. Good listening is a practical skill for interviewers, especially when an applicant’s answer reveals a misunderstanding of the question. Interviewers can also ask candidates for feedback, which not only improves future interviews but also enhances the applicant’s overall experience.4

The idea of driving a positive candidate experience (through fairness and clear expectations), offering valuable training opportunities to interviewers, and ensuring the accuracy of the interview process (in terms of format and modalities) will yield higher validity and reliability, ultimately offering the organization a competitive advantage.5

Conclusions

This article examined factors that compromise the validity and reliability of employment interview selection. The objective was to continue expanding the debate and highlighting opportunities to make the interview process less biased, particularly regarding interviewers’ decisions and the interview format, so that both parties can benefit accurately from this interaction and experience.

One factor discussed in the SR indicates that choosing the proper interview method is critical to fully understanding the candidate’s potential for long-term fit with the team and company culture. By streamlining the interview process, implementing several of the techniques discussed previously in this document, and clearly understanding the purpose of the interview, personnel involved in interviewing and selecting candidates have the potential to improve the accuracy of their final decisions. Organizations can substantially enhance the effectiveness of their hiring practices by developing a deeper understanding of the interview as a selection tool and strategically applying its potential to improve decision-making and organizational outcomes.25 Offering the interviewer and the candidate more flexibility in the interview process with fewer constraints can lead to a more desirable outcome and an accurate selection decision; the outcome can become less biased by offering a semi-structured interview.15 However, there is still room to investigate the differences between single and panel interviews regarding validity.14

The literature on selection has shown that a broad spectrum of influential factors remains embedded in the interview selection process. The interview remains an interaction, a social method for evaluating applicants, in which data, words, and emotions are exchanged between the interviewer and the applicant. Moreover, some impressions and similarities can influence a positive selection and carry more weight than other applicable qualifications and cognitive proficiencies.6 Given the complex and multifaceted nature of what constitutes a structured employment interview and the potential for interviewer bias, the researchers argue that a review of the literature is necessary to generate a comprehensive understanding of what is known, what has been learned, and what has changed, to add value to the topic of employment interview selection processes. Another alternative to the standard approach involves introducing variations into the existing interview process, specifically by measuring the efficiency, value, and opportunities that ideally emerge during each interview.19 Research is needed to identify the key components of the interview structure and determine which are most important. There are significant differences between unstructured and structured interviews, including the selection of questions, the level of free and natural flow, and the absence of a rubric system.19

One last recommendation: although it is not common for interviews to be recorded and note-taking remains the primary method for documenting the interview process, recording is recommended for accuracy and to avoid biases such as confirmation bias; as an alternative, consider recording the interview rather than relying solely on note-taking.5 Transcribed recorded interviews can reveal previously unseen characteristics of the applicant, such as language use and the time between responses. These researchers discourage mixing modalities during interviews because the resulting data may introduce variation when analyzed. Ensuring rigor and credibility are key qualities that every interview process should have.18 Furthermore, interviewers should consider providing equal time for all applicants to think before answering questions.1 Fairness and accuracy are two of the highest priorities in the selection process.8

Taken together, these findings underscore the need for further research to identify and mitigate biases, ultimately strengthening the interview as a reliable, fair, and effective tool in hiring. This review seeks to spark interest among researchers in addressing these critical issues surrounding employment interview practices.

Reporting guidelines

Repository: PRISMA checklist and flow chart for ‘The Interview Trap: A systematic review on factors affecting the validity of Employment Interviews’. https://doi.org/10.5281/zenodo.17195256.27

Data are available under the terms of the Creative Commons Attribution 4.0 International license (CC-BY 4.0).

How to cite this article
Morales H and López A. The Interview Trap: A systematic review on factors affecting the validity of Employment Interviews [version 1; peer review: awaiting peer review]. F1000Research 2025, 14:1381 (https://doi.org/10.12688/f1000research.168805.1)