Keywords
Instructional practice, public university instructors, validation, factor analysis
Measurement is essential if methods of instruction are to succeed, and a reliable instrument validated in the given setting is vital. The primary goal of this research project is to validate the Instructional Practice Scale (IPS) for university instructors in the Ethiopian context.
Using a cross-sectional descriptive survey research design, 1,254 participants were randomly selected from four public universities – Arbaminch, Dilla, Wachamo, and Jinka, representing the first, second, third, and fourth generations, respectively. The data were split in half; one half underwent exploratory factor analysis (EFA) and the other confirmatory factor analysis (CFA).
The EFA yielded three components comprising seventeen items that satisfied the specified standards, with loading values > .5 and Cronbach's alphas ≥ .874. The CFA confirmed thirteen of these items, with loadings, Cronbach's alpha, Raykov's rho coefficient (rho_A), and composite reliability (CR) values > .7, and Average Variance Extracted (AVE) values > .5. Tests of the measurement and structural models showed good fit. The Fornell-Larcker criterion, employed in discriminant validity analysis, shows that the square root of the AVE for each construct is higher than its correlations with the other constructs. The heterotrait–monotrait (HTMT) ratio of correlations is close to zero, and its confidence interval does not include one at the .05 significance level. Both indicate strong discriminant validity.
Overall, the 13 items show strong psychometric properties for university instructors. The three subscales of the instructional practice scale—planning (4 indicators), delivery (4 indicators), and assessment (5 indicators)—are intended to measure instructional practice effectively. Implications of the findings are discussed.
Changes have been made in response to the reviewer's comments in this revised manuscript, with particular emphasis on the introduction, limitations, and conclusion sections.
One of the most important factors in encouraging students to learn and succeed better is the instructors' instructional practice. In a transmission model of teaching, a teacher imparts knowledge and students absorb it passively (Emanalia, 2017). Traditionally, instructional practice—also referred to as teacher-centred practice—is a formal and controlled practice in which the instructor plans what, when, and how students learn (Horvat-Samardžija, 2011); teachers are the ones who create the lesson in the classroom (Saleh & Jing, 2020). An alternative view of instructional practice highlights the needs and viewpoints of pupils: how, what, and when learning occurs is set by both the instructor and the pupils (Horvat-Samardžija, 2011).

Schweisfurth (2013) posits that learner-centered education (LCE) is fundamentally rooted in pedagogical attitudes rather than mere practices, outlining key dimensions such as classroom relationships (ranging from authoritarian to democratic), learner motivation (extrinsic versus intrinsic), the nature of knowledge (fixed versus fluid), the teacher's role (authoritative versus facilitative), and curriculum design (fixed versus negotiated). Within this framework, effective LCE implementation includes practices such as leveraging sociodemographic data—including learners' cultural backgrounds, family structures, and community contexts—to tailor instruction, as well as fostering an inclusive learning environment where individual differences are respected to cultivate a sense of belonging. Such principles can be operationalized through strategies like allowing students to select assignment topics aligned with their interests, collaboratively establishing classroom norms to promote mutual respect, integrating culturally responsive materials (e.g., texts, case studies, and examples reflective of student diversity), and employing differentiated instruction that accommodates varied learning modalities (e.g., visual, auditory, and kinesthetic). These approaches collectively ensure that pedagogical practices align with the ethos of LCE, promoting equity, engagement, and academic success. Similarly, this study examines instructional practices through the lens of how educational institutions organize (plan), implement (deliver), and evaluate (assess) student learning, while integrating learners' perspectives—with a particular emphasis on the behavioral dimension of instructional practices as implied in Bibon (2022).
Research indicates that teachers, a lack of course materials, students' disinterest in the subject, and ineffective teaching strategies are all important factors influencing students' performance (Majo, 2016). In fact, significant funds are allocated to enhancing institutions and developing educational materials to augment students' performance (Barrett, 2018). Nonetheless, there have been initiatives to improve instruction; the Open University (2018) and Iglesias (2016), for instance, modified the science curriculum to provide instruction that enhances student learning. This generally suggests that validating instruments intended to evaluate one of the essential components of success—that is, instructional practice—is either not prioritized or not given enough weight. Despite growing attention to instruction, competency-based assessment using contextually validated instruments has lagged.
Many academics evaluate teaching or instructional practices from diverse angles. Examples include learner-centred teaching practice measures (Sarwar, Zerpa, Hachey, Simon, & Barneveld, 2012); classroom organization, student orientation, and enhanced activities-based measures adapted from the Organization for Economic Co-operation and Development (OECD) (2009); high school instructional practice measures that emphasize a focus on people (Fischer, Fishman, Dede, Eisenkraft, Frumin, Foster,… McCoy, 2018); performance criteria-based measures (Zemelman, Daniels & Hyde, 2005); teaching for conceptual understanding measures (Mullis, Martin, Gonzalez, Gregory, Garden, O'Connor, & Smith, 2000); and observations (Saleh & Jing, 2020). None of these covers every stage of instructional practice, from preparation to evaluation. Bibon (2022) addresses this gap by compiling the items from Abundo (2019), Benosa (2018), and Sergio (2018) into components related to instructional planning, delivery, and assessment.
In general, we validated the instructional practice scale to more accurately assess the construct in the context of Ethiopian university instructors, presuming that the Ministry of Education would provide freshman students with uniform learning modules. Measuring this construct using a validated instrument is essential to gaining knowledge of it, effectively conveying that knowledge to others, and making necessary corrections. A construct needs to be evaluated using a reliable and contextually appropriate tool to obtain an accurate picture of it. In light of this, the following goals of the study were set:
The purpose of this study was to validate the instructional practice tool in the context of Ethiopian public university instructors. Hence, we employed a cross-sectional descriptive survey method to gather data from the target population at a certain point in time. We randomly selected the southern part of the country; all eight universities located there are categorized by generation (year of establishment). Four public universities—Arbaminch, Dilla, Wachamo, and Jinka—representing the first, second, third, and fourth generations, respectively, were used to select participants.
The items were first developed by Abundo (2019), Benosa (2018), and Sergio (2018) to account for the instructional practice of teachers through classroom observations conducted for their theses at Bicol University. In 2022, Bibon extracted those items and compiled them into a scale with five response alternatives: never, rarely, sometimes, frequently, and always. Grounded in the constructivism of teaching and learning and the recommendations of educational institutions, Bibon (2022) classified the items into three categories of instructional practice—planning (8 indicators), delivering (9 indicators), and assessing (8 indicators)—so that the scale measures the instructional practices used by science teachers. In his study, the scale's Cronbach alpha of .86 indicated good internal consistency when assessing the construct.
The study’s target population consisted of instructors at Arbaminch, Dilla, Wachamo, and Jinka universities. Various sample sizes have been suggested for factor analysis. For instance, the following criteria are commonly cited: at least ten times as many subjects as variables (Everitt, 1975; Nunnally, 1978); at least 100 subjects (Gorsuch, 1983; Kline, 1994); a subjects-to-variables ratio of roughly three to six subjects per variable (Cattell, 1978); a sample size-to-parameter ratio of 20:1 (Jackson, 2003); and a sample adequacy scale of 50 = very poor, 100 = poor, 200 = fair, 300 = good, 500 = very good, and 1,000 or more = excellent (Comrey & Lee, 1992). Following Comrey and Lee, a total of 1,300 individuals was chosen to detect the structures, exceeding the "excellent" threshold (1,000) and allowing for a maximum response error of 30%.
Kothari’s (2004) stratified proportional sample size formula, nh = (Nh/N)*n, was employed to draw participants proportionally from the four universities, where nh is the sample size for the hth stratum, Nh is the population size of the hth stratum, N is the total population size, and n is the total sample size. Therefore, nh was calculated as follows: 432 for Arbaminch University (out of 1,720 instructors), 318 for Dilla University (out of 1,263 instructors), 281 for Wachamo University (out of 1,119 instructors), and 269 for Jinka University (out of 1,069 instructors), assuming N = 5,171 and n = 1,300.
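As an illustration, the proportional allocation above can be reproduced with a short script. This is a minimal sketch: the stratum populations are those reported in this section, and rounding to the nearest integer is an assumption about how fractional allocations were handled.

```python
# Sketch of Kothari's proportional allocation n_h = (N_h / N) * n,
# using the stratum populations reported above.
strata = {"Arbaminch": 1720, "Dilla": 1263, "Wachamo": 1119, "Jinka": 1069}

N = sum(strata.values())   # total population: 5,171 instructors
n = 1300                   # planned total sample size

allocation = {uni: round(Nh / N * n) for uni, Nh in strata.items()}
print(allocation)                 # {'Arbaminch': 432, 'Dilla': 318, 'Wachamo': 281, 'Jinka': 269}
print(sum(allocation.values()))   # 1300
```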
Forty-six response papers were removed because they were inadequately completed, incomplete, or not returned. The data from the remaining 1,254 participants were randomly divided into two groups of 627. One half was used to explore the factor structure and the other to confirm it using confirmatory factor analysis.
2.3.1 Content validity evaluation
According to Almohanna, Win, Meedya, and Vlahu-Gjorgievska (2022), reliable instruments yield reliable data. Lawshe’s (1975) quantitative content validity evaluation method was used to evaluate each item by nine experienced subject matter experts (SMEs) from the following fields: social psychology, educational planning and management, curriculum and instructional provision, educational measurement and evaluation, and so on. The formula for computation is displayed as follows:

CVR = (ne - N/2) / (N/2)
CVR = content validity ratio
ne = number of panellists pointing to the item as ‘essential’
N = total number of panellists
A three-point rating system was used to rate each item on the draft data-gathering tool (1 = not essential, 2 = useful but not essential, 3 = essential). CVR takes a value between -1 and +1. An item is deemed acceptable and clear if the value is positive; it should be reworded, modified, or rejected if the value is negative; and it is deemed necessary and legitimate if more than 50% of the panellists assess the item as essential. In general, every item satisfied the acceptable standard of ≥ .75 (Lawshe, 1975), suggesting that the items are highly important. The overall mean of all items in the scale using the Content Validity Index (CVI) statistical technique was .88, exceeding the ≥ .70 standard given by Tilden, Nelson, and May (1990) and the ≥ .80 suggested by Davis (1992).
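As a sketch of these calculations, the per-item CVR and a scale-level index taken as the mean across items can be computed as follows. The ratings matrix below is hypothetical, and treating the CVI as the mean of per-item values follows the description above rather than a single canonical definition.

```python
import numpy as np

# Hypothetical ratings from N = 9 panellists on the 3-point scale
# (1 = not essential, 2 = useful but not essential, 3 = essential); rows = items.
ratings = np.array([
    [3, 3, 3, 2, 3, 3, 3, 3, 3],
    [3, 3, 2, 3, 3, 3, 2, 3, 3],
])

N_panel = ratings.shape[1]
n_essential = (ratings == 3).sum(axis=1)            # panellists rating each item 'essential'
cvr = (n_essential - N_panel / 2) / (N_panel / 2)   # Lawshe (1975): CVR = (ne - N/2) / (N/2)

print(np.round(cvr, 2))        # per-item content validity ratios
print(round(cvr.mean(), 2))    # scale-level index as the mean across items
```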
2.3.2 Data collection
This study was conducted during the 2022/23 to 2023/24 academic years. The participants were recruited between May 1st and May 30th, 2023, and data were gathered from June 1st to June 30th, 2023. The questionnaire was administered in person. We obtained an endorsement letter and reached out to department heads and deans of colleges and institutes. They were given a brief overview of the study’s objectives, the intended participants, the type of data collection tool, and the typical amount of time needed to complete the questionnaire. Through the communication channels established with these officials at various stages, the survey was distributed to departmental offices. It was then distributed at random among instructors who provided consent until the target number of participants was reached.
2.3.3 Data analysis
Data cleansing was done before analysis. As a result, eight response sheets that were improperly filled out and three that were not returned were excluded. SPSS-23 (https://www.ibm.com/support/pages/downloading-ibm-spss-statistics-23) and the free trial version of SmartPLS-4 (https://www.smartpls.com/downloads/) were used for the data analysis. SmartPLS was used for the confirmatory factor analysis, and descriptive statistics and exploratory factor analysis were carried out in SPSS.
The Center for Educational Research, along with the Office of Research and Dissemination and the Office of the Vice President for Research and Technology Transfer at Dilla University, confirmed on 13/01/2023 that the issue under investigation complies with academic research criteria and ethical standards.
Potential participants received brief instructions regarding the study’s overall goal and the characteristics of the data collection instrument. The representatives from the above-mentioned offices confirmed on the same date that collecting oral consent from participants is sufficient for the present study. Accordingly, we obtained verbal informed consent from each participant. Participant confidentiality was protected by not recording names or other identifiers during data collection and reporting. Representatives from the Center for Educational Research, the Office of Research and Dissemination, and the Office of the Vice President for Research and Technology Transfer at Dilla University approved the non-harmful nature of the data collection tool, acknowledged the planned number of participants, and confirmed that collecting verbal consent from participants is sufficient for the present study.
Table 2 indicates that 1,254 instructors took part. Approximately 956 (76.2%) participants were male and 298 (23.8%) were female, making up around three-fourths and one-fourth of the total, respectively. The age distribution has a mean of 34.16 years and a standard deviation of 4.37, falling between the minimum age of 28 and the maximum age of 45. This appears to be in line with the distributions of work experience and academic rank. In terms of qualifications, there were 1,186 (94.6%) master’s degree holders, 50 (4%) PhD holders, and 18 (1.4%) assistant lecturers. This suggests that instructors across the minimum, maximum, and average age groups are represented. Work experience at a university ranged from a minimum of one year to a maximum of sixteen years. Ultimately, 852 (67.9%) participants, roughly two-thirds, had undergone training for higher education teaching under the Higher Diploma Program (HDP), whereas approximately one-third had not.
We employed the Direct Oblimin with Kaiser Normalization rotation method, the Maximum Likelihood (ML) extraction method, an eigenvalue exceeding one, and a factor loading cut-off value of .5 (greater than the default criterion of .3). ML assumes that the observed variables are normally distributed and produces factor structures in which indicators correlate highly with their factors. With large sample sizes, ML yields estimates that are efficient, less biased, and less variable.
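The EFA specification described above (ML extraction, direct oblimin rotation, retained factors, and a .5 loading cut-off) can be approximated in Python with the factor_analyzer package; this is only a sketch of the procedure, and the file name ips_efa_half.csv and the item columns are assumptions, not the study's actual files.

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer

# Hypothetical file holding the IPS item responses for the EFA half (n = 627).
items = pd.read_csv("ips_efa_half.csv")

# Maximum likelihood extraction with direct oblimin rotation, as specified above.
fa = FactorAnalyzer(n_factors=3, method="ml", rotation="oblimin")
fa.fit(items)

loadings = pd.DataFrame(fa.loadings_, index=items.columns)
print(loadings[loadings.abs().ge(0.5).any(axis=1)].round(3))   # keep items loading >= .5

var, prop_var, cum_var = fa.get_factor_variance()
print((prop_var * 100).round(2), (cum_var * 100).round(2))     # % variance per factor, cumulative
```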
3.2.1 Assumption test result
To move forward with EFA, multiple assumptions were examined. The data were approximately normally distributed, with skewness of -.394 and kurtosis of 0.014, both falling within the ±1 range recommended by George and Mallery (2019) and Hair, Hult, Ringle, and Sarstedt (2022). Other tests, such as the Kolmogorov-Smirnov, Shapiro-Wilk, and Z values or critical ratios for normality, showed minor violations (negative skew).
Variance Inflation Factor (VIF) results for instructional planning, delivery, and assessment were 1.052, 1.049, and 1.004, respectively. Tolerance values were .95 for the instructional planning, .954 for the instructional delivery, and .996 for the instructional assessment subscales. As indicated in the literature, both tests verified that there was no multicollinearity problem in this data set: for example, a VIF higher than 5 to 10 (Kim, 2019) or a VIF greater than 10 with a tolerance value < 0.10 (Hair, Black, Babin, & Anderson, 2010) indicates a potential multicollinearity problem, whereas a VIF < 5 (Ringle, Da Silva, & Bido, 2014; Rogerson, 2001) or even < 4 (Pan & Jackson, 2008) is considered acceptable.
The internal consistency of the overall and subscale items was checked using Cronbach alpha, resulting in .874, .928, .886, and .786 for the instructional planning, delivery, and assessment subscales and the overall scale, respectively ( Table 4). As stated by Sarstedt (2019), this supports the measurement’s unidimensionality and sub-dimensionality and satisfies the requirement for EFA (.7 minimum criterion). The Kaiser-Meyer-Olkin (KMO) measure of sampling adequacy is .846, which falls in the desired (≥.70) category according to Kaiser (1974), Hoelzle and Meyer (2013), and Lloret, Ferreres, Hernandez, and Tomas (2017). Bartlett’s Test of Sphericity, 7921.264 (p = .00), further confirms that the data are suitable for EFA ( Table 3).
Measure | Statistic | Value
---|---|---
Kaiser-Meyer-Olkin Measure of Sampling Adequacy | | .818
Bartlett’s Test of Sphericity | Approx. Chi-Square | 7921.264
 | df | 300
 | Sig. | .000
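As a sketch, the sampling-adequacy checks and Cronbach's alpha reported above could be computed in Python as follows; the item-level DataFrame and its file name are assumptions.

```python
import pandas as pd
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Classic Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

items = pd.read_csv("ips_efa_half.csv")            # hypothetical item-level data (n = 627)

chi_square, p_value = calculate_bartlett_sphericity(items)
kmo_per_item, kmo_total = calculate_kmo(items)

print(round(kmo_total, 3), round(chi_square, 3), p_value)   # compare with Table 3
print(round(cronbach_alpha(items), 3))                      # overall-scale alpha
```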
3.2.2 Exploratory factor analysis result
Six factors were obtained by rotating the matrix, but no item loaded on the sixth factor. In the fourth factor, only two items (items 5 and 6) loaded, and in the fifth factor, only one item (item 16) loaded. Therefore, the last three factors were eliminated because they did not meet the criterion of three to five items per component and were not appropriate for confirmatory factor analysis, as stated by MacCallum, Widaman, Zhang, and Hong (1999) and Raubenheimer (2004). Additionally, items 8, 13, 14, 15, and 17 did not load on any factor; that is, they loaded below the given threshold (.50). Eight items were eliminated overall.
Table 4 illustrates that eight items rotated to the instructional assessment (IA) subscale (loadings .573 to .821), four items to the instructional delivery (ID) subscale (loadings .854 to .892), and five items to the instructional planning (IP) subscale (loadings .569 to .874). For the IA, ID, and IP subscales and the overall scale, the internal consistency values of .886, .928, .874, and .806 indicate robust (Taber, 2018) and good reliability (Salkind, 2015; Tavakol & Dennick, 2011; Lavrakas, 2008).
According to the total variance explained analysis, the three components collectively account for 60.95% of the variance in instructional practice. This exceeds the 50% explained variance considered sufficient irrespective of rotation method and discipline (Sürücü, Şeşen, & Maslakçı, 2021; Beavers, Lounsbury, Richards, Huck, Skolits, & Esquivel, 2013; Hair, Sarstedt, Pieper, & Ringle, 2012; Pett, Lackey, & Sullivan, 2003). Furthermore, the instructional delivery and instructional planning factors accounted for roughly comparable shares of variance (18.51% and 18.47%, respectively), whereas the instructional assessment factor explains the largest share (23.97%) ( Table 5).
When the sample size is 200 or higher, Cattell’s scree plot test is another trustworthy method to ascertain the number of components (Sürücü, Yikilmaz, & Maslakci, 2022). The eigenvalue pattern shows a notable flattening linear trend starting with the fourth component ( Figure 1). We therefore reasonably retained the three factors, which explain 60.95% of the variance.
Furthermore, we conducted a parallel analysis to confirm whether the components suggested by the EFA loadings and the scree plot are actual factors or due to chance. Six hundred twenty-seven cases, 25 items, a 95th percentile criterion, and the principal components analysis method were specified. As a result, only the first three raw-data eigenvalues are greater than the respective percentiles of the random-data eigenvalues ( Figure 2). The parallel analysis confirmed that three components are extracted in the EFA, as suggested by O’Connor (2000).
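The parallel analysis described above (627 cases, 25 items, 95th percentile, principal components) can be sketched as follows; this is an illustrative implementation, and the input file is an assumption.

```python
import numpy as np
import pandas as pd

def parallel_analysis(items: pd.DataFrame, n_sims: int = 1000, percentile: float = 95.0, seed: int = 0):
    """Compare observed correlation-matrix eigenvalues with the chosen percentile of
    eigenvalues from random normal data of the same shape (principal components)."""
    rng = np.random.default_rng(seed)
    n, k = items.shape
    observed = np.sort(np.linalg.eigvalsh(np.corrcoef(items.T)))[::-1]

    random_eigs = np.empty((n_sims, k))
    for i in range(n_sims):
        random_data = rng.standard_normal((n, k))
        random_eigs[i] = np.sort(np.linalg.eigvalsh(np.corrcoef(random_data.T)))[::-1]

    threshold = np.percentile(random_eigs, percentile, axis=0)
    return (observed > threshold).sum(), observed, threshold

items = pd.read_csv("ips_efa_half.csv")     # hypothetical: 627 cases x 25 items
n_factors, observed, threshold = parallel_analysis(items)
print(n_factors)                            # expected to indicate three components
```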
3.3.1 Common Method Bias (CMB)
CMB analysis is advised when data are obtained by questionnaire and/or all variables are obtained from the same individuals (Podsakoff, MacKenzie, Lee, & Podsakoff, 2003). We therefore used Harman’s single-factor test to assess CMB; the single factor explains 24.284% of the variance, which is below the 50% threshold.
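A minimal sketch of Harman's single-factor test as used above: fit one unrotated factor and inspect the share of variance it explains. The data file name is an assumption.

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer

items = pd.read_csv("ips_cfa_half.csv")        # hypothetical item data for the CFA half

# Harman's single-factor test: one unrotated factor; flag CMB if it explains > 50% of variance.
single = FactorAnalyzer(n_factors=1, rotation=None, method="principal")
single.fit(items)

variance_explained = single.get_factor_variance()[1][0] * 100   # proportion -> percent
print(round(variance_explained, 3))            # reported above as 24.284% (< 50%)
```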
3.3.2 Measurement Model
Convergent validity, internal consistency reliability, and discriminant validity were examined using a reflective measurement approach. Convergent validity is the extent to which an indicator correlates positively with alternative indicators of the same construct. Factor loadings and the average variance extracted (AVE) were used to test it. Items 22, 20, 23, and 25 in the instructional assessment factor and Item 1 in the instructional planning factor loaded at .398, .582, .606, .694, and .698, respectively ( Figure 3).
According to Hair, Hult, Ringle, and Sarstedt (2016), Henseler, Ringle, and Sarstedt (2014), and Hair, Black, and Babin (2010), these loadings fall below the threshold (≥.7). Following sequential removal and reanalysis of the three lowest-loading items (items 22, 20, and 23), item 25 improved from .694 to .723, satisfying the threshold. As a result, the Average Variance Extracted (AVE) for instructional assessment also improved from .476 ( Figure 3) to .607 ( Figure 4), meeting the minimum acceptable criterion (>.5) (Sarstedt, Ringle, & Hair, 2017; Henseler et al., 2014; Hair et al., 2010). In total, four items—three from the instructional assessment and one from the instructional planning factor—were removed to satisfy the minimally acceptable item loading and AVE values.
Raykov’s rho coefficient (rho_A), composite reliability (CR), and Cronbach alpha were employed to assess construct validity and reliability ( Table 6). As per Hair, Hult, Ringle, and Sarstedt (2017) and Hair, Black, Babin, and Anderson (2010), the outcomes satisfy the acceptable criterion for all indices (.7 to .95).
Constructs | n items | Cronbach alpha (>.7) | rho_A (>.7) | CR (>.7) | AVE (>.5) |
---|---|---|---|---|---|
IA | 5 | 0.861 | 1.046 | 0.884 | 0.607 |
ID | 4 | 0.928 | 0.937 | 0.949 | 0.822 |
IP | 4 | 0.849 | 0.891 | 0.895 | 0.681 |
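For reference, the CR and AVE values in Table 6 follow directly from the standardized loadings. The sketch below uses illustrative loading values, not the study's actual estimates.

```python
import numpy as np

def composite_reliability(loadings: np.ndarray) -> float:
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    l_sum_sq = loadings.sum() ** 2
    error_var = (1 - loadings ** 2).sum()
    return l_sum_sq / (l_sum_sq + error_var)

def average_variance_extracted(loadings: np.ndarray) -> float:
    """AVE = mean of the squared standardized loadings."""
    return (loadings ** 2).mean()

# Illustrative standardized loadings for a four-indicator construct (hypothetical values).
loadings = np.array([0.72, 0.78, 0.85, 0.81])
print(round(composite_reliability(loadings), 3), round(average_variance_extracted(loadings), 3))
```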
Discriminant validity is the degree to which a construct is empirically distinct from other constructs. The results of the heterotrait–monotrait (HTMT) ratio of correlations between constructs and the Fornell-Larcker criterion are displayed in Table 7 and fall within acceptable ranges. According to Hair, Risher, Sarstedt, and Ringle (2019), Henseler, Ringle, and Sarstedt (2014), and Fornell and Larcker (1981), this indicates that each construct’s square root of the AVE (the bold diagonal values) in the Fornell-Larcker criterion exceeds its correlations with the other constructs, i.e., is greater than the absolute value of any such correlation. Henseler et al. (2014) criticized the Fornell and Larcker test for not consistently detecting the absence of discriminant validity in some study scenarios.
Fornell-Larcker criterion:

 | IA | ID | IP
---|---|---|---
IA | 0.779 | |
ID | 0.149 | 0.907 |
IP | 0.310 | 0.087 | 0.825

Heterotrait-Monotrait Ratio (HTMT):

 | IA | ID | IP
---|---|---|---
IA | | |
ID | 0.137 | |
IP | 0.242 | 0.100 |
To further evaluate discriminant validity, we therefore examined the alternative HTMT criteria. According to Hair et al. (2019) and Henseler et al. (2014), the results indicate no discriminant validity issue: the HTMT values between IP and IA (.242, p = .00), IP and ID (.100, p = .00), and IA and ID (.137, p = .000) are all low and approach zero, and the two-tailed confidence interval at the .05 significance level does not include one.
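A sketch of the HTMT ratio for a pair of constructs, computed from item-level correlations as the mean heterotrait correlation divided by the geometric mean of the two monotrait correlation means. The item column names and the data file are hypothetical.

```python
import numpy as np
import pandas as pd

def htmt(data: pd.DataFrame, items_a: list, items_b: list) -> float:
    """Heterotrait-monotrait ratio: mean between-construct item correlation divided by the
    geometric mean of the within-construct (monotrait) item correlation means."""
    corr = data[items_a + items_b].corr().abs()

    hetero = corr.loc[items_a, items_b].to_numpy().mean()

    def mean_monotrait(items):
        block = corr.loc[items, items].to_numpy()
        upper = block[np.triu_indices_from(block, k=1)]   # off-diagonal correlations only
        return upper.mean()

    return hetero / np.sqrt(mean_monotrait(items_a) * mean_monotrait(items_b))

data = pd.read_csv("ips_cfa_half.csv")                           # hypothetical item-level data
print(round(htmt(data, ["ip1", "ip2", "ip3", "ip4"],             # hypothetical IP item names
                       ["ia1", "ia2", "ia3", "ia4", "ia5"]), 3))  # hypothetical IA item names
```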
3.3.3 Structural model
We checked the estimated model’s goodness of fit. The standardized root mean square residual (SRMR) was .08, a marginal value; based on Brown (2015), the model is acceptable if SRMR ≤ 0.08. We also used the variance inflation factor (VIF) to assess multicollinearity among the latent components. A VIF greater than five indicates multicollinearity problems (Hair, Risher, Sarstedt, & Ringle, 2019; Sarstedt, Ringle, & Hair, 2017). Since all of the values in Table 8 are ≤ 3.529, multicollinearity is not an issue for the model.
A key limitation of this study lies in its dependence on self-reported data, which may introduce response biases, including social desirability bias and variations in participants’ perceptions, understanding, and interpretations of instructional practices. To mitigate potential misunderstandings, clear explanations of instructional practice definitions, survey items, and completion guidelines were provided to participants prior to data collection. To strengthen validity, subsequent research should incorporate multi-method assessments, including classroom observations and peer evaluations, to triangulate findings. Additionally, longitudinal studies could assess the instrument’s predictive validity in relation to student learning outcomes. To check that responses reflected instructors’ underlying practices, a random sample of two to five students per class was asked whether their instructors allowed them to select assignment topics related to their interests, facilitated discussions in which students suggested norms and agreed on respectful behaviour standards, incorporated texts, examples, and case studies that reflected students’ diverse cultural backgrounds, and considered varying learning styles (e.g., visual, auditory, kinesthetic) in lesson delivery. Generally, by addressing these limitations, future work can further refine the tool’s applicability across diverse educational settings while advancing methodological rigor in pedagogical research.
The validation of the instructional practice scale for university instructors underscores the critical importance of context-specific instrument development. Through EFA, the initial 25-item scale was refined to 17 items across three key dimensions, with CFA further validating a final 13-item model. This study provides a psychometrically robust tool for assessing instructional practices at the university level, demonstrating its applicability for research, professional development, and promotion-related evaluations. However, its generalizability is currently limited to broad university-level courses, necessitating further discipline-specific adaptations to account for pedagogical variations across academic fields.
The findings carry significant implications for both academic research and instructional practice. First, the validated instrument offers a reliable means for evaluating and enhancing instructional effectiveness in higher education, supporting evidence-based faculty development initiatives. Second, the study highlights the necessity of contextual validation, suggesting that future researchers should develop and test specialized instruments tailored to distinct disciplines to ensure pedagogical relevance. Generally, this serves as a valuable foundation for discussing professional development needs, rather than representing a definitive assessment of competence.
The Center for Educational Research, along with the Office of Research and Dissemination and the Office of the Vice President for Research and Technology Transfer at Dilla University, confirmed on 13/01/2023 (DU/164/2023) that the issue under investigation complies with academic research criteria and ethical standards. Potential participants received brief instructions regarding the study’s overall goal and the characteristics of the data collection instrument. The representatives from the above-mentioned offices confirmed on the same date that collecting oral consent from participants is sufficient for the present study. When asked, participants preferred to give their consent orally rather than in writing; they perceived written consent as requiring considerable resources, including time. Accordingly, we obtained verbal informed consent from each participant. Representatives from the Center for Educational Research, the Office of Research and Dissemination, and the Office of the Vice President for Research and Technology Transfer at Dilla University approved the non-harmful nature of the data collection tool, acknowledged the planned number of participants, and confirmed that collecting verbal consent from participants is sufficient for the present study.
Zenodo: validating instructional practice scale for instructors in some selected Ethiopian public universities: confirmatory factor analysis. https://doi.org/10.5281/zenodo.11493518 (Mehari et al., 2024b).
The project contains the following underlying data:
Zenodo: Original instrument/scale. Zenodo. https://doi.org/10.5281/zenodo.11667235 (Mehari et al., 2024a).
The project contains the following extended data:
• Original instrument/scale
Zenodo: Validating Instructional Practice Scale (IPS) for University Instructors: Parallel analysis [Data set]. Zenodo. https://doi.org/10.5281/zenodo.12204587 (Mehari, 2024b).
The project contains the following extended data:
Zenodo: STROBE checklist for validating instructional practice scale for instructors in some selected Ethiopian public universities. https://doi.org/10.5281/zenodo.12705646 (Mehari, 2024a).
Data are available under the terms of the Creative Commons Attribution 4.0 International license (CC-BY 4.0).
We express our gratitude to Dilla University for providing funding for this study, under the major theme of “Students’ learning styles and instructors’ teaching practices as determinants of academic achievement among first-year university students in Ethiopia, SNNPR universities”. We also thank the participants and all stakeholders who contributed to the feasibility of the study.
Is the work clearly and accurately presented and does it cite the current literature?
Partly
Is the study design appropriate and is the work technically sound?
Yes
Are sufficient details of methods and analysis provided to allow replication by others?
Yes
If applicable, is the statistical analysis and its interpretation appropriate?
Yes
Are all the source data underlying the results available to ensure full reproducibility?
Yes
Are the conclusions drawn adequately supported by the results?
Partly
Competing Interests: No competing interests were disclosed.
Reviewer Expertise: I mainly focus on psychometric research in education, Educational leadership, English teaching methods, and second language acquisition.
Is the work clearly and accurately presented and does it cite the current literature?
Partly
Is the study design appropriate and is the work technically sound?
Partly
Are sufficient details of methods and analysis provided to allow replication by others?
Yes
If applicable, is the statistical analysis and its interpretation appropriate?
Partly
Are all the source data underlying the results available to ensure full reproducibility?
Yes
Are the conclusions drawn adequately supported by the results?
Partly
Competing Interests: No competing interests were disclosed.
Reviewer Expertise: mathematics education research, statistical modeling, improving instructional practices and student outcomes, especially at the tertiary level
Is the work clearly and accurately presented and does it cite the current literature?
Partly
Is the study design appropriate and is the work technically sound?
Partly
Are sufficient details of methods and analysis provided to allow replication by others?
Yes
If applicable, is the statistical analysis and its interpretation appropriate?
Partly
Are all the source data underlying the results available to ensure full reproducibility?
Yes
Are the conclusions drawn adequately supported by the results?
Yes
Competing Interests: No competing interests were disclosed.
Reviewer Expertise: Higher Education, Educational Leadership, Pedagogy, Learning Assessment, Language Teaching
Is the work clearly and accurately presented and does it cite the current literature?
Yes
Is the study design appropriate and is the work technically sound?
Yes
Are sufficient details of methods and analysis provided to allow replication by others?
Yes
If applicable, is the statistical analysis and its interpretation appropriate?
I cannot comment. A qualified statistician is required.
Are all the source data underlying the results available to ensure full reproducibility?
Yes
Are the conclusions drawn adequately supported by the results?
Partly
Competing Interests: No competing interests were disclosed.
Reviewer Expertise: Pedagogy and pedagogical change
Thank you for giving me this chance to examine this work.
First, regarding questionnaire validation, the authors underwent a detailed procedure, so the result can be reliable.
Second, I have some suggestions and questions: (1) What criteria did the authors rely on to refine their instructional practices into three constructs: instructional assessment, instructional delivery, and instructional planning? As far as I know, the TALIS 2018 survey also defines this construct based on four dimensions: clarity of instruction, cognitive activation, classroom management, and classroom assessment. (2) Regarding the questionnaire, did the authors use instructional practice as a single construct consisting of the three dimensions stated above? I am confused about the CFA analysis. By this, I mean the authors used instructional assessment as an outcome variable of the two independent variables, instructional planning and delivery, with instructional delivery also playing a mediator role in this model. However, the goal of this study was not to test such relationships. (3) Do the authors think they should say something about the participants' biography in the discussion section? Do these characteristics have any influence on the results? (4) In the discussion section, it seems to me that the authors did not discuss the items that were removed and retained in this context, but focused on quantitative numbers only. Lastly, as I am not clear about what in-text citation or reference style the journal mandates, I think the authors should comply with this rule, too.