Keywords
health literacy, Rasch analysis, critical thinking, informed choice, evidence-based practice
Every day we are faced with treatment claims in the news, in social media, and from family and friends. Some of these claims are true, but many are unsubstantiated.1,2 Unless it is supported by reliable evidence, such guidance can lead to waste and to harmful health choices.3,4 Thus, improving people’s ability to assess whether treatment claims are based on reliable evidence may lead to better health outcomes. The spread of misinformation during the COVID-19 pandemic has further emphasized the importance of promoting critical thinking and science literacy as a public health initiative.5,6
The Informed Health Choices (IHC) Network facilitates development of interventions for teaching children and adults the ability to assess treatment claims (informedhealthchoices.org). We have developed a list of Key Concepts that people need to know to be able to assess claims about treatment effects.7 By ‘treatment’ we refer to any intervention (action) intended to improve health, including preventive, therapeutic, and rehabilitative interventions, and public health or health system interventions. In two recent randomized trials in Uganda, we found that primary school children and their parents could be taught to apply these concepts.8,9 Currently we are preparing for a new trial in Kenya, Rwanda, and Uganda to evaluate a set of educational resources for lower secondary schools (the IHC secondary school resources).
The Claim Evaluation Tools item bank was first developed for use in the abovementioned trials in Uganda, evaluating learning outcomes in primary school children and their parents.8,9 We also developed the item bank so that it could be used as a flexible resource for teachers and researchers, enabling them to design their own instrument for their own purposes.10,11 The item bank can be used for creating tests in schools (including higher education) and for research purposes in, for example, surveys and randomized trials.
Since it was first developed, the item bank has been periodically revised to reflect changes we have made to the Key Concepts list. Since our first trials in Uganda, researchers have developed instruments using items from the item bank in other contexts, including China, Mexico, and Norway.12–14 Other studies are underway in Croatia and the USA. Currently, the item bank includes more than 200 items, with three to four multiple-choice questions (MCQs) available for assessing knowledge and the ability to apply each concept in the list. The item bank also includes a sample of literacy questions for use in contexts where reading ability may be a barrier for responding to the MCQs. It also includes items for assessing people’s intended behaviours and self-efficacy (scored on 5-point Likert scales). All items are written in plain language and are suitable for both children and adults.
In the present study, our objective was to develop and evaluate the psychometric properties of a new assessment tool developed from the item bank for use in Uganda, Kenya, and Rwanda. This outcome measure will be used in randomised trials of the IHC lower secondary school resources.
Below we describe how we designed the questionnaire, how it was administered, and how we analysed and report the data. The protocol and underlying data for this study have been published.34,35
For this study we included both ability items and the items measuring intended behaviour and self-efficacy.
We planned to remove MCQs with suboptimal measurement properties based on the results of this study. We therefore included more MCQs than we plan to use in the trial (which will use two MCQs per Key Concept). The educational intervention we will evaluate in the randomised trials addresses nine Key Concepts (Table 1). For each of those concepts, we included three MCQs in the questionnaire, giving a total of 27 MCQs assessing ability. All MCQs had three response options.
We included three items that assess intended behaviour and four items that assess self-efficacy. The Likert scales include four response options ranging from very likely to very unlikely (intended behaviour) or very difficult to very easy (self-efficacy), and a fifth option: ‘I don’t know’.
In addition, we included demographic questions asking about gender, age, educational level, country of residence, training in research methods, and experience with participation in randomised trials. Gender, age, and country of residence were important for the psychometric analysis (testing for differential item functioning). The other background factors were used to ascertain that we were able to recruit people with a spread in ability level (ability to assess treatment claims). Level of education and familiarity with research methods have been shown to be associated with more correct answers.14
In preparation for this study, we conducted cognitive interviews and piloted the questionnaire with individuals from our potential target groups in Uganda, Kenya, and Rwanda.11,15 The objective was to get feedback from members of our target groups in the three contexts on the acceptability and relevance of the terminology and formats used in the questionnaire. Even though the items included in the Claim Evaluation Tools item bank have previously gone through an extensive development process in Uganda, we considered it important to get feedback from people in our target groups in Rwanda and Kenya, where the items had not been tested before.
We recruited schools in May to August 2021 through the project’s teacher networks. In the interviews, the students were encouraged to think aloud about how they understood the scenarios and response options, and to identify any issues they had with the terminology or format. The researcher noted down all identified issues. All feedback was summarised by the lead investigators, and the findings were discussed in the project group, including the research teams in all three contexts.
Piloting took place in a classroom setting. The purpose of the test and the instructions were introduced to the students by a member of the research team in collaboration with the teacher. Observations were made regarding the time taken to complete the questionnaire and comprehension of the format (for example, incorrectly filled-in response options).
Findings coming out of the interviews and pilots led to only minor changes, such as changing some of the names and other terminology used in the MCQs to improve familiarity in the two new contexts. We also changed the format of the intended behaviour and self-efficacy items from a traditional Likert-scale to resemble a multiple-choice format, keeping the same response options (Figure 1).
We made that change because the Likert-scale format was unfamiliar to some of the students in the three contexts, and the MCQ format was more familiar and acceptable to the students. The pilot studies also provided us with information about the time needed to complete the questionnaire (between 30 and 60 minutes) and what we could expect in terms of missing responses in the upcoming trial.
Several tests have previously been developed from the Claim Evaluation Tools item bank. The test developed for this study was named the Critical Thinking about Health test. A copy of the test evaluated in this study is available as extended data.36
There is no gold standard for the number of respondents needed for Rasch analysis. This is a pragmatic judgement that takes into account the number of items evaluated and the statistical power needed to identify item bias resulting from background variables.16–18 Rasch analysis does not require a representative sample. However, the sample should include enough people to allow for evaluating differential functioning and a spread in ability. Studies have found that a sample of 200-250 people per group is suitable for detecting differential item functioning (DIF).19,20 We expected both item-sets to work in the same way for children and adults and to show no differential functioning by gender.11 For this evaluation, we also needed a sample of people with varying ability to assess treatment claims. Few background variables predict ability to assess treatment claims, but higher education involving training in statistics or research methods may be one.14 Consequently, we estimated that recruiting approximately 500 people in each country, with an equal distribution of men and women and of lower secondary school students and adults, would be adequate (Table 2). We also made sure to recruit people from higher education contexts through the university networks in each country, as well as people in our local communities, people reached through social media, and students from schools participating in the piloting of the educational intervention. Data collection commenced in July 2021 and was completed in December of the same year.
All recruitment and data collection were done during COVID-19 lockdowns, which led us to use varied strategies for recruiting respondents.
In Uganda, we recruited participants through our networks there, including teachers, students, and the national advisory panel networks. For students, we used three strategies: visiting students at their homes, reaching out through the student network, and asking teachers who were conducting online revision classes to introduce the project to their students and, after obtaining consent, share the questionnaire link via WhatsApp or Telegram (messaging apps). For adults, we recruited people with higher education qualifications through university platforms, including university faculty platforms, a PhD forum with over 40 PhD fellows, WhatsApp groups for medical students, and a teachers’ network WhatsApp group. For the local communities, we visited food and clothes markets and asked people there to complete the questionnaires. All data collection was done in the central region (Kampala and Wakiso) and the northern region (Gulu district) of Uganda.
In Kenya, we recruited students from three schools that participated in piloting the IHC secondary school resources. In those schools, we purposively included all students from one stream except those that had been selected for the pilot. Each school had about three to four classes, and each class had about 40 students. For adults, we included students at an institution of tertiary education and members of the community with low education levels (secondary and below) who could read and owned a smartphone. For the tertiary students, we purposively included students from two faculties (Health, and Arts and Sciences). Through the Dean of Students, we invited them to a meeting where we introduced the project and the outcome measure and sought their verbal consent. We then shared the link to the test and asked them to log in and participate. For community members, we used our database to recruit people who were actively involved in the institute’s previous and ongoing community-based projects in rural settings in Butere sub-county. Although we reached out to many members, only a few responded, so we recruited more participants from the student body (students pursuing diploma and certificate courses), using the same recruitment and consenting process described for the students above.
In Rwanda, for adults, we used WhatsApp and recruited via snowballing through our networks, including the project’s teachers’ network and students’ network in Rwanda. The teachers’ network included lower secondary school teachers from different schools, who varied in work experience, age, subject area, and the schools where they teach. Similarly, the students’ network included students from the same schools as the members of the teachers’ network; they varied in age, sex, and history of school performance (high- or low-performing students). We also used email to reach adults who work or previously worked with the School of Public Health researchers in Rwanda, and members of the teachers’ network also responded to the test. We recruited students through schools that participated in the development and piloting of the intervention in Kigali city and surrounding neighborhoods.
Most of the data collection was done online, using a service hosted by the University of Oslo (Nettskjema). One small sample (students in Kenya) completed paper questionnaires in a classroom setting, administered as an exam as part of the pilot testing of the IHC secondary school resources. The test was administered by a teacher under the instruction of the research team. The paper questionnaires were scanned, and the data were added to the data collected online.
Ethical approval was obtained from the relevant authorities in each country: the Masinde Muliro University of Science and Technology Institutional Ethics Review Committee (MMUST/IERC/75/19; License No. NACOSTI/P/21/8103) in Kenya, the Rwanda National Ethics Committee (916/RNEC/2019), and the School of Medicine Research Ethics Committee (REC REF 2020-139) and Uganda National Council of Science and Technology (HS916ES) in Uganda.
All participants were given written information about the purpose of the study, about participation being voluntary, and about how the findings would be used to improve the validity and reliability of the Critical Thinking about Health test. Children participating through their schools were also given oral information. We obtained written consent from all adult participants and from the minors’ guardians, and written assent from the minors.
Since this was a knowledge test, like a regular school exam, this study did not collect any personal or other sensitive information that could identify the respondents. No member of the project group had access to information that could identify individual participants during or after data collection.
Rasch analysis is a dynamic way of developing measurement tools with construct validity.14 The approach is used to address important measurement issues required for validating an outcome measure, including internal construct validity (by testing for unidimensionality), invariance of the items (item-person interaction), and item bias (differential item functioning).21,22
We imported the data from Excel (version 2208) into RUMM2030 (https://www.rummlab.com.au/) and followed the basic steps of Rasch analysis as recommended in the literature.21,23 R, a freely accessible software environment for statistical computing and graphics, can be used to run a similar analysis (https://www.r-project.org/). We analysed the two item-sets separately, based on the assumption that they measure different underlying traits. The MCQs were scored dichotomously as correct or incorrect. We applied the polytomous model to the intended behaviour and self-efficacy items.22 When entered into RUMM2030, missing data were coded as “0”.
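To make the scoring step concrete, the sketch below shows in Python how MCQ responses might be dichotomised and how missing responses would end up coded as 0 before entry into the analysis software. The answer key, item names, and responses are hypothetical; this is an illustration, not the actual data pipeline used in the study.

```python
import pandas as pd

# Hypothetical answer key and raw responses for three MCQs (options "a"-"c");
# None represents a missing response.
key = {"q1": "b", "q2": "a", "q3": "c"}
raw = pd.DataFrame({
    "q1": ["b", "a", None],
    "q2": ["a", "a", "c"],
    "q3": [None, "c", "c"],
})

# Score each MCQ dichotomously (1 = correct, 0 = incorrect); missing responses
# become 0, mirroring how missing data were coded before entry into RUMM2030.
scored = raw.apply(lambda col: (col == key[col.name]).astype(int))
print(scored)
```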
The first step in the analysis involved exploring the class interval structure (the number and size of ability groups) and the summary statistics (person-item distribution). In Rasch analysis, the ratio between any two items should be constant across different ‘ability’ groups. The response patterns to an item-set are tested against what is expected by the model, which is a probabilistic form of Guttman scaling.21 In other words, the easier an item is, the more likely it is to be ‘passed’, and the more able a person is, the more likely he or she is to pass it.21 We explored this relationship using the summary statistics function in RUMM2030.23 In RUMM2030, the item-person interaction is presented on a logit scale, where the mean item location is ‘0’. If the instrument is well targeted (not too easy or too difficult), the mean location for individuals will be around zero.22 A person location higher than zero indicates that the test is easy; a person location lower than zero indicates that the test is difficult. The item and person fit residual statistics assess the degree of divergence (or residual) between the expected and observed data for each item and each person, summed over all individuals and all items respectively, for each item-set.22 In RUMM2030 this is reported as an approximate z-score, representing a standardized normal distribution.22 Ideally, item and person fit should have a mean of zero and a standard deviation of one.22
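For readers unfamiliar with the model, the dichotomous Rasch model expresses the probability that person $n$ answers item $i$ correctly in terms of the person’s ability $\theta_n$ and the item’s difficulty $b_i$, both on the same logit scale (standard notation, not taken from the RUMM2030 output):

$$P(X_{ni} = 1) = \frac{\exp(\theta_n - b_i)}{1 + \exp(\theta_n - b_i)}$$

When ability equals difficulty ($\theta_n = b_i$), the probability of a correct answer is 0.5, which is why person and item locations can be compared directly on the logit scale.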
We calculated Cronbach’s alpha to assess the reliability of both item-sets, with missing data excluded. A Cronbach’s alpha above 0.7 was considered acceptable.22
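As an illustration of the reliability statistic (not of RUMM2030’s implementation), Cronbach’s alpha can be computed from a complete persons-by-items score matrix as sketched below; the response matrix is invented for the example.

```python
import numpy as np

def cronbachs_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a persons x items score matrix (no missing data)."""
    k = scores.shape[1]                          # number of items
    item_vars = scores.var(axis=0, ddof=1)       # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of the total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 0/1 responses from six persons to four MCQs
responses = np.array([
    [1, 1, 1, 0],
    [1, 0, 1, 0],
    [0, 0, 1, 0],
    [1, 1, 1, 1],
    [0, 0, 0, 0],
    [1, 1, 0, 1],
])
print(round(cronbachs_alpha(responses), 2))  # values >= 0.7 were considered acceptable
```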
The principal component analysis/t-test protocol is used to test the hypothesis of unidimensionality. This is done by identifying the two most divergent item subsets (using the residual principal component function in RUMM2030) and then calculating t-tests.22 If ≤5% of the tests are significant, strict unidimensionality can be inferred.24 However, ‘unidimensionality’ is not a definite concept but a relative one, and the statistical results should be supplemented with quantitative or qualitative interpretation of the explicit variable definition, considering the context and purpose of the measurement.24,25
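A rough sketch of the t-test step is shown below, assuming that person estimates and standard errors have already been obtained from the two most divergent item subsets; the values are invented, and the 1.96 cut-off corresponds to a two-sided 5% test.

```python
import numpy as np

def proportion_significant(theta_a, se_a, theta_b, se_b, z_crit=1.96):
    """Per-person tests comparing ability estimates from two item subsets.

    theta_a / theta_b: person estimates (logits) from the two subsets identified
    by the residual PCA; se_a / se_b: their standard errors.
    """
    t = (np.asarray(theta_a) - np.asarray(theta_b)) / np.sqrt(
        np.asarray(se_a) ** 2 + np.asarray(se_b) ** 2
    )
    return np.mean(np.abs(t) > z_crit)  # proportion of significant tests

# Hypothetical estimates for five persons
prop = proportion_significant(
    theta_a=[0.4, -1.2, 0.9, 0.1, 2.0], se_a=[0.5, 0.6, 0.5, 0.5, 0.8],
    theta_b=[0.2, -1.0, 1.1, 0.0, 1.7], se_b=[0.5, 0.6, 0.5, 0.5, 0.8],
)
print(prop)  # <= 0.05 would support strict unidimensionality
```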
We tested for local dependency using the residual correlations function in RUMM2030. Data from this output were copied into Excel (version 2208), and any residual correlation greater than 0.2 above the average was considered potentially problematic dependency.22
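The residual-correlation check done in Excel can be sketched as follows, assuming the standardized person-item residuals have been exported; the residual matrix here is simulated, so few if any pairs should be flagged.

```python
import numpy as np

def flag_dependent_pairs(residuals: np.ndarray, margin: float = 0.2):
    """Flag item pairs whose residual correlation is more than `margin` above
    the average off-diagonal residual correlation (potential local dependency)."""
    corr = np.corrcoef(residuals, rowvar=False)            # items x items
    off_diag = corr[~np.eye(corr.shape[0], dtype=bool)]
    threshold = off_diag.mean() + margin
    flagged = []
    for i in range(corr.shape[0]):
        for j in range(i + 1, corr.shape[0]):
            if corr[i, j] > threshold:
                flagged.append((i, j, round(corr[i, j], 2)))
    return flagged

# Simulated person x item residual matrix (e.g. as exported from RUMM2030)
rng = np.random.default_rng(0)
residuals = rng.normal(size=(200, 10))
print(flag_dependent_pairs(residuals))
```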
We identified individuals and items with ‘misfit’ to the Rasch model using chi-square statistics and by exploring the fit residuals. Items with statistically significant chi-square probabilities at the 0.01 significance level do not fit the model, and items with fit residuals outside the ±2.5 range are considered potentially problematic.22 Similarly, individuals with a fit residual outside ±2.5 were considered as not fitting the model. Such extreme values can be an indication of, for example, guessing or copying, or that the item-set is not appropriate.
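A minimal sketch of this misfit screen, assuming item (or person) fit residuals and chi-square probabilities are available from the software output; the numbers are hypothetical.

```python
import numpy as np

def flag_misfit(fit_residuals, chi2_p_values, z_limit=2.5, alpha=0.01):
    """Return indices of items (or persons) whose fit residual lies outside
    +/- z_limit or whose chi-square probability is below alpha."""
    fit_residuals = np.asarray(fit_residuals)
    chi2_p_values = np.asarray(chi2_p_values)
    return np.where((np.abs(fit_residuals) > z_limit) | (chi2_p_values < alpha))[0]

# Hypothetical fit statistics for five items
print(flag_misfit([0.3, -2.8, 1.1, 3.2, -0.4], [0.40, 0.02, 0.75, 0.005, 0.60]))
```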
We examined differential item functioning (DIF) by age, gender, and country of residence. Our objective was to include only items that could be applied fairly across these demographic variables. Ideally, all items in the Claim Evaluation Tools item bank are expected to work in the same way for men and women and across age groups. There are two types of DIF. Uniform DIF is when the difference between groups for an item is systematic, for example adults having systematically higher ability than lower secondary school students. This is less problematic (when it is known) than non-uniform DIF, where the difference between groups on an item is inconsistent across ability groups.21 For this study, we considered non-uniform DIF unacceptable. We predicted that we would find uniform DIF by country, as we know from other studies that there are differences in ability-by-concept across countries.14 Uniform DIF by gender and age was unwanted but would be considered in relation to the other findings from the Rasch analysis. The reason for this was that the questionnaire will be used for measuring differences between an intervention and a comparison group, and systematic DIF would therefore not be a problem in our study.
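RUMM2030 assesses DIF through a two-way analysis of variance of standardized residuals by person factor and class interval. A comparable check could in principle be run as sketched below, where a significant main effect of the person factor suggests uniform DIF and a significant interaction suggests non-uniform DIF; the residuals and grouping are simulated, and statsmodels is an assumed stand-in for the RUMM2030 routine.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Simulated standardized residuals for one item: 2 groups x 3 class intervals x 10 persons
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "residual": rng.normal(size=60),
    "group": np.repeat(["female", "male"], 30),             # person factor (e.g. gender)
    "class_interval": np.tile(np.repeat([1, 2, 3], 10), 2), # ability groups
})

# Main effect of group -> uniform DIF; group:class_interval interaction -> non-uniform DIF
model = smf.ols("residual ~ C(group) * C(class_interval)", data=df).fit()
print(anova_lm(model, typ=2))
```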
The item characteristic curve plot displays the expected and observed scores for the class intervals at the different ability levels. We examined the item characteristic curve for each item and noted items that showed under-discrimination or over-discrimination, or that had several deviating ability groups.22 We considered items with under-discrimination and classic over-discrimination for removal. Marginal over-discrimination was not considered a problem for our purposes.
For the polytomous items we explored the threshold ordering (fit to the expected logical order of the response options) to check for disordered thresholds. Disordered thresholds suggest that the scoring categories are not progressing as expected, and that the item is not working properly.22
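As a simple illustration, ordered thresholds should increase monotonically across the response categories; the check below uses invented threshold estimates in logits.

```python
import numpy as np

def thresholds_ordered(thresholds) -> bool:
    """Return True if the category thresholds increase monotonically (ordered)."""
    return bool(np.all(np.diff(np.asarray(thresholds, dtype=float)) > 0))

# Hypothetical threshold estimates (logits) for two polytomous items
print(thresholds_ordered([-1.4, -0.2, 0.9, 1.8]))  # True: ordered
print(thresholds_ordered([-0.5, 0.7, 0.2, 1.1]))   # False: disordered
```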
This study follows the STROBE reporting standards.38
A total of 1,671 responses were entered into the analysis, distributed across 10 ability groups identified by the RUMM2030 software. Of the respondents, 49% were women and 40% were young people (under 18); 35% were from Kenya, 34% from Uganda, and 31% from Rwanda. Missing data were minimal (0.004%) and thus had no impact on the analysis.
The person-item distribution shows that both item-sets were well targeted (the mean person location was -0.218 for the ability item-set and 0.084 for the Likert item-set).
For the ability items, the person fit residual was -0.204 (SD 0.741), showing satisfactory fit to the model. The item fit residual was 0.712 (SD 2.235), which warranted further investigation in subsequent analyses.
For the Likert items, the item fit residual was 0.543 (SD 0.938), indicating reasonable fit. However, the high standard deviation for the person fit residual (-0.546, SD 1.783) suggested some misfit to the model.
Both item-sets were found to be reliable, with a Cronbach’s alpha of 0.72 and 0.79 for the ability and Likert item-sets respectively.
In the analysis of the ability item-set, we identified one person with a highly negative fit residual (adult, female, Rwanda) and two with highly positive fit residuals (male, young person, Rwanda and adult, female, Rwanda). Of the 27 MCQs, three items had extreme negative values, and four items had extreme positive values.
There were no items with extreme values in the Likert item-set. However, many misfitting persons were identified: 296 individuals with high negative fit residuals and two individuals with high positive fit residuals.
The majority of the ability items had a good fit to the item characteristic curve (Figure 2). Four items showed evidence of classic over-discrimination, of which two also had very high negative fit residuals (Figure 3). Four items showed signs of classic under-discrimination and were considered candidates for removal (Figure 4). Most Likert items showed a good fit; two items were slightly over-discriminating, but this was considered acceptable.
In the DIF analysis of the 27 ability items, two items showed uniform DIF by gender (one item where males performed systematically better and one where females did). Three items showed DIF by age, of which two were uniform (one item where young people performed better and one where adults did). One item had non-uniform DIF by age. Uniform DIF by country was found for 10 items, with the ranking of the three countries differing across these items.
There was no DIF by gender, age, or country in the analysis of the Likert item-set.
In the Likert item-set, two items were found to be slightly over-discriminating, but this was considered acceptable. The remaining items showed very good fit.
When exploring the ordering of the thresholds, we found that the three Likert items evaluating intended behaviour had disordered thresholds. A reanalysis suggested that these items could be improved by dichotomising the response options. The four items evaluating self-efficacy showed a good fit.
In the analysis of the ability item-set, 8% of the t-tests were significant. For the Likert item-set, 5% of the t-tests were significant, which we considered satisfactory and consistent with unidimensionality.
There were no item-pair residual correlations more than 0.2 above the average value in either item-set, suggesting no important redundancy.
The outcome measure to be used in the final trial was reduced to include only two MCQs for each Key Concept. We removed the ability items with suboptimal fit. Since the Likert items were all found to have good fit, they remained unchanged.
The revised outcome measure has been published as extended data.37
Overall, both item-sets were found to have good fit to the Rasch model and to be suitable for our target audience. The reliability of both item-sets was also good. Observations of individual item and person fit provided guidance on how to improve the design and administration of the two item-sets.
When observing each individual item’s fit to the Rasch model in the ability item-set, we identified some items that could be removed to improve the questionnaire. Of the 27 ability items, three had differential item functioning by age or gender, of which only one was highly problematic (non-uniform). As expected, some items also showed differential item functioning by country. Possible explanations for this are differences in cultural beliefs or differences in the curricula taught in schools. Considering that the differential item functioning by country was uniform, and that we are planning to use the outcome measure in randomised trials comparing effects between comparison groups within each specific context, this was not considered a concern for our purposes. We also identified some items with poor measurement properties by observing the item characteristic curves. Together with the item showing non-uniform DIF, these were considered for removal from the final outcome measure to be used in our upcoming trial.
In the analysis of the Likert item-set, we identified two issues that needed to be addressed. The three items measuring intended behaviour showed disordered response categories, and we identified a high number of people with extreme values. This can be an indication that some of the respondents had difficulty answering these questions. As noted in the methods, we observed that some people in the studied contexts were unfamiliar with intended behaviour and self-efficacy questions. The results of this study suggest that we need to plan carefully for how this item-set is administered and ensure that people are adequately instructed about the format and purpose of these questions. The results also suggest that we should either redesign the attitude items so that the response options are dichotomised (with three response options instead of five) or dichotomise the answers by collapsing the response options in the analysis following the trial. We did the latter in the trial of the IHC primary school resources, by combining ‘likely’ (or ‘difficult’) and ‘very likely’, and combining ‘unlikely’, ‘very unlikely’, and ‘I don’t know’.26
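A minimal sketch of this collapsing approach, assuming raw response labels as in the questionnaire (the variable name and example values are invented):

```python
import pandas as pd

# Collapse the five response options into two categories, mirroring the approach
# used in the trial of the IHC primary school resources.
mapping = {
    "very likely": 1, "likely": 1,
    "unlikely": 0, "very unlikely": 0, "I don't know": 0,
}
intended_behaviour = pd.Series(["likely", "I don't know", "very unlikely", "very likely"])
print(intended_behaviour.map(mapping))
```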
We found no important redundancy in the item-sets (dependency between item pairs), and both item-sets appear to measure only one underlying trait (unidimensionality). The ability item-set had a somewhat higher percentage of significant t-tests than the 5% threshold.24 Considering that this is the first time we have observed this in the many Rasch analyses we have done on instruments developed from the Claim Evaluation Tools item bank, we considered the magnitude of multidimensionality observed in the ability item-set acceptable.12–14
The overabundance of unreliable treatment claims that accompanied the COVID-19 pandemic has highlighted the need for facilitating critical thinking as an important public health initiative.5 This is essential to protect people against unreliable treatment claims and enable them to make informed treatment choices.
Health literacy is defined in many ways, but typically includes the ability to think critically (sometimes referred to as critical health literacy).27,28 A conceptual framework is helpful when developing assessment tools.29 Health literacy is often measured using self-report.30 Furthermore, many of the available health literacy instruments aim to capture other domains of health literacy, such as functional and social literacy.30,31 In addition to measuring perceptions of one’s own abilities (self-report or self-efficacy), it is important to measure abilities objectively (performance). The association between self-report and performance is not straightforward.32 The Health Literacy Tool Shed, a database of health literacy measures, has indexed 16 instruments intended for adolescents that evaluate an aspect of health literacy using an objective measurement of performance, of which eight are available in English.30 The Claim Evaluation Tools have a narrower scope than most of these and focus on one critical skill: the ability to assess treatment claims and make informed treatment choices. Although the broader instruments can provide information about people’s general health literacy skills, applying a more specific assessment tool in, for example, mapping studies makes it easier to design interventions targeting the specific gaps identified.
One limitation of this study is that the adult sample included more people with higher education than the general population in each of the three settings. Thus, the test might be more difficult for people with less education. However, although participants with higher education are somewhat more likely to answer the ability questions correctly, there does not seem to be a strong association.14,33 Another limitation is that the findings of this study are specific to the three East African countries, and the validity and reliability of the item-sets in other contexts are uncertain. The item-sets validated in this study should therefore undergo further psychometric testing if used elsewhere.
The strategy of combining pilot testing with Rasch analysis has been found to be a robust method for developing measurement tools in several contexts.10–13 An important strength of this study is that we used explicit and transparent methods, following the principal steps recommended for Rasch analysis.21–23 Another strength is that we were able to recruit enough people despite the fact that all three countries were burdened by the pandemic during data collection. The results of this study, and the subsequent redesign of the questionnaire based on them, ensure that the ability and Likert item-sets are valid and reliable outcome measures for the randomised trials of the IHC lower secondary school intervention in all three countries.
To our knowledge, this is the first measurement tool developed for measuring ability, intended behaviours, and self-efficacy for critical thinking about treatments in Kenya and Rwanda, as well as in Uganda. The two item-sets we evaluated in this study were found to be reliable and to have satisfactory measurement properties.
The findings from our analysis were used to redesign and improve the ability item-set. The results also informed guidance for how the Likert item-set should be administered and analysed.
Underlying data
Zenodo: Critical thinking about treatment effects in Eastern Africa. Data set uncoded. [Data set]. https://doi.org/10.5281/zenodo.7680780.34

Extended data
Zenodo: Study protocol: Assessment of validity and reliability of a questionnaire based on the Claim Evaluation Tools item bank in Uganda, Kenya and Rwanda. https://doi.org/10.5281/zenodo.7680616.35
Zenodo: Critical thinking about treatment effects in Eastern Africa. The Critical Thinking about Health test (before Rasch analysis). https://doi.org/10.5281/zenodo.7756037.36 This project contains the following extended data:
• Critical thinking about treatments test – Vis - Nettskjema.pdf (the original test validated as part of this study).
Zenodo: Critical thinking about treatment effects in Eastern Africa. The Critical Thinking about Health test. https://doi.org/10.5281/zenodo.7680606.37

Reporting guidelines
Zenodo: STROBE checklist for ‘Critical thinking about treatment effects in Eastern Africa: development and Rasch analysis of an assessment tool’. https://doi.org/10.5281/zenodo.7680586.38
Data are available under the terms of the Creative Commons Attribution 4.0 International license (CC-BY 4.0).
We would like to thank Sarah Rosenbaum for providing her expertise in designing the questionnaire. Furthermore, we would like to thank the rest of the Informed Health Choices team for their valuable feedback and discussions in planning and conducting this study. We are also very grateful to all the secondary school students and adults who took time to contribute to this study, and to the Ministry of Education and the school administrations for allowing the students to participate.