Keywords
brain training, cognitive training, cognitive enhancement
'Preliminary' was added to the title.
Added more theoretical background to justify the investigation of cultural differences in attitudes to brain training games.
Added a discussion of the differences between Lumosity and Brain Age: Train Your Brain in Minutes a Day!
Recently, there has been much interest in so-called “brain training” (BT) applications and games. These programs are typically marketed to consumers as enjoyable, interactive experiences that, if used regularly, are claimed to improve a range of cognitive skills, such as attention, memory, and multitasking ability (Simons et al., 2016). The potential benefits of such training, if effective, are numerous. For example, training executive function skills such as working memory and task switching could potentially lead to improved outcomes in education, quality of life, and employment for the general population (Diamond, 2013). In addition, people with cognitive deficits, such as those with intellectual disabilities or age-related cognitive decline, could also benefit from effective cognitive training software (Robb et al., 2018; Buitenweg et al., 2012).
Research on the effectiveness of various types of cognitive training has found evidence that it can lead to improvements in tasks that bear some resemblance to the training (“near transfer”), but limited evidence that these improvements transfer to distantly related tasks (“far transfer”) or indeed to everyday life (Simons et al., 2016; Sala et al., 2019; Aksayli et al., 2019). These findings suggest that theories of transfer that emphasize the importance of overlap between the training and the target skills (e.g., Gobet, 2016; Taatgen, 2013; Oei and Patterson, 2014) may provide the best account of the mechanisms by which cognitive training is effective. Therefore, a detailed theoretical understanding of the overlap between the training and the desired outcome may be an important factor in the design of effective, tailored cognitive training programs in the future (see Smid et al., 2020, for this and other recommendations).
As part of a more comprehensive science of cognitive training, it is also important to investigate the attitudes and habits of the people who will potentially use the training. Individual differences in personality, motivation, expectations, and other factors are likely to play a role in determining a user’s engagement with a training program (Smid et al., 2020). Regular engagement is obviously an important factor in any kind of training; however, attrition is a commonly reported problem in trials of cognitive training software (Corbett et al., 2015; Robb et al., 2019), and at least one commercial BT program (Cogmed) assigns users a coach to ensure that they regularly engage with the software. Understanding how and why people use cognitive training programs may therefore be an important additional factor in determining their effectiveness.
Previous research has found that participants typically have positive beliefs about the effectiveness of BT. Torous et al. (2016) found positive beliefs about the effectiveness of BT mobile applications in young American consumers, both in participants who had used BT programs and those who had not. Other research found similar results in parents of children with intellectual disabilities (who may benefit from cognitive training): parents believed that BT could benefit their children and expressed positive attitudes towards supporting such training. Again, these attitudes were not related to how much experience the parents had with BT apps or games (Robb et al., 2018). It has also been shown that people’s expectations about the effectiveness of BT can be influenced by the information they receive about such programs. Rabipour and Davidson (2015) and Rabipour et al. (2018) found that participants’ expectations about the effectiveness of BT at baseline could be subsequently raised or lowered by presenting them with positive or negative messages about BT. Finally, Ng et al. (2020) found that frequency of engagement was only weakly correlated with perceived cognitive benefit for a range of activities, including BT. While this research reveals important information about the attitudes, habits, and expectations of a range of potential consumers of BT, it is primarily focused on Western users. It is widely recognized that much research involving human subjects may be biased towards certain demographics (Henrich et al., 2010). In the specific case of brain training apps, previous findings suggest the possibility of relevant cultural differences, particularly differences between people from Western and Asian backgrounds.
Firstly, cultural differences in technology acceptance may affect the use of brain training apps and games. Technology acceptance refers to the ways in which users adopt and use new technologies. Several models and theories have been proposed to explain this phenomenon, such as the Technology Acceptance Model (TAM) (Davis, 1989; Bagozzi et al., 1992), which holds that two important factors influencing an individual’s acceptance of any technology are perceived ease of use and perceived usefulness (Davis, 1989). Research has shown that cultural factors may influence technology acceptance. Jan et al. (2022) conducted a meta-analysis of studies on technology acceptance and the cultural dimensions proposed by Hofstede (2011). Hofstede conceptualised culture as the “collective mental programming” or “software of the mind” which distinguishes one group of people from another (Hofstede et al., 2005), and identified six dimensions along which cultures can vary (Hofstede, 2011). Based on their meta-analysis, Jan et al. (2022) proposed a conceptual model in which technology acceptance is directly affected by cultural dimensions such as individualism/collectivism, uncertainty avoidance, and long-term orientation. For example, in more individualistic societies such as the United States (Hofstede, 2001), an individual's intention to use a new technology such as a brain training app may depend largely on their perception of how the app can benefit them personally, while in more collectivist societies such as China or Japan (Hofstede, 2001), their perception of how it might benefit society, or perhaps their company, might be more important.
Secondly, research by Jaeggi et al. (2014) on individual differences in working memory training found that the success of the training was influenced by individuals’ motivation, pre-existing ability, and implicitly held theories of intelligence. The latter construct was assessed using the Theories of Cognitive Abilities Scale (Dweck, 1999), which classifies participants according to whether they view intelligence as fixed (innate and difficult to change), or something that is incremental (malleable, and can be changed based on experience). The authors reported a difference between those with fixed beliefs and those with incremental beliefs in terms of how well the working memory training transferred to a test of visuospatial reasoning (greater transfer for those with incremental beliefs). While the cultural background of participants was not reported in this study (all were recruited from a US university and the surrounding community), other research using the Theories of Cognitive Abilities Scale has found evidence for cultural differences in attitudes to intelligence. Jose and Bellamy (2012) investigated how parental theories of intelligence influence how their children engage with academic tasks. Recruiting participants from New Zealand, the US, China, and Japan, they discovered that the view that intelligence is malleable was most strongly supported by American parents, followed by those from New Zealand and China, while the view was least strongly supported by Japanese parents. Taking these two results together, it is reasonable to question if differences between Japanese and American individuals regarding their implicit theory of intelligence could affect how they perceive and use brain training apps, and how such training transfers to other contexts.
In the case of understanding attitudes and habits regarding BT, the largest previous study was conducted in the US (Torous et al., 2016). Following the points in the previous paragraphs, it is important that data from other countries, particularly those which evidence suggests may have differing attitudes and habits, are collected and analysed.
Japan represents a large group of potential consumers of cognitive training who may have different habits or attitudes than, for example, those in the US. In addition to the previously discussed differences regarding Hofstede’s cultural dimensions and attitudes to intelligence, previous research has found differences between users from Japan and the US in terms of how they access and use mobile apps (Lim et al., 2014), review video games (Zagal & Tomuro, 2013), and their preferences for the design of websites (Cyr et al., 2005). Japan also has a developed BT market, with popular BT games having been released in the country for several years (Fuyuno, 2007; Chancellor & Chatterjee, 2011). Taken together, these points suggest the possibility that attitudes and habits regarding BT may differ between users in Japan and other countries. Therefore, the main purpose of this paper is to provide preliminary data on the habits and attitudes of Japanese people regarding BT apps and games, thus expanding our knowledge of how and why such programs are used around the world and laying the groundwork for future research in this area.
This cross-sectional study used a Japanese translation of the questionnaire used by Torous et al. (2016) with minor adaptations. Before translation, the original questionnaire was adapted in two ways. Firstly, while Torous et al.’s (2016) questionnaire specifically focused on using smartphone apps, the present study also included questions (and adapted the wording of questions) to refer to games consoles. This was because it was expected that Japanese-produced BT programs would be popular among Japanese people, and some such software is only available on games consoles. Secondly, when asking participants which cognitive training programs they had used, the list of options was updated to reflect apps and games available in Japan.
This questionnaire was then translated into Japanese by two professional translators, who both independently produced separate translations. Professional translators were contracted through Gengo, a web-based human translation platform. A native Japanese speaker familiar with the research project merged these translations; differences in the two translations were resolved through discussion between this person and the author of the paper. This resulted in a final Japanese version of the questionnaire. Before being used, this version was translated back into English by a third professional translator, and this version was compared with Torous et al.’s (2016) original questionnaire. There were some minor differences in the wording of the original questionnaire and the back-translation. For example, “duration” (original) became “period of time” (back-translation); the phrase “For the purpose of this survey, we will call these ‘brain training apps/games’” (original) became “In this survey, we will refer to these as ‘brain training apps/games’” (back-translation); and the question “Do you own a smartphone?” (original) became “Do you have a smartphone?” (back-translation). It was judged that none of these minor differences would affect the meaning of any of the questions. The questionnaire can be viewed in full in both English and Japanese as extended data on the Open Science Framework (Robb, 2021).
Participants were recruited using CrowdWorks, a Japanese crowdsourcing website. All registered CrowdWorks users were deemed eligible to participate; there were no additional inclusion or exclusion criteria. Crowdsourcing websites have been shown to be viable methods for recruiting participants for questionnaire research (Behrend et al., 2011; Peer et al., 2017). Previously, Majima et al. (2017) compared participants recruited via CrowdWorks with Japanese student samples and found that there were relatively small differences in some personality traits, and that the CrowdWorks participants were (as would be expected) more diverse in terms of age and employment history. The translated questionnaire was uploaded to CrowdWorks and responses were collected during December 2017. At the start of the questionnaire, the purpose of the research was explained, and participants were informed that they were not obliged to take part, that their responses would be used for research purposes, and that by continuing with the questionnaire they would be indicating their consent to participate. No identifying information about the participants was collected. All participants were paid 30 JPY (approximately 0.27 USD in December 2017) to complete the questionnaire, whether their response was used in the final analyses or not. The research was conducted according to the recommendations of the Human Research Ethics Committee (Sciences) at University College Dublin, where the lead author of the paper was employed at the time of the research. The protocol was deemed to be exempt from full ethical review as the data were collected anonymously, the participants were not from a vulnerable group, and they were not placed at any risk during the research.
Assuming that the number of people aged 16 and over in Japan is approximately 110,000,000 (Statistics Bureau of Japan, n.d.), and that 50% have used BT (based on results from Torous et al., 2016), with a margin of error of 5% and a confidence level of 99%, the ideal sample size was calculated to be 664. Given that previous research has highlighted concerns with unreliable responses and high attrition rates in crowdsourced samples (Keith et al., 2017), 1000 responses were collected.
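The figure of 664 follows from the standard sample-size formula for estimating a proportion, n = z²p(1 − p)/e². The following is a minimal, purely illustrative sketch of the calculation reported above:

```python
from math import ceil
from statistics import NormalDist

def sample_size(p: float, margin: float, confidence: float) -> int:
    """Minimum n needed to estimate a proportion p within +/- margin."""
    # Two-tailed z-score for the requested confidence level
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)
    return ceil(z**2 * p * (1 - p) / margin**2)

# 50% assumed prevalence, 5% margin of error, 99% confidence
print(sample_size(0.5, 0.05, 0.99))  # → 664
```

With p = 0.5 (the most conservative assumption), a 5% margin of error, and the 99% two-tailed z-score of approximately 2.576, this yields 664, matching the target sample size stated above.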
After collection, the data were inspected, and potentially unreliable responses were removed. Unreliable responses included those with inconsistent answers to similar questions, or the wrong answer to the simple sum of nine plus four (included to check that participants were diligently reading and responding to the questions). There were no missing data in the final dataset used for analysis.
Descriptive statistics were used to investigate smartphone and games console ownership of participants; usage of health and fitness apps; concerns about BT; BT apps/games used by participants; and participants' beliefs about the effectiveness of BT. Phi-coefficients were calculated to determine if there were associations between participants' beliefs about whether BT could lead to cognitive/emotional improvements (specifically in thinking ability, attention, memory, and mood), whether they had used BT, and whether they thought BT apps/games had negative side effects. Following Torous et al. (2016), a score was calculated for each participant measuring how positively they felt about BT. The difference in this score between participants who had used BT and those who had not was investigated using a Mann-Whitney U-test. Pearson correlation coefficients were calculated between this score, the number of BT apps/games participants had used, and the longest period of time participants had used BT. Finally, gender differences were also investigated. These analyses were performed using JASP versions 0.11.1 and 0.14.1.
A total of 1000 responses were received. Of these, one response was excluded as the age in years was entered as 336, while six responses were removed as they provided an incorrect answer to the sum of nine and four. A further 175 participants’ responses were removed as they gave inconsistent answers about their history of using BT apps or games. There were four kinds of inconsistency. Firstly, 13 participants answered “yes” to item 8 (“Have you ever used an app or game that claims to increase memory, concentration, attentiveness, or other cognitive abilities? In this survey, we will refer to these as ‘brain training apps/games.’”) but answered “I have never used one” to item 16 (“What is the longest period of time you have used a brain training app/game? If you have never used one, please select ‘I have never used one.’”). Secondly, 11 participants answered “yes” to item 8 but did not enter any brain training apps or games that they had used when asked to do so in item 15. Thirdly, 111 participants answered “no” to item 8, but indicated they had used BT apps or games in item 16 (i.e., they entered a period of time they had used apps or games). Fourthly, 152 participants answered “no” to item 8, but entered apps or games they had used when asked to do so in item 15. Note that some participants' responses were inconsistent in more than one of these ways. With these responses removed, the final sample used for analyses contained responses from 818 participants (524 female, 294 male; mean age 36.1 years; standard deviation 9.5 years). The underlying data can be accessed on the Open Science Framework (Robb, 2021).
Figure 1 shows the devices and kind of apps owned by participants, divided into seven age categories. In all age groups, at least 70% of participants owned a smartphone, with the highest rate of smartphone ownership (94.12%) in participants aged 30 years and under. Games console ownership was approximately 40% in the 20-29 years, 30-39 years, and 40-49 years categories.
The most common concerns participants had about BT apps and games were the cost of the product, the time required to use them, and a lack of certainty regarding their effectiveness. Most participants did not express concerns about the safety of their health data or whether the apps or games carried a medical recommendation (Figure 2).
The most-used training programs were produced by Nintendo and released on the Nintendo DS handheld games console. Over half the participants reported having used Nou wo kitaeru otona no DS training (released in the US as Brain Age: Train Your Brain in Minutes a Day! and in Europe as Dr. Kawashima’s Brain Training: How Old Is Your Brain?) and just under a quarter of participants reported using the follow-up game (Motto nou wo kitaeru otona no DS training; US: Brain Age 2: More Training in Minutes a Day! Europe: More Brain Training from Dr. Kawashima: How Old Is Your Brain?). Of the remaining programs, all but one were used by fewer than 10% of participants. Only 19 participants (2.32%) reported having used Lumosity (Figure 3).
Participants indicated positive perceptions of BT apps and games, believing that they could improve thinking ability (79.58%), attention (66.26%), memory (78.61%), and mood (73.35%). Phi-coefficients were calculated for all combinations of the binary variables (i.e., yes/no questions) regarding participants’ views about whether BT could improve thinking ability, attention, memory, and mood, as well as the binary variables regarding whether they had used BT, and if they thought BT had negative side effects. There were weak to moderate positive correlations between (1) thinking ability and attention (ϕ = 0.376, p < 0.001), (2) thinking ability and memory (ϕ = 0.453, p < 0.001), and (3) attention and memory (ϕ = 0.378, p < 0.001) (Table 1) (see Akoglu, 2018, and Schober et al., 2018, for discussion of interpretation of strength of correlations).
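For two dichotomous variables, the phi coefficient can be computed directly from the 2×2 contingency counts as φ = (n₁₁n₀₀ − n₁₀n₀₁)/√(n₁.n₀.n.₁n.₀). The sketch below uses invented yes/no responses; the data shown are hypothetical, not the study responses:

```python
from math import sqrt

def phi_coefficient(x: list[int], y: list[int]) -> float:
    """Phi for two binary (0/1) variables, from the 2x2 contingency counts."""
    n11 = sum(1 for a, b in zip(x, y) if a == 1 and b == 1)
    n10 = sum(1 for a, b in zip(x, y) if a == 1 and b == 0)
    n01 = sum(1 for a, b in zip(x, y) if a == 0 and b == 1)
    n00 = sum(1 for a, b in zip(x, y) if a == 0 and b == 0)
    denom = sqrt((n11 + n10) * (n01 + n00) * (n11 + n01) * (n10 + n00))
    return (n11 * n00 - n10 * n01) / denom if denom else 0.0

# Hypothetical responses: 1 = "yes", 0 = "no"
improves_thinking = [1, 1, 1, 0, 1, 0, 1, 1]
improves_memory   = [1, 1, 0, 0, 1, 0, 1, 1]
print(round(phi_coefficient(improves_thinking, improves_memory), 3))
```

Because phi is equivalent to a Pearson correlation on 0/1-coded data, the same weak/moderate/strong benchmarks discussed by Akoglu (2018) apply.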
All variables are dichotomous yes/no items. Chi-square tests showed that all associations were positive except for that between “Has used brain training apps/games” and “Apps/games have negative side effects”. Significant at p = 0.01 level marked with *.
Following Torous et al. (2016), a score was calculated for each participant measuring how positively they felt about BT. Participants were given one point for each positive answer to the four questions about whether they thought BT improved thinking ability, attention, memory, and mood, and one point for a negative answer to the question about whether they thought BT apps and games have negative side effects. The maximum score of five indicated a participant thought BT improved all four factors and had no side effects, while the minimum score of zero indicated a participant thought BT did not improve any of the four factors and had negative side effects. This score was significantly higher among respondents who indicated that they had used BT apps or games (Mann-Whitney U test, U = 37757, p < 0.001); the rank biserial correlation was -0.213, indicating a weak effect size. This score showed negligible or weak correlations (Pearson correlation) with both the total number of apps/games a participant had used (Pearson’s r = 0.163, p < 0.001) and the duration they had used BT apps/games (Pearson’s r = 0.237, p < 0.001) (Table 2).
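The scoring rule described above can be made concrete with a short sketch. The participant records and field names below are hypothetical, and the U statistic is computed by direct pair counting rather than with the JASP implementation used in the study:

```python
def positivity_score(resp: dict) -> int:
    """One point per 'yes' (1) to the four improvement items, plus one
    point for a 'no' (0) to the side-effects item; range 0-5."""
    score = sum(resp[k] for k in ("thinking", "attention", "memory", "mood"))
    return score + (1 - resp["side_effects"])

def mann_whitney_u(x: list[int], y: list[int]) -> float:
    """U statistic for sample x: pairs where x > y, with ties counting 0.5."""
    return sum((a > b) + 0.5 * (a == b) for a in x for b in y)

# Hypothetical responses: 1 = "yes", 0 = "no" (not the study data)
users = [  # participants who had used BT
    {"thinking": 1, "attention": 1, "memory": 1, "mood": 1, "side_effects": 0},
    {"thinking": 1, "attention": 0, "memory": 1, "mood": 1, "side_effects": 0},
]
non_users = [  # participants who had not
    {"thinking": 1, "attention": 0, "memory": 0, "mood": 1, "side_effects": 1},
    {"thinking": 0, "attention": 0, "memory": 1, "mood": 0, "side_effects": 1},
]
u = mann_whitney_u([positivity_score(r) for r in users],
                   [positivity_score(r) for r in non_users])
print(u)  # 4.0 here: both users score above both non-users
```

In practice the p-value and rank biserial correlation would come from a statistics package, as in the analysis reported above; this sketch only illustrates how the score and U statistic are defined.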
Significant at p = 0.01 level marked with *.
Finally, gender differences were also examined, as suggested by reviewer 1. There were significant differences between males and females in terms of smartphone ownership (females more likely to own), games console ownership (males more likely to own) and the belief that BT apps/games have negative side effects (females more likely to think so). Phi-coefficients showed that these associations were all weak or negligible (ϕ ranged from 0.097 to 0.12). These results are shown in Figure 4.
The results of the present study suggest that a high proportion of Japanese consumers may have positive perceptions of the potential benefits of BT apps and games, comparable to or (in the case of positive effects on mood) higher than the rate in US consumers (Torous et al., 2016). While there were correlations between positive perceptions about the effects of BT on specific cognitive factors (thinking ability and attention, thinking ability and memory, and attention and memory), these were weaker than those found in US consumers (Torous et al., 2016). Similarly to Torous et al.’s (2016) findings in US consumers, most participants in the present study did not indicate concerns about clinical recommendations, privacy of health data, or negative side effects when considering BT games. Rather, the cost of apps/games, the time involved, and uncertainty about their effectiveness were the main concerns in the present sample. While this study only presents preliminary findings, it is interesting to note the similarities with the US participants in the previous study. Previously, it was shown that there were only minor demographic differences (in terms of gender and level of education) in beliefs about benefits of BT (Ng et al., 2020). It may be that there are similarly minor differences between people from different socio-cultural backgrounds, although further research to investigate this specific hypothesis would be required.
Lumosity, which was used by 70% of US consumers in the previous study by Torous et al. (2016), was only used by 2.3% of the Japanese participants. This large difference in the number of participants using Lumosity may be partly explained by the fact that the app was only released in Japanese in December 2014. Among the Japanese participants surveyed here, the most popular brain training apps were games made by Nintendo for the handheld Nintendo DS console. However, since the previous study (Torous et al., 2016) only focused on smartphone apps, it does not provide any information about how widely used Nintendo BT games are among US consumers. Nintendo BT games are popular globally, however: Brain Age: Train Your Brain in Minutes a Day! (the most used game among Japanese participants in the present study) was among the 10 best-selling video games of 2006 in the US. It is therefore likely that many of the participants in Torous et al.’s (2016) study also had experience of using this BT game.
However, this finding does raise an important question: are there differences between the US and Japan in terms of which brain training apps or games are most widely used? If so, a further important area of study would be to compare the apps or games used in the two countries in terms of, for example, interface design, how they are advertised and sold, and what cognitive activities users engage in while using them. For example, Lumosity (the most used game in the US sample studied by Torous et al., 2016) can be played on most smartphones, tablets, or computers, uses a subscription-based payment system, and involves playing a variety of simple games which are claimed to target specific cognitive skills, such as memory, attention, and flexibility (see www.lumosity.com). On the other hand, Brain Age: Train Your Brain in Minutes a Day! (the most used game in the present study) is played on a handheld console (the Nintendo DS), is purchased with a single payment, and the gameplay involves activities such as performing calculations, reading texts aloud, counting the syllables in words, and playing Sudoku (see Nintendo of America, 2006). There are therefore potential differences between these two brain training apps/games which may impact both attitudes and habits, but also their effectiveness. However, to fully understand this, it would be necessary to conduct empirical research on how widely used these programs are in the US and Japan, as well as a comprehensive analysis of the various relevant aspects of the two products, such as their design, marketing, payment system, and so on.
The results presented are similar to previous findings that people’s perceptions of the positive effects of BT are not strongly related to their experience of using BT. Torous et al. (2016) found that US consumers’ positive beliefs about BT were only weakly correlated with the number of BT apps they had used. Similarly, Rabipour et al. (2018) found that people with experience of BT had similar expectations about its effectiveness to people with no experience. In other research, Robb et al. (2018) found that parents of children with intellectual disabilities had positive beliefs and attitudes regarding BT and had high intentions to support the use of BT by their children, despite the fact that the sample had very little experience with BT programs. There is therefore a continuing lack of evidence that experience of using BT is associated with positive beliefs about the effectiveness of BT. The fact that participants in both the present and previous studies (Rabipour and Davidson, 2015; Torous et al., 2016; Rabipour et al., 2018; Robb et al., 2018) have very positive beliefs about the effectiveness of brain training (whether or not they have actual experience of using BT), combined with the lack of evidence that BT is actually effective (Simons et al., 2016; Sala et al., 2019; Aksayli et al., 2019), illustrates the importance of investigating the role of psychological factors such as motivation, effects of being observed during training and testing, and placebo effects in BT research. It has been shown that users' perceptions about BT can be relatively easily influenced by biased messages regarding their effectiveness (Rabipour and Davidson, 2015; Rabipour et al., 2018), and placebo effects have been found in previous BT research (Boot et al., 2013; Foroughi et al., 2016). Future trials of BT programs would benefit from accounting for such potential confounding factors.
To facilitate a direct comparison between Japanese and American users of BT, this study used a direct translation of the questionnaire developed by Torous et al. (2016). In this questionnaire, the items referring to participants’ positive and negative beliefs about BT (e.g., “Do you think brain training apps and games can improve memory?”) were phrased as questions requiring yes/no answers. However, in retrospect, it may have been more informative to adapt the questionnaire to have Likert-style responses. This would still have allowed some comparison with previous research but could have also facilitated more nuanced analysis of the results. This represents the major limitation with the current study.
While the use of crowdsourcing platforms such as CrowdWorks to recruit participants is becoming more common in recent research, there remain some potential limitations associated with this approach. Firstly, it is recognized that crowdsourced participants may not always be representative of the population of interest (Stewart et al., 2017). In the present study, this issue is most obvious when considering the ages of the participants: in 2015, 26.6% of the Japanese population were over 65 (Statistics Bureau of Japan, n.d.), whereas in the sample analyzed here, only 9 of 818 participants were over 60. Given that BT is often considered as a potential intervention for people with age-related cognitive decline (Buitenweg et al., 2012), the habits and attitudes of this demographic are clearly important. Secondly, it may be suggested that data collected from crowdsourcing platforms are of low quality (e.g., due to participants answering questions without fully reading or considering them). However, previous research has found that data collected via the crowdsourcing platform Amazon Mechanical Turk are of comparable quality to other methods (Kees et al., 2017). In the present study, several indicators were used to identify potentially automated or low-effort responses (see section Participants and Procedure). A total of 182 responses (18.2%) were removed before analysis, which is comparable to the rate of 14% automated and low-effort responses found in a study of Amazon Mechanical Turk workers by Buchanan & Scofield (2018).
A final limitation may arise from the translation and adaptation of the original questionnaire used by Torous et al. (2016). Although the translation process involved three independent translators and the production of a back-translation which was compared with the original questionnaire, it remains possible that the meaning of some items was altered in a way that affected the results.
Due to the low number of participants over 60 years old, the present study cannot provide any reliable information on the attitudes and habits regarding BT in the elderly population in Japan. Future research investigating this topic would be important. Future research could also benefit from using Likert-style items, as discussed in the section Limitations. Further research will also be required to understand more completely the factors that influence people’s attitudes towards BT, and the role of psychological confounders such as placebo effects in BT research. Finally, one important finding of the present study is the popularity of BT games produced by Nintendo, emphasizing the importance of games consoles in the BT market, at least in Japan. Since the previous major study of BT habits in Western users only focused on smartphone applications (Torous et al., 2016), future research should investigate if BT programs on games consoles, such as those produced by Nintendo, are as widely used in countries other than Japan.
This preliminary study contributes to a growing literature investigating the expectations, attitudes, and habits of potential users of brain training applications and games. There are two main findings. Firstly, similarly to previous research conducted in the US, Japanese consumers have positive beliefs about brain training which do not seem to be strongly associated with the amount of experience they have using such programs. Secondly, the most widely used brain training programs among Japanese participants are two games made by Nintendo and played on the handheld Nintendo DS console.
Open Science Framework: Attitudes and habits regarding brain training games and apps in Japan, https://doi.org/10.17605/OSF.IO/CW5AG (Robb, 2021).
This project contains the following underlying data:
Open Science Framework: Attitudes and habits regarding brain training games and apps in Japan, https://doi.org/10.17605/OSF.IO/CW5AG (Robb, 2021).
This project contains the following extended data:
- questionnaire-v3-final_jp.pdf (Questionnaire in Japanese)
- questionnaire-v3_en-back-translation.docx (Questionnaire back-translated to English)
Data are available under the terms of the Creative Commons Zero “No rights reserved” data waiver (CC0 1.0 Public domain dedication).
This work was supported by (1) KAKEN grant no. 18H05804 from the Japan Society for the Promotion of Science and (2) funding from the charity RESPECT and the People Programme (Marie Curie Actions) of the European Union's Seventh Framework Programme (FP7/2007-2013; REA grant agreement no. PCOFUND-GA-2013-608728), both to Nigel Robb.
Is the work clearly and accurately presented and does it cite the current literature?
Partly
Is the study design appropriate and is the work technically sound?
Partly
Are sufficient details of methods and analysis provided to allow replication by others?
Partly
If applicable, is the statistical analysis and its interpretation appropriate?
No
Are all the source data underlying the results available to ensure full reproducibility?
Yes
Are the conclusions drawn adequately supported by the results?
No
References
1. Borella E, Carbone E, Pastore M, De Beni R, et al.: Working Memory Training for Healthy Older Adults: The Role of Individual Characteristics in Explaining Short- and Long-Term Gains. Front Hum Neurosci. 2017; 11: 99.
Competing Interests: No competing interests were disclosed.
Reviewer Expertise: Neuroscience
Is the work clearly and accurately presented and does it cite the current literature?
Yes
Is the study design appropriate and is the work technically sound?
Yes
Are sufficient details of methods and analysis provided to allow replication by others?
Partly
If applicable, is the statistical analysis and its interpretation appropriate?
Yes
Are all the source data underlying the results available to ensure full reproducibility?
Yes
Are the conclusions drawn adequately supported by the results?
Yes
Competing Interests: No competing interests were disclosed.
Reviewer Expertise: Digital Game, Exergame
Is the work clearly and accurately presented and does it cite the current literature?
Yes
Is the study design appropriate and is the work technically sound?
Partly
Are sufficient details of methods and analysis provided to allow replication by others?
Partly
If applicable, is the statistical analysis and its interpretation appropriate?
Yes
Are all the source data underlying the results available to ensure full reproducibility?
Yes
Are the conclusions drawn adequately supported by the results?
Yes
Competing Interests: No competing interests were disclosed.
Reviewer Expertise: Social psychology, Experimental approach, Social cognition, Tele-mental health study
Competing Interests: No competing interests were disclosed.
Is the work clearly and accurately presented and does it cite the current literature?
Yes
Is the study design appropriate and is the work technically sound?
No
Are sufficient details of methods and analysis provided to allow replication by others?
Yes
If applicable, is the statistical analysis and its interpretation appropriate?
Yes
Are all the source data underlying the results available to ensure full reproducibility?
Yes
Are the conclusions drawn adequately supported by the results?
No
References
1. Torous J, Staples P, Fenstermacher E, Dean J, et al.: Barriers, Benefits, and Beliefs of Brain Training Smartphone Apps: An Internet Survey of Younger US Consumers. Front Hum Neurosci. 2016; 10: 180.
Competing Interests: No competing interests were disclosed.
Reviewer Expertise: Communication neuroscience, video games, learning, memory
Is the work clearly and accurately presented and does it cite the current literature?
Yes
Is the study design appropriate and is the work technically sound?
Partly
Are sufficient details of methods and analysis provided to allow replication by others?
Partly
If applicable, is the statistical analysis and its interpretation appropriate?
Partly
Are all the source data underlying the results available to ensure full reproducibility?
Yes
Are the conclusions drawn adequately supported by the results?
Yes
Competing Interests: No competing interests were disclosed.
Version history:
- Version 3 (revision): 21 Jul 2023
- Version 2 (revision): 25 Oct 2021
- Version 1: 23 Jan 2021