Attitudes and habits regarding brain training applications and games among Japanese consumers: a preliminary cross-sectional study [version 3; peer review: 3 not approved]

Background: While there is now a large amount of research investigating whether brain training applications and games are effective, there is less research on the expectations, attitudes, and habits of potential users of brain training programs. Previous research suggests that people generally hold positive beliefs about the effectiveness of brain training, and that these beliefs are not dependent on their level of experience with brain training. However, this research has primarily focused on Western participants. Methods: In the present study, a questionnaire was used to investigate the attitudes and habits of Japanese consumers towards brain training. The final sample contained responses from 818 people. In addition to descriptive statistics, correlation coefficients were calculated to determine whether there were relationships between variables relating to participants' beliefs about brain training and their experience of using brain training. Results: Participants had positive beliefs about the effectiveness of brain training. However, these beliefs were negligibly or weakly correlated with their level of experience of using brain training, both in terms of the number of programs used (Pearson's r = 0.163) and the duration of use (Pearson's r = 0.237). The most widely used brain training program (used by 52.93% of participants) was made by Nintendo for the handheld Nintendo DS games console. Conclusions: The research presented here supports previous findings that people's positive beliefs about brain training are not strongly associated with their experience of using it.


Introduction
Recently, there has been much interest in so-called "brain training" (BT) applications and games. These programs are typically marketed to consumers as enjoyable, interactive experiences that, if used regularly, are claimed to improve a range of cognitive skills, such as attention, memory, and multitasking ability (Simons et al., 2016). The potential benefits of such training, if effective, are numerous. For example, training executive function skills such as working memory and task switching could potentially lead to improved outcomes in education, quality of life, and employment for the general population (Diamond, 2013). In addition, people with cognitive deficits, such as those with intellectual disabilities or age-related cognitive decline, could also benefit from effective cognitive training software (Robb et al., 2018; Buitenweg et al., 2012).
Research on the effectiveness of various types of cognitive training has found evidence that it can lead to improvements in tasks that bear some resemblance to the training ("near transfer"), but limited evidence that these improvements transfer to distantly related tasks ("far transfer") or indeed to everyday life (Simons et al., 2016). These findings suggest that theories of transfer that emphasize the importance of overlap between the training and the target skills (e.g., Gobet, 2016; Taatgen, 2013; Oei and Patterson, 2014) may provide the best account of the mechanisms by which cognitive training is effective. Therefore, a detailed theoretical understanding of the overlap between the training and the desired outcome may be an important factor in the design of effective, tailored cognitive training programs in the future (see Smid et al., 2020, for this and other recommendations).
As part of a more comprehensive science of cognitive training, it is also important to investigate the attitudes and habits of the people who will potentially use the training. Individual differences in personality, motivation, expectations etc., are likely to play a role in determining a user's engagement with a training program (Smid et al., 2020). Regular engagement is obviously an important factor in any kind of training; however, attrition is a commonly reported problem in trials of cognitive training software (Corbett et al., 2015;Robb et al., 2019), and at least one commercial BT program (Cogmed) assigns users a coach to ensure that they regularly engage with the software. Understanding how and why people use cognitive training programs may therefore be an important additional factor in determining their effectiveness.
Previous research has found that participants typically have positive beliefs about the effectiveness of BT. Torous et al. (2016) found positive beliefs about the effectiveness of BT mobile applications in young American consumers, both in participants who had used BT programs and those who had not. Other research found similar results in parents of children with intellectual disabilities (who may benefit from cognitive training): parents believed that BT could benefit their children and expressed positive attitudes towards supporting such training. Again, these attitudes were not related to how much experience the parents had with BT apps or games (Robb et al., 2018). It has also been shown that people's expectations about the effectiveness of BT can be influenced by the information they receive about such programs. Rabipour & Davidson (2015) and Rabipour et al. (2018) found that participants' expectations about the effectiveness of BT at baseline could be subsequently raised or lowered by presenting them with positive or negative messages about BT. Finally, Ng et al. (2020) found that frequency of engagement was only weakly correlated with perceived cognitive benefit for a range of activities, including BT. While this research reveals important information about the attitudes, habits, and expectations of a range of potential consumers of BT, it is primarily focused on Western users. It is widely recognized that much research involving human subjects may be biased towards certain demographics (Henrich et al., 2010). In the specific case of brain training apps, previous findings suggest the possibility of relevant cultural differences, particularly differences between people from western and Asian backgrounds.
Firstly, cultural differences in technology acceptance may affect the use of brain training apps and games. Technology acceptance refers to the ways in which users adopt and use new technologies. Several models and theories have been proposed to explain this phenomenon, such as the Technology Acceptance Model (TAM) (Davis, 1989; Bagozzi et al., 1992), which holds that two important factors influencing an individual's acceptance of any technology are perceived ease of use and perceived usefulness (Davis, 1989). Research has shown that cultural factors may influence technology acceptance. Jan et al. (2022) conducted a meta-analysis of studies on technology acceptance and the cultural dimensions proposed by Hofstede (2011). Hofstede conceptualised culture as the "collective mental programming" or "software of the mind" which distinguishes one group of people from another (Hofstede et al., 2005), and identified six dimensions along which cultures can vary (Hofstede, 2011). Based on their meta-analysis, Jan et al. (2022) proposed a conceptual model of how technology acceptance is directly affected by cultural dimensions such as individualism/collectivism, uncertainty avoidance, and long-term orientation.

REVISED Amendments from Version 2: 'Preliminary' was added to the title. More theoretical background was added to justify the investigation of cultural differences in attitudes to brain training games. A discussion of the differences between Lumosity and Brain Age: Train Your Brain in Minutes a Day! was added. Any further responses from the reviewers can be found at the end of the article.
For example, in more individualistic societies such as the United States (Hofstede, 2001), an individual's intention to use a new technology such as a brain training app may depend highly on their perception of how the app can benefit them, while in more collectivist societies such as China or Japan (Hofstede, 2001), their perception of how it might benefit society or perhaps their company might be more important.
Secondly, research by Jaeggi et al. (2014) on individual differences in working memory training found that the success of the training was influenced by individuals' motivation, pre-existing ability, and implicitly held theories of intelligence. The latter construct was assessed using the Theories of Cognitive Abilities Scale (Dweck, 1999), which classifies participants according to whether they view intelligence as fixed (innate and difficult to change), or something that is incremental (malleable, and can be changed based on experience). The authors reported a difference between those with fixed beliefs and those with incremental beliefs in terms of how well the working memory training transferred to a test of visuospatial reasoning (greater transfer for those with incremental beliefs). While the cultural background of participants was not reported in this study (all were recruited from a US university and the surrounding community), other research using the Theories of Cognitive Abilities Scale has found evidence for cultural differences in attitudes to intelligence. Jose and Bellamy (2012) investigated how parental theories of intelligence influence how their children engage with academic tasks. Recruiting participants from New Zealand, the US, China, and Japan, they discovered that the view that intelligence is malleable was most strongly supported by American parents, followed by those from New Zealand and China, while the view was least strongly supported by Japanese parents. Taking these two results together, it is reasonable to question if differences between Japanese and American individuals regarding their implicit theory of intelligence could affect how they perceive and use brain training apps, and how such training transfers to other contexts.
In the case of understanding attitudes and habits regarding BT, the largest previous study was conducted in the US (Torous et al., 2016). Following the points in the previous paragraphs, it is important that data from other countries, particularly those which evidence suggests may have differing attitudes and habits, is collected and analysed.
Japan represents a large group of potential consumers of cognitive training who may have different habits or attitudes than, for example, those in the US. In addition to the previously discussed differences regarding Hofstede's cultural dimensions and attitudes to intelligence, previous research has found differences between users from Japan and the US in terms of how they access and use mobile apps (Lim et al., 2014), review video games (Zagal & Tomuro, 2013), and their preferences for the design of websites (Cyr et al., 2005). Japan also has a developed BT market, with popular BT games having been released in the country for several years (Fuyuno, 2007;Chancellor & Chatterjee, 2011). Taken together, these points suggest the possibility that attitudes and habits regarding BT may differ between users in Japan and other countries. Therefore, the main purpose of this paper is to provide preliminary data on the habits and attitudes of Japanese people regarding BT apps and games, thus expanding our knowledge of how and why such programs are used around the world and laying the groundwork for future research in this area.

Questionnaire
This cross-sectional study used a Japanese translation of the questionnaire used by Torous et al. (2016) with minor adaptations. Before translation, the original questionnaire was adapted in two ways. Firstly, while Torous et al.'s (2016) questionnaire specifically focused on using smartphone apps, the present study also included questions (and adapted the wording of questions) to refer to games consoles. This was because it was expected that Japanese-produced BT programs would be popular among Japanese people, and some such software is only available on games consoles. Secondly, when asking participants which cognitive training programs they had used, the list of options was updated to reflect apps and games available in Japan.
This questionnaire was then translated into Japanese by two professional translators, who both independently produced separate translations. Professional translators were contracted through Gengo, a web-based human translation platform. A native Japanese speaker familiar with the research project merged these translations; differences in the two translations were resolved through discussion between this person and the author of the paper. This resulted in a final Japanese version of the questionnaire. Before being used, this version was translated back into English by a third professional translator, and this version was compared with Torous et al.'s (2016) original questionnaire. There were some minor differences in the wording of the original questionnaire and the back-translation. For example, "duration" (original) became "period of time" (back-translation); the phrase "For the purpose of this survey, we will call these 'brain training apps/games'" (original) became "In this survey, we will refer to these as 'brain training apps/games'" (back-translation); and the question "Do you own a smartphone?" (original) became "Do you have a smartphone?" (back-translation). It was judged that none of these minor differences would affect the meaning of any of the questions. The questionnaire can be viewed in full in both English and Japanese as extended data on the Open Science Framework (Robb, 2021).

Participants and procedure
Participants were recruited using CrowdWorks, a Japanese crowdsourcing website. All registered CrowdWorks users were deemed eligible to participate; there were no additional inclusion or exclusion criteria. Crowdsourcing websites have been shown to be viable methods for recruiting participants for questionnaire research (Behrend et al., 2011; Peer et al., 2017). Previously, Majima et al. (2017) compared participants recruited via CrowdWorks with Japanese student samples and found that there were relatively small differences in some personality traits, and that the CrowdWorks participants were (as would be expected) more diverse in terms of age and employment history. The translated questionnaire was uploaded to CrowdWorks and responses were collected during December 2017. At the start of the questionnaire, the purpose of the research was explained, and participants were informed that they were not obliged to take part, that their responses would be used for research purposes, and that by continuing with the questionnaire they would be indicating their consent to participate. No identifying information about the participants was collected. All participants were paid 30 JPY (approximately 0.27 USD in December 2017) to complete the questionnaire, whether their response was used in the final analyses or not. The research was conducted according to the recommendations of the Human Research Ethics Committee (Sciences) at University College Dublin, where the lead author of the paper was employed at the time of the research. The protocol was deemed to be exempt from full ethical review as the data were collected anonymously, the participants were not from a vulnerable group, and they were not placed at any risk during the research.

Sample size
Assuming that the number of people aged 16 and over in Japan is approximately 110,000,000 (Statistics Bureau of Japan, n.d.), and that 50% have used BT (based on results from Torous et al., 2016), with a margin of error of 5% and a confidence level of 99%, the ideal sample size was calculated to be 664. Given that previous research has highlighted concerns with unreliable responses and high attrition rates in crowdsourced samples (Keith et al., 2017), 1000 responses were collected.
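The figure of 664 can be reproduced with the standard formula for estimating a proportion, plus a finite population correction. This is a minimal sketch: the z value of 2.576 for 99% confidence and the rounding behaviour are assumptions, as the paper does not state which calculator was used.

```python
import math

def required_sample_size(population, p=0.5, margin=0.05, z=2.576):
    """Sample size for estimating a proportion at a given margin of error.

    z = 2.576 corresponds to a 99% confidence level; p = 0.5 is the most
    conservative assumed proportion.
    """
    n0 = (z ** 2) * p * (1 - p) / margin ** 2   # infinite-population size
    n = n0 / (1 + (n0 - 1) / population)        # finite population correction
    return math.ceil(n)

print(required_sample_size(110_000_000))  # 664
```

With a population of 110 million the finite population correction is negligible, so the result equals the infinite-population value.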

Statistical analysis
After collection, the data were inspected, and potentially unreliable responses were removed. Unreliable responses included those with inconsistent answers to similar questions, or the wrong answer to the simple sum of nine plus four (included to check that participants were diligently reading and responding to the questions). There were no missing data in the final dataset used for analysis.
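The screening logic described above can be sketched as follows. The field names and example responses here are hypothetical (the actual coding in the OSF dataset may differ), and only the attention check plus two of the inconsistency types reported in the Results are shown.

```python
# Hypothetical response records; field names are illustrative only.
responses = [
    {"attention_check": 13, "has_used_bt": "yes", "longest_use": "1 month"},
    {"attention_check": 12, "has_used_bt": "yes", "longest_use": "1 month"},  # failed check
    {"attention_check": 13, "has_used_bt": "yes", "longest_use": "never"},    # inconsistent
    {"attention_check": 13, "has_used_bt": "no",  "longest_use": "never"},
]

def is_reliable(r):
    """Keep a response only if it passes the attention check and is internally consistent."""
    if r["attention_check"] != 13:  # wrong answer to "nine plus four"
        return False
    used, longest = r["has_used_bt"], r["longest_use"]
    if used == "yes" and longest == "never":  # claims use but reports no duration
        return False
    if used == "no" and longest != "never":   # denies use but reports a duration
        return False
    return True

clean = [r for r in responses if is_reliable(r)]
```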
Descriptive statistics were used to investigate smartphone and games console ownership among participants; usage of health and fitness apps; concerns about BT; BT apps/games used by participants; and participants' beliefs about the effectiveness of BT. Phi-coefficients were calculated to determine if there were associations between participants' beliefs about whether BT could lead to cognitive/emotional improvements (specifically in thinking ability, attention, memory, and mood), whether they had used BT, and whether they thought BT apps/games had negative side effects. Following Torous et al. (2016), a score was calculated for each participant measuring how positively they felt about BT. The difference in this score between participants who had used BT and those who had not was investigated using a Mann-Whitney U-test. Pearson correlation coefficients were calculated between this score, the number of BT apps/games participants had used, and the longest period of time participants had used BT. Finally, gender differences were also investigated. These analyses were performed using JASP versions 0.11.1 and 0.14.1.
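As an illustration of one of the effect-size measures used, the phi-coefficient for a 2×2 table of yes/no counts can be computed directly. This is a sketch in Python; the study itself performed these calculations in JASP.

```python
import math

def phi_coefficient(a, b, c, d):
    """Phi-coefficient for a 2x2 contingency table [[a, b], [c, d]].

    Equivalent to Pearson's r computed on two binary (0/1) variables.
    """
    denom = math.sqrt((a + b) * (c + d) * (a + c) * (b + d))
    return (a * d - b * c) / denom if denom else 0.0

# Perfect association between two yes/no items:
print(phi_coefficient(5, 0, 0, 5))  # 1.0
```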

Results
A total of 1000 responses were received. Of these, one response was excluded because the age in years was entered as 336, and six responses were removed because they provided an incorrect answer to the sum of nine and four. A further 175 participants' responses were removed because they gave inconsistent answers about their history of using BT apps or games. There were four kinds of inconsistency. Firstly, 13 participants answered "yes" to item 8 ("Have you ever used an app or game that claims to increase memory, concentration, attentiveness, or other cognitive abilities? In this survey, we will refer to these as 'brain training apps/games.'") but answered "I have never used one" to item 16 ("What is the longest period of time you have used a brain training app/game? If you have never used one, please select 'I have never used one.'"). Secondly, 11 participants answered "yes" to item 8 but did not enter any brain training apps or games that they had used when asked to do so in item 15. Thirdly, 111 participants answered "no" to item 8, but indicated in item 16 that they had used BT apps or games (i.e., they entered a period of time for which they had used them). Fourthly, 152 participants answered "no" to item 8, but entered apps or games they had used when asked to do so in item 15. Note that some participants' responses were inconsistent in more than one of these ways. With these responses removed, the final sample used for analyses contained responses from 818 participants (524 female, 294 male; mean age 36.1 years; standard deviation 9.5 years). The underlying data can be accessed on the Open Science Framework (Robb, 2021).

Figure 1 shows the devices and kinds of apps owned by participants, divided into seven age categories. In all age groups, at least 70% of participants owned a smartphone, with the highest rate of smartphone ownership (94.12%) among participants aged 30 years and under.
Games console ownership was approximately 40% in the 20-29 years, 30-39 years, and 40-49 years categories.
The most common concerns participants had about BT apps and games were the cost of the product, the time required to use them, and a lack of certainty regarding their effectiveness. Most participants did not express concerns about the safety of their health data or about whether the apps or games had a medical recommendation (Figure 2). The most widely used BT program, reported by 52.93% of participants, was a Nintendo game for the handheld Nintendo DS console. Of the remaining programs, all but one were used by fewer than 10% of participants. Only 19 participants (2.32%) reported having used Lumosity (Figure 3).
Participants indicated positive perceptions of BT apps and games, believing that they could improve thinking ability (79.58%), attention (66.26%), memory (78.61%), and mood (73.35%). Phi-coefficients were calculated for all combinations of the binary variables (i.e., yes/no questions) regarding participants' views about whether BT could improve thinking ability, attention, memory, and mood, as well as the binary variables regarding whether they had used BT and whether they thought BT had negative side effects. There were weak to moderate positive correlations between (1) thinking ability and attention (ϕ = 0.376, p < 0.001), (2) thinking ability and memory (ϕ = 0.453, p < 0.001), and (3) attention and memory (ϕ = 0.378, p < 0.001) (Table 1) (see Akoglu, 2018, and Schober et al., 2018, for discussion of the interpretation of correlation strength).

Following Torous et al. (2016), a score was calculated for each participant based on whether they believed BT could improve each of the four factors and whether they thought BT apps and games have negative side effects. The maximum score of five indicated a participant thought BT improved all four factors and had no side effects, while the minimum score of zero indicated a participant thought BT did not improve any of the four factors and had negative side effects. This score was significantly higher among respondents who indicated that they had used BT apps or games (Mann-Whitney U test, U = 37757, p < 0.001); the rank biserial correlation was -0.213, indicating a weak effect size. The score showed negligible or weak Pearson correlations with both the total number of apps/games a participant had used (Pearson's r = 0.163, p < 0.001) and the duration they had used BT apps/games (Pearson's r = 0.237, p < 0.001) (Table 2).

Figure 2. Participants' concerns about brain training apps and games.

Finally, gender differences were also examined, as suggested by reviewer 1.
There were significant differences between males and females in terms of smartphone ownership (females more likely to own), games console ownership (males more likely to own), and the belief that BT apps/games have negative side effects (females more likely to think so). Phi-coefficients showed that these associations were all weak or negligible (ϕ ranged from 0.097 to 0.12). These results are shown in Figure 4.
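Two of the quantities reported in this section, the five-point positivity score and the rank-biserial effect size, can be sketched as follows. This is an illustration assuming the standard conversion r = 1 − 2U/(n₁n₂); the paper does not state which formula JASP applied.

```python
def positivity_score(thinking, attention, memory, mood, side_effects):
    """Score out of five: one point per believed improvement (booleans),
    plus one point for believing there are no negative side effects."""
    return sum([thinking, attention, memory, mood]) + (0 if side_effects else 1)

def rank_biserial(u, n1, n2):
    """Rank-biserial correlation from a Mann-Whitney U statistic,
    for group sizes n1 and n2."""
    return 1 - 2 * u / (n1 * n2)

print(positivity_score(True, True, True, True, False))    # 5
print(positivity_score(False, False, False, False, True)) # 0
```

When U equals half of n₁n₂ (no stochastic dominance of either group), the rank-biserial correlation is zero.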

Discussion
The results of the present study suggest that a high proportion of Japanese consumers may have positive perceptions of the potential benefits of BT apps and games, comparable to or (in the case of positive effects on mood) higher than the rate in US consumers (Torous et al., 2016). While there were correlations between positive perceptions about the effects of BT on specific cognitive factors (thinking ability and attention, thinking ability and memory, and attention and memory), these were weaker than those found in US consumers (Torous et al., 2016). Similarly to Torous et al.'s (2016) findings in US consumers, most participants in the present study did not indicate concerns about clinical recommendations, privacy of health data, or negative side effects when considering BT games. Rather, the cost of apps/games, the time involved, and uncertainty about their effectiveness were the main concerns in the present sample. While this study only presents preliminary findings, it is interesting to note the similarities with the US participants in the previous study. Previously, it was shown that there were only minor demographic differences (in terms of gender and level of education) in beliefs about the benefits of BT (Ng et al., 2020). It may be that there are similarly minor differences between people from different sociocultural backgrounds, although further research would be required to investigate this specific hypothesis.
Lumosity, which was used by 70% of US consumers in the previous study by Torous et al. (2016), was used by only 2.3% of the Japanese participants. This large difference in the number of participants using Lumosity may be partly explained by the fact that the app was only released in Japanese in December 2014. Among the Japanese participants surveyed here, the most popular brain training apps were games made by Nintendo for the handheld Nintendo DS console. However, since the previous study only focused on smartphone apps, it does not provide any information about how widely used Nintendo BT games are among US consumers. Nintendo BT games are popular globally, however: Brain Age: Train Your Brain in Minutes a Day! (the most used game among Japanese participants in the present study) was among the 10 best-selling video games of 2006 in the US. It is therefore likely that many of the participants in Torous et al.'s (2016) study also had experience of using this BT game.

Table 1. Phi-coefficients for associations between participants' beliefs about whether brain training can improve four mental factors, whether they had used brain training, and whether they thought brain training has negative side effects. All variables are dichotomous yes/no items. Chi-square tests showed that all associations were positive except for that between "Has used brain training apps/games" and "Apps/games have negative side effects". Significance at the p = 0.01 level is marked with *.

Table 2. Pearson correlations between participants' positive beliefs about brain training, the total number of brain training apps or games they had used, and the maximum duration they had used brain training apps or games. Significance at the p = 0.01 level is marked with *.

However, this finding does raise an important question: are there differences between the US and Japan in terms of which brain training apps or games are most widely used? If so, a further important area of study would be to compare the apps or games used in the two countries in terms of, for example, interface design, how they are advertised and sold, and what cognitive activities users engage in while using them. For example, Lumosity (the most used game in the US sample studied by Torous et al., 2016) can be played on most smartphones, tablets, or computers, uses a subscription-based payment system, and involves playing a variety of simple games which are claimed to target specific cognitive skills, such as memory, attention, and flexibility (see www.lumosity.com). On the other hand, Brain Age: Train Your Brain in Minutes a Day! (the most used game in the present study) is played on a handheld console (the Nintendo DS), is purchased with a single payment, and the gameplay involves activities such as performing calculations, reading texts aloud, counting the syllables in words, and playing Sudoku (see Nintendo of America, 2006). There are therefore potential differences between these two brain training apps/games which may impact both attitudes and habits, but also their effectiveness. However, to fully understand this, it would be necessary to conduct empirical research on how widely used these programs are in the US and Japan, as well as a comprehensive analysis of the various relevant aspects of the two products, such as their design, marketing, payment system, and so on.
The results presented are similar to previous findings that people's perceptions of the positive effects of BT are not strongly related to their experience of using BT. Torous et al. (2016) found that US consumers' positive beliefs about BT were only weakly correlated with the number of BT apps they had used. Similarly, Rabipour et al. (2018) found that people with experience of BT had similar expectations about its effectiveness to people with no experience. In other research, Robb et al. (2018) found that parents of children with intellectual disabilities had positive beliefs and attitudes regarding BT and had high intentions to support the use of BT by their children, despite the fact that the sample had very little experience with BT programs. There is therefore a continuing lack of evidence that experience of using BT is associated with positive beliefs about the effectiveness of BT. The fact that participants in both the present and previous studies (Rabipour & Davidson, 2015; Torous et al., 2016; Rabipour et al., 2018; Robb et al., 2018) have very positive beliefs about the effectiveness of brain training (whether or not they have actual experience of using BT), combined with the lack of evidence that BT is actually effective (Simons et al., 2016), illustrates the importance of investigating the role of psychological factors such as motivation, effects of being observed during training and testing, and placebo effects in BT research. It has been shown that users' perceptions about BT can be relatively easily influenced by biased messages regarding its effectiveness (Rabipour and Davidson, 2015; Rabipour et al., 2018), and placebo effects have been found in previous BT research (Boot et al., 2013; Foroughi et al., 2016). Future trials of BT programs would benefit from accounting for such potential confounding factors.

Limitations
To facilitate a direct comparison between Japanese and American users of BT, this study used a direct translation of the questionnaire developed by Torous et al. (2016). In this questionnaire, the items referring to participants' positive and negative beliefs about BT (e.g., "Do you think brain training apps and games can improve memory?") were phrased as questions requiring yes/no answers. However, in retrospect, it may have been more informative to adapt the questionnaire to have Likert-style responses. This would still have allowed some comparison with previous research but could have also facilitated more nuanced analysis of the results. This represents the major limitation with the current study.
While the use of crowdsourcing platforms such as CrowdWorks to recruit participants is becoming more common in recent research, there remain some potential limitations associated with this approach. Firstly, it is recognized that crowdsourced participants may not always be representative of the population of interest (Stewart et al., 2017). In the present study, this issue is most obvious when considering the ages of the participants: in 2015, 26.6% of the Japanese population were over 65 (Statistics Bureau of Japan, n.d.), whereas in the sample analyzed here, only 9 of 818 participants were over 60. Given that BT is often considered as a potential intervention for people with age-related cognitive decline (Buitenweg et al., 2012), the habits and attitudes of this demographic are clearly important. Secondly, it may be suggested that data collected from crowdsourcing platforms are of low quality (e.g., due to participants answering questions without fully reading or considering them). However, previous research has found that data collected via the crowdsourcing platform Amazon Mechanical Turk are of comparable quality to those collected by other methods (Kees et al., 2017). In the present study, several indicators were used to identify potentially automated or low effort responses (see section Participants and Procedure). A total of 182 responses (18.2%) were removed before analysis, which is comparable to the rate of 14% automated and low effort responses found in a study of Amazon Mechanical Turk workers by Buchanan & Scofield (2018).
A final limitation may arise from the translation and adaptation of the original questionnaire used by Torous et al. (2016). Although the translation process involved three independent translators and the production of a back-translation which was compared with the original questionnaire, it remains possible that the meaning of some items was altered in a way that affected the results.

Future work
Due to the low number of participants over 60 years old, the present study cannot provide any reliable information on the attitudes and habits regarding BT in the elderly population in Japan. Future research investigating this topic would be important. Future research could also benefit from using Likert-style items, as discussed in the section Limitations.
Further research will also be required to understand more completely the factors that influence people's attitudes towards BT, and the role of psychological confounders such as placebo effects in BT research. Finally, one important finding of the present study is the popularity of BT games produced by Nintendo, emphasizing the importance of games consoles in the BT market, at least in Japan. Since the previous major study of BT habits in Western users only focused on smartphone applications (Torous et al., 2016), future research should investigate whether BT programs on games consoles, such as those produced by Nintendo, are as widely used in countries other than Japan.

Conclusion
This preliminary study contributes to a growing literature investigating the expectations, attitudes, and habits of potential users of brain training applications and games. There are two main findings. Firstly, similarly to previous research conducted in the US, Japanese consumers have positive beliefs about brain training which do not seem to be strongly associated with the amount of experience they have using such programs. Secondly, the most widely used brain training programs among Japanese participants are two games made by Nintendo and played on the handheld Nintendo DS console.
Data availability
This project contains the following underlying data:
- raw-data-retrieved-4-24-2020.csv (File exported from CrowdWorks)
- raw-data-jp (Spreadsheet of questionnaire responses in Japanese)
- raw-data-en (Spreadsheet of questionnaire responses in English)
This project contains the following extended data:
- questionnaire-v3-final_jp.pdf (Questionnaire in Japanese)
-

Herdiyan Maulana
Faculty of Psychology, Universitas Negeri Jakarta, Jakarta Selatan, Indonesia
Thank you for having me as a reviewer for this paper. I would like to praise the author for a very interesting paper, as it focuses on a cross-cultural perspective on brain training applications in a non-Western country and uses a sufficiently large sample. When I was assigned to this paper, I was aware that the two prior peer reviewers had given unfavorable responses to the author. I have tried my best to provide a review that is as objective as I can, based on those prior reviews. Please kindly find my comments and feedback below.
F1000Research is a highly reputable research outlet that requires papers published in the journal to meet certain standards in terms of the novelty of the topic, methodological approach, sample size, and the implications of the study's findings. Based on my thorough reading of this article, I am afraid that the paper has not met these highest standards, particularly in the novelty of the topic and the complexity of the analysis. I am aware that cross-cultural studies are always interesting to explore; however, the author needs to do more to present arguments, supported by deep psychological theory, about why it is important to conduct such a study. For example, the author states: "While this research reveals important information about the attitudes, habits, and expectations of a range of potential consumers of BT, it is primarily focused on Western users. It is widely recognized that much research involving human subjects may be biased towards certain demographics (Henrich et al., 2010). In the case of understanding attitudes and habits regarding BT, the largest previous study was conducted in the United States. To fully understand the attitudes of BT users, it is vital that a global perspective is considered", without explaining which part of the Torous et al. (2016) study is relevant to the study conducted by the author. The author may explain how the specific cultural and social contexts of the two countries (e.g., collectivist vs. individualistic perspectives) are associated with different habits, responses to, or perceptions of BT, based on more solid evidence from prior studies.
The author mentioned that this study provides preliminary findings/data about how Japanese consumers perceive the BT program. This statement makes me convinced that this article might fall into the pilot-study category. A title refinement adding "pilot study" would be a good idea.
A more consistent interpretation of the results also needs to be drawn, as the author inferred that: "Taken together, these results present mounting evidence that experience of using BT is not strongly associated with positive beliefs about the effectiveness of BT". I am a bit worried about this overstatement made by the author, since this study uses only a cross-sectional approach, from which the causal effect of using BT on participants' beliefs about BT effectiveness cannot be inferred.
In the conclusion section, the author mentioned that "the most widely used brain training software among Japanese participants are two games made by Nintendo and played on the handheld Nintendo DS console". I think this conclusion is too vague; it would be better to break down and explain what features differentiate the BT applications used in the two cultures, and how these feature differences lead to varied individual responses to and experiences of BT training, instead of mentioning only the brand/platform used by the participants.

Is the work clearly and accurately presented and does it cite the current literature? Yes
Is the study design appropriate and is the work technically sound? Partly

If applicable, is the statistical analysis and its interpretation appropriate? Yes
Are all the source data underlying the results available to ensure full reproducibility? Yes

Nigel Robb
Thank you very much for taking the time to review this article. I appreciate your thoughtful and constructive comments. I have revised the article to address the issues you identified: I have revised the introduction section to include more theoretical background regarding relevant cultural differences between the US and Japan, based on existing studies.
I have requested to change the title to include the word "preliminary".
I have softened some of the language. For example, in the discussion section, I have changed "Taken together, these results present mounting evidence that experience of using BT is not strongly associated with positive beliefs about the effectiveness of BT" to "There is therefore a continuing lack of evidence that experience of using BT is associated with positive beliefs about the effectiveness of BT." I have added a paragraph to the discussion section discussing the differences between Lumosity and Brain Age: Train Your Brain in Minutes a Day!, and making recommendations for future research in this area.

This is an open access peer review report distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Yuka Kotozaki
Iwate Medical University, Morioka, Japan
Although it has been corrected, there seems to be a fundamentally serious problem with this study.
The results obtained are also very weak, so it will be very difficult to accept them as they are.

Is the work clearly and accurately presented and does it cite the current literature?
It's my understanding that Spearman's Rho should only be used with rank-order variables, and we have dichotomous variables here, not rank-order variables.

between users from Japan and from the US. So, since the previous study focused on American users and Japan is a highly developed brain training market, I think it is justified to study Japanese consumers. To clarify, the funding from the Japanese funder (a Kaken grant) was awarded after the data collection in the present study was completed, so the grant money in no way influenced the decision to choose Japan. I have changed the age groups in Figure 1, now using seven categories. I have reworded the interpretation of the results to soften my claims about Japanese consumers' concerns regarding BT. I have reworded the interpretation of the results regarding socio-cultural differences and changed the language to emphasize that this is a preliminary study with limited findings. I have removed the claim that there are major differences between the Japanese and US BT markets, and instead simply pointed out the differences between this and the previous study in this respect.
Iwate Medical University, Morioka, Japan
You are using the Japanese translation of the questionnaire used by Torous et al. (2016). Before conducting this survey, did you verify that the results obtained from the Japanese translated version of the questionnaire are equivalent to the original questionnaire?

In the case of a manuscript, even if the product name is in Japanese, I think it would be better to describe it in English (or in romaji, etc.).

As for the correlation, if it is around 0.2, it is almost uncorrelated rather than weakly correlated, isn't it? Generally speaking, 0 to less than 0.3: almost no correlation, 0.3 to less than 0.5: very weak correlation, 0.5 to less than 0.7: correlation, 0.7 to less than 0.9: strong correlation, and 0.9 or more: very strong correlation.
Thank you very much for taking the time to review the article, and I apologize for taking so long to respond to your comments. I've updated the manuscript now to address your concerns as follows: Apart from the rigorous translation/back-translation process, there were no further checks to verify that the results obtained are equivalent to the original English questionnaire. Given that (1) the questionnaire does not really have multiple items measuring the same construct and (2) it is possible that there are differences between Japanese and American respondents, it may not be feasible to do this kind of verification. I have added some text to the limitations to make this clear.