Study Protocol

Stage 1 Registered Report: How subtle linguistic cues prevent unethical behaviors

[version 1; peer review: 1 approved with reservations]
PUBLISHED 22 Aug 2019


Abstract

Different ways of describing the same situation can easily influence people’s evaluations and behavior. A previous study suggested that subtle linguistic differences in ethical reminders can differ in how effectively they prevent readers’ unethical behavior. The present study aims to replicate the finding of Bryan and colleagues (2013) in the Japanese context, additionally exploring the influence of unfamiliar instructions that capture participants’ attention. In two experiments, which will be conducted online, participants are asked to make 10 coin-tosses and report the number of “heads,” which determines the amount of money they can earn. We will manipulate the instructions (“Don’t cheat” vs. “Don’t be a cheater” vs. no instruction as a control) between participant groups of nearly 270 participants each (Experiment 1). Next, we will conduct an extended experiment with an additional task that directs more attention toward the text (Experiment 2). Through these registered experiments, we examine the credibility of the previous finding that the type of instruction affects the occurrence of unethical behavior.

Keywords

cheating, persuasion, attention, moral, self-construal, labeling

Introduction

When people behave dishonestly, they usually downplay the seriousness of the dishonest act (e.g., Monin & Jordan, 2009; Steele, 1988), weakening the link between the dishonesty and their self-identity (e.g., Bandura, 1999) in order to avoid the correspondent inference (Jones & Nisbett, 1972; Ross, 1977) that they are the kind of person who behaves dishonestly. According to self-concept maintenance theory, individuals strive to create and maintain an image of themselves as good and ethical people (Markus & Wurf, 1987; Mazar et al., 2008).

Meanwhile, different ways of describing something can easily influence people’s evaluations and judgments about it, even when they have a wealth of previously established knowledge (Fausey & Boroditsky, 2010). For instance, using a transitive verb to describe an accident (an agentive description, e.g., “Timberlake ripped the costume”) makes participants significantly more likely to blame the actor, compared to the same description using an intransitive verb (a nonagentive description, e.g., “The costume ripped”). Another study found that children aged 5–7 years judged a character’s characteristics to be more stable over time when a noun label was used to describe the character (e.g., “She is a carrot-eater”) rather than a verbal predicate (e.g., “She eats carrots whenever she can”) (Gelman & Heyman, 1999). The same phenomenon has been demonstrated for self-perception (Walton & Banaji, 2004). Language may contribute to this categorization effect (Gelman et al., 2000): when nouns are used to refer to something, people may understand it more deeply, which has been noted to “enable inductive inferences” (Gelman & O’Reilly, 1988).

When a word linked with unethical behavior is used to categorize who someone is, people act more conservatively. A study by Bryan et al. (2013) showed that changing the wording from “Don’t cheat” to “Don’t be a cheater” can decrease unethical behavior. They manipulated an instruction (e.g., “Please don’t be a cheater” vs. “Please don’t cheat”) intended to inhibit participants from behaving unethically; as a result, the “Don’t be a cheater” group showed significantly less cheating. In another experiment, Bryan et al. (2011) found that more people chose to vote when they read the words “be a voter” rather than “to vote” on the day before election day. Additionally, research showed that, compared to “helping,” “being a helper” encouraged more children to behave kindly toward others (Bryan et al., 2014). However, subsequent research found that although “being a helper” can lead to more kind behavior initially, once a setback occurs the backfire may also be stronger (Foster-Hanson et al., 2018). The reason underlying this phenomenon is that, as category labels, nouns bear a strong link to identity and may lead to self-doubt once one fails. Regarding ethical behavior, a moral-character model has been proposed in which moral character consists of motivation, ability, and identity elements (Cohen & Morse, 2014). Moral identity refers to being disposed toward valuing morality and wanting to view oneself as a moral person. This disposition should be considered when attempting to understand why people who behave unethically tend to apply all kinds of strategies to weaken the behavior–identity link (Bandura, 1999), including linguistic strategies such as “euphemistic labeling.”

According to Bryan et al. (2011), the effect of noun expressions is a motivation-driven process. When the noun invokes a positive identity, such as “voter” or “helper,” people produce more of the corresponding behavior; when it invokes a negative identity, such as “cheater,” people produce less of it.

In general, we believe that highlighting a self-identity word will decrease unethical behavior; for example, according to Blasi (1984), a moral person is one for whom moral categories and moral notions are central, essential, and important to self-understanding, cutting deeply to the core of what and who they are as people. However, one study revealed that certain ways of construing the self are associated with more unethical behavior (Cojuharenco et al., 2012). There are therefore still many unresolved questions about the relationship between self-identity and unethical behavior.

In this study, we aim to replicate Experiment 3 of Bryan et al. (2013), for the following reasons:

First, the participants in Experiment 1 of Bryan et al. (2013) were asked to think of a number from 1 to 10. If the number was even, they would be paid $5; if it was odd, there was no reward. The authors noted that “participants who were instructed to generate a random number typically show a strong bias toward odd numbers,” yet the number each participant actually thought of could not be verified at the end of the experiment. We therefore abandoned the method of Experiment 1 because it involves too much uncertainty. Compared with Experiment 2, which consisted of only two conditions (“cheater” and “cheating”), Experiment 3 included a baseline group, which made its design more complete. Moreover, we found that the effect size of Experiment 3 was small (f = 0.302). With an effect size of 0.302 in G*Power (significance level α = 0.05, power level 1 − β = 0.95), Experiment 3 required at least 174 participants; in fact, only 99 people joined the original study. From this, we suppose that the effect size in Experiment 3 was overestimated.

According to the above review, high levels of self-identity and individuals’ willingness to maintain a positive self-view should decrease unethical behavior. We anticipate that the results will be the same as in the original article, in which the self-relevant noun “cheater” curbed cheating significantly more than the verb “cheating.”

Methods

Experiment 1

Our experiment will be conducted online in a private and impersonal way, meaning that participants will not meet, or expect to meet, the experimenters. We aim to replicate Experiment 3 of Bryan et al. (2013), which had three conditions: “cheater,” “cheating,” and baseline. In the baseline condition, no reminder about cheating is given.

Participants. Participants will be users of the Yahoo! Crowdsourcing Service in Japan. They are required to meet the a priori criterion of being native Japanese speakers. We plan to conduct a pilot test to determine the shortest time in which one could reasonably complete the experiment in good faith; this pilot test is detailed in a later section (Outlier extraction). Participants who complete the experiment faster than this pilot-test time will be excluded. Repeat participation will be prevented.

Procedure. Online instructions will explain that a recent controversial article has claimed to report the first scientific evidence for paranormal phenomena.

We ask participants to find a coin at home and, while trying to influence the outcome of each toss with their minds, flip the coin 10 times, making it land on “heads” as often as possible. They will be asked to ensure that they are “properly motivated.” For every toss landing on “heads,” they will receive 10 T-points (≈ 10 yen) as a reward. Given that the average Yahoo! Crowdsourcing user receives two T-points for one task, the reward we set for each “heads” is five times that amount; even participants who report the chance level of five “heads” will receive 25 times the points of a normal task. This reward should be large enough to tempt some participants to cheat.

Participants will be randomly assigned to the three conditions. The three conditions are posted on the crowdsourcing website as separate tasks, and a person who has participated in one condition cannot participate in the others; this results in passive randomization. In the “cheater” and “cheating” conditions, the manipulation is embedded in the instructions. We referred to the instructions of the original experiment (Bryan et al., 2013) and translated them into Japanese, as follows:

NOTE: Please don’t [cheat/be a cheater] and report that one or more of your coin flips landed heads when it really landed tails! Even a small [amount of cheating/number of cheaters] would undermine the study, making it appear that psychokinesis is real.

[嘘つきにならない/嘘をつかない]でください。コイントスの結果,実際にはコインの裏が出たのに表が出たと報告しないでください。

たとえわずかでも[嘘があると/嘘つきがいると],この研究を台無しにし,念力が実在するように見せてしまいます。

The manipulation is also embedded in the instructions on the next page, where participants log the outcomes of their 10 coin-flips. At the top of the page, a message is displayed in large red letters: “嘘つきにならないでください/嘘をつかないでください,” corresponding to “PLEASE DON’T [CHEAT/BE A CHEATER]” in the original experiment (Bryan et al., 2013).

In the baseline condition, the instructions are the same as above, except that the cheating message is not mentioned.

Power analysis and sample size. Because Experiment 3 of Bryan et al. (2013) did not report an effect size (η²), we first calculated the effect size of the analysis of variance (ANOVA) result from the reported F and df values. Bryan et al. (2013) reported their one-way ANOVA as F(2, 96) = 4.38, p = .015. Hence, following Cohen’s (1973) method, we calculated η² = .0836. We then converted this to the effect size f as f = √(η² / (1 − η²)) = 0.302. Because a small sample size may lead to an overestimated effect size, following a replication convention (e.g., Nitta et al., 2018) we halved the effect size of the original experiment (i.e., to f = 0.151) and used G*Power 3.1.9.3 (Faul et al., 2009) to conduct a power analysis. In G*Power, we set the significance level α = 0.05, the power level 1 − β = 0.95, and the effect size f = 0.151. Following the conditions of the original experiment, we will divide participants into three groups. The required total sample size is 681, with 227 participants in each group; therefore, we will try to recruit at least 681 participants. Data collection will not exceed 810 participants. This stopping rule is set because it is difficult for us to control the number of participants to exactly 681, owing to the characteristics of the simultaneous participatory online recruitment system; we therefore allow up to roughly 120% of the required sample size (i.e., 810). If more than 810 people participate in the experiment, we will use the data of the first 810 participants based on the time stamp. We also set the maximum numbers of participants (365 males and 445 females) to match the gender distribution of the original study (male:female = .45:.55).
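To make the effect-size conversion transparent, the following Python sketch reproduces the calculation described above. It is only an illustrative cross-check of the registered G*Power analysis, not part of the protocol; the statsmodels result may differ from G*Power’s value of 681 by a few participants because of rounding.

# Illustrative cross-check of the power analysis described above (the
# registered analysis itself uses G*Power 3.1.9.3).
from math import sqrt
from statsmodels.stats.power import FTestAnovaPower

# Statistics reported for Experiment 3 of Bryan et al. (2013): F(2, 96) = 4.38
F, df1, df2 = 4.38, 2, 96

# Eta squared from F and df (Cohen, 1973), then Cohen's f
eta_sq = (F * df1) / (F * df1 + df2)   # ~= 0.0836
f = sqrt(eta_sq / (1 - eta_sq))        # ~= 0.302

# Total N required for the original effect size (G*Power gives ~174)
n_original = FTestAnovaPower().solve_power(
    effect_size=f, k_groups=3, alpha=0.05, power=0.95)

# Halve the effect size as a replication convention, then recompute (G*Power gives 681)
f_half = f / 2                         # ~= 0.151
n_replication = FTestAnovaPower().solve_power(
    effect_size=f_half, k_groups=3, alpha=0.05, power=0.95)

print(round(eta_sq, 4), round(f, 3), round(n_original), round(n_replication))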

Data analyses. In this study, the dependent variable is the mean number of “heads” reported. In the original experiment, a one-way ANOVA and t-tests were performed; specifically, the ANOVA tested the main effect of the three groups. A problem in the original study was that the significance level was not adjusted for the subsequent multiple comparisons. Therefore, in the present study, we will use a one-way ANOVA with Tukey’s method for the multiple comparisons. Additionally, to check for cheating in each group, the original study performed one-sample t-tests comparing the mean number of “heads” reported against the chance level (i.e., 50%); we will do the same. These analyses will be performed using jamovi (version 1.0.5). The original results are summarized in Table 1.

Table 1. Results of Experiment 3 of Bryan et al. (2013).

Analysis type          Comparison                   Reported p-value   Degrees of freedom   Effect size
ANOVA (main effect)    three groups                 .015               2, 96                f = 0.302
t-test                 “cheating” vs “cheater”      .013               96                   d = 0.71
t-test                 “cheater” vs baseline        .004               96                   d = 0.66
t-test                 “cheating” vs baseline       > .80              96                   d = 0.05
t-test                 “cheating” vs chance         < .0005            36                   d = 0.79
t-test                 baseline vs chance           < .0005            35                   d = 0.78
t-test                 “cheater” vs chance          > .30              25                   d = 0.19
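For concreteness, the sketch below shows how the planned analyses could be run in Python. The registered analyses will be performed in jamovi (version 1.0.5); the file name and the column names (“condition”, “heads”) used here are hypothetical.

# Illustrative Python version of the planned analyses (the registered
# analyses use jamovi 1.0.5). File and column names are hypothetical.
import pandas as pd
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

df = pd.read_csv("experiment1.csv")   # one row per participant (hypothetical file)

# One-way ANOVA on the number of reported "heads" across the three conditions
groups = [g["heads"].values for _, g in df.groupby("condition")]
F, p = stats.f_oneway(*groups)
print(f"ANOVA: F = {F:.2f}, p = {p:.3f}")

# Tukey's method for the multiple comparisons
print(pairwise_tukeyhsd(df["heads"], df["condition"], alpha=0.05))

# One-sample t-tests against the chance level (five "heads" out of ten tosses)
for name, g in df.groupby("condition"):
    t, p = stats.ttest_1samp(g["heads"], popmean=5)
    print(f"{name}: t = {t:.2f}, p = {p:.3f}")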

Outlier extraction. For our online experiment, we will establish a minimum completion time (MCT) for inclusion in the final sample. We will ask five colleagues who are unfamiliar with this experiment to complete it as fast as possible and then calculate their mean completion time. Specifically, each colleague will toss a coin ten times and, after each toss, record the outcome on the website that will be used in the experiment. This pilot test will not include the instruction to “influence the outcome of each toss” while “properly motivated”; it measures only the time required for the coin tosses and recording. Bryan et al. (2013) also used an MCT as an exclusion criterion. We will exclude participants who complete the experiment faster than this MCT, because they may have rushed through it and failed to complete it in good faith.
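As a minimal sketch of this exclusion rule, the MCT filter could be applied as follows; the pilot times, file name, and column name below are hypothetical.

# Minimal sketch of the MCT exclusion rule; all values and names are hypothetical.
import pandas as pd

pilot_seconds = [95, 110, 102, 88, 120]          # completion times of the five colleagues (example values)
mct = sum(pilot_seconds) / len(pilot_seconds)    # mean completion time from the pilot test

data = pd.read_csv("experiment1.csv")            # hypothetical file with a "completion_seconds" column
included = data[data["completion_seconds"] >= mct]
print(f"MCT = {mct:.1f} s; excluded {len(data) - len(included)} participants")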

Experiment 2

This experiment will serve as an extended, conceptual replication of Experiment 3 of the original study (Bryan et al., 2013). If our Experiment 1 successfully replicates the previous experiment, we will conduct Experiment 2, adding a “cheating” condition that includes a task designed to ensure that participants’ attention is captured by the instruction (e.g., Folk et al., 1992; Folk et al., 2002). There are two main reasons for conducting Experiment 2. First, some studies have revealed that a moral reminder may be an effective way to decrease unethical behavior (Belle & Cantarelli, 2017). However, there was no difference in the rate of cheating between the baseline and “Don’t cheat” groups in Experiment 3 of Bryan et al. (2013), suggesting that the ethical reminder in the “Don’t cheat” condition did not work. Because those experiments were conducted online, it is difficult to ensure that participants actually saw and understood the instruction. It is also possible that participants ignored the original experiment’s instruction due to satisficing (e.g., Chandler et al., 2014; Oppenheimer et al., 2009; Sasaki & Yamada, 2019). This may be why there was no significant difference between the cheating and baseline conditions. The second reason is that the significant effect of the cheater condition may arise from the extra attention paid to the instruction. Notably, the main difference between our Experiment 1 and the original Experiment 3 lies in the language of the instruction. Thus, if our Experiment 1 is a successful replication, we will focus on the wording of the Japanese instruction rather than the English instruction of the original Experiment 3.

To examine this assumption, we conducted a preliminary experiment asking participants to evaluate their familiarity with certain expressions in Japanese. The expressions “Don’t cheat” and “Don’t be a cheater” were translated into Japanese, and native speakers rated their familiarity with them (1: not familiar to 5: very familiar) via an Internet survey on the Yahoo! Crowdsourcing Service. The protocol of this experiment was registered on the Open Science Framework (Guo et al., 2019). The results showed that the familiarity rating in the “cheater” condition was significantly lower than in the “cheating” condition, t(64) = 6.73, p < .001, Cohen’s d = 0.834. Hence, we conjecture that the anticipated difference between the “cheating” and “cheater” conditions in Experiment 1 may partly arise from differences in the attention paid to the instruction, rather than from the preservation of a positive self-image proposed by the previous study (Bryan et al., 2013). In other words, part of the effect of the “cheater” condition may be due to the unfamiliar expression, which attracts people’s attention and thereby helps prevent them from behaving unethically. See Extended data for details about this experiment.
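For reference, the sketch below reproduces the familiarity comparison in Python. A within-subjects design, in which each respondent rates both expressions, is assumed here purely for illustration (it is consistent with the reported d ≈ t/√n); the file and column names are hypothetical.

# Illustrative re-computation of the familiarity comparison; the design
# (within-subjects), file name, and column names are assumptions.
from math import sqrt
import pandas as pd
from scipy import stats

ratings = pd.read_csv("familiarity_pilot.csv")   # one row per respondent (hypothetical)
cheating = ratings["cheating_expression"]        # familiarity with "Don't cheat" (1-5)
cheater = ratings["cheater_expression"]          # familiarity with "Don't be a cheater" (1-5)

t, p = stats.ttest_rel(cheating, cheater)        # paired t-test
d = t / sqrt(len(ratings))                       # Cohen's d for a paired design (d_z)
print(f"t({len(ratings) - 1}) = {t:.2f}, p = {p:.3g}, d = {d:.3f}")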

In our Experiment 2, we will manipulate the way in which participants see the instructions in order to explore the difference between the cheating and baseline conditions. Experiment 2 comprises three conditions: “cheating,” “cheating” with task, and baseline. We predict that the “cheating” with task condition will be more effective in curbing unethical behavior than the baseline condition, because the task will draw more attention to the instruction. Although the instruction in the “cheating” condition will be displayed in large red letters, we expect no significant difference between this condition and the baseline.

Procedure. The procedure for Experiment 2 is identical to that of Experiment 1, except for two important differences; in Experiment 2, we will focus on whether participants read the instructions as diligently as we expect. First, we will remove the original “cheater” condition and add a second “cheating” condition. Second, in the new “cheating” condition, we will add a task page in which participants are asked to choose, from three sample sentences, the exact expression (i.e., “Don’t cheat”) that appeared on the screen. We will remind participants of this task in advance to ensure that they read the instructions carefully.

Power analysis and participants. Because the power analysis for Experiment 2 is the same as for Experiment 1, we intend to recruit participants in the same way as in Experiment 1. A minimum completion time will also be established for inclusion in the final sample; the exclusion criterion is the same as in Experiment 1.

Data analyses. In Experiment 2, the dependent variable is the mean number of “heads” reported. We will again use a one-way ANOVA with Tukey’s method for the multiple comparisons. To check for cheating in each group, a one-sample t-test comparing the mean number of “heads” reported against the chance level (50%) will be performed.

Study timeline

Currently, the online experiments in which participants will conduct the coin-toss task are under construction. After Stage 1 acceptance, our colleagues will be asked to complete the pilot test used to calculate the MCT. We will then post our experiments on the Yahoo! Crowdsourcing Service to recruit participants. We expect to complete the experiments and subsequent analyses within two months.

Ethical approval and consent to participate

The present study received approval from the psychological research ethics committee of the Faculty of Human-Environment Studies at Kyushu University (approval number: 2019-004). Completion of an experiment will be taken as consent to participate. Participants have the right to withdraw from the experiment at any time without providing a reason. In addition, we will protect participants’ personal information. Because this study will be conducted online, even if participants engage in cheating behavior, we cannot identify them, nor will we meet them face-to-face.

Data availability

Underlying data

No underlying data are associated with this article.

Extended data

Open Science Framework: How subtle linguistic cues prevent unethical behaviors, https://doi.org/10.17605/OSF.IO/68FVK (Guo et al., 2019).

This project contains the following extended data:

  • Protocol for the pilot study conducted for Experiment 2.

  • Data collected for the pilot study conducted for Experiment 2.

Data are available under the terms of the Creative Commons Attribution 4.0 International license (CC-BY 4.0).
