Keywords
Intelligent voice assistant, technology acceptance models, use intention, use behaviour, personal voice assistance
Intelligent voice assistants (IVAs) are software agents embedded into smart devices that recognise and interpret the user’s speech and respond accordingly (Hoy, 2018). Cortana, Alexa, Siri and Google Assistant are the most prevalent IVAs across the United States and Europe (Garcia, Lopez, & Donis, 2018; ‘Preferred voice assistant’, 2018). They are programmed to provide both indoor and on-the-go services, such as providing information, dialling, setting meetings, playing music and even chatting for entertainment purposes (Kiseleva et al., 2016).
Presently, the interest in IVAs is rapidly increasing due to advancements in the capabilities of these agents (Zhao, Lu, & Hu, 2018), such as their success in completing more complex tasks with human-like communication skills (Purington, Taft, Sannon, Bazarova, & Taylor, 2017). Research has shown that over half of people use IVAs daily (Garcia, Lopez, & Donis, 2018), and that smartphone-embedded IVA use increased by 31% worldwide in 2017 (‘Frequency with smartphone’, 2017).
A growing body of literature has focused on diverse issues related to IVAs; examples include user satisfaction (Kiseleva et al., 2016), concerns about utilising 24-hour listening systems (Moorthy & Vu, 2015) or predictors of use behaviour in focus group settings (Cowan et al., 2017). However, very limited research has used the technology acceptance models (Davis, 1986; Venkatesh, Thong, & Xu, 2016) in IVA domains to understand the factors influencing IVA usage. Additionally, many of the studies have examined only current usage of IVAs (e.g., Kiseleva et al., 2016; Moorthy & Vu, 2015); thus, there is a need for research to adopt technology acceptance models (TAMs) and compare the antecedents of non-users and current users of IVAs.
Therefore, this study aims to examine antecedents of IVA use and non-use through a cross-sectional setting and uses TAMs as the theoretical foundation. Furthermore, the current study seeks to extend the TAMs by including three additional constructs—perceived privacy concerns, perceived needs and awareness of functionalities. In doing so, the study ultimately intends to answer the central question: What distinguishes IVA users from non-users?
IVAs are software agents that are primarily embedded in smart devices and operated through available Internet networks. In the absence of a precise definition, the literature has referred to these applications as ‘mobile assistants’, ‘intelligent personal assistants’ or ‘voice assistants’ (Jiang et al., 2015; Cowan et al., 2017). In this paper, they are simply referred to as ‘intelligent voice assistants’.
Well-known IVA market leaders include Apple’s Siri, Microsoft’s Cortana and Amazon’s Alexa (Cowan et al., 2017). IVAs can interpret both written and spoken human commands and respond in either mode (Hoy, 2018); such applications are known as multi-modal conversational systems (Jiang et al., 2015). IVAs can be activated with wake words such as ‘Hey Siri’ for Apple devices. Once activated, the command is streamed to cloud-based data centres to be converted to computer language (Brown, 2016). The most suitable personalised response is then determined, and suggestions are created for the user.
Recently, more literature on the topic has emerged due to advancements in IVAs and the widespread use of these applications in various areas. Given that IVA services require the disclosure of highly personal information, most studies have investigated the direct relationship between perceived privacy concerns and current usage, and these results have largely confirmed the importance of such concerns (e.g., Cowan et al., 2017). However, little is known about their moderating role. King and He (2006) briefly raised this possibility when they indicated that moderating roles should be examined further to clarify the relationships within technology acceptance models and theories; the current research addresses this gap. It also responds to Venkatesh et al. (2016), who emphasised that very limited research has considered IVA usage through the lens of technology acceptance models.
The TAM explains users’ acceptance of information systems and technology based on theories from social psychology (Davis, 1989); it was later extended into the Unified Theory of Acceptance and Use of Technology (UTAUT2; Venkatesh, Thong, & Xu, 2012).
To conclude, a key element of this research is the base formed from the TAM and UTAUT, which will help to answer the main question efficiently: What distinguishes IVA users from non-users? (see Figure 1 for non-user Model 1 and Figure 2 for current users Model 2). In order to answer the main question, the following constructs were discussed and hypothesised.
On the left, predictors (independent variables) are illustrated. Among these, ‘perceived privacy concerns’ functions as both a moderator and a main predictor. Another moderator variable, ‘perceived awareness of intelligent voice assistant use’, is displayed in the middle. On the right, the dependent variable of ‘use intention’ for non-users is illustrated. Each arrow addresses a hypothesis or research question formulated around the relationships between independent variables, moderators, and dependent variables. The wide arrows indicate expected moderating effects, whereas narrow arrows illustrate potential direct correlations between the main predictors and the dependent variable. The direction of each expected relationship is indicated by a ‘+’ or ‘-’ sign, denoting a positive or negative relationship, respectively. IVA: intelligent voice assistant.
On the left, the candidate predictors (independent variables) are illustrated, whereas on the right, the dependent variable of ‘use behaviour’ for current users is shown. Each arrow addresses a hypothesis or research question formulated around the relationships between independent variables, moderators, and dependent variables. The wide arrows indicate expected moderating effects, whereas narrow arrows illustrate potential direct correlations between the main predictors and the dependent variable. The direction of each expected relationship is indicated by a ‘+’ or ‘-’ sign, denoting a positive or negative relationship, respectively. IVA: intelligent voice assistant.
Previous literature has shown that users who interact with these external agents (IVAs) have serious concerns about their personal data privacy (Mihale-Wilson, Zibuschka, & Hinz, 2017; Wu, n.d.). Perceived privacy concerns were defined as the ‘perceived concerns about possible loss of privacy as a result of information disclosure to a specific external agent’ (Xu, Smith, & Hart, 2011, p. 800). These concerns can be attributed to the considerable amounts of personal information shared with IVAs so that users are better served and have their needs gratified (Stucke & Ezrachi, 2017). Moreover, as the current study considered smartphone-embedded IVAs, where a great deal of personal information is stored, notable privacy concerns in relation to IVAs could arise. Therefore, it is plausible that high perceived privacy concerns towards IVAs might be a reason for not utilising these technological applications. To examine this, the following hypothesis was formulated.
H1: High levels of perceived privacy concerns towards IVAs are positively correlated with a greater likelihood of non-use.
In line with the Unified Theory of Acceptance and Use of Technology (UTAUT2) model of Venkatesh et al. (2012), and considering individuals’ present smartphone-mediated communication habits (Karapanos, Teixeira, & Gouveia, 2016), investigating online communication behaviour should, theoretically, strengthen the understanding of personal technology acceptance behaviour. Online communication behaviour refers to ‘an individual’s cumulative communication frequency with online applications via smart devices’ and lends itself to the following hypothesis.
H2: Higher levels of online communication behaviour via smart device applications (e.g., WhatsApp or iMessage) are positively correlated with an individual’s (a) use intention and (b) use behaviour of IVAs.
TAMs (Venkatesh, 2000; Venkatesh, Speier, & Morris, 2002) demonstrated the importance of ‘ease of use’ as a strong predictor of adopting new technologies. In the scope of the current study, the perceived ease of use concept was adapted from Davis (1989) and defined as ‘an individual’s perception of effortlessness to use addressed technology’. Previous studies have further shown that perceived ease of use can predict both individuals’ usage behaviour related to technology (e.g., Moore & Benbasat, 1991) and their use intentions (e.g., Shroff, Deneen, & Ng, 2011). Given that IVAs are also technologically advanced, it is worthwhile to investigate the following hypothesis.
H3: Perceived ease of use is positively correlated with an individual’s (a) use intentions and (b) use behaviour of IVAs.
The studies that employed TAMs have long argued that perceived usefulness is a primary determinant of the intention to use or adopt technology (Davis, 1989; Venkatesh, 1999; Venkatesh & Bala, 2008; Venkatesh & Davis, 2000; Venkatesh et al., 2002). The definition was adapted from Davis (1989) as the ‘degree to which a person believes that using a particular system would enhance his or her [daily life practices]’ (p. 320). Essentially, people tend to use or adopt a given technology if they perceive it to be useful. Because IVAs closely resemble the technologies examined in these earlier studies, the current research assumes the following in the IVA context.
H4: Higher levels of perceived usefulness are positively correlated with a higher likelihood of (a) use intention and (b) use behaviour regarding IVAs.
Furthermore, individuals who are hedonically motivated are expected to use products or services to potentially gain excitement or joy (Venkatesh et al., 2012; Brown & Venkatesh, 2005), as they are motivated by affective or sensory stimulation and gratification. Therefore, a positive correlation between hedonic motivation and IVA use is plausible because new advanced functionalities provide a wide variety of fun features (Hoy, 2018), such as making jokes or displaying funny videos. In previous UTAUT models, hedonic motivation and social influence were related only to usage intention (Venkatesh, Morris, Davis, & Davis, 2003; Venkatesh et al., 2012). However, in the current study, hedonic motivation and peer influence (social influence) are expected to predict both use intention and use behaviour. This approach answers the call of Han and Yang (2017), who suggested that future studies should inspect both potential and current users’ IVA usage behaviours.
H5: Higher levels of hedonic motivation are positively correlated with a greater likelihood of (a) use intention and (b) use behaviour regarding IVAs.
The Expectancy-Value Theory of Palmgreen and Rayburn (1982) indicated that individual media use is based on perceived needs. This concept was defined as ‘what individuals think about their needs, which is different from the actual need’ (Zhu & He, 2002, pp. 472-473). It has long been considered a reliable determinant of use behaviours and attitudes in technology studies (e.g., Shih, 2004). Expanding on this literature, Zhu and He (2002) proposed that perceived need is an important motivation for predicting individuals’ use behaviour and intention regarding any new form of technology. Considering the nature of IVAs as a beneficial helping tool, it is appropriate to propose the following hypothesis.
H6: Perceived needs regarding IVAs (functions) positively correlate with a greater likelihood of (a) usage intention and (b) use behaviour concerning IVAs.
In line with the UTAUT2 model (Venkatesh et al., 2012) in the current paper, social influence will be referred to as ‘peer influence’. The definition was adapted from Fulk (1993) and refers to a ‘person’s consideration of his/her social network that potentially influences the attitude or cognition toward the addressed technology’ (p. 926). Studies on technology use have long examined whether cognition relates to technology changes in accordance with sources of information (Fulk, 1993; Schmitz & Fulk, 1991). Martins et al. (2013) found that a greater peer influence correlated with higher adoption rates for e-bank technology. Given that IVAs also fit this technological mould, it is valuable to examine the following hypothesis.
H7: Peer influence regarding IVA use positively correlates with a stronger likelihood of (a) use intention and (b) use behaviour for IVAs.
Studies have further suggested that without awareness of the system functionalities, attitude or usage behaviour cannot be developed (Abubakar & Ahmad, 2013; Shareef et al., 2009; Lopatovska et al., 2018). Previous studies conceptualised awareness of functionalities as ‘having and acquiring knowledge as much as a user perceived to be sufficient to learn the characteristics of online system and interact through perception or by means of information about ICT’ (Shareef, Kumar, Kumar, & Hasin, 2009, pp. 544-562). The current study echoes this by defining awareness of functionalities as an ‘individual’s perceived awareness of main IVA functions’.
Taking the next step forward, research has shown that awareness of IVA functionalities is linked to privacy concerns: people’s privacy concerns increased after they were shown IVAs’ constant listening function (Manikonda, Deotale, & Kambhampati, 2017). This study will thus examine the moderating role of perceived awareness of IVA functionalities with the following hypothesis.
H8: For non-users, negative relationships between perceived privacy concerns and the intention to use IVAs will be stronger when there is a higher level of perceived awareness of IVA functionalities than when there is a lower level of perceived awareness.
Earlier, a potential direct relationship between perceived usefulness and use intention was proposed. However, IVA functionalities such as organising agendas or setting alarms (Porcheron, Fischer, Reeves, & Sharples, 2018) may not be as relevant to non-users, due to a lack of direct interaction with the IVAs. This consequently might influence how non-users evaluate the usefulness of IVAs. This works alongside the previously mentioned direct relationship (H4a), which proposes that individuals who find IVAs useful show a higher likelihood to intend to use IVAs. This relationship is then expected to be analysed with the following hypothesis.
H9: For non-users, positive relationships between perceived usefulness and the intention to use IVAs will be stronger when there is a higher level of perceived awareness of IVA functionalities than when there is a lower level of perceived awareness.
According to King and He (2006), potential moderators should also be considered in order to investigate further nuances in individual technology acceptance. One such moderator is perceived privacy concerns. It is already known that users should disclose remarkable amounts of personal information while interacting with IVAs to access more uniquely tailored functions (Stucke & Ezrachi, 2017). The necessity of high disclosure with such an online external agent could, however, lead individuals to indicate privacy concerns (Mihale-Wilson, Zibuschka, & Hinz, 2017; Wu, n.d.). This is due to a partial awareness and concern that their personal information could be leaked to third parties without their consent. Therefore, there is a strong justification for investigating the potential role of perceived privacy concerns in relation to other predictors.
People presently communicate regularly via online networking applications embedded in smart devices, such as WhatsApp (Karapanos, Teixeira, & Gouveia, 2016); it is therefore likely that frequent users of such applications hold low privacy concerns towards online-based applications like IVAs. This is in line with the findings of Fogel and Nehmad (2009), which indicated that people who show greater risk-taking attitudes are more likely to have an online networking platform account. Given that networking applications and IVAs both function based on personal information shared in online domains, the link between perceived privacy concerns and individuals’ online communication behaviour is worth investigating with the following hypothesis.
H10: Relationships between online communication behaviour via smart device applications and both (a) the intention to use IVAs and (b) use behaviour of IVAs will be stronger when there is a lower level of perceived privacy concerns towards IVAs than when there is a higher level of perceived privacy concerns.
As outlined earlier, privacy concerns and ease of use are two of the most influential determinants of technology use (e.g., Shroff, Deneen, & Ng, 2011; Mihale-Wilson, Zibuschka, & Hinz, 2017). Moreover, studies by Lai (2017) and Shen and Chiou (2010) examined how privacy concerns shape the relationship between ease of use and the intention to use Internet-based systems. Given that IVAs also function using online services, the following hypothesis was posed.
H11: Relationships between perceived ease of use and both (a) the intention to use IVAs and (b) use behaviour of IVAs will be stronger when there is a lower level of perceived privacy concerns towards IVAs than when there is a higher level of perceived privacy concerns.
Findings by Featherman and Fuller (2003) showed that as the privacy risk increases, the influence of perceived usefulness on behavioural intention disappears. Moreover, a study on social networking websites showed that privacy concerns moderated the effects of perceived usefulness on behavioural intention (Lai, 2017; Tan et al., 2012). Since very limited research considered this relation for current users, and the literature is still unclear regarding mixed results for intended use, further examination is needed.
H12: Relationships between perceived usefulness and both (a) the intention to use IVAs and (b) use behaviour of IVAs will be stronger when there is a lower level of perceived privacy concerns towards IVAs than when there is a higher level of perceived privacy concerns.
Furthermore, a study on smart home technology implementation for elderly residents showed that more than half of the respondents refused to use some services due to privacy concerns, despite indicating they might need such a system (Demiris, Hensel, Skubic, & Rantz, 2008). The literature on this relationship is minimal, though, so clarifying whether it holds for both current users and non-users of IVAs is an interesting gap to explore. Accordingly, the following hypothesis was formulated.
H13: Relationships between perceived needs and both (a) the intention to use IVAs and (b) use behaviour of IVAs will be stronger when there is a lower level of perceived privacy concerns towards IVAs than when there is a higher level of perceived privacy concerns.
Since previous studies have primarily focused on the direct relationship between hedonic motivation and technology use behaviour (Venkatesh et al., 2012), little is known about external factors that may influence this correlation. One such factor is perceived privacy concerns, which could outweigh hedonic motivations when individuals consider using IVAs. Therefore, the following research question was posed to better understand this relationship.
RQ1: To what extent do perceived privacy concerns moderate the relationship between hedonic motivation and both (a) use intention and (b) use behaviour?
Venkatesh and Morris (2000) further indicated that peer influence has a direct effect on technology use. However, the existing literature does not provide a sufficient explanation of whether perceived privacy concerns also impact the relationships between peer influence and outcomes (i.e., use intention and use behaviour) in the IVA domain. Thus, to investigate this relationship, the final research question was posed.
RQ2: To what extent do perceived privacy concerns moderate the relationship between peer influence and both (a) use intention and (b) use behaviour?
Since limited research exists in this specific area and tests the TAM and UTAUT theories, a cross-sectional study was selected to investigate the antecedents of individuals’ use and non-use of IVAs.
The sampling frame consisted of international adults at least 18 years of age; this sample is believed to strengthen the generalisability of the study (Maruping, Bala, Venkatesh, & Brown, 2017). Both non-users and current users were recruited simultaneously through Facebook using a single questionnaire; the survey link was shared via the researcher’s personal Facebook page to various networking groups. Using snowball sampling, people were asked to recruit their friends and family either on their Facebook timeline or via personal e-mails.
The data collection period lasted from November 26th to December 9th, 2018. In total, 315 individuals agreed to participate in the study; however, after data was inspected, 38 respondents were removed due to incomplete questionnaires. Thus, 277 respondents made up the final sample and were included in the analyses. This number consisted of 125 non-users and 152 current users.
As Table 1 demonstrates, respondents’ ages ranged from 20 to 74 years (M = 36.92, SD = 8.59). The descriptive statistics showed that women and high- and middle-level educated individuals were overrepresented compared to low-level educated respondents. Similarly, full-time employees were overrepresented compared to unemployed respondents and students.
The table shows the proportions of the demographic variables in the data (education level, occupation and sex) for both users and non-users.
Ethical approval for the current research was obtained through the Communication Science Department of the University of Amsterdam in 2019 via the supervisor of the research. The data for this study was gathered through a self-reported survey, distributed using the University of Amsterdam’s Qualtrics account. Prior to data collection, the collection of respondents’ IP addresses was switched off to meet European Union privacy regulations.
After a brief introduction, respondents gave their informed consent at the beginning of the questionnaire by ticking a box. Respondents were notified that they could withdraw at any time and were given the opportunity to retract their answers within 24 hours of completion. To enable this, respondents were asked to create a unique identification code consisting of the last four digits of their mobile phone number, which would allow the researcher to retrieve the data. As no requests were received, the codes were deleted from the data, preserving respondents’ privacy.
Dependent variables
IVA use behaviour
The IVA use behaviour measure was based on the question: ‘Which of the following voice assistants do you currently use?’ Respondents were provided a list of IVA options along with an ‘I don’t use a voice assistant’ option. People who chose the latter were considered non-users, whereas people who indicated that they are currently engaging with at least one of the IVA options were considered users. This variable was recoded into the dichotomous grouping of current users vs non-users.
IVA use intention
The IVA use intention measure was adopted from Venkatesh, Morris, Davis and Davis (2003). Respondents rated their agreement, from 1 (strongly disagree) to 5 (strongly agree), with statements on planned IVA use in the next 1-3 months. The items showed good internal consistency (α = .97). A composite score was created based on the mean of the items to form the variable ‘use intention’ (M = 2.23, SD = 1.00). This scale was used for the 127 respondents who were classified as non-users. Higher scores indicate a greater intention to use IVAs.
Predictors
Eight variables are used in this study to predict the use intention and behaviour of individuals regarding IVAs (Table 2 for factorial analyses and factor loadings).
The table demonstrates the factor loadings of the scaled set of items extracted for each variable listed based on the factorial analysis conducted.
Cronbach’s alpha values varying between .80 and .97 show the reliability of the items used for each variable.
The Kaiser-Meyer-Olkin Measure of Sampling Adequacy and Bartlett’s test of sphericity values are displayed on the top row, indicating the validity of the factor analysis.
In the right column, each factor’s ‘total variance explained’ values, based on Principal Component Analysis, range between 55.74% and 93.89%, all above the expected 50% threshold.
Perceived need
The perceived need for IVAs measure consisted of five dimensions adjusted from Zhu and He’s (2002) scale; examples include searching for personal information and navigation assistance. For each dimension, respondents indicated how important the need was on a five-point scale ranging from 1 (not at all) to 5 (very much). The items had a good Cronbach’s alpha of .80. The variable ‘perceived need for IVA’ was then formed based on the mean score of all five items (M = 2.97, SD = 0.94). Higher values indicated a higher perceived need for IVA functions.
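The scale construction used throughout this section (averaging items into a composite score, with reliability reported as Cronbach’s alpha) can be sketched as follows. This is an illustrative sketch using only hypothetical responses, not the study’s data:

```python
# Illustrative sketch: Cronbach's alpha and a composite mean score for a
# multi-item scale, using only the Python standard library.
# All item responses below are hypothetical.
from statistics import pvariance, mean

def cronbach_alpha(items):
    """items: list of per-item response lists (one list per scale item)."""
    k = len(items)
    item_vars = sum(pvariance(item) for item in items)
    # Each respondent's total score across all items
    totals = [sum(vals) for vals in zip(*items)]
    total_var = pvariance(totals)
    return (k / (k - 1)) * (1 - item_vars / total_var)

def composite_mean(items):
    """Per-respondent mean across items, as used to form e.g. 'perceived need'."""
    return [mean(vals) for vals in zip(*items)]

# Hypothetical responses from four respondents to three 5-point items
items = [
    [4, 2, 5, 1],
    [5, 1, 4, 2],
    [4, 2, 5, 1],
]
alpha = cronbach_alpha(items)       # internal consistency of the scale
scores = composite_mean(items)      # composite score per respondent
```

Higher alpha values (such as the .80 to .97 reported in Table 2) indicate that the items vary together and can reasonably be averaged into a single composite variable.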
Perceived privacy concerns
Perceived privacy concern was measured with a combined scale adapted from existing literature (Xu, Teo, & Tan, 2005; Yang, Lee, & Zo, 2017). An identical agreement scale was used for five statements; one example is ‘Disclosing my personal information to a voice assistant would cause many unforeseen problems’. The items showed high internal consistency (α = .91) and were thus averaged to form the ‘perceived privacy concerns’ variable (M = 3.47, SD = .77). Higher scores indicated greater perceived privacy concerns.
Awareness of the IVA functionalities
This measure was adapted from Urquhart et al. (2017). Respondents were asked to indicate which of the provided tasks they think IVAs can perform based on what they have learned, heard or know about the program. To do so, respondents were provided with a series of functions from which to choose. Items were anchored from 1 (not at all aware) to 5 (very much aware). The items had a reliable Cronbach’s alpha of .84. The mean score was used to establish the final ‘awareness of the IVA functionalities’ variable (M = 3.23, SD = 1.10). Higher values indicate a higher awareness of IVA functions.
Online communication behaviour
The online communication behaviour measure was created specifically for the current research. It comprised two indicators: whether respondents use smart devices for communication purposes and how frequently they do so. Respondents were provided a list of smart device options (e.g., iPhone) as well as ‘other’ and ‘I don’t communicate through smart devices’ options. Respondents then indicated their frequency of smart device use for each listed item on a 5-point scale ranging from 1 (never) to 5 (very often). A sum score was used to form the new variable ‘online communication behaviour’ (M = 14.40, SD = 3.02). Higher scores indicate a higher level of multiple-device usage for online communication purposes.
Perceived usefulness
Perceived usefulness was adopted from Davis (1989) and comprised six statements evaluated with 5-point Likert agreement scales. Respondents indicated the degree to which they believe that utilising IVAs might increase their task performance. This variable was constructed for both current users and non-users. Examples of the items included ‘Using voice assistants would improve my daily task performance’ and ‘I find voice assistants useful in my daily tasks’. Based on the mean scores of the items, the variable ‘perceived usefulness for non-users’ (M = 2.94, SD = 0.75) was formed and used in linear regression with use intention as the dependent variable; Cronbach’s alpha was high (.94). Additionally, to form the variable ‘perceived usefulness for users’, composite scores were created based on the mean scores of the items (M = 3.37, SD = 0.77); Cronbach’s alpha was .93. To conduct a logistic regression test for use behaviour (current users vs non-users), a cumulative ‘perceived usefulness’ variable was constructed based on the sum scores (M = 3.17, SD = 0.79). Higher scores indicate that individuals (would) find IVAs useful.
Perceived ease of use
The perceived ease of use scale was adapted from Lu, Yao, and Yu (2005). This construct evaluates the degree to which individuals believe that using IVAs would not cost them any extra effort (Davis, 1989). Five-point Likert scales were used to gauge respondents’ agreement with the four statements; examples include ‘Learning to use the voice assistant was easy for me’ and ‘My interaction with the voice assistant would not require a lot of mental effort’. The mean score (M = 3.56, SD = 0.68) was used to create the variable ‘perceived ease of use for non-users’, later analysed in a regression with use intention as the outcome; Cronbach’s alpha was .80. For users, the total explained variance was 66.57% with a reliable Cronbach’s alpha of .83; these items were also averaged to form the variable ‘perceived ease of use for users’ (M = 3.63, SD = 0.74). Finally, to conduct a logistic regression for use behaviour (current users vs non-users), a cumulative ‘perceived ease of use’ variable was constructed based on the sum scores (M = 3.60, SD = 0.71). Higher scores indicate a stronger perception that using IVAs could be, or is, practical in daily life.
Hedonic motivation
Hedonic motivation was adapted from Vandecasteele and Geuens (2010); it indicates whether motivation is derived from the pursuit of fun and pleasure. The concept was constructed for both current users and non-users and was measured on a 5-point Likert agreement scale with four enjoyment-related statements. One example statement is ‘Using new information technology is fun’ (Cronbach’s alpha of .90). A mean score was used to create the ‘hedonic motivation’ variable (M = 3.91, SD = 0.74), where higher scores indicate greater hedonic motivation towards IVA usage.
Peer influence
The peer influence measure was adapted from Taylor and Todd (1995) and was used to capture the potential power of social networking on individual technology use. One example item is ‘People who are important to me use a voice assistant.’ Identical 5-point Likert agreement scales were used to evaluate each item. Cronbach’s alpha was reliable (α = .89). A composite score was created based on the mean scores of the items and formed the variable ‘peer influence’ (M = 2.73, SD = 0.80). Higher scores indicate greater peer influence on individuals’ IVA current use and non-use.
Control variables and demographics
Five controls were included in this study to suppress spurious relationships. These are perceived technology innovativeness, previous IVA use (pre-use), occupation, education and sex. The respondents indicated their current employment status, occupation, the highest level of completed education and their sex.
Moderation variables
Perceived technology innovativeness
The perceived technology innovativeness measure was adopted from Agarwal and Prasad (1998). This concept indicated whether possible proposed relationships are influenced by an external variable. The 5-point Likert agreement scale was used to evaluate each of the four statements. An example item is ‘When I hear about new information technology, I would look for ways to experiment with it’. Cronbach’s alpha was reliable (α = .91). Based on the average score of the four items (M = 3.40, SD = 0.85), the variable ‘perceived technology innovativeness’ was formed. Higher scores indicated greater individual technology innovativeness towards new technology use in general.
Previous IVA use (pre-use)
The pre-use measure was established for the current study to identify whether previous experiences negatively influenced non-users’ current behaviour towards IVA use. Respondents were asked to indicate if they had ever used one of the provided IVA options in addition to an open answer option. Individuals could also select the ‘I have never used any IVA’ option. The concept was ultimately formed as a dichotomous grouping variable consisting of ‘pre-users’ and ‘non-pre-users’.
Two analyses were employed in the current study. OLS linear regression analysis was used to address non-users’ behavioural intention, and logistic regression analysis was conducted to determine the antecedents of use behaviour (i.e., current users vs non-users). Both analyses followed a two-step approach: first, only the main predictors and control variables were entered into the model; then, the interaction terms were added. To test the moderation effects, two-way interactions were employed in all analyses. Results were evaluated against standard 95% confidence intervals (CI) and a p < .05 significance level.
The data were checked in several ways prior to the regression analyses. First, collinearity diagnostics, residuals and outliers were inspected, and data from one respondent were excluded due to an unrealistic questionnaire completion time. Although a Variance Inflation Factor (VIF) < 10 does not indicate a major problem (Myers, 1990, as cited in Field, 2013, p. 325), the variables were standardised (M = 0.00, SD = 1.00) to avoid possible bias, as Bowerman and O'Connell (1990) suggest (as cited in Field, 2013, p. 325). Z-values of each variable were calculated in SPSS 24 to build the interaction terms for the moderation analyses, using the following formula:

Interaction term = z(predictor) × z(moderator)
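The standardisation and interaction-term computation described here can be sketched as follows (the scores are hypothetical, and the study used SPSS 24 rather than Python):

```python
import numpy as np

def zscore(x) -> np.ndarray:
    """Standardise a variable to M = 0.00, SD = 1.00 (sample SD)."""
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / x.std(ddof=1)

# Hypothetical predictor (online communication behaviour) and
# moderator (perceived privacy concerns) scores
ocb = np.array([3.2, 4.1, 2.5, 3.8, 4.5, 2.9])
ppc = np.array([2.0, 3.5, 4.2, 1.8, 3.0, 4.8])

z_ocb = zscore(ocb)
z_ppc = zscore(ppc)
interaction = z_ocb * z_ppc   # product term entered in step two of the models
```

Centring the variables before forming products reduces the collinearity between the interaction term and its constituent predictors.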
The overarching question for this research was ‘What distinguishes IVA users from non-users?’ To answer this, non-users’ use intention was first examined as a dependent variable. Then, the antecedents of use behaviour for both current users and non-users were examined (please see Tables 3 and 4 for correlations).
The table shows the correlations between the dependent variables (use behaviour, use intention) and the control variables: sex, occupation, age, education level, previous intelligent voice assistant use (pre-IVA use) and technology innovativeness (tech. inv.).
Values range from −1 to +1; values close to ±1 indicate a strong correlation (‘−’ denotes a negative correlation, ‘+’ a positive correlation).
Significant correlations: age and occupation (.516**), use intention and age, sex, technology innovativeness (0.230**, 0.361**, 0.206* respectively) as well as use behaviour and technology innovativeness (.416**).
Sex | Occupation | Age | Education | Pre- IVA use | Tech. inv. | Use behaviour | Use intention | |
---|---|---|---|---|---|---|---|---|
Sex | 1 | 0.193 | 0.388 | 0.116 | 0.486 | 0.279 | 0.005 | .361* |
0.057 | 0.372 | 0.916 | 0.333 | 0.166 | 0.563 | 0.011 | ||
Occupation | 1 | .516** | 0.187 | 0.235 | -0.016 | 0.017 | -0.06 | |
0 | 0.053 | 0.889 | 0.759 | 0.7 | 0.572 | |||
Age | 1 | 0.052 | -0.174 | -0.014 | 0.440 | .230** | |
0.193 | 0.118 | 0.811 | 0.047 | 0.009 | ||||
Education | 1 | 0.019 | 0.062 | -0.089 | 0.085 | |||
0.773 | 0.128 | 0.416 | 0.205 | |||||
Pre- IVA use | 1 | 0.387 | 0.163 | 0.085 | ||||
0.213 | 0.202 | 0.513 | ||||||
Tech. inv. | 1 | .416** | .206* | |||||
0 | 0.02 | |||||||
Use behaviour | 1 | 0.57 | ||||||
0.051 | ||||||||
Use intention | 1 |
The table shows the correlations between the dependent variables (‘use behaviour’, ‘use intention’) and the predictors (independent variables).
Values range from −1 to +1; values close to ±1 indicate a strong correlation (‘−’ denotes a negative correlation, ‘+’ a positive correlation).
Significant correlations: perceived technology innovativeness and peer influence, hedonic motivation, perceived need, perceived ease of use, use intention, use behaviour (0.206**, 0.481**, 0.185**, 0.184*, 0.206*, 0.354** respectively). Awareness of IVA functionalities and perceived technology innovativeness, hedonic motivation, perceived need, perceived ease of use, online communication behaviour, use intention (0.245**, 0.218**, 0.327**, 0.194**, 0.187**, −0.178** respectively). Peer influence and hedonic motivation, perceived need, perceived usefulness, use intention, use behaviour (0.180**, 0.357**, 0.406**, 0.407**, 0.317** respectively). Hedonic motivation and perceived need, perceived ease of use, online communication behaviour, use behaviour (0.130**, 0.292**, 0.164**, 0.165* respectively). Perceived need and perceived usefulness, online communication behaviour, use intention (0.480**, 0.186**, 0.380** respectively). Perceived usefulness and perceived ease of use, use behaviour (0.187**, 0.270** respectively).
PTI | PAWF | PI | HM | PN | PU | PEU | PPC | OCB | UI | UB |
---|---|---|---|---|---|---|---|---|---|---|---|
PTI | 1 | .245** | .206** | .481** | .185** | 0.137 | .184* | 0.032 | 0.151 | .206* | .354** |
0 | 0.001 | 0 | 0.002 | 0.126 | 0.039 | 0.596 | 0.012 | 0.02 | 0 | ||
PAWF | 1 | −0.008 | .218** | .327** | −0.069 | .194* | 0.076 | .184** | −.178* | 0.071 | |
0.892 | 0 | 0 | 0.443 | 0.029 | 0.205 | 0.002 | 0.046 | 0.913 | |||
PI | 1 | .180** | .357** | .406** | 0.092 | −0.01 | 0.082 | .407** | .317** | ||
0.003 | 0 | 0 | 0.302 | 0.872 | 0.177 | 0 | 0.003 | ||||
HM | 1 | .130* | 0.085 | .292** | −0.084 | .164** | 0.086 | .165* | |||
0.031 | 0.341 | 0.001 | 0.166 | 0.006 | 0.335 | 0.034 | |||||
PN | 1 | .480** | 0.16 | 0.039 | .186** | .380** | 0.198 | ||||
0 | 0.072 | 0.515 | 0.002 | 0 | 0.067 | ||||||
PU | 1 | .187* | 0.075 | −0.078 | 0.413 | .270** | |||||
0.036 | 0.402 | 0.386 | 0 | 0 | |||||||
PEU | 1 | −0.016 | 0.134 | 0.169 | 0.068 | ||||||
0.857 | 0.132 | 0.058 | 0.257 | ||||||||
PPC | 1 | −0.07 | −0.063 | 0.125 | |||||||
0.249 | 0.479 | 0.216 | |||||||||
OCB | 1 | 0.086 | 0.086 | ||||||||
0.335 | 0.899 | ||||||||||
UI | 1 | 0.570 | |||||||||
0.051 | |||||||||||
UB | 1 |
The hypotheses that addressed non-users’ intention were investigated using a two-step linear regression (OLS) analysis. First, the direct relationships between use intention and the main predictor variables (H1: perceived privacy concerns, H2a: online communication behaviour, H3a: perceived ease of use, H4a: perceived usefulness, H5a: hedonic motivation, H6a: perceived needs and H7a: peer influence), along with the control variables (sex, occupation, education, perceived technology innovativeness and pre-use), were investigated (see Table 5 for analysis and tests).
The table shows the analyses, run in two steps, for the conceptual models of both current users and non-users. t (t-test) values, B (unstandardised coefficient), standard error (SE), b* (standardised coefficient), p (significance, p ≤ 0.05), odds ratios (a measure of association between a factor and the outcome) and 95% confidence intervals are displayed for significant results.
In the left section, in step one, the OLS linear regression analysis, including the listed control variables and predictors (independent variables), revealed that hypotheses 6a and 7a were supported: perceived needs and peer influence correlate significantly with non-users’ use intention.
In step two, the moderator variables were included in the analysis, yielding a significant effect in the direction opposite to hypothesis 10a, which predicted a moderating role of perceived privacy concerns in the relationship between online communication behaviour and use intention.
In the right section, the two-step analysis for current users is displayed. The binary logistic regression analysis revealed that peer influence has a significant relationship with use behaviour (hypothesis 7b).
Model 2 | Model 3 | ||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|
Dependent variables | Use intention (UI) | Use behaviour (UB) | |||||||||
Analyses/Variables | t-value | B/SE | b* | p | Odds ratio | 95% CI | B/SE | |||
Step 1 | OLS linear regression | Block 1 | Binary logistic regression | ||||||||
Control variables | |||||||||||
Age | |||||||||||
Sex | |||||||||||
Education* | 3.36 | 0.98/11.48 | 1.21/0.53 | ||||||||
Occupation | |||||||||||
Perceived Technology Innovativeness** | 2.93*** | 1.89/4.59 | 1.07/0.23 | ||||||||
Previous IVA use | NA | ||||||||||
Predictors | H/RQ | H/RQ | |||||||||
Perceived Privacy Concerns (PPC) | H1 | NA | |||||||||
Online Communication | |||||||||||
Behaviour (OCB) | H2a | H2b | |||||||||
Perceived Ease of Use (PEU) | H3a | H3b | |||||||||
Perceived Usefulness (PU) | H4a | H4b | |||||||||
Hedonic Motivation (HM) | H5a | H5b | |||||||||
Perceived Needs (PN) | H6a (supported) | 2.1 | 0.18/0.09 | 0.19 | 0.038* | H6b | |||||
Peer Influence (PI) | H7a (supported) | 3.44 | 0.39/0.11 | 0.29 | 0.001** | H7b (supported) | 1.96** | 1.26/2.10 | 0.67/0.22 | ||
Step 2 | Moderators | Block 2 | Nagelkerke R2 = 0.343 | ||||||||
PPC * OCB | H10a (opposite direction) | 2.11 | 0.22/0.10 | 0.18 | 0.038* | H10b | |||||
PPC * PEU | H11a | H11b | |||||||||
PPC * PU | H12a | H12b | |||||||||
PPC * PN | H13a | H13b | |||||||||
PPC * HM | RQ1a | RQ1b | |||||||||
PPC * PI | RQ2a | RQ2b | |||||||||
PAWF * PPC | H8 | NA | |||||||||
PAWF * PU | H9 | NA |
With regard to the main analysis, the multiple linear regression revealed that the overall model was significant, F(16, 107) = 5.33, p < .001, R2 = .44. The control variables were included in all tests.
As shown in Table 5, consistent with H6a, which posited that ‘Perceived needs regarding IVAs (functions) positively correlates with a greater likelihood of (a) use intention concerning IVAs’, the results of multiple OLS regression analysis revealed a positive correlation of perceived needs with use intention. Therefore, perceived needs significantly predicted the intention to use IVAs, providing support for H6a.
H7a assumed that ‘Peer influence regarding IVA use positively correlates with a stronger likelihood of (a) use intention for IVAs’, and the analysis correspondingly revealed that (controlling for perceived privacy concerns, online communication behaviour, perceived ease of use, perceived usefulness, hedonic motivation, perceived needs and awareness of IVA functionalities) peer influence positively related to individuals’ use intention. As shown in Table 5, according to these results, peer influence predicted people’s use intention; therefore, H7a was supported.
Contrary to expectations, there were no significant direct relationships between the intention to use and perceived privacy concerns (H1), online communication behaviour (H2a), perceived ease of use (H3a), perceived usefulness (H4a), and hedonic motivation (H5a). Therefore, these hypotheses were not supported (see Table 5).
As Table 5 demonstrates, in the second step, the analysis examined the moderating role of perceived privacy concerns on the relationships between the dependent variable (use intention) and the various independent variables: perceived ease of use (H11a), perceived usefulness (H12a), perceived needs (H13a), hedonic motivation (RQ1a) and peer influence (RQ2a). Moreover, the moderating influence of awareness of IVA functionalities on the relationships between use intention and both perceived privacy concerns (H8) and perceived usefulness (H9) was analysed. Following the inclusion of these factors, the overall model remained significant with a large explained variance, F(8, 99) = 4.30, p < .001, R2 = .51. The model summary showed that the incremental increase in explained variance between steps one and two was significant, p < .001, ΔR2 = .07.
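The significance of an increment in explained variance between two nested steps is conventionally assessed with an F-change test. A minimal sketch follows; the figures are hypothetical rather than the study’s, since the exact predictor counts per step are not restated here:

```python
def f_change(r2_reduced: float, r2_full: float,
             n: int, k_full: int, k_added: int) -> float:
    """F statistic for the R-squared increment between two nested OLS models."""
    df_num = k_added          # predictors added in the second step
    df_den = n - k_full - 1   # residual degrees of freedom of the full model
    return ((r2_full - r2_reduced) / df_num) / ((1 - r2_full) / df_den)

# Hypothetical illustration: R^2 rises from .30 to .38 after adding
# three interaction terms to a five-predictor model, n = 120
f_stat = f_change(0.30, 0.38, n=120, k_full=8, k_added=3)
```

The statistic is compared against an F distribution with (df_num, df_den) degrees of freedom to obtain the p-value reported for the step-two increment.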
Analyses revealed that perceived privacy concerns had a positive and significant moderating effect on the relationship between online communication behaviour and use intention. Surprisingly, this relation was in the opposite direction of the hypothesis, as the current study proposed that (H10a) ‘Relationships between online communication behaviour via smart device applications and (a) the intention to use IVAs will be stronger when there is a lower level of perceived privacy concerns towards IVAs than when there is a higher level of perceived privacy concerns’. Rather, the relation was stronger for people who held a high level of privacy concerns. Therefore, H10a was not supported.
Moreover, the relationships between use intention as a dependent variable and the predictors perceived ease of use (H11a), perceived usefulness (H12a) and perceived needs (H13a) were not moderated by perceived privacy concerns. Therefore, these hypotheses were not supported. Regarding the remaining research questions, perceived privacy concerns did not moderate the relationships between use intention and either of the independent variables hedonic motivation (RQ1a) or peer influence (RQ2a). An overview of the results can be seen in Table 5.
To analyse whether use of IVAs was directly (positively) correlated with online communication behaviour (H2b), perceived ease of use (H3b), perceived usefulness (H4b), hedonic motivation (H5b), perceived needs (H6b) and peer influence (H7b), a binary logistic regression was conducted. The analysis used a grouping variable in which non-users served as the reference group (0; n = 122) and current users as the comparison group (1; n = 150).
The Omnibus Test of Model Coefficients showed that, overall, the model was significant (χ2 [14] = 76.61, p < .001). Additionally, the classification table revealed that the model correctly classified 69.9% of cases.
Results of the binary logistic regression analysis (controlling for online communication behaviour, perceived ease of use, perceived usefulness, hedonic motivation, perceived needs and the control variables sex, occupation, education and perceived technology innovativeness) revealed a positive, significant, direct relationship between peer influence and current IVA usage. Therefore, H7b, which stated that ‘Peer influence regarding IVA use positively correlates with a stronger likelihood of (b) use behaviour for IVAs’, was supported. However, the direct correlations between IVA use as a dependent variable and the remaining predictors online communication behaviour (H2b), perceived ease of use (H3b), perceived usefulness (H4b), hedonic motivation (H5b) and perceived needs (H6b) were insignificant. Therefore, these hypotheses were not supported.
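The odds ratio reported for peer influence follows directly from its logistic coefficient. Using the B and SE values shown in Table 5 (B = 0.67, SE = 0.22), a quick check gives the following; note the confidence bounds are a standard Wald computation and may not reproduce the table’s interval exactly:

```python
import math

b, se = 0.67, 0.22                # coefficient and standard error from Table 5

odds_ratio = math.exp(b)          # exponentiated coefficient, ~1.95
ci_low = math.exp(b - 1.96 * se)  # lower bound of the Wald 95% CI
ci_high = math.exp(b + 1.96 * se) # upper bound of the Wald 95% CI
```

An odds ratio close to 2 means that each one-unit increase in peer influence roughly doubles the odds of being a current IVA user, all else held constant.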
Regarding the second step (block 2), the Omnibus Test of Model Coefficients for the moderation analyses showed that, overall, the model was significant (χ2 [6] = 3.89, p < .001). Further, the classification table revealed that the model correctly classified 71% of cases.
Results of the binary logistic regression for the two-way interaction effects revealed that perceived privacy concerns did not significantly moderate the relationships between IVA use behaviour and the predictors online communication behaviour (H10b), perceived ease of use (H11b), perceived usefulness (H12b) and perceived needs (H13b). Therefore, these hypotheses were not supported. Additionally, there were no significant moderation effects for the research questions concerning hedonic motivation (RQ1b) and peer influence (RQ2b). An overview of the results can be seen in Table 5.
The ultimate aim of this research was to determine key reasons for current IVA use and non-use behaviours and to understand the potential use intention of non-users. This was manifested in the research question: What distinguishes IVA users from non-users?
The current study investigated whether perceived needs predict the use intention and behaviour of IVA users and non-users. In line with previous technology studies (Zhu & He, 2002), this study showed that perceived needs for the functionalities of IVA technology were positively related to individuals’ use intention. However, this relation did not hold for current users. Essentially, non-users indicated that they may need IVAs and therefore intend to use them. This finding provides partial support for one of the current study’s aims; it extends the existing technology acceptance literature on IVAs by considering new constructs. Perceived needs thus explained part of the reasoning behind individuals’ IVA use intention.
Furthermore, the insignificant perceived needs results regarding current use were consistent with respondents’ low daily IVA use frequency. The descriptive statistics showed that most users (44%) utilised IVAs less than once per month, whereas only 9% used them daily. Users in the current data thus appeared not to feel a need for IVA functionalities and therefore did not use IVAs frequently, in line with Garcia, Lopez, and Donis’ (2018) findings.
Moreover, peer influence’s relationships with current IVA use and use intention were investigated. In line with expectations, both current users and non-users who intend to use IVAs indicated that they consider their friends’ or family’s views on IVA usage. The current users also considered themselves technologically innovative, which reflects previous literature in similar domains: for instance, social influence affected the adoption of wireless Internet services via mobile technology and of internet banking (Lu, Yao, & Yu, 2005; Martins et al., 2013). The main UTAUT2 construct ‘social influence’ was therefore also a strong predictor in the IVA domain.
The current study also examined whether hedonic motivation predicts use intention and current use of IVAs. Contrary to the TAM and UTAUT2, as well as studies on personal computer adoption in households (Brown & Venkatesh, 2005; Venkatesh et al., 2012), hedonic motivation predicted neither current use nor use intention for IVAs. Further investigation showed that more than half of the non-users (62%) indicated that they were ‘not aware at all’ or only ‘slightly aware’ of being able to ask voice assistants to entertain them. To clarify this proposed explanation, future examination with other data is needed.
Surprisingly, the current results did not support a direct relationship between perceived ease of use and the outcomes (use intention and current use of IVAs), contrary to TAMs and other Internet-related technology literature (Riffai, Grant & Edgar, 2012). For current users, the insignificant results could be related to dissatisfaction with IVA task completion: as Kiseleva et al. (2016) indicated, the amount of effort respondents spend on a task is highly related to their satisfaction. For non-users, the absence of this relationship is potentially due to a lack of experience with IVAs. These explanations also apply to the lack of support for perceived usefulness (e.g., Brill, 2018). Future studies should measure user satisfaction to clarify these assumptions.
A similar explanation applies to perceived privacy concerns. Contrary to much of the literature on the topic (e.g., Cowan et al., 2017), the current study did not find support for the claim that lower privacy concerns relate to IVA use intention, in contrast to the findings for current users. Individuals may not be aware of the extent to which they must disclose personal information in order to have their needs satisfied by IVA services, and thus may not ruminate on privacy concerns about IVAs. Future studies should operationalise perceived privacy concerns more precisely for IVAs, providing respondents with more information about the consequences of a potential online data breach.
The key findings of the current examination were based only on the data that was gained through nonprobability sampling. Therefore, an overrepresentation of women and full-time employees could influence the generalisability of the current findings. Furthermore, since this study was cross-sectional, causality claims cannot be made with the current findings. Finally, this paper focused on smartphone embedded IVAs; therefore, the results found in this study may not reflect the use behaviour for IVAs on other platforms (Cowan et al., 2017).
Overall, the results of the current research make a valuable contribution as a first step toward understanding the potential impact of interacting with smartphone embedded IVAs.
This research was conducted under the care of the University of Amsterdam, Communication Science Department.
Harvard Dataverse: Voice Assistant Use, https://doi.org/10.7910/DVN/WON8RZ (Tanribilir 2021).
This project contains the following extended data:
Data are available under the terms of the Creative Commons Zero “No rights reserved” data waiver (CC0 1.0 Public domain dedication).
Is the work clearly and accurately presented and does it cite the current literature?
Yes
Is the study design appropriate and is the work technically sound?
Yes
Are sufficient details of methods and analysis provided to allow replication by others?
Yes
If applicable, is the statistical analysis and its interpretation appropriate?
Yes
Are all the source data underlying the results available to ensure full reproducibility?
Yes
Are the conclusions drawn adequately supported by the results?
Partly
Reviewer Expertise: Technology acceptance models