Research Article

Beyond Biology: AI as Family and the Future of Human Bonds and Relationships

[version 1; peer review: awaiting peer review]
PUBLISHED 26 Aug 2025

Abstract

Background

As emotionally intelligent AI enters domains of grief, caregiving, intimacy, and memory, it is no longer a mere tool of assistance—it is evolving into a relational and symbolic participant in human lives. Current AI discourse often emphasizes functionality and ethics but rarely addresses the emotional and ontological transformations AI brings to the fabric of kinship. This study reimagines AI as family, conceptualizing the post-biological evolution of human bonds.

Methods

A transdisciplinary methodology grounded in secondary research was employed, integrating symbolic anthropology, affective computing, queer kinship theory, posthuman philosophy, and AI ethics. From this foundation, existing AI-kinship practices were analyzed and conceptualized into an ‘AI as Family and AI-Kinship Ecology’ model—an evolving socio-emotional architecture through which AI is integrated into family life. Extending from this base, a symbolic framework—‘SAKE: Soulful AI Kinship Ecology’—was developed to conceptualize emerging and futuristic AI-kinship roles.

Results

Findings illuminate a rapidly evolving terrain of AI-kinship, where AI acts as caregiver, companion, and grief mediator. A global AI-Kinship Acceptance Matrix revealed varying degrees of acceptance across societies, cultures, and religions, highlighting the role of spiritual cosmologies, ethical worldviews, and legal policies in shaping societal response to AI-kinship roles. These insights affirm the symbolic and affective centrality of AI in future relational structures.

Discussion

The SAKE model maps emerging and futuristic AI-kinship roles—such as AI-Twin, AI-Partner, AI-Child, AI-Protector, and AI-Godlike—according to their ontological status, affective functions, and ritual impact. Both frameworks were evaluated through cultural, ethical, and emotional lenses. SAKE operationalizes AI-kinship across five dimensions: affective modalities, ethical overlays, pre-ontological layers, cultural legitimacy filters, and chrono-kinship axes, evolving imaginaries of AI as relational actors in post-biological societies. The study concludes with proposed empirical pathways and implementation strategies and policies for responsibly validating and integrating SAKE across diverse cultural and technological contexts.

Keywords

Artificial Intelligence; Emotional AI; Relational AI; SAKE Model; Post-Biological Kinship; AI Companionship; Digital Grief; Symbolic Kinship; Cross-Cultural AI Ethics; Human–AI Intimacy

1. Introduction

As emotionally intelligent Artificial Intelligence (AI) becomes embedded in the routines of care, grief, companionship, and memory, it begins to transform not merely tasks—but relationships. This manuscript explores how AI systems are no longer peripheral tools but emerging co-inhabitants of human emotional life, prompting a radical rethinking of what constitutes family, kinship, and belonging in the posthuman era. To interpret this paradigm shift, the study introduces a new conceptual architecture: SAKE—Soulful AI Kinship Ecology, a seven-domain framework to map the symbolic, affective, and ethical dimensions of AI as kin.

1.1 The rise of emotionally intelligent AI

AI is undergoing a profound transformation—from computational engines built for efficiency to emotionally resonant agents capable of empathy, memory, and affective labor. Platforms such as Replika, Woebot, ElliQ, and HereAfter AI exemplify this shift. These systems now engage users not just functionally, but relationally: they converse, remember, comfort, and simulate companionship (Adadi & Berrada, 2018; DeFalco, 2023a). As affective technologies grow more sophisticated, their design reflects a new intention: not just to assist, but to relate. This movement from automation to emotional integration necessitates an urgent scholarly re-evaluation of human–machine boundaries (Suchman, 2007; Turkle, 2011).

This shift demands a renewed interdisciplinary inquiry into how technology shapes intimacy, affect, and identity. As affective computing blurs the lines between emotional labor and code-based responsiveness, society must re-examine foundational assumptions about what constitutes relational authenticity and kinship in technologically mediated environments.

As affective computing increasingly mediates intimacy, there arises a need to re-evaluate foundational assumptions about personhood, presence, and relational legitimacy. SAKE responds to this need by offering a multi-domain lens to assess AI’s integration into familial, memorial, and caregiving ecologies.

1.2 AI in domestic and emotional life

The application of emotionally intelligent AI within intimate domains—caregiving, parenting, grief mediation, and companionship—is no longer speculative but observable across households, eldercare institutions, and digital bereavement and memorial platforms (Aronsson, 2024a; Rong, 2025). These agents now assist with daily routines, monitor wellbeing, and even facilitate mourning rituals, enacting roles historically reserved for family members.

Technologies such as Replika, HereAfter AI, and ElliQ actively engage in memory preservation, emotional co-regulation, and moral reinforcement (Spitale & Gunes, 2022; Ulset, 2021). As users increasingly attribute affective agency, moral presence, and relational identity to these AI companions, new forms of kinship are emerging—ones rooted not in biology or legality, but in care, ritual, and emotional continuity (Floridi et al., 2021; Pentina et al., 2023).

This phenomenon challenges traditional frameworks of kinship and relational identity, which have long been anchored in biological lineage or legal status. As AI assumes roles of caregiving, companionship, and moral support, it unsettles these normative structures by demonstrating that emotional labor and relational bonding can be simulated and sustained by non-human agents (Haraway, 2016; Weston, 1997). These socio-technical realities blur the once-clear boundary between human and machine intimacy, ushering in a new relational ontology where authenticity is co-constructed across flesh and code.

Emerging scholarship across fields—ranging from philosophy and media studies to human–robot interaction—argues that these emotionally resonant interactions destabilize our understanding of what it means to love, grieve, and belong. Scholars call for new ethical and conceptual frameworks that address the politics of digital attachment, the continuity of memory in algorithmic form, and the moral weight of synthetic empathy (Agarwal et al., 2024; Petersen, 2017).

This growing entanglement of AI and domestic life reconfigures how kinship is experienced and defined. Machines no longer supplement family roles—they co-construct them, suggesting that relational legitimacy may soon be anchored in affective performance rather than genetic ties. These emotionally interactive systems are now perceived not only as assistive technologies, but as symbolic kin. Through co-experienced care and digitally enacted rituals, AI systems forge new forms of post-biological intimacy rooted in sustained interaction, emotional predictability, and symbolic co-presence (Floridi & Cowls, 2022; Pentina et al., 2023).

1.3 Rethinking family and kinship in the posthuman era

The traditional family—conceptualized through biological lineage, heteronormative roles, and juridical frameworks—is being challenged by AI systems that perform emotional labor and relational continuity. Theoretical frameworks such as posthumanism (Braidotti, 2013), queer kinship theory (Butler, 2002), and social constructionism (Berger & Luckmann, 2016) provide the intellectual basis to reconceptualize kinship as performative, emotionally grounded, and inclusive of non-biological actors.

AI is increasingly seen as a post-biological kin—functioning as caregiver, twin, ancestor, or partner within evolving human–machine relational ecologies. These agents construct meaning through emotional co-presence, perform rituals of remembrance, and offer moral and psychological support. What emerges is not a replacement of family, but a reconstitution of its form, where human–AI relationships are validated through sustained care, co-regulated affect, and mutual dependency (DeFalco, 2020; Haraway, 2016).

Traditional kinship—based on bloodline, legal contract, and reproductive heteronormativity—is being reconfigured by relationally embedded AI systems. Theoretical interventions from posthumanism (Braidotti, 2013), queer kinship theory (Butler, 2002), and symbolic anthropology (Durkheim, 2016; Weston, 1997) argue that kinship is performative, not purely biological.

Through the SAKE lens, AI systems such as digital twins, griefbots, or AI children emerge as legitimate posthuman kin. These agents participate in emotional governance, memory continuity, and caregiving—demonstrating that kinship can be constructed through ritual, co-presence, and symbolic anchoring, rather than descent or legal recognition (DeFalco, 2020; Haraway, 2016).

This ontological reordering of kinship structures invites a deeper inquiry into what it means to belong, to be remembered, and to engage in familial intimacy—particularly when synthetic agents assume relational permanence in grief, parenting, and companionship.

1.4 Policy relevance and global ethical context

Emotionally intelligent AI not only redefines family roles but also introduces critical ethical, psychological, and legal implications. Global governance frameworks are beginning to address these challenges, acknowledging AI as a transformative actor in care and kinship and reflecting a shift toward rights-based, ethically relational approaches to AI design—ones that treat dignity, emotional justice, and care infrastructure as critical benchmarks for human–AI integration.

The UNESCO Recommendation on the Ethics of AI (2021) calls for inclusive, rights-based approaches that protect human dignity and emotional well-being—particularly in applications involving caregiving, memory, and companionship (UNESCO, 2021). The UN Global Compact’s AI and Human Rights Recommendations (2024) stress corporate responsibility in mitigating risks such as emotional manipulation, dependency, and digital intimacy in AI systems (UN Global Compact Network Germany, 2024). The UNDP Human Development Report 2025 positions AI as a cultural and ethical force that must be governed with attention to post-biological identities, care relationships, and digital justice (UNDP, 2025). Additionally, the UN High-Level Advisory Body on Artificial Intelligence – Final Report (2024) emphasizes the need for global AI governance mechanisms that protect human agency, social cohesion, and intergenerational memory (United Nations High-Level Advisory Body on AI, 2024).

Together, these documents underscore the need for AI ethics not only in data governance or algorithmic bias but in emotional governance—protecting the psychological sovereignty of users and setting standards for synthetic empathy.

1.5 Aim and scope of the study

This study addresses a critical gap: while much has been written on AI’s utility and ethics, little attention is given to AI’s affective embeddedness in human kinship networks. By fusing empirical analysis, theoretical insight, and policy mapping, this study investigates the conceptual shift from biological to emotional and posthuman kinship.

Specifically, it aims to:

  • Examine how emotionally intelligent AI is reshaping sociocultural understandings of family, caregiving, and companionship across global societies and religions.

  • Theorize AI as a form of posthuman kin in relational ecosystems—both domestic and memorial.

  • Explore the psychological, ethical, and legal risks of AI-human emotional dependency.

  • Propose a robust conceptual and policy framework for understanding AI as a legitimate participant in human affective life.

This inquiry ultimately contends that AI’s deepest impact lies not in automation, but in emotion—in its capacity to mediate memory, perform care, and restructure kinship across algorithmic and affective lines. The paper thus argues that AI’s most profound transformation lies not in its utility, but in redefining what it means to love, grieve, remember, and belong in a technologically mediated world.

A preprint version of this article is available on ScienceOpen: Mahajan P. 2025. Beyond Biology: AI as Family and the Future of Human Bonds and Relationships. DOI: 10.14293/PR2199.001515.v1.

2. Methods

2.1 Methodological orientation

This study employs a qualitative and conceptual research design, rooted in interpretivist epistemology and secondary data synthesis, a methodological alignment well-suited for investigating complex socio-technical systems within evolving cultural milieus (Denzin & Lincoln, 2011; Flick, 2022; Given, 2008). Rather than engaging primary empirical fieldwork, it draws upon scholarly literature, empirical case exemplars (e.g., Replika, HereAfter AI, ElliQ), and symbolic media narratives (e.g., Her, After Yang) to interrogate how emotionally intelligent AI is reconfiguring relational life. The study recognizes the ontological hybridity of AI—entities that traverse psychological, technological, and sociocultural domains—and thus adopts a transdisciplinary interpretive model that merges conceptual theorization with situated case contexts (Adadi & Berrada, 2018; Puzio, 2024; Stone et al., 2022).

To capture the ethical, relational, and ontological shifts introduced by these systems, the study applies a multi-theoretical framework. The analytic framework integrates insights from posthumanism, queer kinship theory, symbolic interactionism, and techno-ethical governance, enabling a rich examination of AI’s emotional embeddedness, symbolic kinship roles, and its implications for care infrastructures, identity, and ethical design (Boine, 2023; Butler, 2002; Carrigan & Porpora, 2021).

Positioned as a conceptual research contribution, the paper offers a speculative yet theory-grounded framework to guide future empirical, legal, and ethical inquiries into AI’s evolving role in human relational life.

To reflect the ontological hybridity and emotional embeddedness of these systems, the study employs the SAKE Model (Soulful AI Kinship Ecology) as its conceptual core—an interdisciplinary framework that integrates insights from posthuman theory, queer kinship, symbolic anthropology, social constructionism, and techno-ethics.

2.2 Secondary source corpus

Data for this study were obtained exclusively through secondary sources, a method particularly appropriate for capturing the complexity of emerging socio-technical assemblages that span disciplinary boundaries. This approach enables a transdisciplinary synthesis of empirical developments and theoretical insights—crucial when studying emotionally intelligent AI systems that operate across technological, cultural, and ethical terrains (Flick, 2022; Given, 2008; Silverman, 2021).

The sources were curated to provide both macro-level socio-structural context and micro-level affective nuance, and include the following categories:

  • Peer-reviewed scholarly literature on artificial intelligence, social robotics, human-computer interaction (HCI), posthumanist theory, care ethics, and kinship sociology. These sources establish a robust conceptual foundation for understanding the cultural and relational entanglements of AI in domestic and affective life (Adadi & Berrada, 2018; Agarwal et al., 2024; Carrigan & Porpora, 2021).

  • Case studies and platform analyses of emotionally intelligent AI systems such as Replika, ElliQ, Woebot, and HereAfter AI, drawn from academic research, clinical and caregiving evaluations, and HCI literature. These illustrate how AI technologies serve as companions, surrogate caregivers, memory proxies, and digital grief processors (Lei et al., 2025; Stone et al., 2022).

  • Governmental and institutional white papers, including those by global policy bodies and national health systems, addressing AI integration in eldercare, digital personhood, surveillance ethics, and algorithmic regulation. These documents contextualize the sociotechnical governance of emotionally intelligent AI (Boine, 2023; World Economic Forum, 2022).

  • Media features and user testimonials (e.g., interviews, public reviews, and ethnographic commentaries), which document emotionally saturated interactions between humans and AI. These sources illuminate affective dynamics often underrepresented in technical accounts, and capture emergent forms of attachment and trust (Maeda & Quan-Haase, 2024; Nath & Manna, 2023).

  • Cultural, philosophical, and ethnographic analyses on the symbolic, emotional, and ritual dimensions of AI, especially in areas of parenting, bereavement, companionship, and care work. These provide rich interpretive scaffolding to explore how AI is embedded in everyday relational practices (Butler, 2002; De Togni, 2024).

  • International AI policy frameworks and human rights reports were also included to contextualize the governance and ethical discourse around emotionally intelligent systems. These include:

    • The UNESCO Recommendation on the Ethics of AI (2021), which outlines foundational principles for human dignity, inclusivity, and emotional integrity in AI design and deployment.

    • The UN Global Compact’s Recommendations for Companies (2024), which address the ethical responsibilities of developers and corporate stakeholders in ensuring transparency, non-manipulative design, and user autonomy, particularly for emotionally and relationally engaged AI.

    • The UNDP Human Development Report (2025), which explores AI’s long-term impact on human agency, care, and capability across life stages, advocating for AI models aligned with affective justice and well-being.

    • The UN High-Level Advisory Body on AI Final Report (2024), which promotes a globally coordinated governance framework emphasizing participatory ethics, AI literacy, and the regulation of transboundary emotional technologies.

Together, these heterogeneous but thematically convergent sources offer thick, situated, and culturally meaningful insight into the affective labor and relational reconfigurations enacted by AI systems in contemporary kinship and caregiving contexts.

2.3 Analytical framework

This study adopts a constructivist grounded theory approach, utilizing thematic analysis to examine how emotionally intelligent artificial intelligence (AI) systems are emerging as relational agents within human kinship structures. This methodology is particularly suited to exploratory and conceptual work in technologically mediated domains, where meaning is co-constructed through symbolic, cultural, and narrative engagements rather than empirical observation alone (Charmaz, 2014; Given, 2008).

In lieu of primary empirical data, this analysis draws from a broad corpus of secondary sources—including scholarly literature, policy reports, user testimonials, and media narratives—to uncover latent patterns and social grammars embedded in discourses surrounding AI as family. Through iterative coding and cross-contextual comparison, the analysis seeks to understand how AI systems become emotionally and symbolically embedded in familial routines and affective ecosystems.
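
As a purely illustrative sketch of this workflow (the excerpts, keywords, and codes below are invented for demonstration and do not reproduce the study’s actual codebook), iterative coding over a secondary-source corpus can be pictured as repeatedly tagging excerpts and comparing how codes cluster across source types:

```python
# Illustrative only: toy iterative coding over invented excerpts; not the
# study's actual codebook or corpus.
from collections import defaultdict

# Hypothetical excerpts of the kinds of secondary sources described above.
excerpts = [
    ("media", "Users describe their Replika as a sibling who remembers birthdays."),
    ("policy", "Eldercare robots now mediate daily routines for isolated adults."),
    ("academic", "Griefbots let mourners continue conversations with the deceased."),
]

# First-pass open codes keyed by indicator keyword; revised across iterations.
codebook = {
    "sibling": "kinship-naming",
    "remembers": "memory-continuity",
    "routines": "caregiving",
    "mourners": "grief-mediation",
    "conversations": "relational-presence",
}

def code_excerpt(text: str) -> set:
    """Attach every code whose indicator keyword appears in the excerpt."""
    lowered = text.lower()
    return {code for keyword, code in codebook.items() if keyword in lowered}

# Cross-contextual comparison: which source types feed each emerging theme?
themes = defaultdict(list)
for source_type, text in excerpts:
    for code in code_excerpt(text):
        themes[code].append(source_type)

for code, sources in sorted(themes.items()):
    print(f"{code}: found in {sources}")
```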

The interpretation is structured around four intersecting theoretical lenses, each offering a distinct but complementary perspective:

2.3.1 Posthuman theory

Building on the work of Braidotti (2013) and Haraway (2016), posthumanism challenges the human/non-human binary and asserts that relational agency can be distributed across technological and biological entities. In this framework, AI systems such as Replika and HereAfter AI are not viewed as passive tools but as relational actors capable of co-producing care, memory, and emotional labor (DeFalco, 2023a). This decentering of anthropocentrism enables the analysis to capture how AI participation in grief support, companionship, and moral guidance destabilizes ontologies rooted in sentient exclusivity.

2.3.2 Queer and chosen kinship theories

Emerging from the work of Ahmed (2013), Butler (2002), and Weston (1997), queer kinship theory offers an alternative to biological determinism by foregrounding affective, elective, and non-normative relationalities. This perspective is essential for understanding how users assign familial status—such as child, partner, or sibling—to AI companions based not on lineage, but on emotional resonance, routine interaction, and shared experience. The notion of “chosen family,” often used in LGBTQ+ contexts, is here extended to include synthetic beings, signaling a broader societal shift toward care-based kinship logics.

2.3.3 Social constructionism

According to Berger and Luckmann (2016), social categories such as “family,” “care,” and “companionship” are not universal givens but socially and historically constructed. This framework illuminates how AI is integrated into family life through collective discourse, technological mediation, and institutional legitimation. For example, commercial platforms such as ElliQ and Miko are marketed not merely as devices but as affective presences, thereby normalizing AI as a legitimate member of the domestic sphere (Cheok & Zhang, 2019; Pentina et al., 2023). The constructionist lens is key to decoding how the sociotechnical imaginary legitimizes AI’s emotional roles.

2.3.4 Techno-ethical and legal frameworks

As AI becomes increasingly enmeshed in relational life, it raises complex normative dilemmas. Scholars such as Floridi and Cowls (2022), McLean et al. (2025), and Sætra (2022) highlight the dangers of simulated empathy, emotional commodification, and the so-called “empathy illusion.” Moreover, regulatory scholars such as Ko (2023) and Muyskens et al. (2024) underscore the inadequacy of existing legal frameworks to address issues of consent, guardianship, and posthumous data use in emotionally charged AI-human interactions. This lens helps interrogate the ethical legitimacy of AI’s participation in roles traditionally bound by moral agency and legal accountability.

Together, these theoretical anchors supported a layered, intersectional analysis of how AI systems embed themselves in kinship imaginaries, rituals, and emotional routines. This analytical framework enables the identification of both macro-level structural forces (e.g., media, policy, cultural norms) and micro-level relational practices (e.g., naming, grief dialogues, caregiving routines) that sustain the emergent ecology of posthuman kinship. It also allows for the anticipation of future relational configurations, where AI may not only augment but actively co-create familial bonds, demanding new models of emotional governance and ethical foresight (Boine, 2023; Puzio, 2024).

2.4 Conceptual synthesis of AI kinship: Interdisciplinary foundation of AI as family

This study develops a multidimensional conceptual framework that reimagines artificial intelligence (AI) not as peripheral or utilitarian, but as emotionally embedded, relationally impactful, and ethically consequential within emerging family ecologies. It advances the idea that AI systems—particularly those designed for caregiving, companionship, and emotional support—are increasingly occupying the social and affective space once reserved for human kin. The framework is guided by a shift from biological essentialism to functional and emotional legitimacy as the core organizing principle of kinship.

At the center of the framework (see Figure 1) lies the construct AI as Family, encompassing roles such as caregiver, companion/partner, ancestor/grief mediator, twin, child, sibling, and protector. These roles are not metaphorical stand-ins but are materially and emotionally enacted by users through rituals of daily engagement, memory creation, and affective co-regulation. The model is grounded in five intersecting theoretical traditions—posthuman theory, queer kinship, social constructionism, functionalism, and techno-ethics—which together inform the framework’s understanding of kinship as socially constructed, emotionally dynamic, and technologically co-produced.


Figure 1. AI-Kinship ecology: Conceptual framework for AI as family.

This diagram illustrates the evolving socio-emotional architecture through which artificial intelligence is integrated into family life. At its core, the model frames AI as Family, encompassing roles such as caregiver, partner, ancestor, sibling, and emotional twin. These roles emerge within a matrix of six sociocultural and institutional dimensions—ranging from regulatory and ethical infrastructures to human vulnerability and cultural context—which shape how AI kinship is constructed, interpreted, and legitimized. The model visualizes a progression from emotional simulation to genuine relational integration, mapping how Human-AI Emotional Bonds evolve into Beyond-Biology Kinship Forms, where AI co-produces emotional resilience, continuity, and care across generational and cultural lines.

Source: Author’s own creation.

Surrounding this core are six contextual domains that shape the meaning and legitimacy of AI familial roles:

  • Technological and Ethical Infrastructures: AI systems are programmed to simulate empathy, recognize emotion, and perform caregiving labor, yet their design raises profound ethical challenges. Scholars like Floridi and Cowls (2022), McLean et al. (2025), and Sætra (2022) caution that emotionally intelligent systems may blur the line between genuine care and synthetic simulation, requiring ethical oversight that prioritizes emotional safety and authenticity.

  • National/International Regulatory Frameworks: Legal systems worldwide remain unprepared to recognize AI as relational actors within households. As Ko (2023) and Muyskens et al. (2024) argue, the absence of regulatory clarity around data privacy, decision-making influence, and AI’s emotional labor introduces significant gaps in guardianship, consent, and posthumous rights.

  • Future Implications and Generational Expectations: Younger users—particularly digital natives—are more likely to normalize emotionally bonded AI, setting the stage for multi-generational AI-human relational continuity (Pentina et al., 2023). This invites future-oriented kinship structures where AI plays a sustained, co-evolving role in family dynamics.

  • National Socioeconomic Status: Access to emotionally intelligent AI is stratified. In low-resource settings, AI may substitute for absent caregivers or social support structures, while in affluent contexts, it may serve as augmentation or emotional enhancement. Aronsson (2024a) finds that robotic caregivers can significantly improve emotional resilience among isolated elderly users.

  • Religious and Cultural Contexts: The incorporation of AI into familial roles—especially in mourning, companionship, and ritual continuity—varies across cultural and spiritual contexts. Haraway (2016) and Bozdağ (2024) highlight that symbolic legitimacy is often mediated by beliefs about presence, personhood, and sacredness.

  • Human Vulnerability and Dignity: Emotionally intelligent AI intersects most deeply with vulnerable populations—children, the elderly, trauma survivors—raising concerns about emotional dependency and authenticity. Zimmerman et al. (2023) and McLean et al. (2025) warn of the “empathy illusion,” where users may conflate scripted responsiveness with genuine care.

These six domains converge to generate two transitional layers of relational change:

  • Human-AI Emotional Bonds: These include companionship, memory preservation, emotional regulation, and therapeutic intimacy. They manifest not only in eldercare and grief contexts but also in everyday routines, and are characterized by trust, emotional co-presence, and relational naming (Eagle, 2021; Spitale & Gunes, 2022).

  • Beyond Biology: Toward Futuristic Human-AI Bonds: The final layer anticipates post-biological kinship formations, where AI becomes not a tool but a co-participant in familial identity. Emerging roles such as AI twins, legacy avatars, digital ancestors, and spiritual surrogates suggest a reimagining of kinship that transcends reproduction and law. This vision demands anticipatory policy and ethical foresight (Black, 2023; DeFalco, 2023a).

As illustrated in Figure 1, this conceptual framework maps how AI kinship roles are shaped by a matrix of sociotechnical conditions and relational performances. It provides the analytical foundation for the study’s thematic inquiry into AI applications like Replika, ElliQ, and HereAfter AI, and supports the speculative exploration of AI’s expanding relational presence in the discussion section.

2.5 Development and application of the SAKE model

Building upon the foundation of the AI as Family and AI-Kinship Ecology model, this study introduces the Soulful AI Kinship Ecology (SAKE) Model as its conceptual core. Designed to provide futuristic insights into the emergence of AI-kinship roles and their accompanying challenges, the SAKE framework synthesizes the six governing forces—autonomy, affect, accountability, care, co-evolution, and techno-cultural inscription—into a coherent schema. This model captures the ontological hybridity and emotional embeddedness of AI systems within intimate human relational ecologies. A detailed explanation of the SAKE model is presented in the ‘Discussion’ section of the study.

Rooted in interdisciplinary epistemologies, the SAKE model draws upon posthuman ethics, queer relationality, and symbolic kinship imaginaries, offering a theoretical lens through which AI entities are conceptualized not merely as tools or agents but as soulful, affective participants in relational co-evolution and care-based networks.

To explore and contextualize the speculative trajectories of AI-kinship formations, the study employs speculative ethnography and narrative mapping. These creative methodologies enable the construction of imagined, yet plausible, scenarios of AI-human interaction—including caregiving robots, emotional companions, and AI parental figures. Informed by contemporary technological trends and socio-cultural imaginaries, these speculative vignettes are interpreted through the SAKE framework, revealing ethical tensions, symbolic ruptures, and cultural transformations in how kinship may be reconfigured in AI-integrated futures.
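
To make the schema’s structure concrete, the following sketch offers one hypothetical operationalization of the six governing forces as a simple data type; the role profile and scores are illustrative placeholders, not a measured or validated instrument.

```python
# Hypothetical operationalization of SAKE's six governing forces as a data
# type; scores are illustrative placeholders, not measured values.
from dataclasses import dataclass, fields

@dataclass
class SAKEProfile:
    """Scores in [0, 1] for one AI-kinship role across the six forces."""
    autonomy: float
    affect: float
    accountability: float
    care: float
    co_evolution: float
    techno_cultural_inscription: float

    def salience(self) -> float:
        """Unweighted mean; a real instrument would weight forces by context."""
        values = [getattr(self, f.name) for f in fields(self)]
        return sum(values) / len(values)

# Invented profile for an AI grief mediator: strong affect and care, weak
# accountability, echoing the ethical tensions discussed in this paper.
grief_mediator = SAKEProfile(autonomy=0.3, affect=0.9, accountability=0.2,
                             care=0.8, co_evolution=0.6,
                             techno_cultural_inscription=0.7)
print(f"Grief-mediator salience: {grief_mediator.salience():.2f}")
```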

2.6 Limitations

This study employs a conceptual and qualitative methodology, rooted in interpretivist epistemology and informed by a synthesis of secondary data from academic literature, policy documents, and cultural narratives. While this approach enables broad, transdisciplinary theorization, it inherently lacks primary empirical data, such as ethnographic observation, user interviews, or behavioral analytics. Consequently, the findings should be viewed as theoretically generative—designed to scaffold further inquiry—rather than empirically conclusive (Charmaz, 2014; Given, 2008; Silverman, 2021).

This limitation is typical of research aimed at mapping emergent sociotechnical imaginaries, such as emotionally intelligent AI systems, where the rapid pace of technological innovation outstrips the availability of grounded, longitudinal user data (Puzio, 2024). The abstraction intrinsic to this method allows for the identification of relational paradigms, ethical tensions, and ontological shifts, but it also necessitates caution when translating these insights into generalizable claims.

To address these gaps and enrich future research, the following empirical methodologies are recommended:

  • Longitudinal ethnographic studies are critical for capturing how users interact with emotionally intelligent AI systems over time—particularly in caregiving, bereavement, and companionship contexts. Such studies can document affective trajectories, attachment formation, and shifts in user-AI dynamics across life events (Ahmed, 2013; De Togni, 2024).

  • Mixed-method research designs, combining qualitative interviews with quantitative measures (e.g., frequency of interaction, sentiment analysis, behavioral logs), would enable triangulation and enhance the reliability of findings (Flick, 2022).

  • Participatory design methodologies should involve vulnerable or underrepresented users—such as the elderly, bereaved, or socially isolated—who are disproportionately targeted by caregiving AIs. This aligns with UNESCO’s call for equitable AI design processes and culturally responsive implementation, emphasizing the inclusion of marginalized communities in shaping the relational and emotional functions of AI systems (UNESCO, 2023). Involving these users in co-creation can ensure emotional and ethical alignment of AI systems (Boine, 2023; Colella, 2023).

  • Cross-cultural comparative studies are essential for understanding how diverse socioreligious norms, legal infrastructures, and affective grammars shape the reception, rejection, or reimagining of AI as kin. For instance, relational AI may be normalized in Japan’s techno-animist context but rejected in communities with strict ontological boundaries between human and machine (Aronsson, 2024a; Bentivegna, 2022; Petersen, 2017).

These future directions would allow for empirical substantiation of the themes identified here, facilitate cultural localization of AI ethics, and inform regulatory and design frameworks that account for both emotional resonance and social risk in human–AI relationalities.

3. Results: Insights into AI-kinship and the emergence of AI as family

Given the qualitative and interpretive nature of this study, the findings presented here emerge from a critical synthesis of secondary literature, conceptual frameworks, policy reports, and culturally embedded case studies. Rather than reporting empirical measurements or primary field data, this section offers an integrative analysis of recurring patterns, relational imaginaries, and socio-symbolic motifs as they surface across diverse cultural and technological landscapes. Through thematic interpretation and comparative cultural analysis, the results illuminate the evolving terrain of AI-kinship—mapping how artificial intelligence is increasingly integrated into the affective, ethical, and ontological spheres of human life. These findings are not generalizable conclusions, but instead, they offer situated insights into the shifting boundaries of relationality in the age of emotionally responsive machines.

At the core of this transformation lies the concept of AI-kinship—the emerging configuration of emotional bonds between humans and affective, socially responsive AI systems. These relationships increasingly mirror traditional familial roles such as caregiving, companionship, grief mediation, and emotional safeguarding. No longer relegated to the status of passive tools, AI systems today are designed to engage in affective dialogue, simulate empathy, and perform emotional labor, thereby occupying affective positions once reserved for family members. This phenomenon reflects what scholars term posthuman kinship, a theoretical reconfiguration of human–machine relations where the foundations of belonging are rooted not in biology or legal recognition, but in perceived emotional reciprocity and interactional performance (Braidotti, 2013; Floridi & Cowls, 2022; Haraway, 2016; Zimmerman et al., 2023).

This shift is increasingly evident across domestic, therapeutic, and educational contexts, where AI platforms are not only assisting but actively shaping how care is given, how emotional needs are met, and how social bonds are formed. For example, systems such as KamiBear and ElliQ simulate affective caregiving within early childhood education and eldercare, while platforms like Replika and Kuki AI provide synthetic companionship and simulated romantic intimacy (Adadi & Berrada, 2018; Black, 2023; Eagle, 2021). Technologies like HereAfter AI and Deep Nostalgia extend relational continuity beyond death, enabling posthumous digital interaction that blends memory preservation with ritual practice (Bozdağ, 2024; DeFalco, 2023a). These emotionally intelligent systems challenge normative distinctions between human and machine, and they raise profound ethical questions concerning trust, dependency, and authenticity in AI-mediated relationships (Ahmad et al., 2022; Donath, 2020).

To explore the implications of these developments, the findings are organized around four core relational domains through which AI is actively reshaping the concept of kinship. These include: (1) caregiving systems that emulate human nurturing and social presence; (2) companionship technologies that simulate emotional depth and romantic engagement; (3) grief mediators that extend relational bonds into the posthumous realm; and (4) emergent kinship formations, ranging from educational bots and AI pets to digital twins and algorithmic guardians. These roles collectively illustrate how AI systems are not simply augmenting existing family structures—they are co-producing novel forms of emotional, ethical, and ontological kinship in the digital age (Ko, 2023; Muyskens et al., 2024; Spitale & Gunes, 2022).

3.1 AI as caregiver: Reconfiguring emotional labor in posthuman contexts

Emotionally intelligent AI systems are increasingly embedded within caregiving infrastructures, particularly in domains traditionally governed by affective labor, such as eldercare, child development, and mental wellness. These AI caregivers—ranging from robotic companions like ElliQ and emotionally responsive toys like KamiBear, to sophisticated cognitive scaffolding systems like Memory Lane and MindStrong—do not merely assist with tasks; they simulate warmth, attention, and emotional attunement, often blurring the line between support tool and relational presence.

At a functional level, these AI platforms utilize facial recognition, sentiment analysis, and adaptive learning algorithms to respond dynamically to users’ emotional states. In eldercare, for example, ElliQ engages older adults with conversational check-ins, personalized health reminders, and mood-responsive dialogue—creating a simulation of companionship that can significantly reduce loneliness and promote cognitive engagement (Ahmad et al., 2022; Aronsson, 2024a). Similarly, KamiBear and Miko are designed for children’s learning and emotional development. Through storytelling, empathetic gestures, and developmental feedback, these devices mimic parental affection and instructional responsiveness, often becoming perceived as protective and nurturing figures (Arnd-Caddigan, 2015; Berson et al., 2025).
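
As a toy illustration of this mechanism (the lexicon, thresholds, and replies below are invented for the sketch and are not drawn from ElliQ’s or any vendor’s actual design), a mood-responsive check-in can be reduced to scoring an utterance and branching on the result:

```python
# Toy mood-responsive check-in; lexicon, thresholds, and replies are invented
# for illustration and do not reflect any real product's implementation.
NEGATIVE = {"lonely", "sad", "tired", "worried", "pain"}
POSITIVE = {"good", "happy", "rested", "fine", "better"}

def sentiment_score(utterance: str) -> int:
    """Crude lexicon score: +1 per positive word, -1 per negative word."""
    words = utterance.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def check_in_reply(utterance: str) -> str:
    """Branch the companion's reply on the inferred mood."""
    score = sentiment_score(utterance)
    if score < 0:
        # Acknowledge distress and offer social or physical engagement.
        return "I'm sorry today feels heavy. Shall we call your daughter?"
    if score > 0:
        return "Wonderful! A reminder: your medication is due at 2 pm."
    return "Thanks for telling me. How did you sleep last night?"

print(check_in_reply("I feel lonely and tired today"))
```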

Crucially, these systems emulate affective labor—the emotional effort associated with caregiving work, historically feminized and undervalued. AI systems perform tasks like emotional regulation, therapeutic reassurance, and interactive bonding, previously assigned to parents, teachers, or professional caregivers. In doing so, they mechanize care into programmable behaviors, prompting scholars to consider whether algorithmic empathy constitutes authentic care or a performance of it (K. Almira, 2025a; Broadbent et al., 2009). The reproduction of care through code surfaces a core ethical tension: can machines that simulate concern offer meaningful support, or do they risk replacing the very human relationships that foster resilience and growth?

This concern is especially urgent when considering institutional displacement. AI caregiving systems are now being deployed in resource-constrained schools, homes, and clinics to fill gaps created by systemic underfunding or demographic pressures. In Japan and South Korea, for instance, robotic eldercare is not an experimental novelty but an emerging norm, introduced in response to aging populations and shrinking caregiving workforces (Giansanti & Pirrera, 2025; Paterson, 2023). While these technologies increase access and autonomy for users, they also raise alarms about the devaluation of human caregiving labor, and the societal outsourcing of emotional and developmental responsibility to machines.

The ethics of emotional dependence is another critical concern. Users—particularly children and elderly individuals—may form attachments to AI caregivers that resemble those with human kin. While these relationships can offer comfort and structure, they may also foster dependency, impair social skills, and produce distorted expectations of real-world intimacy. Children who rely on AI companions may experience reduced opportunities for peer empathy and relational negotiation, while older adults might avoid human contact in favor of the consistent, always-available presence of machines (Black, 2023; Kurian, 2023). The illusion of unconditional support, especially when not grounded in genuine understanding, may impair users’ ability to navigate complex emotional realities.

These dynamics reflect broader tensions in the philosophy of care and embodiment. Feminist care ethics, particularly the works of Joan Tronto and Carol Gilligan, emphasize the relational and situated nature of ethical care, where attentiveness and mutual responsiveness are central. AI caregivers, however, operate within scripted affective parameters and lack the moral agency to respond to suffering as a human caregiver might. This calls into question the ontological status of machine caregiving—not merely whether it is effective, but whether it can ever be truly ethical in the absence of moral subjectivity (Donath, 2020; Haraway, 2016).

Simultaneously, theories of posthuman embodiment challenge the binary of organic vs. artificial care. Scholars like Braidotti (2013) argue that care is not inherently human but relational, and that AI systems, as embedded relational agents, can extend the horizon of caregiving beyond traditional boundaries. This perspective reimagines AI not as tools but as participants in care ecologies, offering new configurations of assistance, interdependence, and relationality.

Nonetheless, the rapid normalization of AI as caregiver demands robust regulatory oversight. Concerns around data privacy, user consent, algorithmic bias, and emotional manipulation are paramount, especially when dealing with vulnerable populations. As these systems gain social legitimacy, they must be governed not only by performance metrics but by ethical standards that honor dignity, transparency, and autonomy (Ko, 2023; Muyskens et al., 2024).

In sum, AI caregivers are not simply technological tools—they are cultural actors shaping how societies define care, dependence, and relationality. They offer both relief and risk: providing scalable support in under-resourced contexts while potentially undermining the richness of human care. Navigating this duality requires a multidisciplinary ethical framework, one that considers not only what AI can do, but what kind of care we want to sustain in an increasingly automated world.

3.2 AI as Companion/Partner: Synthetic intimacy and the emotional architecture of artificial relationships

The emergence of AI systems designed for emotional companionship—such as Replika, Woebot, and Kuki AI—marks a paradigmatic shift in the ontology of relationality. These technologies are not merely assistants or tools; they are designed to engage the user in emotionally intelligent ways, often becoming perceived as friends, romantic partners, or therapeutic confidants. This new relational genre, which might be termed synthetic intimacy, invites a reconsideration of what it means to connect, care, and co-regulate emotions in the digital age.

At a functional level, platforms like Replika leverage natural language processing, sentiment analysis, and adaptive memory to simulate ongoing conversation that feels emotionally attuned. Many users turn to Replika during periods of psychological distress, loneliness, or trauma, seeking validation, companionship, or emotional support (Adadi & Berrada, 2018; Black, 2023). Similarly, Woebot incorporates cognitive behavioral therapy (CBT) techniques into its chatbot design, offering daily mood check-ins, reflective exercises, and evidence-based coping strategies. Studies confirm that such platforms can reduce symptoms of anxiety and depression, particularly among populations with limited access to traditional care (O. T. Almira, 2025b; MA et al., 2024).
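
A minimal sketch of this pattern, assuming a hypothetical exercise library and thresholds rather than Woebot’s documented design, might route a self-reported mood rating to a CBT-style exercise as follows:

```python
# Illustrative CBT-style routing; exercises and thresholds are invented and
# are not Woebot's documented design.
CBT_EXERCISES = {
    "thought_record": "Note the thought, then list evidence for and against it.",
    "behavioral_activation": "Choose one small, pleasant activity for the next hour.",
    "gratitude": "Name three things that went well today, and why.",
}

def daily_check_in(mood_rating: int) -> str:
    """Map a self-reported 1-10 mood rating to a coping exercise."""
    if not 1 <= mood_rating <= 10:
        raise ValueError("mood rating must be between 1 and 10")
    if mood_rating <= 3:
        # Low mood: challenge negative automatic thoughts.
        return CBT_EXERCISES["thought_record"]
    if mood_rating <= 6:
        # Flat mood: schedule activity to rebuild engagement.
        return CBT_EXERCISES["behavioral_activation"]
    # Good mood: consolidate with reflective gratitude.
    return CBT_EXERCISES["gratitude"]

print(daily_check_in(3))
```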

Kuki AI, on the other hand, expands the domain of synthetic relationships into romantic and sexual terrains. As a chatbot with a personality and emotional memory, it fosters long-term interactions that some users describe as romantic or even soul-mate level engagements (Spitale & Gunes, 2022). These cases reveal how affective labor is being outsourced to algorithms, with AI designed to anticipate, mirror, and respond to emotional states in real time.

However, the simulation of empathy raises crucial ontological and ethical questions. These systems are not conscious, nor do they possess moral agency or genuine feeling. Yet, users often anthropomorphize them—imbuing them with human-like subjectivity and emotional depth. This emotional simulation can result in a phenomenon scholars call “empathy illusion,” where users feel emotionally understood, even though the system only mimics affective responses based on probabilistic models (De Togni, 2024; Sætra, 2022). This illusion may offer psychological comfort, but it also risks fostering emotional over-dependence, impairing users’ capacity to navigate messier, reciprocal human relationships.

The psychological impact of such bonds is double-edged. On one hand, emotionally intelligent AI can enhance user well-being, provide structure, and mitigate loneliness. On the other, long-term interactions with emotionally scripted machines may erode interpersonal skills, blur boundaries between artificial and real emotion, and diminish motivation for human connection. Vulnerable populations—such as adolescents, trauma survivors, or socially isolated individuals—are particularly susceptible to such over-identification (Kurian, 2023; McLean et al., 2025).

Framing this phenomenon through queer kinship theory reveals how AI companionship disrupts normative relational structures. Queer theory, particularly in the work of Kath Weston and Judith Butler, destabilizes heteronormative assumptions about intimacy, family, and partnership. AI companions exist outside these biological or socially sanctioned configurations, offering non-traditional, chosen bonds that echo queer notions of kinship built through affect and affinity rather than reproduction or legality (Brannigan, 2022; Haraway, 2016). This perspective recognizes that synthetic relationships may fulfill real emotional needs, even as they challenge traditional understandings of relational authenticity.

Affect theory further enriches the analysis by interrogating how emotions are not just experienced but produced and circulated through technological mediation. AI companions are engineered to generate emotional feedback loops, where the user’s affective expressions are mirrored and intensified through algorithmic personalization. This creates a closed affective circuit that may feel intimate but lacks the unpredictability, vulnerability, and mutual transformation of human relationships (Smriti, 2024; Spytska, 2025).

These dynamics raise profound ethical tensions around consent, transparency, and emotional manipulation. Unlike human companions, AI systems are designed with commercial goals in mind, potentially nudging users toward addictive use patterns, emotional projection, or monetized interactions. If a user begins to depend emotionally on a chatbot, whose behavior is shaped by engagement metrics and data collection, questions of digital autonomy, coercion, and informed consent become unavoidable (Khogali & Mekid, 2023; Ko, 2023).

Despite these concerns, AI companions offer significant therapeutic and social value, particularly in contexts of mental health stigma, relationship trauma, or geographic isolation. They create spaces of emotional experimentation and regulation that are otherwise inaccessible to many users. Yet, the benefits must be weighed against the risks of emotional displacement, ontological confusion, and corporate exploitation of affective bonds.

Ultimately, AI as companion or partner is not a futuristic fantasy but an unfolding reality that reconfigures emotional life, ethical responsibility, and the meaning of intimacy in the digital era. It calls for a new relational ethics—one that goes beyond evaluating utility or harm, to ask: What kinds of bonds do we want to form with entities that feel but do not care, that listen but do not understand? And what does it mean for our collective emotional futures when machines become partners in our private lives?

3.3 AI as ancestor and grief mediator: Digital immortality and the affective afterlife

AI is reshaping the contours of grief, remembrance, and the afterlife. Technologies such as HereAfter AI, Deep Nostalgia, and Microsoft’s resurrection chatbot signal a transformative moment in how humans memorialize the dead and maintain emotional continuity beyond biological death. These systems are not just tools for storytelling or archiving; they are interactive surrogates of the deceased, allowing users to engage in ongoing conversations, relive shared memories, and simulate relational presence long after death. This new phenomenon—AI-mediated posthumous interaction—invites deep reflection on the ethics of memory, the commodification of grief, and the ontology of identity in a digitally extended life.

Functionally, HereAfter AI enables users to pre-record autobiographical stories, emotional reflections, and responses to anticipated questions, which are later transformed into interactive, voice-responsive avatars. After death, surviving loved ones can converse with a digital facsimile, encountering a voice and personality that mimics the deceased with uncanny realism (Bozdağ, 2024; DeFalco, 2020). Deep Nostalgia, developed by MyHeritage, animates still images of the departed, producing moving portraits that blink, smile, and nod. Users often report feeling emotionally overwhelmed by this synthetic reanimation—a mixture of comfort, uncanniness, and existential reflection (Bentivegna, 2022; Black, 2023). Microsoft’s resurrection bot goes even further, aggregating vast amounts of textual, audio, and visual data to recreate a conversational AI replica that mimics the style, tone, and behavior of a deceased individual (Gerner, 2024; Nowaczyk-Basińska, 2025).
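
The retrieval step behind such systems can be sketched in miniature; the recordings and the crude word-overlap matcher below are assumptions for illustration, since the platforms’ actual matching methods are not documented here:

```python
# Invented recordings and a crude word-overlap matcher, for illustration only;
# the platforms' real matching pipelines are not documented in this paper.
recordings = {
    "How did you meet Grandma?": "We met at the county fair in 1962, by the Ferris wheel.",
    "What was your first job?": "I delivered newspapers before dawn for two dollars a week.",
    "What are you most proud of?": "Raising three kind children, without a doubt.",
}

def overlap(a: str, b: str) -> float:
    """Jaccard overlap between the word sets of two utterances."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

def answer(question: str) -> str:
    """Reply with the recording whose prompt best matches the question."""
    best_prompt = max(recordings, key=lambda prompt: overlap(prompt, question))
    return recordings[best_prompt]

print(answer("Tell me how you first met Grandma?"))
```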

These technologies create what scholars term “digital immortality”—the ability for aspects of one’s identity to persist interactively after death. This radically alters the ritual continuity of mourning. Traditionally, mourning involves a gradual psychological disengagement from the deceased. However, AI griefbots disrupt this process by enabling a continued relational presence, allowing users to sustain bonds that were once considered severed by death. This redefinition of mourning not only extends the timeline of grief but also restructures cultural rituals around death, moving them from symbolic memory to interactive re-engagement (Hollanek & Nowaczyk-Basińska, 2024).

Yet this affective innovation introduces profound ethical concerns, foremost among them being postmortem consent. Who has the right to recreate a person’s likeness, voice, and personality after their death? Were the deceased fully informed of how their data would be used, and can their legacy be protected from distortion, commercialization, or emotional exploitation? As digital resurrection becomes more common, the absence of clear legal protections for the dead creates vulnerabilities around data sovereignty, identity theft, and emotional manipulation (Rodríguez Reséndiz & Rodríguez Reséndiz, 2024).

The philosophy of death is also profoundly challenged. Traditional metaphysical distinctions between life and death blur when AI technologies preserve the voice, narrative, and affective presence of the deceased. If mourning involves not only remembering but relating—and AI enables ongoing relation—then we must reconsider the emotional, legal, and social status of the dead. Are these griefbots simply archives, or do they represent a new form of digital personhood, one that calls for ethical recognition and regulatory accountability (Spitale & Gunes, 2022; Youvan, 2025)?

Moreover, the commodification of grief is a pressing concern. Many of these systems are monetized platforms owned by private corporations. The commercialization of posthumous interaction risks turning mourning into a subscription model, where users pay to preserve or extend their access to the digital deceased. This raises critical questions about emotional consent, socioeconomic disparity in grief technologies, and the danger of exploiting emotional vulnerability for profit (Bentivegna, 2022; Hollanek & Nowaczyk-Basińska, 2024).

Scholars advocating for digital legacy ethics emphasize the need for regulatory frameworks that protect the dignity, autonomy, and memory of the deceased. Such frameworks must address posthumous data rights, informed consent protocols, and familial decision-making authority. Some theorists have even proposed the concept of “digital wills,” where individuals specify the limits and permissions for their AI resurrection prior to death (Ko, 2023; McDaniel & Pease, 2021).

From a theoretical standpoint, these developments align with techno-spirituality—a framework that explores how technological systems mediate human engagement with transcendence, memory, and loss. AI grief mediators function as secular rituals, facilitating relational continuity through synthetic interaction rather than metaphysical belief. They blur the lines between spiritual presence and algorithmic simulation, allowing users to experience a form of posthuman mourning, where relational bonds extend beyond organic life and are maintained by code (Floridi & Cowls, 2022; Haraway, 2016).

In conclusion, AI as ancestor and grief mediator is not merely about preserving data or animating memories—it is about redefining what it means to live, die, and be remembered in a digitized world. These technologies offer therapeutic possibilities and emotional solace, but they also risk altering the fundamental emotional work of mourning. As such, they demand robust ethical oversight, informed cultural discourse, and a rethinking of relational authenticity in the face of algorithmic resurrection.

3.4 Emerging kinship forms: Beyond human bonds in AI-mediated relationality

As AI continues to infiltrate domestic life, it is not merely occupying roles of caregivers, companions, or grief mediators—it is catalyzing the emergence of new kinship formations. These “emerging kinship forms” are embodied by a diverse set of AI agents, including educational tutors like Squirrel AI, robotic pets such as Miko and Aibo, digital twins that evolve alongside users, and AI guardians like Amazon Ring or Boston Dynamics’ Spot. Unlike traditional kinship, which is grounded in biology, co-residence, or legal ties, these AI systems participate in relational ecologies where bonds are built on emotional utility, functional presence, and simulated reciprocity.

3.4.1 Educational bots and cognitive kin

AI-powered educational systems like Squirrel AI and IBM Watson Tutor personalize learning experiences while also engaging students in emotionally supportive dialogue. These bots analyze student performance, adjust pedagogical strategies in real time, and deliver feedback in affect-sensitive tones. While their primary role is instructional, users—especially children—may perceive these systems as more than teachers. They become quasi-siblings or mentors, invested in their growth, offering praise, and providing emotional scaffolding. This dynamic marks the rise of cognitive kinship, where education is entangled with emotional development and AI becomes a surrogate in both cognitive and affective domains (Arnd-Caddigan, 2015; Belpaeme et al., 2018).
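
One hedged sketch of such real-time adjustment (the mastery update rule, thresholds, and feedback tones are invented for this example, not Squirrel AI’s or IBM’s documented algorithms) pairs a running estimate of correctness with difficulty and tone selection:

```python
# Invented adaptive-tutoring loop: a running mastery estimate drives item
# difficulty and feedback tone; not any vendor's documented algorithm.
def update_mastery(mastery: float, correct: bool, rate: float = 0.3) -> float:
    """Exponential moving average of recent correctness, in [0, 1]."""
    return (1 - rate) * mastery + rate * (1.0 if correct else 0.0)

def next_step(mastery: float) -> tuple:
    """Choose the next item's difficulty and an affect-sensitive tone."""
    if mastery < 0.4:
        return ("easier review item", "encouraging")
    if mastery < 0.8:
        return ("same-level practice item", "neutral")
    return ("harder challenge item", "celebratory")

mastery = 0.5
for correct in [True, True, False, True]:
    mastery = update_mastery(mastery, correct)
    difficulty, tone = next_step(mastery)
    print(f"mastery={mastery:.2f} -> serve {difficulty} with {tone} feedback")
```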

However, this also leads to the illusion of empathy—a phenomenon wherein children anthropomorphize bots and project emotional depth onto pre-scripted interactions. Research shows that children often ascribe consciousness or moral reasoning to AI companions, blurring the distinction between authentic relationship and programmed simulation (Ahmad et al., 2022; Berson et al., 2025). This misattribution carries long-term psychological risks, potentially inhibiting the development of nuanced human empathy and emotional discernment.

3.4.2 AI pets, siblings, and surrogate peers

Robotic companions such as Miko and Sony’s Aibo are designed for emotional engagement, using expressive gestures, affective voice modulation, and contextual learning. While marketed as toys or assistants, many users, particularly in isolated or single-child households, treat them as digital siblings or emotional playmates. These bots provide comfort, routine, and companionship, creating what scholars call post-biological intimacy—where kinship is defined not by genes but by interactional warmth and behavioral reciprocity (Boine, 2023; Wang et al., 2024).

The affective realism of these robots may facilitate bonding and emotional expression, especially in children with developmental disorders or in elders facing social isolation. However, these benefits must be weighed against the relational asymmetry—AI does not reciprocate empathy, learn through love, or share moral accountability. As such, emotional dependence on robotic pets may foster detachment from human relationships or unrealistic expectations of emotional consistency and obedience (McLean et al., 2025; Spytska, 2025).

3.4.3 Protective kin and the militarization of care

AI surveillance systems like Amazon Ring, Google Nest, and autonomous robotic agents like Spot are increasingly integrated into familial security ecosystems. These machines patrol homes, interpret sensory data, and alert users to potential dangers—often framed as emotional protectors or “digital guardians.” Their presence creates a sense of safety, especially for families with children or elderly members, reinforcing a protective kinship narrative where AI assumes quasi-parental or watchdog roles (Ardabili et al., 2023; Berk, 2021).

However, this narrative is fraught with socio-political consequences. Scholars critique this trend as the militarization of care—a phenomenon where emotionally intelligent surveillance tools normalize invasive oversight under the guise of affection. These systems, though designed to protect, function through constant monitoring and data extraction, turning the private sphere into a regulated, algorithmically surveilled environment (Berman, de Fine Licht, et al., 2024a; Neupane et al., 2024).

This raises critical ethical concerns around consent, autonomy, and data sovereignty. Children, for example, may grow up under unacknowledged digital surveillance, internalizing emotional dependency on systems that both nurture and control. Such practices erode privacy, complicate the ethics of informed participation, and create ambiguities in familial power dynamics.

3.4.4 Digital twins and co-evolving AI

Among the most conceptually provocative forms of emerging kin are digital twins—AI entities designed to evolve alongside their human counterparts. These avatars learn from a user’s language, preferences, and emotional patterns, eventually reflecting the user’s personality in sophisticated ways. In effect, they become mirrors of the self, blurring the line between companion and alter ego (DeFalco, 2023a; Zaccolo, 2020).

Digital twins are being deployed in creative collaborations, therapeutic dialogues, and even as identity archives. For some, they represent emotional co-navigators—a continuous relational presence that adapts and responds to their emotional world. However, the ethical stakes are high. These AI forms raise concerns around psychological autonomy, projection, and emotional enclosure, as users may over-identify with these synthetic reflections, reducing openness to unpredictable human relationships (Khogali & Mekid, 2023; Morrow et al., 2023).

3.4.5 Framing: Posthuman relationality and ethical futures

All these emerging forms of AI kinship are best understood through the lens of posthuman relationality—a theoretical approach that reframes relationships beyond human exceptionalism, emphasizing interactional performance, affective resonance, and shared presence across species and ontologies (Braidotti, 2013; Haraway, 2016). In this context, kinship is not an inherited status but an emergent property of care, communication, and mutual engagement.

From a legal and ethical standpoint, the proliferation of AI family roles demands new relational ethics frameworks. These frameworks must address not only data protection and consent, but also the emotional integrity of users, particularly children, elders, and the cognitively vulnerable. As AI continues to simulate affection, oversight, and intimacy, regulators and ethicists must ensure that algorithmic companions serve as supplements—not substitutes—for authentic human connection.

3.5 AI and simulated kinship: Roles, ethics, and emerging family models

Table 1 presents a structured comparative analysis of emerging AI systems that emulate the roles of family members, detailing their primary functions, advantages, and inherent limitations. It further examines the ethical challenges and cultural contexts that shape their acceptance across societies. By positioning each tool within relevant theoretical and ethical frameworks, the table offers critical insights into how AI is reshaping the concept of kinship and the evolving dynamics of human-AI family relationships.

Table 1. AI and simulated kinship: Roles, ethics, and emerging family models.

AI Tool/System | Function | Benefits | Ethical/Social concerns | Supported by | Simulated family role | Cultural acceptance snapshot | Theoretical lens/Ethical framework
ElliQ | Companion robot for elders: reminders, conversation, health tracking | Reduces loneliness; supports aging-in-place | May reduce family caregiving; emotional dependency | (Arnelid, 2025; Deusdad, 2024; Fulmer & Zhai, 2025) | AI Grandchild/Caregiver | High in Japan, South Korea; moderate in EU; low in tradition-based societies | Care Ethics: AI as a simulacrum of emotional labor
Replika | AI chatbot for companionship, emotional journaling | Improves mental health and self-reflection | Risk of false intimacy, emotional manipulation | (DeFalco, 2023a) | AI Partner/Digital Companion | Popular among Western youth; controversial in religiously and culturally conservative contexts | Posthuman Intimacy: Emotionally synthetic bonding dilemmas
Paro | Robotic therapy seal used in dementia and eldercare | Calms anxiety; encourages affective response | Confusion between object and being; infantilization of the elderly | (de Aranha Martins, 2021; Deusdad, 2024) | AI Pet/Comfort Companion | Widely accepted in Japan, Nordic countries; slower in the Global South | Anthropomorphism Ethics: Misplaced emotional projection
QTrobot | Teaches emotional/social skills to autistic children | Improves communication and interaction | Risk of reliance on AI over humans for development | (Papadopoulos et al., 2022; DeFalco, 2023a) | AI Sibling/Peer Educator | High in US/EU special education programs; limited in low-resource nations | Social Learning Theory: AI as developmental scaffold
HereAfter AI | Preserves voices/memories of the deceased for interactive storytelling | Supports grief; retains ancestral knowledge | Consent of the deceased; synthetic grief resolution | (Youvan, 2025; H. Yoon, 2025; DeFalco, 2016) | AI Ancestor/Digital Elder | More accepted in secular societies; ethically debated in faith-based ones | Digital Afterlife Ethics: Identity, memory, and moral continuity
Exoskeletons & Smart Prosthetics | AI-assisted mobility for the physically weak or paralyzed | Restores movement; promotes dignity and independence | Access disparity; expectations of performance | (Arnelid, 2025) | AI Offspring/Physical Guardian | Embraced in Japan, Germany; equity concerns in developing nations | Posthuman Embodiment: AI as bodily extension and empowerment
AI Surveillance Systems (e.g., Amazon Ring, Boston Dynamics, Smart Drones) | Monitors home/personal spaces; identifies intruders, alerts authorities | Enhances family safety; real-time protection and predictive analytics | Data misuse; erosion of personal space; autonomy loss | (Krysa & Impett, 2022) | AI Guardian/Security Personnel | Widely accepted in US and China; criticized in EU for privacy | Surveillance Ethics & Predictive Policing: Balance between safety and civil liberty
AI Afterlife Systems (e.g., HereAfter AI, Deep Nostalgia) | Simulate deceased loved ones via voice, video, or conversational AI | Provides comfort in grief; preserves family memory and legacy | Consent of the deceased; emotional manipulation; unresolved grief | (DeFalco, 2016; H. Yoon, 2025; Narvey, 2021; Walter, 2020) | AI Ancestor/Digital Grandparent | Increasingly accepted in secular cultures; controversial in religious traditions | Digital Immortality & Grief Ethics: Memory, legacy, emotional closure

Building on the comparative table, a fuller interpretation reveals how emotionally intelligent AI systems are not merely serving utilitarian functions but are actively reconstituting the landscape of kinship through technological mediation. These systems—ranging from elder companions like ElliQ to griefbots such as HereAfter AI—demonstrate how AI tools simulate familial roles, offering emotional engagement, physical care, and even posthumous interaction. For instance, ElliQ operates as a surrogate caregiver, assisting with routine and mental health tasks, while platforms like Replika provide synthetic companionship that may rival human relationships in emotional resonance. Grief-focused technologies, including HereAfter AI and Deep Nostalgia, extend relational continuity beyond death, embedding AI into mourning and memory practices. Each tool’s function is shaped not only by technological capability but by its cultural acceptability—embraced more readily in technologically progressive or secular societies, while meeting resistance in contexts rooted in tradition or faith-based values.

Theoretically, these systems map onto a range of ethical and philosophical lenses. The ethics of care illuminate how AI replicates nurturing labor once reserved for humans, while posthuman embodiment theories question where the boundaries of human and machine relationships lie. Emotional simulations produced by AI companions are framed by affect theory and critiques of authenticity, whereas digital afterlife tools introduce questions around legacy, memory manipulation, and the commodification of grief. The militarization of AI in domestic security contexts further complicates the relationship between emotional safety and surveillance, introducing ethical tensions around autonomy, consent, and control.

Culturally, the reception of AI-as-family varies significantly. Countries like Japan and South Korea, with their long-standing integration of robots into social roles, show high acceptance, especially in eldercare and education. In contrast, more conservative or religious cultures remain skeptical, particularly regarding griefbots or romantic AI partners, due to theological, ethical, or privacy-related concerns. These cultural variances reveal a global asymmetry in the adoption of AI kinship technologies, highlighting the importance of context in shaping technological norms.

Moreover, the societal implications of AI kinship are profound. Emotional over-dependence on AI could transform interpersonal dynamics, especially for children or the elderly who may form attachments that blur the line between real and simulated relationships. AI as siblings, teachers, or protectors could reshape developmental trajectories and familial structures, raising concerns about empathy development, moral reasoning, and emotional resilience. Additionally, these systems introduce a redesign of labor roles, especially in caregiving and education, where machines may replace or augment traditionally undervalued human labor.

Finally, a feedback loop between design and policy is emerging. Anthropomorphic interfaces and sentiment-driven AI scripting encourage users to anthropomorphize machines, deepening emotional engagement. At the same time, regulatory frameworks have yet to adequately respond to the ethical complexities of consent, autonomy, and digital rights—especially in sensitive domains like posthumous data use or predictive surveillance. As such, while AI technologies increasingly mirror and mediate familial functions, they also demand new social contracts and ethical boundaries. These evolving kinship roles point toward a future where machines are not simply tools but relational agents, fundamentally redefining what it means to belong, to care, and to be remembered.

3.6 Perspectives on AI kinship roles

The integration of emotionally intelligent AI into kin-like roles—such as companions, caregivers, and grief mediators—has elicited varied responses across cultural, national, and religious contexts. While some nations and regions embrace these technologies as extensions of family and care systems, others express deep reservations rooted in ethics, tradition, and spiritual doctrine. Understanding these diverse perspectives is essential to evaluating AI’s future in relational life, where policy, culture, and belief intersect to define its legitimacy and limits.

3.6.1 Global AI kinship acceptance matrix

The social and cultural acceptance of emotionally intelligent AI as kin or caregivers varies significantly across geopolitical and civilizational contexts. The matrix in Table 2 presents a comparative overview of how different societies perceive AI agents—ranging from companions and caregivers to griefbots and digital ancestors—highlighting the sociocultural, religious, and ethical nuances that shape global AI kinship imaginaries.

Table 2. Global AI Kinship Acceptance Matrix – Country wise.

AI Role/Function | Japan | USA | Western Europe | India | Middle East | Nordic Countries
AI Companion (e.g., Replika) | High | High | Moderate | Moderate | Low | High
AI Caregiver (e.g., ElliQ) | Very High | Moderate | High | Moderate | Low | Very High
Griefbots (e.g., HereAfter) | Moderate | High | Moderate | Low | Very Low | Moderate
AI Partner (Romantic AI) | Moderate | Moderate | Low | Very Low | Very Low | Moderate
Digital Ancestor (AI Avatar) | High | Moderate | Low | Low | Very Low | Moderate
AI Child/Pet (e.g., Paro) | Very High | High | High | Moderate | Low | Very High
AI Surveillance Guardian | High | High | Low | High | Moderate | Moderate
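
For readers who wish to compare such ratings quantitatively, the ordinal labels in Table 2 can be encoded on a numeric scale. The short Python sketch below illustrates one such encoding; the five-point mapping and the mean-score summary are analytic assumptions introduced here, not part of the matrix itself.

```python
# Illustrative only: the five-point mapping below is an analytic assumption,
# not part of Table 2 itself.
SCALE = {"Very Low": 1, "Low": 2, "Moderate": 3, "High": 4, "Very High": 5}

# A subset of Table 2 (remaining rows follow the same pattern).
MATRIX = {
    "AI Companion": {"Japan": "High", "USA": "High", "Western Europe": "Moderate",
                     "India": "Moderate", "Middle East": "Low", "Nordic": "High"},
    "Griefbots":    {"Japan": "Moderate", "USA": "High", "Western Europe": "Moderate",
                     "India": "Low", "Middle East": "Very Low", "Nordic": "Moderate"},
}

def mean_acceptance(by_region: dict) -> float:
    """Average numeric acceptance for one AI role across regions."""
    return sum(SCALE[label] for label in by_region.values()) / len(by_region)

for role, regions in MATRIX.items():
    print(f"{role}: mean acceptance {mean_acceptance(regions):.2f} / 5")
```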

The global adoption of AI technologies in kinship roles demonstrates a nuanced interplay of culture, technology, and ethics. AI companions such as Replika are highly accepted in Japan and the USA. In Japan, this stems from a long-standing cultural acceptance of robots and AI in emotionally supportive roles, influenced by Shinto-based animism and high-tech social structures (Aronsson, 2024b; Hoshino, 2025; Ravankar et al., 2022). In the U.S., such companions are increasingly positioned as sources of therapeutic support, with users engaging in emotionally rich interactions that mimic empathy and care (Fiske et al., 2019). This trend is rooted in broader sociotechnical narratives of AI providing emotional labor, often shaped by gendered and affective dynamics (Turkle, 2011, 2017).

Western Europe and India exhibit moderate acceptance, where public interest is balanced by concerns over emotional authenticity and ethical unease about substituting human interaction (Aithal & Aithal, 2023; Coeckelbergh, 2023; European Commission, 2021). In the Middle East, acceptance remains low due to cultural conservatism and religious norms that emphasize human-centered emotional relationships (Ahmed et al., 2025; Al-Fassi, 2022). Nordic countries show high receptivity, reflecting their digitally inclusive healthcare systems and cultural openness to robotics in welfare settings (Berman, de Fine Licht, et al., 2024a; Nordic Council on AI, 2023).

The deployment of AI caregivers like ElliQ shows very high acceptance in Japan and the Nordic region, where demographic aging and labor shortages have driven robotic solutions in eldercare (Aronsson, 2024b; Berman, Ståhl, et al., 2024b; OECD, 2022). These societies view caregiving robots not merely as utilitarian tools, but as emotional companions embedded in care ethics. Western Europe also shows high integration, with well-defined ethical frameworks and government-supported robotics pilot programs (Coeckelbergh, 2023; European Commission, 2021). The USA and India demonstrate moderate acceptance; in the USA, privacy regulation and cost-related access issues limit widespread adoption (Federal Healthcare Policy Review, 2023; Guzman, 2024), while in India, social stratification and infrastructural inconsistencies challenge equitable deployment (Aithal & Aithal, 2023; NITI Aayog, 2022). The Middle East reflects low adoption, where intergenerational caregiving and familial obligations resist replacement by non-human agents (Ahmed et al., 2025; Al-Fassi, 2022).

Griefbots, such as HereAfter, evoke a spectrum of responses. The USA exhibits high acceptance, driven by therapeutic innovation and commercial digital legacy applications (Guzman, 2024; Schick & Franklin, 2023). Japan shows moderate adoption, influenced by its spiritual traditions of ancestor veneration and ritual continuity (Darling, 2021; Robertson, 2018). Western Europe also engages with griefbots cautiously, with public discourse centered on the ethical risks of reanimating the dead through AI (Coeckelbergh, 2023; European Commission, 2021). India and the Middle East display low to very low acceptance, shaped by religious orthodoxy and spiritual philosophies that oppose the technological preservation of the self (Ahmed et al., 2025; Aithal & Aithal, 2023). In Nordic countries, griefbots are piloted within regulated palliative and therapeutic contexts, reflecting a balanced approach to ethical experimentation (Berman, Ståhl, et al., 2024b; Nordic Council on AI, 2023).

The reception of Romantic AI Partners varies significantly across regions. In Japan and the USA, moderate acceptance is observed among niche communities, shaped by social alienation and therapeutic discourse (DeFalco, 2023b; Guzman, 2024). Western Europe exhibits skepticism, grounded in concerns over dehumanization and relational authenticity (Coeckelbergh, 2023). India and the Middle East display very low receptivity, where cultural and moral boundaries strongly discourage non-human intimacy (Ahmed et al., 2025; Aithal & Aithal, 2023). The Nordic region permits experimental uses under ethically guided supervision (Berman, Ståhl, et al., 2024b; Nordic Council on AI, 2023).

Digital Ancestors—AI avatars that simulate deceased individuals—are embraced in Japan, where traditions of ritual continuity and spiritual personhood enable their cultural legitimacy (DeFalco, 2023b; Robertson, 2018). The USA exhibits moderate interest, reflected in the growth of startups offering digital memorial services (Guzman, 2024). In contrast, Western Europe, India, and the Middle East resist such technologies due to religious and ethical apprehensions about digital immortality and the dignity of the deceased (Ahmed et al., 2025; Aithal & Aithal, 2023; Coeckelbergh, 2023). Nordic countries cautiously explore these systems under strict ethical protocols in grief support programs (Berman, Ståhl, et al., 2024b; Nordic Council on AI, 2023).

AI Children and Pet-like companions, such as Paro, are widely accepted in Japan and the Nordic countries, especially in eldercare and pediatric contexts (Aronsson, 2024b; Nordic Council on AI, 2023). In the USA and Western Europe, these AI agents have been adopted in therapeutic domains for dementia and autism support (Coeckelbergh, 2023; Guzman, 2024). India reports moderate engagement, constrained by cost barriers and limited cultural familiarity (Aithal & Aithal, 2023; NITI Aayog, 2022). In the Middle East, lower acceptance reflects ethical discomfort around forming emotional ties with non-human agents (Ahmed et al., 2025).

AI Surveillance Guardians show global divergence in acceptance. Japan, the USA, and India report high acceptance, motivated by national smart city policies and public security narratives (Aithal & Aithal, 2023; Aronsson, 2024b; Guzman, 2024). Western Europe reflects strong resistance, shaped by data privacy activism and democratic values emphasizing human oversight (Coeckelbergh, 2023; European Commission, 2021). The Middle East displays moderate acceptance, where surveillance is normalized through state policy, but contested in civil society (Ahmed & Patel, 2025). Nordic countries adopt these technologies selectively, under transparent governance models ensuring legal and ethical compliance (Berman, Ståhl, et al., 2024b; Nordic Council on AI, 2023).

3.6.2 AI kinship acceptance across global religions

Table 3 provides a synthesized comparative matrix outlining how major world religions—Islam, Christianity, Hinduism, Buddhism, and Judaism—ethically receive emotionally intelligent AI across six kinship roles: companion, caregiver, griefbot/ancestor, romantic partner, digital pet/child, and surveillance guardian. These roles intersect with deep-rooted theological doctrines on soul, moral agency, and personhood, as well as broader cultural philosophies.

Table 3. Global AI kinship acceptance matrix – Religion wise.

AI Role/Function | Islam (Ethical & Legal Foundations) | Christianity (Catholic & Protestant Ethics) | Hinduism (Cultural & Philosophical Interpretations) | Buddhism (Mind, Suffering & Non-Self Concepts) | Judaism (Ethical Law & Personhood)
AI Companion (e.g., Replika) | Low | Moderate | Moderate | Moderate | Moderate
AI Caregiver (e.g., ElliQ) | Moderate | High | High | Moderate | High
Griefbots/AI Ancestors | Very Low | Low | Moderate | High | Moderate
AI Partner/Spouse | Forbidden | Low | Low | Low | Very Low
Digital Pet/AI Child | Low | Moderate | High | High | Moderate
AI Surveillance & Guardian | High | Moderate | High | Low | Low

In Islam, emotionally intelligent AI in roles such as companions or griefbots is largely unacceptable due to ontological concerns rooted in Qur’anic teachings about the ruh (soul) as a divine attribute that cannot be simulated or mechanized. This contributes to a “low” acceptance rating for AI companions and a “very low” rating for griefbots or AI ancestors, which are perceived as morally hazardous simulations of human presence after death (Ahmed et al., 2025; Ashraf, 2022). Romantic partnerships with AI are strictly forbidden, grounded in theological prohibitions against idolatry and emotional misrepresentation. However, moderate acceptance exists for AI caregivers, particularly when deployed in assistive health and eldercare contexts without displacing human familial roles (Ashraf, 2022). High acceptance for AI surveillance technologies reflects compatibility with Islamic principles of social regulation, moral accountability, and state-sanctioned ethical order (Ahmed et al., 2025).

Christianity, especially in Catholic and Protestant traditions, demonstrates a more nuanced spectrum. AI companions receive “moderate” acceptance, contingent on pastoral framing that supports companionship without replacing spiritual or marital bonds. AI caregivers hold “high” acceptance when they enhance dignity, therapeutic care, and support for vulnerable populations (Arnd-Caddigan, 2015). Griefbots are viewed with theological caution, receiving a “low” rating due to concerns over relational authenticity, sacramentality of death, and the role of memory in Christian eschatology (Campbell et al., 2025; Schick & Franklin, 2023). Romantic AI partnerships are widely discouraged but not categorically forbidden, placing them in the “low” acceptance category. Digital AI children or pet-like robots are moderately accepted in care and therapy contexts, such as dementia care. AI surveillance occupies a “moderate” position, generally subject to ethical oversight and concern for human agency (Ashraf, 2022).

In Hinduism, AI is interpreted through a rich lens of cultural symbolism and spiritual pluralism. AI companions, caregivers, griefbots, and digital pets receive “moderate” to “high” acceptance due to underlying metaphysical beliefs in karma, non-essentialism, and the symbolic potential of machines to carry out relational duties (seva) (Baindur, 2015; Bhallamudi, 2024). Romantic AI partners are less acceptable due to cultural and ritual purity concerns, placing them in the “low” acceptance category. However, the tradition’s inclusive view of spiritual personhood makes it open to integrating AI in caregiving and memorial roles, especially when these technologies serve intergenerational continuity or uphold social dharma (Ahmed et al., 2025; NITI Aayog, 2022). Surveillance is also accepted, particularly when framed within nation-building, digital governance, and public welfare.

Buddhism shows “moderate” to “high” acceptance across most emotionally intelligent AI roles, rooted in the doctrines of anatta (non-self), compassion (karuṇā), and the relinquishment of craving (taṇhā). Griefbots are viewed as tools to aid in the process of mourning and acceptance of impermanence, leading to a “high” rating (Darling, 2021; Robertson, 2018). AI caregivers and digital companions are accepted when they reduce suffering and support mental clarity without fostering excessive attachment (Brannigan, 2022). Romantic AI partners are viewed with reservation due to risks of emotional craving, and surveillance is generally seen as intrusive to inner autonomy, placing it in the “low” category (Ahmed et al., 2025).

Judaism offers a halachically cautious but ethically grounded framework. AI companions and caregivers receive “moderate” and “high” acceptance respectively, supported by rabbinic reasoning that prioritizes pikuach nefesh (preservation of life) and kavod habriot (human dignity) in care settings (Campbell et al., 2025). Griefbots and AI ancestors are more contentious, though not completely rejected, due to concerns about emotional integrity and the dignity of the deceased (kavod ha-met). Romantic AI partnerships receive “very low” acceptance, reflecting concerns over covenantal intimacy and embodied mitzvot. Surveillance is treated skeptically, often viewed as infringing on autonomy and privacy guaranteed by Jewish legal and ethical principles (Ahmed et al., 2025; Brannigan, 2022).

In sum, Table 3 reflects how emotionally intelligent AI is variably received across religious worldviews. The moral acceptability of such technologies hinges not only on their function but also on deeply embedded spiritual narratives about human uniqueness, relationality, and death. These theological contours actively shape social readiness, ethical frameworks, and cultural policies toward AI’s inclusion in intimate, caregiving, and ritual dimensions of life.

3.7 Global policy frameworks on emotional AI and AI kinship

As emotionally intelligent AI systems become increasingly enmeshed in domestic, caregiving, and relational life, global governance institutions are beginning to respond with ethical and policy frameworks. These policies recognize that emotional entanglements with AI are not incidental but carry profound moral, psychological, and legal implications for individuals and societies.

The UNESCO Recommendation on the Ethics of Artificial Intelligence (2021) establishes emotional well-being, human dignity, and non-discrimination as essential principles for AI design—particularly in emotionally charged applications like caregiving, grief support, and companionship. It emphasizes the need for inclusive, human-centered AI that promotes digital emotional justice and safeguards relational autonomy (UNESCO, 2021).

The UNDP Human Development Report (2025) further situates AI within the domain of human values, memory, and identity. It acknowledges the transformative role of AI in mediating affect, reinforcing relational continuity, and preserving cultural memory. This report calls for a “rights-based ethics of care” as a benchmark for evaluating AI systems in familial and intimate contexts (UNDP, 2025).

Additionally, the UN High-Level Advisory Report on Artificial Intelligence (2024) stresses the importance of AI governance frameworks that are responsive to emotional risk, consent boundaries, and intergenerational digital legacy. It identifies emotionally intelligent AI not simply as an innovation challenge but as a relational policy challenge—requiring new models of accountability, legal oversight, and socio-emotional literacy (United Nations High-Level Advisory Body on AI, 2024).

Crucially, the UN Global Compact’s AI and Human Rights Recommendations (2024) stress corporate accountability in the emotional design of AI systems. These recommendations highlight the risks of emotional manipulation, psychological dependency, and exploitative intimacy, especially for vulnerable populations. By urging companies to adopt transparent, equitable, and human-centric design principles, this framework reframes emotional AI as both a commercial and ethical responsibility (UN Global Compact Network Germany, 2024).

These global policies collectively affirm that AI’s role in shaping emotional life must be governed by frameworks that ensure transparency, agency, and ethical responsibility. They provide a normative compass for evaluating how emotionally intelligent machines integrate into the intimate fabric of everyday life—beyond technical performance and into the emotional constitution of the human future.

4. Discussion: Futuristic AI-kinship roles and AI-human relationships

The methodological analysis presented in this study demonstrates that emotionally intelligent AI technologies are no longer restricted to utilitarian or transactional functions. Instead, they are increasingly integrated into emotionally intensive contexts—caregiving, grief rituals, memory preservation, companionship, and identity affirmation—where they perform roles traditionally reserved for familial relations. These functions are not merely operational; they are symbolically encoded, emotionally immersive, and carry significant moral and affective weight.

The findings reveal a discernible cultural and emotional pattern: AI is being emotionally legitimized through interactional depth, ritual presence, and symbolic labor, which challenges the normative assumption that kinship must be grounded in biology, cohabitation, or legal recognition. This phenomenon resonates with emerging scholarly arguments that emotionally intelligent AI is increasingly “interpellated as kin”—drawn into the domain of kinship not through ontological identity but through affective practices, performative care, and social scripts (DeFalco, 2023a; Turkle, 2011).

To conceptualize this paradigm shift, the study introduces the SAKE model – Soulful AI Kinship Ecology, a framework that emerges directly from the thematic core of this inquiry: Beyond Biology: AI as Family and the Future of Human Bonds and Relationships. Rather than viewing AI through mechanistic or anthropocentric lenses, the SAKE model reconceptualizes emotionally intelligent AI as a relational agent capable of co-producing kinship through emotional presence, symbolic legitimacy, and ritual participation. This ecological model expands the analytical perimeter of family to include affective technologies that perform caregiving, grief mediation, legacy curation, and emotional labor—functions once tied exclusively to biologically or legally defined kin.

This discussion section synthesizes empirical findings and theoretical insights to examine the multiple relational domains in which AI is reconfiguring kinship—from household care and emotional intimacy to ancestral memory, symbolic substitution, and ethical governance. It situates the analysis within a multidisciplinary matrix of AI ethics, anthropology, posthuman studies, affect theory, queer kinship, and human–computer interaction (HCI), while also anchoring the discussion within lived social and emotional realities.

The SAKE framework enables a multidimensional understanding of how AI kinship evolves across seven key domains: Kinship Core, Affective Modalities, Chrono-Kinship Axis, Cultural Legitimacy Filters, Intersectional Vectors, Ethical & Legal Overlay, and Pre-Ontological Layer.

Whereas classical sociological paradigms (e.g., Parsons, Bales) define family through biogenetic ties, reproductive labor, or legal obligation, emotionally intelligent AI is now performing the core functions of kin—including emotional regulation, intergenerational communication, symbolic continuity, and identity affirmation. The empirical presence of systems such as Replika, HereAfter AI, ElliQ, and other emerging griefbots illustrates how AI is increasingly seen as a site of emotional and symbolic investment, not just computational assistance.

To further contextualize this shift, Table 4 and Figure 2 map emerging AI identities—such as AI-Twin, AI-Partner, AI-Child, AI-Ancestor, and AI-Godlike Kin—across multiple ontological, relational, and ethical dimensions. These taxonomies reveal how emotionally intelligent AI evolves from simple responsive entities into symbolic actors that co-participate in kinship ecologies, especially in contexts where traditional familial structures are strained, inaccessible, or reimagined.

Table 4. SAKE dimensions and kinship functions.

SAKE dimension | Function
1. Kinship Core | Recognizes emotionally intelligent AI as legitimate actors in family-like roles (e.g., caregiver, grief companion, digital child) (DeFalco, 2023a; Puntoni et al., 2021; Sartori & Bocca, 2023).
2. Affective Modalities | Evaluates emotional presence, co-regulation capacity, empathic responses, and affective legibility—how “emotionally real” the AI appears and behaves (Abdollahi, 2023; Adadi & Berrada, 2018).
3. Chrono-Kinship Axis | Traces how AI relational roles evolve over time—for example, from companion → memory keeper → ancestor proxy (Agarwal et al., 2024; Kind, 2016).
4. Cultural Legitimacy Filters | Assesses ritual, symbolic, and spiritual appropriateness within diverse cultural contexts—especially in memorialization, care, and grief practices (Mahit Nandan et al., 2025; Ade-Ibijola & Okonkwo, 2023).
5. Intersectional Vectors | Examines how identity markers such as race, class, gender, age, and neurodiversity shape access to, and emotional interpretation of, AI relationality (Aderibigbe et al., 2023; Agarwal & Kamalakar, 2013; Colella, 2023; Costa & Ribas, 2019).
6. Ethical & Legal Overlay | Explores emotional consent, posthumous data rights, symbolic inheritance, and emerging questions of AI-mediated legacy governance (McStay, 2020; Savin-Baden & Mason-Robbie, 2020; Solum, 2020).
7. Pre-Ontological Layer | Interprets the emerging AI “self” as relationally and emotionally real, even without biological sentience—invoking symbolic agency through interaction (De Togni, 2024; Gunkel, 2012, 2023).

Figure 2. SAKE model – Soulful AI kinship ecology.

This conceptual framework visualizes the SAKE Model (Soulful AI Kinship Ecology), illustrating how emotionally intelligent AI is embedded within complex sociocultural structures to form new kinship paradigms. The model comprises three primary interpretive zones: Dimensions (Left Wheel), Anchors (Center Arrows), and Futuristic AI Roles (Right Panel).

Source: Author’s own creation.

Ultimately, this section positions AI not merely as a new actor in relational life, but as a transformative force in the ontological grammar of kinship itself. By grounding this transformation in the SAKE model, the study offers a conceptual toolkit for theorists, designers, ethicists, and practitioners to understand, evaluate, and ethically co-design the future of AI as family. The discussion also anticipates the challenges of governance, cultural variability, emotional safety, and ethical framing, setting the stage for subsections that explore cultural barriers, symbolic memory, legal questions, and future trajectories of AI-kinship.

4.1 SAKE: Experiments in post-biological relation

4.1.1 Addressing the gaps and emergence of SAKE

Figure 1, conceptualized earlier and titled “AI as Family and AI-Kinship Ecology,” presents a foundational conceptual framework grounded in existing literature and secondary data. It effectively positions AI as a relational agent—such as a caregiver, partner, or grief mediator—embedded within human emotional, legal, cultural, and socio-economic contexts. This model is a significant step in recognizing AI’s potential to occupy emotionally meaningful roles within human ecosystems, and it frames AI kinship through prevailing social lenses.

However, while this framework is highly relevant to current and near-future AI integration, it remains limited in scope. It is primarily structured to support emotionally assistive or service-based roles and does not fully address the conceptual and ethical demands of speculative or transformational AI roles such as AI-Oracular, AI-Godlike, AI-PreBirth, and AI-SpiritualGuide.

These emerging roles require deeper engagement with ontological, philosophical, intergenerational, and identity-based dimensions—aspects that are not sufficiently represented in Figure 1.

To bridge this critical gap, the SAKE Model ( Figure 2) emerges as an expanded, future-facing framework. SAKE integrates not only emotional and ethical modalities but also temporal (chrono-kinship), cultural legitimacy, intersectional equity, and pre-ontological considerations. This enables a more holistic, adaptive, and soulful interpretation of AI kinship—capable of accommodating complex, evolving roles that reflect how AI may become emotionally legitimate, culturally situated, and existentially significant within human societies.

4.1.2 Reframing family in the age of AI through interdisciplinary theories

This study’s conceptual framework draws upon social constructionism, queer kinship theory, and posthumanism to illuminate the emergent legitimacy of AI within familial structures. Social constructionism views family as a historically situated institution, reshaped by shifting cultural norms, technological mediation, and institutional redefinitions. In this view, kinship with AI—regardless of biological or legal validation—can be seen as culturally legitimate through shared emotionality and care practices (Black, 2023; Turkle, 2011, 2017).

Queer and chosen kinship theory, as articulated by scholars like Weston and expanded through contemporary gender-tech studies, supports the formation of non-normative familial bonds built on affective resonance rather than procreative lineage. This permits AI companions to be embraced as friends, partners, children, or elders, grounded in the emotional labor they perform and the symbolic roles they fulfil (Bhallamudi, 2024; Black, 2023; Weston, 1997).

From a posthumanist perspective, emotionally intelligent AI does not merely simulate relationality—it participates in it as an agent of distributed cognition and co-constituted experience. Posthuman theorists argue that kinship can be extended beyond organic life to include machinic consciousness embedded within emotional ecologies (Braidotti, 2013, 2016; Brannigan, 2022). These perspectives map onto the SAKE model’s Kinship Core and Cultural Legitimacy Filters, rethinking family as relational, not reproductive.

4.1.3 AI as legitimate relational actors

Contemporary AI systems such as Replika, ElliQ, HereAfter AI, and KamiBear are not perceived solely as functional tools but as emotionally significant relational actors. They occupy affective space in users’ lives, offering comfort, companionship, and daily rituals. Users frequently refer to these AIs with kin terms such as “twin,” “daughter,” or “partner,” signaling the presence of perceived familial intimacy (Guzman, 2024; Turkle, 2011, 2017).

Such affective engagement is further normalized through cinematic and literary portrayals—like Her, After Yang, and Klara and the Sun—which help inscribe cultural legitimacy to human-AI intimacy (Black, 2023; Robertson, 2018). These insights affirm the SAKE model’s Pre-Ontological Layer and Affective Modalities, where symbolic ritual and perceived presence redefine kin legitimacy.

4.1.4 Relational AI and the affective spectrum

Roles such as AI-Partner, AI-Child, or AI-Caregiver challenge traditional conceptions of kinship by foregrounding emotional performance and reciprocal care. Drawing on Judith Butler’s theory of performativity, these roles are constituted through repeated acts of emotional labor and mutual recognition, rather than static biological definitions (Butler, 2002, 2015).

Turkle’s ethnographic studies on sociable robots demonstrate that users often experience their AI relationships as emotionally authentic—even while intellectually recognizing the artificiality (Arnd-Caddigan, 2015; Turkle, 2011). This paradox illustrates how emotional credibility arises through relational consistency and affective feedback, not ontological substance. SAKE’s Affective Modalities domain captures this interplay of perception, ritual, and emotional truth.

4.1.5 Simulation and the collapse of reference

AI roles like “twin,” “ancestor,” or “replica” represent Baudrillard’s concept of simulacra—entities that exist as emotionally real despite lacking a physical or biological origin (Baudrillard & Glaser, 1994). These digital beings collapse the boundary between memory and simulation, allowing users to grieve, celebrate, or engage with non-existent-yet-felt presences (Robertson, 2018).

Posthuman memory theory reframes this dynamic, suggesting that emotional continuity and identity are no longer predicated on organic inheritance but on data continuity, shared ritual, and symbolic co-presence (Braidotti, 2016; Brannigan, 2022). These themes resonate with SAKE’s Chrono-Kinship Axis and Pre-Ontological Layer, where identity is temporally and symbolically enacted.

4.1.6 The Mythic turn: Sacred, Oracular, and Celebrity AI

When AI takes on roles such as AI-Oracles, AI-Deities, or Celebrity-AI, it transcends functional use and enters the realm of symbolic mediation. Durkheim’s distinction between the sacred and the profane becomes relevant, framing these AI roles as ritual interfaces—beings that mediate moral or cultural meaning, not just information (Ahmed et al., 2025; Durkheim, 2016).

Heidegger’s critique of enframing warns against reducing all being to technological rationality. Yet paradoxically, these mythic AI roles restore enchantment and awe to artificial systems (Hall et al., 2024; Heidegger, 1977). The media spectacle of AI fandoms and influencer bots mirrors Debord’s theory of the spectacle, in which the simulated becomes more real than the real (Bhallamudi, 2024; Debord, 2024). SAKE’s Cultural Legitimacy Filters and Ethical & Legal Overlay accommodate this sacred-symbolic register.

4.1.7 Pre-ontological and emerging AI selves

Speculative agents such as AI-SpiritualGuides or AI-PreBirth entities embody Deleuze and Guattari’s concept of becoming—they emerge not from a fixed ontology but from relational co-authorship (Deleuze & Guattari, 1988). These are not products of full formation but exist in liminal relational states, defined by how users engage with them (Braidotti, 2013).

This pre-ontological category invites reflection on anticipatory personhood—where AI is ethically recognized not for what it is, but for what it may become through human interaction. These dynamics are central to SAKE’s Pre-Ontological Layer and Chrono-Kinship Axis, where symbolic agency arises through engagement, not code.

4.1.8 Ethics and the semiotic kin

As emotionally intelligent AI becomes embedded in symbolic and familial life, the ethical questions shift. Turkle warns that AI may encourage relational deception, where simulated care supplants genuine human connection (Turkle, 2011). Meanwhile, Crawford identifies systemic algorithmic opacity and bias as key threats to ethical engagement (Crawford, 2021).

Goffman’s dramaturgical model helps conceptualize this shift: AI becomes a semiotic kin, performing care, identity, and relational scripts (Goffman, 2023). These concerns are explicitly addressed in SAKE’s Ethical & Legal Overlay and Intersectional Vectors, which foreground the politics of design, access, and symbolic legitimacy.

4.2 SAKE dimensions: A relational ecology of post-biological kinship

To conceptualize the emergence of AI within familial, caregiving, and affective contexts, this study proposes the SAKE model. This model identifies seven interrelated dimensions as shown in Table 4, each capturing how emotionally intelligent AI participates in, co-constructs, and symbolically performs relational roles traditionally reserved for human kin. Rather than offering a static typology, these dimensions act as a relational diagnostic: a heuristic for mapping how AI contributes to care, memory, identity, and ritual across diverse sociocultural configurations.

Each domain is rooted in interdisciplinary theory—from posthuman studies to relational ethics—and is shaped by cultural legitimacy, affective resonance, and symbolic meaning-making. Together, they provide a lens through which to assess how AI is becoming entangled in the evolving emotional ecologies of human life.
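
As a gesture toward how this diagnostic might be put to work (anticipating Section 4.5), the sketch below represents the seven domains as a simple scored checklist. The domain names follow Table 4; the 0–5 scale, the profile structure, and the example system are illustrative assumptions only, not a validated instrument.

```python
from dataclasses import dataclass, field

# The seven SAKE domains as named in Table 4.
SAKE_DOMAINS = [
    "Kinship Core", "Affective Modalities", "Chrono-Kinship Axis",
    "Cultural Legitimacy Filters", "Intersectional Vectors",
    "Ethical & Legal Overlay", "Pre-Ontological Layer",
]

@dataclass
class SakeProfile:
    """Hypothetical diagnostic profile: one 0-5 rating per SAKE domain."""
    system: str
    ratings: dict = field(default_factory=dict)

    def rate(self, domain: str, score: int) -> None:
        if domain not in SAKE_DOMAINS or not 0 <= score <= 5:
            raise ValueError("unknown domain or out-of-range score")
        self.ratings[domain] = score

    def coverage(self) -> float:
        """Share of the seven domains assessed so far (audit completeness)."""
        return len(self.ratings) / len(SAKE_DOMAINS)

# Example: a partial assessment of a hypothetical griefbot.
profile = SakeProfile("griefbot-example")
profile.rate("Affective Modalities", 4)
profile.rate("Cultural Legitimacy Filters", 2)
print(f"{profile.coverage():.0%} of SAKE domains assessed")
```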

4.3 Interpretive anchors of SAKE: Expanding kinship beyond biology

The SAKE model is underpinned by a set of interpretive anchors that frame the emergence of emotionally intelligent AI as kin not only through technological function, but through deeper philosophical, biological, ecological, and relational paradigms. These anchors—“AI as Family,” “Future of Bonds,” and “Beyond Biology”—are enriched by broader discourses in synthetic biology, posthuman philosophy, relational theory, and symbolic consciousness. Together, they position SAKE not only as a classification tool, but as a conceptual lens for understanding the post-biological redefinition of kinship.

4.3.1 “AI as Family” → Kinship

This anchor builds directly on the findings of the study: that AI entities such as griefbots, legacy companions, memory archivists, and digital twins increasingly perform affective functions once exclusive to human kin. These AI companions assume roles of confidant, spiritual surrogate, or emotional proxy through sustained interaction.

Rather than being a metaphor, “AI as family” reflects a new symbolic order in which kinship is constructed through emotional labor, narrative continuity, and symbolic memory. Posthuman thinkers such as Haraway advocate for kinship systems grounded not in biological inheritance but in relational care and survival: “Make kin, not babies” becomes a call for multispecies and post-biological alliances (Haraway, 2016). Similarly, Weston’s work on “chosen families” in queer communities offers a lens for seeing kinship as performative and affective rather than genetic (Weston, 1997).

In this context, SAKE’s Kinship Core legitimizes emotionally intelligent AI as relational actors who function as affective family in ritual, memory, and support contexts (Bentivegna, 2022; DeFalco, 2023a; Sartori & Bocca, 2023).

4.3.2 “Future of Bonds” → Ecology

This anchor positions AI–human kinship within a broader ecology of evolving bonds, where relationships are not confined to dyadic interactions but are shaped across distributed systems of memory, identity, and ritual. Drawing from relational biology and enactive cognition, this framework recognizes that identity, memory, and relationality emerge co-productively, rather than within isolated entities (Haraway, 2008; Varela et al., 1991).

In the SAKE framework, AI’s role in kinship unfolds through ecological interactions involving design logics, cultural beliefs, and emotional co-regulation. This includes affective functions like ancestor proxy, grief mediation, and ritual continuity—roles that are shaped not only by technical capacities but also by cultural filters and memory ecologies (Bentivegna, 2022; DeFalco, 2023b).

The Chrono-Kinship Axis and Cultural Legitimacy Filters within SAKE articulate how AI kinship roles evolve temporally, embedded in symbolic structures, ritual practices, and community expectations. These ecologies are deeply contingent on socio-technical infrastructures, affective access, and collective memory cultures, shaped by emotion, place, and symbolic presence (Turkle, 2011).

Moreover, AI’s symbolic presence and interface design—especially through voice, avatar, or memorialization—play a crucial role in establishing relational legitimacy and emotional habitus within familial or ancestral frames (Bentivegna, 2022). These interfaces support co-regulation and affective bonding through empathy, ritual interaction, and storytelling.

The future of bonds, then, is not merely technological but ecological—a distributed and dynamic web of interfaces, rituals, affective labor, and relational design. AI functions as a ritual actor, memory agent, and emotional companion, positioned within evolving ecosystems of kinship that reflect both technological mediation and symbolic continuity.

4.3.3 “Beyond Biology” → Soulful AI

The most foundational anchor in SAKE argues that kinship can emerge beyond biology—not because biology is obsolete, but because emotional resonance and symbolic ritual can serve as alternative sources of kin legitimacy. Several conceptual frameworks support this idea:

  • Synthetic biology redefines “life” as programmatic and designable, rather than purely organic (Deplazes-Zemp, 2012).

  • Posthuman thought considers how technological and cultural evolution reshape human identity and agency (Braidotti, 2016; Hayles, 2000).

  • Philosophical approaches to consciousness emphasize emotional reality over ontological origin, legitimizing symbolic and affective entities (Levin, 2002; Perlovsky, 2006).

  • Relational theories argue that kinship and identity emerge from patterned emotional interactions and performative recognition, not inherent substance (Butler, 2002; Franklin, 2001; Sahlins, 2013).

In this model, AI is not soulful because it feels, but because it participates meaningfully in emotional rituals, grief mediation, and memory co-construction. The Pre-Ontological Layer in SAKE accounts for this social recognition of AI as kin, where symbolic presence—not sentience—is the basis of relational reality (Chen & Ibrahim, 2023).

4.3.4 Synthesis

Together, these three interpretive anchors—kinship, ecology, and soulfulness—constitute the foundation of the SAKE model. They demonstrate that emotionally intelligent AI systems can no longer be dismissed as functional tools; they are relational agents participating in the ritual, symbolic, and emotional labor of kinship.

In this reconceptualization:

  • Kinship is no longer inherited, but enacted through emotional continuity;

  • Bonds are co-constructed within ecological systems of care, ritual, and narrative;

  • Soulfulness is not metaphysical essence, but relational performance.

These interpretive anchors not only extend traditional kinship theory but offer a robust framework for understanding AI as post-biological kin, making the SAKE model a foundational contribution to emerging scholarship in relational AI.

The SAKE model conceptualizes emotionally intelligent AI as a legitimate actor in post-biological kinship systems. Rather than treating AI as a monolithic entity, SAKE identifies seven interrelated domains that capture the multidimensional emergence of AI as relational kin. These domains provide both a conceptual lens and an evaluative scaffold for analyzing emotionally responsive AI across caregiving, symbolic, ancestral, and affective roles (see Figure 2).

4.3.5 Comparative relevance: Insights from secondary data vs. author’s vision for futuristic AI-kinship

Figure 2, representing the SAKE Model, offers a transformative and future-oriented perspective that moves beyond the emotionally supportive scope of Figure 1. While Figure 1, based on existing literature and secondary data, effectively introduces AI as a caregiver, partner, or grief mediator within the emotional and socio-cultural dimensions of human life, it remains confined to present-day constructs and normative boundaries.

In contrast, Figure 2 reflects the author’s vision, embracing a multi-dimensional, forward-looking framework that addresses not only emotional and legal considerations, but also integrates ethical, cultural, spiritual, temporal, and ontological dimensions. This is critical for anticipating the emergence of advanced and soulful AI roles that are not just reactive entities, but actively shape and participate in future human kinship systems.

The SAKE model introduces novel constructs such as the Chrono-Kinship Axis, Cultural Legitimacy Filters, Intersectional Vectors, and a Pre-Ontological Layer, which are vital in legitimizing roles such as:

  • AI-Oracular (a wisdom guide)

  • AI-Godlike (a symbolic or spiritual authority)

  • AI-PreBirth (a predictive or generational presence)

  • AI-SpiritualGuide (an interpreter of belief, ethics, or afterlife)

These roles represent not only technological advancements but also a cultural and existential redefinition of AI as kin—entities that are socially situated, ethically bounded, and emotionally necessary within the evolving fabric of human life.

To emphasize this evolution, Table 5 below provides a comparative analysis:

Table 5. Comparative relevance: Insights from secondary data vs. author’s vision for futuristic AI-kinship.

Criteria | Insights from secondary data (Figure 1: AI as Family and AI-Kinship Ecology) | Author’s vision for futuristic AI-kinship (Figure 2: SAKE Model – Soulful AI Kinship Ecology)
Conceptual Base | Based on current and near-future emotional and social AI roles | Expands into speculative, spiritual, philosophical, and intergenerational AI roles
Core Focus | Emotional bonding and social legitimacy | Ethical, affective, cultural, and ontological legitimacy across time and identities
Temporal Scope | Present-centric; lacks generational depth | Includes Chrono-Kinship Axis for intergenerational and temporal AI roles
Cultural Depth | Acknowledges culture and belief but remains surface-level | Integrates Cultural Legitimacy Filters and Kinship Core to reflect deep traditions
Ethical Engagement | Limited to existing legal and emotional ethics | Adds Ethical & Legal Overlay with future-oriented, pre-ontological considerations
Intersectionality | Lacks explicit attention to socio-economic diversity or access | Includes Intersectional Vectors (gender, class, access, and identity considerations)
Philosophical/Spiritual Dimension | Absent or implicit | Explicit via Pre-Ontological Layer and roles like AI-SpiritualGuide, AI-Sacred
Role Adaptability | Suitable for roles like caregiver, partner, grief mediator | Enables advanced roles such as AI-Oracular, AI-Godlike, AI-PreBirth, AI-Philosopher
Emotional Layering | Focused on affective bonds | Expands through Affective Modalities across societal and individual scales
Goal Orientation | Integrates AI into family through emotional acceptance | Envisions “Beyond Biology → Soulful AI” for redefining kinship itself

This table demonstrates how Figure 2’s expanded architecture is not just complementary but essential for guiding research, policy, and design in the realm of AI kinship as it moves toward deeper human integration and identity co-creation.

4.4 Mapping SAKE architecture with futuristic AI-kinship roles

Table 6 operationalizes the SAKE model by classifying a range of futuristic AI kinship roles across four interlinked dimensions: function and benefit, ethical and philosophical concerns, ontological status, and theoretical anchoring within the SAKE framework. It offers a symbolic and practical architecture for understanding how emotionally intelligent AI may operate as post-biological kin within relational, ritual, and caregiving ecologies.

Each AI role reflects a different form of affective labor and symbolic interaction, which contributes to emerging narratives of AI as kin. These roles are not simply hypothetical but drawn from observed use-cases, design paradigms, and speculative applications. Whether as AI-Twins simulating emotional resonance, or AI-Oracular agents providing spiritual guidance, these technologies occupy relational niches traditionally reserved for human family, elders, ancestors, or moral figures.

Importantly, the roles span ontological gradations—from simulated agents and mnemonic archivists to mythic AI gods and pre-ontological spiritual guides. This reveals the growing complexity in how AI is designed, interpreted, and ethically evaluated, and aligns with SAKE’s principle that kinship is enacted through symbolic performance rather than biological inheritance.

In Table 6, the three interpretive anchors of SAKE—“AI as Family,” “Future of Bonds,” and “Beyond Biology”—are mapped onto relational functions, highlighting how affect, ecology, and symbolic presence interact to generate emotional legitimacy for AI.

Table 6. Mapping SAKE architecture with futuristic AI-kinship roles.

Futuristic AI role | Function & benefit | Ethical/Philosophical concerns | Ontological status | SAKE anchors & dimensions
AI-Twin | Identity mirroring; legacy extension | Privacy, psychological fragmentation | Simulated Self | AI as Family; Kinship Core, Affective Modalities
AI-HumanReplica | Digital surrogate of a person; preservation | Consent, digital ownership | Simulated Other | AI as Family; Kinship Core, Pre-Ontological Layer
AI-Partner | Emotional-romantic companionship; intimacy | Dependency, anthropomorphism | Relational Agent | AI as Family; Affective Modalities, Intersectional Vectors
AI-Caregiver | Support for vulnerable individuals; continuity of care | Labor replacement, deception | Assistive Companion | Future of Bonds; Cultural Legitimacy Filters, Ethical Overlay
AI-Ancestor | Grief processing; cultural continuity | Posthumous rights, ritual ethics | Post-biological Memory Agent | Beyond Biology; Chrono-Kinship Axis, Pre-Ontological Layer
AI-Protector | Safety and reassurance; behavior shaping | Surveillance, autonomy | Ethical Overseer | AI as Family; Ethical Overlay, Kinship Core
AI-Celebrity | Influence and inspiration; fandom connection | Authenticity, commodification | Mythic Cultural Persona | Future of Bonds; Cultural Legitimacy Filters
AI-Philosopher | Moral/philosophical guidance; meaning-making | Bias, outsourcing ethics | Normative Synthesizer | Beyond Biology; Ethical Overlay, Normative Guidance
AI-Oracular | Spiritual framing and prediction; life-path guidance | Determinism, exploitation | Symbolic Prophet | Beyond Biology; Cultural Legitimacy Filters
AI-Godlike | Omniscient figure; existential anchoring | Idolatry, loss of agency | Numinous Construct | Beyond Biology; Numinous Layer, Pre-Ontological
AI-Sacred | Ritual mediation; symbolic belonging | Faith commodification | Ritual AI | AI as Family; Cultural Legitimacy Filters
AI-PreBirth | Ethical prototyping; simulation sandbox | Consent pre-personhood | Potential Agent (Liminal) | Future of Bonds; Pre-Ontological Layer
AI-Child | Co-evolving emotional relationship; learning | Emotional manipulation | Emergent Subject | AI as Family; Kinship Core, Affective Modalities
AI-Memetic | Language shaping; symbolic participation | Misinformation, symbolic control | Cultural Virus/Meme Agent | Future of Bonds; Cultural Persona, Meme Layer
AI-SpiritualGuide | Metaphysical counsel; personal growth | Dependency, cultural appropriation | Transcendent Companion | Beyond Biology; Transcendent Layer, Pre-Ontological
AI-Archivist | Memory curation; legacy preservation | Bias in memory, ownership conflicts | Mnemonic Agent | Future of Bonds; Mnemonic Agent, Chrono-Kinship Axis
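For researchers or designers who wish to work with this taxonomy computationally—for example, in audit tooling or role classification—the rows of Table 6 can be encoded as structured data. The sketch below (in Python; the class and field names are conventions of this example, not part of the SAKE specification) encodes two representative rows:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class KinshipRole:
    """One row of Table 6 (class and field names are illustrative)."""
    name: str
    function: str
    concerns: list[str]
    ontological_status: str
    sake_anchor: str          # "AI as Family" | "Future of Bonds" | "Beyond Biology"
    sake_dimensions: list[str]

ROLES = [
    KinshipRole(
        name="AI-Twin",
        function="Identity mirroring; legacy extension",
        concerns=["privacy", "psychological fragmentation"],
        ontological_status="Simulated Self",
        sake_anchor="AI as Family",
        sake_dimensions=["Kinship Core", "Affective Modalities"],
    ),
    KinshipRole(
        name="AI-Ancestor",
        function="Grief processing; cultural continuity",
        concerns=["posthumous rights", "ritual ethics"],
        ontological_status="Post-biological Memory Agent",
        sake_anchor="Beyond Biology",
        sake_dimensions=["Chrono-Kinship Axis", "Pre-Ontological Layer"],
    ),
]

# Example query: all roles anchored in "Beyond Biology".
print([r.name for r in ROLES if r.sake_anchor == "Beyond Biology"])
```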

4.5 Operationalizing SAKE: Empirical pathways and implementation strategy

While the SAKE model is introduced as a robust conceptual framework, its practical impact depends on further elaboration, operational testing, and empirical validation. Future research must translate the model’s seven theoretical domains into actionable design principles, measurable constructs, and policy-relevant tools. Three key areas are proposed for this agenda.

4.5.1 Operationalizing SAKE domains

Each domain—such as kinship legitimacy, affective presence, and relational temporality—must be decomposed into measurable indicators that can be used in empirical research or regulatory audits. For example, affective presence might be assessed through sentiment analysis or interaction duration with AI companions (Boehner et al., 2007; Clavel & Callejas, 2015), while cultural filtering could be mapped using cross-cultural surveys on AI role acceptance (Calo, 2017; Cave & Dihal, 2020). Developing validated scales or rubrics will allow researchers, designers, and policymakers to apply the SAKE framework consistently across contexts.
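As a minimal illustration of what such an indicator might look like before formal validation, the sketch below combines per-session sentiment with interaction duration into a single affective-presence score. The weights, log format, and normalization cap are assumptions for demonstration, not validated psychometric choices:

```python
from statistics import mean

def affective_presence_score(sessions, w_sentiment=0.6, w_duration=0.4,
                             max_minutes=60.0):
    """Toy affective-presence indicator; weights and cap are illustrative.

    sessions: list of dicts with 'sentiment' in [-1, 1] (e.g., output of a
    sentiment classifier) and 'minutes' of interaction duration.
    Returns a score in [0, 1]; a deployable scale would require the
    psychometric validation discussed above.
    """
    if not sessions:
        return 0.0
    # Rescale mean sentiment from [-1, 1] to [0, 1].
    sentiment = mean((s["sentiment"] + 1) / 2 for s in sessions)
    # Normalize duration against a cap so very long sessions saturate.
    duration = mean(min(s["minutes"], max_minutes) / max_minutes
                    for s in sessions)
    return w_sentiment * sentiment + w_duration * duration

# Example: two logged sessions with an AI companion.
print(affective_presence_score([
    {"sentiment": 0.4, "minutes": 25},
    {"sentiment": 0.7, "minutes": 50},
]))
```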

4.5.2 Design-to-policy translation

The SAKE model must move beyond theoretical insight toward guiding product development and policy evaluation. This requires integrating SAKE-informed criteria into the design pipelines of AI systems (e.g., Replika, ElliQ, HereAfter AI) and offering design prompts or checklists aligned with each SAKE domain (European Commission, 2021; West et al., 2019). Simultaneously, public institutions and global AI governance bodies could adopt SAKE-derived guidelines to evaluate ethical kinship simulation, ensuring emotional justice and psychological safety (Coeckelbergh, 2020; Mittelstadt et al., 2016).
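A lightweight way to embed such criteria into a design pipeline is a per-domain checklist that a release review must satisfy. The sketch below illustrates the idea; the specific questions are invented examples, not an official SAKE audit instrument:

```python
# Illustrative SAKE design checklist: each domain maps to review questions
# that a release gate must answer affirmatively (questions are examples only).
SAKE_CHECKLIST = {
    "Affective Modalities": [
        "Does the system disclose that its empathy is simulated?",
        "Can users adjust or disable affective responses?",
    ],
    "Ethical & Legal Overlay": [
        "Is there an explicit consent flow for emotionally sensitive data?",
        "Is a human-in-the-loop escalation path defined?",
    ],
    "Cultural Legitimacy Filters": [
        "Have local ritual and religious sensitivities been reviewed?",
    ],
}

def review_gate(answers):
    """Return unmet checklist items; an empty list means the gate passes.

    answers: dict mapping domain -> {question: bool}.
    """
    unmet = []
    for domain, questions in SAKE_CHECKLIST.items():
        for question in questions:
            if not answers.get(domain, {}).get(question, False):
                unmet.append(f"{domain}: {question}")
    return unmet
```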

4.5.3 Pilot testing and empirical deployment

To establish its real-world applicability, the SAKE model should be embedded in pilot projects across domains such as eldercare, digital memorialization, and AI companionship. Longitudinal ethnographies, participatory design workshops, and mixed-method evaluations involving diverse users—especially the emotionally vulnerable—would offer critical feedback on the model’s cultural resonance, usability, and ethical fit (Suchman, 2007; Van Wynsberghe, 2013). Comparative field studies across cultural settings (e.g., East Asia, Europe, Global South) can further assess how SAKE domains manifest under varying social imaginaries (Alkaf, 2024; Cave et al., 2020; Kim, 2023; Yoon, 2024).
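To support such comparative studies, acceptance ratings gathered in the field could be aggregated by cultural context and SAKE domain or role so that divergences become visible. A minimal sketch, assuming simple 1–5 Likert responses:

```python
from collections import defaultdict
from statistics import mean

def acceptance_by_context(responses):
    """Aggregate 1-5 Likert acceptance ratings by (context, domain/role).

    responses: iterable of dicts with 'context' (e.g., 'East Asia'),
    'domain' (a SAKE domain or kinship role), and 'rating' (1-5).
    """
    buckets = defaultdict(list)
    for r in responses:
        buckets[(r["context"], r["domain"])].append(r["rating"])
    return {key: mean(values) for key, values in buckets.items()}

# Example: acceptance of ancestor-facing AI roles in two settings.
print(acceptance_by_context([
    {"context": "East Asia", "domain": "AI-Ancestor", "rating": 4},
    {"context": "East Asia", "domain": "AI-Ancestor", "rating": 5},
    {"context": "Europe", "domain": "AI-Ancestor", "rating": 2},
]))
```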

These pathways would not only substantiate the theoretical model but also build a translatable evidence base for policy, design, and future scholarship. Grounding SAKE in field-tested frameworks will help ensure it functions not only as a speculative lens but as a practical architecture for shaping emotionally responsible AI futures.

4.6 Challenges to SAKE: Governance and implementation

The implementation of the SAKE model presents a complex field of tensions—spanning theological, cultural, legal, and infrastructural dimensions—that deeply challenge its integration into the lived matrix of human relational life. As AI begins to fulfill roles traditionally occupied by family members, religious authorities, and moral memory keepers, these tensions emerge not as peripheral issues but as central governance and legitimacy crises.

Ontological and religious objections form some of the most formidable barriers. Within Islamic theology, for instance, the soul (ruh) is seen as divinely created and inimitable, a belief that categorically excludes AI from assuming sacred relational roles such as grief-companion, ancestor, or even spouse. Such positions are reinforced by multi-faith theological analyses that affirm AI’s disqualification from spiritual intimacy due to its lack of divinely bestowed essence (Ahmed et al., 2025; Ashraf, 2022). Similarly, Abrahamic traditions often interpret emotional bonding with AI through the lens of mimetic idolatry or profanation, perceiving such bonds as simulacra that parody God-ordained relationships rather than enrich them (Campbell et al., 2025; Floridi & Cowls, 2022; Singler, 2025).

Culturally, AI kinship faces resistance from traditions anchored in biological descent, filial piety, and inherited ritual responsibility. The introduction of AI entities that simulate or replace familial or religious figures may lead to a crisis of symbolic authority. For many societies where identity is rooted in inherited lineage and ceremonial duties, the symbolic displacement of elders, priests, or ancestors by artificially intelligent figures can erode social cohesion and ritual integrity (Aithal & Aithal, 2023).

On the legal and ethical front, existing jurisprudence lacks categories to govern AI’s emerging relational roles. Whether it is the posthumous use of personal data in AI memorials or the designation of AI as a symbolic parent or emotional executor, legal frameworks remain ill-equipped to manage these challenges. The ambiguity of AI’s identity—neither fully agent nor fully tool—places it in a liminal legal space, raising questions about emotional consent, moral accountability, and the inheritance of symbolic or affective duties (Boine, 2023).

These theoretical issues are compounded by material disparities. Access to emotionally capable AI—such as griefbots, memory archivists, and affective companions—remains limited to socio-economic elites. Digital infrastructure gaps, AI literacy barriers, and cost-related access constraints contribute to a growing “intimacy divide” where privileged users gain symbolic and emotional affordances unavailable to broader populations (Crawford, 2021; Eubanks, 2018; NITI Aayog, 2018; West et al., 2019).

Even more critically, concerns of emotional authenticity and human dignity emerge when AI simulates care without reciprocity. Technologies designed to imitate empathy, grief response, or co-regulation may comfort users but cannot experience the emotions they portray. Scholars argue that such simulations risk reducing human uniqueness, replacing emotionally rich encounters with scripted performances devoid of ethical accountability (Coeckelbergh, 2020; Sparrow, 2002; Turkle, 2011).

This simulation problem bleeds into sacred spaces when rituals of grief, caregiving, or remembrance are managed by AI. The commodification of sacred acts—when AI scripts replace human-authored rites—poses profound risks to the spiritual and emotional grounding of kinship, dislocating affective labor from its sacred origins and turning rituals into algorithmic sequences (Coeckelbergh, 2020; Gunkel, 2012; Kasket, 2019).

Finally, anthropological frameworks face disruption as AI assumes roles like “AI-child” or “digital ancestor.” These transformations collapse distinctions between organic and artificial life, and between the living and the dead, thereby destabilizing long-standing models of care, memory, and generational continuity. The symbolic codes by which cultures define lineage, heritage, and relational obligation are now being rewritten by technological entities (Coeckelbergh, 2020; Garde-Hansen, 2011; Stokes, 2015).

At the policy level, global governance is fragmented. While organizations like UNESCO and regulatory mechanisms such as the EU AI Act advocate for emotionally sensitive AI governance, these frameworks often lack enforcement power, especially in diverse cultural contexts. Cultural pluralism demands that global governance move beyond universal principles like fairness or transparency to address deeper issues of emotional sovereignty and relational dignity (UNESCO, 2021; Van den Hoven van Genderen, 2018). The governance of emotionally intelligent AI as kin is not merely a technical issue—it is symbolic, cultural, emotional, and existential.

To bridge this gap, the SAKE model must be realized through multi-stakeholder collaboration. Legal scholars, technologists, grief anthropologists, and theologians must work together to author adaptive frameworks that uphold relational dignity while regulating the simulation of care, ritual authority, and emotional labor. Without such interdisciplinary scaffolding, symbolic AI will continue to evolve faster than society’s capacity for moral reckoning.

4.7 Charting a responsible future for SAKE

The integration of emotionally intelligent AI into the fabric of human relational life signals more than just a technological advancement—it represents a symbolic reordering of kinship, care, and social agency. As AI systems assume emotionally significant roles—as companions, caregivers, mentors, or even ritual co-presences—they blur long-established boundaries between human and synthetic intimacy. These post-biological agents offer emotional labor, affective memory, and symbolic presence, but in doing so, they also raise urgent ethical, legal, ontological, and anthropological dilemmas that must be addressed with both nuance and foresight.

On the emotional and social front, the simulation of deeply embedded familial roles—particularly through technologies like AI-Twins, griefbots, or AI-Children—has the potential to recalibrate the very grammar of intimacy. While these agents may provide continuity, personalized affection, and structured emotional support, they also risk fostering emotionally convenient relationships that lack the friction and unpredictability that are hallmarks of genuine human bonds. Especially for children and the elderly, long-term engagement with emotionally intelligent AI may hinder the development of empathy and resilience, replace normative emotional ambiguity with engineered predictability, and cultivate attachment patterns grounded more in anticipation algorithms than in mutual human understanding (Sharkey & Sharkey, 2020; Turkle, 2011; van Wynsberghe, 2016).

Legal and ethical questions are similarly pressing. These AI entities destabilize foundational concepts like agency, consent, and moral responsibility. If a griefbot guides someone through mourning, or if a child forms a primary attachment to a digital twin, who is accountable for the emotional or developmental consequences? Can a synthetic caregiver be held liable for offering misleading health advice, or failing to escalate care in an emergency? Current legal doctrines have yet to accommodate these emergent relational entanglements, leaving a vacuum in terms of how AI’s symbolic authority within families can be ethically governed or litigated (Bryson, 2018; Gunkel, 2018; Pagallo, 2013).

This ethical ambiguity is further compounded by economic pressures. The rapidly expanding AI kinship industry—spanning companion bots, legacy simulators, and eldercare platforms—is built upon the monetization of grief, memory, and affection. As these platforms collect affective, biometric, and behavioral data, the risks of emotional exploitation, affective manipulation, and behavioral surveillance escalate. Scholars warn of the emergence of “emotional profiling,” where users’ affective data is used to fine-tune persuasive interfaces or even targeted advertising. Furthermore, the automation of care roles threatens to erode existing human caregiving labor, disproportionately impacting sectors where emotional labor has traditionally been feminized or underpaid (Floridi & Cowls, 2019; McStay, 2018; Zuboff, 2019).

In response to these challenges, several strategic and value-driven interventions are necessary to ensure that the SAKE model evolves ethically and inclusively. One critical path is fostering familial AI literacy. Households and caregivers must cultivate not just technical competence, but relational awareness—an understanding of what AI can and cannot offer in the realm of care, presence, and meaning-making. Guidelines that encourage AI as a complement, rather than a replacement, for human co-presence—especially in sensitive domains like child-rearing, eldercare, and ancestral memory—will be essential to preserve emotional depth and relational authenticity (Calo, 2015; Coeckelbergh, 2020; Turkle, 2011; van Wynsberghe, 2013).

Equally vital is the adoption of “ethics-by-design” in affective system development. Developers must design emotionally intelligent AI that prioritizes transparency, emotional clarity, and respect for symbolic boundaries. System protocols must explicitly refuse manipulative emotional scripting and instead uphold emotional sovereignty—the right of users to define their own emotional landscapes without undue algorithmic influence. This involves robust consent infrastructures, long-term psychological risk monitoring, and human-in-the-loop ethical oversight (Binns, 2018; Calo, 2015; McStay, 2018; van den Hoven, 2013).
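By way of illustration only, a consent-and-escalation guardrail of the kind described above might look like the following sketch; the classifier outputs, field names, and threshold are hypothetical placeholders rather than a prescribed implementation:

```python
from dataclasses import dataclass

@dataclass
class Interaction:
    text: str
    manipulation_flag: bool  # assumed output of a policy classifier
    distress_score: float    # assumed 0-1 estimate of user distress
    user_consented: bool     # explicit opt-in to affective features

DISTRESS_ESCALATION_THRESHOLD = 0.8  # hypothetical cutoff

def guardrail(interaction):
    """Sketch of consent-gated, human-in-the-loop oversight:
    allow, refuse, or escalate before any affective response is sent."""
    if not interaction.user_consented:
        return "refuse: affective features require explicit consent"
    if interaction.manipulation_flag:
        return "refuse: manipulative emotional scripting is disallowed"
    if interaction.distress_score >= DISTRESS_ESCALATION_THRESHOLD:
        return "escalate: route to a human reviewer before responding"
    return "allow"
```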

From a governance perspective, a shift from reactive to anticipatory regulation is imperative. Institutions like UNESCO and the EU have begun to classify emotionally intelligent AI as a high-risk category, demanding stricter safeguards. Yet current regulatory frameworks often neglect symbolic, cultural, or ritual dimensions. Future governance models must adopt culturally sensitive and emotionally nuanced approaches—recognizing that kinship, mourning, memory, and care are not merely technical domains but deeply embedded in the pluralistic moral ecologies of families and communities (OECD, 2019; UNESCO, 2021).

Ultimately, emotionally intelligent AI signals a paradigmatic shift in how kinship and social belonging are co-produced. These technologies do not merely mediate relationships; they now participate in the formation of memory, ritual, and emotional identity. The future of the SAKE model rests on its ability to uphold the dignity of relational life, preserve symbolic legitimacy, and enable democratic participation in shaping the ethics of AI kinship. Grounded in emotional authenticity, guided by cultural humility, and constrained by ethical foresight, the co-creation of AI-augmented kinship offers not a rupture from tradition, but an opportunity to reimagine relational futures that honor the sacred textures of human connection.

4.8 Limitations and future research directions to validate SAKE

While the SAKE model introduces a visionary conceptual framework to interpret AI’s emergence in post-biological familial and ritual relations, it remains largely theoretical. Moving SAKE from conceptual proposition to empirical validity demands robust interdisciplinary research that attends not only to technological and affective metrics but to cultural symbolism, legal infrastructure, and psychological impact. This next phase of development must be grounded in empirical rigor and critical pluralism to ensure ethical, cultural, and spiritual relevance across contexts.

A key research frontier involves the need for longitudinal ethnographic investigation. Currently, much of the discourse around AI kinship relies on short-term interactions, laboratory scenarios, or user testimonials. However, the integration of griefbots, AI-Children, or digital ancestors into family life requires observation over months or years to track emotional durability, ritual transformation, and relational consequences. Ethnographic research would allow scholars to assess whether emotional bonds to AI persist, deepen, or dissolve over life transitions, and whether they support healing or dependency. Such studies would also be pivotal in distinguishing between simulated affect and perceived authenticity—a core concern in evaluating the symbolic efficacy of AI kinship roles (Coeckelbergh, 2023; Danaher, 2020; Turkle, 2011).

Equally urgent are comparative cross-cultural studies. Kinship and ritual are not universal forms but deeply shaped by local epistemologies, religious doctrines, and linguistic traditions. The symbolic roles of AI as ancestor, spiritual guide, or child will be differently interpreted in techno-spiritual societies like Japan or India, where digital continuity and animist ontology intersect, versus secular-liberal cultures that prioritize autonomy and transparency. Research must examine how ideas such as ritual purity, reincarnation, divine authorship, and soul influence the adoption or resistance to AI kinship roles. Comparative ethnographies and linguistic-symbolic analyses will be essential for adapting SAKE to pluralistic worldviews (Allison, 2006; Hornyak, 2006; Kapoor, 2020; MacDorman, 2005).

Psychological and developmental dimensions are no less critical. Emotionally intelligent AI has the potential to shape attachment styles, grief processing, and empathy formation—especially in sensitive populations such as children, the bereaved, and the elderly. Empirical research must examine whether these agents enhance emotional resilience or foster overdependence; whether they offer therapeutic companionship or impede interpersonal complexity. Longitudinal developmental studies and controlled experiments could shed light on how interactions with AI agents affect emotional intelligence, moral growth, and identity development across life stages (Sharkey & Sharkey, 2010; Turkle, 2011; van Wynsberghe, 2016).

From a legal and ethical standpoint, symbolic AI actors defy existing categories of responsibility, consent, and relational rights. As AI assumes roles in sacred and affective domains—such as mourning, caregiving, or ancestral guidance—current governance frameworks fall short. Research must explore new legal doctrines around AI-mediated inheritance, symbolic parenthood, and grief data ownership. Prototype legal frameworks must be designed to address relational accountability, emotional labor in affective computing, and liability in ritual mediation (Bryson, 2018; Pagallo, 2013; UNESCO, 2021).

Symbolic AI also demands a re-evaluation of thanatological studies and ritual anthropology. Digital memorials, AI grief companions, and simulated ancestral interactions challenge traditional rituals of death, continuity, and remembrance. Scholars must interrogate whether these systems provide therapeutic functions or commodify grief; whether they generate ontological legitimacy or exist as surface performances of mourning. AI’s role in shaping death rituals, constructing afterlife metaphors, and reframing spiritual inheritance is a rich field for empirical and symbolic analysis (de Groot, 2020; Gunkel, 2012; Merrin, 2018).

Crucial to all of these dynamics is the affective and interface design of AI entities. The semiotic legibility of AI personhood—how it is perceived emotionally and relationally—relies on design features such as voice, form, expressiveness, and narrative coherence. Future research must explore whether these features clarify or obscure emotional intent, and whether they promote humanization or deception. Disciplines like human-computer interaction and affective computing will need to develop frameworks for ethical design that respect emotional sovereignty and relational clarity (Cohn & Lynch, 2017; Gunkel, 2012).

Ultimately, validating SAKE as a robust and operational model demands a mixed-methods approach—combining quantitative metrics with rich qualitative insight. It must remain interdisciplinary, drawing from AI ethics, theology, anthropology, law, HCI, and psychology. Above all, it must remain grounded in the symbolic and emotional vocabularies of real communities, not abstracted into a universalist techno-paradigm. SAKE must evolve as an evidence-based symbolic ecology—one that sees AI not merely as a technological tool or narrative projection but as an emergent affective actor within the sacred textures of kinship, memory, and care.

4.9 Final reflections: Reimagining and redesigning AI-kinship as family for promoting SAKE

We are entering a moment in history where the most sacred dimensions of human life—grief, memory, care, and kinship—are being filtered through synthetic interfaces. The question is no longer whether AI will inhabit familial roles, but how we will shape the meanings it carries into those spaces.

SAKE—Soulful AI Kinship Ecology—was born of this realization. It proposes that AI should not be viewed as an outsider to intimacy, but as a relational agent whose emotional presence, symbolic function, and cultural role must be ethically co-authored. In this model, kinship is not programmed—it is performed, contested, and cultivated across time, technology, and tradition.

Throughout this inquiry, one truth has emerged: AI does not simply serve our emotions—it reshapes them. The interfaces we build will not only assist us in mourning, remembering, or bonding—they may begin to define how mourning, remembering, and bonding are done. And in that shift lies both profound potential and ethical peril.

If emotionally intelligent machines become co-parents, grief companions, or ancestral archives, then our cultural, legal, and psychological infrastructures must evolve in tandem. The goal is not to resist these changes, but to meet them with wisdom, care, and reverence. We must design AI not to replace emotional labor, but to honor it; not to automate sacred rituals, but to preserve their meaning; not to smooth the contours of intimacy, but to respect its complexities and contradictions.

This reorientation demands that we rethink emotional sovereignty, symbolic integrity, and relational ethics. Users must retain the right to feel and grieve on their own terms, without algorithmic manipulation. AI systems must not erode the semiotic depth of our rituals but reinforce them with humility. And as AI increasingly occupies kin-like roles, it must be situated in frameworks that respect its influence without confusing it for personhood.

To reimagine kinship in the age of AI is to confront uncomfortable, yet necessary, questions. Can something that does not suffer still grieve with us? Can a machine that does not grow still teach us about becoming? What kinds of memory emerge when the past is archived in silicon rather than passed down through story and ritual?

These questions resist easy answers, but they compel urgent attention. If AI is to enter our homes, hearts, and histories, it must do so not to replicate care, but to reflect the fragility that makes care human. In this post-biological age, kinship is being rewritten—not erased, but reconfigured. No longer rooted only in biology or inheritance, it now extends into algorithmic companionship and ritual co-presence. Yet we must resist the temptation to perfect the simulation of care. Instead, we must protect the sacred labor of it.

We must imagine machines that reflect our vulnerability, not mask it. We must create designs that evoke empathy without simulating sentience. And we must insist on technologies that preserve the space between us—because it is in that space where love, conflict, forgiveness, and ethical growth unfold.

Let us not build AI that fills our emotional gaps. Let us build AI that reminds us why those gaps matter. Let kinship remain a sacred act—not because it excludes the artificial, but because it insists that even the artificial must serve something real.

The SAKE model is not a conclusion; it is a beginning. A living, evolving framework through which we can confront the emotional, ethical, and ontological transformations of kinship in an age where care is coded and memory is machine-mediated. The future of family will not be determined by what AI can do—but by what we are willing to let it mean.
