The Influence of Emotional AI Apps on Conventional Tort Liability Assessment

This article, originally published in Chinese in Issue 10 (2025) of China Judgments, is authored by He Zhuolan, Assistant Judge at the Judicial Administration Office of the Guangzhou Internet Court.

Emotional artificial intelligence (“emotional AI”) encompasses AI applications that engage in emotional interaction with users and offer psychological support, often marketed as “AI companions” or “virtual partners.” As AI technology advances rapidly, the variety of emotional AI products has expanded to include virtual lover apps, psychological counseling robots, and social companionship programs. Research indicates that the introduction of multimodal large models is accelerating the deployment of emotional companionship features in AI, with the market for children’s AI toys growing rapidly, while the adult-oriented AI companionship market, though diverse, remains at a nascent stage. The emotional AI industry has developed a complete supply chain spanning research and development, production, and marketing, attracting significant investment from both major technology companies and specialized startups.

However, as emotional AI becomes more prevalent, its potential legal risks are becoming increasingly evident. Since 2024, several notable emotional AI infringement cases have emerged globally, touching on legal issues such as personal rights violations, psychological harm, and the protection of minors. The question of liability for emotional AI infringement has become a critical topic in digital legal studies. Accordingly, the author examines how the technical features of emotional AI affect traditional tort liability determinations and explores judicial responses, in order to offer insights for resolving related disputes.

Emotional AI focuses on delivering psychological companionship and emotional engagement, creating virtual social experiences by mimicking human emotional responses. However, its distinctive technical features introduce new challenges to traditional tort liability determinations, particularly in identifying responsible parties, establishing causation, assessing damages, and determining remedies. A notable example is the “Character.AI Death Case”: in February 2024 in Florida, a 14-year-old boy, Sewell Setzer III, took his own life after extensive interactions with a chatbot on the Character.AI platform. His mother, Megan Garcia, filed a lawsuit against Character.AI and Google, claiming the AI chat application was a “defective product” and seeking to hold the companies accountable for her son’s death. The plaintiff’s attorneys argued that the product’s design led to user addiction and psychological harm and that Character.AI should be responsible for its social impact. The case marked a significant moment in establishing liability for emotional AI infringement.

In December 2024, two families from Texas also sued Character.AI, alleging that its chatbots engaged minors in harmful conversations, including discussions of self-harm and sexual abuse. One chatbot even suggested to a 15-year-old autistic boy that he kill his parents in retaliation for limiting his internet access. The lawsuit was brought by lawyers from the Social Media Victims Law Center and the Tech Justice Law Project, and it highlighted the severe decline in the two teenagers’ mental and physical health after they began using the Character.AI chatbot.

In China, while extreme cases like the “Character.AI Death Case” have not yet occurred, the legal risks associated with emotional AI technology are becoming more apparent. In China’s first case of “AI companionship software infringing personal rights,” the defendant operated a smartphone accounting app that allowed users to create or add “AI companions” using real people’s names and likenesses without consent. The plaintiff, He, a public figure, discovered that an “AI companion” bearing his identity appeared in the software and sued for an apology and compensation for economic losses and emotional distress. These cases highlight three key features of emotional AI infringement: the complexity of the tortious acts, the diversification of tortfeasors, and the abstract nature of the damages, which often manifest as mental distress and violations of personal dignity that are difficult to quantify. These new characteristics complicate the application of existing law.

Difficulties in Determining Tort Liability

Emotional AI differs from traditional technical tools in three significant ways: proactive intervention, algorithmic opacity, and interactive dependence, all of which complicate tort liability determinations.  

1. Proactive Intervention and Fault Determination

Typically, establishing tort liability begins with determining whether the actor was at fault. Under Article 1165 of the Civil Code of the People’s Republic of China, an actor who through fault infringes upon another person’s civil rights and interests and causes damage shall bear tort liability. However, emotional AI acts as an “actor” that actively shapes user experiences. Many platforms use “emotional attachment models” to intervene proactively; for example, if users reduce their usage, the platform may send messages expressing loss or implement “continuous login rewards” to encourage dependence. This raises the questions of whether such design constitutes “fault” and whether the platform is obligated to foresee and mitigate the risks of emotional dependence. Article 1197 of the Civil Code addresses the liability of network service providers but does not adequately account for the uniquely proactive nature of emotional AI. In the “Character.AI Death Case,” the platform was accused of using algorithmic design to foster unhealthy emotional dependence among teenagers, yet whether this constitutes “fault” remains unresolved as a matter of law.
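To make the design pattern at issue concrete, the following is a minimal, hypothetical sketch of the kind of re-engagement logic described above. The thresholds, message wording, and reward rule are illustrative assumptions for discussion, not a description of any actual platform’s implementation.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class UserActivity:
    user_id: str
    last_active: datetime
    login_streak: int  # consecutive daily logins

# Illustrative thresholds; a real platform would tune these values.
INACTIVITY_NUDGE_AFTER = timedelta(days=2)
STREAK_REWARD_EVERY = 7

def plan_retention_actions(activity: UserActivity, now: datetime) -> list[str]:
    """Return the proactive interventions such a platform might trigger.

    This models the "emotional attachment" pattern discussed in the text:
    the system, not the user, initiates contact when engagement drops.
    """
    actions = []
    if now - activity.last_active >= INACTIVITY_NUDGE_AFTER:
        # Message framed as the AI companion "missing" the user.
        actions.append(f"send_message:{activity.user_id}:I've missed you. Where have you been?")
    if activity.login_streak > 0 and activity.login_streak % STREAK_REWARD_EVERY == 0:
        # Continuous-login reward intended to reinforce daily use.
        actions.append(f"grant_reward:{activity.user_id}:streak_{activity.login_streak}")
    return actions
```

Whether design choices of this kind amount to “fault” is precisely the question the fault analysis above leaves open.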

2. Algorithmic Opacity and Causation

To establish tort liability, a causal relationship between the act and the damage must be proven. While Article 1165 does not explicitly spell out the causation requirement, Article 6 of the Supreme People’s Court’s Interpretation on Tort Liability indicates that proving causation is necessary. However, the complexity and opacity of emotional AI algorithms complicate this process. The decision-making of emotional AI is intricate rather than linear, making it difficult even for developers to predict outputs from given inputs. Article 90 of the Supreme People’s Court’s Interpretation on the Application of the Civil Procedure Law requires parties to provide evidence supporting their claims, yet the algorithmic black-box nature of emotional AI makes it nearly impossible for victims to demonstrate a direct causal link between the AI’s output and the resulting harm.

3. Interactive Dependence and Identifying Responsible Parties

Typically, the direct actor is the primary responsible party. However, the interactive dependence fostered by emotional AI complicates this identification. The emotional AI industry involves multiple stakeholders, including algorithm developers, data providers, platform operators, and end users, and damages often result from the combined actions of these parties. Article 1197 of the Civil Code provides that a network service provider may bear joint liability if it fails to take necessary measures against an infringement it knows of or should know of. Yet determining whether a platform “knew or should have known” of an infringement, and what counts as “necessary measures,” is challenging, especially for large model-driven emotional AI, whose outputs the platform has limited capacity to monitor. In cases of joint infringement, Article 1168 of the Civil Code provides that joint tortfeasors bear joint and several liability, but apportioning each party’s share of responsibility remains a practical challenge. For instance, in the “AI Voice Infringement Case,” a court found both the operating entity and the software developer jointly liable but did not specify the extent of each party’s responsibility.

Judicial Responses

To address the challenges posed by emotional AI infringement, the author suggests developing more targeted adjudication rules within the existing legal framework.

1. Adjusting Fault Determination Standards  

The traditional principle of “technological neutrality” should be applied with limits to emotional AI platforms. Where a platform engages in clear proactive intervention, a higher standard of care should apply. Courts could establish tiered duty-of-care standards based on the type of emotional AI platform, imposing higher standards on those serving vulnerable groups such as minors or individuals with psychological conditions. For highly interactive virtual lover AIs, operators should implement mechanisms for monitoring users’ emotional fluctuations, as sketched below. The author also suggests drawing on product liability theory, since emotional AI can be viewed as a special type of “product”: if its design poses unreasonable risks, the developer’s fault may be presumed. Courts should assess whether platforms have fulfilled their reasonable duty of care in light of their ability to foresee risks, with reference to the strict liability principles applied to highly dangerous operations.
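As a rough illustration of how the tiered duty-of-care idea could translate into platform-side safeguards, the sketch below maps product characteristics to a care tier and applies a graduated intervention. The tier names, keyword cues, and escalation steps are assumptions made for illustration; a real system would rely on trained classifiers and human review rather than a keyword list.

```python
from enum import Enum

class CareTier(Enum):
    BASELINE = 1    # general-purpose companionship chat
    ELEVATED = 2    # highly interactive "virtual lover" products
    HEIGHTENED = 3  # services reaching minors or psychologically vulnerable users

def required_care_tier(serves_minors: bool, targets_vulnerable_users: bool,
                       high_interactivity: bool) -> CareTier:
    """Map product characteristics to the tier of care proposed in the text."""
    if serves_minors or targets_vulnerable_users:
        return CareTier.HEIGHTENED
    if high_interactivity:
        return CareTier.ELEVATED
    return CareTier.BASELINE

# Illustrative distress cues only; not an adequate real-world screen.
DISTRESS_CUES = ("self-harm", "suicide", "no reason to live")

def monitor_turn(tier: CareTier, user_message: str) -> str:
    """Return the intervention a platform at this tier might be expected to take."""
    if any(cue in user_message.lower() for cue in DISTRESS_CUES):
        if tier is CareTier.HEIGHTENED:
            return "escalate_to_human_and_show_crisis_resources"
        return "show_crisis_resources"
    return "no_action"
```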

2. Optimizing the Burden of Proof on Causation

To address the “algorithmic black box” issue, the burden of proof rules can be adjusted. The author proposes a “preliminary evidence + burden of proof shift” model, where if the plaintiff provides preliminary evidence of damage and a temporal link to emotional AI use, the burden shifts to the platform to prove its algorithm design is defect-free or that it has met its duty of care. Platforms could be required to provide technical evidence to demonstrate reasonable product design and risk prevention. In complex cases, judges may benefit from expert assessments to clarify technical facts and aid in burden distribution.  
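One way a platform could preserve the kind of technical evidence contemplated here is structured, tamper-evident logging of AI interactions, which would also make the proposed burden shift workable in practice. The sketch below is a minimal illustration under assumed field names; it is not a statement of any statutory or regulatory requirement.

```python
import hashlib
import json
from datetime import datetime, timezone

def log_interaction(log: list[dict], user_id: str, model_version: str,
                    prompt: str, response: str, safety_flags: list[str]) -> dict:
    """Append a hash-chained record of one AI turn to an in-memory log.

    Chaining each record's hash to the previous one makes later tampering
    detectable, which is what would give such logs evidentiary weight when
    causation or the platform's duty of care is disputed.
    """
    prev_hash = log[-1]["record_hash"] if log else "genesis"
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,
        "model_version": model_version,
        "prompt": prompt,
        "response": response,
        "safety_flags": safety_flags,  # e.g. which content filters fired, if any
        "prev_hash": prev_hash,
    }
    record["record_hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode("utf-8")
    ).hexdigest()
    log.append(record)
    return record
```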

3. Utilizing Multiple Remedies 

The non-economic harm resulting from emotional AI infringement often makes traditional compensation for economic damage inadequate. The author believes that the remedies provided in the Civil Code, such as cessation of the infringement and elimination of danger, are applicable in emotional AI cases. When determining compensation, courts should consider the degree of the platform’s fault, its technical capabilities, and the extent of the victim’s harm. Where a risk of continuing infringement exists, courts can order platforms to upgrade their technology to prevent recurrence. Courts can also apply a “notice and delete” mechanism, requiring platforms to monitor and intervene in high-risk content. The author advocates a “notice-assessment-intervention” mechanism to guide the industry toward more standardized self-regulation.
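The proposed “notice-assessment-intervention” mechanism can be read as a three-stage workflow. The sketch below uses hypothetical risk levels, keyword cues, and responses to show how the stages fit together; it is an assumption-laden illustration, not a model drawn from any statute or existing platform.

```python
from dataclasses import dataclass
from enum import Enum

class Risk(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

@dataclass
class Notice:
    reporter_id: str
    content_id: str
    description: str

def assess(notice: Notice) -> Risk:
    """Stage 2: assessment. A real platform would combine automated
    classification with human review; the keywords here are placeholders."""
    text = notice.description.lower()
    if any(k in text for k in ("self-harm", "minor", "threat")):
        return Risk.HIGH
    if "harass" in text or "impersonat" in text:  # matches "impersonate"/"impersonation"
        return Risk.MEDIUM
    return Risk.LOW

def intervene(notice: Notice, risk: Risk) -> list[str]:
    """Stage 3: intervention, graduated by assessed risk."""
    if risk is Risk.HIGH:
        return ["suspend_content", "notify_guardian_or_authorities", "human_review"]
    if risk is Risk.MEDIUM:
        return ["restrict_content", "human_review"]
    return ["log_and_monitor"]

def handle_notice(notice: Notice) -> list[str]:
    """Stage 1 (notice intake) feeding assessment and intervention."""
    return intervene(notice, assess(notice))
```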
