Groundbreaking Court Ruling: Setting the Standard for AI Platform Responsibility in Copyright Violations Through LoRA Models

Case Background

The Hangzhou Internet Court recently adjudicated a pivotal copyright dispute concerning an AI platform enabling users to train and distribute LoRA models derived from protected Ultraman imagery. This ruling establishes critical boundaries for generative AI service providers’ legal responsibilities when facilitating user-generated content that implicates third-party intellectual property.

Technical Functionality & Alleged Infringement

The defendant’s platform offered four integrated services: text-to-image generation, image-to-image transformation, custom LoRA model training, and content creation with the trained LoRA models. The dispute centered on the platform’s handling of Ultraman-related content. Although prompts containing “Ultraman” were blocked from producing images directly, users could freely upload Ultraman artwork to train custom LoRA models. Training automatically generated cover images closely resembling Ultraman, and the finished models were published to the platform’s repository, where they were featured in homepage recommendations and in a dedicated “IP Works” section. Once trained, a model could be invoked with the same trigger keywords to produce derivatives virtually indistinguishable from the copyrighted character, and the resulting images could be shared across the platform.
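
To see why prompt-level keyword blocking alone could not contain this workflow, consider the toy Python sketch below. The blocked-term list and the LoRA trigger phrase are illustrative assumptions, not details from the case; the point is simply that a LoRA trained on Ultraman images answers to an arbitrary trigger word the filter has never seen.

```python
# Toy illustration (not the platform's actual filter): keyword blocking
# cannot catch a LoRA whose trigger word omits the blocked term.
BLOCKED_TERMS = {"ultraman"}

def prompt_allowed(prompt: str) -> bool:
    """Reject prompts that name a blocked character directly."""
    return not any(term in prompt.lower() for term in BLOCKED_TERMS)

print(prompt_allowed("ultraman fighting a kaiju"))           # False: blocked
print(prompt_allowed("usr_lora_v2, hero pose, city ruins"))  # True: passes,
# yet a LoRA trained on Ultraman artwork still renders the character.
```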

The plaintiff contended that the platform directly infringed the right of communication through information networks by making the infringing models and images available to the public. In the alternative, it argued that the platform was indirectly liable for breaching its duty of care, given its constructive knowledge of the infringement risks inherent in these features.

Court’s Analytical Framework

Direct Infringement Exclusion

The court determined that the platform qualified as a technical service provider rather than a content creator. This characterization rested on three factual findings: users independently uploaded the Ultraman training images; users initiated the LoRA training workflows; and users controlled publication of the models and their derivatives. The platform therefore held the status of an information storage service provider under Article 1197 of China’s Civil Code, precluding direct liability.

Indirect Liability: Four-Factor Duty of Care Assessment

The court established a judicial test weighing:

  1. Profit Model: The platform monetized LoRA training through premium membership tiers offering faster processing, deriving commercial benefit from the very activities that carried infringement risk and correspondingly raising its standard of care.
  2. Content Prominence: Curatorial elements—particularly the featuring of Ultraman-derived LoRA models in the “Recommendations” and “IP Works” sections—actively promoted this content, significantly enhancing the foreseeability of infringement.
  3. Infringement Obviousness: The cover images produced during LoRA training showed clear visual similarities to copyrighted Ultraman characters, making any infringement readily apparent without the need for specialized tools.
  4. Preventive Adequacy: While the platform implemented some safeguards, such as keyword blocking in prompts and post-publication takedown procedures, these measures were insufficient. The platform failed to adopt technically feasible proactive steps, like image-recognition screening during uploads, to prevent infringement before it occurred.

Critical Compliance Deficiencies

While the platform implemented several protective measures, such as restricting “Ultraman” text prompts and dynamically reviewing uploads, three significant shortcomings persisted: it did not screen training data for image similarity, it allowed infringing cover images to be generated and displayed, and it algorithmically promoted infringing models within dedicated content sections. The court emphasized that existing reverse-image-search technology could have materially reduced infringement without unreasonable expense.
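
As a rough indication of how inexpensive such screening can be, the sketch below uses perceptual hashing, a common building block of reverse-image search, to compare uploads against a registry of reference hashes. The registry contents, the distance threshold, and the use of the open-source imagehash and Pillow packages are illustrative assumptions, not details from the judgment.

```python
# Sketch of upload-time screening with perceptual hashing.
# The reference registry and threshold below are assumptions.
from PIL import Image
import imagehash

# Perceptual hashes of rights holders' registered reference images;
# in practice these would be loaded from a rights database.
REFERENCE_HASHES = [
    imagehash.hex_to_hash("fa5c1c3878f0e0c1"),  # placeholder entry
]

# Maximum Hamming distance at which two images count as visually
# similar; tuned against a labeled validation set in practice.
MAX_DISTANCE = 8

def screen_upload(path: str) -> bool:
    """Return True if the upload should be held for review."""
    upload_hash = imagehash.phash(Image.open(path))
    return any(upload_hash - ref <= MAX_DISTANCE for ref in REFERENCE_HASHES)

if screen_upload("training_image.png"):
    print("Upload flagged: similar to registered IP; routing to review.")
```

Perceptual hashes tolerate resizing, recompression, and minor edits, which is why this class of screening is widely regarded as feasible at modest cost.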

Key Legal Principles

This ruling advances three key legal principles:

  1. Platforms forfeit their “neutral tool” protection when their interfaces actively encourage infringing content.
  2. Being classified as a technical service does not provide absolute immunity, especially when commercial interests coincide with high-risk activities.
  3. The duty of care is more stringent where the copyrighted material is widely known and infringement can be prevented with readily available technology.

Operational Compliance Requirements

At the training-input stage, platforms must implement image-recognition filters to block uploads matching registered IP assets. At model deployment, manual or AI-assisted review should confirm that cover images do not bear substantial similarity to protected works. In content promotion, interfaces must avoid dedicated sections inviting IP-derived content (such as “IP Works”) unless rights-clearance protocols are incorporated. Monetization features require heightened scrutiny, particularly where they accelerate potentially infringing workflows.

Actionable Guidance for AI Providers

1. Input Control Systems

  • Implement visual-content fingerprinting to screen uploads against copyrighted material databases.
  • Require rights declarations for conspicuous third-party IP before processing.
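
A minimal sketch of such an intake gate follows. The data model and status strings are hypothetical, standing in for whatever rights-management system a platform actually operates.

```python
# Sketch of a pre-training intake gate: fingerprint-matched uploads
# require a rights declaration before a LoRA job is queued.
# All field names here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Upload:
    path: str
    matches_known_ip: bool  # result of fingerprint screening
    rights_declared: bool   # uploader attested a licence or authorization

def admit_to_training(upload: Upload) -> str:
    """Decide whether an upload may enter the LoRA training queue."""
    if upload.matches_known_ip and not upload.rights_declared:
        return "rejected: third-party IP detected, rights declaration required"
    if upload.matches_known_ip:
        return "queued: pending verification of the declared rights"
    return "queued: no known-IP match"
```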

2. Output Governance Protocols

  • Automatically flag generated content exceeding similarity thresholds through perceptual hashing.
  • Disable automated publication for models trained on protected works without human review.
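
The second point can be wired as a simple review-state gate, sketched below with assumed state names: automatic publication is available only to models that raised no similarity flags.

```python
# Sketch of a publication gate: flagged models stay unpublished until
# a human reviewer clears them. States are illustrative assumptions.
from enum import Enum, auto

class ReviewState(Enum):
    CLEAR = auto()           # no similarity flags raised
    FLAGGED = auto()         # similarity threshold exceeded
    HUMAN_APPROVED = auto()  # reviewer found no substantial similarity

def may_auto_publish(state: ReviewState) -> bool:
    # Only unflagged models skip human review.
    return state is ReviewState.CLEAR

def may_publish(state: ReviewState) -> bool:
    return state in (ReviewState.CLEAR, ReviewState.HUMAN_APPROVED)
```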

3. Interface Architecture

  • Design recommendation algorithms to deprioritize content with high infringement probability.
  • Replace suggestive category labels (“IP Works”) with neutral terminology (“Community Models”).
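
Deprioritization can be as simple as multiplying an engagement score by an infringement penalty before sorting, as in this sketch; the scoring formula and the infringement-probability field are assumptions.

```python
# Sketch of infringement-aware ranking: down-weight items whose
# estimated infringement probability is high before ordering the feed.
def rank_items(items: list[dict]) -> list[dict]:
    def score(item: dict) -> float:
        penalty = 1.0 - item["infringement_prob"]  # 1.0 = no risk
        return item["engagement"] * penalty
    return sorted(items, key=score, reverse=True)

feed = rank_items([
    {"name": "model-a", "engagement": 0.9, "infringement_prob": 0.8},
    {"name": "model-b", "engagement": 0.6, "infringement_prob": 0.1},
])
# model-b (score 0.54) now outranks model-a (score 0.18) despite
# model-a's higher raw engagement.
```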

4. Monetization Alignment

  • Suspend fast-track processing for workflows involving unverified copyrighted material.
  • Implement revenue segregation between infrastructure fees and content-generation services.
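
A sketch of the first point, with assumed field names: jobs touching unverified third-party material lose fast-track eligibility regardless of membership tier.

```python
# Sketch of monetization alignment: premium fast-track processing is
# withheld from jobs involving unverified third-party material, which
# are held until verification. Field names are assumptions.
def assign_queue(job: dict, is_premium_user: bool) -> str:
    unverified_ip = job.get("ip_match", False) and not job.get("rights_verified", False)
    if unverified_ip:
        return "held: rights verification required before processing"
    return "fast-track" if is_premium_user else "standard"
```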

Sector-Wide Implications

This landmark ruling marks a shift toward stricter judicial scrutiny of “willful blindness” in generative AI. Courts increasingly expect preventive measures to advance in step with technological capabilities, particularly for platforms that earn revenue from AI-created content. As visual-similarity detection improves through multimodal AI, compliance frameworks must keep pace with state-of-the-art protection technologies. The decision underscores that passive approaches, such as keyword filters and takedown procedures, no longer satisfy duty-of-care obligations when active prevention is both technologically achievable and commercially reasonable.

For an ecosystem striving to balance innovation with rights protection, this judgment offers essential guidance: the transformative promise of generative AI remains legally sustainable only when service providers embed rights-respecting designs into their platforms from the outset.
