01/21/2026

Physical AI and Humanoid Robots as a New Regulatory Reality

This year’s Consumer Electronics Show (CES 2026) in Las Vegas demonstrated impressively that artificial intelligence has reached a new stage of development and is increasingly moving beyond the purely digital realm. Current developments in robotics show that so-called physical AI systems no longer merely perceive their environment but are capable of autonomous reasoning, decision-making, and physical action. Humanoid robots are evolving from experimental niche applications into scalable platforms that can be deployed in diverse environments and adapt dynamically to new situations. This shift from task-specific automation toward adaptive, learning systems is significantly accelerating commercial deployment in industry, logistics, services, and other work environments. For companies, however, this development entails not only technological opportunities but also a new level of regulatory complexity: wherever AI acts physically, encounters humans, and responds autonomously, manufacturers, integrators, and operators face a multifaceted regulatory framework that should be addressed at an early stage.

Regulatory Classification

From a legal perspective, there is no standalone category of “humanoid robots”. Instead, a functional assessment across the lifecycle is decisive: development, placing on the market, operation, and updates. Depending on the specific design and use, regulations from product safety law, AI regulation, data protection, cybersecurity, and liability law may apply in parallel.

Product Safety and Market Access

Humanoid robots are generally classified as machinery and are therefore subject to conformity assessment requirements and CE marking. With the new EU Machinery Regulation (EU) 2023/1230, requirements are increasing significantly, particularly for software-based control systems and safety-related AI functions. The regulation will apply from 14 January 2027 (full application date). Key issues include safe human–robot interaction, emergency mechanisms, and the handling of autonomous and learning behavior.

AI Regulation under the EU AI Act

The EU AI Act (EU) 2024/1689 applies to AI systems that are placed on the market or used in the EU. It does not focus on the external form of a robot, but rather on the specific context of use. Humanoid robots may be classified as high-risk AI systems, for example when used in the workplace, for biometric identification, or in safety-critical areas. In such cases, obligations include risk management, technical documentation, logging, human oversight, and post-market monitoring. For low-risk applications, only limited transparency and information obligations apply.

Data Protection and Use in Operations

Due to their sensor systems, humanoid robots regularly process personal data, often including image and audio data; in effect, they operate as mobile “data scrapers.” Deployment in publicly accessible or operational spaces may constitute high-risk processing and require a data protection impact assessment (Art. 35 GDPR). Use in workplaces (e.g., as “co-workers,” in warehouses, or at reception desks) is particularly sensitive, as it raises issues of employee monitoring and of performance and conduct control, and it may trigger the co-determination rights of the works council.

Access to and Use of Data

Once a humanoid or robotics stack generates device or usage data—particularly in the IoT context of connected products and related services—the substantive provisions of the Data Act (EU) 2023/2854 may apply. The Data Act contains rules on interoperability, data portability, and contractual arrangements between providers, customers, and users of products that contain or collect data. It has been in force since 11 January 2024 and largely applies from 12 September 2025.

Cybersecurity and Updates

As connected products with digital elements, humanoid robots typically fall under the Cyber Resilience Act (EU) 2024/2847. The Cyber Resilience Act has been in force since 10 December 2024, with most obligations applying after transitional periods. Manufacturers must establish secure update and patch processes, vulnerability management, and a comprehensive security concept. Changes to AI models or control software may be relevant from both a liability and safety perspective.

Liability

With the new EU Product Liability Directive (EU) 2024/2853, liability is explicitly extended to software and AI systems. Defective or insufficiently controlled AI behavior may lead to liability across the entire supply chain. Documentation, logging, and clearly defined responsibilities therefore become significantly more important. The new Product Liability Directive must be implemented at national level by 9 December 2026.

Key Takeaways at a Glance

  • No special category: Humanoid robots are classified legally based on function, not appearance.
  • CE marking remains central: Product safety and conformity assessment are the regulatory entry point.
  • Use case is decisive: Whether the EU AI Act applies depends on the context of use, not on the robot type.
  • Data protection is critical: Camera and audio functions make privacy by design mandatory.
  • Cybersecurity becomes mandatory: Update obligations and connectivity significantly increase regulatory requirements.
  • Liability risks increase: AI-specific product liability requires robust documentation and governance.
