The EU Commission recently adopted a new digital package. The digital package is intended to help companies in the EU – from start-ups to industrial enterprises – reduce compliance and administrative burdens so they can focus more on innovation and growth. At the heart of the package is the Omnibus Regulation (‘Digital Omnibus’), which is primarily intended to simplify rules for artificial intelligence, cybersecurity and data. Below, we provide an overview of the most relevant rules and changes.
In recent years, numerous EU digital regulations have gradually come into force as part of the Digital Decade, and some already apply (an overview of the status of these laws can be found on our Digital Decade landing page). The EU now wants to move into a phase of consolidation and simplification – primarily in response to pressure from industry, which faces increasingly significant compliance costs and, in some cases, overlapping obligations. The package is intended to address precisely this issue by better harmonising existing regulations, reducing duplicate requirements and making application and implementation more practical for businesses.
The Digital Omnibus, which primarily aims to consolidate regulations on artificial intelligence, cybersecurity and data, is complemented by the Data Union Strategy, which aims to facilitate access to high-quality data for AI, and by the European Business Wallets, which provide companies with a single digital identity.
Below, we would like to provide an initial overview of the changes in the Digital Omnibus:
What are the key changes to EU data law?
With the Digital Omnibus, the EU is pursuing a consolidated further development of data law. The aim is to simplify regulations, reduce administrative burdens and create a clearer framework for data-driven innovation. The focus is on amendments to the Data Act and selective changes to the GDPR – all data rules are to be consolidated in these two main pieces of legislation.
The following adjustments are planned for the Data Act:
- Consolidation of previous legal acts: Several previously coexisting legal acts, including the Open Data Directive, the Free Flow of Non-Personal Data Regulation and the Data Governance Act, are to be integrated into the Data Act in order to create a uniform set of rules for non-personal data.
- Data intermediation services: Mandatory registration and the EU label for data intermediaries are to be abolished, significantly streamlining the regulatory framework as a whole. With lower barriers to market entry, new intermediation models can be brought to market more quickly and with less red tape.
- Data altruism: The legal framework for public interest data sharing will be simplified to reduce the complexity of existing structures and requirements. Organisations should be able to make data available more easily for research, health or sustainability purposes without having to comply with extensive administrative processes.
- Public sector data sets: Existing requirements for public data sets are to be consolidated and harmonised to eliminate existing fragmentation. It should be easier for companies to understand which public data can be used under which conditions in order to strengthen innovation in the internal market.
- Business-to-government access (B2G): Access by government agencies to company data should be clearly limited to genuine emergencies and crises such as natural disasters or pandemics. Outside of such situations, companies should not be subject to additional or unclear disclosure requirements.
- Relief through bureaucracy reduction and harmonisation: The regulatory framework in the areas of data, data protection, cybersecurity and AI should be streamlined and harmonised by centralising reporting systems and reducing information requirements.
The following changes are planned for the scope of cloud switching obligations:
The Omnibus proposal specifically realigns the scope of the Data Act's cloud switching rules. The basic principle of easier switchability between cloud, edge and data processing services remains in place, but is made more precise and put on a more proportionate, risk-based footing. The most important adjustments are summarised below:
- Restriction of the scope of application for SMEs and micro-enterprises: The switching obligations shall only apply if they are technically feasible and economically reasonable for these providers. This is intended to relieve smaller market participants of disproportionate regulatory requirements.
- Exemption for customer-specific data processing services: Individually developed data processing solutions that are provided exclusively for a single customer will no longer be subject to the full cloud switching obligations. The reason for this is that interoperability and standardised data portability are often neither technically feasible nor practicable in such tailor-made architectures.
- Emphasis on the technical and economic feasibility of switching: The Omnibus clarifies that the requirements should only apply if they can be met at reasonable cost. This clarification reduces existing legal uncertainties and prevents providers from finding themselves in situations where compliance would be virtually impossible or disproportionately expensive.
- Strengthening existing industry standards: The reform makes it clear that service providers do not have to develop new proprietary interfaces. The use of industry-standard data formats and protocols should be sufficient to meet the requirements. This reduces development effort and integration costs, especially for smaller providers.
- More user-friendly switching framework: The EU remains committed to reducing switching costs and giving users real opportunities to switch providers. At the same time, the reform aims to ensure that specialised or smaller providers are not squeezed out by excessive compliance burdens.
Overall, the aim is to create a more differentiated, proportionate and risk-based switching framework. The cloud switching regime of the Data Act will remain functional, but will focus more clearly on standardisable services and on providers for whom the implementation of the obligations is realistic and economically viable.
The following adjustments are planned for the GDPR and the rules on cookies:
- New approach to cookie banners and consent management: Until now, regulation has been based on a two-tier structure: access to end devices fell under the ePrivacy Directive, while the subsequent processing of personal data was subject to the GDPR. The Commission's new proposal ends this dual system. In future, cookies and similar tracking technologies will be fully integrated into the GDPR, resulting in a harmonised legal framework with common principles, enforcement mechanisms and sanctions. The Commission recognises an existing problem: consent management often works poorly in practice. Users are confronted with complex pop-ups, and many reflexively click ‘Accept all’ to continue browsing. This hardly represents the informed consent originally intended by the legislator. The aim of the reform is therefore to make consent a functional and credible legal basis again. Among other things, the proposal stipulates that cookie banners must offer a genuine ‘one-click option’ to reject all non-essential cookies – visible, equivalent and as easily accessible as the ‘Accept all’ option. A rejection must be valid for at least six months.
- Central system for data protection preferences: The planned rules on technical preference signals are even more far-reaching: users should be able to set data protection decisions once (e.g. in their browser or operating system). Websites and apps must automatically respect these machine-readable signals in future. Companies must therefore design their consent mechanisms in such a way that these standards can be processed.
- Differentiation between high-risk tracking and low-risk uses: The proposal introduces a ‘whitelist’ of certain privacy-friendly types of use, for example for statistical analyses or aggregated audience measurements. If the specified conditions are met, companies may process device data for narrowly defined purposes without consent and without cookie banners. For companies that primarily perform performance analyses or service optimisations, this means fewer banners, less compliance overhead and a more user-friendly experience.
- Stricter enforcement, but more legal certainty: Through integration into the GDPR, violations of rules on end device access will in future be subject to the existing framework of sanctions. At the same time, the reform aims to increase legal certainty by reducing fragmentation and clarifying protection standards.
- Clarifications regarding the definition of personal data: The proposal implements current ECJ case law. Data is not considered personal to a recipient if the recipient has no realistic possibility of re-identification. However, the original controller who pseudonymised the data retains all obligations under the GDPR.
- Technical guidelines via implementing acts: The Commission is given the power to lay down technical criteria and methods for pseudonymisation and the assessment of re-identification risks. This is intended to provide companies with clearer assessment criteria and practical guidance in future.
- Changes to the GDPR: The ‘Digital Omnibus’ does not change the basic structure of the GDPR, but addresses specifically identified problem areas:
- Innovation and AI: The proposal clarifies that the development and operation of AI systems and models can be based on the legal basis of ‘legitimate interest’ as long as the processing meets all the requirements of the GDPR and is not prohibited by other EU or national regulations or subject to consent. If special categories of personal data appear only residually in training or test data sets and are not the subject of the collection, a narrow exception to the usual processing prohibition is introduced. Controllers must implement appropriate safeguards throughout the AI lifecycle, remove such data as soon as it is identified, and ensure that it is not used to derive results or made available to third parties. Data subjects retain an unrestricted right to object to the processing of their personal data for these AI purposes.
- Simplification of everyday compliance obligations: Information obligations do not apply if there are legitimate reasons to believe that data subjects already have the information and the processing does not pose a high risk. This benefits smaller companies with limited data usage.
In addition, the right of access is protected against misuse: controllers can refuse manifestly unfounded requests or charge a reasonable fee, with a lower burden of proof than today for showing that a request is excessive.
- Data protection impact assessments are harmonised through EU-wide uniform lists, both for types of processing that always require a DPIA and for those that do not. This is supplemented by a uniform methodology and template.
- Notifications of data breaches to supervisory authorities will in future be aligned with the ‘high risk’ threshold – the same threshold at which notification of the data subjects is already required. Notification will be made centrally via a single point of contact linked to other digital and cybersecurity-related regulations. For companies, this means fewer reports with little benefit, a more predictable risk assessment and more efficient communication with supervisory authorities.
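To illustrate the preference-signal mechanism described above, here is a minimal sketch in Python. The proposal itself does not prescribe a specific technical standard; the Global Privacy Control header (`Sec-GPC: 1`) is used here purely as a hypothetical example of a machine-readable opt-out signal that a website would have to respect before falling back on banner-based consent:

```python
# Minimal sketch: honouring a machine-readable privacy preference signal.
# Assumption: the signal arrives as the "Sec-GPC: 1" request header
# (Global Privacy Control); the Omnibus proposal does not name a standard.

def non_essential_tracking_allowed(headers: dict, stored_consent: bool) -> bool:
    """Return True only if non-essential tracking may run for this request."""
    # An automated opt-out signal overrides any banner-based consent flow.
    if headers.get("Sec-GPC") == "1":
        return False
    # Otherwise fall back to the user's recorded banner choice.
    return stored_consent
```

Under this logic, a site would suppress non-essential cookies whenever the signal is set, regardless of any earlier ‘Accept all’ click – which is the behaviour the proposal appears to require of consent mechanisms.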
What does this mean for companies?
For many businesses, the immediate headline will be the prospect of fewer and simpler cookie banners. But the real change runs deeper: all device‑based data access is drawn into a single GDPR‑based regime, augmented with central preference signals, a privacy‑friendly whitelist for low‑risk uses, and tougher expectations around consent design. At the same time, long‑running ambiguities around pseudonymized data, AI training, access requests, information duties, DPIAs and breach notifications are addressed through targeted legislative clarifications and mechanisms for future technical guidance. In practical terms, businesses that invest early in mapping their cookie and tracking practices to the new whitelist, in re‑engineering consent flows around “one‑click” choice and central signals, and in aligning AI and analytics projects with the clarified legitimate‑interest and pseudonymization framework will be best placed to benefit from the promised simplification – and to avoid becoming the test cases for the strengthened enforcement system that comes with it.
What changes will the Digital Omnibus bring to cybersecurity law?
Simplified reporting of cybersecurity incidents: Under current law, companies must comply with various legal reporting obligations under different legal acts in the event of a cybersecurity incident (e.g. Art. 33 GDPR, Art. 23 NIS-2 Directive, Art. 14 CRA and many other sector-specific reporting obligations such as Art. 73 AI Act for high-risk AI systems, Art. 19 DORA in the financial sector, etc.). Each of these reporting obligations is subject to different content requirements and different reporting deadlines and is addressed to different authorities. The EU Commission's proposal aims to simplify reporting under cybersecurity law and consolidate it in a single point of contact at the European Agency for Cybersecurity (ENISA). A central reporting portal is to be set up at ENISA, where affected companies can submit their mandatory reports on cybersecurity incidents in consolidated form. These reports will then be processed centrally by ENISA and forwarded to the relevant authorities. The exchange of reported information between authorities is also to be facilitated. The reporting platform for vulnerabilities established under Article 16 CRA is to be used to implement the changes. The EU Commission expects that this will reduce the annual costs associated with reporting cybersecurity incidents by up to 50%.
The Commission's draft specifically addresses, inter alia, existing reporting obligations under NIS-2, the GDPR, the eIDAS Regulation and DORA. The substantive requirements for the individual reporting obligations and the respective competent supervisory authorities remain largely unaffected by the proposed amendments. However, the proposal also contains some substantive adjustments: for example, the deadline for reporting personal data breaches under Art. 33 GDPR is to be extended to 96 hours and will in future only apply to breaches posing a high risk to data subjects.
What are the main changes in the area of artificial intelligence and the AI Act?
The AI Act came into force in August 2024 and is being implemented in stages: some provisions, such as certain prohibitions, requirements for AI competence and rules for general-purpose AI models, are already in force. The remaining provisions are to become binding from 2 August 2026. The European Commission identified several challenges during the 2025 stakeholder consultations and is now proposing the following adjustments:
- New timetable for high-risk AI systems: The application of the rules will be linked to the availability of standards and support tools. Once the Commission has confirmed that these are sufficiently available, the rules will enter into force after a transition period:
  - Annex III AI systems: 6 months after the Commission's decision or by 2 December 2027 at the latest.
  - Annex I systems: 12 months after the Commission's decision or by 2 August 2028 at the latest.
- AI competence: The obligation for companies to ensure an adequate level of AI competence is removed. Instead, the Commission and Member States should encourage providers and users to build up sufficient AI competence.
- Processing of special categories of personal data: Providers and users of AI systems may process special categories of personal data for bias detection and correction, provided that appropriate safeguards are in place.
- Registration of high-risk AI systems: Systems used in high-risk areas for tasks that are not themselves considered high-risk no longer need to be registered.
- Expansion of the use of AI regulatory sandboxes and real-world testing: From 2028, an EU-wide regulatory sandbox is to be established, among other things.
- Abolition of the requirement for a harmonised post-market monitoring plan.
- Extension of simplified compliance rules to small and medium-sized enterprises (SMEs): For example, simplified rules for the technical documentation required for AI systems are to apply to SMEs.
- Centralisation of supervision of AI systems based on general-purpose models: Supervision will be bundled at the AI Office to reduce governance fragmentation. AI in very large online platforms and search engines will also be supervised at EU level.
- Clarification of interaction with other EU legislation: Procedures will be simplified to ensure the timely availability of conformity assessment bodies.