EU lawmakers move to delay high-risk AI Act rules until December 2027
European Parliament committees backed a proposal on March 26, 2026, to push the EU AI Act’s high-risk system obligations back to December 2, 2027, giving companies more runway before the bloc’s strictest requirements bite. The move is not final, but it marks a meaningful shift in the implementation timetable for AI systems used in sectors such as employment, credit, education, health care and law enforcement.
March 26 vote resets the EU AI Act timetable
The joint session of the European Parliament’s internal market and civil liberties committees approved a report proposing amendments to the AI Act’s application dates. If adopted through the full legislative process, the change would delay the rules for many high-risk systems by about 16 months from the original August 2026 schedule. The proposal still needs a plenary vote in Parliament and negotiations with the Council before it can become law.
For companies building compliance programs now, the significance is immediate: a later deadline could reduce near-term pressure on testing, documentation, risk-management and conformity-assessment workstreams that were being organized around the 2026 start date.
Why the delay matters for regulated AI systems
The high-risk category is where the AI Act becomes operationally demanding. It covers systems used in areas where mistakes can have direct consequences for people’s livelihoods or rights, including hiring, access to essential services, medical decision-making and critical infrastructure. Pushing that regime into late 2027 would give vendors and deployers more time to map model behavior, tighten governance, and align product release schedules with the EU’s compliance requirements.
That extra time could also affect procurement. Buyers in heavily regulated sectors often wait for legal certainty before signing contracts or expanding deployments, especially when model outputs influence eligibility, ranking or approval decisions. A longer runway may let some pilots continue while firms avoid prematurely freezing product designs around standards that are still being finalized.
Compliance teams now have a wider window, not a reprieve
The proposal does not remove the AI Act’s core obligations, and it does not erase the broader regulatory burden facing AI developers in Europe. Providers still have to plan for transparency duties, risk controls and other obligations already moving through the implementation pipeline. The practical effect of the committee vote is to shift the most expensive part of the regime later, not to eliminate it.
That matters because the EU remains the most detailed large-market regulator of AI, and its timeline shapes product planning well beyond Europe. Companies selling into multiple jurisdictions often build one compliance architecture and then adapt it to local rules. A later European deadline may give those teams room to sequence their work, but it also extends a period of uncertainty over when the toughest obligations will actually begin.
Source: Proskauer
Date: 2026-03-26T00:00:00Z