When the Law Blinks, Tech Has Already Moved On

The EU’s Artificial Intelligence Act officially came into force in August 2024, the world’s first comprehensive attempt to put guardrails around the technology. But its main obligations roll out in phases that stretch into 2026 and beyond.

Consider what this means in “AI time”: by 2026, today’s LLMs may look quaint, and the frontier might be dominated by agent workflows, synthetic data engines, or something no white paper has anticipated yet. The models that lawmakers studied during hearings in 2023 will be archaeological artifacts by the time the rules they inspired take full effect.

Across the Atlantic, the US released its AI Action Plan in 2025. The tone is markedly different: less precaution, more acceleration. Federal policymakers favor innovation-first policies, and Congress even attempted to freeze state-level AI rules for ten years to avoid what supporters called “a patchwork of conflicting requirements that would stifle American competitiveness”. The proposed moratorium was stripped from the bill, so in practice companies remain free to ship, experiment and iterate, while both federal and state legal frameworks inch forward in the background.

The contrast is stark. Europe spent years crafting detailed technical requirements for “high-risk AI systems”. America bet that speed trumps specificity, but couldn’t even get its own lawmakers on board. Neither approach has solved the core problem: how do you regulate something that evolves faster than the regulation itself?

Hallucinations one quarter, jailbreaks the next, multi-agent workflows after that. It’s not that regulators don’t care; it’s that the ground keeps shifting under their feet. By the time they identify a problem, study it, consult stakeholders, draft rules and shepherd them through the legislative process, the technology has already moved on. The result is a strange kind of time warp, where the rules are always fighting the last war. And the bigger the delay, the more room for gray areas, where nobody is quite sure what “compliance” even means.

The reality is that we’re conducting a live experiment in governing an emergent technology. Every major AI deployment, from ChatGPT to Midjourney to the autonomous systems appearing in factories and warehouses, teaches us something new about the relationship between innovation and oversight.

We are learning that when the innovation cycle is measured in weeks rather than years, the old model of “regulate first, innovate second” doesn’t work. But neither does the “move fast and break things” approach, especially when the things being broken are people’s privacy, livelihoods, and democratic processes.

The answer probably lies somewhere in between – a more adaptive, responsive regulatory framework that can evolve alongside the technology it is meant to govern. That might mean shorter regulatory cycles, more frequent updates to existing rules, and closer collaboration between technologists and policymakers.

It definitely calls for accepting that we’ll never have perfect foresight about where AI is headed. The best we can do is build systems – both technological and regulatory – that are resilient enough to adapt when the unexpected happens.

Maryia Puhachova
