1. Three approaches to AI

Europe — AI Act (2024–2027)
- Strict risk-based scale (ban → heavy control → transparency → minimal risk).
- High fines of up to €35 million or 7% of global annual turnover, whichever is higher.
- Applies to GPAI (ChatGPT and analogues) with detailed requirements for data, evaluations, and documentation.
- Phased introduction: some bans already in force from February 2025, full regime by 2027.
USA — flexibility and standards
- No single federal law; reliance on voluntary frameworks (NIST AI RMF) and after-the-fact oversight (FTC, courts).
- Focus on “innovation and lowering barriers” (America’s AI Action Plan).
- Fast product cycles without costly entry procedures.
China — fast licenses for its own
- Mandatory registration and security assessment for public-facing AI services.
- Strict content control and censorship, but quick approval for “domestic” developers.
- Support for national champions in infrastructure and computing.
2. Why the EU takes two steps back

- Expensive entry and startup exodus. Registration, audits, and documentation before market entry — a deadly mix for small AI businesses. The US allows launching first and regulating later.
- Concentration among big players. Only corporations (often non-European) can bear the high compliance costs.
- Infrastructure dependence. Clouds, chips, LLM platforms — mostly American. The EU does not yet offer comparable alternatives at scale.
- Speed gap. AI evolves in months, while EU norms are introduced over years. China and the US test and release faster.
3. Counterarguments — and their weaknesses

- Safety and trust matter more. But if the “safe” services are imported, the dependence will persist.
- The Code of Practice is only voluntary. In practice, it functions as a standard for access to government contracts and big deals, and violations can still trigger AI Act penalties.
- The US is also tightening rules. True, but its approach relies on after-the-fact oversight and standards, not preemptive bans.
4. Recommendations for the EU

- Sandboxes for startups. Accelerated regimes with “guardrails” instead of full compliance at the start.
- Safe harbor for small models and open-source. Minimal bureaucracy for smaller developers.
- Frequent guidance on concrete cases. The AI Office should issue clarifying bulletins monthly, not yearly.
- Compute sovereignty. Public and private investments in GPU clusters accessible to SMEs, not just corporations.
- Gradual compliance. Full requirements only once a project reaches significant scale and risk level.
The EU now acts like a driver who, fearing an accident, sticks to 30 km/h while competitors disappear over the horizon. If Europe does not rebalance values against speed, it risks remaining the main consumer of foreign technologies while still dreaming of becoming their main exporter.