Germany’s proposed KI-MIG legislation names the Federal Network Agency as the central coordinator; with stringent high-risk AI obligations due to take effect in the coming months, analysts are urging businesses to build internal classification, routing, and vendor-governance frameworks.
The German Federal Cabinet has given its approval to a draft law designed to implement the EU’s AI Act, appointing the Federal Network Agency (Bundesnetzagentur) as the nation’s primary AI supervisory body.
Through the proposed AI Market Surveillance and Innovation Promotion Act (KI-MIG), Germany aims to establish its own national framework for regulating the development and deployment of AI systems. This draft legislation will now proceed to the Bundestag (the lower house) and Bundesrat (the upper house) for parliamentary endorsement.
Federal Digital Minister Karsten Wildberger stated in a press release, “This law enables us to implement European mandates in a highly innovation-friendly manner, establishing streamlined AI supervision that focuses keenly on the economy’s requirements.”
A Model of Distributed Oversight
According to the draft legislation, the Federal Network Agency will function as the central coordinating entity, market surveillance authority, and notifying body. This Bonn-based agency is already responsible for coordinating Germany’s implementation of the EU Digital Services Act and overseeing platforms like Facebook, Instagram, YouTube, TikTok, and X.
The draft law further delegates AI supervision to existing regulatory bodies, including the Federal Cartel Office, the Federal Financial Supervisory Authority (BaFin), and data protection authorities at both federal and state levels, the statement elaborated.
Sanchit Vir Gogia, chief analyst at Greyhound Research, commented, “The landscape of supervision has evolved. It’s no longer practical to envision a singular regulatory relationship for AI. Germany has opted to centralize coordination within the Federal Network Agency, which provides the system with a focal point. However, it has not consolidated enforcement power into one entity.”
Gogia noted that this decentralized approach introduces complexity for businesses. A scoring model, whether used in HR, for credit, or integrated into a regulated device, will not follow a uniform supervisory pathway. “This necessitates that enterprises develop an internal capability for classification and routing,” he explained.
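The classification-and-routing capability Gogia describes can be pictured as a simple lookup: tag each AI system with its application domain, derive a coarse risk tier, and map it to the regulator most likely to supervise it. The sketch below is purely illustrative; the risk tiers loosely follow the EU AI Act's categories, but the domain names and the regulator-routing table are hypothetical assumptions, not taken from the KI-MIG draft, and a real mapping would have to reflect the final allocation of competences.

```python
from dataclasses import dataclass

# Hypothetical routing table: application domain -> likely German supervisor.
# These assignments are illustrative assumptions, not the KI-MIG allocation.
ROUTING = {
    "credit_scoring": "BaFin",
    "hr_screening": "Data protection authority (federal/state)",
    "regulated_device": "Sector market surveillance authority",
}
DEFAULT_SUPERVISOR = "Bundesnetzagentur (central coordinator)"

# Simplified: domains treated as high-risk for this sketch.
HIGH_RISK_DOMAINS = {"credit_scoring", "hr_screening", "regulated_device"}

@dataclass
class AISystem:
    name: str
    domain: str           # e.g. "credit_scoring"
    vendor_supplied: bool

def classify(system: AISystem) -> str:
    """Assign a coarse risk tier (deliberately simplified)."""
    return "high-risk" if system.domain in HIGH_RISK_DOMAINS else "minimal-risk"

def route(system: AISystem) -> str:
    """Map a system to the supervisory pathway it would plausibly face."""
    return ROUTING.get(system.domain, DEFAULT_SUPERVISOR)

model = AISystem("loan-scorer-v2", "credit_scoring", vendor_supplied=True)
print(classify(model), "->", route(model))  # high-risk -> BaFin
```

The point of the sketch is Gogia's observation: the same scoring model lands on different supervisory pathways depending on where it is deployed, so the enterprise, not the regulator, has to maintain the mapping.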
Germany’s strategy mirrors a broader trend across the EU. France is progressing towards coordinated decentralization, while Spain has focused on sandbox experimentation. Italy, meanwhile, passed a national AI law that maintains existing sector-specific supervision channels. Gogia observed, “Structurally, the combination of central coordination and sector-specific execution isn’t unique to Germany; it’s becoming the prevailing operational model.”
Industry Advocates for EU-Level Reforms
Industry associations have expressed approval for Germany’s implementation strategy but simultaneously called for fundamental revisions to the EU AI Act itself.
Sarah Bäumchen, managing director of the German Electrical and Digital Industries Association (ZVEI), told Computerworld, “We commend the proposed framework, which grants the Federal Network Agency a key coordinating function while leveraging the specialized knowledge developed by sectoral market surveillance authorities. However, as the AI Act is a European regulation, Germany’s pragmatic implementation law cannot rectify its significant deficiencies.”
Bäumchen highlighted the August 2026 deadline as a major concern for businesses. “Crucial components, such as harmonized European standards that define how companies can meet high-risk requirements, are still unavailable. Therefore, a 24-month extension of the implementation deadline is essential to prevent companies from postponing or even abandoning the adoption of AI functionalities.”
ZVEI advocates for the complete exclusion of industrial AI from the Act. Bäumchen asserted that “the necessary safeguards for the secure application of AI in industrial contexts are already established,” citing the Machinery Regulation and the software provisions of medical device regulation.
She further noted that the AI Act introduces legal ambiguities by failing to align with existing product safety laws, the Cyber Resilience Act, and the Data Act. Bäumchen added that neither AI regulatory sandboxes nor the AI Service Desk can offer the “large-scale legal certainty” required to keep compliance costs at an acceptable level.
Key Compliance Priorities for Enterprises
Under the EU AI Act, businesses are obligated to evaluate the risk level of their AI systems and implement corresponding transparency and security protocols. The regulation prohibits AI applications that score individuals based on social behavior and bans emotion recognition in workplaces and educational settings. Companies involved in developing or utilizing high-risk AI systems must adhere to requirements concerning transparency, data governance, documentation, robustness, and cybersecurity once these obligations take effect within the next six months.
For companies operating in Germany, the immediate imperative is to establish what Gogia terms “a functional compliance operating system” before the August 2026 deadline.
Gogia stated, “Most enterprises still lack a comprehensive inventory of AI systems, encompassing internal developments, vendor-supplied features, and informal deployments across various business units.” He identified vendor governance as a critical pressure point, emphasizing that businesses must ensure suppliers can furnish technical documentation and proof of conformity assessment.
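The inventory and vendor-governance gap Gogia points to can be made concrete with a minimal record per AI system. The sketch below is an assumption about what such a register might track, with hypothetical field names; it simply flags vendor-supplied systems for which technical documentation or proof of conformity assessment has not yet been obtained.

```python
from dataclasses import dataclass

# Hypothetical inventory record: field names are illustrative, not
# prescribed by the AI Act or the KI-MIG draft.
@dataclass
class InventoryEntry:
    system: str
    business_unit: str
    source: str                     # "internal", "vendor", or "shadow" (informal deployment)
    has_technical_docs: bool = False
    has_conformity_proof: bool = False

def vendor_gaps(entries: list[InventoryEntry]) -> list[str]:
    """List vendor-supplied systems missing the documentation an
    enterprise would need to demonstrate conformity."""
    return [
        e.system
        for e in entries
        if e.source == "vendor"
        and not (e.has_technical_docs and e.has_conformity_proof)
    ]

inventory = [
    InventoryEntry("cv-screener", "HR", "vendor", has_technical_docs=True),
    InventoryEntry("churn-model", "Marketing", "internal"),
]
print(vendor_gaps(inventory))  # ['cv-screener']
```

Even a register this simple surfaces the pressure point Gogia names: the HR screening tool is vendor-supplied and cannot yet evidence conformity, so the supplier, not the business unit, holds the missing compliance artifact.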
Gogia predicted that financial services would face intense scrutiny regarding credit scoring and automated underwriting processes, while employment systems are likely to prompt complaint-driven enforcement due to their direct impact on individuals. Germany’s implementation incorporates a central pathway for complaint intake, meaning “enforcement is not solely dependent on regulatory initiative; it can also be triggered externally.”
Germany missed the EU’s August 2, 2025, deadline for establishing national supervisory frameworks due to unforeseen early federal elections. The Federal Network Agency, however, launched an AI Service Desk in July 2025 and issued AI literacy guidelines in June 2025. The ministry’s statement indicated that over 1,000 proposed changes were considered during the drafting process.