By uniting strategy, promotion, and regulation in a single law, South Korea has given itself a powerful instrument to shape AI-but its blunt regulatory mandates threaten to drag down the very strengths that make the act ambitious.
Contents
Key Takeaways
Introduction
Overview of the Act
Chapter 1: General Provisions
Chapter 2: Governance for Sound AI Development and Trust
Chapter 3: Policies for AI Development and Industry Promotion
Chapter 4: AI Ethics and Trustworthiness
Chapter 5: Supplementary Provisions
Chapter 6: Penalties
Conclusion
Appendix: AI Act Summary
Endnotes
South Korea's National Assembly passed the AI Framework Act (인공지능 발전과 신뢰 기반 조성 등에 관한 기본법, abbreviated 인공지능기본법) in December 2024, and in doing so made South Korea the first country to fold three levers of AI policy into a single statute.[1] The act is at once a strategy to coordinate government direction, an industrial policy to promote artificial intelligence (AI) development and adoption, and a regulatory framework to manage risks.
In any country, these three AI policy levers-strategy, promotion, and regulation-are interdependent: flaws in one will inevitably weaken the others. But by tying them together in one law, South Korea has given a single instrument the weight of the whole system. That makes the upside greater if the AI Framework Act (hereafter, "the Act") succeeds, but it also raises the cost of mistakes.
The Act gets the first two levers broadly right, laying the groundwork for a national strategy and treating AI as a strategic industry, but falters in its third, where blunt regulatory mandates risk dragging down the strengths of the rest.
It succeeds in building a coherent strategy, correctly treating AI as a domain that demands centralized vision and coordinated investment. It also advances industrial policy in the right spirit by backing data, clusters, and adoption, though its bias toward small and medium-sized enterprises (SMEs) risks stifling the scale needed for global competitiveness. But in regulation, the Act falls dangerously short: it applies broad definitions that sweep in ordinary tools, imposes blunt obligations such as labeling and compute thresholds, and burdens firms with process-heavy reporting rather than performance-based oversight.
The law is set to take effect in January 2026, and South Korea's Ministry of Science and ICT (MSIT) is working on an Enforcement Decree that details how the Act should apply in practice. On September 8, MSIT issued a draft Enforcement Decree (AI기본법 하위법령) that is expected to be finalized and issued by the end of 2025.[2]
Given the Act's single-instrument design, South Korea cannot carry forward what works without also locking in what does not unless the defects are fixed now. To achieve its goals of safeguarding human rights and dignity, driving innovation that improves quality of life, and strengthening national competitiveness, the law will need targeted adjustments at two levels. The National Assembly should refine the statute to fix structural flaws, such as broad definitions, rigid research and development (R&D) mandates, and blunt regulatory triggers, that risk misdirecting oversight and constraining innovation. At the same time, MSIT must ensure that the final Enforcement Decrees translate the law into balanced, practical rules that support South Korea's AI ecosystem without imposing undue burdens or distorting competition.
The National Assembly should make the following amendments to the AI Framework Act:
▪ Amend Article 2 (Definitions) of the AI Framework Act to narrow the definition of "AI system" so that later uses of the term in the Act apply only to systems that create novel governance challenges. A more precise formulation would be: "artificial intelligence system" (AI system) means a system that, based on parameters unknown to the provider or user, infers how to achieve a given set of objectives using machine learning and produces system-generated outputs such as content (generative AI systems), predictions, recommendations, and decisions, influencing the real or virtual environments with which it interacts.
▪ Amend Articles 7-9 of the AI Framework Act, which cover the National AI Strategy Committee and its functions. Preserve the committee's authority to set the national AI strategy through the AI Master Plan and to allocate resources across government AI programs but remove its power to dictate regulatory changes. Regulatory design and enforcement should instead remain with sectoral ministries that have the necessary expertise. This would keep the benefits of strategic unity at the top while ensuring that rules for medical AI, financial AI, and autonomous vehicles are crafted by the agencies best equipped to manage their specific risks.
▪ Amend Article 13 (Support for AI Technology Development and Safe Use) to remove prescriptive R&D priorities and instead empower MSIT to design and update a flexible national AI R&D roadmap. This change would let South Korea's investments track global breakthroughs rather than be locked into outdated mandates.
▪ Amend Article 17 (SME Priority in AI Support Policies) to remove the statutory requirement that SMEs receive "priority consideration." The law should adopt size-neutral language that enables the government to support firms of all sizes according to their strengths-allowing start-ups and SMEs to focus on experimentation and diffusion, while giving larger firms the backing they need to drive capital-intensive R&D, scaling, and global market reach.
▪ Amend Article 31 (Obligation to Ensure AI Transparency) to remove mandatory disclosure requirements. Watermarks and AI labels are technically fragile and inconsistent across jurisdictions and give a false sense of security because they do not address the specific harms that policymakers are concerned about, such as disinformation, intellectual property violations, and deepfakes. Instead, the law should direct MSIT and other ministries to promote voluntary provenance standards such as the Coalition for Content Provenance and Authenticity (C2PA), invest in digital and AI literacy programs, and adopt targeted rules for specific harms such as intellectual property (IP) violations, campaign transparency, and online harassment.[3]
▪ Amend Article 32 (Obligation to Ensure AI Safety), which now triggers oversight for systems trained above a compute threshold, to remove compute as the criterion. Compute use is not a reliable predictor of risk. In parallel, amend Article 12 to authorize the AI Safety Research Institute (AISI) to conduct post-deployment evaluations of AI systems. AISI should test deployed models in high-impact sectors, monitor failures and incidents, and publish findings so that oversight is based on real-world performance rather than arbitrary training inputs.
▪ Amend Articles 33-35, which impose extensive self-assessments, documentation, and risk reporting for high-impact AI, to replace these process-heavy mandates with performance-based requirements. The law should direct each sectoral ministry to set measurable outcomes for AI systems in its domain, while the Korea Research Institute of Standards and Science (KRISS) should be directed to design evaluation protocols that test whether AI systems meet those outcomes. This would shift oversight from box-ticking paperwork to meaningful performance standards.
▪ Amend Article 36 (Designation of Domestic Agent) to remove revenue and user thresholds as triggers for stricter oversight of foreign firms. Oversight should be triggered only when a system is designated as high impact, regardless of whether the provider is domestic or foreign. This would ensure equal treatment, eliminate arbitrary thresholds, and ground regulation in actual risk rather than company size or location.
MSIT should use its final Enforcement Decrees under the AI Framework Act to provide clarity and balance in implementation. In particular:
▪ For Article 14 (Standardization of AI Technology), clarify that standards should be industry-led, with government serving only as convenor and coordinator. Government should focus on bringing firms together, supporting participation in international forums, and aligning agencies, while leaving technical standards development to industry consortia. This would keep South Korea's approach consistent with global practice and ensure that its firms remain competitive in allied markets.
▪ For Articles 18 (Support for AI Start-ups) and 19 (AI Convergence Policies), clarify that implementation should be size neutral. Support should continue for start-ups and SMEs through training, commercialization help, and adoption programs, but larger firms must also receive resources to expand AI use in key industries and strengthen South Korea's global competitiveness. Funding programs, training schemes, and adoption incentives should not automatically favor SMEs but instead strengthen the entire AI ecosystem.
▪ For Article 40 (Authority to Demand Data and Conduct Inspections), set clear guidelines on what data may be requested, from whom, and for what purposes. Requests should be limited to information strictly necessary to verify compliance with the Act-such as documentation of risk management measures or user protection practices-and must never extend to unrelated business data. The decree should also require strong safeguards for confidentiality to protect firms' proprietary information, preserving accountability without creating unnecessary compliance costs.
▪ For Article 42 (Penalties), scale sanctions according to the severity and impact of violations. Penalties should be increased for systemic or repeated breaches but remain proportionate for minor or first-time infractions. Clear guidance in the decree would ensure accountability while avoiding overly harsh punishments that discourage participation or cooperation under the Act.
▪ For Article 43 (Administrative Fines), establish a grace period before penalties are imposed. During this transitional phase, firms that fall short of new obligations should receive warnings or corrective guidance rather than immediate fines. This would give both domestic and foreign operators time to build effective compliance systems while still ensuring strong enforcement once the regime is fully in place.
The AI Framework Act unfolds across six chapters. (For a chapter-by-chapter summary of articles, see the appendix.) The chapters are as follows:
1. General Provisions (총칙), which lays out the purpose, scope, and key definitions such as "AI system," "high-impact AI," and "generative AI."
2. Governance for Sound AI Development and Trust (인공지능의 건전한 발전과 신뢰 기반 조성을 위한 추진체계), which establishes the machinery for a national AI strategy. It centers on a presidential committee, a recurring Master Plan, and support institutions such as the AI Policy Center and AISI.
3. Policies for AI Technology Development and Industry Promotion (인공지능기술 개발 및 산업 육성), which combines innovation policies focused on R&D, data infrastructure, and standards with industrial development measures that promote SMEs and start-ups, foster clusters and data centers, and support firms' international expansion.
4. AI Ethics and Trustworthiness (인공지능윤리 및 신뢰성 확보), which sets the regulatory approach. It combines soft-law measures, such as ethical principles and voluntary ethics committees, with hard-law obligations on transparency, safety, oversight, and impact assessments for generative and high-impact AI.
5. Supplementary Provisions (보칙), which sets the rules for funding and operating the Act's programs.
6. Penalties (벌칙), which sets penalties and administrative fines for failing key compliance obligations such as disclosing AI use, designating a domestic representative, or complying with corrective orders.
This report follows the Act's own structure: chapter 1 examines the foundational definitions that shape its scope; chapters 2, 3, and 4 take up its three core goals of building a national AI strategy, advancing industrial policy, and setting out a regulatory approach to manage risks, respectively; and chapters 5 and 6 address the operational framework and penalties. The sections that follow assess each chapter in turn, highlighting what they get right, where they fall short, and what policymakers should fix to make the framework more effective.
Chapter 1 of the Act, which runs from articles 1 to 5, establishes the fundamental principles and definitions that set the legal foundation for the entire Act. Table 1 below is a summary of the key definitions from article 2 of the Act.
Table 1: Summary of key definitions from South Korea's AI Framework Act
| Term | Act's Definition | Articles Invoked In |
|------|------------------|---------------------|
| Artificial Intelligence (인공지능) | The electronic implementation of human intellectual abilities, such as learning, reasoning, perception, judgment, and language understanding. | The foundational term used throughout the entire Act to refer to the core technology being promoted. |
| AI System (인공지능시스템) | An AI-based system that infers outputs such as predictions, recommendations, and decisions with various levels of autonomy and adaptability. | Primarily referenced in Article 32 (safety measures for large-scale AI systems); also underpins Articles 33-35 on high-impact AI. |
| AI Technology (인공지능기술) | The hardware, software, or application technologies necessary for AI implementation. | Article 6 (government support for R&D); Article 13 (AI technology and data infrastructure) |
| High-Impact AI (고영향 인공지능) | An AI system that has the potential to significantly impact human life, safety, or fundamental rights. | Article 33 (identification of high-impact AI); Article 34 (responsibilities of high-impact AI operators); Article 35 (AI impact assessments) |
| Generative AI (생성형 인공지능) | An AI system that generates new text, sound, images, or other outputs by imitating the structure and characteristics of input data. | Article 31 (transparency obligation for generative AI) |
| AI Industry (인공지능산업) | The industries that develop, manufacture, produce, or distribute products or services using AI or AI technology. | Article 6 (government support for AI industry promotion) |
| AI Business Operator (인공지능사업자) | A legal entity, organization, individual, or government agency that conducts business related to the AI industry. Divided into two subcategories: "AI Development Operator" and "AI Utilization Operator." | Articles 31-36 (the main regulatory subject for all "hard-law" provisions) |
| User (이용자) | The person who receives AI products or services. | Article 32 (safety measures); Article 34 (right to explanation); Article 35 (impact assessments) |
| Affected Person (영향받는 자) | The person whose life, physical safety, or fundamental rights are significantly affected by an AI product or service. | Article 34 (protections and explanations for affected persons); Article 35 (impact assessments on fundamental rights) |
| AI Society (인공지능사회) | A society that creates value and drives progress in all fields (industry, economy, society, culture, administration, etc.) through AI. |  |
| AI Ethics (인공지능윤리) | Ethical standards to be observed by all members of society in all areas of AI (development, provision, utilization, etc.), based on respect for human dignity, to realize a safe and reliable AI society that protects the rights, lives, and property of the people. | Articles 27-30 (the main subject for all "soft-law" provisions) |
One challenge with the Act serving three different goals at once is that it has to invoke "AI" in very different senses-sometimes referring to AI as a broad technology, sometimes to AI as an industry, and sometimes to specific systems that people actually use. That means the law has to be precise at every turn. In the regulatory sections, it should be crystal clear that what's being governed are AI systems and tools in practice, not the broad technology itself. By contrast, in the strategy and industrial policy sections, it makes sense to speak more broadly about supporting AI as a field of research and technology development.
To define AI precisely and appropriately, and make regulation effective for the systems that genuinely create novel risks, the law has to draw boundaries that are narrow enough to avoid sweeping in ordinary software, clear enough to provide certainty for developers and users, and stable enough to remain relevant as technologies evolve. It should capture only those technical properties that raise new governance challenges rather than generic functions such as making a prediction or recommendation.[4]
South Korea's definition-"An AI-based system that infers outputs such as predictions, recommendations, and decisions with various levels of autonomy and adaptability"-falls short on all three counts. First, it is too broad: many traditional software programs generate predictions or recommendations, but they do not introduce the novel risks that justify regulation. Second, it is vague: terms such as "AI-based," "autonomy," and "adaptability" lack clear technical meaning, leaving the scope open to interpretation and constant expansion. Third, it is unstable: by tying regulation to generic functions, it risks sweeping in yesterday's tools (e.g., statistical models in spreadsheets) and tomorrow's innovations alike, forcing endless amendments rather than providing a durable, technology-neutral foundation.
▪ The National Assembly should amend Article 2 (Definitions) of the AI Framework Act to narrow the definition of "AI system" so that later uses of the term in the Act apply only to systems that create novel governance challenges. A more precise formulation would be "Artificial intelligence system" (AI system) means a system that, based on parameters unknown to the provider or user, infers how to achieve a given set of objectives using machine learning and produces system-generated outputs such as content (generative AI systems), predictions, recommendations, and decisions, influencing the real or virtual environments with which it interacts.
Chapter 2 of the Act, which runs from articles 6 to 12, establishes a national AI strategy and builds the institutional machinery to give it force.
Article 6 requires MSIT to establish and implement a comprehensive AI Master Plan every three years, which serves as a national-level strategic roadmap for AI policy across all government sectors.[5] This plan sets the national direction by outlining strategies for talent development, AI ethics, investment priorities, and a national-level response to societal changes brought about by AI.
To give the plan political weight and authority, articles 7-9 create a National AI Committee under the president with the power to deliberate and decide on critical matters such as approving the Master Plan, setting R&D and investment strategies, and identifying and improving AI regulations. The law creates a clear mandate for other government bodies to act on the committee's decisions. According to article 8, when the committee issues a recommendation or expresses an opinion that a law or system should be changed, the relevant national agency (e.g., a ministry) must establish a plan to implement the improvements. In September 2025, the government issued a presidential decree formally expanding and renaming the National AI Committee (국가인공지능위원회) as the National AI Strategy Committee (국가AI전략위원회).[6] It was elevated to the highest cross-ministerial decision-making organ on AI, with a new secretariat. At its inaugural meeting in September 2025, the Strategy Committee adopted four agenda items: the Korea AI Action Plan as the government's overarching master plan (committee-led), a roadmap for establishing the National AI Computing Center (MSIT-led), draft guidelines for the enforcement decrees under the AI Framework Act (ministry-led), and the committee's detailed operating rules (committee-led).
Article 10 allows the committee to establish subcommittees, special committees, and expert advisory groups. This mechanism provides the flexibility to address specialized issues. As of September 2025, there are eight subcommittees (분과위원회) in the following domains: Technology Innovation and Infrastructure; Data; Global Cooperation; Society; Science and Talent; Defense and Security; Industrial Applications and Ecosystems; and Public-Sector Applications.[7] Each subcommittee will play a role in reviewing and aligning the government's AI budget of 10.1 trillion won for 2026-a more than threefold increase from 3.3 trillion won in 2025.[8]
Article 11 authorizes the designation of an AI Policy Center tasked with providing technical support for the Basic Plan and related policies, analyzing AI's social, economic, and cultural impacts, forecasting future trends and legal needs, and carrying out projects assigned under law or by national agencies. The details of its designation and operation are set by Presidential Decree.
Finally, article 12 establishes AISI under MSIT. The institute is charged with analyzing risks, developing safety evaluation standards and technologies, and cooperating internationally on AI safety, ensuring that citizen protection and trust remain a pillar of South Korea's AI ecosystem.
The South Korean model correctly identifies that for a technology as transformative as AI, a unified national vision is essential. The Act wisely centralizes two crucial components of an effective AI strategy: the vision and resource allocation. By establishing a presidential committee with the authority to set a single, ambitious vision, the government can ensure that all AI efforts-from R&D to public-sector applications-are aligned toward a shared goal of fueling innovation, strengthening competitiveness, and improving the lives of individuals with AI. This centralized approach also allows the government to efficiently direct capital and talent to the areas it deems most critical for those goals, preventing resources from being scattered across uncoordinated projects. This level of centralization is a strategic advantage.
However, the Act goes too far by consolidating regulatory authority under the committee. By giving this single, non-sector-specific body the power to dictate regulatory changes to ministries, article 8 effectively creates a de facto master regulator. But a single body-even one supported by expert subcommittees-cannot match the deep, specialized expertise of individual ministries. The Ministry of Health understands the unique risks of medical AI; the Financial Services Commission grasps the complexities of algorithmic trading; the Transport Ministry is best placed to address autonomous vehicles. When a central regulator with broad but shallow expertise dictates rules across such diverse sectors, the results will almost inevitably misfire-too rigid for safe innovations to navigate, while being dangerously inadequate for high-risk, safety-critical applications.
▪ The National Assembly should amend articles 7-9 to preserve the National AI Strategy Committee's authority to set the national AI strategy through the AI Master Plan and to allocate resources across government AI programs, but remove its power to dictate regulatory changes. Regulatory design and enforcement should remain with sectoral ministries that have the necessary expertise. This would keep the benefits of strategic unity at the top while ensuring that rules for medical AI, financial AI, and autonomous vehicles are crafted by the agencies best equipped to manage their specific risks.
Chapter 3 of the Act is structured in two parts that focus on both the inputs and outputs of the AI industry. The first part addresses the foundational building blocks for industry, while the second part targets their commercialization and vitalization.
The first part of the chapter, "AI Industry Foundation Building" (인공지능산업 기반 조성), which covers articles 13-15, is best characterized as innovation policy. It focuses on the fundamental inputs and infrastructure needed for a strong AI sector to emerge.
Article 13 empowers the government to fund projects it believes will help the AI industry develop, including initiatives that track international technology trends and support cooperation and commercialization, as well as R&D projects focused on safety functions, privacy protection, social impact assessments, and other safeguards for rights and dignity. The Enforcement Decree proposes criteria for these projects, such as requiring that they align with national AI policy, help build and use training data, support industry growth and jobs, generate economic benefits, and be both practical and technically feasible.
Article 14 authorizes the government to promote the development of standards for AI technology by creating, revising, and disseminating standards itself, as well as supporting private-sector efforts to advance standardization. It also requires the government to strengthen cooperation with international standard-setting bodies.
Article 15 requires MSIT to lead policies that expand the production, collection, and use of AI training data. It authorizes the government to fund projects that build and supply datasets and specifically requires MSIT to operate a centralized platform it calls "Integrated Provision System" that manages these datasets in one place. The law says that this system should be made available for free use by the private sector, but it also authorizes the government to collect fees in certain cases, with the details to be determined later by Presidential Decree. The Enforcement Decree adds details to article 15, proposing that the platform should let users search datasets in one place, organize and track them clearly, and connect with other systems, and include checks for quality. The Enforcement Decree also proposes a legal basis for fee collection, allowing MSIT to charge different rates based on the type and use of data, while requiring exemptions for public, nonprofit, and educational users, with full rules set by ministerial ordinance.
South Korea is right to recognize that government should play an active role in innovation policy for AI. The inputs needed to build a strong AI sector-R&D, data infrastructure, standards, and skills-are classic cases wherein markets underinvest because the benefits spill over far beyond any single firm. Left on their own, private actors will not build the shared foundations that allow an industry to flourish. A strong public hand in setting priorities, funding early-stage research, and ensuring that common resources such as datasets are available is essential if South Korea wants to remain competitive in AI. The intent of these provisions appears to rightly build on President Lee's "AI Highway" initiative, which already covers infrastructure, by adding other core inputs needed to sustain South Korea's rise as an AI leader.[9]
But the way the Act implements this role needs tweaking because it risks stifling the very innovation it seeks to fuel. MSIT already has charge of the legal and managerial framework for national R&D under the National R&D Innovation Act (국가연구개발혁신법) of 2020, but the guiding strategy for the R&D itself is now dictated by article 13 of the AI Framework Act.[10] By locking in specific R&D priorities related to ethics and safety, the law effectively predetermines what government-backed research should focus on. That approach risks both steering investment toward areas that may not be the most urgent or promising for South Korea's innovation ecosystem and leaving South Korea out of step with where global AI development is moving.
A similar issue arises with standards. Article 14 rightly emphasizes the importance of standardization and recognizes the role of private-sector efforts, while also aiming to increase participation in international standards setting. These are important steps that will help South Korean firms remain competitive globally. But the text also confusingly suggests that government itself should create standards. The actual development of technical standards is best left to industry, working through recognized standards bodies, with government playing a supportive role convening, aligning, and ensuring that South Korean firms are active and influential. If government shifts from supporting to directly setting standards, it risks producing rules that lag behind industry practice and fail to gain traction internationally.[11]
▪ The National Assembly should amend article 13 (Support for AI Technology Development and Safe Use) of the AI Framework Act to remove prescriptive R&D priorities and instead empower MSIT to design and update a flexible national AI R&D roadmap. This change would let South Korea's investments track global breakthroughs rather than be locked into outdated mandates.
▪ MSIT should ensure that the Enforcement Decree for Article 14 (Standardization of AI Technology) makes clear that standards must remain industry led, with government serving as convenor and coordinator. The ministry should focus on bringing firms together, supporting participation in international forums, and aligning agencies, while leaving technical standards development to industry consortia-keeping South Korea's approach consistent with global practice and its firms competitive in allied markets.
The second part, "AI Technology Development and AI Industry Vitalization" (인공지능기술 개발 및 인공지능산업 활성화), which covers articles 16-26, is better characterized as industrial policy. It moves beyond foundational inputs to focus on the active commercialization and growth of an AI-enabled industry.
Article 16 authorizes the state and local governments to support the introduction and use of AI technology by businesses and public institutions. This can take the form of helping develop and disseminate AI products and services, providing consulting and training (especially for SMEs, venture firms, and small businesses), and offering financial assistance to cover adoption costs. The Enforcement Decree proposes specific support measures.
Articles 17 and 18 prioritize smaller players in South Korea's AI ecosystem. Article 17 requires the government to give priority to SMEs when implementing AI-related support policies, while Article 18 authorizes projects to support start-up founders, provide training, assist with commercialization and financing, and foster institutions that back AI entrepreneurs.
Article 19 directs the government to create policies that encourage different sectors of the economy to adopt and integrate AI technology into their operations. It also authorizes the government to prioritize R&D projects related to this "AI convergence" within the national R&D framework, meaning that when it is deciding which research projects to fund, it can specifically choose those that involve combining AI technology with particular industries.
Article 20 requires the government to improve existing systems and legal and regulatory frameworks to support the development of South Korea's AI industry and authorizes support for research and public consultation to guide these reforms.
Article 21 places responsibility on MSIT to nurture domestic AI professionals and authorizes policies to attract overseas talent. These include monitoring global AI expertise, building international networks, supporting foreign experts' employment in South Korea, and facilitating cooperation with foreign institutions and international organizations.
Article 22 mandates the government to track global AI trends and promote cooperation. It authorizes support for firms seeking to expand abroad through information sharing, joint R&D, participation in international standards, foreign investment attraction, overseas marketing, and ethics-related cooperation. Public or private institutions may be tasked with carrying out this support, with government subsidies available.
Articles 23 to 25 focus on promoting industrial clustering, shared testbeds, and compute capacity. Article 23 authorizes national and local governments to designate AI clusters that bring firms and institutions together, with financial, administrative, and technical support. Article 24 empowers them to establish demonstration bases where companies can test, certify, and validate new technologies. Article 25 directs the government to promote AI data centers, supporting their establishment and use, especially by SMEs and research institutions, and encouraging balanced regional development. The Enforcement Decree proposes rules for designating AI clusters, selecting institutions to run them, and defining conditions for opening testbeds to firms.
Finally, article 26 creates the Korea Artificial Intelligence Promotion Association, a nonprofit corporation that promotes AI development, conducts surveys and statistics, operates joint-use facilities, supports international expansion, and runs education and awareness campaigns, with government subsidies available for its work. The Enforcement Decree proposes requirements for establishing the association and the standards for its articles of incorporation.
South Korea's approach here gets a lot right because it treats AI as a strategic industry that demands active industrial policy, not just generic support. By encouraging AI's integration across the economy, investing in talent, and building clusters, testbeds, and data centers, the Act embeds competitiveness into multiple policy realms. This is the kind of broad, cross-cutting strategy that countries facing global competition need-one that doesn't assume markets alone will deliver, but instead deliberately strengthens the players, infrastructure, and ecosystems that underpin long-term industrial advantage.[12]
However, several of the articles in this section over-privilege SMEs and start-ups. It makes sense for South Korea to ensure that SMEs and start-ups have opportunities in the AI economy: as the Organization for Economic Cooperation and Development (OECD) noted in its 2023 review of South Korea's innovation policy, SMEs are a crucial engine of job creation and serve as a key channel for the diffusion of digital technologies throughout the economy.[13] But the Act goes too far by giving them priority treatment across industrial policy measures.
A good innovation and industrial strategy should be size neutral, supporting small firms where they're strong (nimbleness, experimentation), but also enabling larger firms to do what they do best (capital-intensive R&D, scaling, global reach). By locking SMEs into "priority consideration," South Korea risks misallocating resources away from larger players who actually carry the bulk of AI investment and global competitiveness. Start-ups can develop new innovations, but without the scale and export capacity of larger firms, their innovations may never reach global markets. In other words, privileging SMEs may look supportive, but it could stifle South Korea's long-term ability to build globally competitive AI champions.[14]
▪ The National Assembly should amend article 17 (SME Priority in AI Support Policies) of the AI Framework Act to remove the statutory requirement that SMEs receive "priority consideration." The law should adopt size-neutral language that enables the government to support firms of all sizes according to their strengths-allowing start-ups and SMEs to focus on experimentation and diffusion, while giving larger firms the backing they need to drive capital-intensive R&D, scaling, and global market reach.
▪ MSIT should implement articles 18 (Support for AI Start-Ups) and 19 (AI Convergence Policies) in a way that balances support between small and large firms. That means continuing to provide training, commercialization help, and early adoption programs for start-ups and SMEs, and also ensuring that larger firms receive resources to expand AI use in key industries and to lead South Korea's international competitiveness. MSIT should design funding programs, training schemes, and adoption incentives so that they do not automatically favor SMEs and instead strengthen the entire AI ecosystem.
Chapter 4 (articles 27-36) of the Act is the cornerstone for managing the risks associated with AI. It takes a dual approach: soft-law, ethics-oriented measures (articles 27-30) and hard-law regulatory obligations (articles 31-36).
The Act takes an ethics-first approach, guiding AI development through a combination of nonbinding principles and voluntary compliance. Article 27 empowers MSIT to establish and publicly announce a set of broad ethical principles covering safety, reliability, and accessibility. While not legally binding, these principles would establish a foundational framework for the entire AI ecosystem. To promote adherence to this framework, article 28 authorizes companies, universities, and research institutes to form voluntary Private Autonomous AI Ethics Committees, which can verify compliance, investigate human rights concerns, and provide internal ethics education. Additionally, article 29 provides the legal basis for MSIT to establish an AISI, which is tasked with conducting research and development to protect citizens from potential risks posed by AI. Finally, article 30 supports voluntary verification and certification systems by directing the minister to provide relevant information, administrative assistance, and even financial aid to help SMEs meet these standards.
The Act's hard-law regulations, found in articles 31 through 36, establish a set of legally binding obligations for AI business operators:
▪ Article 31 requires AI business operators to (1) notify users when a product or service is operated by AI, (2) indicate when outputs are generated by generative AI, and (3) clearly disclose when synthetic content such as sound, images, or video has been AI-generated so users can recognize it as such. The Enforcement Decree proposes that operators may meet these obligations through flexible notice methods.[15]
▪ Article 32 requires AI business operators that develop or provide an AI system trained above a computation threshold (set by Presidential Decree) to establish a life cycle risk management plan, document it, and submit the results to MSIT. The Enforcement Decree proposes that this computation threshold be 10²⁶ floating-point operations (FLOPs).
▪ Article 33 requires AI business operators to conduct a self-assessment to determine whether their system qualifies as high impact. The draft Enforcement Decree establishes a formal process for operators to request confirmation from MSIT, with a 30-day response deadline (extendable in complex cases) and the possibility of appeal through a re-confirmation request. However, the law and Decree do not specify whether such reviews must occur regularly (e.g., annually), after significant system updates, or only upon initial deployment, leaving the frequency and triggers for review largely undefined.
▪ Article 34 requires AI business operators of high-impact AI systems to ensure safety and reliability by establishing a risk management plan, providing user protection and explanations of results where feasible, putting in place human oversight, and maintaining thorough documentation of these measures. The Enforcement Decree proposes that operators publicly post the core elements of their risk management and user-protection measures (excluding trade secrets) and retain documentation for five years.
▪ Article 35 requires AI business operators of high-impact AI systems to conduct impact assessments to evaluate potential effects on fundamental human rights. The Enforcement Decree proposes that these cover the groups affected, the rights at risk, anticipated impacts, and mitigation plans, and allows assessments to be done in-house or by a third party.
▪ Article 36 requires foreign companies that provide AI services to South Korean users and meet certain criteria (e.g., user or revenue thresholds) to designate a domestic representative to handle compliance with the Act. The Enforcement Decree proposes that firms qualify if they meet one of the following conditions: annual revenue over 1 trillion won, revenue from AI services alone of over 10 billion won, an average of more than one million daily domestic users during the most recent three-month period, or an order to take corrective action following a serious safety incident.[16]
It seems clear that the intention of South Korean policymakers is to lead with voluntary, ethics-based measures and industry self-regulation. However, the regulatory obligations that follow in articles 31 through 36 undercut that approach. Instead of reinforcing the Act's emphasis on flexibility and innovation, these provisions lean on blunt, one-size-fits-all mandates-mandatory labeling, compute thresholds, and process-heavy reporting-that misdiagnose the risks of AI and misdirect oversight. The result is that the Act's most promising elements are let down by regulatory tools that look tough on paper but will prove ineffective in practice.
First, as the Center for Data Innovation explained in its report "Why AI-Generated Content Labeling Mandates Fall Short," mandatory AI labeling such as that required by article 31 falls short for several reasons.[17] Watermarks and other marks are technically fragile and easily stripped, meaning bad actors can still spread unmarked content. Because South Korea's law applies extraterritorially, foreign providers offering AI services in South Korea must comply, but the moment South Korean users step outside that jurisdictional bubble, they will encounter unlabeled content on foreign-hosted platforms. Instead of reducing confusion, the law risks creating a patchwork of "AI-labeled here, not labeled there" that misleads users into thinking labels are a reliable indicator of trustworthiness. Most importantly, labeling does not address the underlying concerns that motivate it-disinformation, IP theft, or harmful deepfakes-which demand targeted, problem-specific solutions rather than one-size-fits-all disclosure rules.
The Enforcement Decree proposes adding flexibility to the disclosure requirements, such as by allowing nonvisible watermarking and multiple notice methods, but regulators should avoid leaning too heavily on labeling as a core compliance tool. Instead, policymakers should pivot toward building trust in digital content more broadly. That means adopting voluntary provenance standards for all content by promoting tools that embed cryptographically secure metadata so users can verify the source and history of both AI- and human-generated content (the sketch after this paragraph illustrates the basic mechanism). It also means investing in digital, AI, and media literacy to equip users to judge the trustworthiness of content themselves, rather than relying on labels that can be misleading or incomplete. Finally, regulators should create targeted rules for specific harms, addressing disinformation, IP violations, and deepfakes directly with problem-specific solutions (e.g., campaign disclosure rules, IP enforcement, antiharassment laws) rather than broad labeling requirements.
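To make the provenance idea concrete, the following is a minimal, hypothetical sketch of the general mechanism behind standards such as C2PA: metadata describing a piece of content is cryptographically bound to the content's bytes, so later tampering is detectable. The key, origin label, and manifest format here are invented for illustration; a real C2PA manifest uses certificate-based signatures embedded in the file rather than a shared key.

```python
# Toy illustration of provenance binding: a manifest records who produced
# the content and a hash of its bytes, then signs that record. Verifiers
# can detect both forged manifests and altered content. Uses an HMAC for
# brevity; real provenance standards use asymmetric certificate chains.
import hashlib, hmac, json

SIGNING_KEY = b"hypothetical-publisher-key"  # stand-in for a private key

def attach_provenance(content: bytes, origin: str) -> dict:
    manifest = {
        "origin": origin,                               # who produced it
        "sha256": hashlib.sha256(content).hexdigest(),  # binds metadata to bytes
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, "sha256").hexdigest()
    return manifest

def verify_provenance(content: bytes, manifest: dict) -> bool:
    claimed = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, "sha256").hexdigest()
    return (hmac.compare_digest(expected, manifest["signature"])
            and claimed["sha256"] == hashlib.sha256(content).hexdigest())

image = b"...image bytes..."
m = attach_provenance(image, origin="example-news-desk")
print(verify_provenance(image, m))          # True: content intact
print(verify_provenance(image + b"x", m))   # False: content was altered
```

The design point is that verification works for both AI- and human-generated content, which is what distinguishes provenance from AI-only labeling mandates.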
Second, using a compute threshold to determine which AI systems should be subject to heightened scrutiny, as article 32 does, is deeply problematic. Compute only measures the amount of resources spent to train a model, not the downstream impact of how that model is deployed. As researchers at Stanford University have explained, compute does not translate into reliable predictions of real-world capabilities, emergent behaviors, or risks.[18] Compute thresholds are also clumsy in practice. Different types of AI use very different amounts of compute: a level that captures large language models might miss powerful vision models, while a lower level would sweep in many harmless systems. And importantly, models can be changed in ways that dramatically increase their impact without much additional compute, such as small adjustments during fine-tuning or training with human feedback. Finally, as chips get faster and algorithms more efficient, today's compute thresholds quickly become outdated. In short, compute looks like an easy shortcut, but it is too blunt and unreliable to use as the main test for regulating AI. (The rough sketch below shows how arbitrary a fixed compute line can be in practice.)
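For a sense of scale, here is a back-of-the-envelope illustration using the common approximation that training a dense transformer consumes roughly 6 × parameters × training tokens in FLOPs. Both the heuristic's application here and the model figures are illustrative assumptions, not anything specified in the Act or draft decree.

```python
# Back-of-the-envelope check against the draft decree's proposed trigger.
# Uses the common ~6 * N * D approximation for dense-transformer training
# compute (N = parameters, D = training tokens); model figures are invented.
THRESHOLD_FLOPS = 1e26  # the 10^26 FLOP line proposed in the draft decree

def estimated_training_flops(parameters: float, tokens: float) -> float:
    """Approximate total training compute for a dense transformer."""
    return 6.0 * parameters * tokens

for name, n_params, n_tokens in [
    ("hypothetical 400B-parameter model, 15T tokens", 4e11, 1.5e13),
    ("hypothetical 1T-parameter model, 30T tokens", 1e12, 3e13),
]:
    flops = estimated_training_flops(n_params, n_tokens)
    side = "above" if flops >= THRESHOLD_FLOPS else "below"
    print(f"{name}: ~{flops:.1e} FLOPs ({side} the 1e26 threshold)")
```

Both runs are frontier scale, yet only the second would trigger Article 32's obligations, while a heavily fine-tuned or tool-augmented smaller model would escape the threshold entirely.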
Instead of relying on blunt compute thresholds, South Korea should adopt a system of post-deployment evaluations.[19] These are assessments that take place after an AI system is deployed, focusing on how it actually performs in the real world rather than how much compute was used to train it. They can reveal risks and failures that pre-deployment testing misses, because AI behavior often changes depending on the context in which it is used. For example, a model that looks safe in the lab may perform very differently when applied in health care, finance, or education.
South Korea has already established an AISI under article 12 of the Act, and it is the right body to lead this work. It should test deployed models in high-impact areas, collect and track incidents or failures as they arise, and publish clear findings on how models are functioning in practice.[20] Beyond just technical accuracy, these evaluations should look at whether outputs are understandable to users, whether they create unintended risks, and even their energy costs. That way, policymakers and the public can see how AI is really working on the ground, and South Korea's oversight can stay flexible and targeted-something static compute thresholds will never achieve. (An illustrative sketch of such a monitoring loop follows.)
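As a stylized illustration of what such an oversight loop could involve, the sketch below aggregates hypothetical incident reports by deployed system and flags those whose severe-incident rate exceeds a sector-specific tolerance. The sectors, severity scale, and tolerances are all invented; the Act does not prescribe any such mechanism.

```python
# Hypothetical post-deployment oversight loop: an evaluator (e.g., AISI)
# aggregates real-world incident reports and flags deployed systems whose
# observed severe-incident rate exceeds a sector-specific tolerance.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Incident:
    system_id: str
    sector: str      # e.g., "healthcare", "finance", "education"
    severity: int    # 1 (minor) .. 5 (critical)

# Tolerable severe incidents per 10,000 uses, set per sector (invented values).
SECTOR_TOLERANCE = {"healthcare": 1.0, "finance": 2.0, "education": 5.0}

def flag_systems(incidents: list[Incident], usage_counts: dict[str, int]) -> list[str]:
    """Return system IDs whose severe-incident rate exceeds their sector's tolerance."""
    severe = defaultdict(int)
    sector_of = {}
    for inc in incidents:
        sector_of[inc.system_id] = inc.sector
        if inc.severity >= 3:            # count only severe incidents
            severe[inc.system_id] += 1
    flagged = []
    for system_id, count in severe.items():
        rate = count / usage_counts[system_id] * 10_000
        if rate > SECTOR_TOLERANCE[sector_of[system_id]]:
            flagged.append(system_id)
    return flagged

reports = [Incident("diagnosis-bot", "healthcare", 4),
           Incident("diagnosis-bot", "healthcare", 3),
           Incident("tutor-bot", "education", 2)]
print(flag_systems(reports, {"diagnosis-bot": 5_000, "tutor-bot": 50_000}))
# -> ['diagnosis-bot']
```

The key property is that the trigger is observed behavior in context, not a training input measured once before release.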
Third, articles 33, 34, and 35 rely on transparency of process, assuming that exhaustive reports, risk documentation, and assessments will translate into meaningful accountability. But paperwork alone doesn't ensure progress. These requirements look comprehensive, but in practice they risk becoming a box-ticking exercise. Procedural rules measure whether the right steps are followed, not whether the system actually performs fairly, safely, or reliably in the real world. For AI, where risks emerge from context and usage, paperwork cannot substitute for performance. The danger is that companies will generate reports and compliance files without improving outcomes, and regulators may lack the resources or expertise to test the underlying systems.
South Korea should regulate performance, not process, meaning policymakers should set performance-based requirements instead of mandating procedural checklists.[21] Regulators should focus on whether AI systems meet measurable standards of safety, fairness, and reliability once deployed. Performance-based regulation ensures that firms achieve real outcomes rather than simply check the box on compliance measures. (The hypothetical check below shows what a measurable-outcome requirement could look like.)
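To illustrate the difference between process and performance, a performance-based rule might work as in the following hypothetical check: a sectoral ministry publishes measurable outcomes, and an evaluation protocol verifies a deployed system against them. The metric names and limits are invented examples, not requirements from the Act.

```python
# Hypothetical performance-based requirement: a sectoral ministry publishes
# measurable outcomes, and an evaluation protocol checks a deployed system
# against them. All metrics and limits below are invented examples.
REQUIRED_OUTCOMES = {   # e.g., set by a health ministry for a medical AI tool
    "sensitivity": ("min", 0.95),           # miss at most 5% of true cases
    "false_positive_rate": ("max", 0.02),
    "explanation_coverage": ("min", 0.90),  # share of outputs with a usable explanation
}

def failed_outcomes(measured: dict[str, float]) -> list[str]:
    """Return the outcomes a system fails; empty means it meets all of them."""
    failures = []
    for metric, (direction, limit) in REQUIRED_OUTCOMES.items():
        value = measured[metric]
        ok = value >= limit if direction == "min" else value <= limit
        if not ok:
            failures.append(f"{metric}={value} (required {direction} {limit})")
    return failures

print(failed_outcomes({"sensitivity": 0.97, "false_positive_rate": 0.04,
                       "explanation_coverage": 0.93}))
# -> ['false_positive_rate=0.04 (required max 0.02)']
```

Under this model, a firm with flawless paperwork still fails if the deployed system misses the outcome, and a lean start-up passes if its system performs.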
Finally, article 36 goes in the wrong direction by singling out foreign firms. If certain uses of AI pose risks to users or society, then those risks should trigger oversight no matter who provides the service. Holding only foreign firms to stricter rules creates loopholes for domestic companies, undermines fairness, and weakens protections. Risk should be the trigger, not where a company is based.
The regulatory provisions in chapter 4 risk undermining the rest of the Act. These provisions should be revisited through legislative amendment to bring them in line with the Act's broader vision-promoting innovation, supporting competitiveness, and safeguarding citizens.
The National Assembly should make the following amendments to the AI Framework Act:
▪ Amend Article 31 (Obligation to Ensure AI Transparency) to remove mandatory disclosure requirements. Watermarks and AI labels are technically fragile and inconsistent across jurisdictions, and give a false sense of security because they do not address the specific harms that policymakers are concerned about, such as disinformation, intellectual property violations, and deepfakes. Instead, the law should direct MSIT and other ministries to promote voluntary provenance standards such as the C2PA, invest in digital and AI literacy programs, and adopt targeted rules for specific harms such as IP violations, campaign transparency, and online harassment.[22]
▪ Amend Article 32 (Obligation to Ensure AI Safety), which now triggers oversight for systems trained above a compute threshold, to remove compute as the criterion. Compute use is not a reliable predictor of risk. In parallel, amend Article 12 to authorize the AISI to conduct post-deployment evaluations of AI systems. AISI should test deployed models in high-impact sectors, monitor failures and incidents, and publish findings so that oversight is based on real-world performance rather than arbitrary training inputs.
▪ Amend Articles 33-35, which impose extensive self-assessments, documentation, and risk reporting for high-impact AI, to replace these process-heavy mandates with performance-based requirements. The law should direct each sectoral ministry to set measurable outcomes for AI systems in its domain, while KRISS should be directed to design evaluation protocols that test whether AI systems meet those outcomes. This would shift oversight from box-ticking paperwork to meaningful performance standards.
▪ Amend Article 36 (Designation of Domestic Agent) to remove revenue and user thresholds as triggers for stricter oversight of foreign firms. Oversight should be triggered only when a system is designated as high-impact, regardless of whether the provider is domestic or foreign. This would ensure equal treatment, eliminate arbitrary thresholds, and ground regulation in actual risk rather than company size or location.
Chapter 5 of the Act provides the operational details of the law, covering how the government will fund, monitor, and enforce its provisions.
Article 37 requires the government to secure financial resources to support the AI industry, ensuring that R&D, infrastructure, and other plans are adequately funded.
Article 38 mandates regular surveys and statistical reporting so policies remain evidence based. The draft Enforcement Decree (article 29) further specifies the scope of these surveys, requiring coverage of industry scale, firm revenues, workforce supply and demand, facilities, technology trends, global policy developments, and investment flows. It also allows data collection through fieldwork, literature, surveys, and electronic methods.
Article 39 authorizes the delegation of duties to specialized bodies for more efficient implementation. Draft article 30 expands on this authority, enabling MSIT to entrust public institutions or associations with tasks such as supporting AI convergence projects, data center utilization, industry surveys and statistics, and even the operation of expert committees.
Article 40 authorizes MSIT to demand data, carry out investigations and on-site inspections when violations are suspected, and issue corrective orders when noncompliance is confirmed. Draft article 31 provides limited exceptions, allowing MSIT to not initiate an investigation if sufficient evidence is already available or if a complaint is judged to be frivolous or intended to obstruct official duties.
Finally, article 41 applies public-official accountability standards to all committee members and entrusted actors, reinforcing transparency and integrity in execution.
Giving MSIT broad authority to demand data and conduct inspections risks overreach. Without clear limits, it could impose heavy burdens on firms, discourage innovation, and raise concerns about how sensitive business data is handled.
▪ MSIT should use the Enforcement Decree for article 40 (Authority to Demand Data and Conduct Inspections) of the AI Framework Act to set clear guidelines on what data can be requested from AI business operators, for what oversight purposes, and under what conditions. Requests should be limited to information strictly necessary to verify compliance with the Act, such as documentation of risk management measures or user protection practices, and should never extend to unrelated business data. The decree should also require strong safeguards for confidentiality to ensure that sensitive commercial information is protected. This approach would preserve accountability while avoiding unnecessary compliance costs and protecting firms' proprietary data.
Chapter 6 establishes penalties to enforce compliance with the Act. Article 42 imposes a penalty of up to three years in prison or a fine of up to 30 million won for any individual who leaks a business's confidential information they obtained while performing duties under the Act. Article 43 imposes an administrative fine of up to 30 million won for three specific violations: a foreign firm failing to designate a domestic representative, a firm failing to notify a user that they are interacting with a generative or high-impact AI system, or a firm failing to comply with a corrective order issued by the government.
Articles 42 and 43 do not differentiate between minor infractions and systemic harms. Treating all violations as if they pose the same level of risk could discourage experimentation, overwhelm regulators with low-level cases, and dilute focus away from serious breaches that genuinely threaten citizens or the AI ecosystem. An effective operational framework should align penalties with the scale and nature of harm, ensuring accountability without stifling innovation.
▪ MSIT should use the Enforcement Decree for article 42 (Penalties) of the AI Framework Act to scale sanctions according to the severity and impact of violations. Penalties should be increased for systemic or repeated breaches but remain proportionate for minor or first-time infractions. Clear guidance in the decree would ensure accountability while avoiding overly harsh punishments that could discourage participation or cooperation under the Act.
▪ MSIT should use the Enforcement Decree for article 43 (Administrative Fines) to establish a grace period before fines are imposed. During this transitional phase, firms that fall short of new obligations should receive warnings or corrective guidance rather than immediate penalties. This would give both domestic and foreign operators time to build effective compliance systems while still ensuring that strong enforcement takes effect once the regime is fully in place.
The AI Framework Act will set the course for South Korea's AI trajectory for the next decade. It already lays strong foundations in strategy and industry development, but blunt, one-size-fits-all rules threaten to undercut those gains. The way forward is decisive and practical: tighten the statute to fix structural flaws and use the final Enforcement Decrees to implement balanced, risk-based, performance-focused rules. If South Korea does this, the Act will protect rights, catalyze innovation that improves daily life, and lock in a durable edge in global competitiveness.
▪ Article 1 (Purpose): The purpose of the law: to promote the sound development of AI and establish a foundation of trust for it, thereby protecting citizens' rights and dignity, improving their quality of life, and strengthening national competitiveness
▪ Article 2 (Definitions): The key definitions, covered in table 1
▪ Article 3 (Basic Principles and Responsibilities of the State): The principles: safety, reliability, right to explanation for affected persons, respect for operators' creativity, and policies for societal adaptation
▪ Article 4 (Scope of Application): The scope including extraterritorial application and exceptions for national defense/security
▪ Article 5 (Relationship with Other Laws): The precedence of the law and the requirement for consistency with it when creating or amending other laws
▪ Article 6 (Establishment of the AI Master Plan): The establishment of the master plan by MSIT, its purpose, required contents, and its relationship with other laws
▪ Article 7 (National AI Committee): The details of the National AI Committee, its purpose, composition, role of the president as chairman, terms of members, confidentiality requirements, and its limited duration
▪ Article 8 (Functions of the Committee): The specific functions of the committee, including deliberation on the master plan, policy, R&D strategy, and regulation. Note its ability to make recommendations
▪ Article 9 (Exclusion, Recusal, and Avoidance of Members): The rules for avoiding conflicts of interest
▪ Article 10 (Subcommittees, etc.): The provisions for establishing subcommittees, special committees, and advisory groups
▪ Article 11 (AI Policy Center): The role of the AI Policy Center in developing AI policy and international norms
▪ Article 12 (AI Safety Research Institute): The establishment and functions of the AISI to ensure "AI safety" and protect the public
▪ Article 13 (Support for AI Technology Development and Safe Use): Government support for R&D, commercialization, and safe use of AI technology
▪ Article 14 (Standardization of AI Technology): Government efforts in standardizing AI technology, training data, and safety
▪ Article 15 (Establishment of Policies Related to AI Training Data): The creation of policies and systems for the production, collection, management, and distribution of AI training data
▪ Article 16 (Support for AI Technology Introduction and Utilization): Support for businesses and public institutions to adopt and utilize AI
▪ Article 17 (Special Support for Small and Medium Enterprises, etc.): The principle of prioritizing support for SMEs
▪ Article 18 (Revitalization of Startups): Support for AI-related start-ups
▪ Article 19 (Promotion of AI Convergence): Policies to promote the integration of AI with other industries
▪ Article 20 (Institutional Improvement, etc.): The government's responsibility to improve laws and regulations
▪ Article 21 (Securing Professional Personnel): Policies for fostering and securing domestic and foreign AI talent
▪ Article 22 (Support for International Cooperation and Overseas Market Entry): Support for international collaboration and for businesses to enter foreign markets
▪ Article 23 (Designation of AI Clusters, etc.): The establishment and support for AI clusters
▪ Article 24 (Creation of AI Verification Infrastructure, etc.): The creation of facilities and equipment for verifying and testing AI technologies
▪ Article 25 (Promotion of Policies Related to AI Data Centers, etc.): The policies for establishing and operating AI data centers
▪ Article 26 (Establishment of the Korea AI Promotion Association): The details of establishing a private association for AI promotion
▪ Article 27 (AI Ethical Principles, etc.): The creation and dissemination of AI ethical principles and implementation plans
▪ Article 28 (Establishment of Private Autonomous AI Ethics Committee): The optional establishment of private ethics committees and their functions
▪ Article 29 (Establishment of Policies to Foster AI Trustworthiness): Government policies to minimize risks and build a foundation of trust
▪ Article 30 (Support for AI Safety and Trustworthiness Verification, Certification, etc.): Support for voluntary verification and certification activities, especially for SMEs. Note the obligation for high-impact AI
▪ Article 31 (Obligation to Ensure AI Transparency): The transparency obligations for AI operators, including pre-notification for high-impact and generative AI and clear labeling of AI-generated content
▪ Article 32 (Obligation to Ensure AI Safety): The safety obligations for AI systems that meet a certain computational threshold
▪ Article 33 (Verification of High-Impact AI): The requirement for AI operators to verify whether their AI is high-impact, and the process for requesting government confirmation
▪ Article 34 (Obligations of Operators related to High-Impact AI): The specific measures that operators must take for high-impact AI, including risk management, explainability, user protection, and human oversight
▪ Article 35 (High-Impact AI Impact Assessment): The effort-based requirement for operators to conduct human rights impact assessments
▪ Article 36 (Designation of Domestic Agent): The requirement for foreign AI operators to designate a domestic agent
▪ Article 37 (Expansion of Financial Resources for AI Industry Promotion, etc.): The government's responsibility to secure funding for AI promotion
▪ Article 38 (Fact-Finding Surveys, Statistics, and Indicators): The government's obligation to conduct surveys and compile statistics
▪ Article 39 (Delegation of Authority and Entrustment of Duties): The provisions for delegating and entrusting authority to other government bodies or organizations
▪ Article 40 (Fact-Finding Investigation, etc.): The government's right to investigate AI operators for violations
▪ Article 41 (Deemed Public Officials for Application of Penal Provisions): The provision that nonpublic committee members and entrusted employees are considered public officials for penal purposes
▪ Article 42 (Penal Provisions): The penalty for leaking confidential information
▪ Article 43 (Administrative Fines): The administrative fines for specific violations (failure to notify, designate a domestic agent, or comply with a corrective order)
▪ Addenda: The effective date, preparatory acts, and a special provision regarding a pre-existing entity
Acknowledgments
The authors would like to thank Robert D. Atkinson, president of the Information Technology and Innovation Foundation, and Erica Schaffer, senior digital communications manager at ITIF.
The authors would also like to thank Randolph Court for his editorial assistance.
Any errors or omissions are the authors' responsibility alone.
About the Author
Sejin Kim is a tech policy analyst specializing in AI, blockchain, space, and emerging tech for ITIF's Center for Korean Innovation and Competitiveness. Drawing on technology journalism experience bridging South Korean and U.S. tech ecosystems, she brings cross-cultural insights into national competitiveness and policy dynamics. Notable publications include "On the Recent Development of Central Bank Digital Currency (CBDC)" (December 2020, listed in Reuters Refinitiv), "WeMix, Web3 Gaming and Ethics" (January 2023), and "2025 Global Tech Trends: 17 of The Trend Revolution is Coming" (November 2024).
Hodan Omaar is a senior policy manager focusing on AI policy at ITIF's Center for Data Innovation. Previously, she worked as a senior consultant on technology and risk management in London and as a crypto-economist in Berlin. She has an M.A. in economics and mathematics from the University of Edinburgh.
About ITIF
The Information Technology and Innovation Foundation (ITIF) is an independent 501(c)(3) nonprofit, nonpartisan research and educational institute that has been recognized repeatedly as the world's leading think tank for science and technology policy. Its mission is to formulate, evaluate, and promote policy solutions that accelerate innovation and boost productivity to spur growth, opportunity, and progress. For more information, visit itif.org/about.
[1]. 법제처 국가법령정보센터, 인공지능 발전과 신뢰 기반 조성 등에 관한 기본법 (약칭: 인공지능기본법), 2025년 1월 21일, https://www.law.go.kr/LSW//lsSc.do?section=&menuId=1&subMenuId=15&tabMenuId=81&eventGubun=060101&query=%EC%9D%B8%EA%B3%B5%EC%A7%80%EB%8A%A5#undefined.
[2]. 과학기술정보통신부(MSIT), [국가인공지능전략위 보도참고] 국가 최상위 AI 전략 논의기구, 대통령 직속 「국가인공지능전략위원회」 출범, 2025년 9월 8일, https://www.msit.go.kr/bbs/view.do?sCode=user&mId=307&mPid=208&pageIndex=8&bbsSeqNo=94&nttSeqNo=3186222&searchOpt=ALL&searchTxt=.
[3]. Coalition for Content Provenance and Authenticity (C2PA), "Technical Standard for Content Provenance and Authenticity," accessed September 15, 2025, https://c2pa.org.
[4]. Patrick Grady, "The AI Act Should Be Technology-Neutral" (Center for Data Innovation, February 1, 2023), https://www2.datainnovation.org/2023-ai-act-technology-neutral.pdf.
[5]. Article 6 (Establishment of the AI Master Plan), paragraph 1: "The Minister of Science and ICT shall, after hearing the opinions of the heads of relevant central administrative agencies and local governments, establish, amend, and implement an AI master plan (hereinafter 'master plan') every three years to promote AI technology and the AI industry and to strengthen national competitiveness, subject to deliberation and resolution by the National AI Committee under Article 7." 법제처 국가법령정보센터, 인공지능 발전과 신뢰 기반 조성 등에 관한 기본법 (약칭: 인공지능기본법), 2025년 1월 21일, https://www.law.go.kr/LSW//lsSc.do?section=&menuId=1&subMenuId=15&tabMenuId=81&eventGubun=060101&query=%EC%9D%B8%EA%B3%B5%EC%A7%80%EB%8A%A5#undefined.
[6]. 과학기술정보통신부(MSIT), 「국가 최상위 AI 전략 논의기구, 대통령 직속 '국가인공지능전략위원회' 출범」 보도자료, 2025년 9월 8일, https://www.msit.go.kr/bbs/view.do;jsessionid=Q8c56HbMdJ25UAapjE_TRv2pnofxRLT_D4knAwt0.AP_msit_1?sCode=user&mPid=208&mId=307&bbsSeqNo=94&nttSeqNo=3186222.
[7]. 과학기술정보통신부, 「한국형 '과학기술×인공지능' 본격 추진」 보도자료, 2025년 9월 10일, https://www.msit.go.kr/bbs/view.do?sCode=user&mId=307&mPid=208&bbsSeqNo=94&nttSeqNo=3186242.
[8]. 과학기술정보통신부(MSIT), 「혁신경제의 두 엔진, 인공지능과 과학기술로 미래 성장을 견인하겠습니다」 보도자료, 2025년 9월 3일, https://www.msit.go.kr/bbs/view.do?sCode=user&mId=307&mPid=208&bbsSeqNo=94&nttSeqNo=3186192.
[9]. 과학기술정보통신부(MSIT), "배경훈 장관, '인공지능 고속도로 협약식 및 간담회' 개최", 2025년 8월 29일, https://www.msit.go.kr/bbs/view.do?sCode=user&mId=307&mPid=208&bbsSeqNo=94&nttSeqNo=3186186; 대한민국 AI고속도로를 통해 AI 3대 강국 도약, 2025년 6월 23일, https://www.korea.kr/multi/visualNewsView.do?newsId=148944771.
[10]. National R&D Innovation Act (2020): Establishes MSIT's role in planning, coordinating, and managing national R&D programs and investments, https://elaw.klri.re.kr/eng_service/lawView.do?hseq=62484&lang=ENG; MSIT, Mid- and Long-Term National R&D Investment Strategy (2023-2027): Confirms MSIT's central role in setting R&D priorities (MSIT press release); AI Framework Act, Article 13 (2024): Authorizes the government to fund projects in areas such as international trend tracking, cooperation and commercialization, and R&D focused on safety, privacy, social impact, and rights protection. English translation: https://cset.georgetown.edu/wp-content/uploads/t0625_south_korea_ai_law_EN.pdf.
[11]. Nigel Cory, "Unpacking the Biden Administration's Strategy for Technical Standards: The Good, the Bad, and Ideas for Improvement" (ITIF, October 10, 2023), https://itif.org/publications/2023/10/10/unpacking-the-biden-administrations-strategy-for-technical-standards-the-good-the-bad-and-ideas-for-improvement/.
[12]. Robert D. Atkinson, "Weaving Strategic-Industry Competitiveness Into the Fabric of U.S. Economic Policy" (ITIF, February 7, 2022), https://itif.org/publications/2022/02/07/weaving-strategic-industry-competitiveness-fabric-us-economic-policy/.
[13]. OECD, OECD Reviews of Innovation Policy: Korea 2023, OECD Publishing, 2023, https://www.oecd.org/en/publications/oecd-reviews-of-innovation-policy-korea-2023_bdcf9685-en.html.
[14]. Robert D. Atkinson and Eric Kang, "The National Economic Council Gets It Wrong on the Roles of Big and Small Firms in U.S. Innovation" (ITIF, July 20, 2023), https://itif.org/publications/2023/07/20/nec-gets-it-wrong-on-roles-of-big-and-small-firms-in-us-innovation/.
[15]. Draft Enforcement Decree Article 22 specifies that such notice may be provided through multiple channels-contracts, manuals, or terms of service; on-screen or device displays; physical postings at the place of provision; or other methods approved by MSIT. For generative AI, outputs may be labeled in human- or machine-readable form, with non-visible watermarking (e.g., provenance standards) recognized as an acceptable option. The decree also requires that disclosures be easily perceivable and appropriate to the main user group. Exceptions apply when the AI basis is already self-evident, when systems are used only for internal business purposes, or when the minister designates additional cases by notice. 과학기술정보통신부(MSIT), AI기본법 하위법령 제정방향, 2025년 9월 8일.
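To make the machine-readable labeling option concrete, the following is a minimal sketch of how an operator might attach a provenance manifest to generated content. It is illustrative only: the field names, the sidecar-file approach, and the build_provenance_manifest helper are assumptions chosen for clarity, not the C2PA schema, which additionally requires cryptographic signing and embedding the manifest in the asset itself.
```python
import hashlib
import json
from datetime import datetime, timezone

def build_provenance_manifest(content: bytes, generator: str) -> dict:
    """Build a simplified machine-readable provenance record for AI output.

    Illustrative sketch only; real C2PA manifests follow a defined schema
    and are cryptographically signed and embedded in the asset itself.
    """
    return {
        "ai_generated": True,    # explicit machine-readable disclosure flag
        "generator": generator,  # model or service that produced the content
        "created": datetime.now(timezone.utc).isoformat(),
        # The hash binds the label to this exact content, so the two
        # cannot be silently separated.
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }

# Usage: write the manifest as a sidecar file next to the generated asset.
output = b"...generated image bytes..."  # placeholder for real model output
manifest = build_provenance_manifest(output, generator="example-image-model")
with open("image.png.provenance.json", "w", encoding="utf-8") as f:
    json.dump(manifest, f, indent=2)
```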
[16]. Draft Enforcement Decree Article 28 further clarifies that the revenue thresholds under paragraph 1, subparagraphs 1 and 2 must be calculated in South Korean won, based on the average exchange rate of the previous year (or the previous fiscal year in the case of corporations). 과학기술정보통신부(MSIT), AI기본법 하위법령 제정방향, 2025년 9월 8일.
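As a worked illustration of that conversion rule, the snippet below checks a hypothetical foreign operator against an assumed revenue threshold. Both the threshold and the exchange rate are placeholder values for the example, not figures from the draft decree.
```python
# Hypothetical illustration of the KRW conversion rule in draft Article 28.
# Both constants are assumptions for the example, not values from the decree.
THRESHOLD_KRW = 1_000_000_000_000   # assumed threshold: 1 trillion KRW
AVG_FX_KRW_PER_USD = 1_350.0        # assumed prior-year average KRW/USD rate

revenue_usd = 800_000_000           # operator's prior-year revenue in U.S. dollars
revenue_krw = revenue_usd * AVG_FX_KRW_PER_USD  # convert at the average rate

print(f"Revenue in KRW: {revenue_krw:,.0f}")             # 1,080,000,000,000
print("Meets threshold:", revenue_krw >= THRESHOLD_KRW)  # True
```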
[17]. Justyna Lisinska and Daniel Castro, "Why AI-Generated Content Labeling Mandates Fall Short" (Center for Data Innovation, December 16, 2024), https://datainnovation.org/2024/12/why-ai-generated-content-labeling-mandates-fall-short/.
[18]. Rishi Bommasani, "Drawing Lines: Tiers for Foundation Models," Stanford Center for Research on Foundation Models, November 18, 2023, https://crfm.stanford.edu/2023/11/18/tiers.html.
[19]. Hodan Omaar, "The United States Should Seize the Global AI Stage in California to Shift Gears to Post-Deployment Safety" (Center for Data Innovation, October 28, 2024), https://datainnovation.org/2024/10/the-us-should-seize-global-ai-stage-in-california-to-shift-gears-to-post-deployment-safety/.
[20]. Daniel Castro, "Tracking AI Incidents and Vulnerabilities" (Center for Data Innovation, April 4, 2024), https://datainnovation.org/2024/04/tracking-ai-incidents-and-vulnerabilities/.
[21]. Daniel Castro, "Ten Principles for Regulation That Does Not Harm AI Innovation" (ITIF, February 8, 2023), https://itif.org/publications/2023/02/08/ten-principles-for-regulation-that-does-not-harm-ai-innovation/.
[22]. Coalition for Content Provenance and Authenticity (C2PA), "Technical Standard for Content Provenance and Authenticity," accessed September 15, 2025, https://c2pa.org.