This was the year the EU’s AI Act prohibitions took legal effect with penalties reaching €35 million. The year India operationalised its first comprehensive data protection framework with fines up to Rs 250 crore. The year cumulative GDPR enforcement surpassed €5.88 billion across Europe. The year China mandated AI content labelling and the US pivoted from regulation to aggressive deregulation.
From Brussels to Beijing, from Washington to New Delhi, from Singapore to São Paulo, 2025 witnessed an unprecedented wave of AI and data policy activity—22 major legislative milestones, enforcement actions, and regulatory updates that fundamentally reshaped how organisations develop, deploy, and govern intelligent systems globally.
What follows is a comprehensive overview of every significant AI and data policy development that shaped 2025—organised by region and theme to help enterprise leaders, policymakers, and technologists navigate the rapidly evolving global governance landscape.
PART 1: THE REGULATORY SUPERPOWERS
Major frameworks from the EU, US, China, and UK shaping the global baseline.
1. EU Data Act Implementation (September 2025)
On 12 September 2025, the majority of the EU Data Act’s provisions became applicable, marking the first comprehensive framework requiring companies to enable access to and sharing of both personal and non-personal data generated by connected products. The regulation grants users unprecedented rights to access and share IoT data, affecting operators across the entire data value chain with extraterritorial reach to non-EU businesses. Implementation occurs over multiple phases through September 2027, creating an 18-month compliance window for organisations to map data use cases, categorise data types, and implement safeguards protecting trade secrets whilst establishing fairness in data sharing, particularly for SMEs.
2. EU AI Act Enforcement Begins (February-August 2025)
The EU AI Act entered its first major enforcement phase on 2 February 2025, when prohibitions on unacceptable-risk AI systems became legally binding, followed by requirements for high-risk AI systems and General Purpose AI models taking effect on 2 August 2025. The regulation establishes the world’s first comprehensive risk-based framework with penalties reaching €35 million or 7% of global annual turnover. Requirements include mandatory risk management, data governance, technical documentation, transparency, human oversight, and cybersecurity safeguards, with extraterritorial reach meaning organisations worldwide must comply if their AI systems affect EU individuals.
3. America’s AI Action Plan (July 2025)
On 23 July 2025, the Trump Administration unveiled “Winning the Race: America’s AI Action Plan,” representing a decisive pivot from regulatory risk mitigation to prioritising innovation, deregulation, and global competitiveness. The plan directs federal agencies to roll back AI-related regulations, makes state funding contingent on deregulation, modernises permits for data centres, and updates procurement guidelines to ensure contracts only with “unbiased” developers free from “ideological dogmas such as DEI.” This marks a fundamental shift from the Biden administration’s emphasis on trustworthy AI, civil rights protections, and safety testing requirements.
4. China’s Global AI Governance Action Plan (July 2025)
China announced its Action Plan for Global Artificial Intelligence Governance on 26 July 2025, proposing a 13-point roadmap for global AI coordination and a potential global AI cooperation organisation headquartered in Shanghai. The government implemented concrete measures including the “AI Plus” plan setting targets for 70% penetration of intelligent terminals by 2027 and 90% by 2030, alongside mandatory AI labelling rules taking effect 1 September 2025 requiring clear marking of synthetic content. This positions China as a major player in setting international AI norms whilst maintaining strict domestic oversight aligned with “core socialist values.”
5. UK AI Opportunities Action Plan (January 2025)
On 13 January 2025, the UK government launched its AI Opportunities Action Plan, taking a fundamentally different approach from the EU’s prescriptive framework by favouring flexible, principles-based regulation requiring no new comprehensive structures in the short term. The plan includes establishing AI Growth Zones, creating a £500 million Sovereign AI Unit, committing nearly £2 billion to expand compute capacity (including a 20-fold increase and £750 million for an Edinburgh supercomputer), launching a National Data Library with over £100 million funding, and partnerships to train 7.5 million people in AI skills by 2030. The government published an AI Playbook in February 2025 and announced plans for legislation making voluntary agreements legally binding whilst granting independence to the AI Safety Institute.
PART 2: ASIA-PACIFIC LEADERSHIP
How Asia is defining its own AI governance path.
6. India’s Digital Personal Data Protection Rules (November 2025)
India notified the Digital Personal Data Protection Rules on 13-14 November 2025, operationalising the DPDP Act 2023 and establishing India’s first comprehensive data protection framework. The rules, drafted in the government’s plain-language “SARAL” style, create an 18-month phased compliance window requiring clear consent notices, India-based Consent Managers, mandatory breach notification “without delay,” and independent audits for Significant Data Fiduciaries. Penalties are severe: up to Rs 250 crore for failing to maintain security safeguards, Rs 200 crore for not reporting breaches, and Rs 50 crore for other violations.
7. Japan’s AI Promotion Act (May-September 2025)
Japan’s Parliament approved the AI Promotion Act on 28 May 2025, with full effect from 1 September 2025, making Japan the second major Asia-Pacific economy to enact comprehensive AI legislation. Characterised as “promotional” rather than prescriptive, the legislation focuses on fostering domestic AI ecosystem development through cooperative governance based on five fundamental principles including alignment with national frameworks, transparency, and international leadership. The Act established the AI Strategic Headquarters on 1 September 2025, led by the Prime Minister with all Ministers as members, publishing a draft AI Basic Plan on 12 September 2025.
8. South Korea’s AI Framework Act (December 2024-January 2025)
South Korea’s Framework Act on AI was passed on 26 December 2024 and promulgated on 21 January 2025, taking effect on 22 January 2026 following a one-year transition period. South Korea became the second country after the EU to adopt comprehensive AI regulation, combining industry promotion with binding safety requirements through a risk-based approach with extraterritorial reach. Requirements include risk management plans, result explanations, user protection mechanisms, and human oversight for high-impact systems, alongside establishment of the National AI Commission, Policy Centre, and Safety Research Institute.
9. Singapore AI Governance Advancement (2025)
Singapore continued solidifying its position as a global AI governance innovation hub, launching the Global AI Assurance Pilot in February 2025, releasing an Agentic AI Primer, and consulting on draft Guidelines for Securing Agentic AI from October-December 2025. The Monetary Authority released its most significant financial AI governance development in five years with November 2025 consultation on AI Risk Management Guidelines for financial institutions. Singapore maintains a “very light touch” regulatory approach relying on voluntary guidelines whilst applying existing laws, emphasising practical governance integration over prescriptive legislation.
10. Australia AI Safety Institute Announcement (November 2025)
On 25 November 2025, Australia announced establishment of the Australian AI Safety Institute (AISI) to commence operations in early 2026, joining the International Network of AI Safety Institutes. The AISI will monitor, test and share information on emerging AI technologies, help government keep pace with developments, act as a central hub supporting coordinated action across regulators, and guide businesses and the public on AI safety. This signals a shift from principles to technical assurance with regulators gaining in-house capacity to interrogate and test models, ensuring more consistent regulatory responses across privacy, consumer, competition, online safety, and financial services regimes.
PART 3: EMERGING FRAMEWORKS
Latin America, Africa, and Middle East establishing foundations.
11. Brazil’s AI Act Legislative Process (December 2024-2025)
Brazil’s Senate approved Bill No. 2,338/2023 in December 2024, moving to the House of Representatives in 2025 as Brazil’s most comprehensive AI regulation effort. Following a risk-based approach, the legislation establishes three tiers (excessive risk prohibited, high risk heavily regulated, other systems with basic requirements) with penalties up to R$50 million or 2% of revenue, taking effect one year after enactment. Notable provisions include mandatory human oversight, rights to explanation and contestation, creation of the National System for AI Governance coordinated by ANPD, and regulatory sandboxes, though social media algorithms were removed from the high-risk category following industry pressure.
12. African Union Continental AI Strategy Implementation (May-July 2025)
The African Union held a high-level dialogue in May 2025 building on the Continental AI Strategy adopted July 2024, positioning AI as central to AU Agenda 2063 implementation. With Africa’s AI compute capacity accounting for just 1% globally, the dialogue announced creation of a $60 billion Africa AI Fund and Africa AI Council whilst addressing concerns that over 83% of Q1 2025 AI startup funding went to just four countries. The year 2024-2025 proved pivotal with six AI-specific documents published in 2024 and three more in Q1 2025, as African policymakers aim to ensure “AI for Africa by Africa.”
13. Middle East AI Policy Developments (2025)
Middle Eastern GCC countries significantly advanced AI governance in 2025 with distinctive national approaches. Bahrain launched a National AI Policy in July 2025 and proposed draft AI Regulation Law potentially becoming the region’s first to impose administrative fines for non-compliance. Qatar continued implementing its National AI Strategy with $2.5 billion for Digital Agenda 2030 plus $2.4 billion for AI capabilities, notably as the only nation with legally binding AI guidelines requiring Qatar Central Bank approval for high-risk financial AI systems. The UAE, Saudi Arabia, and Egypt advanced strategies emphasising ethical oversight whilst balancing innovation ambitions.
PART 4: US SUBNATIONAL INNOVATION
States filling the federal void with pioneering legislation.
14. Colorado AI Act and Amendments (May-August 2025)
Colorado enacted the Consumer Protections for AI Act in May 2024 as the first comprehensive state AI law, then amended its implementation with the AI Sunshine Act on 28 August 2025. The framework regulates high-risk AI systems to prevent algorithmic discrimination in decisions affecting education, employment, loans, government services, healthcare, housing, insurance, or legal services. The August amendment delayed implementation from 1 February 2026 to 30 June 2026; developers must prevent discrimination and provide risk information, deployers must complete annual impact assessments, and enforcement authority rests with the Colorado Attorney General.
15. New York AI Regulation Package (2025)
New York advanced multiple initiatives, including the RAISE Act (passed 12 June 2025, awaiting the Governor’s signature) and NYC’s GUARD Act (passed November 2025). The RAISE Act targets “frontier” models with training costs exceeding $100 million, requiring safety protocols, adversarial testing, safeguards, incident reporting, and shutdown capabilities, with Attorney General penalties up to $10 million for initial violations and $30 million for subsequent ones. The GUARD Act establishes an Office of Algorithmic Accountability to assess city agency AI tools, create procurement rules, and publish a list of reviewed systems.
16. California Privacy Law Updates (October 2025)
On 8 October 2025, Governor Newsom signed three significant privacy bills expanding consumer protections. Assembly Bill 566 requires mobile operating systems and browsers to include opt-out preference signals by 1 January 2027, moving privacy into core user experience. Senate Bill 361 expands data broker registration requiring detailed disclosures about sensitive data collection and whether data was sold to foreign actors, governments, or generative AI developers, with compliance required by 31 January 2026 for 2025 data brokers. Additional CCPA updates require privacy policy links on any data-collecting webpage, allow consent withdrawal anytime, and enable opt-out with the same ease as opt-in.
17. Tennessee ELVIS Act Implementation (July 2024-2025)
Tennessee’s Ensuring Likeness Voice and Image Security Act, effective 1 July 2024, gained significant attention throughout 2025 as the first state legislation specifically protecting against AI-based voice impersonation. The Act makes a person’s voice a protected property right and prohibits its exploitation for commercial gain without authorisation, with violations exposing offenders to civil lawsuits and criminal prosecution as Class A misdemeanours carrying up to 11 months and 29 days of incarceration and fines of up to $2,500. The ELVIS Act serves as a template for California, Kentucky, and other jurisdictions introducing similar protections against AI voice cloning.
PART 5: ENFORCEMENT & CRITICAL DEBATES
From policy to practice: enforcement and unresolved tensions.
18. GDPR Enforcement Evolution (2025)
GDPR enforcement continued to intensify in 2025, with cumulative fines reaching approximately €5.88 billion across 2,245 enforcement actions by January 2025; regulators prioritised violations of legal basis (€3.01 billion), processing principles (€2.51 billion), and security measures. Notable actions include €290 million for improper US data transfers, €30.5 million for an AI company collecting special category data without consent, and €200 million to Google for disguised advertising emails. Enforcement expanded beyond technology to finance, healthcare, and other industries, with organisations facing monetary penalties, corrective actions, processing bans, and potential executive liability.
19. Copyright and AI Training Data Debates (2025)
Throughout 2025, copyright issues surrounding AI training data emerged as one of the most contentious global policy areas, with no international consensus. On 9 May 2025, the US Copyright Office concluded that certain unauthorised uses of copyrighted materials to train generative AI cannot be defended as fair use, describing a spectrum that runs from non-commercial research uses, which likely constitute fair use, to commercial copying where licensing is available, which may not qualify. The EU addressed training data through the AI Act’s text and data mining provisions, which authorise TDM unless rights-holders opt out; India recommended revisiting copyright law to clarify whether TDM qualifies as lawful use; whilst China implemented mandatory labelling, emphasising socialist values over Western copyright frameworks.
PART 6: INTERNATIONAL COOPERATION
Building global governance architecture.
20. UN AI Governance Mechanisms (August 2025)
On 26 August 2025, the UN General Assembly adopted Resolution A/RES/79/325 by consensus, establishing two landmark mechanisms: an Independent International Scientific Panel on AI comprising forty experts producing annual evidence-based assessments, and a Global Dialogue on AI Governance convening annually to exchange best practices. The resolution sets priorities around ensuring safe and trustworthy AI through transparency and accountability, advancing equity by building capacity in developing countries, and promoting openness and interoperability supporting open-source software and data. The first dialogue launched during September 2025 UN General Assembly high-level week.
21. OECD AI Principles Update (2024-2025)
The OECD updated its landmark AI Principles in May 2024, incorporating considerations for generative AI and large language models whilst maintaining five core values-based principles (inclusive growth, human-centred values, transparency, robustness and security, accountability) and five practical recommendations. On 18 September 2025, the OECD released “Governing with Artificial Intelligence,” reviewing over 200 real-world AI implementations across 11 government functions and offering a policy roadmap for responsible adoption. This represents a transition from high-level norms to empirical deployment guidance examining how governments use AI in tax, social protection, justice, procurement, health, and public finance.
22. G7 AI Code of Conduct Continued Implementation (2025)
The G7’s Hiroshima Process International Code of Conduct, agreed 30 October 2023, continued implementation through 2025 with the Leaders’ Statement on AI for Prosperity issued 17 June 2025. The voluntary 11-point Code guides organisations developing advanced AI systems on risk management, vulnerability mitigation, transparency reporting, content authentication, bias identification, privacy protection, and reporting to AI Safety Institutes. The June 2025 Statement recognised AI’s potential for prosperity, emphasised building the infrastructure needed to power AI whilst addressing energy pressures, committed to helping SMEs adopt AI, reaffirmed Data Free Flow with Trust, and stressed working with developing countries to close digital divides.
Conclusion: Convergence and Divergence
The year 2025 revealed both alignment and divergence in global AI governance. Convergence emerged through risk-based frameworks becoming standard (EU, Japan, South Korea, Brazil), universal transparency requirements, strengthened data rights, and international cooperation mechanisms (UN, OECD, G7). Divergence appeared in fundamental approaches: the US prioritised deregulation whilst the EU emphasised rights protection; detailed EU requirements contrasted with flexible UK-Singapore approaches; China’s centralised state control differed from America’s decentralised, state-led patchwork; and Europe’s €5.88 billion in cumulative GDPR fines diverged from Asia’s collaborative models.
For organisations operating globally, 2025 marked the transition from policymaking to enforcement accountability, with substantial penalties (€35 million under the AI Act, Rs 250 crore under India’s DPDP regime, $10-30 million under New York’s RAISE Act), operational restrictions, and reputational consequences, establishing the governance foundation for the coming decade.
