Council of Europe Collaboration
From Treaty to Practice: How Ukraine Piloted the World's First Human Rights Impact Assessment for AI Systems
By
David Leslie
Professor of Ethics, Technology and Society, The Digital Environment Research Institute, QMUL
and
Sofia Klymchuk
Advisor to the Ukraine Deputy Minister on EU Integration and International AI Policy
The Council of Europe's Framework Convention on Artificial Intelligence established the first international, legally binding treaty to ensure that AI technologies uphold human rights, democracy, and the rule of law throughout their design, development, procurement, and deployment. Adopted by the Committee of Ministers on May 17, 2024, and opened for signature on September 5, 2024, the Convention emerged from inclusive negotiations among Council of Europe member states, the European Union, and observer states, with substantive input from civil society, academia, industry, and international organisations. It provides a technology-agnostic, lifecycle-wide legal framework with a risk-calibrated governance architecture that adapts to diverse AI systems and contexts. To operationalise these treaty commitments in practice, the Council of Europe's Committee on Artificial Intelligence (CAI) adopted HUDERIA—the Human Rights, Democracy, and the Rule of Law Risk and Impact Assessment for AI Systems—at its 12th plenary in Strasbourg, establishing a practical methodology that translates legal obligations into actionable governance processes.
Developed collaboratively by researchers from Queen Mary University of London and the Alan Turing Institute with the Council of Europe, member and observer states, and civil society organisations since 2020, HUDERIA provides a structured, multi-step governance process that enables public and private organisations to identify, assess, and mitigate adverse human rights impacts across the AI lifecycle. The framework integrates four core components: context-based risk analysis (COBRA), stakeholder engagement, impact assessment, and mitigation planning, complemented by iterative revisitation mechanisms that prompt periodic re-assessment as technologies and societal contexts evolve. Recognising the urgency of moving from principle to practice, the CAI established a forward work plan for 2025–2026 to develop HUDERIA into a more detailed, usable, and operationalisable model capable of achieving meaningful organisational uptake and real-world impact.
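The four components and the revisitation loop described above can be pictured as a simple workflow. The sketch below is purely illustrative and assumes hypothetical class and field names; it is not an official HUDERIA data model, only a minimal way to see how context-based risks, stakeholder input, impacts, and mitigations accumulate across iterations.

```python
from dataclasses import dataclass, field

# Hypothetical, simplified model of HUDERIA's four components and its
# iterative revisitation loop; all names and fields are illustrative.

@dataclass
class Risk:
    description: str
    affected_group: str
    severity: int  # e.g. 1 (low) to 5 (critical)

@dataclass
class Assessment:
    system_name: str
    context_risks: list = field(default_factory=list)      # 1. COBRA
    stakeholder_input: list = field(default_factory=list)  # 2. engagement
    impacts: list = field(default_factory=list)            # 3. impact assessment
    mitigations: dict = field(default_factory=dict)        # 4. mitigation plan
    revision: int = 0

    def add_risk(self, risk: Risk) -> None:
        self.context_risks.append(risk)

    def plan_mitigation(self, risk: Risk, measure: str) -> None:
        self.mitigations[risk.description] = measure

    def revisit(self) -> "Assessment":
        # Revisitation: re-open the assessment as the system, its data,
        # or its deployment context changes over time.
        self.revision += 1
        return self

a = Assessment("career-planning assistant")
a.add_risk(Risk("biased recommendations", "job seekers", severity=4))
a.plan_mitigation(a.context_risks[0], "bias audit before each release")
a.revisit()
```

The point of the sketch is structural: each phase feeds the next, and nothing is final, because `revisit()` re-opens the record as contexts evolve.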
Ukraine’s Pathbreaking Pilot: Turning Treaty Principles into Governance Practice
In 2024–2025, Ukraine advanced this priority of making HUDERIA concrete by piloting the methodology under the leadership of the Ministry of Digital Transformation, with an explicit goal of introducing the Ukrainian AI community to HUDERIA and demonstrating the practical benefits of rights-based impact assessment for real products. The pilot launched with a national webinar in September 2024 and paired each participating company with a legally focused mentor to support the completion of the HUDERIA process, turning abstract criteria into workflow, documentation, and design decisions. At participants’ request, the team also delivered a foundational presentation on human rights and fundamental freedoms, helping establish a common literacy essential for consistent risk and impact identification and evaluation.
The pilot’s portfolio spanned high‑sensitivity and socially salient domains—an AI video analytics platform, assistive technologies for people managing autism and trauma recovery, an AI solution for career planning, a computer‑vision fitness app, a tool for automatically filling out doctors’ reports, and an Open Source Intelligence analytics platform—precisely the kinds of contexts where ex ante risk and impact analysis, stakeholder engagement, severity calibration, and impact mitigation measures are decisive for responsible deployment. This cross‑section underscored HUDERIA’s algorithm‑neutral, context‑aware design: by starting with a context‑based risk analysis and then moving through stakeholder engagement into impact assessment and mitigation planning, teams could align system risks with rights‑based safeguards and accountability measures.
Practitioner feedback from the pilot was strikingly constructive, pragmatic, and forward‑looking: participants saw the need for a dedicated tool or guidance to assess EU AI Act compliance in parallel with HUDERIA; they requested baseline human‑rights training; and they asked for more concrete examples to calibrate severity of risk by stakeholder group when answering assessment questions. These inputs map directly onto HUDERIA’s strengths and next steps—bridging a rights‑first impact assessment with regulatory interoperability, building shared literacy for multidisciplinary teams, and embedding worked examples to standardise judgments across use cases. The pilot also surfaced policy pathways: integrating HUDERIA into an AI Sandbox, embedding its principles in AI‑enabled GovTech and public procurement, and using HUDERIA as a market signal for responsible AI to unlock sustainable finance and investment.
In August 2025, Ukraine’s Ministry of Digital Transformation integrated the HUDERIA methodology into its AI Sandbox expertise “menu”. Following the successful pilot, Ukraine’s next step is to scale HUDERIA — not only as a pilot to support selected startups, but, more importantly, to sustainably embed it within the existing AI Sandbox ecosystem and test AI GovTech solutions that help localise the ethical principles of the Framework Convention in Ukraine. This approach will foster the development of trustworthy AI systems across both the private and public sectors, advancing Ukraine’s ambition to become a leader in AI-powered GovTech.
Why the Ukrainian Pilot Matters for the Council of Europe’s AI Convention
The Convention’s ambition is global, but implementation happens country by country, domain by domain, and system by system. The HUDERIA mechanisms are designed to translate the Convention’s legal obligations into a replicable governance process for designers, deployers, and public authorities—yet their true test lies in diverse settings where socially and culturally specific circumstances are complex and trade‑offs are real. Ukraine’s pilot shows how to make that translation in practice: pairing teams with supportive mentors, documenting decisions through a project summary report, foregrounding stakeholder impacts, and iterating mitigation plans as learning accrues. It also demonstrates how HUDERIA can interoperate with adjacent regimes: while HUDERIA centres rights and rule‑of‑law impacts, teams must simultaneously navigate EU and international requirements; creating crosswalks and complementary tooling will reduce duplication and raise the quality of both compliance and ethics‑by‑design.
That combination—risk-based calibration, rights‑based depth, regulatory interoperability, and participatory practice—aligns precisely with the Convention’s end-to-end assurances and its inclusive, multi‑stakeholder pedigree. In other words, the pioneering Ukrainian pilot is not only a national initiative; it is a blueprint for making the Convention actionable in the wild.
The HUDERIA Academy: From Pioneering Pilot to Scalable Practice
To scale capacity from pilots to mainstream practice, the Council of Europe launched the HUDERIA Academy on June 16, 2025, introducing participants from across four continents to the HUDERIA methodology to promote its organisational adoption through experiential learning and example-based training. The Academy curriculum focuses on capacity‑building tailored to HUDERIA—training public officials, developers, and procurement leads in structured risk and impact assessment, mitigation planning, and iterative review across the AI lifecycle. As part of the inaugural Academy, the Council of Europe and the Alan Turing Institute also separately convened private‑sector actors to test usability, interoperability, and operational relevance. This dual approach aimed to bring policymakers, regulators, and procurement officials from the public sector together first, followed by industry practitioners, to ensure that public‑sector adoption and market uptake move in tandem.
The first HUDERIA Academy endeavoured to tackle a widespread deficit in the governance stack: where the Convention provides legally binding commitments and HUDERIA provides governance methods, the Academy upskills practitioners and professionalises practice so that assessments are knowledgeable, meaningful, and robust; documentation is structured, diligent, and auditable; and continuous improvement becomes routine. This is also responsive to the CAI’s 2025–2026 plan to further detail HUDERIA through a more granular model, giving practitioners a learning pathway and a community of practice as HUDERIA processes continue to mature.
A Collaborative Path Forward
Three priorities emerge from Ukraine’s piloting experience and the Academy’s launch. The first is the need to deepen anticipatory governance through HUDERIA’s iterative revisitation—updating assessments as models, data, and deployment contexts shift—to maintain human rights due diligence throughout the entire project lifecycle. The second is the need to ensure that stakeholder engagement is substantive by resourcing participatory work and providing domain‑specific examples that calibrate severity by affected group, improving consistency across assessments. The third is the need to accelerate cross‑regime alignment by developing practical crosswalks, complements, and tooling that enable mappings from HUDERIA’s rights‑based assessments to other legal and regulatory obligations, reducing duplication while elevating rigour.
This agenda builds directly on the collaborative foundations that produced both the Convention and HUDERIA: multi‑year co‑development by a broad multi-stakeholder network of public sector, civil society, academic, and industry participants. With Ukraine’s pilot as a proving ground and the HUDERIA Academy as the scaling engine, the world’s first codified rights‑first approach to AI governance now has not only a legal backbone and a methodology, but also a pathway to practice and positive impact.
Ukraine’s pilot confirms that when teams are supported with ethical and legal mentorship, clear phases, accessible and common vocabularies of assessment and evaluation, and concrete examples, rights‑based impact assessment becomes an end-to-end catalyst for better products and more responsible technology, not just a compliance step. The Council of Europe’s adoption of the HUDERIA methodology and the decision to develop and launch an Academy signal a shared commitment to make that catalyst available across jurisdictions and sectors. As signatures and ratifications bring the Convention toward entry into force, the lesson from Kyiv to Strasbourg is clear: the future of responsible, equitable, and trustworthy AI will be built by those who can turn principles into practice—carefully, iteratively, and together.