Description:
In December 2023, the European institutions reached a political agreement on the AI Act, a new regulation on artificial intelligence. The AI Act will require providers of high-risk AI systems to test their products against harmonised standards (hENs) before affixing a European Conformity (CE) mark, which allows AI products to circulate freely on the European market. The CE mark and hENs are long-established European regulatory tools for product safety and already apply to a wide range of products. To date, however, they have never been used to attest to compliance with fundamental rights, something the AI Act aims to achieve. In this article, we examine the role of hENs and CE marking in the AI Act, and how these product safety regulatory techniques have been expanded to cover the protection of fundamental rights. We analyse the CJEU's decision of 5 March 2024 and the accompanying opinion of the Advocate General in the Public.Resource.Org case, which raise questions about democratic processes in standardisation organisations. We show that, unlike compliance with product safety norms, compliance with fundamental rights cannot be certified through the use of technical standards, because violations of rights are too context-specific and require a judicial determination. However, technical standards have an important role to play in encouraging best practices in AI governance.