The European Artificial Intelligence Regulation (AI Act), approved in 2024, represents a turning point in how organisations design, develop and deploy AI systems.
This regulation—the world’s first comprehensive law on artificial intelligence—classifies AI systems by risk category (unacceptable, high, limited or minimal) and sets out requirements for safety, human oversight and transparency to ensure ethical and responsible use of the technology.
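The four tiers and their regulatory consequences can be summarised in a small lookup structure. This is a simplified, illustrative sketch: the tier names follow the Act, but the obligation summaries are abbreviated and the function name is our own.

```python
# Illustrative mapping of the AI Act's four risk tiers to their core
# regulatory consequence (heavily condensed from the Act's actual text).
RISK_TIERS = {
    "unacceptable": "prohibited outright (e.g. social scoring by public authorities)",
    "high": "strict requirements: risk management, human oversight, documentation",
    "limited": "transparency obligations (users must know they are interacting with AI)",
    "minimal": "no specific obligations; voluntary codes of conduct apply",
}

def obligations_for(tier: str) -> str:
    """Return the summarised obligation for a given risk tier."""
    return RISK_TIERS[tier]
```

The key design point the Act encodes is that obligations scale with risk: the higher the tier, the heavier the compliance burden.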
Algorithmic Transparency in the Spanish Public Administration: The BOSCO Case
Spain’s Supreme Court has already taken a first step toward enforcing the principles set out in the AI Act. In a landmark ruling of 11 September 2025 (cassation appeal 7878/2024), it ordered the Spanish Administration to disclose the source code of the BOSCO algorithm, the system used to determine eligibility for the social electricity subsidy, an aid for vulnerable consumers managed through energy companies.
BOSCO has been classified as a “high-risk” system because it directly influences economic and social decisions that affect citizens.
With this ruling, Spain anticipates the practical application of the AI Act, recognising algorithmic transparency as a fundamental element of ethical, accountable AI in the public sector.
Ethical Use of AI in Regulated Environments
In this context, the use of artificial intelligence for identity verification through biometrics, that is, one-to-one matching to confirm a claimed identity, is generally considered a limited-risk system under the AI Act. Unlike remote biometric identification, which the Act classifies as high-risk, verification takes place with the user’s active participation and combines accuracy, security and strong privacy safeguards.
These technologies are used in regulated sectors such as banking, insurance and fintech to prevent fraud and reinforce digital trust.
The regulation highlights the importance of ensuring that individuals can freely, voluntarily and explicitly decide whether to use these technologies. For this reason, our identity verification solutions are always designed under the principle of informed consent, ensuring that users maintain control over their identity and that biometrics are deployed ethically.
Unlike centralised models, our solutions process biometric data in encrypted, anonymised form, storing no personal data and allowing no reuse. All data remains under the exclusive custody of the organisation using the technology, reinforcing user sovereignty and full compliance with data-protection regulations.
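As a conceptual sketch of such a decentralised flow (all names, values and the toy XOR cipher are illustrative placeholders; a real deployment would use an authenticated cipher such as AES-GCM and a vendor-specific template format), verification compares a live biometric template against the enrolled one locally and emits only a yes/no result, while the enrolled template is persisted solely as ciphertext under a key held by the organisation:

```python
import math
import secrets
import struct

MATCH_THRESHOLD = 0.90  # illustrative; real systems tune this to a false-match-rate target

def cosine_similarity(a: list, b: list) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

def verify(live: list, enrolled: list) -> bool:
    """1:1 verification: compare templates locally, return only a boolean."""
    return cosine_similarity(live, enrolled) >= MATCH_THRESHOLD

def pack(template: list) -> bytes:
    return struct.pack(f"{len(template)}f", *template)

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Toy stand-in for a real AEAD cipher; the key never leaves the organisation.
    return bytes(d ^ key[i % len(key)] for i, d in enumerate(data))

org_key = secrets.token_bytes(32)                  # held only by the deploying organisation
enrolled = [0.12, 0.80, 0.58]                      # hypothetical biometric embedding
stored_blob = xor_cipher(pack(enrolled), org_key)  # only ciphertext is ever persisted

# At verification time: decrypt, compare, discard.
recovered = list(struct.unpack("3f", xor_cipher(stored_blob, org_key)))
print(verify([0.11, 0.81, 0.57], recovered))  # near-identical capture -> True
print(verify([0.90, 0.10, 0.40], recovered))  # different subject -> False
```

The design choice the sketch illustrates is that the raw template never leaves the organisation’s custody and no central database of biometric data exists: only an encrypted blob is stored, and only a boolean leaves the comparison step.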
In addition, we apply active mechanisms to mitigate bias, using algorithms trained on diverse datasets and evaluated against benchmarks from NIST (the US National Institute of Standards and Technology).
These evaluations help ensure accuracy, inclusiveness and fairness in verification processes, delivering consistent results for people of all ages, genders and backgrounds.
Towards a More Transparent and Secure AI
Artificial intelligence is transforming the relationship between citizens, companies and governments.
But its implementation must go hand in hand with transparency and human oversight to ensure technological innovation advances without compromising fundamental rights.
The AI Act not only establishes obligations: it opens the door to a new standard of digital trust, where transparency and ethics become the foundation of a fairer, safer and more human-centred AI.