By Ramón Villot, Legal, Compliance & GRC Director
The current debate around the possibility of applying intellectual property frameworks or “copyright” to biometric traits stems from a legitimate concern: ensuring that individuals retain control over their identity in an increasingly complex digital environment. However, to reach a balanced approach that satisfies both legal certainty and operational effectiveness, it is necessary to assess whether ownership laws are the most appropriate instrument or whether the answer instead lies in technical sovereignty grounded in trust-based architectures.
The Nature of the Inherent: Identity vs Creation
Any consensus must begin with a fundamental technical distinction already recognised within the European framework. The eIDAS Regulation and guidelines from the European Banking Authority define biometrics as an “inherence” authentication factor: something the user *is*, as distinct from something they know (a password) or something they possess (a token or certificate).
While copyright exists to protect works of the intellect (the result of external creative intent), biometrics belong to the realm of what a person is.
Attempting to fit identity into ownership frameworks imposes a logic that simply does not apply: no one “creates” their face or “designs” their iris. Rather than treating the body as a commercialisable asset, the self-sovereign identity approach allows users to decide, in a modular and contextual way, which attributes they share and for what purposes.
This model, aligned with the GDPR’s principle of data minimisation, prevents individuals from becoming products and ensures that technology acts as a tool for freedom.
The Systemic Risk of an Immutable “Master Key”
From a practical standpoint, it is important to recognise that, unlike passwords or digital certificates, biometric traits cannot be “reset”. If biometrics are treated as a property asset, this could unintentionally incentivise their commodification, creating a permanent risk for individuals.
Turning biological traits into a form of tradable “master key” means that, in the event of a data compromise, the damage would not only be financial but existential.
For this reason, true protection does not stem from a legal ownership title, but from the robustness of the technical architecture safeguarding that data.
Technical Security vs Legal Labels
Security guarantees rely on compliance with international standards and certifications, such as CCN guidelines or NIST evaluations. The real defence against fraud is not legalistic; it is technological.
No “ownership title” can stop a deepfake video injection attack. What truly protects users are advanced systems such as:
Presentation Attack Detection (PAD): Preventing spoofing attempts using photos or masks
Liveness Detection: Ensuring the real presence of a person within a single interaction
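One way to see why liveness must be bound to a single interaction is a challenge-response sketch. The following is a conceptual illustration only, not how any real PAD or liveness product works: real systems additionally analyse the sample itself with machine-learning models. All names here (`issue_challenge`, `sign_capture`, the shared device key) are hypothetical. The idea is that the verifier issues a fresh nonce, and a trusted capture component must bind the biometric sample to that nonce within a short window, so a recording or injected deepfake produced before the challenge existed cannot satisfy it.

```python
import hashlib
import hmac
import secrets
import time

# Hypothetical device key, provisioned at enrolment (illustration only).
SHARED_KEY = secrets.token_bytes(32)

def issue_challenge():
    """Verifier side: a fresh, single-use nonce plus its issue time."""
    return secrets.token_hex(16), time.monotonic()

def sign_capture(nonce, sample_digest):
    """Hypothetical trusted-capture step: the sensor binds (nonce, sample)."""
    msg = f"{nonce}|{sample_digest}".encode()
    return hmac.new(SHARED_KEY, msg, hashlib.sha256).hexdigest()

def verify(nonce, issued_at, sample_digest, tag, max_age=30.0):
    """Accept only if the sample was bound to THIS challenge, recently."""
    fresh = (time.monotonic() - issued_at) <= max_age
    expected = hmac.new(
        SHARED_KEY, f"{nonce}|{sample_digest}".encode(), hashlib.sha256
    ).hexdigest()
    return fresh and hmac.compare_digest(tag, expected)

nonce, t0 = issue_challenge()
sample = hashlib.sha256(b"camera frame bytes").hexdigest()  # stand-in capture
tag = sign_capture(nonce, sample)
print(verify(nonce, t0, sample, tag))   # True: bound to this interaction
```

A pre-recorded video fails here not because it looks fake, but because it cannot carry a valid binding to a nonce that did not exist when it was made; that freshness property is what no ownership title can provide.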
Towards a Model of Proactive Governance
A solid consensus requires recognising that both regulators and technology companies share the same objective: protecting the user. The solution does not lie in rigid regulatory frameworks that attempt to force analog concepts onto digital realities, but in approaches that, while not identical, converge on that shared goal.
Initiatives such as Spain’s Digital ID, the European Union Digital Identity Wallet (eIDAS 2), Mexico’s biometric CURP, and emerging regulations in South Africa are already shaping this balance.
However, not all digital identity models or regulatory approaches follow the same logic, even if they pursue the same objective.
In countries like Mexico and South Africa, identity systems tend to reinforce the reliability of the identity-person link through more centralised approaches. In contrast, initiatives such as Spain’s Digital ID or eIDAS 2 move towards more decentralised, user-centric models built on selective disclosure: users can prove specific attributes (e.g. legal age) without revealing their full identity or relinquishing control over their data.
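The selective-disclosure mechanism described above can be sketched in a few lines. This is a simplified illustration in the spirit of salted-hash credential formats such as SD-JWT, not the actual eIDAS 2 wallet implementation; the function names and data are invented for the example. The issuer commits to each claim separately, so the holder can later reveal one claim (and its salt) while every other attribute stays hidden.

```python
import hashlib
import secrets

def commit(name, value, salt):
    """Salted digest of one claim; only these digests would be signed."""
    return hashlib.sha256(f"{salt}|{name}|{value}".encode()).hexdigest()

# Issuer: commits to every claim individually at credential issuance.
claims = {"name": "Alice", "birth_year": "1990", "over_18": "true"}
salts = {k: secrets.token_hex(16) for k in claims}
signed_digests = {k: commit(k, v, salts[k]) for k, v in claims.items()}

# Holder: discloses only the attribute the verifier needs (legal age).
disclosure = ("over_18", claims["over_18"], salts["over_18"])

# Verifier: recomputes the digest and checks it against the signed set,
# learning nothing about the holder's name or birth year.
name, value, salt = disclosure
assert commit(name, value, salt) == signed_digests[name]
print(f"verified {name} = {value}")
```

The salt is what makes this privacy-preserving: without it, a verifier could guess low-entropy values (such as birth years) and test them against the undisclosed digests.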
These are not equivalent approaches, but complementary ones. This is where proactive governance becomes essential: ensuring that, regardless of the model, technology is implemented with accountability, privacy by design, and genuine user control over identity.
The fundamental question is not who “owns” a face, but who has real control over its use. The future of digital identity must be sovereign, not proprietary.
When implemented within transparent and auditable governance frameworks, technology ensures that biometrics become a vector for both convenience and security, protecting user dignity far more effectively than any legal construct of ownership ever could.