
AI already interprets emotions, but the law has not yet adapted, according to a UPV/EHU study

These systems work continuously, analyzing what we do at every moment and drawing conclusions about our behavior, which makes them difficult to classify under the current legal categories. The study has received an award from the Spanish Data Protection Agency. The UPV/EHU professor and researcher Leire Escajedo has highlighted "the gap between the speed with which technology progresses and the ability of law to adapt to it."

Leire Escajedo San Epifanio. Photo: Egoi Markaida (UPV).

Artificial intelligence (AI) already interprets people's emotions and behaviors, but current legislation does not yet respond adequately to this type of technology, according to a study by UPV/EHU Professor of Constitutional Law Leire Escajedo.

In her study Biometric Non-identifier Treatments through Artificial Intelligence, which received an accésit (runner-up award) at the 2025 Emilio Aced Prize of the Spanish Data Protection Agency (AEPD), Escajedo analyses an increasingly visible phenomenon: the use of technologies capable of interpreting how we feel or how we act from signals in our bodies.

The study notes that current legislation does not respond adequately to this type of technology, because it was conceived with other scenarios in mind: longer data-collection processes or one-off automated decisions.

However, these systems work continuously, analyzing what we do at every moment and drawing conclusions about our behavior, which makes them difficult to classify under the current categories.

That is why the UPV/EHU professor has stressed that "there is a gap between the speed with which technology progresses and the ability of law to adapt to that technology."

Many everyday devices and applications incorporate artificial intelligence systems that collect and analyze information from our face, our voice, our gaze or the way we interact with them.

Thus, these systems can detect, for example, whether a person is paying attention, showing interest or fatigue, lingering on a piece of content or reacting to it, and then use that information to adjust what is shown to them, from videos to ads or messages designed to attract attention or influence behavior.

To do this, they use biometric technologies, such as those used for fingerprint recognition or face unlock, but in this case they are not used to identify the person, but to interpret their behavior and state at every moment.

"We have moved from receiving data directly provided by the user to deducing more intimate information without the person being identified."

"We have gone from collecting the data that the user provides directly to inferring more intimate information without indicating people, "says Escajedo." These tools, in addition to simply recording what we do, try to interpret how we feel or how we could behave and are able to adapt to modulate our behavior, "he warns.

One of the main risks identified by the study concerns personal freedom, since these systems can end up "influencing decision-making" by adapting the digital environment to observed behavior. The study also warns of the "lack of transparency about the data generated and the purposes for which they are used."

"The problem is not solved by just informing the user: accepting conditions or" clicking "doesn't mean we understand what kind of information is being generated, how it is being used or how it can ultimately condition our decisions," Escajedo said.

The study recalls that the General Data Protection Regulation was adopted in 2016, when biometrics were mainly used to identify people, whereas nowadays they are increasingly used to interpret people's behavior. 

Moreover, according to the researcher, the European Artificial Intelligence Regulation "has not given sufficient importance to these uses, despite their real impact."
