The EU AI Act comes into effect today, outlining regulations for the development, market placement, implementation and use of artificial intelligence in the European Union.
The Council wrote that the Act is intended to “promote the uptake of human-centric and trustworthy artificial intelligence while ensuring a high level of protection of health, safety, [and] fundamental rights…including democracy, the rule of law and environmental protection, to protect against the harmful effects of AI systems in the Union, and to support innovation.”
According to the Act, high-risk use cases of AI include:
- Implementation of the technology within medical devices.
- Using it for biometric identification.
- Determining access to services like healthcare.
- Any form of automated processing of personal data.
- Emotional recognition for medical or safety reasons.
“Biometric identification” is defined as “the automated recognition of physical, physiological and behavioral human features such as the face, eye movement, body shape, voice, prosody, gait, posture, heart rate, blood pressure, odor, keystrokes characteristics, for the purpose of establishing an individual’s identity by comparing biometric data of that individual to stored biometric data of individuals in a reference database, irrespective of whether the individual has given its consent or not,” regulators wrote.
Biometric identification regulation excludes the use of AI for authentication, such as confirming that an individual is who they claim to be.
The Act says special consideration should be given when AI is used to determine whether an individual should have access to essential public and private services, including healthcare in cases of maternity, industrial accidents, illness, loss of employment, dependency or old age, as well as social and housing assistance, since such uses are classified as high-risk.
Using the tech for the automated processing of personal data is also considered high-risk.
“The European health data space will facilitate non-discriminatory access to health data and the training of AI algorithms on those data sets, in a privacy-preserving, secure, timely, transparent and trustworthy manner, and with an appropriate institutional governance,” the Act reads.
“Relevant competent authorities, including sectoral ones, providing or supporting the access to data may also support the provision of high-quality data for the training, validation and testing of AI systems.”
Companies testing high-risk AI systems in real-world conditions must obtain informed consent from the participants.
Organizations must also keep recordings (logs) of events that occur during the testing of their systems for at least six months, and serious incidents that occur during testing must be reported to the market surveillance authorities of the Member States where the incident occurred.
The Act says AI must not be used for emotional recognition regarding “emotions or intentions such as happiness, sadness, anger, surprise, disgust, embarrassment, excitement, shame, contempt, satisfaction and amusement.”
However, using AI to recognize physical states, such as pain or fatigue (for example, systems that detect fatigue in professional pilots or drivers to prevent accidents), is not prohibited.
Transparency requirements, meaning traceability and explainability, exist for specific AI applications, such as AI systems interacting with humans, AI-generated or manipulated content (such as deepfakes), and permitted emotional recognition and biometric categorization systems.
Companies are also required to eliminate or reduce the risk of bias in their AI applications and to apply mitigation measures when bias does occur.
The Act highlights the Council’s intention to protect EU citizens from the potential risks of AI, while making clear that it is not meant to stifle innovation.
“This Regulation should support innovation, should respect freedom of science, and should not undermine research and development activity. It is therefore necessary to exclude from its scope AI systems and models specifically developed and put into service for the sole purpose of scientific research and development,” regulators wrote.
“Moreover, it is necessary to ensure that this Regulation does not otherwise affect scientific research and development activity on AI systems or models prior to being placed on the market or put into service.”