By Foo Yun Chee
BRUSSELS (Reuters) – Individuals and companies that suffer harm from drones, robots and other products or services equipped with artificial intelligence software will find it easier to sue for compensation under EU draft rules seen by Reuters.
The AI Liability Directive, which the European Commission will announce on Wednesday, aims to address the increasing proliferation of AI-enabled products and services and the patchwork of national rules across the 27-country European Union.
Victims can sue for compensation for harm to their life, property, health and privacy caused by the fault or omission of a provider, developer or user of AI technology, or if they were discriminated against in a recruitment process that used AI, the draft rules said.
The rules seek to lighten the burden of proof on victims by introducing a “presumption of causality”, which means victims only need to show that a manufacturer or user’s failure to comply with certain requirements caused the harm and then link this to the AI technology in their lawsuit.
Under a “right of access to evidence”, victims can ask a court to order companies and suppliers to provide information about high-risk AI systems so that they can identify the liable person and find out what went wrong.
The EU executive will on Wednesday also update the Product Liability Directive, which sets out the scope of manufacturers' liability for defective products ranging from smart technology and machinery to pharmaceuticals.
The proposed changes will allow users to sue for compensation when software updates render their smart-home products unsafe or when manufacturers fail to fix cybersecurity gaps.
Users with unsafe non-EU products will be able to sue the manufacturer’s EU representative for compensation.
The AI Liability Directive will need the green light from EU countries and EU lawmakers before it can become law.
(Reporting by Foo Yun Chee. Editing by Jane Merriman)