EU Commission wants clear liability rules for AI systems

The EU Commission has issued a legislative proposal to answer the question of who is liable in the event of an accident involving a self-driving vehicle.

The AI Liability Directive (2022/0303(COD)) is intended to establish rules determining who is liable when products or services using artificial intelligence cause harm.
In addition to the question of who is liable, the new directive is also intended to solve problems in the provability of harmful effects.
First, a rebuttable presumption of causality is to be introduced for AI-induced personal injury and property damage. Second, the directive targets the so-called "black box" problem, which arises when the inputs and internal operations of an AI system are not visible to the user.
In such cases, a disclosure obligation is to be introduced for providers of high-risk AI systems. This affects systems that can have a particularly far-reaching impact on life and health, for example systems that grant access to social benefits or are used in asylum procedures.

The directive is to cover only non-contractual liability cases caused by an AI system and involving "a non-contractual civil claim for damages based on fault" (Art. 1(2) of the draft AI Liability Directive). The liable party is to be the operator of the AI system, regardless of whether it is used commercially or privately. State-operated AI systems are also to be covered.

The new liability rules are intended to apply to all AI products that do not yet have their own specific liability regime, including self-driving cars, robot vacuum cleaners, and "smart" home assistants.

The Commission has sent its draft to the Council of the EU and the European Parliament. Once the directive is adopted, its concrete implementation will be up to the member states.
