
Does a lawsuit come with your self-driving car?


When self-driving, so-called autonomous vehicles take to the road in the future, it will be artificial intelligence (AI) that drives the car. This inevitably raises the question: who is responsible when a self-driving car’s AI makes a decision that leads to an accident? Against this background, the European Commission now wants to modernize the European product liability rules, which are almost 40 years old, and to harmonize the rules on liability for AI across the EU.

To this end, the Commission has presented two legislative proposals: a revised Product Liability Directive and an Artificial Intelligence Liability Directive. On the one hand, the new rules are intended to give companies the legal certainty they need to invest in innovative products. On the other hand, they are meant to ensure that victims are adequately compensated when they suffer harm caused by AI.

Meanwhile, artificial intelligence has found its way into many areas of life, whether in Internet search engines, smartphone facial recognition, medicine, or job application screening. In autonomous driving, for example, radar, laser, or ultrasound sensors combined with video cameras on the vehicle replace the driver’s eyes and ears. The resulting flood of data is “fused” by AI software: the AI connects and combines the data in such a way that driving decisions can be derived from it.
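To make the idea of “fusing” sensor data a little more concrete, here is a minimal, purely illustrative Python sketch. The sensor names, confidence values, and the simple weighted average are assumptions made for this example; real driving software uses far more sophisticated probabilistic filters.

```python
# Toy illustration only: merge distance estimates for one detected object
# from several (hypothetical) sensor readings into a single fused value.
from dataclasses import dataclass

@dataclass
class SensorReading:
    sensor: str        # e.g. "radar", "lidar", "ultrasound", "camera"
    distance_m: float  # estimated distance to the object in metres
    confidence: float  # 0.0 .. 1.0, how much this reading is trusted

def fuse_distance(readings: list[SensorReading]) -> float:
    """Combine several distance estimates into one, weighted by confidence."""
    total_weight = sum(r.confidence for r in readings)
    if total_weight == 0:
        raise ValueError("no usable sensor readings")
    return sum(r.distance_m * r.confidence for r in readings) / total_weight

if __name__ == "__main__":
    readings = [
        SensorReading("radar", 24.8, 0.90),
        SensorReading("camera", 26.1, 0.60),
        SensorReading("lidar", 25.0, 0.95),
    ]
    print(f"Fused distance: {fuse_distance(readings):.1f} m")
```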

To do this, the AI is trained on selected datasets using machine learning. This enables it to analyze complex information so that it can, for example, “read” road signs and recognize other road users and traffic situations. The decisions the AI makes are then passed on to the steering, braking, and engine control systems, among others, which implement them accordingly. At least, that is the theory; everyday traffic does not always work out quite so neatly.
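As a rough illustration of that last step, the decision being passed on to braking and engine control, here is a toy Python sketch. Every interface, threshold, and value in it is invented for illustration and does not reflect any real vehicle software, which relies on standardized bus protocols and extensive safety layers.

```python
# Toy illustration only: translate a high-level AI decision (a recognised
# speed limit) into simple brake/throttle commands.
from dataclasses import dataclass

@dataclass
class DrivingDecision:
    target_speed_kmh: float   # e.g. derived from a recognised road sign

def control_commands(decision: DrivingDecision, current_speed_kmh: float) -> dict:
    """Map a target speed to crude brake/throttle commands (toy logic)."""
    delta = decision.target_speed_kmh - current_speed_kmh
    if delta < -5:                           # far too fast: brake
        return {"brake": min(1.0, abs(delta) / 50), "throttle": 0.0}
    if delta > 5:                            # too slow: accelerate gently
        return {"brake": 0.0, "throttle": min(1.0, delta / 50)}
    return {"brake": 0.0, "throttle": 0.1}   # roughly on target: hold speed

if __name__ == "__main__":
    # The AI has "read" a 50 km/h sign while the car is travelling at 72 km/h.
    decision = DrivingDecision(target_speed_kmh=50)
    print(control_commands(decision, current_speed_kmh=72))
```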

However, since the arrival of autonomous vehicles, and with them of artificial intelligence in many other applications, is foreseeable, the European Commission felt compelled to modernize the European rules on liability for defective products in the interest of greater legal certainty. Věra Jourová, the Commission’s Vice-President for Values and Transparency, said that the proposal on civil liability for artificial intelligence aims to give customers the tools to obtain remedies for damage caused by AI, so that they have the same level of protection as with traditional technologies. It is always important to ensure consumer safety, added EU Justice Commissioner Didier Reynders. Appropriate standards for the protection of EU citizens, he emphasized, are the basis of consumer trust and thus of successful innovation.

What exactly do the new legislative proposals from Brussels say about liability for self-driving cars? A key point is that the European Commission wants to reverse the burden of proof in certain individual cases. Until now, under product liability rules the manufacturer has had to cover all damage caused by a defective product, but the injured party has also had to prove that the damage was actually caused by the product in question. For a self-driving car controlled by artificial intelligence, that can be difficult. In clearly defined individual cases, the Commission therefore wants to introduce a reversal of the burden of proof in the future. In other words: in the event of damage, the supplier of the product must prove that its AI worked correctly in order to avoid liability.

The so-called “presumption of causality” is meant to address the difficulties victims face when they have to explain in detail how the damage was caused by a specific fault or omission, which, the Commission says, can be particularly hard when trying to understand and navigate complex AI systems. The presumption of causality means that certain faults “can be assumed, based on reasonable judgment, to be causally related to the performance of the AI”.

In addition, victims of damage caused by AI are to be given more tools to seek legal redress, including a right of access to evidence held by companies and suppliers in cases involving high-risk AI systems. This means that in the future the injured party should be able to demand training or test datasets, technical documentation and logs, or information about quality management systems from the providers; if in doubt, they can sue for disclosure. The court must then check, however, that only the information actually required is disclosed, in order to protect trade secrets. If the provider does not comply with such a request, the burden of proof is reversed and the provider must prove that its AI did not cause the damage.

The new rules are intended to strike a balance between protecting consumers and promoting innovation, as the European Commission emphasizes. At the same time, the Commission wants to remove further barriers that victims face in accessing compensation, while establishing safeguards for the AI sector, such as the right to contest a liability claim based on the presumption of causality.

In order for AI technologies to thrive in the EU, people must trust digital innovation, said Vice-President of the Commission Věra Jourová on the modernization of liability rules. According to the EU Commission, the proposal on civil liability for AI is intended to give customers the tools to assert their own rights in the event of damage caused by AI, so that they have the same level of protection as with conventional technologies. This would also apply to self-driving vehicles controlled by artificial intelligence. (awm)
