Intellectual Property and AI Systems: is this just the end of the first round?
AI has already entered our daily lives, even if many of us don’t know it yet.
This may be because the AI systems currently on the market and aimed at the general public belong to what is known as “weak” AI: the type of artificial intelligence that mimics the capabilities of the human intellect in a simplified and often “single purpose” mode, in which the purpose is preset by the trainer.
However, there are already some examples of so-called “strong” AI that have caught the attention of the media and the public.
Consider, for example, the satellite-photo transformation algorithm, developed by a Stanford University spin-off working with a group of Google engineers, that learned how to cheat. After training the machine, the researchers found that during the transformation of satellite photos into cartographic maps, some elements the machine had deemed irrelevant were automatically removed, only to reappear almost magically during the reverse process, from the cartographic version back to satellite images.
The University of Tennessee’s Nautilus project is also interesting. Nautilus used artificial intelligence to analyze a large number of local news items, some of which came from the archives of the BBC or the New York Times and spanned several post-war decades up to the present. In this case, the machine had been trained to recognize certain “mood words”: words that indicate the emotional state (e.g., terrible, nice) of the people or events described in the articles. Based on the frequency of use of these words, the researchers found that the system was able to predict the growing dissatisfaction of large groups of people and its possible consequences (e.g., civil unrest). In particular, it provided clear indications of “critical” situations in relation to what later became known, in North Africa over the past decade, as the “Arab Spring”.
Not to mention the humanoid robots that appear as guest stars at numerous public events and television shows (just search for “humanoid robot” or something similar on YouTube and you will find several examples).
The impact AI has had on privacy, corporate governance, labor law and, last but not least, intellectual property has probably been received as a wake-up call in legal and business circles, where it has caused real upheaval.
With regard to intellectual property, legal and political systems do not (yet?) seem willing to recognize AI machines as legal persons in their own right.
Recognizing that AI systems can enjoy rights or be subject to obligations like humans poses a major ethical barrier.
With regard to obligations and personal liability, some important rules of our system could probably be extended and applied, with the necessary changes, to ensure compensation for loss and damage caused by AI-controlled tools. Take, for example, self-driving vehicles and road traffic regulations, specifically article 2054 of the Italian Civil Code, which ultimately places liability on the owner of the vehicle unless he can prove that he did everything possible to avoid the damage. Or take article 2050 of the Italian Civil Code, which relates to the performance of (inherently) dangerous activities and places liability on those who carry out such activities unless they can demonstrate that they took all appropriate measures to avoid the harm (this could work well in the field of robotic surgery).
On the other hand, it seems more complex to recognize legal rights in favor of AI systems, even though machines are quite capable of matching human works of art in terms of “creativity”. Take the portrait of Edmond Belamy (obviously a fictitious name), painted by an AI after analyzing around 15,000 portraits produced between the 14th and 20th centuries (which were used as input for the machine): the result is absolutely new and creative, and it was auctioned for a record sum of around 400,000 US dollars.
However, the current regulatory approach in Italy, Europe and several non-European countries is based on a formalistic, human-centered concept of authorship, which in some cases leads to “makeshift” solutions, such as the one adopted by the UK Copyright, Designs and Patents Act, under which there can be a first-level creator and owner, who is necessarily human, and a second-level creator (the AI system), who holds no legal rights.
For the same reason, the European Patent Office denies to this day that an AI can be named as “inventor” in a patent application, and consequently denies patentability in these cases, even where the invention developed by the artificial intelligence is novel and involves an inventive step.
The patent applications filed in the DABUS cases illustrate this approach.
Two patent applications were filed in 2019 by Mr. Thaler, who described himself as the “employer” and owner of the AI, while the AI itself was named as the “inventor”. The first application related to a food container and the second to a light-signaling device to be used in search-and-rescue operations.
Both applications were rejected on formal grounds: according to the Office, the inventor must have a first name, a surname and a postal address, and must therefore be a person with full legal personality. As a result, an AI system can be neither the inventor nor the owner of the intellectual property rights in such an invention.
These solutions clearly show their limits and, like a transparent screen, barely conceal the underlying ethical and substantive question.
More than ever, the political debate should seriously address the issue of the legal personality of AI systems: technology moves infinitely faster than our parliamentary halls, and this issue needs to be tackled urgently.
So for the time being legal formalism 1 – AI 0. But that’s only the end of the first round.