Artificial Intelligence Watchdog Necessary to Increase GDPR Transparency for Automated Decisions

The European Union’s General Data Protection Regulation offers safeguards against the risk that a damaging automated decision is made about a person without human intervention.

Automated decisions are decisions made solely through the automatic processing of personal data, without human involvement. A person’s rights in relation to such systems arise when an automated decision has a substantial effect on them, for example on their creditworthiness or job performance. These rights are built into the GDPR to ensure that no critical decision concerning a person is made without human participation. In this regard, an artificial intelligence watchdog should be created to prevent discrimination when automated decisions about people’s lives are made.
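To make the safeguard concrete, the following minimal Python sketch shows one way a decision pipeline could be wired so that any automated decision with legal or similarly significant effects is held for human review before it becomes final. Every name here (the decision record, the review queue, the scoring threshold) is a hypothetical assumption for illustration, not part of the GDPR or of any real system.

```python
# Illustrative sketch only: a hypothetical decision service that routes any
# automated decision with significant effects to a human reviewer before it
# takes effect. All names and thresholds are invented for this example.
from dataclasses import dataclass


@dataclass
class Decision:
    subject_id: str
    outcome: str          # e.g. "approve" or "reject"
    significant: bool     # legal or similarly significant effect on the person


class ReviewQueue:
    """Hypothetical queue where human reviewers confirm or overturn decisions."""

    def __init__(self):
        self.pending = []

    def submit(self, decision: Decision) -> None:
        self.pending.append(decision)


def decide_credit(subject_id: str, score: float, queue: ReviewQueue) -> Decision:
    # Purely automated step, using a made-up scoring threshold.
    decision = Decision(subject_id, "approve" if score >= 0.5 else "reject",
                        significant=True)
    # Safeguard: a significant decision is never final without human participation.
    if decision.significant:
        queue.submit(decision)
    return decision


queue = ReviewQueue()
d = decide_credit("applicant-42", score=0.31, queue=queue)
print(d.outcome, "- awaiting human review:", d in queue.pending)
```

The design choice being illustrated is simply that the human-participation requirement sits in the pipeline itself, rather than being left to an after-the-fact appeal.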

Technological development in artificial intelligence has produced many algorithms used by organizations and institutions worldwide, such as banks, police forces, and employers. These systems are not perfect, however; they occasionally make mistakes that lead to bad decisions, and those erroneous decisions harm individuals’ lives. Several people have fallen victim to a flawed automated decision and faced consequences such as detention, job loss, removal from the electoral register, and revocation of a driving licence. Errors in the algorithms lie behind all of these cases. The world tends to rely so heavily on automated processes that it effectively gives up the right to question their correctness, and current laws appear ineffective at protecting people from the errors of computerized decisions.

The General Data Protection Regulation adopted by the EU member states looks set to offer a real solution to several protection concerns around automated decisions. The GDPR will govern how artificial intelligence can be challenged. It grants people the right not to be subject to a decision that is based solely on automated processing and that produces legal or similarly significant effects on them, and it also contains a right to an explanation. However, it is feared that the GDPR framework lacks firm legal guarantees for these rights, and experts therefore call for the law to be improved so that they are unambiguously supported.

The suggestion to set up an artificial intelligence watchdog may face some challenges, though. Monitoring algorithms would be difficult because it is not clear how a watchdog could be properly equipped to carry out that function, and the task appears complex.

Moreover, protests are bound to arise when firms are compelled to ensure that their artificial intelligence systems can explain themselves, since contemporary AI methods are widely regarded as inscrutable.
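One way to picture what “an AI that can explain itself” might mean is a model whose score decomposes into per-feature contributions. The minimal sketch below uses an invented linear credit-scoring model, with feature names and weights that are pure assumptions, to show such a decomposition; the difficulty raised above is that modern opaque, highly non-linear models do not offer an equally faithful breakdown.

```python
# Illustrative sketch only: per-decision "explanations" from a simple linear
# scoring model. Feature names and weights are invented for this example; the
# point is that each feature's contribution to the score can be reported.
WEIGHTS = {"income": 0.8, "existing_debt": -1.2, "missed_payments": -0.9}
BIAS = 0.1


def score(applicant: dict) -> float:
    # Linear score: bias plus the weighted sum of the applicant's features.
    return BIAS + sum(WEIGHTS[f] * applicant[f] for f in WEIGHTS)


def explain(applicant: dict) -> list[tuple[str, float]]:
    """Return each feature's contribution to the score, largest impact first."""
    contributions = [(f, WEIGHTS[f] * applicant[f]) for f in WEIGHTS]
    return sorted(contributions, key=lambda fc: abs(fc[1]), reverse=True)


applicant = {"income": 0.4, "existing_debt": 0.7, "missed_payments": 1.0}
print("score:", round(score(applicant), 2))
for feature, contribution in explain(applicant):
    print(f"  {feature}: {contribution:+.2f}")
```

For a linear model this breakdown is exact; for deep or ensemble models, comparable explanations are only approximations, which is why demands that firms’ systems explain themselves are expected to meet resistance.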

In sum, the GDPR enshrines rights that safeguard people against the risks of automated decisions. However, it would be rational to strengthen the rules to ensure full protection. A regulatory body such as the proposed artificial intelligence watchdog, backed by legal structures, would be necessary to force transparency around automated decisions.