

Should Artificial Intelligence Be Awarded Its Own Legal Personality?

By TrendsTechBlog, in TECHNOLOGY, on January 7, 2021

Before we start, I just want to say no, we're not talking about how charismatic and energetic artificial intelligence can be, or whether it should legally have a personality we can all relate to. By 'legal personality,' I mean the legal status the law grants to persons, and the question matters because artificial intelligence is increasingly becoming a part of our everyday lives, perhaps more than we know.

It's used in everything, from AI chatbots on social media to Google search results, even down to what content we see on our social media feeds; it's all chosen and recommended to us by an AI trying to give us what we want. However, with the use of this technology, new threats and dangers begin to arise.

Think of physical human safety when working with machinery or using tools like self-driving cars. Privacy and data protection are huge concerns when an AI has access to literally millions of private accounts with little human oversight. So when things do go wrong, or leaks happen, who is to blame?

Who Is Legally Responsible?

It's shockingly true, and perhaps obvious to some, that the current legal frameworks of our global societies are not cut out for dealing with the complexities of AI. This means that as we progress into the future, governments and officials need to figure out what they're going to do and how AI is going to be regulated, and they need to do it fast.

So far, parliaments, especially those in Europe, have broken down the essential considerations into six main categories, which apply to both AI systems and smart robots. These are:

  • Ethics
  • Liability
  • Connectivity of intellectual property
  • Education and employment
  • Coordination and oversight
  • Safety and security

So, what do we mean by a 'legal personality'? Well, as lawmanaging explains, if you go into a court of law because you've committed a crime, or at least been accused of committing one, you have a legal personality as a human being, which entitles you to certain rights and forms of treatment under the law.

For example, if you're a US citizen, then you have a US legal personality, which entitles you to the rights of a US citizen as defined by law. However, if you're a US citizen but you're in Russia, Europe, Australia, and so on, then you'll be subject to a different set of rights.

So Should AI Have Its Own?

While it's true that humans are creating AI, it's just as true that nobody really knows how it works; even its designers often can't fully explain why a system made a particular decision. If something goes wrong, should the designers and coders be held accountable for behavior they don't really understand? Should the AI itself be charged for doing something it wasn't supposed to, or for breaking the law? How would such an AI be punished or penalized? And herein lies the problem.

Conclusion

An AI cannot, as of yet, be penalized or punished as a criminal would. Only humans can be. So instead, responsibility for these systems needs to rest in the hands of the companies and organizations that are using them. When contracts are made between humans and companies using AI, it's the humans that need to be held accountable for what happens.

