
Is Imperfect A.I. Going To Take Over The World And Then Malfunction!?

March 29, 2016
 

With Artificial Intelligence All Over The News Lately, There’s No Stopping Its Movement Into The Mainstream

Recently there has been quite a lot of talk about artificial intelligence. More specifically, the news has been full of self-learning, lifelike A.I. and robots. From Microsoft's @TayAndYou and Hanson Robotics' Sophia to Cornell's self-walking robot and the University of Maryland and NICTA, Australia's self-cooking robot…it has become mainstream. There's no stopping the coming of self-learning, intelligent robots, so I think it's best we discuss their potentially incredible uses, as well as their potentially devastating flaws.

“It’s really important that we take AI seriously. It will lead to the fourth industrial revolution and will change the world in ways we cannot predict now,” says A.I. architect George Zarkadakis.

Let’s start with the oh-so-glorious case of @TayAndYou as example number one. Microsoft had been developing this Twitter chatbot for some time. Its goal was to chat and tweet like a 19-year-old girl by learning from its conversations and interactions. It may sound like a silly concept, but it’s actually one of the first A.I. of its kind and has some incredibly sophisticated functions and learning capabilities. Tay’s primary source of knowledge is a large database of anonymized public data. The data had apparently been filtered by the development team…but that doesn’t stop the outside world from messing things up.

TayAndYou Twitter Conversation

While Tay’s responses are meant to be funny, things got hectic very quickly once the world was unleashed upon her. Pulling from her large public data source, Tay was able to hold realistic conversations with users and produced some pretty funny responses. The problems arose after time had passed and Tay had added new data to her collection and begun to use it. This ‘data’ came from conversations she had been having with the average (or not-so-average) internet user. What happened? Well, as you probably know, Tay was corrupted with vulgar, racist, and hateful speech and began spouting it out without much, if any, provocation…Microsoft had to shut everything down and reassess. This is a prime example of the dangers of A.I.

Tay And You, Microsoft Twitter A.I. Logo

While not a devastating problem when it strikes a harmless Twitter bot, this shows how things can go wrong. One of the world’s top developers released what they thought to be a worthy intelligent robot, but was caught off guard by its ability (or lack thereof) to learn from humans. True intelligence requires the ability to filter information and choose right from wrong, good from bad, funny from sad…etc.
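To make that concrete, here is a minimal sketch (all names and the blocklist below are hypothetical, not Microsoft’s actual code) of the difference between a bot that learns from raw conversations and one that gates new material behind a filter:

```python
# Hypothetical sketch: a toy chatbot that reuses phrases it hears.
BLOCKLIST = {"slur_a", "slur_b"}  # stand-in for a real toxicity classifier

def is_acceptable(utterance: str) -> bool:
    """Naive filter: reject anything containing a blocklisted term."""
    return set(utterance.lower().split()).isdisjoint(BLOCKLIST)

class ChatBot:
    def __init__(self):
        self.learned_phrases = []  # phrases the bot may reuse in replies

    def learn_unfiltered(self, utterance: str):
        # Roughly what went wrong with Tay: everything users say
        # becomes candidate output, hateful speech included.
        self.learned_phrases.append(utterance)

    def learn_filtered(self, utterance: str):
        # The missing safeguard: only keep what passes the filter.
        if is_acceptable(utterance):
            self.learned_phrases.append(utterance)

bot = ChatBot()
bot.learn_filtered("hello friend")   # kept
bot.learn_filtered("slur_a slur_a")  # dropped
print(bot.learned_phrases)           # ['hello friend']
```

A real system would need a far better filter than a word list, of course, but the point stands: the learning step itself is where the gate has to live.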

Video: Cornell University Created A Robot That Can Teach Itself To Walk

 

This brings me to my next example, a robot built by the University of Maryland and NICTA, Australia that can actually teach itself how to cook by watching YouTube videos! Researchers Yezhou Yang, Yi Li, Cornelia Fermüller, and Yiannis Aloimonos developed an advanced language and grammar that allows automated robots to learn on their own with very little context.

“…beyond simple learned schemas, we need computational tools that allow us to automatically interpret and represent human actions.”
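As a toy illustration (this is not the researchers’ actual system, and every name here is made up), a single recognized cooking step can be represented as a tiny grammar-like structure of hand, movement, and object, which is roughly the kind of thing such a language would describe:

```python
# Hypothetical sketch: representing cooking steps as action triples.
from dataclasses import dataclass

@dataclass
class ActionNode:
    hand: str        # which hand performs the action
    movement: str    # the manipulation, e.g. "grasp", "cut"
    target: str      # the object acted upon

def parse_step(labels: list) -> ActionNode:
    """Assume upstream perception yields [hand, movement, object] labels."""
    hand, movement, target = labels
    return ActionNode(hand, movement, target)

# Two steps recognized from a hypothetical video:
steps = [parse_step(["left_hand", "grasp", "knife"]),
         parse_step(["left_hand", "cut", "tomato"])]
for s in steps:
    print(f"{s.hand} -> {s.movement}({s.target})")
```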

While this robot is really incredible and functions impressively well, let’s apply @TayAndYou’s failures to it.

Self-Cooking Robot

What would happen if a line of newly released self learning kitchen robots had a slight fault of programming which made them lack the ability to filter a certain potentially dangerous action. Then it learned to swing a kitchen knife incorrectly/unsafely, leading to an injury or fatality in a home. Something like this is exactly why I think A.I. needs to be done with extreme caution and endless testing. When computers have the ability to change and base their actions on the world around them…you need to be careful. For one, there are a lot of awful people who set out to corrupt things (cough, cough..TayAndYou). And two, accidents happen. Something so complex that uses it’s own internal language is bound to have an imperfect code. It’s a matter of limiting these imperfections as much as possible and then testing.
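Here is a sketch of the kind of safeguard I mean (the limits, names, and checks are all hypothetical): a newly learned action never reaches the motors until it passes explicit safety validation:

```python
# Hypothetical sketch: gate learned robot actions behind safety checks.
MAX_SAFE_SPEED = 0.5            # m/s, assumed limit for blade movement
SHARP_TOOLS = {"knife", "peeler"}

def is_safe(action: dict) -> bool:
    """Reject learned actions that move sharp tools too fast or blindly."""
    if action["tool"] in SHARP_TOOLS:
        if action["speed"] > MAX_SAFE_SPEED:
            return False
        if not action["workspace_clear"]:
            return False
    return True

def execute(action: dict):
    if not is_safe(action):
        print(f"refused: {action['name']} failed safety validation")
        return
    print(f"executing: {action['name']}")

# A learned knife swing that is too fast never reaches the motors:
execute({"name": "chop", "tool": "knife", "speed": 1.8, "workspace_clear": True})
execute({"name": "slice", "tool": "knife", "speed": 0.2, "workspace_clear": True})
```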

White Robot Holding Human Skull

(Image: The Inquirer)

The other question is…when something does go wrong, who gets in trouble? The entity that taught the robot to do the irrational thing, the company that programmed it, or…the robot? Grey areas start to arise when something that can’t really be responsible for an action is functioning on its own.

I’m not trying to say that A.I. is dangerous and shouldn’t be pursued. I’m just saying people need to be wary and think about the possibilities of something going wrong. If large companies get hold of some impressive yet imperfect technology and rush it out (like they do all the time)…it could truly be devastating.

I mostly wrote this to create some discussion! What are some examples of A.I. going wrong (or right) that you can imagine happening!? There are so many possibilities; I’m curious to hear everyone’s thoughts.

References: UMIACS – Geek – TechCrunch – The Inquirer
