Sunday, 10 December 2017
Artificial Intelligence: Where is it going? Militarization?
I've been seeing articles about Artificial Intelligence all over the net. One that struck a chord: DeepMind's AlphaZero, given only the rules of chess, mastered the game in about 4 hours of self-play (and Go in 8 hours)! Granted, we are a long way off from sentience here, but what if you take an A.I., give it the history of every war and battle maneuver, give it all current battle and tactical knowledge plus predictive technology, and weaponize it? Granted, humans can be unpredictable, but what if it learned everything it could about combat? Do we have a Skynet scenario happening? What if A.I. surpasses human intelligence and can predict human behavior? Will A.I. see us as a flawed construct... always making mistakes, violent, inefficient?

It is a little worrisome. Maybe my understanding of Artificial Intelligence is way off, but essentially, mastering chess and other games like it could be a starter kit for predicting combat behavior. Granted, there are way more variables involved, but yeah... I think you know what I'm getting at! And I think I have answered my own question: for an A.I. to truly know what a human is going to do, it has to understand everything about us, and about every individual. It has to understand behavior patterns and psychology (if its learning method is the same as the chess or Go method), and it has to understand the chemical reactions in the brain and body.

I think artificial intelligence has many benefits (self-driving cars, no physical labor jobs), but there are risks as well (combat and tactical knowledge used against us, or anyone). I know Isaac Asimov suggested the Three Laws of Robotics as a solution to some of these questions. I'm really curious about the militarization of A.I.: where does an A.I. draw the line when fighting other combatants, and how will it determine friend from enemy? Should I even be asking this question? From what I've read, there's a lot of consensus among A.I. developers that we shouldn't develop autonomous weapons... and I agree! What are your thoughts?
Note: There are some heavy moral questions we have to ask ourselves, and we have to be very careful of our steps when developing A.I. The main question being: should we develop autonomous weapons at all? And if we do, will this lead to a new arms race, where countries try to develop smarter and smarter A.I. that would be virtually impossible to beat in combat situations? Which leads to another question: should there be laws in place that limit this kind of development? Because A.I. could become a weapon of mass destruction!