Sunday, 10 December 2017

Artificial Intelligence: Where is it going? Militarization?



I've been seeing articles about Artificial Intelligence all over the net. One that struck a chord was about DeepMind's AlphaZero: given just the rules of chess, it mastered the game in about 4 hours of self-play (and Go in about 8 hours)! Granted, we are a long way off from sentience here, but what if you took an A.I., gave it the whole history of wars and battle maneuvers, gave it every current piece of battle and tactical knowledge plus predictive technology, and weaponized it..? Granted, humans can be unpredictable, but what if it learnt everything it could about combat..? Do we have a Skynet scenario happening..? What if A.I. surpasses human intelligence and can predict human behavior..? Will A.I. see us as a flawed construct: always making mistakes, violent, not efficient..?

It is a little worrisome. Maybe my understanding of Artificial Intelligence is way off, but essentially, playing chess and other games like that could be a starter kit for predicting combat behavior. Granted, there are way more variables involved, but I think you know what I'm getting at! And I think I have answered my own question: in order for A.I. to truly know what a human is going to do, it has to understand everything about us, and about every individual. It has to understand behavior patterns and psychology (if its learning method is the same as the chess or Go method), and it has to understand the chemical reactions in the brain and body.

I think artificial intelligence has many benefits (self-driving cars, no more physical-labor jobs), but there are risks as well (combat and tactical knowledge used against us, or anyone). I know Isaac Asimov suggested the Three Laws of Robotics as a solution to some of these questions. I'm really curious about the militarization of A.I.: where does an A.I. draw the line when fighting other combatants, and how will it determine friend from enemy..? Should I even be asking this question? From what I've read, there's a lot of consensus among A.I. developers that we shouldn't develop autonomous weapons... and I agree! What are your guys' thoughts?
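For anyone curious what "mastering a game purely from self-play" actually looks like, here is a toy sketch. This is NOT AlphaZero's algorithm (AlphaZero uses deep neural networks and Monte Carlo tree search); it is a minimal tabular value-learning loop on tic-tac-toe, written only to illustrate the idea of an agent improving by playing against itself, with no human game data. All function names and parameters here are my own illustrative choices.

```python
# Toy self-play learner for tic-tac-toe.
# NOT AlphaZero -- just a minimal tabular Monte Carlo value update,
# to show the shape of "learning a game from self-play alone".
import random

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    lines = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
    for a, b, c in lines:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def play_game(values, eps=0.2):
    """Play one self-play game; return the (state, mover) history and the winner."""
    board = [" "] * 9
    history = []
    player = "X"
    while True:
        moves = [i for i, s in enumerate(board) if s == " "]
        if random.random() < eps:
            move = random.choice(moves)          # explore
        else:
            def score(m):                        # exploit: best-looking state for us
                b = board[:]; b[m] = player
                return values.get((tuple(b), player), 0.0)
            move = max(moves, key=score)
        board[move] = player
        history.append((tuple(board), player))
        w = winner(board)
        if w or " " not in board:
            return history, w
        player = "O" if player == "X" else "X"

def train(n_games=5000, alpha=0.1, seed=0):
    """Learn a state-value table from nothing but self-play results."""
    random.seed(seed)
    values = {}
    for _ in range(n_games):
        history, w = play_game(values)
        for state, mover in history:
            target = 1.0 if w == mover else (-1.0 if w else 0.0)
            old = values.get((state, mover), 0.0)
            values[(state, mover)] = old + alpha * (target - old)
    return values
```

The point of the sketch is the loop structure: the agent's only teacher is the outcome of games against itself, which is exactly why results like the 4-hour chess figure are startling, and why the post's worry about feeding an analogous loop "combat data" is worth taking seriously.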

Note: There are some heavy moral questions we have to ask ourselves, and we have to be very careful of our steps as we develop A.I. The main question being: should we develop autonomous weapons at all? And if we do, will this lead to a new arms race, where countries try to develop smarter and smarter A.I. that would be virtually impossible to beat in combat situations? Which leads to another question: should there be laws in place that limit this kind of development..? Because weaponized A.I. would become a weapon of mass destruction!

3 comments:

  1. I fully agree with all your concerns. I think building autonomous weapons is an obviously foolish choice! These things don't have our weaknesses of flesh, lifespan, and feelings, and can add to their own knowledge continuously... and would, in theory, also eventually acquire the knowledge to BUILD MORE. Even an AI that is not originally designed for military purposes would be far superior to us (illogical war-monkeys) in learning ability and lifespan, and would eventually see us as something that would at least need to be controlled, if not eliminated entirely... humans are a serious problem on this planet, and a logical brain will eventually see that, even if we built that brain ourselves. And once the AIs start building more of themselves, as well as new things we haven't thought of, we are doomed for sure... I think AI is a mistake entirely, in all its forms.

    Replies
    1. I wouldn't necessarily say humanity is doomed... I do believe we will survive for quite some time (not without our trials), but I definitely think there is a danger in A.I. and autonomous weapons. I think in the next few years (probably 10-15) A.I. will be replacing jobs, and it's important that we get some rules and laws in place so we don't accidentally destroy ourselves with this new technology. It's also important to put laws in place for autonomous weapons... because if A.I. is weaponized, we'll be creating a weapon of mass destruction.

    2. I do agree...but I think there is significant danger in summoning this particular sort of demon, no matter what we have in place...
