In this article, the core question is whether artificial intelligence (AI) could do more harm than good to humans. As far back as 1942, Isaac Asimov, one of the greatest science fiction writers, addressed this very contention with a set of laws for robots. First, a robot may not injure a human being or, through inaction, allow a human being to come to harm. Second, a robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law. Third, a robot must protect its own existence as long as such protection does not conflict with the First or Second Law. Fourth (a "Zeroth Law" that Asimov added later), a robot may not harm humanity or, by inaction, allow humanity to come to harm.
Fast-forward to the 1960s: the British mathematician I. J. Good, who had worked at Bletchley Park with Alan Turing, argued that an ultra-intelligent machine could enhance human life and might be the last invention that humans need ever make. However, once machines surpassed human capabilities, they could themselves design and assemble a new generation of even more powerful machines; such self-improving systems are a textbook example of a disruptive technology. And if humans transcended their biological form and merged with computers, losing their individual characters and evolving into a common consciousness, then we would be in trouble. We do not know where the boundary lies between what could happen and what will remain science fiction. For now, some of the most credentialed experts in the field of AI concur that AI architects need serious guidelines for responsible technological innovation.
Most significantly, at our seminar coming in January, we want developers to be mindful of these laws while presenting their papers. Martin Rees, the UK's Astronomer Royal, stated in his article that "Robots Must Abide by Laws — or Humans Could Become Extinct; as the Technology of Artificial Intelligence Advances Rapidly, Scientists Fear We Could Be at Risk from Our Own Inventions."

For example, at California State University (where the next seminar will be held), I used the HP-12C calculator in my finance class in the 1980s. I was fascinated by how quickly it could produce answers to my present value, future value, ratio, and other supposedly complex problems, far faster than I or anyone else in the class could.

Yes, robots are a necessary evil in any organization or manufacturing business, but by 2050, if not sooner, our society and businesses will be completely transformed by robotic machines. The critical question is whether robots will remain idiot savants or will come to display full human capabilities, because if robots start observing and interpreting their environment as adeptly as humans do, then we may be in for serious trouble, to put it mildly. Robots would then be perceived as intelligent beings in their own right, and we humans could find ourselves subjected, subjugated, and answerable to them rather than vice versa. This is where Asimov's laws come in: we want robots to remain docile rather than going rogue, because a hyper-computer could develop a mind of its own, infiltrate the internet, manipulate the rest of the world, and come to treat humans as encumbrances to its progress or threats to its very existence. A scary thought!
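The present-value and future-value problems mentioned above are simple enough to sketch in code. Below is a minimal, illustrative Python snippet; the function names and example figures are my own and are not drawn from the original class exercises:

```python
def future_value(pv, rate, periods):
    """Future value of a lump sum compounded once per period."""
    return pv * (1 + rate) ** periods


def present_value(fv, rate, periods):
    """Present value: discount a future amount back to today."""
    return fv / (1 + rate) ** periods


# Hypothetical example: $1,000 invested at 5% per year for 10 years
fv = future_value(1000, 0.05, 10)    # about 1628.89
pv = present_value(fv, 0.05, 10)     # discounts back to roughly 1000.00
print(round(fv, 2), round(pv, 2))
```

These are exactly the compounding and discounting formulas a financial calculator like the HP-12C evaluates internally, which is why it could answer them instantly.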