Artificial intellect vs human intellect

There is a major difference between human and machine thinking. Even as machines start to surpass us on many levels of intellect, they still fall short of humans in certain aspects of understanding.

The main reason the three laws of robotics can never be upheld by artificial systems themselves is a lack of understanding: knowing the data without the greater understanding behind the data is a major roadblock.

Simply having and arranging data is not the same as understanding what the information behind that data means. Simple input and output of data may give a system the illusion of being human-like, but that is all it is: an illusion.

Morality is a complicated issue when artificial intelligence is involved. Telling a machine that whiskers, a tail and ears of certain shapes mean it should recognise a cat does not teach the machine how to act around a cat. Telling a machine that a cat is alive does not lead to any greater understanding of what life is; to the system it is just a label that better identifies that a "cat" is "alive". It does not add to the knowledge of what "life" is or what it means to be "alive".
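As a rough, hypothetical sketch of that point (the object names, features and the label_object function below are invented for illustration), a feature-matching recogniser can attach the labels "cat" and "alive" to a set of observed features, yet those strings mean no more to the program than keys in a dictionary.

```python
# Hypothetical sketch: a feature-matching "recogniser" that attaches labels
# without any understanding of what those labels mean.

KNOWN_OBJECTS = {
    "cat": {"features": {"whiskers", "tail", "pointed ears"}, "alive": True},
    "toy car": {"features": {"wheels", "plastic body"}, "alive": False},
}

def label_object(observed_features):
    """Return (label, alive_flag) for the best-matching known object, if any."""
    best_label, best_overlap = None, 0
    for label, info in KNOWN_OBJECTS.items():
        overlap = len(observed_features & info["features"])
        if overlap > best_overlap:
            best_label, best_overlap = label, overlap
    if best_label is None:
        return None
    # "alive" is just a boolean attached to a string: nothing here encodes
    # what being alive means or how to behave around a living thing.
    return best_label, KNOWN_OBJECTS[best_label]["alive"]

print(label_object({"whiskers", "tail", "pointed ears"}))  # ('cat', True)
```

The label comes back correctly, but the system has gained no knowledge about life itself, only another entry in its lookup.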

Even if a higher level of understanding can be achieved within artificial intelligence, such as telling the difference between a moving object and a living thing, knowing how to respond to each, and how not to respond, is just as important.

If an action puts a living thing in danger, an artificial intelligence facing a new situation for which it has no recorded data may cause harm.

Not knowing what to do between one labelled set of data and another, an artificial intelligence may be unable to respond correctly and so cause damage.

Simply providing data and labels, expecting a system to understand anything outside its range of calculations, and expecting an artificial intelligence to instantly have some moral guidance or understanding of what life is, would be dangerous.

Morality in itself is a complicated set of ideas surrounding life and society in general. As we move further and further into the era of machine learning, programming and teaching such machines poses complications, and even though other general abilities move forward, this concept is still out of reach.

Basic artificial intelligence is able to respond to information when it comes to inputs and outputs, but when it comes to weighing up the value of human life, morality itself is a complicated issue.

A good example of this is the train track scenario, in which a person is given the option of switching a train that is bound to hit track A onto track B instead.

Many scenarios can be constructed from this. For example, if one track has a single criminal and the other five innocent people, or vice versa, would you switch the track? Equally, you can put a loved one on one track and five other people on the other, or vice versa, and again ask: would you switch the track?
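To make the difficulty concrete, here is a hypothetical sketch (not taken from any real system) of how a machine might reduce the switch decision to arithmetic. The weights and the should_switch function are invented, and choosing those weights is exactly the moral problem a machine cannot solve for us.

```python
# Hypothetical sketch: a machine "weighs" the trolley problem as arithmetic.
# The weights below are arbitrary assumptions; choosing them IS the moral problem.

PERSON_WEIGHTS = {
    "innocent": 1.0,
    "criminal": 0.5,   # assumption: is a criminal's life worth less? who decides?
    "loved one": 5.0,  # assumption: personal attachment as a multiplier
}

def track_cost(people_on_track):
    """Sum the assumed 'value' of everyone the train would hit on this track."""
    return sum(PERSON_WEIGHTS[person] for person in people_on_track)

def should_switch(track_a, track_b):
    """Switch only if track B's total cost is lower than track A's."""
    return track_cost(track_b) < track_cost(track_a)

# One criminal vs five innocent people:
print(should_switch(["innocent"] * 5, ["criminal"]))   # True
# A loved one vs five strangers: the invented weight flips the answer.
print(should_switch(["innocent"] * 5, ["loved one"]))  # False
```

The code runs, but the answers it gives are only as moral as the numbers someone typed into it.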

In these complicated situations, humans themselves struggle with the answers to these questions, before we even reach machine morality.

Another reason why machine morality will be a struggle is self-checking systems: if a self-checking system is not built for a situation, that situation will not be accounted for.

Current self-checking systems in modern operating systems and applications may not understand a situation fully. Lacking the data needed to analyse the situation, or the ability to respond to that data correctly, they can process the situation as fine and continue without knowing any better.

Many systems have an emergency shutoff, and the best way to handle unknown data is to simply stop all actions and await further human analysis.
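A minimal sketch of that fail-safe pattern, assuming a hypothetical controller with a list of recognised situations and a confidence score (all names and thresholds here are invented): when the input falls outside what the system was built to check, the safest default is to halt and flag the situation for a human rather than guess.

```python
# Hypothetical fail-safe sketch: halt on unknown input instead of guessing.

KNOWN_SITUATIONS = {"obstacle ahead", "clear path", "low battery"}
CONFIDENCE_THRESHOLD = 0.9  # assumed cut-off below which the system will not act

def handle(situation, confidence, act, emergency_stop):
    """Act only on recognised, high-confidence situations; otherwise stop
    everything and wait for human analysis."""
    if situation not in KNOWN_SITUATIONS or confidence < CONFIDENCE_THRESHOLD:
        emergency_stop()
        return "halted: awaiting human analysis"
    return act(situation)

# Example usage with stand-in callbacks:
result = handle(
    situation="unfamiliar object on track",
    confidence=0.97,
    act=lambda s: f"acting on {s}",
    emergency_stop=lambda: print("emergency shutoff engaged"),
)
print(result)  # halted: awaiting human analysis
```

The design choice is deliberately conservative: a false halt wastes time, but a confident guess on unknown data is exactly the failure mode described above.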


