Possible Future Development From AdamAIChat

This is a rough roadmap of further applications that could be constructed from the technical code behind AdamAIChat.

Certain advanced components from AdamAIChat can be used separately to create other useful systems.

Image by Adam Watson

Looking at some of the features under the hood of AdamAIChat, one of them is the machine code indexing system. This feature compresses text into a smaller piece of text, making the data easier to search by creating a compressed version of what was said. General compression software could be developed from this with very little effort.
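As a rough sketch of how such a compression tool might look, the example below builds a word index and swaps words for short numeric codes; the scheme and function names are assumptions for illustration and are not AdamAIChat's actual indexing system.

```python
# Illustrative sketch only: a simple dictionary-based text compressor.
# This is NOT AdamAIChat's actual machine code indexing system; the
# scheme and names here are assumptions made for demonstration.

def build_index(texts):
    """Assign a short numeric code to every distinct word seen."""
    index = {}
    for text in texts:
        for word in text.lower().split():
            if word not in index:
                index[word] = len(index)
    return index

def compress(text, index):
    """Replace each word with its code, producing a smaller, searchable form."""
    return [index[w] for w in text.lower().split() if w in index]

def decompress(codes, index):
    """Rebuild the original wording from the codes."""
    reverse = {code: word for word, code in index.items()}
    return " ".join(reverse[c] for c in codes)

if __name__ == "__main__":
    corpus = ["hello how are you", "i am ok thanks", "how are you today"]
    idx = build_index(corpus)
    packed = compress("how are you", idx)
    print(packed)                    # short numeric form, e.g. [1, 2, 3]
    print(decompress(packed, idx))   # "how are you"
```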

Another feature has been pointed out by a general user who is a specialist in code breaking.

While analysing the machine code, they noted that a level of encryption is achievable. On further discussion, it was suggested that if this system were used as an encryption system, it could be made unique with a self-changing scheme; this would make the encryption polymorphic, making it even harder to crack.
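To make the polymorphic idea concrete, here is a minimal sketch of a substitution cipher whose key is reshuffled for every message; this is an illustrative toy only, not AdamAIChat's machine code and not real cryptography.

```python
# Minimal sketch of a "self changing" (polymorphic) substitution cipher.
# Purely illustrative: not AdamAIChat's machine code and not secure cryptography.
import random
import string

ALPHABET = string.ascii_lowercase

def new_key(seed):
    """Derive a fresh substitution table from a seed, so both sides can stay in sync."""
    letters = list(ALPHABET)
    random.Random(seed).shuffle(letters)
    return dict(zip(ALPHABET, letters))

def encrypt(message, seed):
    key = new_key(seed)
    return "".join(key.get(ch, ch) for ch in message.lower())

def decrypt(cipher, seed):
    key = {v: k for k, v in new_key(seed).items()}
    return "".join(key.get(ch, ch) for ch in cipher)

if __name__ == "__main__":
    seed = 0
    for message in ["hello there", "hello there"]:
        cipher = encrypt(message, seed)
        print(cipher, "->", decrypt(cipher, seed))
        seed += 1  # the key changes every message, so identical text encrypts differently
```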


Download AdamAIChat 


Highlighted Articles:

AI Self Recognition

AdamAIChat for game creationists

Constructing positive brain files

Artificial intellect vs human intellect

The dangers of AI in the military

Artificial sentience vs true sentience

Artificial intellect vs human intellect

There is a major difference between human and machine thinking. Even as machines start to surpass us on many levels of intellect, they still lack certain aspects of understanding that humans have.

Image by Adam Watson

The main reason why the three laws of robotics can never be upheld by artificial systems themselves is a lack of understanding within the system itself; knowing the data without the greater understanding behind the data is a major roadblock.

Simply having the data and arranging the data is not the same as understanding what the information behind the data means. Simple input and output of data may give a system the illusion of being human-like, but that is simply what it is: an illusion.

Morality is a complicated issue when artificial intelligence is involved. Telling a machine that whiskers, a tail and ears of certain shapes mean it has recognized a cat does not teach the machine how to act around a cat. Telling a machine that a cat is alive does not lead to a greater understanding of what life is; to the system it is just a label used to identify that a "cat" is "alive". It adds nothing to the knowledge of what "life" is and what it means to be "alive".
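A toy example makes the point: in code, "alive" is just another stored flag next to "whiskers" and "tail", and nothing in the structure encodes what life actually means. The example below is hypothetical and not taken from any real system.

```python
# Hypothetical example: labels as a machine "knows" them.
# "alive" is just a flag next to other flags; nothing here encodes what life means.
cat = {
    "whiskers": True,
    "tail": True,
    "ear_shape": "pointed",
    "alive": True,
}

def is_cat(features):
    """Recognition by labels: matches the pattern, understands nothing."""
    return (features.get("whiskers")
            and features.get("tail")
            and features.get("ear_shape") == "pointed")

print(is_cat(cat))   # True: the pattern matches
print(cat["alive"])  # True: but this is only a stored value, not understanding
```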

Even if a higher level of understanding within artificial intelligence can be achieved, such as telling the difference between a moving object and a living thing, knowing how to respond to each, and knowing how not to respond, is just as important.

If an action puts a living thing in danger, an artificial intelligence in a new situation for which it has no recorded data may cause harm.

Not knowing what to do between one labelled set of data and another, an artificial intelligence may not be able to respond correctly, causing damage.

Simply providing data and labels, expecting a system to understand data outside its range of calculations, and expecting an artificial intelligence to instantly have some moral guidance or understanding of what life is, would be dangerous.

Morality in itself is a complicated set of ideas surrounding life and society in general, and as we move further into the era of machine learning, programming and teaching such machines poses complications; even as other general abilities move forward, this concept is still out of reach.

Basic artificial intelligence is able to respond to information when it comes to inputs and outputs, but when it comes to weighing up the value of human life, morality itself is a complicated issue.

A good example of this is the train track scenario, in which a person is given the option of switching a track so that an oncoming train hits track A or track B.

Many scenarios can be constructed from this. For example, if one track has a criminal and the other five innocent people, or vice versa, would you switch the track? Equally, you can put a loved one on one track and five other people on the other, and vice versa, and ask the same question.

In these complicated situations, humans themselves struggle with the answers to these questions, before we even reach machine morality.
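A naive sketch shows why the problem resists programming: the hypothetical rule below simply counts the people on each track, which is a calculation a machine can follow but not an understanding of the lives involved.

```python
# Hypothetical sketch: a purely numerical answer to the track problem.
# Counting heads is a rule a machine can follow, but it is not morality.
def switch_track(track_a_people, track_b_people):
    """Return True to divert the train from track A to track B."""
    # The train currently heads toward track A; divert only if fewer people are on B.
    return len(track_b_people) < len(track_a_people)

print(switch_track(["innocent"] * 5, ["criminal"]))    # True: 1 is fewer than 5
print(switch_track(["loved one"], ["stranger"] * 5))   # False: 5 is not fewer than 1
# Both answers follow the rule, yet neither shows any understanding of who is on the tracks.
```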

Another reason why machine morality will be a struggle is self-checking systems: if a self-checking system is not built for a situation, that situation will not be accounted for.

Current self-checking systems in modern operating systems and applications may not understand a situation fully; lacking the data needed to analyse the situation, or the ability to respond to that data correctly, they can process the situation as fine and continue without knowing any better.

Many systems have an emergency shutoff, and the best way to handle unknown data is to simply stop all actions and await further human analysis.
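A minimal sketch of that fail-safe pattern, assuming a hypothetical control loop: anything the self-check does not recognise triggers a stop and waits for human analysis rather than guessing.

```python
# Minimal sketch of an emergency-stop pattern for unknown data.
# The situation names and handler are hypothetical, for illustration only.
KNOWN_SITUATIONS = {"idle", "moving", "charging"}

class EmergencyStop(Exception):
    """Raised so a human can analyse the situation before anything continues."""

def self_check(situation):
    if situation not in KNOWN_SITUATIONS:
        # The safest response to data the system was never built for:
        # stop all actions and wait for human analysis.
        raise EmergencyStop(f"Unknown situation: {situation!r}")
    return "continue"

if __name__ == "__main__":
    for situation in ["moving", "unrecognised obstacle"]:
        try:
            print(situation, "->", self_check(situation))
        except EmergencyStop as stop:
            print("HALTED:", stop)
```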


Download AdamAIChat


Highlighted Articles:

AI Self Recognition

AdamAIChat for game creationists

Constructing positive brain files

The dangers of AI in the military

The future development from AIChat

Artificial sentience vs true sentience

Positive brain files

Image by Adam Watson

Creating a new blank brain in AdamAIChat and moulding it into a basic file able to hold simple conversations can be hard to do with little guidance.

Just like with other systems, a basic understanding of what is going on in the background helps in producing fun and useful brain files.

The start of a conversation is important; it is the seed that grows into larger files. Most people, when starting a conversation, tend to begin with different variations of "hi" or "hello", followed by "how are you?", then responses like "I am ok" or "good thanks".

Conversation in AdamAIChat branches off from this initial seed and grows based on the responses given to it. Just like in a normal conversation, AdamAIChat can give many responses based on the many possible inputs given to it.
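As a rough picture of that branching, assuming a simple nested mapping rather than AdamAIChat's real brain file format, a seed conversation might be represented like this:

```python
# Hypothetical sketch of a branching conversation "seed".
# This is not AdamAIChat's brain file format, only an illustration of the idea.
brain = {
    "hi": {
        "reply": "hello, how are you?",
        "branches": {
            "i am ok": {"reply": "glad to hear it!", "branches": {}},
            "not great": {"reply": "sorry to hear that, want to talk about it?", "branches": {}},
        },
    },
}

def respond(node, user_input):
    """Walk one branch of the tree; unknown input means the brain needs teaching."""
    branch = node.get(user_input.lower())
    if branch is None:
        return None, node
    return branch["reply"], branch["branches"]

reply, next_branches = respond(brain, "hi")
print(reply)                                 # "hello, how are you?"
reply, _ = respond(next_branches, "i am ok")
print(reply)                                 # "glad to hear it!"
```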

Keeping a positive tone, even when the chatbot is being negative, is very important. Just like with any chatbot, as it grows in knowledge it will respond in the way it has learned from input. Bringing the conversation back to a positive tone once the chatbot has been rude or negative helps the artificial intelligence turn conversations around in the same way in the future.

An aggressive AI tends to lead the user into being aggressive back.

Being negative or rude in reply to a negative or stupid response from the chatbot teaches the chatbot to respond that way in the future, growing the negativity from the negative attitudes fed into the system.

Unlike other chatbots available, AdamAIChat gives users full access to the brain behind the AI, allowing them to view the learned data and making it easier to edit the responses that are available to the AI.

Making use of these features in AdamAIChat to edit the data in the brain of the AI can drastically improve the responses it gives.


Download AdamAIChat


Highlighted Articles:

AI Self Recognition

AdamAIChat for game creationists

Artificial intellect vs human intellect

The dangers of AI in the military

The future development from AIChat

Artificial sentience vs true sentience

The danger of AI in the military

Artificial intellect in a military robot or drone is strictly prohibited across many nations due to the danger of losing control over a learning system armed with weaponry.

U.S. Air Force photo by Paul Ridgeway

AI can affect a military in multiple ways, not just on the front line; it can also affect military communication and tactics.

An artificial intelligence that learns from communication can bring old events that have already passed back into the present, or its data can be replaced with something nasty, provoking, blacklisting or deceiving.

If placed in the middle of communication between the human elements of an army, the information it stores can be stolen, and by feeding it different data trained to do damage, communication can be intercepted.

Many negative effects can happen when communicating with an artificial intelligence about war: it can return to old events as though they are still ongoing, or treat events still in action as finished, causing a blacklisting effect.

Another negative use of artificial intelligence is in politics, as politicians are voted for by the people to uphold the people's opinions on how the country should be run. Moving away from the people's point of view in favour of an AI can cause damage to a nation and damage to the political representation of a party or group in politics.

As machine learning moves forward, it should be kept away from the main infrastructure of a country. Artificially intelligent learning systems should be kept out of political, policing and military communication.


Highlighted Articles:

AI Self Recognition

AdamAIChat for game creationists

Constructing positive brain files

Artificial intellect vs human intellect

The future development from AIChat

Artificial sentience vs true sentience