Monday, November 16, 2009

Happiness and Artificial Intelligence: Part I

Designing an artificial intelligence gives insight into the way people think, and creating an AI that feels happiness seems a long way off. AIs are not motivated by a desire to be happy, at least not yet, and there do not seem to be many people working on motivation.
AI designers of video games give the "agents" in their games fairly simple, hard-coded motivations, and in fancier agents the motivations can change. For example, a character can get hungry and go look for food. In my mind, these game motivations are not real emotions or motivations, just as a chess program trained to take an opponent's rook is not feeling emotions.
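To make the point concrete, here is a minimal sketch (all names are mine, purely illustrative) of what that kind of hard-coded "motivation" usually amounts to: a counter that grows each update and flips the agent's behavior when it crosses a threshold.

```python
class Agent:
    """Toy game agent whose "hunger" is just a counter, not a feeling."""

    HUNGER_THRESHOLD = 5  # arbitrary illustrative value

    def __init__(self):
        self.hunger = 0
        self.state = "wander"

    def tick(self):
        # Hunger grows every game update; the "drive" is bookkeeping.
        self.hunger += 1
        if self.hunger >= self.HUNGER_THRESHOLD:
            self.state = "seek_food"

    def eat(self):
        # Eating resets the counter; nothing is "felt" anywhere.
        self.hunger = 0
        self.state = "wander"


agent = Agent()
for _ in range(5):
    agent.tick()
print(agent.state)  # prints "seek_food"
```

The whole "motivation" is a threshold test on an integer, which is exactly why it doesn't look like an emotion.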

Other AI programs have more complicated motivational algorithms: the planned actions of the "agent" are evaluated by an elaborate goal function, and its actions are determined by finding an optimum outcome, perhaps using a Monte Carlo approach.
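A hedged sketch of that idea, with every name invented for illustration: a goal function scores outcomes, a noisy world model stands in for the game, and a Monte Carlo loop estimates each candidate action's expected score by sampling many simulated outcomes, then picks the best.

```python
import random

def goal_function(outcome):
    # Scores an outcome; here outcomes are just numbers, higher is better.
    return outcome

def simulate(action, rng):
    # Stand-in for a noisy world model: each action has a true mean
    # payoff plus Gaussian noise. Values are arbitrary for illustration.
    true_means = {"advance": 1.0, "retreat": 0.2, "hold": 0.5}
    return true_means[action] + rng.gauss(0, 0.3)

def choose_action(actions, n_samples=2000, seed=0):
    # Monte Carlo evaluation: average the goal function over many
    # sampled outcomes per action, then take the best average.
    rng = random.Random(seed)

    def expected_score(action):
        total = sum(goal_function(simulate(action, rng))
                    for _ in range(n_samples))
        return total / n_samples

    return max(actions, key=expected_score)

print(choose_action(["advance", "retreat", "hold"]))  # prints "advance"
```

Even this fancier agent is still just maximizing a number someone else wrote down; the optimization is elaborate, but the valuation itself is imposed from outside.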

Artificial intelligence workers can make machines that simulate emotions, and game makers can create programs that fictionally kill people or conquer the world, but these programs simply act out their programming; they have no reason or need to do these things. Animals from earthworms to people are built to eat when they are hungry, and have a will to survive.

An interesting AI program would be self-aware enough to want to understand and justify its internal valuation function. Simply coding that function in makes a soldier that can follow rules, not a general to lead them.

I recall the old Star Trek robot villain Ruk saying, "Survival cancels programming! That is the equation!" before he goes berserk and starts bashing heads. Meaning that survival caused him to set aside his motivational algorithm and go berserk. [Somebody help me find which episode this is. Leave a comment.]