Relationship between game theory and artificial intelligence


Game theory certainly plays a role, to some extent, in systems that do planning: say an AI system is trying to evaluate potential actions, where each action has a cost and some value. A classic illustration is the "2/3 of the average" game: everyone writes down a number between 0 and 100 (in the usual formulation), and the person closest to 2/3 of the average wins. For example, if A says 50 and B says 10, the average is 30 and the target is 20, so B wins. Game theory and artificial intelligence are two mature areas of research, and recent work shows that the connections between them run deep; in particular, the close connection between game theory and decision theory allows techniques to flow between the two fields.
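To make the game concrete, here is a minimal sketch of the winner computation; the player names and guesses are hypothetical:

```python
# Minimal sketch of the "2/3 of the average" game described above.
# The guesses are hypothetical; the standard game uses numbers in [0, 100].
def two_thirds_winner(guesses: dict) -> str:
    target = (2 / 3) * (sum(guesses.values()) / len(guesses))
    # The player whose guess lands closest to 2/3 of the average wins.
    return min(guesses, key=lambda name: abs(guesses[name] - target))

print(two_thirds_winner({"A": 50, "B": 10}))  # average 30, target 20 -> "B"
```

The interesting part is the reasoning the game triggers: if every player assumes every other player is rational, the guesses spiral down toward 0, which is exactly the kind of recursive modeling of other agents that planning systems have to grapple with.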

Using such simple games is common practice when testing new techniques, as they offer a computationally cheap and intuitive testbed.


Nevertheless, it is important not to ignore the effect that practical characteristics of the problem, such as noise, delays, and finite memory, have on the algorithm. Perhaps the most misleading assumption in AI research is that of representing interaction as an iterated static game. And what about the effect that learning will have on the behavior of the agents?

The tournaments that Axelrod organized revealed that strategies that adapt with time and interaction, even ones as simple as Tit-for-Tat, are very effective. What differentiates multi-agent from single-agent learning is the increased complexity. Training one deep neural network is already hard enough; adding more networks, one per agent, makes the problem exponentially harder. A less obvious, but more important, concern is the lack of theoretical guarantees for this kind of problem.
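To make Tit-for-Tat concrete, here is a minimal sketch of an iterated prisoner's dilemma matchup, assuming the conventional payoff values; the ten-round horizon and the opponent choice are illustrative, not Axelrod's exact tournament setup:

```python
# Iterated prisoner's dilemma with the conventional payoffs:
# mutual cooperation 3, mutual defection 1, sucker 0, temptation 5.
PAYOFFS = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
           ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(history):
    # Cooperate on the first move, then mirror the opponent's last move.
    return "C" if not history else history[-1][1]

def always_defect(history):
    return "D"

def play(p1, p2, rounds=10):
    history, score1, score2 = [], 0, 0
    for _ in range(rounds):
        m1 = p1(history)
        m2 = p2([(b, a) for a, b in history])  # opponent sees a mirrored history
        s1, s2 = PAYOFFS[(m1, m2)]
        history.append((m1, m2))
        score1, score2 = score1 + s1, score2 + s2
    return score1, score2

print(play(tit_for_tat, always_defect))  # (9, 14): exploited once, then retaliates
```

Tit-for-Tat loses this particular matchup, but across a population of strategies its willingness to cooperate with cooperators is what earned it top scores in Axelrod's tournaments.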

Single-agent reinforcement learning is a well-understood area: Richard Bellman and Christopher Watkins provided the algorithms and convergence proofs necessary to learn. In the multi-agent case, however, the proofs lose their validity, because the environment now includes the other agents, which are executing their own learning algorithms, so it is no longer stationary. Thus, each algorithm has to consider the effect its actions will have on the other learners before it acts.
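For contrast, here is the single-agent update whose guarantees the paragraph refers to: a tabular Q-learning step in the style of Watkins. The convergence proof assumes the environment's dynamics are fixed; once the "environment" contains other learners, that assumption silently fails. The toy table sizes and the state/action values below are hypothetical:

```python
import numpy as np

def q_learning_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
    """One tabular Q-learning step (Watkins). The convergence proof assumes
    the reward and transition distributions are stationary, which is exactly
    what breaks when other agents are learning simultaneously."""
    target = r + gamma * np.max(Q[s_next])   # Bellman-style bootstrap target
    Q[s, a] += alpha * (target - Q[s, a])    # move the estimate toward the target
    return Q

# Usage with a toy table of 4 states and 2 actions:
Q = np.zeros((4, 2))
Q = q_learning_update(Q, s=0, a=1, r=1.0, s_next=2)
```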

Rationality roughly means that agents always act in their own best interest. There are interesting puzzles, like that of the blue-eyed islanders, that describe the effect common knowledge has on a problem. In 1986, Kenneth Arrow expressed his reservations towards classical game theory:

"In this paper I want to disentangle some of the senses in which the hypothesis of rationality is used in economic theory. In particular, I want to stress that rationality is not a property of the individual alone, although it is usually presented that way. Rather, it gathers not only its force but also its very meaning from the social context in which it is embedded. It is most plausible under very ideal conditions. When these conditions cease to hold, the rationality assumptions become strained and possibly even self-contradictory."


If you find that Arrow is a bit harsh with classical game theory, ask yourself: how rational would you say your last purchases have been? How much conscious thought and effort did you put into your meal today? But Arrow is not so much worried about the assumption of rationality itself; he is worried about its implications. For an agent to be rational, you need to provide them with all the information necessary to make their decisions.

This calls for omniscient players, which is bad in two ways. First, as we have already discussed, possessing all the information is infeasible. Second, game theory is no longer a game theory: you could replace all the players by a central ruler, and where is the fun in that? The value of information in this view is another point of interest.

But what about assuming players with limited knowledge instead? Ask anyone involved in this area; it suffices to say that optimization under uncertainty is tough.

Yes, there still are the good old Nash equilibria. The problem is that there can be many of them, even infinitely many, and game theory does not provide you with arguments to evaluate one against another. So, even if you reach one, you shouldn't make such a big deal of it; the sketch after the questions below shows the multiplicity problem in even a two-by-two game. Just to mention a few obstacles on the path of applying the Nash equilibrium approach in a robotic application such as robot football: How fast, strong, and intelligent are your players and your opponents? What strategies does the opponent team use?

How should you reward your players? Clearly, just being familiar with the rules of football will not win you the game. So if game theory has been raising debates for decades, if it is founded on unrealistic assumptions, and if, for realistic tasks, it offers complicated and little-understood solutions, why are we still going for it?
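As promised above, here is a tiny sketch of why equilibrium multiplicity bites. The coordination game below is a standard textbook example (often called Battle of the Sexes), and the enumeration by best-response checks is an illustration, not a production solver:

```python
import numpy as np

# Rows are player 1's actions, columns are player 2's.
A = np.array([[3, 0],
              [0, 2]])   # player 1's payoffs
B = np.array([[2, 0],
              [0, 3]])   # player 2's payoffs

def pure_nash_equilibria(A, B):
    """A cell (i, j) is a pure Nash equilibrium if neither player can
    gain by deviating unilaterally."""
    return [(i, j)
            for i in range(A.shape[0])
            for j in range(A.shape[1])
            if A[i, j] >= A[:, j].max() and B[i, j] >= B[i, :].max()]

print(pure_nash_equilibria(A, B))  # [(0, 0), (1, 1)]
```

Two equilibria (plus a mixed one not computed here), and nothing in the theory tells the players which one to coordinate on.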

If we actually understood how groups interact and cooperate to achieve their goals, psychology and politics would be much clearer. Researchers in the area of multi-agent reinforcement learning either completely omit a discussion of the theoretical properties of their algorithms, and nevertheless often exhibit good results, or traditionally study the existence of Nash equilibria.


The latter approach seems, to the eyes of a young researcher in the field, like a struggle to prove, under severe and unrealistic assumptions, the theoretical existence of solutions that, being infinite and of questionable value, will never be leveraged in practice. Evolutionary game theory takes a different path. Originating in biology, it was introduced in 1973 by John Maynard Smith and George R. Price as an alternative to classical game theory. The alterations are so profound that we can talk about a whole new approach.

The subject of reasoning is no longer the individual player, but the population of players. Thus, probabilistic strategies are defined as the percentage of players that make a choice, not as the probability of a single player choosing an action, as in classical game theory. This removes the necessity for rational, omniscient agents, since strategies evolve as patterns of behavior.

The evolution process resembles Darwinian theory: players reproduce following the principles of survival of the fittest and random mutation, and the process can be elegantly described by a set of differential equations termed the replicator dynamics.
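Here is a minimal sketch of the replicator dynamics, using the classic Hawk-Dove game as the payoff matrix; the value and cost numbers are illustrative choices:

```python
import numpy as np

# Replicator dynamics for a symmetric game with payoff matrix A:
#   dx_i/dt = x_i * ((A @ x)_i - x @ A @ x)
# x holds the population share of each strategy; payoffs act as fitness.
# Hawk-Dove with resource value V=2 and fight cost C=4.
A = np.array([[(2 - 4) / 2, 2.0],   # Hawk vs Hawk, Hawk vs Dove
              [0.0,         1.0]])  # Dove vs Hawk, Dove vs Dove

def replicator_step(x, A, dt=0.01):
    fitness = A @ x                           # expected payoff of each strategy
    average = x @ fitness                     # population-average payoff
    return x + dt * x * (fitness - average)   # shares grow with excess fitness

x = np.array([0.9, 0.1])  # start with mostly Hawks
for _ in range(5000):
    x = replicator_step(x, A)
print(x)  # approaches the stable mix [0.5, 0.5], i.e. a V/C share of Hawks
```

Note that no agent in this model deliberates about anything; the strategy mix simply drifts toward whatever behavior pays.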

This system has three important parts. A population represents the team of agents and is characterized by a mixture of strategies. The game rules determine the payoffs of the population, which can also be seen as the fitness values of an evolutionary algorithm. Finally, the replicator rules describe how the population will evolve based on the fitness values and the mathematical properties of the evolution process.

A strategy is called evolutionarily stable if it is immune to invasion by a population of agents that follow another strategy, provided that the invading population is small. Thus, the behavior of the team can be studied with the well-understood tools of dynamical-systems stability theory, such as Lyapunov stability.
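The stability condition can be checked directly. Here is a rough sketch of the Maynard Smith and Price invasion test, applied to the Hawk-Dove mix from the previous snippet:

```python
import numpy as np

def resists_invasion(x, y, A, eps=1e-9):
    """x resists invader y if it does strictly better against the incumbent
    population, or ties there and does strictly better against the invader."""
    xAx, yAx = x @ A @ x, y @ A @ x
    if xAx > yAx + eps:
        return True
    if abs(xAx - yAx) <= eps:
        return x @ A @ y > y @ A @ y + eps
    return False

A = np.array([[-1.0, 2.0], [0.0, 1.0]])   # Hawk-Dove from above
x = np.array([0.5, 0.5])                  # candidate stable mix
print(resists_invasion(x, np.array([1.0, 0.0]), A))  # True: pure Hawks can't invade
```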

How can we apply some of these principles to the modeling of AI agents?

In need of evolution: game theory and AI

Well, the first step is to identify the nature of the game we are trying to create. Since its inception, game theory has focused on modeling the most common interaction patterns, which we now see every day in multi-agent AI systems. Here is a taxonomy that might help you identify some of the most relevant types of games that have an equivalent in the AI world.

Symmetric vs. Asymmetric

One of the simplest classifications of games is based on their symmetry.

A symmetric game describes an environment in which the players have the same goals and the results depend only on the strategies involved, not on who is playing. Chess is a classic example of a symmetric game. Many of the situations we encounter in the real world lack the mathematical elegance of symmetry, as participants often have different and even conflicting goals.
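In bimatrix form this classification is easy to state: a two-player game with payoff matrices (A, B) is symmetric when B is the transpose of A, so that swapping the players changes nothing. A tiny check, using rock-paper-scissors as the example:

```python
import numpy as np

def is_symmetric_game(A, B):
    # Swapping the two players leaves a symmetric game unchanged.
    return np.array_equal(B, A.T)

# Rock-paper-scissors: win = 1, lose = -1, tie = 0.
A = np.array([[ 0, -1,  1],
              [ 1,  0, -1],
              [-1,  1,  0]])
print(is_symmetric_game(A, A.T))  # True
```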

A business negotiation is an example of an asymmetric game, in which each party has different goals and evaluates the results from a different perspective.

Perfect vs. Imperfect Information

Another important categorization of games is based on the type of information available. In a perfect information game, every player can observe the moves of all the other players; chess, again, is an example. Many modern interactions take place in environments in which the moves of each player are hidden from the others, and game theory classifies those scenarios as imperfect information games.

From card games like poker to self-driving car scenarios, imperfect information games are all around us.

Cooperative vs. Non-Cooperative

A cooperative game environment is one in which the different participants can establish alliances in order to maximize the end result; contractual negotiations are often modeled as cooperative games. In a non-cooperative game, no such alliances are possible and each player acts independently.