The following was written as a response to a Quora question, linked here.
Game theory is a framework for analyzing strategic interaction, developed so that players can generally “solve” for some strategy or mix of strategies that optimizes utility, profit, payoffs, or some other measure of well-being. Even in complicated games, the idea is that each player attempts to calculate a “best response” to every combination of strategies that could be employed by the other players. Essentially, game theory is constructed so that the system reaches some kind of equilibrium, or a simple cyclic state, as a result of every player employing their best-response strategy (which may be a single strategy, a pattern of strategies, or a randomization over strategies).
Evolutionary game theory involves another layer of calculation, whereby players are interested not just in the payoff of a single play, but in a long-term maximum. Remember, in evolutionary game theory, players want to maximize the aggregate payoff over some number of plays of the game. This expands the space of strategy combinations and best responses available to the player. For instance, players may be able to avoid the anti-social equilibrium of the prisoner’s dilemma by employing a “trigger strategy,” whereby they punish defection with defection for a certain number of turns. Depending on the expected payoff of defecting versus cooperating in the presence of a trigger strategy, and on each player’s discount factor, players may be able to attain the welfare-maximizing optimum of the prisoner’s dilemma in evolutionary gameplay, an optimum that is not attainable in one-off gameplay absent other contextual assumptions.
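The trigger-strategy logic above can be made concrete with a little arithmetic. The sketch below uses illustrative payoff values of my own choosing (T for temptation, R for reward, P for punishment, S for sucker, with T > R > P > S) and compares the discounted value of cooperating forever against defecting once and triggering permanent punishment:

```python
# Illustrative payoffs for the prisoner's dilemma (assumed values, not
# from the answer above): temptation > reward > punishment > sucker.
T, R, P, S = 5.0, 3.0, 1.0, 0.0

def cooperate_value(delta):
    # Present value of mutual cooperation forever:
    # R + delta*R + delta^2*R + ... = R / (1 - delta)
    return R / (1 - delta)

def defect_value(delta):
    # Defect once (gaining T), after which the trigger strategy punishes
    # with mutual defection forever: T + delta*P / (1 - delta)
    return T + delta * P / (1 - delta)

def cooperation_sustainable(delta):
    # Cooperation survives when its discounted value beats defection's.
    return cooperate_value(delta) >= defect_value(delta)

# Patient players (high discount factor) sustain cooperation; impatient
# players defect. With these payoffs the threshold is (T-R)/(T-P) = 0.5.
print(cooperation_sustainable(0.9))  # True
print(cooperation_sustainable(0.2))  # False
```

The same comparison, run over a range of discount factors, is how one locates the threshold below which the one-off logic of mutual defection reasserts itself.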
Game theory provides a theoretical framework for 1) how players interact and 2) what they can and are trying to achieve through interaction. Agent-based modeling, on the other hand, leaves open the questions of how players interact and what they can and are trying to achieve. Agents in agent-based models 1) interact with each other 2) towards some end, but agent characteristics, knowledge, and goals are left open-ended. Agents may not be trying to maximize utility at all; they may instead be trying to maintain some kind of balance between the proportions of different types of agents in their immediate neighborhood, as in the Schelling segregation model. Plan-ends are substituted for payoff maximization.
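A minimal Schelling-style sketch illustrates an agent goal that is a threshold, not a maximum. The grid size, tolerance, and relocation rule below are illustrative assumptions; the point is only that each agent checks a local balance and moves if dissatisfied:

```python
import random

# Sketch of a Schelling-style segregation model (parameters assumed).
# Two agent types occupy a toroidal grid; an agent is happy when at
# least SIMILAR_WANTED of its occupied neighbors share its type, and
# unhappy agents relocate to a random empty cell.
SIZE, SIMILAR_WANTED = 20, 0.3
random.seed(0)

grid = {(x, y): random.choice([1, 2, None])
        for x in range(SIZE) for y in range(SIZE)}

def neighbors(x, y):
    # The eight surrounding cells, wrapping at the edges.
    return [grid[(x + dx) % SIZE, (y + dy) % SIZE]
            for dx in (-1, 0, 1) for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)]

def happy(x, y):
    me = grid[x, y]
    occupied = [n for n in neighbors(x, y) if n is not None]
    if not occupied:
        return True
    return sum(n == me for n in occupied) / len(occupied) >= SIMILAR_WANTED

def step():
    # Move every unhappy agent to a random empty cell; report how many moved.
    moved = 0
    for cell in list(grid):
        if grid[cell] is not None and not happy(*cell):
            dest = random.choice([c for c in grid if grid[c] is None])
            grid[dest], grid[cell] = grid[cell], None
            moved += 1
    return moved

for _ in range(30):
    if step() == 0:  # no one wants to move: the system has settled
        break
```

Note that no agent computes a payoff here at all; satisfaction with a local proportion is the whole of its "rationality."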
In this sense, we see how agent-based models can easily contain game theoretic models. We can define agents who play the prisoner’s dilemma with other agents. It’s typically easier to encode evolutionary gameplay in an agent-based model, since agents can record historical gameplay and update their states in most prefab agent-based frameworks like NetLogo.
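As a sketch of the history-recording mechanism just described (the class design here is my own, not from any particular framework), an agent can keep a log of past gameplay and condition its next move on it, as in tit-for-tat:

```python
# Assumed design: an agent that records the opponent's past moves and
# plays tit-for-tat in the prisoner's dilemma.
class Agent:
    def __init__(self):
        self.history = []  # opponent's past moves, 'C' or 'D'

    def play(self):
        # Cooperate on the first round, then mirror the opponent's last move.
        return self.history[-1] if self.history else 'C'

    def record(self, opponent_move):
        self.history.append(opponent_move)

# Illustrative payoff table: (row player, column player).
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def play_round(a, b):
    move_a, move_b = a.play(), b.play()
    a.record(move_b)
    b.record(move_a)
    return PAYOFF[move_a, move_b]
```

Two such agents paired together cooperate indefinitely, which is exactly the kind of path-dependent outcome that is awkward to express in a one-shot analytical model but trivial to encode in an agent's state.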
But agent-based modeling can go far beyond what is analytically feasible in traditional game theoretic frameworks. It can describe systems wherein not all agents play the same games, or even play with each other at all. It allows for complex systemic outcomes, not just outcomes equivalent to static equilibria or simple patterns. It can more easily handle irregular topologies of agent interaction, as when agent relationships constitute some kind of social network. And it can track different levels of gameplay in the same system by updating separate variables within each agent’s state.
Most importantly, agent-based modeling doesn’t have the severe epistemological requirements of traditional, analytical game theory. That is, players are not assumed to have the cognitive or informational ability to completely solve for their best response. Generally, gameplay in agent-based models is locally constructive, meaning that agents play by taking into account only their local states. So, a player on a social network plays according to the states of its nearest neighbors, and not the states of its neighbors’ neighbors, and so on, even if those distant relationships do in fact influence the gameplay of its nearest neighbors. That is, in agent-based modeling, there can be a system-level rationality that is not accessible to the individual agent.
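Locally constructive play can be sketched in a few lines. In the toy setup below (the ring topology, imitation rule, and payoffs are all my own assumptions for illustration), each agent plays a prisoner's dilemma with its two nearest neighbors and then imitates the strategy of the best-scoring agent in its immediate neighborhood, never seeing anything beyond it:

```python
import random

# Assumed setup: agents on a ring each play the prisoner's dilemma with
# their two nearest neighbors, then copy the strategy of the best-scoring
# agent in {left neighbor, self, right neighbor}. No agent ever observes
# states beyond its immediate neighborhood.
random.seed(1)
N = 30
PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}
strategies = [random.choice('CD') for _ in range(N)]

def local_scores(strats):
    # Each agent's score comes only from games with its two neighbors.
    return [PAYOFF[strats[i], strats[(i - 1) % N]] +
            PAYOFF[strats[i], strats[(i + 1) % N]]
            for i in range(N)]

def step(strats):
    scores = local_scores(strats)
    new = []
    for i in range(N):
        # A purely local update rule: imitate the best performer nearby.
        hood = [(i - 1) % N, i, (i + 1) % N]
        best = max(hood, key=lambda j: scores[j])
        new.append(strats[best])
    return new

for _ in range(10):
    strategies = step(strategies)
```

Whatever pattern the ring settles into emerges from these local rules; no agent computes, or could compute, the system-level outcome.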
In traditional game theory, there is no separation between system-level and individual rationality. Even when players deviate from play that would increase their overall payoff if all other players cooperated, it is not because they are unaware of this system-level maximum, but because they do not believe it is accessible given the incentives faced by their fellow players. In agent-based modeling, there may be a number of system-level states that are simply unknown to the individual agent. This does not mean that agents cannot reach these states; it means they cannot access them through direct computation of their existence.
For more information on this very rich topic, I suggest looking at the work of Scott Page, Jenna Bednar, and Leigh Tesfatsion on the connection between game theory and agent-based modeling. I have also published a paper on the subject, with a few more in the works at the time of writing this answer.