Jan 14, 2026 · Tamás Takács, László Gulyás · 7 min read

Game Theory

2025/26/1

This marks the first iteration of the new Game Theory course for the AI Masters specialization. Building a comprehensive Game Theory curriculum from scratch has been a significant challenge, and while this initial version is still a work in progress, it covers most of the essential topics in the field. With the focus so far on developing lecture material, assignments, and exams, there has not yet been time to create dedicated practice materials. My goal is to refine this into one of the strongest courses in our specialization.

Please note that the materials may contain small mistakes, typos, or even implementation bugs. I would appreciate a short email about any issues you find.


Lecture and Practice Content

Week 1
  Lecture: Requirements, Logistics, Warm-Up.
  Practice: Introduction to Game Theory. (coming soon)

Week 2
  Lecture: Normal-Form Games, Repeated NFGs.
  Practice: NashPy: NFGs. (coming soon)

Week 3
  Lecture: Best Response.
  Practice: NashPy: Repeated Games. (coming soon)

Week 4
  Lecture: Best Response, Nash Equilibria.
  Practice: NashPy: Computing Nash Equilibria. (coming soon)

Week 5
  Lecture: ε-Nash Equilibria.
  Practice: NashPy: Mixed Strategies. (coming soon)

Week 6
  Lecture: Correlated Equilibrium.
  Practice: NashPy: Computing CE. (coming soon)

Week 7
  Lecture: First Midterm.
  Practice: Assignment 1 deadline.

Week 8
  Lecture: Welfare and Learning.
  Practice: Fairness experiments. (coming soon)

Week 9
  Lecture: Joint Policies.
  Practice: Minimax implementations. (coming soon)

Week 10
  Lecture: Expected Return and Minimax.
  Practice: Minimax implementations, part 2. (coming soon)

Week 11
  Lecture: Stochastic Games and Communication.
  Practice: POSG examples. (coming soon)

Week 12
  Lecture: Bayesian Games and Mechanism Design.
  Practice: NashPy: Bayesian Games. (coming soon)

Week 13
  Lecture: Second Midterm.
  Practice: Assignment 2 deadline.
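
The NashPy-based practice sessions start from games like the following. As a minimal sketch of what the early sessions cover (the Prisoner's Dilemma payoffs here are illustrative, not taken from the course materials), this is how a normal-form game is built and its equilibria enumerated:

    import numpy as np
    import nashpy as nash

    # Prisoner's Dilemma payoffs for the row player; the column
    # player's matrix is the transpose since the game is symmetric.
    A = np.array([[3, 0],
                  [5, 1]])
    game = nash.Game(A, A.T)

    # Enumerate all Nash equilibria via support enumeration.
    for sigma_row, sigma_col in game.support_enumeration():
        print("row:", sigma_row, "col:", sigma_col)

    # Indexing a Game with a strategy profile returns the
    # expected utilities (u_row, u_col) under that profile.
    print(game[np.array([1, 0]), np.array([0, 1])])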


Assignments

The two assignments correspond to the first and second halves of the course material, each covering practical implementations of the concepts introduced in lectures. They are designed to be completable using only the NashPy library and the lecture content; no additional practice materials are required. The assignments give students hands-on experience with the theory while also serving as a modest grade boost.

Assignment 1: Nash Equilibria, ε-NE, and Structured Analyses.
Assignment 2: Minimax, Stochastic Games, Bayesian NE, and Correlated Equilibrium.
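
As an illustration of the first assignment's ε-NE component, the sketch below computes the smallest ε for which a given profile is an ε-Nash equilibrium. The helper name epsilon_of is hypothetical, not the assignment's required interface; it relies on the fact that a best response to a fixed opponent strategy is always achieved at a pure strategy:

    import numpy as np
    import nashpy as nash

    def epsilon_of(game, sigma_row, sigma_col):
        """Smallest eps such that the profile is an eps-NE: the largest
        gain either player gets from a unilateral pure deviation."""
        A, B = game.payoff_matrices
        u_row, u_col = game[sigma_row, sigma_col]
        best_row = (A @ sigma_col).max()   # best pure reply for the row player
        best_col = (sigma_row @ B).max()   # best pure reply for the column player
        return max(best_row - u_row, best_col - u_col)

    # Matching Pennies with the uniform profile: an exact NE, so eps == 0.
    A = np.array([[1, -1], [-1, 1]])
    game = nash.Game(A, -A)
    uniform = np.array([0.5, 0.5])
    print(epsilon_of(game, uniform, uniform))  # -> 0.0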


Exams

The final grade is based on a composite score from the two midterms and assignments. To receive this grade, students must also pass a written exam covering the lecture material. Below are some example exams for reference:

Mock Exams: past exams for practicing the lecture material.


Course Syllabus

Schedule

Lecture:

  • Schedule: Wednesdays, 10:00 AM - 12:00 PM
  • Location: South Building, Room 2-712 (Hungarian: Déli Tömb 2-712, interactive whiteboard)

Practice:

  • Schedule: Thursdays, 7:30 PM - 9:00 PM
  • Location: South Building, Room 2-502 (Hungarian: Déli Tömb 2-502, interactive whiteboard)

Description

This course provides a rigorous introduction to Game Theory with a focus on its relevance for Artificial Intelligence and Multi-Agent Systems. Students will learn how to formally represent strategic interactions, analyze agent behavior, and evaluate solution concepts such as Nash equilibria, mixed strategies, and correlated equilibria.

The course progresses from foundational models of normal-form and extensive-form games to more advanced topics, including stochastic and repeated games, communication between agents, and evolutionary dynamics. Particular emphasis is placed on the computational aspects of game theory, such as the complexity of equilibrium computation, as well as on learning in games through methods like fictitious play, no-regret learning, and replicator dynamics.

Practical sessions complement the lectures by introducing computational tools, most notably NashPy, enabling students to model games, compute equilibria, and simulate adaptive dynamics. By the end of the semester, participants will have developed both the mathematical foundations and the computational skills required to analyze strategic interaction in AI contexts, bridging classical theory with modern applications in multi-agent reinforcement learning.
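
As a taste of the learning-in-games material, the sketch below runs NashPy's fictitious play routine on Rock-Paper-Scissors; the iteration count and seed are arbitrary choices, not course-mandated settings:

    import numpy as np
    import nashpy as nash

    # Rock-Paper-Scissors as a zero-sum normal-form game.
    A = np.array([[ 0, -1,  1],
                  [ 1,  0, -1],
                  [-1,  1,  0]])
    game = nash.Game(A, -A)

    np.random.seed(0)  # fictitious play breaks ties randomly
    *_, (row_counts, col_counts) = game.fictitious_play(iterations=2000)

    # Empirical play frequencies; in zero-sum games these converge
    # toward the equilibrium strategy (uniform for RPS).
    print(np.round(row_counts / row_counts.sum(), 2))
    print(np.round(col_counts / col_counts.sum(), 2))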

Grading

Your final grade is calculated using the formulas below:

Final Lecture Score (LS) = Midterm 1 (50 points) + Midterm 2 (50 points)
Final Practice Score (PS) = Assignment 1 (50 points) + Assignment 2 (50 points)
Final Score (FS) = (LS + PS) / 2

Final Score Range   Grade
> 85                5
75 - 85             4
65 - 74             3
40 - 64             2
< 40                F

  • A pass on both LS and PS (individually) is required to sit the final exam
  • A pass on the written exam covering the lecture material is required to receive the final grade
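
Restated as code, the computation looks like the sketch below. This is illustrative only: final_grade is a hypothetical helper, and the per-component pass mark is a placeholder, since this page does not state the LS/PS pass threshold.

    def final_grade(m1, m2, a1, a2, pass_mark=40):
        """Grade from the formulas above. pass_mark is a placeholder for
        the unstated LS/PS pass threshold; passing the written exam is a
        separate requirement and is not scored here."""
        ls = m1 + m2            # Final Lecture Score (max 100)
        ps = a1 + a2            # Final Practice Score (max 100)
        if ls < pass_mark or ps < pass_mark:
            return "F"          # LS and PS must each be passed individually
        fs = (ls + ps) / 2      # Final Score
        if fs > 85:
            return 5
        if fs >= 75:
            return 4
        if fs >= 65:
            return 3
        if fs >= 40:
            return 2
        return "F"

    print(final_grade(m1=42, m2=45, a1=40, a2=38))  # -> 4 (FS = 82.5)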

Prerequisites

  • Python (moderate level)
  • Linear Algebra and Probability (moderate level)
  • Reinforcement Learning Concepts (advantageous but not required)

Tools and Frameworks

  • Programming Language: Python
  • Frameworks: PyTorch, NashPy
  • Libraries: NumPy
  • Additional Tools: Google Colab

Learning Objectives

  • Understand the mathematical and conceptual foundations of Game Theory
  • Learn classical solution concepts and their computational aspects
  • Explore the role of Game Theory in AI and MARL
  • Apply libraries like NashPy to model and analyze games
  • Develop intuition for fairness, efficiency, and equilibrium concepts in strategic interaction
Recommended Reading

  1. Bonanno, G. (2024). Game Theory (3rd ed.). University of California, Davis.
  2. Axelrod, R. (1984). The Evolution of Cooperation. Basic Books.
  3. Nisan, N., Roughgarden, T., Tardos, É., & Vazirani, V. V. (2007). Algorithmic Game Theory. Cambridge University Press.
  4. Myerson, R. B. (1991). Game Theory: Analysis of Conflict. Harvard University Press.
  5. Shoham, Y., & Leyton-Brown, K. (2008). Multiagent Systems: Algorithmic, Game-Theoretic, and Logical Foundations. Cambridge University Press.
  6. Albrecht, S. V., Christianos, F., & Schäfer, L. (2024). Multi-Agent Reinforcement Learning: Foundations and Modern Approaches. MIT Press.
  7. NashPy Documentation.
