This is the final list of tutorials that will be presented at CIG 2012.
Please contact the tutorials chair with any questions or requests about them.
Encoding and Generating Videogame Mechanics
This tutorial will give an overview of how videogame mechanics have been
encoded in AI systems, and how systems have used those encodings to
generate new games, or variations on existing games.
It will cover a number of representational approaches: generative grammars, game parameterization, symbolic logic, and more. In addition, it will take a close look at several specific game-generation systems, which use techniques ranging from constraint solving to evolutionary computation.
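As a concrete illustration of the generative-grammar approach, here is a toy sketch of grammar-driven rule generation. The grammar symbols and productions below are invented for this example and are not drawn from any of the systems the tutorial covers:

```python
import random

# A toy generative grammar for game-mechanic rules (illustrative only).
# Keys are nonterminals; each production is a list of symbols, where a
# symbol not found in the grammar is treated as a terminal string.
GRAMMAR = {
    "rule":    [["trigger", " -> ", "effect"]],
    "trigger": [["on_collide(avatar, ", "entity", ")"],
                ["on_timer(", "number", ")"]],
    "entity":  [["enemy"], ["coin"], ["wall"]],
    "effect":  [["score(+", "number", ")"],
                ["teleport(avatar)"],
                ["kill(", "entity", ")"]],
    "number":  [["1"], ["5"], ["10"]],
}

def expand(symbol, rng):
    """Recursively expand a grammar symbol into a rule string."""
    if symbol not in GRAMMAR:          # terminal: emit as-is
        return symbol
    production = rng.choice(GRAMMAR[symbol])
    return "".join(expand(s, rng) for s in production)

rng = random.Random(0)
for _ in range(3):
    print(expand("rule", rng))
```

Sampling from such a grammar yields syntactically valid mechanics; the generation systems discussed in the tutorial then add a second step, evaluating or searching over the generated rules rather than sampling blindly.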
Examples of work that will be covered include that by: Pell (Metagame, 1992), Orwant (EGGG, 2000), Hom & Marks (2007), Togelius & Schmidhuber (2008), Smith & Mateas (2010), and Cook & Colton (Angelina, 2011).
The tutorial will also give an introduction to some other uses of game-mechanics encodings in AI systems, such as in game-playing bots, and in design-assistance tools.
|Mark J. Nelson is Assistant Professor in the Center for Computer Games Research at the IT University of Copenhagen. His research focuses on automatic analysis of videogame properties based on formal encodings of their mechanics, and the usefulness of such analysis for game designers. Prior to ITU Copenhagen, he did his graduate work at the Georgia Institute of Technology, and served as a visiting researcher in the Expressive Intelligence Studio at the University of California, Santa Cruz.|
Evolutionary Computation in Games: Dealing With Uncertainty
Evolutionary computation techniques have proven to be a valuable instrument for adaptation and content generation in computer games, making it possible to automatically generate high-quality content tailored to specific requirements. The adaptation of weapons in
Galactic Arms Race and the generation of Super Mario Bros levels are two successful examples of the application of such techniques to game content generation.
In these examples, as in many others, evolutionary computation is used to find a configuration of game parameters that optimizes a number of game-experience predictors or heuristics. When applying evolutionary computation in such contexts, a common problem is the uncertainty of the objective function being optimized. Such uncertainty may arise for several reasons: the evaluation itself may be noisy (e.g. when the function depends on real-time sensor data), the objective function may depend on dynamic game elements (e.g. in automatic camera control), or it may depend on player preferences, which are stochastic by nature and may evolve over time.
In this tutorial, I will cover state-of-the-art methods for dealing with uncertainty in optimization. Different game adaptation scenarios and objective functions will be covered as examples, ranging from dynamic and real-time optimization to off-line content generation.
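One common strategy for noisy objectives, explicit resampling, can be sketched as follows. The objective, noise model, and (1+1) evolution strategy here are illustrative assumptions for this sketch, not a specific method from the tutorial:

```python
import random

def noisy_fitness(x, rng):
    """Stand-in for an uncertain objective, e.g. a playtest-based
    heuristic: the true value -(x - 3)^2 is corrupted by Gaussian noise."""
    return -(x - 3.0) ** 2 + rng.gauss(0.0, 0.5)

def evaluate(x, rng, resamples=10):
    """Reduce uncertainty by resampling: average several independent
    evaluations, trading extra evaluation cost for a less noisy estimate."""
    return sum(noisy_fitness(x, rng) for _ in range(resamples)) / resamples

def one_plus_one_es(rng, generations=200, sigma=0.3):
    """A minimal (1+1) evolution strategy on the resampled objective."""
    parent = rng.uniform(-10.0, 10.0)
    parent_fit = evaluate(parent, rng)
    for _ in range(generations):
        child = parent + rng.gauss(0.0, sigma)
        child_fit = evaluate(child, rng)
        if child_fit >= parent_fit:          # noisy comparison; resampling
            parent, parent_fit = child, child_fit  # makes it more reliable
    return parent

rng = random.Random(42)
best = one_plus_one_es(rng)
print(best)  # should approach the true optimum x = 3
```

Resampling is the simplest of the approaches the tutorial surveys; it becomes expensive when a single evaluation requires real gameplay, which motivates the more sophisticated methods covered in the session.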
|Paolo Burelli is a PhD candidate at the Center for Computer Games Research at the IT University of Copenhagen and an elected research associate at the Medialogy Department of Aalborg University Copenhagen. His research work focuses on computer games cinematography and adaptation. He has previously worked at the University of Udine in the HCILab as a research fellow and at Eurotech as an embedded Linux developer. Paolo has published articles on virtual camera control in AAAI and IEEE conferences on artificial intelligence and computer graphics, and he has served on the organizing committees of CIG 2010 and WICED 2012. His research interests include computer games, human-computer interaction, perception and artificial intelligence, with a particular focus on virtual cinematography.|
Affect in Games
Capturing and analyzing player affect has been a challenging area at
the crossroads of cognitive science, psychology, artificial intelligence
and human-computer interaction, with novel gameplay modalities such as
motion and acceleration sensing, and image and speech processing,
adding to the complexity of player experience.
Multiple input modalities can provide a novel means for game platforms to measure player affect, satisfaction and engagement while playing, without having to resort to post-experience, off-line questionnaires. For instance, players immersed in gameplay will rarely gaze away from the screen, while disappointed or indifferent players will typically show very little response or emotion.
In addition, procedural content generation techniques may be employed, based on the level of user engagement and interest, to dynamically produce new, adaptable and personalised content predicted to be more interesting for the player. This tutorial discusses low-complexity algorithms and tools to capture user affect, modelling user experience based on non-verbal input and game performance, and game adaptation based on affect.
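As a rough illustration of how an affect estimate might drive adaptation: combine a few non-verbal and performance features into an engagement score, then adjust content accordingly. All feature names, weights and thresholds below are invented for this sketch; a real system would fit a model to annotated player data:

```python
def engagement_score(gaze_on_screen_ratio, input_rate, deaths_per_minute):
    """Combine non-verbal input and game performance into a rough
    engagement estimate in [0, 1] (hand-tuned toy weights)."""
    score = (0.5 * gaze_on_screen_ratio            # attention to the screen
             + 0.3 * min(input_rate / 5.0, 1.0)    # activity, capped
             + 0.2 * max(1.0 - deaths_per_minute / 3.0, 0.0))  # not failing
    return max(0.0, min(1.0, score))

def adapt_difficulty(current_difficulty, score):
    """Nudge difficulty from the engagement estimate: ease off for
    disengaged or struggling players, ramp up for highly engaged ones."""
    if score < 0.4:
        return max(1, current_difficulty - 1)
    if score > 0.8:
        return current_difficulty + 1
    return current_difficulty
```

The point is the pipeline shape (sensed features, an experience model, an adaptation policy), not these particular numbers.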
|Kostas Karpouzis is an associate researcher at the Institute of Communication and Computer Systems (ICCS-NTUA) and has worked in affective computing research projects since 2001 (ERMIS, Humaine, Callas, Feelix Growing, and, currently, Siren). He is a contributor to the "Humaine Handbook on Emotion research" and K. Scherer's "Blueprint for Affective Computing: a Sourcebook" and coordinates the FP7 ICT Siren project, which deals with serious games for conflict resolution in school environments. He was an invited speaker at EvoGames 2010 and co-chairs the Humaine Association SIG on Games.|
Monte Carlo Tree Search
Monte Carlo tree search (MCTS) is a recently proposed search method that combines the precision
of tree search with the generality of random sampling. It has received considerable interest
due to its spectacular success in the challenging game of Go, but has also proved to be a
leading method in many other games and some applications beyond games.
In this tutorial we first cover the basics of the algorithm starting with flat Monte Carlo (no tree), then show the benefits of building a tree, and the standard ways of balancing exploration versus exploitation using the Upper Confidence Bounds for Trees (UCT) formula. Despite the theoretical appeal of UCT, many MCTS programs rely more heavily on heuristics, so examples of these are also included.
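The UCT selection step mentioned above is compact enough to sketch directly. Assuming a minimal node structure with reward and visit counters (an illustrative assumption, not a complete MCTS implementation), the child maximising Q_i/n_i + c * sqrt(ln(N)/n_i) is chosen:

```python
import math
import random

class Node:
    """Minimal tree node: accumulated reward and visit count."""
    def __init__(self):
        self.children = []
        self.visits = 0
        self.reward = 0.0

def uct_select(node, c=math.sqrt(2)):
    """Pick the child maximising the UCT value
        Q_i / n_i  +  c * sqrt(ln(N) / n_i)
    where Q_i is the child's total reward, n_i its visit count and N the
    parent's visit count. Unvisited children are tried first, since their
    UCT value is otherwise undefined (division by zero)."""
    unvisited = [ch for ch in node.children if ch.visits == 0]
    if unvisited:
        return random.choice(unvisited)
    log_n = math.log(node.visits)
    return max(node.children,
               key=lambda ch: ch.reward / ch.visits
                              + c * math.sqrt(log_n / ch.visits))
```

The first term (mean reward) drives exploitation, the second grows for rarely visited children and drives exploration; the constant c trades the two off, which is exactly the balance the tutorial discusses.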
We then explore some of the challenges in applying MCTS beyond the more conventional perfect information board games, to real-time video games and to games involving uncertainty or imperfect information.
The tutorial will include many demonstrations to help explain the key points.
|Simon Lucas is a Professor of Computer Science at the University of Essex, where he leads the Game Intelligence Group. His main research interests are games, evolutionary computation, and machine learning, and he has published widely in these fields with over 160 peer-reviewed papers. He is the founding Editor-in-Chief of the IEEE Transactions on Computational Intelligence and AI in Games. Professor Lucas has given keynote talks and tutorials at many international conferences including IEEE CIG, IEEE CEC, IEEE WCCI and PPSN.|
|Peter Cowling is currently Professor of Computer Science and Associate Dean (Research) at the University of Bradford (UK). From September 2012, he will join the University of York (UK) to take up an Anniversary Chair in Computer Science and Management. His work centres on computerised decision-making in games, scheduling and resource-constrained optimisation, using heuristics, metaheuristics, hyperheuristics, machine learning approaches (particularly Monte Carlo Tree Search) and exact algorithm hybrids. He has worked with a wide range of industrial partners, is a director of two research spin-out companies, and has published over 80 scientific papers in high-quality journals and conferences. He is a founding Associate Editor of the IEEE Transactions on Computational Intelligence and AI in Games.|
Version 2.5 - September, 2012