IEEE Symposium on Computational Intelligence and Games

Perth, Australia
15–18 December 2008

The following papers have been accepted for presentation at the symposium:

  1. Alexandros Agapitos, Julian Togelius, Simon Lucas, Juergen Schmidhuber, and Andreas Konstantinidis, Generating Diverse Opponents with Multiobjective Evolution
  2. Phillipa Avery and Zbigniew Michalewicz, Adapting to Human Game Play
  3. Erkin Bahceci and Risto Miikkulainen, Transfer of Evolved Pattern-Based Heuristics in Games
  4. Roderick Baker, Peter Cowling, Thomas Randall, and Ping Jiang, Can Opponent Models Aid Poker Player Evolution?
  5. Sander Bakkes, Pieter Spronck, and Jaap van den Herik, Rapid Adaptation of Video Game AI
  6. Nicola Beume, Tobias Hein, Boris Naujoks, Nico Piatkowski, Mike Preuss, and Simon Wessing, Intelligent Anti-Grouping in Real-Time Strategy Games
  7. Alan Blair, Learning Position Evaluation for Go with Internal Symmetry Networks
  8. Adrian Boeing, Morphology Independent Dynamic Locomotion Control for Virtual Characters
  9. Nathalie Chetcuti-Sperandio, Fabien Delorme, Sylvain Lagrue, and Denis Stackowiak, Determination and Evaluation of Efficient Strategies for a Stop or Roll Dice Game: Heckmeck am Bratwurmeck (Pickomino)
  10. Benjamin Childs, James Brodeur, and Levente Kocsis, Transpositions and Move Groups in Monte Carlo Tree Search
  11. Andrew Chiou and Kok Wai Wong, Player Adaptive Entertainment Computing (PAEC): Mechanism to Model User Satisfaction by Using Neuro Linguistic Programming (NLP) Techniques
  12. Holger Danielsiek, Raphael Stuer, Andreas Thom, Nicola Beume, Boris Naujoks, and Mike Preuss, Intelligent Moving of Groups in Real-Time Strategy Games
  13. Luis de la Ossa, Jose A. Gamez, and Veronica Lopez, Improvement of a Car Racing Controller by Means of Ant Colony Optimization Algorithms
  14. Donna Djordjevich, Patrick Xavier, Michael Bernard, Jonathan Whetzel, Matthew Glickman, and Stephen Verzi, Preparing for the Aftermath: Using Emotional Agents in Game-Based Training for Disaster Response
  15. Garrison Greenwood and Richard Tymerski, A Game-Theoretical Approach for Designing Market Trading Strategies
  16. Johan Hagelback and Stefan J. Johansson, Dealing with Fog of War in a Real Time Strategy Game Environment
  17. Daniel Harabor and Adi Botea, Hierarchical Path Planning for Multi-size Agents in Heterogeneous Environments
  18. Stephen Hladky and Vadim Bulitko, An Evaluation of Models for Predicting Opponent Positions in First-Person Shooter Games
  19. Marcel van der Heijden, Sander Bakkes, and Pieter Spronck, Dynamic Formations in Real-Time Strategy Games
  20. Yuhki Inoue and Yuji Sato, Applying GA for Reward Allotment in an Event-driven Hybrid Learning Classifier System for Soccer Video Games
  21. Su-Hyung Jang and Sung-Bae Cho, Evolving Neural NPCs with Layered Influence Map in the Real-time Simulation Game 'Conqueror'
  22. Kyung-Joong Kim and Sung-Bae Cho, Ensemble Approaches in Evolutionary Game Strategies: A Case Study in Othello
  23. Julien Kloetzer, Hiroyuki Iida, and Bruno Bouzy, A Comparative Study of Solvers in Amazons Endgames
  24. Alan Lockett and Risto Miikkulainen, Evolving Opponent Models for Texas Hold 'Em
  25. Daniele Loiacono, Julian Togelius, Pier Luca Lanzi, Leonard Kinnaird-Heether, Simon M. Lucas, Matt Simmerson, Diego Perez, Robert G. Reynolds, and Yago Saez, The WCCI 2008 Simulated Car Racing Competition
  26. Simon Lucas, Investigating Learning Rates for Evolution and Temporal Difference Learning
  27. Vukosi Marivate and Tshilidzi Marwala, Social Learning Methods in Board Game Agents
  28. Michelle McPartland and Marcus Gallagher, Creating a Multi-Purpose First Person Shooter Bot with Reinforcement Learning
  29. Hasan Mujtaba and Rauf Baig, Survival by Continuous Learning in a Dynamic, Multiple Task Environment
  30. Keisuke Ohsone and Takehisa Onisawa, Friendly Partner System of Poker Game with Facial Expressions
  31. Jacob Kaae Olesen, Georgios Yannakakis, and John Hallam, Real-time Challenge Balance in an RTS Game Using rtNEAT
  32. Yasuhiro Osaki, Kazutomo Shibahara, Yasuhiro Tajima, and Yoshiyuki Kotani, An Othello Evaluation Function Based on Temporal Difference Learning Using Probability of Winning
  33. Matt Parker and Bobby Bryant, Visual Control in Quake II with a Cyclic Controller
  34. Diego Perez, Yago Saez, Gustavo Recio, and Pedro Isasi, Evolving a Rule System Controller for Automatic Driving in a Car Racing Competition
  35. Payam Aghaei Pour, Tauseef Gulrez, Omar Al Zoubi, Gaetano Gargiulo, and Rafael Calvo, Brain-Computer Interface: Next Generation Thought Controlled Distributed Video Game Development Platform
  36. John Reeder, Roberto Miguez, Jessica Sparks, Michael Georgiopoulos, and Georgios Anagnostopoulos, Interactively Evolved Modular Neural Networks for Game Agent Control
  37. Mostafa Sahraei-Ardakani, Ashkan Rahimi-Kian, and Majid Nili-Ahmadabadi, Hierarchical Nash-Q Learning in Continuous Games
  38. Mostafa Sahraei-Ardakani, Mahnaz Roshanaei, Ashkan Rahimi-Kian, and Caro Lucas, A Study of Electricity Market Dynamics Using Invasive Weed Colonization Optimization
  39. Tom Schaul and Juergen Schmidhuber, A Scalable Neural Network Architecture for Board Games
  40. Shiven Sharma, Ziad Kobti, and Scott Goodwin, Learning and Knowledge Generation in General Games
  41. Kazutomo Shibahara and Yoshiyuki Kotani, Combining Final Score with Winning Percentage by Sigmoid Function in Monte-Carlo Simulations
  42. Shogo Takeuchi, Tomoyuki Kaneko, and Kazunori Yamaguchi, Evaluation of Monte Carlo Tree Search and the Application to Go
  43. Duc Thang Ho and Jon Garibaldi, A Fuzzy Approach For The 2007 CIG Simulated Car Racing Competition
  44. Thomas Thompson and John Levine, Scaling-up Behaviours in EvoTanks: Applying Subsumption Principles to Artificial Neural Networks
  45. Thomas Thompson, John Levine, and Russell Wotherspoon, Evolution of Counter Strategies: Application of Co-evolution to Texas Hold'em Poker
  46. Thomas Thompson, Lewis McMillan, John Levine, and Alastair Andrew, An Evaluation of the Benefits of Look-Ahead in Pac-Man
  47. Julian Togelius and Juergen Schmidhuber, An Experiment in Automatic Game Design
  48. Ian Watson, Song Lee, Jonathan Rubin, and Stefan Wender, Improving a Case-Based Texas Hold'em Poker Bot
  49. Stefan Wender and Ian Watson, Using Reinforcement Learning for City Site Selection in the Turn-Based Strategy Game Civilization IV
  50. Joost Westra, Hado van Hasselt, Virginia Dignum, and Frank Dignum, On-line Adapting Games Using Agent Organizations
  51. Nathan Wirth and Marcus Gallagher, An Influence Map Model for Playing Ms. Pac-Man
  52. Mark Wittkamp, Luigi Barone, and Philip Hingston, Using NEAT for Continuous Adaptation and Teamwork Formation in Pacman
  53. Georgios Yannakakis and John Hallam, Real-time Adaptation of Augmented-Reality Games for Optimizing Player Satisfaction