The report states: “building a controller for the full game based on machine learning is out-of-reach for current methods”.

They therefore separate tasks into levels and test different machine learning methods, with the aim of finding "the best" method to apply across all tasks/levels. But there is a problem: there is probably no single best solution for everything. It would be better to have several approaches available (machine learning and non-machine-learning alike) and to switch between them. With this solution the system could learn which approach works best for a given task and stick with it.
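One way to sketch this "learn which approach is best" idea is a simple epsilon-greedy bandit over a set of candidate controllers. Everything below is a hypothetical illustration, not anything from the paper: the controller names and the reward interface are assumptions, standing in for whatever ML or scripted policies one might actually plug in.

```python
import random

class ControllerSelector:
    """Epsilon-greedy bandit that learns which controller performs best.

    The controllers are placeholders (assumed interface): each would be
    a policy, ML-based or scripted, evaluated by some episode reward.
    """

    def __init__(self, controller_names, epsilon=0.1):
        self.counts = {name: 0 for name in controller_names}
        self.values = {name: 0.0 for name in controller_names}  # running mean reward
        self.epsilon = epsilon

    def select(self):
        # Explore with probability epsilon, otherwise exploit the
        # controller with the highest estimated reward so far.
        if random.random() < self.epsilon:
            return random.choice(list(self.counts))
        return max(self.values, key=self.values.get)

    def update(self, name, reward):
        # Incremental running-mean update of the controller's value estimate.
        self.counts[name] += 1
        self.values[name] += (reward - self.values[name]) / self.counts[name]

# Hypothetical usage: after enough episodes, the selector converges on
# whichever controller (scripted or learned) yields the higher reward.
selector = ControllerSelector(["scripted", "learned"], epsilon=0.1)
```

The same idea scales up to contextual bandits or a meta-policy if the best controller depends on the current game state rather than being globally fixed.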


From a machine learning point of view, StarCraft provides an ideal environment to study the control of multiple agents at large scale, and also an opportunity to define tasks of increasing difficulty, from micromanagement, which concerns the short-term, low-level control of fighting units during battles, to long-term strategic and hierarchical planning under uncertainty. While building a controller for the full game based on machine learning is out-of-reach for current methods, we propose, as a first step, to study reinforcement learning algorithms in micromanagement scenarios in StarCraft.

