
How do I create a new Gym environment in OpenAI?

codestyles 2020. 12. 12. 10:53



I have an assignment to make an AI agent that learns to play a video game using ML. I want to create a new environment using OpenAI Gym, because I don't want to use an existing one. How can I create a new, custom environment?

Also, is there any other way I can start developing an AI agent that plays a specific video game without the help of OpenAI Gym?


See my banana-gym for an extremely small example environment.

Creating new environments

See the main page of the repository:

https://github.com/openai/gym/blob/master/docs/creating-environments.md

The steps are:

  1. Create a new repository with a PIP package structure.

It should look like this:

gym-foo/
  README.md
  setup.py
  gym_foo/
    __init__.py
    envs/
      __init__.py
      foo_env.py
      foo_extrahard_env.py
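The guide linked above describes the glue files that make this layout installable. As a hedged sketch (the exact contents follow that guide's conventions, so verify against the current version of the document), gym_foo/__init__.py registers the environment id and setup.py declares the package:

```python
# gym_foo/__init__.py -- registers the env id so gym.make() can find it
from gym.envs.registration import register

register(
    id='foo-v0',                        # the id you later pass to gym.make()
    entry_point='gym_foo.envs:FooEnv',  # module path to your environment class
)

# setup.py -- minimal PIP package declaration at the repository root
from setuptools import setup

setup(name='gym_foo',
      version='0.0.1',
      install_requires=['gym'])  # plus any other dependencies your env needs
```

After `pip install -e .`, importing gym_foo runs the registration as a side effect.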

Follow the link above for the contents of those files. A detail that is not mentioned there is how some of the functions in foo_env.py should look. Looking at the examples and at gym.openai.com/docs/ helps. Here is an example:

import gym
import hfo_py  # provides the IN_GAME status used below (from the gym-soccer example)


# Note: this snippet follows the older gym API (_step, _reset, _render);
# newer gym versions expect step, reset and render without the underscore.
class FooEnv(gym.Env):
    metadata = {'render.modes': ['human']}

    def __init__(self):
        pass

    def _step(self, action):
        """

        Parameters
        ----------
        action :

        Returns
        -------
        ob, reward, episode_over, info : tuple
            ob (object) :
                an environment-specific object representing your observation of
                the environment.
            reward (float) :
                amount of reward achieved by the previous action. The scale
                varies between environments, but the goal is always to increase
                your total reward.
            episode_over (bool) :
                whether it's time to reset the environment again. Most (but not
                all) tasks are divided up into well-defined episodes, and done
                being True indicates the episode has terminated. (For example,
                perhaps the pole tipped too far, or you lost your last life.)
            info (dict) :
                 diagnostic information useful for debugging. It can sometimes
                 be useful for learning (for example, it might contain the raw
                 probabilities behind the environment's last state change).
                 However, official evaluations of your agent are not allowed to
                 use this for learning.
        """
        self._take_action(action)
        self.status = self.env.step()
        reward = self._get_reward()
        ob = self.env.getState()
        episode_over = self.status != hfo_py.IN_GAME
        return ob, reward, episode_over, {}

    def _reset(self):
        pass

    def _render(self, mode='human', close=False):
        pass

    def _take_action(self, action):
        pass

    def _get_reward(self):
        """ Reward is given for XY. """
        # FOOBAR, ABC and self.somestate are placeholders for your own
        # game-specific status constants and state.
        if self.status == FOOBAR:
            return 1
        elif self.status == ABC:
            return self.somestate ** 2
        else:
            return 0
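The _step docstring above describes a four-tuple contract: (ob, reward, episode_over, info). Here is a minimal, self-contained sketch of that contract in action; CountingEnv is an invented toy and deliberately does not subclass gym.Env, so the snippet runs without gym installed:

```python
class CountingEnv:
    """Toy environment: the agent moves +1 or -1; episode ends at |state| >= 3."""

    def __init__(self):
        self.state = 0

    def reset(self):
        self.state = 0
        return self.state  # initial observation

    def step(self, action):
        assert action in (-1, 1)
        self.state += action
        reward = 1.0 if self.state > 0 else 0.0
        episode_over = abs(self.state) >= 3
        return self.state, reward, episode_over, {}


env = CountingEnv()
ob = env.reset()
done = False
total = 0.0
while not done:
    ob, reward, done, info = env.step(1)  # always move up
    total += reward
# after three +1 steps, state == 3 and the episode is over
```

The same loop works unchanged against any environment that honors the four-tuple contract, which is the whole point of the interface.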

Use your environment

import gym
import gym_foo
env = gym.make('foo-v0')  # the id registered in gym_foo/__init__.py

A few existing third-party environments you can take as inspiration:

  1. https://github.com/openai/gym-soccer
  2. https://github.com/openai/gym-wikinav
  3. https://github.com/alibaba/gym-starcraft
  4. https://github.com/endgameinc/gym-malware
  5. https://github.com/hackthemarket/gym-trading
  6. https://github.com/tambetm/gym-minecraft
  7. https://github.com/ppaquette/gym-doom
  8. https://github.com/ppaquette/gym-super-mario
  9. https://github.com/tuzzer/gym-maze

It is definitely possible. They say so near the end of the documentation page:

https://gym.openai.com/docs

As for how to do it, you should look at the source code of the existing environments for inspiration. It's available on GitHub:

https://github.com/openai/gym#installation

They did not implement most of their environments from scratch; rather, they created wrappers around existing environments and gave them all an interface that is convenient for reinforcement learning.

If you want to make your own, you should probably go in this direction and try to adapt something that already exists to the Gym interface, although there is a good chance that this will be very time-consuming.
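That wrapper direction can be sketched roughly like this; ExistingGame is a hypothetical stand-in for whatever engine you already have, and GameEnv only translates its API into the gym-style step/reset convention:

```python
class ExistingGame:
    """Hypothetical pre-existing game with its own, non-gym API."""

    def new_game(self):
        self.lives = 2
        self.score = 0

    def press_button(self, button):
        self.score += 10 if button == 'jump' else 0
        if button == 'duck':
            self.lives -= 1

    def is_over(self):
        return self.lives <= 0

    def screen(self):
        return (self.lives, self.score)


class GameEnv:
    """Adapter exposing a gym-style step/reset interface over ExistingGame."""

    BUTTONS = ['jump', 'duck']  # discrete action space: index into this list

    def __init__(self):
        self.game = ExistingGame()

    def reset(self):
        self.game.new_game()
        return self.game.screen()

    def step(self, action):
        before = self.game.score
        self.game.press_button(self.BUTTONS[action])
        reward = self.game.score - before  # reward = score gained this step
        return self.game.screen(), reward, self.game.is_over(), {}


env = GameEnv()
ob = env.reset()
ob, reward, done, _ = env.step(0)  # press 'jump'
```

In a real wrapper you would also subclass gym.Env and declare action_space and observation_space, which this sketch omits to stay self-contained.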

There is another option that may be interesting for your purpose: OpenAI's Universe.

https://universe.openai.com/

It can integrate with websites so that you can train your models on Kongregate games, for example. But Universe is not as easy to use as Gym.

If you are a beginner, my recommendation is that you start with a vanilla implementation on a standard environment. After you get past the problems with the basics, go on to increment...
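To show the shape of such a vanilla loop without assuming gym is installed, here is a tabular Q-learning sketch on an invented five-cell corridor; all names here are illustrative, and against a standard environment you would swap the toy step function for env.step():

```python
import random

random.seed(0)

N_STATES = 5      # corridor cells 0..4; reaching cell 4 ends the episode
MOVES = [-1, +1]  # action 0 = left, action 1 = right

def step(state, action):
    nxt = max(0, min(N_STATES - 1, state + MOVES[action]))
    done = (nxt == N_STATES - 1)
    return nxt, (1.0 if done else 0.0), done

Q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q[state][action] value table
alpha, gamma, eps = 0.5, 0.9, 0.2          # learning rate, discount, exploration

for _ in range(200):                       # 200 training episodes
    state, done = 0, False
    while not done:
        if random.random() < eps:
            action = random.randrange(2)                    # explore
        else:
            action = 0 if Q[state][0] > Q[state][1] else 1  # exploit
        nxt, reward, done = step(state, action)
        target = reward + (0.0 if done else gamma * max(Q[nxt]))
        Q[state][action] += alpha * (target - Q[state][action])
        state = nxt

# after training, the greedy policy in the start cell prefers "right"
```

The episode loop, the epsilon-greedy choice, and the one-step update are the parts that carry over unchanged when you move to a real Gym environment.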

Reference URL: https://stackoverflow.com/questions/45068568/how-to-create-a-new-gym-environment-in-openai
