Env

The OpenAI Gym-inspired Env base class is the main API that represents the environmental dynamics or “generative process” with which agents exchange observations and actions.

Base class

class pymdp.envs.Env

The Env base class, loosely inspired by the analogous env class of the OpenAI Gym framework.

A typical workflow is as follows:

>>> my_env = MyCustomEnv(<some_params>)
>>> initial_observation = my_env.reset(initial_state)
>>> my_agent.infer_states(initial_observation)
>>> my_agent.infer_policies()
>>> next_action = my_agent.sample_action()
>>> next_observation = my_env.step(next_action)

This would be the first step of an active inference process, where a sub-class of Env, MyCustomEnv, is initialized, an initial observation is produced, and these observations are fed into an instance of Agent in order to produce an action, which can then be fed back into the Env instance.
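For a more concrete (if schematic) version of this loop, the sketch below uses the TMazeEnv listed further down. It assumes, as in pymdp's T-Maze demo, that the environment exposes get_likelihood_dist() and get_transition_dist(), and that the agent's generative model is simply a copy of the environment's true likelihood (A) and transition (B) arrays; the reward_probs values are illustrative:

>>> import copy
>>> from pymdp.agent import Agent
>>> from pymdp.envs import TMazeEnv
>>> env = TMazeEnv(reward_probs=[0.98, 0.02])  # generative process
>>> A = copy.deepcopy(env.get_likelihood_dist())  # true observation likelihoods
>>> B = copy.deepcopy(env.get_transition_dist())  # true transition dynamics
>>> agent = Agent(A=A, B=B)  # generative model copied from the generative process
>>> obs = env.reset()  # initial observation
>>> for t in range(5):
...     qs = agent.infer_states(obs)  # posterior over hidden states
...     q_pi, G = agent.infer_policies()  # posterior over policies and their expected free energies
...     action = agent.sample_action()  # sample an action from the policy posterior
...     obs = env.step(action)  # environment generates the next observation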

Specific environment implementations

All of the following environment classes inherit from Env and have the same general usage as above.

pymdp.envs.GridWorldEnv

2-dimensional grid-world implementation with 5 actions (the 4 cardinal directions and staying put).

pymdp.envs.DGridWorldEnv

1-dimensional grid-world implementation with 3 possible movement actions ("LEFT", "STAY", "RIGHT").

pymdp.envs.VisualForagingEnv

Implementation of the visual foraging environment used for scene construction simulations.

pymdp.envs.TMazeEnv

Implementation of the 3-arm T-Maze environment.

pymdp.envs.TMazeEnvNullOutcome

Implementation of the 3-arm T-Maze environment with an additional null outcome within the cue modality, so that the agent receives a null observation (rather than a random cue observation) when it visits non-cue locations.
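As a rough illustration of the difference between the two T-Maze variants, the sketch below compares the observation likelihoods they generate. It assumes that both classes accept the same reward_probs keyword and expose get_likelihood_dist(), and that the extra null outcome appears as an additional entry in the cue modality's outcome dimension:

>>> from pymdp.envs import TMazeEnv, TMazeEnvNullOutcome
>>> # Generative-process likelihoods for both variants (reward_probs values are illustrative)
>>> A_standard = TMazeEnv(reward_probs=[0.98, 0.02]).get_likelihood_dist()
>>> A_null = TMazeEnvNullOutcome(reward_probs=[0.98, 0.02]).get_likelihood_dist()
>>> # The null-outcome variant should show one extra outcome in the cue modality
>>> for m, (A_s, A_n) in enumerate(zip(A_standard, A_null)):
...     print(f"Modality {m}: standard {A_s.shape}, null-outcome {A_n.shape}")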