MO-Ant
Action Space | Box(-1.0, 1.0, (8,), float32)
Observation Shape | (105,)
Observation High | inf
Observation Low | -inf
Reward Shape | (3,)
Reward High | [inf inf inf]
Reward Low | [-inf -inf -inf]
Import | mo_gym.make("mo-ant-v5")
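The table entries can be checked directly from an instantiated environment. The snippet below is a minimal sketch, assuming MO-Gymnasium is imported as mo_gym and that the environment exposes its vector reward space through the reward_space attribute.

```python
import mo_gymnasium as mo_gym

env = mo_gym.make("mo-ant-v5")

print(env.action_space)                  # Box(-1.0, 1.0, (8,), float32)
print(env.observation_space.shape)       # (105,)
print(env.unwrapped.reward_space.shape)  # (3,) -- one entry per objective
```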
Description
Multi-objective version of the AntEnv environment.
See Gymnasium's Ant environment for more information.
The original Gymnasium 'Ant-v5' reward is recovered by the following linear scalarization:
env = mo_gym.make('mo-ant-v5', cost_objective=False)
env = LinearReward(env, weight=np.array([1.0, 0.0]))
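A self-contained sketch of this scalarization, assuming MO-Gymnasium is imported as mo_gym and that the LinearReward wrapper is available from mo_gymnasium.wrappers (its location may differ between releases):

```python
import numpy as np
import mo_gymnasium as mo_gym
from mo_gymnasium.wrappers import LinearReward  # location may vary across versions

# Drop the separate cost objective so the vector reward is [x-velocity, y-velocity].
env = mo_gym.make("mo-ant-v5", cost_objective=False)

# Weight 1.0 on x-velocity and 0.0 on y-velocity collapses the vector reward
# back to a single scalar, matching Gymnasium's Ant-v5 reward.
env = LinearReward(env, weight=np.array([1.0, 0.0]))

obs, info = env.reset(seed=42)
obs, reward, terminated, truncated, info = env.step(env.action_space.sample())
print(reward)  # scalar reward after linear scalarization
```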
Reward Space
The reward is 2- or 3-dimensional:
0: x-velocity
1: y-velocity
2: Control cost of the action
If the cost_objective flag is set to False, the reward is 2-dimensional and the control cost is added to the other objectives. A healthy reward and a cost for contact forces are added to all objectives.
A 2-objective version (without the cost objective as a separate objective) can be instantiated via:
env = mo_gym.make('mo-ant-2obj-v5')
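The sketch below steps both variants and prints their vector rewards; it assumes the environment ids above and the reward_space attribute exposed by MO-Gymnasium environments.

```python
import mo_gymnasium as mo_gym

# Default 3-objective variant: [x-velocity, y-velocity, control cost].
env3 = mo_gym.make("mo-ant-v5")
# 2-objective variant: the control cost is folded into the other objectives.
env2 = mo_gym.make("mo-ant-2obj-v5")

for env in (env3, env2):
    obs, info = env.reset(seed=0)
    obs, vec_reward, terminated, truncated, info = env.step(env.action_space.sample())
    # Expected shapes: (3,) for mo-ant-v5, (2,) for mo-ant-2obj-v5.
    print(env.unwrapped.reward_space.shape, vec_reward)
```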
Version History
v5: Now includes contact forces in the reward and observation. The 2-objective version now has the id 'mo-ant-2obj-v5' instead of 'mo-ant-2d-v4'. See https://gymnasium.farama.org/environments/mujoco/ant/#version-history