Commit 9fc7e68 (merge of 2 parents: 4253c42 + ecfa39e)

2 files changed: +6 −6 lines

docs/Getting Started/quickstart.md

Lines changed: 2 additions & 2 deletions

```diff
@@ -38,11 +38,11 @@ def build_rlgym_v2_env():
     import numpy as np
 
     spawn_opponents = True
-    team_size = 1
+    team_size = 2
     blue_team_size = team_size
     orange_team_size = team_size if spawn_opponents else 0
     action_repeat = 8
-    no_touch_timeout_seconds = 10
+    no_touch_timeout_seconds = 30
     game_timeout_seconds = 300
 
     action_parser = RepeatAction(LookupTableAction(), repeats=action_repeat)
```
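Applied in full, the hunk above leaves the quickstart's parameter block as follows. This is a sketch reconstructed from the diff: the values come from the commit, but the enclosing `build_rlgym_v2_env` body and its indentation are assumptions from the hunk header.

```python
# Reconstructed quickstart parameters after this commit (values from the diff).
spawn_opponents = True
team_size = 2  # changed from 1: the environment now spawns 2v2 matches
blue_team_size = team_size
orange_team_size = team_size if spawn_opponents else 0  # opponents mirror team size
action_repeat = 8
no_touch_timeout_seconds = 30  # changed from 10: more time allowed without a ball touch
game_timeout_seconds = 300
```

Note that `orange_team_size` is derived from `team_size`, so bumping `team_size` to 2 grows both teams at once.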

docs/Rocket League/training_an_agent.md

Lines changed: 4 additions & 4 deletions

````diff
@@ -7,11 +7,11 @@ sidebar_position: 1
 
 This guide builds on our [Quick Start Guide](../Getting%20Started/quickstart) to help you train a more sophisticated Rocket League bot than the simple setup in the quickstart guide. We'll use RocketSim to run training much faster than the actual game, and cover all the key concepts you need to know.
 
-his tutorial is adapted from an excellent guide written by Zealan, the creator of RocketSim. You can find the [original tutorial here](https://github.com/ZealanL/RLGym-PPO-Guide/tree/main) for even more details.
+This tutorial is adapted from an excellent guide written by Zealan, the creator of RocketSim. You can find the [original tutorial here](https://github.com/ZealanL/RLGym-PPO-Guide/tree/main) for even more details.
 
 ## A Better Agent
 
-We'll start off this by first creating a richer reward function so our agent has an easier time learning what to do. Then we'll adjust the PPO hyperparameters, and finally set up a visualizer so we can watch our agent learn.
+We'll start this off by first creating a richer reward function so our agent has an easier time learning what to do. We'll then adjust the PPO hyperparameters, and finally set up a visualizer so we can watch our agent learn.
 
 First you'll need to make sure you have RLGym installed with RLViser support (unless you are using a different visualizer, in which case you can skip this step):
 
@@ -22,8 +22,8 @@ pip install rlgym[rl-rlviser]
 Now let's make a few custom reward functions to help our agent out. It's best to move these to a separate file from the main script and then import them when making the environment, but you can put them wherever you like.
 ```python
 from typing import List, Dict, Any
-from rlgym.api import RewardFunction
-from rlgym.rocket_league.api import GameState, AgentID
+from rlgym.api import RewardFunction, AgentID
+from rlgym.rocket_league.api import GameState
 from rlgym.rocket_league import common_values
 import numpy as np
 
````
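The import fix in the second hunk moves `AgentID` to `rlgym.api`, alongside `RewardFunction`, rather than the Rocket League submodule. As a rough, self-contained sketch of the per-agent reward pattern these imports feed into (plain-Python stand-ins, not the real rlgym classes; `TouchReward` and the `get_rewards` shape here are illustrative assumptions, not the library's exact API):

```python
from typing import Any, Dict, Generic, List, TypeVar

AgentID = TypeVar("AgentID")      # stand-in for rlgym.api.AgentID
StateType = TypeVar("StateType")  # stand-in for a game-state type like GameState

class RewardFunction(Generic[AgentID, StateType]):
    """Minimal stand-in for the rlgym RewardFunction idea: map each agent to a float."""
    def get_rewards(self, agents: List[AgentID], state: StateType,
                    shared_info: Dict[str, Any]) -> Dict[AgentID, float]:
        raise NotImplementedError

class TouchReward(RewardFunction[str, dict]):
    """Hypothetical reward: 1.0 for agents flagged as touching the ball this step."""
    def get_rewards(self, agents, state, shared_info):
        return {a: 1.0 if a in state.get("touchers", ()) else 0.0 for a in agents}
```

For example, `TouchReward().get_rewards(["blue-0", "orange-0"], {"touchers": {"blue-0"}}, {})` returns `{"blue-0": 1.0, "orange-0": 0.0}`. The point of the generic `AgentID` parameter is that rewards are returned per agent, which is why it belongs to the core API package rather than the Rocket League bindings.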
