- Clone this repository
- Clone the submodules:
  ```shell
  git submodule update --init --recursive
  ```
- Create a conda env from the `env.yml` file:
  ```shell
  conda env create -f env.yml
  ```
- Install the dependencies:
  ```shell
  python -m pip install -r requirements.txt
  ```
- Install the package:
  ```shell
  python setup.py develop
  ```
- Move into each subdirectory inside `third_parties` and execute:
  ```shell
  python setup.py develop --all
  ```
- Set `base_dir` in `confs/config.yaml` to the absolute path of this project
All the data are stored inside the `data` directory.

- Download the test scenes: http://dl.fbaipublicfiles.com/habitat/habitat-test-scenes.zip
- Unzip the archive inside the project directory
- Suggestion: you can keep the data in a separate folder and use soft links (`ln -s /path/to/dataset /path/to/project/data`)
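The soft-link suggestion can be sketched as follows; `/tmp/habitat-data` and the current directory are hypothetical stand-ins for the real dataset and project paths:

```shell
# Keep datasets in a separate folder and expose them to the project via a
# soft link (paths below are hypothetical examples).
mkdir -p /tmp/habitat-data       # external dataset folder
ln -sfn /tmp/habitat-data ./data # the project then reads from ./data
```

`-f` replaces an existing link and `-n` avoids following it, so the command is safe to re-run when the dataset location changes.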
Use `python scripts/run_exp.py` to run training or deployment of a policy. More information about the RL baselines and our RL policy is given below.
To replay an experiment, use:

```shell
python scripts/visualize_exp.py replay.episode_id={ID episode} replay.exp_name={PATH TO EPISODE} replay.modalities="['rgb', 'depth', 'semantic']"
```
The following learned baselines are implemented:
- `neuralslam`: start from `confs/habitat/gibson_neuralslam.yaml`
- `seal-v0`: start from `confs/habitat/gibson_seal.yaml`
- `curiosity-v0`: start from `confs/habitat/gibson_semantic_curiosity.yaml`
The following classical baselines are implemented:
- `randomgoalsbaseline`
- `frontierbaseline-v1` (also `frontierbaseline-v2`, `frontierbaseline-v3`)
- `bouncebaseline`
- `rotatebaseline`
- `randombaseline`

Start from `confs/habitat/gibson_goal_exploration.yaml`.
- `CHECKPOINT_FOLDER`: folder in which checkpoints are saved
- `TOTAL_NUM_STEPS`: max number of training steps
- under `ppo`:
  - `replanning_steps`: how often to run the policy
  - `num_global_steps`: how often to train the policy
  - `save_periodic`: how often to save a checkpoint
  - `load_checkpoint_path`: full path to a checkpoint to load at start
  - `load_checkpoint`: set to True to load `load_checkpoint_path`
  - `visualize`: if True, debug images are shown
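Put together, these parameters would sit in the config roughly as sketched below; the values are illustrative, not the project's actual defaults, and the exact nesting should be checked against `confs/habitat/gibson_goal_exploration.yaml`:

```yaml
CHECKPOINT_FOLDER: checkpoints/   # where checkpoints are saved
TOTAL_NUM_STEPS: 1000000          # max number of training steps
ppo:
  replanning_steps: 25            # how often to run the policy
  num_global_steps: 20            # how often to train the policy
  save_periodic: 100000           # how often to save a checkpoint
  load_checkpoint: False          # set True to load load_checkpoint_path
  load_checkpoint_path: ""        # full path to a checkpoint to load at start
  visualize: False                # if True, debug images are shown
```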
Environments:
- `SemanticDisagreement-v0` (reward: `sum(disagreement_t)`)
Environments for the RL baselines are also provided:
- `SemanticCuriosity-v0` (Semantic Curiosity)
- `sealenv-v0` (SEAL)
- `ExpSlam-v0` (NeuralSLAM)
Policies:
- `goalexplorationbaseline-v0` (state: `disagreement_t`, `map_t`, agent pose)
Checkpoints:
- Ours {ADD LINK}
Start from `confs/habitat/gibson_goal_exploration.yaml`:

- `replanning_steps`: how often to run the policy
- `load_checkpoint_path`: full path to a checkpoint to load at start
- `load_checkpoint`: set to True
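For evaluation from a saved checkpoint, the relevant config fragment might look like the sketch below; the checkpoint path is a hypothetical placeholder and the nesting under `ppo` is assumed from the training parameters:

```yaml
ppo:
  replanning_steps: 25                            # how often to run the policy
  load_checkpoint: True                           # enable checkpoint loading
  load_checkpoint_path: /path/to/checkpoint.ckpt  # hypothetical path
```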
| Scenes models | Extract path | Archive size |
|---|---|---|
| Gibson | `data/scene_datasets/gibson/{scene}.glb` | 1.5 GB |
| MatterPort3D | `data/scene_datasets/mp3d/{scene}/{scene}.glb` | 15 GB |
You can download the task at the following link {ADD LINK}; unzip it and put it in `data/datasets/objectnav/gibson/v1.1`.
| Task | Scenes | Link | Extract path | Config to use | Archive size |
|---|---|---|---|---|---|
| Point goal navigation | Gibson | pointnav_gibson_v1.zip | `data/datasets/pointnav/gibson/v1/` | `datasets/pointnav/gibson.yaml` | 385 MB |
| Point goal navigation corresponding to Sim2LoCoBot experiment configuration | Gibson | pointnav_gibson_v2.zip | `data/datasets/pointnav/gibson/v2/` | `datasets/pointnav/gibson_v2.yaml` | 274 MB |
| Point goal navigation | MatterPort3D | pointnav_mp3d_v1.zip | `data/datasets/pointnav/mp3d/v1/` | `datasets/pointnav/mp3d.yaml` | 400 MB |
- Follow the instructions in the main Habitat-lab repository
- Request a license from the Gibson website: https://stanfordvl.github.io/iGibson/dataset.html
- Download gibson tiny:
  ```shell
  wget https://storage.googleapis.com/gibson_scenes/gibson_tiny.tar.gz
  ```
- Follow the instructions at Habitat-sim to generate the gibson semantic data
- detectron2 >= 0.5
- torch >= 1.9
- pytorch-lightning >= 1.5
- habitat-sim = 0.2
- habitat-lab
- torchmetrics >= 0.6
If you want to contribute to the project, we suggest installing `requirements-dev.txt` and enabling pre-commit:

```shell
python -m pip install -r requirements-dev.txt
pre-commit install
```