Replies: 1 comment
How did you solve this problem? I'm facing the same issue.
Habitat-Lab and Habitat-Sim versions
Habitat-Lab: challenge-2022 tag
Habitat-Sim: challenge-2022 tag
Datasets: MP3D scenes and their object navigation task episodes
❓ Questions and Help
Question: the semantic sensor output's perspective differs from the RGB and depth sensors. (I tried printing the sensor states; all three sensors report the same state.)
The following is my code:
import random

import cv2 as cv
from habitat import Env, get_config

config_paths = "/Configs/Object_Nav_mp3D.yaml"
config = get_config(config_paths)
hab_env = Env(config=config)
print(hab_env._sim.get_agent_state())

action = {1: 'MOVE_FORWARD', 2: 'TURN_LEFT', 3: 'TURN_RIGHT'}
ep_i = 0
while ep_i < 10:
    observations = hab_env.reset()
    start = 0
    while start < 15:
        obs = hab_env.step(action[random.choice([1, 2, 3])])
        # dis is a helper that only converts the raw observations into displayable images
        semantic, depth = dis(obs['semantic'], obs['depth'])
        cv.imshow("RGB", obs['rgb'])
        cv.imshow("Semantic", semantic)
        cv.imshow("depth", depth)
        cv.waitKey(500)
        start += 1
    ep_i += 1  # advance to the next episode
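For reference, here is a minimal sketch (not part of the code above) for checking whether the RGB, depth, and semantic sensors actually share the same pose, and for colorizing the raw semantic observation for display. It assumes the challenge-2022 API, where per-sensor poses are exposed through get_agent_state().sensor_states and the semantic observation is a per-pixel array of instance IDs; d3_40_colors_rgb is habitat-sim's 40-color display palette. The function names print_sensor_poses and colorize_semantic are just illustrative helpers.

import numpy as np
from habitat_sim.utils.common import d3_40_colors_rgb

def print_sensor_poses(sim):
    # Each sensor_states entry holds the world-frame position/rotation of one sensor
    # (keys are the sensor uuids, e.g. "rgb", "depth", "semantic").
    state = sim.get_agent_state()
    for uuid, pose in state.sensor_states.items():
        print(uuid, pose.position, pose.rotation)

def colorize_semantic(semantic_obs):
    # Map per-pixel instance IDs onto the 40-color palette so the image is viewable.
    ids = np.asarray(semantic_obs).squeeze().astype(np.int64) % len(d3_40_colors_rgb)
    return d3_40_colors_rgb[ids]  # (H, W, 3) uint8, RGB order

# usage inside the step loop (OpenCV expects BGR, hence the channel flip):
#   print_sensor_poses(hab_env._sim)
#   cv.imshow("Semantic", colorize_semantic(obs['semantic'])[:, :, ::-1])

If the three sensor poses printed this way are identical but the semantic image still looks like it comes from a different viewpoint, comparing the WIDTH/HEIGHT/HFOV entries of each sensor in the config is worth trying, since a mismatch there changes the projection even when the poses agree.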
The following is my config YAML:

The following is the running result:
