diff --git a/docs/developer_guides/pipelines/dataparsers.md b/docs/developer_guides/pipelines/dataparsers.md
index 458abdeb..66a46611 100644
--- a/docs/developer_guides/pipelines/dataparsers.md
+++ b/docs/developer_guides/pipelines/dataparsers.md
@@ -103,7 +103,7 @@ dataparser_outputs = dataparser.get_dataparser_outputs(split="train")
 input_dataset = InputDataset(dataparser_outputs)
 ```
 
-You can also pull out information from the DataParserOutputs for other DataMangager componenets, such as the RayGenerator. The RayGenerator generates RayBundle objects from camera and pixel indices.
+You can also pull out information from the DataParserOutputs for other DataManager components, such as the RayGenerator. The RayGenerator generates RayBundle objects from camera and pixel indices.
 
 ```python
 ray_generator = RayGenerator(dataparser_outputs.cameras)
diff --git a/docs/developer_guides/pipelines/fields.md b/docs/developer_guides/pipelines/fields.md
index f4c9c6d2..5533db49 100644
--- a/docs/developer_guides/pipelines/fields.md
+++ b/docs/developer_guides/pipelines/fields.md
@@ -56,7 +56,7 @@ class Field(nn.Module):
 
 The forward function is the main function you'll use, which takes in RaySamples returns quantities for each sample. You'll notice that the get_density function is called for every field, followed by the get_outputs function.
 
-The get_outputs function is what you need to implement to return custom data. For example, check out of SemanticNerfField where we rely on different FieldHeads to produce correct dimensional outputs for typical quantiites. Our implemented FieldHeads have the following FieldHeadNames names.
+The get_outputs function is what you need to implement to return custom data. For example, check out SemanticNerfField, where we rely on different FieldHeads to produce correct dimensional outputs for typical quantities. Our implemented FieldHeads have the following FieldHeadNames names.
 
 ```python
 class FieldHeadNames(Enum):
diff --git a/docs/developer_guides/pipelines/models.md b/docs/developer_guides/pipelines/models.md
index 92f90695..6aef9ec8 100644
--- a/docs/developer_guides/pipelines/models.md
+++ b/docs/developer_guides/pipelines/models.md
@@ -20,7 +20,7 @@ A model, at a high level, takes in regions of space described by RayBundle objec
 
 ## Functions to Implement
 
-[The code](https://github.com/nerfstudio-project/nerfstudio/blob/master/nerfstudio/models/base_model.py) is quite verbose, so here we distill the most important functions with succint descriptions.
+[The code](https://github.com/nerfstudio-project/nerfstudio/blob/master/nerfstudio/models/base_model.py) is quite verbose, so here we distill the most important functions with succinct descriptions.
 
 ```python
 class Model:
@@ -52,7 +52,7 @@ class Model:
         """Returns the training callbacks, such as updating a density grid for Instant NGP."""
 
     def get_outputs(self, ray_bundle: RayBundle):
-        """Process a RayBundle object and return RayOutputs describing quanties for each ray."""
+        """Process a RayBundle object and return RayOutputs describing quantities for each ray."""
 
     def get_metrics_dict(self, outputs, batch):
         """Returns metrics dictionary which will be plotted with wandb or tensorboard."""
diff --git a/docs/nerfology/model_components/index.md b/docs/nerfology/model_components/index.md
index e3619d89..3d2f0dd9 100644
--- a/docs/nerfology/model_components/index.md
+++ b/docs/nerfology/model_components/index.md
@@ -1,6 +1,6 @@
 # Model components
 
-It can be difficult getting started with NeRFs. The reserach field is still quite new and most of the key nuggets are burried in academic papers. For this reason, we have consoladated many of the key concepts into a series of guides.
+It can be difficult getting started with NeRFs. The research field is still quite new and most of the key nuggets are buried in academic papers. For this reason, we have consolidated many of the key concepts into a series of guides.
 
 ```{toctree}
 :maxdepth: 1
diff --git a/docs/sdfstudio-data.md b/docs/sdfstudio-data.md
index 82ea1084..4518b863 100644
--- a/docs/sdfstudio-data.md
+++ b/docs/sdfstudio-data.md
@@ -30,7 +30,7 @@ The json file (meta_data.json) stores meta data of the scene and has the followi
     'has_mono_prior': true,  # use monocular cues or not
     'pairs': 'pairs.txt',  # pairs file used for multi-view photometric consistency loss
     'worldtogt': [
-        [1, 0, 0, 0],  # world to gt transformation (useful for evauation)
+        [1, 0, 0, 0],  # world to gt transformation (useful for evaluation)
         [0, 1, 0, 0],
         [0, 0, 1, 0],
         [0, 0, 0, 1],
@@ -52,7 +52,7 @@ The json file (meta_data.json) stores meta data of the scene and has the followi
     },
     'frames': [  # this contains information for each image
         {
-            # note that all paths are relateive path
+            # note that all paths are relative paths
             # path of rgb image
            'rgb_path': '000000_rgb.png',
             # camera to world transform
@@ -134,7 +134,7 @@ ns-download-data sdfstudio --dataset-name neural-rgbd-data
 
 Then run the following command to convert the downloaded neural-rgbd dataset to SDFStudio format:
 
 ```bash
-# kitchen scene for example, replca the scene path to convert other scenes
+# kitchen scene for example, replace the scene path to convert other scenes
 python scripts/datasets/process_neuralrgbd_to_sdfstudio.py --input_path data/neural-rgbd-data/kitchen/ --output_path data/neural_rgbd/kitchen_sensor_depth --type sensor_depth
 ```