Commit 3d3e08c: [Doc]: fix various typos in different files (#3497)

1 parent: ebf055e
16 files changed (+22 / -22 lines)

docs/README.md

Lines changed: 1 addition & 1 deletion

@@ -43,7 +43,7 @@ check how they look before committing for instance). You don't have to commit th

 ## Building the documentation

-Once you have setup the `doc-builder` and additional packages with the pip install command above,
+Once you have set up the `doc-builder` and additional packages with the pip install command above,
 you can generate the documentation by typing the following command:

 ```bash

docs/source/en/guides/inference.md

Lines changed: 1 addition & 1 deletion

@@ -152,7 +152,7 @@ You can use [`InferenceClient`] to run chat completion with local inference serv
 ```

 > [!TIP]
-> Similarily to the OpenAI Python client, [`InferenceClient`] can be used to run Chat Completion inference with any OpenAI REST API-compatible endpoint.
+> Similarly to the OpenAI Python client, [`InferenceClient`] can be used to run Chat Completion inference with any OpenAI REST API-compatible endpoint.

 ### Authentication


docs/source/en/quick-start.md

Lines changed: 1 addition & 1 deletion

@@ -75,7 +75,7 @@ hf auth login

 The command will tell you if you are already logged in and prompt you for your token. The token is then validated and saved in your `HF_HOME` directory (defaults to `~/.cache/huggingface/token`). Any script or library interacting with the Hub will use this token when sending requests.

-Alternatively, you can programmatically login using [`login`] in a notebook or a script:
+Alternatively, you can programmatically log in using [`login`] in a notebook or a script:

 ```py
 >>> from huggingface_hub import login

src/huggingface_hub/README.md

Lines changed: 1 addition & 1 deletion

@@ -73,7 +73,7 @@ to the Hub: https://huggingface.co/docs/hub/adding-a-model.

 ### API utilities in `hf_api.py`

-You don't need them for the standard publishing workflow (ie. using git command line), however, if you need a
+You don't need them for the standard publishing workflow (i.e. using git command line), however, if you need a
 programmatic way of creating a repo, deleting it (`⚠️ caution`), pushing a
 single file to a repo or listing models from the Hub, you'll find helpers in
 `hf_api.py`. Some example functionality available with the `HfApi` class:

src/huggingface_hub/_inference_endpoints.py

Lines changed: 2 additions & 2 deletions

@@ -329,7 +329,7 @@ def pause(self) -> "InferenceEndpoint":
         """Pause the Inference Endpoint.

         A paused Inference Endpoint will not be charged. It can be resumed at any time using [`InferenceEndpoint.resume`].
-        This is different than scaling the Inference Endpoint to zero with [`InferenceEndpoint.scale_to_zero`], which
+        This is different from scaling the Inference Endpoint to zero with [`InferenceEndpoint.scale_to_zero`], which
         would be automatically restarted when a request is made to it.

         This is an alias for [`HfApi.pause_inference_endpoint`]. The current object is mutated in place with the
@@ -367,7 +367,7 @@ def resume(self, running_ok: bool = True) -> "InferenceEndpoint":
     def scale_to_zero(self) -> "InferenceEndpoint":
         """Scale Inference Endpoint to zero.

-        An Inference Endpoint scaled to zero will not be charged. It will be resume on the next request to it, with a
+        An Inference Endpoint scaled to zero will not be charged. It will be resumed on the next request to it, with a
         cold start delay. This is different than pausing the Inference Endpoint with [`InferenceEndpoint.pause`], which
         would require a manual resume with [`InferenceEndpoint.resume`].


src/huggingface_hub/_local_folder.py

Lines changed: 2 additions & 2 deletions

@@ -206,7 +206,7 @@ def get_local_download_paths(local_dir: Path, filename: str) -> LocalDownloadFil
         [`LocalDownloadFilePaths`]: the paths to the files (file_path, lock_path, metadata_path, incomplete_path).
     """
     # filename is the path in the Hub repository (separated by '/')
-    # make sure to have a cross platform transcription
+    # make sure to have a cross-platform transcription
     sanitized_filename = os.path.join(*filename.split("/"))
     if os.name == "nt":
         if sanitized_filename.startswith("..\\") or "\\..\\" in sanitized_filename:
@@ -246,7 +246,7 @@ def get_local_upload_paths(local_dir: Path, filename: str) -> LocalUploadFilePat
         [`LocalUploadFilePaths`]: the paths to the files (file_path, lock_path, metadata_path).
     """
     # filename is the path in the Hub repository (separated by '/')
-    # make sure to have a cross platform transcription
+    # make sure to have a cross-platform transcription
     sanitized_filename = os.path.join(*filename.split("/"))
     if os.name == "nt":
         if sanitized_filename.startswith("..\\") or "\\..\\" in sanitized_filename:
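For context, the "cross-platform transcription" these comments describe can be sketched as a standalone helper. The function name below is hypothetical; in `huggingface_hub` the logic is inline in the two functions shown in the diff.

```python
import os

def transcribe_hub_path(filename: str) -> str:
    # The Hub always separates path components with '/', while the local
    # filesystem separator depends on the OS; os.path.join reassembles the
    # components using the native separator.
    sanitized = os.path.join(*filename.split("/"))
    # On Windows, reject parent-directory traversal after transcription.
    if os.name == "nt":
        if sanitized.startswith("..\\") or "\\..\\" in sanitized:
            raise ValueError(f"Invalid filename on Windows: '{sanitized}'")
    return sanitized

print(transcribe_hub_path("models/bert/config.json"))
```

On POSIX systems this is a no-op (the separator is already `/`); on Windows it yields backslash-separated paths, which is why the traversal check is Windows-only.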

src/huggingface_hub/cli/_cli_utils.py

Lines changed: 1 addition & 1 deletion

@@ -50,7 +50,7 @@ class AlphabeticalMixedGroup(typer.core.TyperGroup):
     """

     def list_commands(self, ctx: click.Context) -> list[str]:  # type: ignore[name-defined]
-        # click.Group stores both commands and sub-groups in `self.commands`
+        # click.Group stores both commands and subgroups in `self.commands`
         return sorted(self.commands.keys())


src/huggingface_hub/inference/_client.py

Lines changed: 3 additions & 3 deletions

@@ -190,7 +190,7 @@ def __init__(
            )
        token = token if token is not None else api_key
        if isinstance(token, bool):
-            # Legacy behavior: previously is was possible to pass `token=False` to disable authentication. This is not
+            # Legacy behavior: previously it was possible to pass `token=False` to disable authentication. This is not
            # supported anymore as authentication is required. Better to explicitly raise here rather than risking
            # sending the locally saved token without the user knowing about it.
            if token is False:
@@ -859,7 +859,7 @@ def chat_completion(
        >>> messages = [
        ...     {
        ...         "role": "user",
-        ...         "content": "I saw a puppy a cat and a raccoon during my bike ride in the park. What did I saw and when?",
+        ...         "content": "I saw a puppy a cat and a raccoon during my bike ride in the park. What did I see and when?",
        ...     },
        ... ]
        >>> response_format = {
@@ -1427,7 +1427,7 @@ def image_to_text(self, image: ContentT, *, model: Optional[str] = None) -> Imag
        Takes an input image and return text.

        Models can have very different outputs depending on your use case (image captioning, optical character recognition
-        (OCR), Pix2Struct, etc). Please have a look to the model card to learn more about a model's specificities.
+        (OCR), Pix2Struct, etc.). Please have a look to the model card to learn more about a model's specificities.

        Args:
            image (`Union[str, Path, bytes, BinaryIO, PIL.Image.Image]`):
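The `token=False` check touched in the first hunk can be sketched in isolation. The standalone function below is hypothetical; the real check is inline in `InferenceClient.__init__`, and the `token=True` branch is an assumption about how the saved-token case is deferred.

```python
def resolve_token(token=None, api_key=None):
    # `api_key` is accepted as an OpenAI-style alias for `token`.
    token = token if token is not None else api_key
    if isinstance(token, bool):
        # Legacy behavior: previously it was possible to pass `token=False` to
        # disable authentication. This is not supported anymore, so raise
        # explicitly rather than silently sending a locally saved token.
        if token is False:
            raise ValueError("Cannot use `token=False` to disable authentication.")
        # `token=True` is taken here to mean "use the locally saved token";
        # resolution of that token is left out of this sketch.
        token = None
    return token
```

Raising eagerly on `token=False` is the safer design: the alternative (ignoring the flag) could leak a locally saved token on requests the user believed were anonymous.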

src/huggingface_hub/inference/_common.py

Lines changed: 1 addition & 1 deletion

@@ -364,7 +364,7 @@ async def _async_yield_from(client: httpx.AsyncClient, response: httpx.Response)
 #
 # Both approaches have very similar APIs, but not exactly the same. What we do first in
 # the `text_generation` method is to assume the model is served via TGI. If we realize
-# it's not the case (i.e. we receive an HTTP 400 Bad Request), we fallback to the
+# it's not the case (i.e. we receive an HTTP 400 Bad Request), we fall back to the
 # default API with a warning message. When that's the case, We remember the unsupported
 # attributes for this model in the `_UNSUPPORTED_TEXT_GENERATION_KWARGS` global variable.
 #

src/huggingface_hub/inference/_generated/_async_client.py

Lines changed: 3 additions & 3 deletions

@@ -181,7 +181,7 @@ def __init__(
            )
        token = token if token is not None else api_key
        if isinstance(token, bool):
-            # Legacy behavior: previously is was possible to pass `token=False` to disable authentication. This is not
+            # Legacy behavior: previously it was possible to pass `token=False` to disable authentication. This is not
            # supported anymore as authentication is required. Better to explicitly raise here rather than risking
            # sending the locally saved token without the user knowing about it.
            if token is False:
@@ -885,7 +885,7 @@ async def chat_completion(
        >>> messages = [
        ...     {
        ...         "role": "user",
-        ...         "content": "I saw a puppy a cat and a raccoon during my bike ride in the park. What did I saw and when?",
+        ...         "content": "I saw a puppy a cat and a raccoon during my bike ride in the park. What did I see and when?",
        ...     },
        ... ]
        >>> response_format = {
@@ -1460,7 +1460,7 @@ async def image_to_text(self, image: ContentT, *, model: Optional[str] = None) -
        Takes an input image and return text.

        Models can have very different outputs depending on your use case (image captioning, optical character recognition
-        (OCR), Pix2Struct, etc). Please have a look to the model card to learn more about a model's specificities.
+        (OCR), Pix2Struct, etc.). Please have a look to the model card to learn more about a model's specificities.

        Args:
            image (`Union[str, Path, bytes, BinaryIO, PIL.Image.Image]`):

0 commit comments