Commit e156f38

pesuchin, ryoji.nagata, and tomaarsen authored
[Fix] Resolve loading private Transformer model in version 3.3.0 (#3058)
* add: Add token and local_files_only to the find_adapter_config_file arguments.
* add: Add revision to the find_adapter_config_file arguments.
* For some reason, the part I didn't fix got fixed, so I put it back in.
* Set False as the default for local_files_only

---------

Co-authored-by: ryoji.nagata <[email protected]>
Co-authored-by: Tom Aarsen <[email protected]>
1 parent e28f97d commit e156f38

File tree: 1 file changed, +9 −1 lines changed


sentence_transformers/models/Transformer.py

Lines changed: 9 additions & 1 deletion
@@ -101,7 +101,15 @@ def __init__(
 
     def _load_config(self, model_name_or_path: str, cache_dir: str | None, backend: str, config_args: dict[str, Any]):
         """Loads the configuration of a model"""
-        if find_adapter_config_file(model_name_or_path) is not None:
+        if (
+            find_adapter_config_file(
+                model_name_or_path,
+                token=config_args.get("token"),
+                revision=config_args.get("revision"),
+                local_files_only=config_args.get("local_files_only", False),
+            )
+            is not None
+        ):
             if not is_peft_available():
                 raise Exception(
                     "Loading a PEFT model requires installing the `peft` package. You can install it via `pip install peft`."
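The effect of the change can be sketched in isolation. The stub below stands in for `transformers.utils.find_adapter_config_file` (its real behavior is to query the Hugging Face Hub; here it is a hypothetical mock that only "sees" a private repo when a token is supplied), and the model name and token value are made up for illustration. The point is that the fixed condition forwards `token`, `revision`, and `local_files_only` from `config_args` instead of dropping them:

```python
# Hypothetical stub for transformers.utils.find_adapter_config_file:
# pretend the private repo is only visible when a token is supplied.
def find_adapter_config_file(model_name_or_path, token=None, revision=None, local_files_only=False):
    private_models = {"org/private-model"}  # made-up repo name
    if model_name_or_path in private_models and token is None:
        return None  # unauthenticated lookup cannot see the private repo
    return f"{model_name_or_path}/adapter_config.json"


def has_adapter(model_name_or_path, config_args):
    # Mirrors the fixed condition: auth-related settings are read from
    # config_args, with local_files_only defaulting to False.
    return (
        find_adapter_config_file(
            model_name_or_path,
            token=config_args.get("token"),
            revision=config_args.get("revision"),
            local_files_only=config_args.get("local_files_only", False),
        )
        is not None
    )


print(has_adapter("org/private-model", {}))                   # False: no token forwarded
print(has_adapter("org/private-model", {"token": "hf_xxx"}))  # True: token reaches the lookup
```

Before the fix, `find_adapter_config_file(model_name_or_path)` was called with no keyword arguments, so a token passed through `config_args` never reached the Hub lookup and loading a private model failed.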
