At the moment you can't use the Tibetan language tokenizer. It fails with this error:

```
TypeError: "module" object is not callable
```
The error is thrown here in sentence_split.py:

```python
elif split_algo == "bodnlp":
    logger.info(f" - Tibetan NLTK sentence splitter applied to '{lang}'")
    from botok.tokenizers import sentencetokenizer as bod_sent_tok
```
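The import binds the `sentencetokenizer` *module* itself to the name `bod_sent_tok`, so if the code later invokes `bod_sent_tok(...)` directly, Python raises exactly this error. A minimal stdlib-only reproduction of the same failure mode (using `json` as a stand-in for the imported module):

```python
import json  # a module object, just like bod_sent_tok after the import above

try:
    json("some text")  # calling a module object directly
except TypeError as e:
    # Python reports that the module is not callable
    print(e)  # → 'module' object is not callable
```

The likely fix is to import the tokenizer *function* from inside the module rather than the module itself (the exact function name depends on botok's API and should be checked against its source).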