
Conversation

Bainainai

Some of web-llm's client-facing features require tokenizing text and decoding tokens to be used effectively. The tokenizer is already loaded for web-llm's internal functionality and can be made available to clients. Reusing that already-loaded tokenizer avoids loading a second copy, and it gives clients immediate, reliable access to a tokenizer for any model web-llm supports, rather than requiring them to manage per-model tokenizers themselves.
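A minimal sketch of what sharing the engine's tokenizer could look like. The `getTokenizer` accessor, the `Tokenizer` interface, and the toy whitespace tokenizer below are all hypothetical illustrations of the proposal, not web-llm's actual API.

```typescript
// Hypothetical interface for the proposed feature: the engine re-exposes
// the tokenizer it already loaded, so clients never load a second copy.
interface Tokenizer {
  encode(text: string): number[];
  decode(ids: number[]): string;
}

// Toy whitespace tokenizer standing in for the model's real tokenizer.
class ToyTokenizer implements Tokenizer {
  private vocab = new Map<string, number>();
  private rev = new Map<number, string>();

  encode(text: string): number[] {
    return text.split(/\s+/).filter(Boolean).map((word) => {
      if (!this.vocab.has(word)) {
        const id = this.vocab.size;
        this.vocab.set(word, id);
        this.rev.set(id, word);
      }
      return this.vocab.get(word)!;
    });
  }

  decode(ids: number[]): string {
    return ids.map((id) => this.rev.get(id) ?? "<unk>").join(" ");
  }
}

// Engine wrapper that hands clients its internal tokenizer instance.
class Engine {
  private tokenizer: Tokenizer = new ToyTokenizer();

  getTokenizer(): Tokenizer {
    return this.tokenizer; // same instance used internally, no second load
  }
}

const engine = new Engine();
const tok = engine.getTokenizer();
const ids = tok.encode("hello world hello");
console.log(ids, tok.decode(ids)); // [0, 1, 0] "hello world hello"
```

Because the client receives the same instance the engine uses internally, token counts and decoded strings are guaranteed to match the model's own tokenization for whichever model is loaded.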

@Neet-Nestor force-pushed the main branch 2 times, most recently from b10566f to 90520c3 on December 9, 2024 05:00