Conversation

@pcuenca (Member) commented Jan 30, 2025

Pending stuff / edge cases / annoying issues

  • Fix strict concurrency errors and warnings, and verify the fixes still work on Swift 5.9.
  • When compiling the package in Xcode, we get availability warnings because Tokenizers is built without the trait. It works from the CLI with swift build --traits ChatTemplates. I don't know if there's a workaround for Xcode.
  • Testing builds Tokenizers without the trait, so the chat template tests don't pass. It works in the command-line with swift test --traits ChatTemplates (or swift test --filter TokenizersTests.ChatTemplateTests --traits ChatTemplates to run just the chat template tests).

This builds on top of #166 by @greenrazer.

Two new top-level library products are exposed:

  • Hub
  • Tokenizers

(Transformers still exists and comprises everything in the package, including tensor ops and Core ML inference.)

The Tokenizers library is heavy when chat templates are used, because they require a Jinja template engine and swift-collections. Thanks to @mattt, we can easily opt in to this feature using package traits, which require Swift 6.1. We attempted another solution that was compatible with previous versions of Swift, but we found it too unreliable.
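As a side note on how traits behave inside the package: an enabled trait is also exposed to the package's own sources as a compilation condition with the trait's name, so heavy code paths can be compiled out when the trait is off. A minimal sketch (the function is illustrative, not part of the package):

```swift
// Inside the package's sources, the enabled trait becomes a compilation
// condition of the same name, so template-only code can be gated on it.
public func chatTemplatesEnabled() -> Bool {
    #if ChatTemplates
    return true   // the trait was enabled by the consumer's manifest
    #else
    return false  // default build: no Jinja, no swift-collections
    #endif
}
```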

How to use Tokenizers.

  • On Swift < 6.1, chat templates (and the corresponding dependencies) are always available (a version-specific package manifest applies).
  • On Swift ≥ 6.1:
    • Without chat templates, declare the dependency as usual:
      dependencies: [
          .package(url: "https://github.com/huggingface/swift-transformers.git", branch: "hub-tokenizers-templates"),
      ],
    • Opt in to chat templates using the ChatTemplates trait:
      dependencies: [
          .package(
              url: "https://github.com/huggingface/swift-transformers.git",
              branch: "hub-tokenizers-templates",
              traits: ["ChatTemplates"],
          ),
      ],
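With either declaration in place, consumption might look like the following sketch (the API signatures and the model id are illustrative and may differ across swift-transformers versions):

```swift
import Tokenizers

// Sketch: load a tokenizer from the Hub and render a chat template.
// Requires the ChatTemplates trait (or Swift < 6.1, where templates are
// always available). Model id and signatures are examples, not guarantees.
let tokenizer = try await AutoTokenizer.from(pretrained: "mlx-community/Mistral-7B-Instruct-v0.3-4bit")
let messages: [[String: String]] = [
    ["role": "user", "content": "Hello!"]
]
// Renders the model's Jinja chat template and tokenizes the result.
let inputIds = try tokenizer.applyChatTemplate(messages: messages)
```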

Previous discussion for reference (no longer applies)

The goal is to be able to opt in to the chat template feature, which carries the Jinja dependency, which in turn depends on swift-collections and more. It works in its current state, but it's ugly: I found far more cross-module quirks than I was expecting. I wanted to make this as easy as possible for consumers and allow them to use either version (with or without templates) from their SPM manifests.

Opinions, corrections, and alternative ideas are encouraged and most welcome!


How to use:

  • If you want to use the core Tokenizers functionality, without the chat templates:

Add the following dependency to your target, as usual (except Tokenizers is now a product, so there's no need to use the full Transformers lib). This applies to projects such as WhisperKit.

            dependencies: [
                .product(name: "Tokenizers", package: "swift-transformers"),
            ],
  • To opt in to using chat templates:

This applies to projects such as mlx-swift-examples.

            dependencies: [
                .product(name: "Tokenizers", package: "swift-transformers"),
+               .product(name: "TokenizersTemplates", package: "swift-transformers"),
            ],

That's it: you don't need to import TokenizersTemplates or do anything other than declare it as a dependency.


Known issues:

  • I haven't looked at the tests yet; they probably won't compile.
  • I know there are conflicts because of last night's tools PR. Let's agree on the general direction before addressing them.

try applyChatTemplate(messages: messages, chatTemplate: .literal(chatTemplate), addGenerationPrompt: true, truncation: false, maxLength: nil, tools: nil)
}
}

pcuenca (Member Author):
See comment in TokenizersTemplates

]

open class PreTrainedTokenizerWithTemplates : PreTrainedTokenizer {
// I don't know why these need to be here. They are implemented in the protocol, **and** in the superclass.
pcuenca (Member Author):
Yes, if these overrides don't exist, the linker can't find the implementations.

import Foundation
import Hub

@_exported import TokenizersCore
pcuenca (Member Author):
This is a good chunk of the magic. The new Tokenizers implementation is just this wrapper file, which exposes the imported TokenizersCore types.

Comment on lines 6 to 9
#if canImport(TokenizersTemplates)
import TokenizersTemplates
public typealias PreTrainedTokenizer = PreTrainedTokenizerWithTemplates
#endif
pcuenca (Member Author):
So if TokenizersTemplates is available (because users have declared it as a dependency), we override the definition so the factory below uses the subclass.
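The mechanism can be modeled in a standalone file. This sketch adds an explicit #else fallback for self-containment and is not the actual wrapper code; the names mirror the PR for illustration only:

```swift
// Standalone model of the conditional-typealias trick.
open class PreTrainedTokenizerCore {
    public init() {}
}

#if canImport(TokenizersTemplates)
// When the optional module is linked, redirect the alias to the subclass.
import TokenizersTemplates
public typealias PreTrainedTokenizer = PreTrainedTokenizerWithTemplates
#else
public typealias PreTrainedTokenizer = PreTrainedTokenizerCore
#endif

// The factory only names the typealias, so it transparently produces the
// template-capable subclass whenever TokenizersTemplates is available.
public func makeTokenizer() -> PreTrainedTokenizer {
    PreTrainedTokenizer()
}
```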

}

// See https://github.com/xenova/transformers.js/blob/1a9964fb09b8f54fcbeac46dc6aae8d76795809d/src/tokenizers.js#L3203 for these exceptions
class LlamaPreTrainedTokenizer: PreTrainedTokenizer {
pcuenca (Member Author):
This could be moved back to TokenizersCore, but then we'd need a typealias here as well (and a subclass).

@pcuenca (Member Author) commented Jan 30, 2025

cc @greenrazer @FL33TW00D @Vaibhavs10 for opinions and feedback.

@greenrazer (Collaborator) commented:
I like this solution. However, I cannot get the canImport approach in TokenizersWrapper to work for the tests, due to how Swift Package Manager handles module dependencies and conditional imports within the same package. All the tests work great with almost no modification (besides changing Tokenizers to TokenizersCore), except for ChatTemplatesTests.swift.

I've tried a few different options:

  • The best option: Just adding TokenizersTemplates as a dependency to the test, similar to how you would for an external package. This doesn’t work because it doesn’t trigger recompilation for the test.
  • Adding swiftSettings: [.define("USE_TEMPLATES")] to the test and then adding another build condition to TokenizersWrapper. Same issue as above—it doesn’t trigger recompilation for the test.
  • Creating two targets pointing to the same TokenizersWrapper source file, with one including USE_TEMPLATES. This doesn’t work because you can't have two targets referencing the same source file.
  • Adding .target(name: "TokenizersTemplates", condition: .when(platforms: nil)) as a dependency to hopefully run only during testing. This doesn’t work in newer Swift versions.
  • Adding swiftSettings: [.define("ENABLE_TEMPLATES", .when(configuration: .debug))] to the Tokenizers target. This almost works, but TokenizersTemplates must then be a dependency of Tokenizers, which defeats the whole purpose.

Since users must explicitly import TokenizersTemplates anyway, I think the best approach is to create two wrappers and two targets. That way, users only need to import either TokenizersTemplates or Tokenizers, depending on their needs.
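In manifest terms, that two-wrapper layout might look roughly like this (a sketch: the target names and the "Jinja" dependency reference are assumptions, and unrelated targets are omitted):

```swift
// swift-tools-version:5.9
import PackageDescription

// Sketch of the proposed layout: two products, each wrapping the shared
// core, so consumers pick exactly one depending on their needs.
let package = Package(
    name: "swift-transformers",
    products: [
        // Core tokenization only: no template engine, no swift-collections.
        .library(name: "Tokenizers", targets: ["Tokenizers"]),
        // Everything in Tokenizers plus chat-template support.
        .library(name: "TokenizersTemplates", targets: ["TokenizersTemplates"]),
    ],
    targets: [
        .target(name: "TokenizersCore"),
        .target(name: "Tokenizers", dependencies: ["TokenizersCore"]),
        // "Jinja" is a placeholder for the template-engine dependency.
        .target(name: "TokenizersTemplates", dependencies: ["TokenizersCore", "Jinja"]),
    ]
)
```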

pcuenca and others added 10 commits August 16, 2025 10:07
I found inconsistencies in SPM resolution when using Xcode vs the
command line, or when running unit tests, or when using local vs
remote dependencies. The auto-magical wrapper was a bit too fragile,
or I don't know how to make it work consistently.

The gist of the problem is that canImport checks run at compile time, and different modules may or may not be available depending on compilation order.
@mattt force-pushed the hub-tokenizers-templates branch from 7996322 to 8c92b4e on August 23, 2025 10:23
@pcuenca (Member Author) commented Aug 27, 2025

@codex review

@chatgpt-codex-connector (bot) left a comment:

Codex Review: Here are some suggestions.


Comment on lines 274 to 282:

    ) throws -> [Int] {
        if additionalContext == nil {
            try applyChatTemplate(
-               messages: messages, chatTemplate: chatTemplate, addGenerationPrompt: addGenerationPrompt, truncation: truncation, maxLength: maxLength,
+               messages: messages, chatTemplate: chatTemplate,
+               addGenerationPrompt: addGenerationPrompt, truncation: truncation,
+               maxLength: maxLength,
                tools: tools
            )
        } else {

[P0] Missing return in default chat-template overload

The fallback implementation of Tokenizer.applyChatTemplate(..., additionalContext:) calls the six-parameter overload but never returns its result. Because neither branch returns a value, the compiler reports “Missing return in a function expected to return ‘[Int]’” and the module will not build. This branch should return the value from the inner call so the default implementation is usable.
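The issue can be modeled in a standalone file; in this sketch the overload is abbreviated to one parameter, and the names mirror the PR for illustration only:

```swift
// Minimal model of the P0: a default implementation that forwards to a
// narrower overload. Without `return`, neither branch yields a value and
// the compiler rejects the function.
protocol Tokenizer {
    func applyChatTemplate(messages: [[String: String]]) throws -> [Int]
}

extension Tokenizer {
    func applyChatTemplate(
        messages: [[String: String]],
        additionalContext: [String: Any]?
    ) throws -> [Int] {
        if additionalContext == nil {
            // The fix: RETURN the inner call's result (the real overload
            // takes six parameters; abbreviated here to one).
            return try applyChatTemplate(messages: messages)
        } else {
            // The real implementation handles the extra context instead.
            return []
        }
    }
}
```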


Comment on lines 38 to 41:

    .executableTarget(
        name: "TransformersCLI",
        dependencies: [
-           "Models", "Generation", "Tokenizers",
-           .product(name: "ArgumentParser", package: "swift-argument-parser"),
+           "Models", .product(name: "ArgumentParser", package: "swift-argument-parser"),

[P0] TransformersCLI target omits Generation dependency

The new TransformersCLI.swift file imports Generation, but the TransformersCLI executable target now only depends on Models and ArgumentParser. Because Generation is not listed, a build will fail with No such module 'Generation', preventing the CLI from compiling. The target still needs a direct dependency on Generation (and any other modules it imports).
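The corrected target would then list every imported module explicitly; a sketch (restoring Generation alongside Models, other modules the CLI imports would be added the same way):

```swift
// Sketch of the fixed declaration inside Package.swift.
.executableTarget(
    name: "TransformersCLI",
    dependencies: [
        "Models", "Generation",
        .product(name: "ArgumentParser", package: "swift-argument-parser"),
    ]
),
```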

