Add dat1.co as an inference provider #1460


Status: Open. Wants to merge 23 commits into main from the dat1-provider branch.
Commits (23):
- 7b0797f added dat1.co as provider (ArsenyYankovsky, May 14, 2025)
- 0a857bc added dat1.co as provider (ArsenyYankovsky, May 18, 2025)
- 7c3bf1a added dat1.co as provider (ArsenyYankovsky, May 18, 2025)
- 31fa0aa added dat1.co as provider (ArsenyYankovsky, May 18, 2025)
- 163131c Merge branch 'main' into dat1-provider (ArsenyYankovsky, May 18, 2025)
- 3f71f00 Merge branch 'main' into dat1-provider (ArsenyYankovsky, May 19, 2025)
- ed87c6d Merge branch 'main' into dat1-provider (ArsenyYankovsky, May 20, 2025)
- 8f5627c Merge branch 'main' into dat1-provider (ArsenyYankovsky, May 22, 2025)
- 47ef5ba Merge branch 'main' into dat1-provider (ArsenyYankovsky, May 22, 2025)
- 65cb1fd Merge branch 'main' into dat1-provider (ArsenyYankovsky, May 22, 2025)
- bdb21eb Merge branch 'main' into dat1-provider (ArsenyYankovsky, May 23, 2025)
- 9688b5c Merge branch 'main' into dat1-provider (ArsenyYankovsky, May 23, 2025)
- f74ab2a Merge branch 'main' into dat1-provider (ArsenyYankovsky, May 26, 2025)
- 8d54940 Merge branch 'main' into dat1-provider (ArsenyYankovsky, May 26, 2025)
- 0a23d82 Merge branch 'main' into dat1-provider (ArsenyYankovsky, May 27, 2025)
- c278be8 Merge branch 'main' into dat1-provider (ArsenyYankovsky, May 28, 2025)
- b8b90c6 Merge branch 'main' into dat1-provider (ArsenyYankovsky, May 28, 2025)
- 1670908 Merge branch 'main' into dat1-provider (ArsenyYankovsky, May 29, 2025)
- d017c87 Merge branch 'main' into dat1-provider (ArsenyYankovsky, Jun 4, 2025)
- 3ae2105 Merge branch 'main' into dat1-provider (ArsenyYankovsky, Jun 7, 2025)
- ccb1957 Merge branch 'main' into dat1-provider (ArsenyYankovsky, Jun 10, 2025)
- cdf3681 Merge branch 'main' into dat1-provider (ArsenyYankovsky, Jun 12, 2025)
- b7f2523 Merge branch 'main' into dat1-provider (ArsenyYankovsky, Jun 23, 2025)
2 changes: 2 additions & 0 deletions packages/inference/README.md
@@ -63,6 +63,7 @@ Currently, we support the following providers:
- [Cohere](https://cohere.com)
- [Cerebras](https://cerebras.ai/)
- [Groq](https://groq.com)
- [Dat1](https://dat1.co)

To send requests to a third-party provider, you have to pass the `provider` parameter to the inference function. The default value of the `provider` parameter is "auto", which will select the first of the providers available for the model, sorted by your preferred order in https://hf.co/settings/inference-providers.

@@ -95,6 +96,7 @@ Only a subset of models are supported when requesting third-party providers. You
- [Together supported models](https://huggingface.co/api/partners/together/models)
- [Cohere supported models](https://huggingface.co/api/partners/cohere/models)
- [Cerebras supported models](https://huggingface.co/api/partners/cerebras/models)
- [Dat1 supported models](https://huggingface.co/api/partners/dat1/models)
- [Groq supported models](https://console.groq.com/docs/models)
- [Novita AI supported models](https://huggingface.co/api/partners/novita/models)

6 changes: 6 additions & 0 deletions packages/inference/src/lib/getProviderHelper.ts
@@ -47,9 +47,11 @@ import type {
import * as Replicate from "../providers/replicate.js";
import * as Sambanova from "../providers/sambanova.js";
import * as Together from "../providers/together.js";
import * as Dat1 from "../providers/dat1.js";
import type { InferenceProvider, InferenceProviderOrPolicy, InferenceTask } from "../types.js";
import { InferenceClientInputError } from "../errors.js";


export const PROVIDERS: Record<InferenceProvider, Partial<Record<InferenceTask, TaskProviderHelper>>> = {
	"black-forest-labs": {
		"text-to-image": new BlackForestLabs.BlackForestLabsTextToImageTask(),
@@ -60,6 +62,10 @@ export const PROVIDERS: Record<InferenceProvider, Partial<Record<InferenceTask,
	cohere: {
		conversational: new Cohere.CohereConversationalTask(),
	},
	dat1: {
		"text-to-image": new Dat1.Dat1TextToImageTask(),
		conversational: new Dat1.Dat1ConversationalTask(),
	},
	"fal-ai": {
		"text-to-image": new FalAI.FalAITextToImageTask(),
		"text-to-speech": new FalAI.FalAITextToSpeechTask(),
1 change: 1 addition & 0 deletions packages/inference/src/providers/consts.ts
@@ -21,6 +21,7 @@ export const HARDCODED_MODEL_INFERENCE_MAPPING: Record<
	"black-forest-labs": {},
	cerebras: {},
	cohere: {},
	dat1: {},
	"fal-ai": {},
	"featherless-ai": {},
	"fireworks-ai": {},
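For local development before a mapping is registered on huggingface.co, an entry can be hardcoded in this dictionary. A sketch of what a dat1 entry would look like, using the values from this PR's test file (treat them as examples, not canonical mappings):

```typescript
// Dev-only sketch of a hardcoded dat1 mapping entry, mirroring this PR's tests.
const dat1DevMapping = {
	"unsloth/Llama-3.2-3B-Instruct-GGUF": {
		hfModelId: "unsloth/Llama-3.2-3B-Instruct-GGUF",
		providerId: "unsloth-Llama-32-3B-Instruct-GGUF", // the ID dat1 knows the model by
		status: "live" as const,
		task: "conversational" as const,
	},
};
```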
81 changes: 81 additions & 0 deletions packages/inference/src/providers/dat1.ts
@@ -0,0 +1,81 @@
/**
 * See the registered mapping of HF model ID => Dat1 model ID here:
 *
 * https://huggingface.co/api/partners/dat1/models
 *
 * This is a publicly available mapping.
 *
 * If you want to try to run inference for a new model locally before it's registered on huggingface.co,
 * you can add it to the dictionary "HARDCODED_MODEL_ID_MAPPING" in consts.ts, for dev purposes.
 *
 * - If you work at Dat1 and want to update this mapping, please use the model mapping API we provide on huggingface.co
 * - If you're a community member and want to add a new supported HF model to Dat1, please open an issue on the present repo
 *   and we will tag Dat1 team members.
 *
 * Thanks!
 */
import { InferenceOutputError } from "../lib/InferenceOutputError.js";
import type { BodyParams } from "../types.js";
import { omit } from "../utils/omit.js";
import {
	BaseConversationalTask,
	TaskProviderHelper,
	type TextToImageTaskHelper,
} from "./providerHelper.js";

const DAT1_API_BASE_URL = "https://api.dat1.co/api/v1/hf";

interface Dat1Base64ImageGeneration {
	data: Array<{
		b64_json: string;
	}>;
}

export class Dat1ConversationalTask extends BaseConversationalTask {
	constructor() {
		super("dat1", DAT1_API_BASE_URL);
	}

	override makeRoute(): string {
		return "/chat/completions";
	}
}

export class Dat1TextToImageTask extends TaskProviderHelper implements TextToImageTaskHelper {
	constructor() {
		super("dat1", DAT1_API_BASE_URL);
	}

	override makeRoute(): string {
		return "/images/generations";
	}

	preparePayload(params: BodyParams): Record<string, unknown> {
		return {
			...omit(params.args, ["inputs", "parameters"]),
			...(params.args.parameters as Record<string, unknown>),
			prompt: params.args.inputs,
			response_format: "base64",
			model: params.model,
		};
	}

	async getResponse(response: Dat1Base64ImageGeneration, outputType?: "url" | "blob"): Promise<string | Blob> {
		if (
			typeof response === "object" &&
			"data" in response &&
			Array.isArray(response.data) &&
			response.data.length > 0 &&
			"b64_json" in response.data[0] &&
			typeof response.data[0].b64_json === "string"
		) {
			const base64Data = response.data[0].b64_json;
			if (outputType === "url") {
				return `data:image/jpeg;base64,${base64Data}`;
			}
			return fetch(`data:image/jpeg;base64,${base64Data}`).then((res) => res.blob());
		}

		throw new InferenceOutputError("Expected Dat1 text-to-image response format");
	}
}
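To illustrate what `preparePayload` above produces, here is a standalone sketch of the same omit-and-spread transformation (`omitKeys` and `buildDat1ImagePayload` are local stand-ins for the package's `omit` utility and the class method, named here only for illustration):

```typescript
// Stand-in for the package's omit utility: returns a copy without the given keys.
function omitKeys(obj: Record<string, unknown>, keys: string[]): Record<string, unknown> {
	return Object.fromEntries(Object.entries(obj).filter(([k]) => !keys.includes(k)));
}

// Mirrors Dat1TextToImageTask.preparePayload: "inputs" becomes "prompt",
// "parameters" are flattened to the top level, and the provider model ID is attached.
function buildDat1ImagePayload(args: Record<string, unknown>, model: string): Record<string, unknown> {
	return {
		...omitKeys(args, ["inputs", "parameters"]),
		...(args.parameters as Record<string, unknown>),
		prompt: args.inputs,
		response_format: "base64",
		model,
	};
}

const payload = buildDat1ImagePayload(
	{ inputs: "a giant tortoise", parameters: { num_inference_steps: 20 } },
	"Kwai-Kolors-Kolors"
);
// payload: { num_inference_steps: 20, prompt: "a giant tortoise",
//            response_format: "base64", model: "Kwai-Kolors-Kolors" }
```

Any extra top-level arguments the caller passes (a seed, for example) survive the `omitKeys` call and are forwarded to the dat1 endpoint unchanged.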
1 change: 1 addition & 0 deletions packages/inference/src/types.ts
@@ -40,6 +40,7 @@ export const INFERENCE_PROVIDERS = [
	"black-forest-labs",
	"cerebras",
	"cohere",
	"dat1",
	"fal-ai",
	"featherless-ai",
	"fireworks-ai",
59 changes: 59 additions & 0 deletions packages/inference/test/InferenceClient.spec.ts
@@ -1018,6 +1018,65 @@ describe.skip("InferenceClient", () => {
		TIMEOUT
	);

	describe.concurrent(
		"dat1",
		() => {
			const client = new InferenceClient(env.DAT1_KEY ?? "dummy");

			HARDCODED_MODEL_INFERENCE_MAPPING["dat1"] = {
				"unsloth/Llama-3.2-3B-Instruct-GGUF": {
					hfModelId: "unsloth/Llama-3.2-3B-Instruct-GGUF",
					providerId: "unsloth-Llama-32-3B-Instruct-GGUF",
					status: "live",
					task: "conversational",
				},
				"Kwai-Kolors/Kolors": {
					hfModelId: "Kwai-Kolors/Kolors",
					providerId: "Kwai-Kolors-Kolors",
					status: "live",
					task: "text-to-image",
				},
			};

			it("chatCompletion", async () => {
				const res = await client.chatCompletion({
					model: "unsloth/Llama-3.2-3B-Instruct-GGUF",
					provider: "dat1",
					messages: [{ role: "user", content: "Complete this sentence with words, one plus one is equal " }],
				});
				if (res.choices && res.choices.length > 0) {
					const completion = res.choices[0].message?.content;
					expect(completion).toContain("two");
				}
			});

			it("chatCompletion stream", async () => {
				const stream = client.chatCompletionStream({
					model: "unsloth/Llama-3.2-3B-Instruct-GGUF",
					provider: "dat1",
					messages: [{ role: "user", content: "Complete the equation 1 + 1 = , just the answer" }],
				}) as AsyncGenerator<ChatCompletionStreamOutput>;
				let out = "";
				for await (const chunk of stream) {
					if (chunk.choices && chunk.choices.length > 0) {
						out += chunk.choices[0].delta.content;
					}
				}
				expect(out).toContain("2");
			});

			it("textToImage", async () => {
				const res = await client.textToImage({
					model: "Kwai-Kolors/Kolors",
					provider: "dat1",
					inputs: "award winning high resolution photo of a giant tortoise",
				});
				expect(res).toBeInstanceOf(Blob);
			});
		},
		TIMEOUT
	);

/**
* Compatibility with third-party Inference Providers
*/