
Releases: vercel/ai

ai@5.0.0

31 Jul 15:38
a5e92fe

Major Changes

  • e1cbf8a: chore(@ai-sdk/rsc): extract to separate package

  • a847c3e: chore: rename reasoning to reasoningText etc

  • 13fef90: chore (ai): remove automatic conversion of UI messages to model messages

  • d964901: remove setting temperature to 0 by default

    • remove null option from DefaultSettingsMiddleware
    • remove setting defaults for temperature and stopSequences in ai to enable middleware changes
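Since the SDK no longer sets temperature to 0 (or any stopSequences) on your behalf, callers who relied on deterministic output need to apply their own defaults. A minimal sketch, assuming nothing beyond plain TypeScript (`withDefaults` is a hypothetical helper, not an SDK export):

```typescript
// Hypothetical helper: apply explicit defaults now that the SDK no longer
// defaults temperature to 0 or fills in stopSequences for you.
type CallSettings = { temperature?: number; stopSequences?: string[] };

function withDefaults(
  settings: CallSettings,
  defaults: CallSettings,
): CallSettings {
  return {
    temperature: settings.temperature ?? defaults.temperature,
    stopSequences: settings.stopSequences ?? defaults.stopSequences,
  };
}

// Explicitly pin temperature to 0 where reproducibility matters:
const resolved = withDefaults({ stopSequences: ['END'] }, { temperature: 0 });
// resolved is { temperature: 0, stopSequences: ['END'] }
```
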
  • 0a710d8: feat (ui): typed tool parts in ui messages

  • 9ad0484: feat (ai): automatic tool execution error handling

  • 63f9e9b: chore (provider,ai): tools have input/output instead of args,result

  • ab7ccef: chore (ai): change source ui message parts to source-url

  • d5f588f: AI SDK 5

  • ec78cdc: chore (ai): remove "data" UIMessage role

  • 6a83f7d: refactoring (ai): restructure message metadata transfer

  • db345da: chore (ai): remove exports of internal ui functions

  • 496bbc1: chore (ui): inline/remove ChatRequest type

  • 72d7d72: chore (ai): stable activeTools

  • 40acf9b: feat (ui): introduce ChatStore and ChatTransport

  • 98f25e5: chore (ui): remove managed chat inputs

  • 2d03e19: chore (ai): remove StreamCallbacks.onCompletion

  • da70d79: chore (ai): remove getUIText helper

  • c60f895: chore (ai): remove useChat keepLastMessageOnError

  • 0560977: chore (ai): improve consistency of generate text result, stream text result, and step result

  • 9477ebb: chore (ui): remove useAssistant hook (breaking change)

  • 1f55c21: chore (ai): send reasoning to the client by default

  • e7dc6c7: chore (ai): remove onResponse callback

  • 8b86e99: chore (ai): replace Message with UIMessage

  • 04d5063: chore (ai): rename default provider global to AI_SDK_DEFAULT_PROVIDER

  • 319b989: chore (ai): remove content from ui messages

  • 14c9410: chore: refactor file towards source pattern (spec)

  • a34eb39: chore (ai): remove data and allowEmptySubmit from ChatRequestOptions

  • f04fb4a: chore (ai): replace useChat attachments with file ui parts

  • f7e8bf4: chore (ai): flatten ui message stream parts

  • 257224b: chore (ai): separate TextStreamChatTransport

  • fd1924b: chore (ai): remove redundant mimeType property

  • 2524fc7: chore (ai): remove ui message toolInvocations property

  • 6fba4c7: chore (ai): remove deprecated experimental_providerMetadata

  • b4b4bb2: chore (ui): rename experimental_resume to resumeStream

  • 441d042: chore (ui): data stream protocol v2 with SSEs

  • ef256ed: chore (ai): refactor and use chatstore in svelte

  • 516be5b: Move Image Model Settings into generate options

    Image Models no longer have settings. Instead, maxImagesPerCall can be passed directly to generateImage(). All other image settings can be passed to providerOptions[provider].

    Before

    await generateImage({
      model: luma.image('photon-flash-1', {
        maxImagesPerCall: 5,
        pollIntervalMillis: 500,
      }),
      prompt,
      n: 10,
    });

    After

    await generateImage({
      model: luma.image('photon-flash-1'),
      prompt,
      n: 10,
      maxImagesPerCall: 5,
      providerOptions: {
        luma: { pollIntervalMillis: 5 },
      },
    });

    Pull Request: #6180

  • a662dea: chore (ai): remove sendExtraMessageFields

  • d884051: feat (ai): simplify default provider setup

  • e8324c5: feat (ai): add args callbacks to tools

  • fafc3f2: chore (ai): change file to parts to use urls instead of data

  • 1ed0287: chore (ai): stable sendStart/sendFinish options

  • c7710a9: chore (ai): rename DataStreamToSSETransformStream to JsonToSseTransformStream

  • bfbfc4c: feat (ai): streamText/generateText: totalUsage contains usage for all steps. usage is for a single step.
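With this split, reproducing the old aggregate means summing per-step usage. A self-contained sketch of that aggregation, assuming the v5 token-count field names (inputTokens/outputTokens/totalTokens):

```typescript
// Sum per-step usage into the across-steps number that totalUsage reports.
// Field names assume the v5 usage shape; adjust if your version differs.
type Usage = { inputTokens: number; outputTokens: number; totalTokens: number };

function sumUsage(stepUsages: Usage[]): Usage {
  return stepUsages.reduce(
    (acc, u) => ({
      inputTokens: acc.inputTokens + u.inputTokens,
      outputTokens: acc.outputTokens + u.outputTokens,
      totalTokens: acc.totalTokens + u.totalTokens,
    }),
    { inputTokens: 0, outputTokens: 0, totalTokens: 0 },
  );
}
```

In other words, read `usage` for a single step and `totalUsage` when you bill or log a whole multi-step run.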

  • 9ae327d: chore (ui): replace chat store concept with chat instances

  • 9315076: chore (ai): rename continueUntil to stopWhen. Rename maxSteps stop condition to stepCountIs.
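The renamed API reads as a predicate over the steps taken so far. A self-contained sketch of the stepCountIs shape (a stand-in for the SDK helper of the same name, which is imported from 'ai'; the StepInfo type here is an assumption):

```typescript
// Stand-in for the SDK's stepCountIs stop condition: stop once the number
// of completed steps reaches n.
type StepInfo = { steps: unknown[] };
type StopCondition = (info: StepInfo) => boolean;

const stepCountIs =
  (n: number): StopCondition =>
  ({ steps }) =>
    steps.length >= n;

const stop = stepCountIs(3);
stop({ steps: ['a', 'b'] });      // false: keep looping
stop({ steps: ['a', 'b', 'c'] }); // true: stop after the third step
```
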

  • 247ee0c: chore (ai): remove steps from tool invocation ui parts

  • 109c0ac: chore (ai): rename id to chatId (in post request, resume request, and useChat)

  • 954aa73: feat (ui): extended regenerate support

  • 33eb499: feat (ai): inject message id in createUIMessageStream

  • 901df02: feat (ui): use UI_MESSAGE generic

  • 4892798: chore (ai): always stream tool calls

  • c25cbce: feat (ai): use console.error as default error handler for streamText and streamObject

  • b33ed7a: chore (ai): rename DataStream_ to UIMessage_

  • ed675de: feat (ai): add ui data parts

  • 7bb58d4: chore (ai): restructure prepareRequest

  • ea7a7c9: feat (ui): UI message metadata

  • 0463011: fix (ai): update source url stream part

  • dcc549b:

    • remove StreamTextResult.mergeIntoDataStream method
    • rename DataStreamOptions.getErrorMessage to onError
    • add pipeTextStreamToResponse function
    • add createTextStreamResponse function
    • change createDataStreamResponse function to accept a DataStream and not a DataStreamWriter
    • change pipeDataStreamToResponse function to accept a DataStream and not a DataStreamWriter
    • change pipeDataStreamToResponse function to have a single parameter

  • 35fc02c: chore (ui): rename RequestOptions to CompletionRequestOptions

  • 64f6d64: feat (ai): replace maxSteps with continueUntil (generateText)

  • 175b868: chore (ai): rename reasoning UI parts 'reasoning' property to 'text'

  • 60e2c56: feat (ai): restructure chat transports

  • 765f1cd: chore (ai): remove deprecated useChat isLoading helper

  • cb2b53a: chore (ai): refactor header preparation

  • e244a78: chore (ai): remove StreamData and mergeStreams

  • d306260: feat (ai): replace maxSteps with continueUntil (streamText)

  • 4bfe9ec: chore (ai): remove ui message reasoning property

  • 1766ede: chore: rename maxTokens to maxOutputTokens
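For call sites still passing the old option, a hypothetical migration shim (`migrateOptions` is illustrative, not an SDK export):

```typescript
// Map the removed maxTokens option onto its v5 replacement, maxOutputTokens.
type LegacyOptions = { maxTokens?: number } & Record<string, unknown>;

function migrateOptions({ maxTokens, ...rest }: LegacyOptions) {
  return maxTokens === undefined ? rest : { ...rest, maxOutputTokens: maxTokens };
}

migrateOptions({ prompt: 'hi', maxTokens: 1024 });
// → { prompt: 'hi', maxOutputTokens: 1024 }
```
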

  • 2877a74: chore (ai): remove ui message data property

  • 1409e13: chore (ai): remove experimental continueSteps

  • b32e192: chore (ai): rename reasoning to reasoningText, rename reasoningDetails to reasoning (streamText, generateText)

  • 92cb0a2: chore (ai): rename CoreMessage to ModelMessage

  • 2b637d6: chore (ai): rename UIMessageStreamPart to UIMessageChunk

Minor Changes

  • b7eae2d: feat (core): Add finishReason field to NoObjectGeneratedError
  • bcea599: feat (ai): add content to generateText result
  • 48d675a: feat (ai): add content to streamText result
  • c9ad635: feat (ai): add filename to file ui parts

Patch Changes

  • a571d6e: chore(provider-utils): move ToolResultContent to provider-utils

  • de2d2ab: feat(ai): add provider and provider registry middleware functionality

  • c22ad54: feat(smooth-stream): chunking callbacks

  • d88455d: feat (ai): expose http chat transport type

  • e7fcc86: feat (ai): introduce dynamic tools

  • da1e6f0: feat (ui): add generics to ui message stream parts

  • 48378b9: fix (ai): send null as tool output when tools return undefined

  • 5d1e3ba: chore (ai): remove provider re-exports

  • 93d53a1: chore (ai): remove cli

  • e90d45d: chore (rsc): move HANGING_STREAM_WARNING_TIME constant into @ai-sdk/rsc package

  • b32c141: feat (ai): add array support to stopWhen
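Array support means the loop stops as soon as any listed condition fires. A self-contained sketch of that dispatch (the condition and info types are assumptions mirroring the SDK's shape, not its actual internals):

```typescript
// stopWhen can be a single condition or an array; stop when ANY is met.
type StopCondition<INFO> = (info: INFO) => boolean;

function shouldStop<INFO>(
  stopWhen: StopCondition<INFO> | StopCondition<INFO>[],
  info: INFO,
): boolean {
  const conditions = Array.isArray(stopWhen) ? stopWhen : [stopWhen];
  return conditions.some(condition => condition(info));
}

// e.g. stop after 5 steps OR once a particular tool has been called:
type Info = { stepCount: number; toolNames: string[] };
const stopConditions: StopCondition<Info>[] = [
  ({ stepCount }) => stepCount >= 5,
  ({ toolNames }) => toolNames.includes('finalAnswer'),
];
shouldStop(stopConditions, { stepCount: 2, toolNames: ['finalAnswer'] }); // true
```
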

  • bc3109f: chore (ai): push stream-callbacks into langchain/llamaindex adapters

  • 0d9583c: fix (ai): use user-provided media type when available

  • 38ae5cc: feat (ai): export InferUIMessageChunk type

  • 10b21eb: feat(cli): add ai command line interface

  • 9e40cbe: Allow destructuring output and errorText on ToolUIPart type

  • 6909543: feat (ai): support system parameter in Agent constructor

  • 86cfc72: feat (ai): add ignoreIncompleteToolCalls option to convertToModelMessages

  • 377bbcf: fix (ui): tool input can be undefined during input-streaming

  • d8aeaef: feat(providers/fal): add transcribe

  • ae77a99: chore (ai): rename text and reasoning chunks in streamText fullstream

  • 4fef487: feat: support for zod v4 for schema validation

    All these methods now accept both zod v4 and zod v3 schemas for validation:

    • generateObject()
    • streamObject()
    • generateText()
    • experimental_useObject() from @ai-sdk/react
    • streamUI() from @ai-sdk/rsc
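Accepting both versions is workable because zod v3 and v4 schemas share the safeParse surface. A duck-typed sketch of the idea (this mirrors the concept, not the SDK's actual internals):

```typescript
// Duck-typed validation against any schema exposing safeParse, which both
// zod v3 and zod v4 schemas do.
type ZodLike<T> = {
  safeParse: (value: unknown) => { success: boolean; data?: T };
};

function isValid<T>(schema: ZodLike<T>, value: unknown): boolean {
  return schema.safeParse(value).success;
}

// Minimal stand-in schema so the sketch runs without zod installed:
const numberSchema: ZodLike<number> = {
  safeParse: v =>
    typeof v === 'number' ? { success: true, data: v } : { success: false },
};

isValid(numberSchema, 42);     // true
isValid(numberSchema, 'nope'); // false
```
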
  • b1e3abd: feat (ai): expose ui message stream headers

  • 4f3e637: fix (ui): avoid caching globalThis.fetch in case it is patched by other libraries

  • 14cb3be: chore(providers/llamaindex): extract to separate package

  • 1f6ce57: feat (ai): infer tool call types in the onToolCall callback

  • 16ccfb2: feat (ai): add readUIMessageStream helper

  • 225f087: fix (ai/mcp): prevent mutation of customEnv

  • ce1d1f3: feat (ai): export mock image, speech, and transcription models

  • fc0380b: feat (ui): resolvable header, body, credentials in http chat transport

  • 6622441: feat (ai): add static/dynamic toolCalls/toolResults helpers

  • 4048ce3: fix (ai): add tests and examples for openai responses

  • 6c42e56: feat (ai): validate ui stream data chunks

  • bedb239: chore (ai): make ui stream parts value optional when it's not required

  • 9b4d074: feat(streamObject): add enum support

  • c8fce91: feat (ai): add experimental Agent abstraction

  • 655cf3c: feat (ui): add onFinish to createUIMessageStream

  • 3e10408: fix(utils/detect-mimetype): add support for detecting id3 tags

  • d5ae088: feat (ui): add sendAutomaticallyWhen to Chat

  • ced8eee: feat(ai): re-export zodSchema from main package

  • c040e2f: fix (ui): inject generated response message id

  • d3960e3: make selectTelemetryAttributes more robust

  • faea29f: fix (provider/openai): multi-step reasoning with text

  • 66af894: fix (ai): respect content order in toResponseMessages

  • 332167b: chore (ai): move maxSteps into UseChatOptions

  • 6b1c55c: feat (ai): introduce GLOBAL_DEFAULT_PROVIDER

  • 5a975a4: feat (ui): update Chat tool result submission

  • 507ac1d: fix (ui/react): update messag...


@ai-sdk/[email protected]

31 Jul 15:40
a5e92fe

Major Changes

  • d5f588f: AI SDK 5

  • 516be5b: Move Image Model Settings into generate options

    Image Models no longer have settings. Instead, maxImagesPerCall can be passed directly to generateImage(). All other image settings can be passed to providerOptions[provider].

    Before

    await generateImage({
      model: luma.image('photon-flash-1', {
        maxImagesPerCall: 5,
        pollIntervalMillis: 500,
      }),
      prompt,
      n: 10,
    });

    After

    await generateImage({
      model: luma.image('photon-flash-1'),
      prompt,
      n: 10,
      maxImagesPerCall: 5,
      providerOptions: {
        luma: { pollIntervalMillis: 5 },
      },
    });

    Pull Request: #6180


@ai-sdk/[email protected]

31 Jul 15:39
a5e92fe

Major Changes

  • 0a710d8: feat (ui): typed tool parts in ui messages
  • d5f588f: AI SDK 5
  • 40acf9b: feat (ui): introduce ChatStore and ChatTransport
  • 98f25e5: chore (ui): remove managed chat inputs
  • 9477ebb: chore (ui): remove useAssistant hook (breaking change)
  • 901df02: feat (ui): use UI_MESSAGE generic
  • 98f25e5: chore (ui/vue): replace useChat with new Chat
  • 8cbbad6: chore (ai): refactor and use chatstore in vue


@ai-sdk/[email protected]

31 Jul 15:40
a5e92fe


@ai-sdk/[email protected]

31 Jul 15:39
a5e92fe


@ai-sdk/[email protected]

31 Jul 15:39
a5e92fe

Major Changes

  • d5f588f: AI SDK 5

  • 516be5b: Move Image Model Settings into generate options

    Image Models no longer have settings. Instead, maxImagesPerCall can be passed directly to generateImage(). All other image settings can be passed to providerOptions[provider].

    Before

    await generateImage({
      model: luma.image('photon-flash-1', {
        maxImagesPerCall: 5,
        pollIntervalMillis: 500,
      }),
      prompt,
      n: 10,
    });

    After

    await generateImage({
      model: luma.image('photon-flash-1'),
      prompt,
      n: 10,
      maxImagesPerCall: 5,
      providerOptions: {
        luma: { pollIntervalMillis: 5 },
      },
    });

    Pull Request: #6180


@ai-sdk/[email protected]

31 Jul 15:40
a5e92fe

Major Changes

  • 0a710d8: feat (ui): typed tool parts in ui messages
  • d5f588f: AI SDK 5
  • 496bbc1: chore (ui): inline/remove ChatRequest type
  • 40acf9b: feat (ui): introduce ChatStore and ChatTransport
  • 98f25e5: chore (ui): remove managed chat inputs
  • 901df02: feat (ui): use UI_MESSAGE generic


@ai-sdk/[email protected]

31 Jul 15:39
a5e92fe

Major Changes

  • e1cbf8a: chore(@ai-sdk/rsc): extract to separate package


@ai-sdk/[email protected]

31 Jul 15:39
a5e92fe


@ai-sdk/[email protected]

31 Jul 15:39
a5e92fe

Major Changes

  • d5f588f: AI SDK 5

  • 516be5b: Move Image Model Settings into generate options

    Image Models no longer have settings. Instead, maxImagesPerCall can be passed directly to generateImage(). All other image settings can be passed to providerOptions[provider].

    Before

    await generateImage({
      model: luma.image('photon-flash-1', {
        maxImagesPerCall: 5,
        pollIntervalMillis: 500,
      }),
      prompt,
      n: 10,
    });

    After

    await generateImage({
      model: luma.image('photon-flash-1'),
      prompt,
      n: 10,
      maxImagesPerCall: 5,
      providerOptions: {
        luma: { pollIntervalMillis: 5 },
      },
    });

    Pull Request: #6180
