Commit 0a7cdf0 (parent: f753a31)

Updates enums to support the latest Perplexity api models (#1)

4 files changed: +28 additions, -17 deletions

LICENSE

Lines changed: 1 addition & 1 deletion

@@ -1,6 +1,6 @@
 MIT License
 
-Copyright (c) 2024 Josh grenon
+Copyright (c) 2025 Josh Grenon
 
 Permission is hereby granted, free of charge, to any person obtaining a copy
 of this software and associated documentation files (the "Software"), to deal

README.md

Lines changed: 17 additions & 8 deletions

@@ -33,7 +33,7 @@ let messages = [Message(role: "user", content: "What is the capital of France?")
 
 // Make a chat completion request
 do {
-    let response = try await api.chatCompletion(messages: messages, model: .sonarLarge)
+    let response = try await api.chatCompletion(messages: messages, model: .sonar)
     print(response.choices.first?.message.content ?? "No response")
 } catch {
     print("Error: \(error)")
@@ -46,13 +46,15 @@ do {
 
 The framework supports various Perplexity AI models through the `PerplexityModel` enum:
 
-- `.sonarSmallOnline`: "llama-3.1-sonar-small-128k-online"
-- `.sonarLargeOnline`: "llama-3.1-sonar-large-128k-online"
-- `.sonarHugeOnline`: "llama-3.1-sonar-huge-128k-online"
-- `.sonarSmallChat`: "llama-3.1-sonar-small-128k-chat"
-- `.sonarLargeChat`: "llama-3.1-sonar-large-128k-chat"
-- `.llama8bInstruct`: "llama-3.1-8b-instruct"
-- `.llama70bInstruct`: "llama-3.1-70b-instruct"
+### Research and Reasoning Models
+- `.sonarDeepResearch`: Advanced research model with 128K context length
+- `.sonarReasoningPro`: Enhanced reasoning model with 128K context length
+- `.sonarReasoning`: Base reasoning model with 128K context length
+
+### General Purpose Models
+- `.sonarPro`: Professional model with 200K context length
+- `.sonar`: Standard model with 128K context length
+- `.r1_1776`: Base model with 128K context length
 
 ## Error Handling
 
@@ -62,6 +64,13 @@ PerplexityApiSwift defines a `PerplexityError` enum for common errors:
 - `.invalidResponse(statusCode:)`: The API returned an invalid response with the given status code
 - `.invalidResponseFormat`: The API response could not be decoded
 
+## Upcoming Features
+
+The following features are planned for future releases:
+
+- **Structured Outputs**: Support for receiving structured, typed responses from the API
+- **Streaming Response**: Real-time streaming of model responses for improved user experience
+
 ## Documentation
 
 For more detailed information about the Perplexity AI API, please refer to the official documentation:

Sources/PerplexityApiSwift/PerplexityApiSwift.swift

Lines changed: 1 addition & 1 deletion

@@ -9,7 +9,7 @@ public class PerplexityApiSwift {
         self.bearerToken = token
     }
 
-    public func chatCompletion(messages: [Message], model: PerplexityModel = .sonarLargeOnline) async throws -> PerplexityResponse {
+    public func chatCompletion(messages: [Message], model: PerplexityModel = .sonar) async throws -> PerplexityResponse {
         guard let bearerToken = bearerToken else {
            throw PerplexityError.tokenNotSet
        }
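The change above swaps the default value of `chatCompletion`'s `model` parameter from the removed `.sonarLargeOnline` to `.sonar`, so existing call sites that omit the argument silently migrate to the new model. A minimal standalone sketch of that default-parameter behavior (the enum is trimmed to two cases and `chosenModel` is a hypothetical stub that returns the selected identifier instead of making a network call):

```swift
// Trimmed copy of the enum, for illustration only.
enum PerplexityModel: String {
    case sonarPro = "sonar-pro"
    case sonar = "sonar"
}

// Hypothetical stub mirroring the changed signature: omitting `model`
// now resolves to .sonar rather than the removed .sonarLargeOnline.
func chosenModel(model: PerplexityModel = .sonar) -> String {
    return model.rawValue
}

print(chosenModel())                 // "sonar" (new default)
print(chosenModel(model: .sonarPro)) // "sonar-pro" (explicit override)
```

Because the default lives in the function signature, callers pick up the new model at compile time with no source changes on their side.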

Sources/PerplexityApiSwift/PerplexityModels.swift

Lines changed: 9 additions & 7 deletions

@@ -1,13 +1,15 @@
 import Foundation
 
 public enum PerplexityModel: String {
-    case sonarSmallOnline = "llama-3.1-sonar-small-128k-online"
-    case sonarLargeOnline = "llama-3.1-sonar-large-128k-online"
-    case sonarHugeOnline = "llama-3.1-sonar-huge-128k-online"
-    case sonarSmallChat = "llama-3.1-sonar-small-128k-chat"
-    case sonarLargeChat = "llama-3.1-sonar-large-128k-chat"
-    case llama8bInstruct = "llama-3.1-8b-instruct"
-    case llama70bInstruct = "llama-3.1-70b-instruct"
+    // Research and Reasoning Models
+    case sonarDeepResearch = "sonar-deep-research" // 128k context
+    case sonarReasoningPro = "sonar-reasoning-pro" // 128k context
+    case sonarReasoning = "sonar-reasoning" // 128k context
+
+    // General Purpose Models
+    case sonarPro = "sonar-pro" // 200k context
+    case sonar = "sonar" // 128k context
+    case r1_1776 = "r1-1776" // 128k context
 }
 
 // We can keep this enum if it's still useful for your application
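Since `PerplexityModel` is a `String`-backed enum, each case's `rawValue` is the exact model identifier sent to the API, and Swift's synthesized `init?(rawValue:)` gives a free round-trip from identifier strings back to cases. A standalone sketch (enum body copied from the diff above; the round-trip behavior is standard `RawRepresentable` synthesis, not anything this commit adds):

```swift
// Copy of the updated enum from the diff, runnable on its own.
enum PerplexityModel: String {
    // Research and Reasoning Models
    case sonarDeepResearch = "sonar-deep-research" // 128k context
    case sonarReasoningPro = "sonar-reasoning-pro" // 128k context
    case sonarReasoning = "sonar-reasoning" // 128k context

    // General Purpose Models
    case sonarPro = "sonar-pro" // 200k context
    case sonar = "sonar" // 128k context
    case r1_1776 = "r1-1776" // 128k context
}

// rawValue is what goes into the request body's "model" field.
let model: PerplexityModel = .sonar
print(model.rawValue) // "sonar"

// Round-tripping an identifier string back to a case; unknown
// identifiers yield nil, which is a cheap validity check.
let parsed = PerplexityModel(rawValue: "sonar-pro")
print(parsed == .sonarPro) // true
print(PerplexityModel(rawValue: "llama-3.1-8b-instruct") == nil) // true: removed model
```

This also means code that persisted the old `llama-3.1-*` identifiers will get `nil` from `init?(rawValue:)` after this commit and needs a migration path.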
