@@ -33,7 +33,7 @@ let messages = [Message(role: "user", content: "What is the capital of France?")]
// Make a chat completion request
do {
-    let response = try await api.chatCompletion(messages: messages, model: .sonarLarge)
+    let response = try await api.chatCompletion(messages: messages, model: .sonar)
    print(response.choices.first?.message.content ?? "No response")
} catch {
    print("Error: \(error)")
@@ -46,13 +46,15 @@
The framework supports various Perplexity AI models through the `PerplexityModel` enum (a brief selection sketch follows the list):
- - `.sonarSmallOnline`: "llama-3.1-sonar-small-128k-online"
- - `.sonarLargeOnline`: "llama-3.1-sonar-large-128k-online"
- - `.sonarHugeOnline`: "llama-3.1-sonar-huge-128k-online"
- - `.sonarSmallChat`: "llama-3.1-sonar-small-128k-chat"
- - `.sonarLargeChat`: "llama-3.1-sonar-large-128k-chat"
- - `.llama8bInstruct`: "llama-3.1-8b-instruct"
- - `.llama70bInstruct`: "llama-3.1-70b-instruct"
+ ### Research and Reasoning Models
+ - `.sonarDeepResearch`: Advanced research model with 128K context length
+ - `.sonarReasoningPro`: Enhanced reasoning model with 128K context length
+ - `.sonarReasoning`: Base reasoning model with 128K context length
+
+ ### General Purpose Models
+ - `.sonarPro`: Professional model with 200K context length
+ - `.sonar`: Standard model with 128K context length
+ - `.r1_1776`: Base model with 128K context length
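To make the grouping above concrete, here is a minimal selection sketch. Only the `PerplexityModel` cases come from the list above; the `TaskKind` enum and the `pickModel(for:)` helper are hypothetical illustrations, not part of PerplexityApiSwift.

```swift
// Hypothetical helper that picks a PerplexityModel case from the two groups above.
enum TaskKind {
    case deepResearch   // long-running, citation-heavy research
    case reasoning      // multi-step reasoning tasks
    case general        // everyday chat and Q&A
}

func pickModel(for task: TaskKind) -> PerplexityModel {
    switch task {
    case .deepResearch: return .sonarDeepResearch  // 128K context
    case .reasoning:    return .sonarReasoningPro  // 128K context
    case .general:      return .sonarPro           // 200K context
    }
}

// The chosen case is passed to the same call shown earlier, e.g.:
// try await api.chatCompletion(messages: messages, model: pickModel(for: .general))
```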
## Error Handling
@@ -62,6 +64,13 @@ PerplexityApiSwift defines a `PerplexityError` enum for common errors:
- `.invalidResponse(statusCode:)`: The API returned an invalid response with the given status code
- `.invalidResponseFormat`: The API response could not be decoded
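Given the cases listed above, here is a minimal sketch of handling them with Swift's pattern-matching `catch` clauses, reusing the `api` and `messages` values from the earlier example; any `PerplexityError` cases not listed here fall through to the generic clause.

```swift
do {
    let response = try await api.chatCompletion(messages: messages, model: .sonar)
    print(response.choices.first?.message.content ?? "No response")
} catch PerplexityError.invalidResponse(let statusCode) {
    // The server answered, but with a non-success status code.
    print("Invalid response, status code: \(statusCode)")
} catch PerplexityError.invalidResponseFormat {
    // The response body could not be decoded into the expected types.
    print("Could not decode the API response")
} catch {
    // Networking failures and any other errors end up here.
    print("Unexpected error: \(error)")
}
```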
+ ## Upcoming Features
+
+ The following features are planned for future releases:
+
+ - **Structured Outputs**: Support for receiving structured, typed responses from the API
+ - **Streaming Response**: Real-time streaming of model responses for improved user experience
+
## Documentation
For more detailed information about the Perplexity AI API, please refer to the official documentation: