A Swift package focused on the Azure OpenAI/OpenAI Responses API for iOS, macOS, watchOS, tvOS, and other Apple platforms.
⚠️ Internal Development Notice: This SDK is currently under active internal development and testing. It is not yet published or intended for external production use. The API surface and features are subject to change as we complete our internal roadmap.
SwiftAzureOpenAI provides Swift-native models and utilities for working with the Azure/OpenAI Responses API, a unified, stateful API that combines chat, tools, and assistants patterns.
This package emphasizes strongly typed request/response models, response metadata extraction, and streaming-friendly types, designed specifically for Apple platforms and Swift development.
- Responses API-first: Unified request/response models aligned with the Azure/OpenAI Responses API
- Async/await: Modern Swift concurrency for non-streaming and streaming calls
- Real-time streaming: Native SSE handling with optimized parsing
- Python-style API: `client.responses.create(...)` for fast adoption
- Typed errors: Clear error modeling with `SAOAIError` and `ErrorResponse`
- Structured content: Text, images, and tool/function calling
- Metadata extraction: Request ID, rate limits, processing time
- Azure + OpenAI: Works with both services via `SAOAIConfiguration`
- Swift Package Manager: First-class SPM support
- Optimized services: High-performance parsing/streaming and optional response caching
- iOS 13.0+ / macOS 10.15+ / watchOS 6.0+ / tvOS 13.0+
- Xcode 15.0+
- Swift 6.0+ (currently developed with Swift 6.2)
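As a quick reference, a consuming package manifest that matches these minimums might look like the following sketch (package and target names are placeholders; the dependency entry mirrors the installation section below):

// swift-tools-version:6.0
import PackageDescription

let package = Package(
    name: "YourApp", // placeholder
    platforms: [.iOS(.v13), .macOS(.v10_15), .watchOS(.v6), .tvOS(.v13)],
    dependencies: [
        .package(url: "https://github.com/ytthuan/SwiftAzureOpenAI", branch: "main")
    ],
    targets: [
        .target(name: "YourTarget", dependencies: ["SwiftAzureOpenAI"])
    ]
)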
This package targets the Azure/OpenAI Responses API. It is model-agnostic; use the deployment or model name appropriate for your account. Examples below use `gpt-4o`/`gpt-4o-mini`.
Check the Azure models documentation for availability: https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models
SwiftAzureOpenAI includes comprehensive support for the Azure OpenAI File API, enabling file upload, management, and integration with the Responses API:
- File Upload: Upload documents, images, and data files
- File Management: List, retrieve, and delete files
- Responses Integration: Reference uploaded files in conversations
- Direct File Input: Include file data directly in requests
- Streaming Downloads: Stream large file content in chunks
- Type Safety: Strongly typed file operations and responses
Supported operations:
- `client.files.create()` – Upload files
- `client.files.list()` – List all files
- `client.files.retrieve()` – Get file details
- `client.files.delete()` – Remove files
- `client.files.streamContent()` – Stream file content for large downloads
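A compact sketch of these operations together (a full walkthrough appears later in this README; `retrieve` is assumed to take a file ID, and `client` is a configured `SAOAIClient`):

// Upload, inspect, list, and delete a file.
let data = try Data(contentsOf: URL(fileURLWithPath: "notes.pdf"))
let uploaded = try await client.files.create(file: data, filename: "notes.pdf", purpose: .assistants)
let details = try await client.files.retrieve(uploaded.id) // assumed: file ID parameter
let all = try await client.files.list()
print("\(all.data.count) files, latest: \(details.id)")
_ = try await client.files.delete(uploaded.id)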
Note: This package is not yet published to public package managers. For internal development and testing, use local package references or direct GitHub integration.
For internal development, add SwiftAzureOpenAI as a local package or via GitHub:
dependencies: [
.package(url: "https://github.com/ytthuan/SwiftAzureOpenAI", branch: "main")
]

Then add it to your target dependencies:
.target(
name: "YourTarget",
dependencies: ["SwiftAzureOpenAI"]
)

SwiftAzureOpenAI provides modern async/await support with comprehensive streaming capabilities:
- Python-style API (recommended): `client.responses.create(...)`
- Streaming API: `client.responses.createStreaming(...)`
- Advanced API: Build `SAOAIRequest` and use your own HTTP stack
import SwiftAzureOpenAI

Clients and utilities:
- `SAOAIClient` (main client)
- `ResponsesClient` (Python-style client at `client.responses`)
- `EmbeddingsClient` (embeddings client at `client.embeddings`)
- `FilesClient` (files client at `client.files`)
- `SAOAIAzureConfiguration`, `SAOAIOpenAIConfiguration`
- `SAOAIRequest`, `SAOAIResponse`, `SAOAIMessage` (request/response models)
- `SAOAIEmbeddingsRequest`, `SAOAIEmbeddingsResponse` (embeddings models)
- `EmbeddingBatchHelper` (batch processing utilities)
- `InMemoryResponseCache`, `EmbeddingCache` (caching with TTL support)
- `MetricsDelegate` (observability and performance tracking)

Core models:
- `SAOAIRequest`, `SAOAIMessage`, `SAOAIInputContent`
- `SAOAIResponse`, `SAOAIOutput`, `SAOAIOutputContent`
- `SAOAIStreamingResponse` (SSE events)
- `SAOAIEmbeddingsRequest`, `SAOAIEmbeddingsResponse`, `SAOAIEmbedding`
- `SAOAIReasoning`, `SAOAIText`
- `SAOAITool`, `SAOAIJSONValue`
- `APIResponse<T>`, `ResponseMetadata`, `RateLimitInfo`, `SAOAIError`, `ErrorResponse`
The easiest way to use SwiftAzureOpenAI with a simple, Python-inspired API:
import SwiftAzureOpenAI
// Configure your client
let config = SAOAIAzureConfiguration(
endpoint: "https://your-resource.openai.azure.com",
apiKey: "your-api-key",
deploymentName: "gpt-4o-mini"
)
let client = SAOAIClient(configuration: config)
// Simple string input - just like Python!
let response = try await client.responses.create(
model: config.deploymentName,
input: "Write a haiku about Swift programming.",
maxOutputTokens: 200,
temperature: 0.7
)
// Extract the response text
for output in response.output {
for part in output.content ?? [] {
switch part {
case .outputText(let text):
print(text.text)
case .functionCall(let call):
print("Function call: \(call.name) args=\(call.arguments)")
}
}
}

For conversations with multiple messages:
// Create messages easily with convenience initializer
let messages = [
SAOAIMessage(role: .system, text: "You are a helpful assistant."),
SAOAIMessage(role: .user, text: "What's the weather like?"),
SAOAIMessage(role: .assistant, text: "I don't have real-time weather data."),
SAOAIMessage(role: .user, text: "Can you help me with Swift programming?")
]
let response = try await client.responses.create(
model: config.deploymentName,
input: messages,
maxOutputTokens: 300
)

SwiftAzureOpenAI provides full support for real-time streaming responses via SSE:
// Create a streaming call
let stream = client.responses.createStreaming(
model: config.deploymentName,
input: "Write a story about space exploration",
maxOutputTokens: 500,
temperature: 0.7
)
// Process events as they arrive
for try await event in stream {
if let parts = event.output?.first?.content {
for part in parts {
if let text = part.text, !text.isEmpty { print(text, terminator: "") }
}
fflush(stdout)
}
}

For reasoning models like o1, o3-mini, and o4-mini, you can specify reasoning effort:
// Reasoning configuration (effort and optional summary)
let reasoning = SAOAIReasoning(effort: "medium", summary: "concise")
let text = SAOAIText.low()
let response = try await client.responses.create(
model: config.deploymentName,
input: "Explain BFS vs DFS",
maxOutputTokens: 200,
reasoning: reasoning,
text: text
)

// Define a function tool using SAOAIJSONValue
let sumTool = SAOAITool.function(
name: "sum_calculator",
description: "Return the sum of two integers",
parameters: .object([
"type": .string("object"),
"properties": .object([
"a": .object(["type": .string("integer")]),
"b": .object(["type": .string("integer")])
]),
"required": .array([.string("a"), .string("b")])
])
)
// Pass tool definitions in the request
let toolResponse = try await client.responses.create(
model: config.deploymentName,
input: "What's 15 + 27?",
tools: [sumTool]
)
// If the model asks to call a tool in streaming, send function outputs back (minimal form)
let functionOutput = SAOAIInputContent.FunctionCallOutput(callId: "call_123", output: "{\"result\": 42}")
let functionStream = client.responses.createStreaming(
model: config.deploymentName,
functionCallOutputs: [functionOutput],
previousResponseId: "resp_abc123"
)
for try await _ in functionStream { /* handle follow-up events */ }

SwiftAzureOpenAI provides comprehensive embeddings support with built-in similarity utilities:
// Create embeddings for a single text
let response = try await client.embeddings.create(
text: "Swift is a powerful programming language",
model: "text-embedding-ada-002" // Azure: deployment name
)
// Access the embedding vector
let embedding = response.data.first!
print("Embedding dimensions: \(embedding.dimensions)")
print("Embedding vector: \(embedding.vector)")let texts = [
"Swift programming language",
"Python programming language",
"Machine learning algorithms",
"Database management systems"
]
let response = try await client.embeddings.create(
texts: texts,
model: "text-embedding-ada-002"
)
// Process all embeddings
for embedding in response.data {
print("Text \(embedding.index): \(embedding.dimensions) dims")
}

let embedding1 = response.data[0]
let embedding2 = response.data[1]
// Calculate similarity (returns value between -1.0 and 1.0)
let similarity = embedding1.cosineSimilarity(with: embedding2)
print("Similarity: \(similarity)")
// Other distance metrics
let euclideanDist = embedding1.euclideanDistance(with: embedding2)
let dotProduct = embedding1.dotProduct(with: embedding2)

let documents = [
"Swift is used for iOS development",
"Python is popular for AI and ML",
"JavaScript runs in web browsers",
"Rust focuses on memory safety"
]
// Find documents most similar to a query
let results = try await client.embeddings.semanticSearch(
query: "mobile app development",
documents: documents,
model: "text-embedding-ada-002",
threshold: 0.7
)
for result in results {
print("Document: \(result.document) (similarity: \(result.similarity))")
}

// Find top-K most similar texts
let similarities = try await client.embeddings.findSimilar(
query: "artificial intelligence",
candidates: documents,
model: "text-embedding-ada-002",
topK: 3
)
for (text, index, similarity) in similarities {
print("Rank \(index + 1): \(text) (\(similarity))")
}

SwiftAzureOpenAI provides production-ready utilities for enhanced developer experience and observability:
Process large sets of texts efficiently with automatic batching, concurrency control, and retry logic:
// Setup
let cache = EmbeddingCache(maxCapacity: 10000)
let batchHelper = EmbeddingBatchHelper(
embeddingsClient: client.embeddings,
cache: cache
)
// Process many texts efficiently
let texts = ["Text 1", "Text 2", "Text 3", /* ... hundreds more ... */]
let result = try await batchHelper.processEmbeddings(
texts: texts,
model: "text-embedding-ada-002",
configuration: .highThroughput // or .default, .conservative
) { progress in
print("Progress: \(Int(progress * 100))%")
}
print("Processed \(result.embeddings.count) embeddings")
print("Success rate: \(Int(result.successRate * 100))%")
print("Throughput: \(result.statistics.throughput(for: texts.count)) items/second")Configuration options:
- Default: 5 concurrent requests, batch size 100
- High Throughput: 10 concurrent requests, batch size 200
- Conservative: 2 concurrent requests, batch size 50 (see the preset-selection sketch below)
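Building on the call above, one way to switch presets by workload size (a sketch; the cutoff is arbitrary):

// Reuse batchHelper/texts from the setup above; pick a preset by volume.
let adaptiveResult = try await batchHelper.processEmbeddings(
    texts: texts,
    model: "text-embedding-ada-002",
    configuration: texts.count > 1_000 ? .highThroughput : .conservative
) { progress in
    print("Progress: \(Int(progress * 100))%")
}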
Automatic caching with TTL and LRU eviction:
let cache = EmbeddingCache(maxCapacity: 5000)
// Cache embeddings automatically when using batch helper
// Or manually:
cache.cacheEmbedding(embedding, for: "text", model: "model", expiresIn: 3600)
// Retrieve cached embeddings
if let cached = cache.getCachedEmbedding(for: "text", model: "model") {
print("Cache hit! Vector: \(cached.embedding.prefix(3))")
}
// Monitor cache performance
let stats = cache.statistics
print("Hit rate: \(Int(stats.hitRate * 100))%")
print("Utilization: \(Int(stats.utilization * 100))%")Track request performance and enable distributed tracing:
// Setup observability
let metricsDelegate = ConsoleMetricsDelegate(logLevel: .verbose)
let aggregatingDelegate = AggregatingMetricsDelegate()
// Create client with metrics integration
let client = SAOAIClient(
configuration: config,
metricsDelegate: metricsDelegate
)
// Or use the factory method
let responsesClient = ResponsesClient.create(
configuration: config,
metricsDelegate: metricsDelegate
)
// Requests automatically generate correlation IDs and emit metrics
let response = try await client.responses.create(...)
// Access aggregated statistics
let stats = aggregatingDelegate.statistics
print("Success rate: \(Int(stats.successRate * 100))%")
print("Average duration: \(String(format: "%.3fs", stats.averageRequestDuration))")Available metrics delegates:
- `ConsoleMetricsDelegate`: Logs events to the console with correlation IDs
- `AggregatingMetricsDelegate`: Collects statistics for analysis
- Custom: Implement `MetricsDelegate` for integration with your monitoring system (see the sketch below)
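A purely hypothetical sketch of a custom delegate; the actual `MetricsDelegate` requirements are not listed here, so the method below is an assumed shape to adapt to the real protocol:

// Hypothetical conformance: the requirement shown is assumed, not the SDK's actual API.
final class PrintingMetricsDelegate: MetricsDelegate {
    func requestDidComplete(correlationId: String, duration: TimeInterval, success: Bool) {
        print("[\(correlationId)] \(success ? "ok" : "failed") in \(String(format: "%.3f", duration))s")
    }
}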
See `examples/ErgonomicsUtilitiesExample.swift` for a comprehensive example demonstrating:
- Batch processing with different configurations
- Caching benefits and performance improvements
- Metrics collection and correlation ID tracking
- Integration of all utilities in a production workflow
SwiftAzureOpenAI supports file inputs (especially PDFs) following Azure AI Foundry guidelines. Files can be provided as Base64-encoded data or as file IDs from previously uploaded files.
// Example 1: PDF analysis with Base64-encoded data
let pdfData = try Data(contentsOf: URL(fileURLWithPath: "report.pdf")) // your PDF data as Data
let base64String = pdfData.base64EncodedString()
let pdfResponse = try await client.responses.create(
model: "gpt-4o", // Vision models support PDF inputs
input: [
.message(SAOAIMessage(
role: .user,
text: "Summarize this PDF document",
filename: "report.pdf",
base64FileData: base64String,
mimeType: "application/pdf"
))
]
)
// Example 2: Using a file ID from uploaded files
let fileIdResponse = try await client.responses.create(
model: "gpt-4o-mini",
input: [
.message(SAOAIMessage(
role: .user,
text: "Analyze the uploaded document",
fileId: "assistant-KaVLJQTiWEvdz8yJQHHkqJ"
))
]
)
// Example 3: Direct file content creation
let message = SAOAIMessage(role: .user, content: [
.inputText(.init(text: "What are the key findings in this document?")),
.inputFile(.init(fileId: "assistant-123456789"))
])

File Input Requirements:
- Only models with vision capabilities (gpt-4o, gpt-4o-mini, o1, etc.) support PDF file inputs
- Files can be up to 100 pages and 32 MB of total content per request (see the pre-flight check below)
- Both extracted text and page images are included in the model's context
- Currently supported file types: PDF is the primary format; other document formats may work but are not guaranteed
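Page count is hard to check cheaply client-side, but the 32 MB size limit is easy to validate before building a request. A minimal pre-flight sketch (not part of the SDK; `FileTooLargeError` is a local type defined here):

import Foundation

struct FileTooLargeError: Error { let bytes: Int }

// Reject payloads that exceed the 32 MB per-request content limit noted above.
func ensureWithinRequestLimit(_ data: Data) throws {
    let maxBytes = 32 * 1024 * 1024
    guard data.count <= maxBytes else { throw FileTooLargeError(bytes: data.count) }
}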
You can also retrieve or delete stored responses by ID:

// Retrieve a response by ID
let retrieved = try await client.responses.retrieve("resp_abc123")
// Delete a response
let deleted = try await client.responses.delete("resp_abc123")

For advanced use cases, you can still use the detailed API:
let request = SAOAIRequest(
model: config.deploymentName, // Azure: deployment name; OpenAI: model name
input: [
SAOAIMessage(
role: .system,
content: [ .inputText(.init(text: "You are a helpful assistant.")) ]
),
SAOAIMessage(
role: .user,
content: [ .inputText(.init(text: "Write a haiku about Swift programming.")) ]
)
],
maxOutputTokens: 200,
temperature: 0.7
)

The `input` parameter uses an array of `SAOAIMessage` objects, each containing structured content parts for the Responses API.
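For comparison, the convenience client accepts several input shapes, all shown elsewhere in this README (assuming `model` holds your deployment or model name):

// Three equivalent ways to supply input to the convenience client.
let fromString = try await client.responses.create(model: model, input: "plain string prompt")
let fromMessages = try await client.responses.create(model: model, input: [SAOAIMessage(role: .user, text: "Hi")])
let fromInputs = try await client.responses.create(model: model, input: [.message(SAOAIMessage(role: .user, text: "Hi"))])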
Use your preferred HTTP stack to send requests and decode responses into the provided models:
import Foundation
import SwiftAzureOpenAI
let decoder = JSONDecoder()
// Configure if needed, e.g. date decoding strategy depending on your metadata usage.
func handleResponse(data: Data, httpResponse: HTTPURLResponse) throws -> APIResponse<SAOAIResponse> {
// Extract any metadata you collect from headers and timing
let rateLimit = RateLimitInfo(remaining: nil, resetTime: nil, limit: nil)
let metadata = ResponseMetadata(
requestId: httpResponse.allHeaderFields["x-request-id"] as? String,
timestamp: Date(),
processingTime: nil,
rateLimit: rateLimit
)
let body = try decoder.decode(SAOAIResponse.self, from: data)
return APIResponse(
data: body,
metadata: metadata,
statusCode: httpResponse.statusCode,
headers: httpResponse.allHeaderFields as? [String: String] ?? [:]
)
}

let apiResponse: APIResponse<SAOAIResponse> = /* from your network layer */
let outputs = apiResponse.data.output
for output in outputs {
for part in output.content ?? [] {
switch part {
case .outputText(let text):
print(text.text)
case .functionCall(let call):
print("Function call: \(call.name) \narguments: \(call.arguments)")
}
}
For advanced scenarios, the core includes `StreamingResponseChunk<T>` and services to process arbitrary streams of `Data`. Prefer the high-level `client.responses.createStreaming(...)` for the Responses API.
The Responses API uses a unified data model structure that consolidates the best features from chat completions and assistants APIs:
- `SAOAIRequest` – Main request payload for the Responses API
  - `model: String?` – Azure deployment name or OpenAI model name
  - `input: [SAOAIInput]` – Unified input array (messages and/or function call outputs)
  - `maxOutputTokens: Int?` – Maximum tokens to generate in the response
  - `temperature: Double?`, `topP: Double?` – Sampling parameters
  - `tools: [SAOAITool]?` – Optional tool definitions for function calling
  - `previousResponseId: String?` – Chain follow-ups to a prior response
  - `reasoning: SAOAIReasoning?` – Reasoning configuration
  - `text: SAOAIText?` – Text verbosity configuration
  - `stream: Bool?` – Enable streaming
- `SAOAIReasoning` – Reasoning configuration
  - `effort: String` – Reasoning effort level: "low", "medium", or "high"
  - `summary: String?` – Optional summary style, e.g. "concise" or "detailed"
- `SAOAIMessage` – Conversation message
  - `role: SAOAIMessageRole?` – `.system`, `.user`, `.assistant`, or omitted for tool outputs
  - `content: [SAOAIInputContent]` – Structured content parts
- `SAOAIInputContent` – Structured input content
  - `.inputText(InputText)` → `{ type: "input_text", text }`
  - `.inputImage(InputImage)` → `{ type: "input_image", image_url }` or base64 data URI
  - `.inputFile(InputFile)` → `{ type: "input_file", filename?, file_data?, file_id? }` for PDF and document processing
  - `.functionCallOutput(FunctionCallOutput)` → `{ type: "function_call_output", call_id, output }`
- `SAOAIResponse` – Top-level response
  - `id: String?`
  - `model: String?`
  - `created: Int?` (mapped from `created_at`)
  - `output: [SAOAIOutput]`
  - `usage: SAOAITokenUsage?`
- `SAOAIOutput` – Output item
  - `content: [SAOAIOutputContent]?`
  - `role: String?`
  - Function call fields for tool use (e.g., `type`, `name`, `callId`, `arguments`, `status`)
- `SAOAIOutputContent`
  - `.outputText(OutputText)` → `{ type: "output_text", text }`
  - `.functionCall(FunctionCall)` → `{ type: "function_call", call_id, name, arguments }`
- `SAOAITokenUsage` – Token consumption tracking
  - `inputTokens: Int?`, `outputTokens: Int?`, `totalTokens: Int?`
- `APIResponse<T>` – Wrapper for HTTP response data
  - `data: T` – Decoded response payload
  - `metadata: ResponseMetadata` – Request/response metadata
  - `statusCode: Int` – HTTP status code
  - `headers: [String: String]` – Response headers
- `ResponseMetadata` – Request/response tracking
  - `requestId: String?` – Unique request identifier
  - `timestamp: Date` – Response timestamp
  - `processingTime: TimeInterval?` – Processing duration
  - `rateLimit: RateLimitInfo?` – Rate limit information
- `RateLimitInfo` – Rate limiting details
  - `remaining: Int?`, `resetTime: Date?`, `limit: Int?`
- `SAOAIError`, `ErrorResponse` – Error handling
  - Typed errors for network, decoding, status, and server-reported issues
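To see how these models serialize on the wire, you can encode a request and print the JSON; a brief sketch (exact key names depend on the models' coding keys, so treat the comment as indicative):

import Foundation
import SwiftAzureOpenAI

let sample = SAOAIRequest(
    model: "gpt-4o-mini",
    input: [SAOAIMessage(role: .user, content: [.inputText(.init(text: "Hello"))])],
    maxOutputTokens: 64,
    temperature: 0.2
)
let encoder = JSONEncoder()
encoder.outputFormatting = [.prettyPrinted, .sortedKeys]
let payload = try encoder.encode(sample)
print(String(data: payload, encoding: .utf8) ?? "") // inspect keys such as max_output_tokens, input[]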
This package provides data models and configurations for both Azure OpenAI and OpenAI services. You can use any HTTP client (e.g., URLSession) to call the respective endpoints.
For Azure OpenAI, use the Responses API endpoint with your resource configuration:
import SwiftAzureOpenAI
// Configure Azure OpenAI
let azureConfig = SAOAIAzureConfiguration(
endpoint: "https://your-resource.openai.azure.com",
apiKey: "your-azure-api-key",
deploymentName: "gpt-4o-mini", // Your deployment name
apiVersion: "preview" // Responses API version (default: "preview")
)
// Build your request
let request = SAOAIRequest(
model: azureConfig.deploymentName,
input: [
SAOAIMessage(
role: .user,
content: [.inputText(.init(text: "Hello, Azure OpenAI!"))]
)
],
maxOutputTokens: 100
)

Azure OpenAI Responses API Endpoint:
- URL: `https://{resource}.openai.azure.com/openai/v1/responses?api-version=preview`
- Headers: `api-key: <AZURE_API_KEY>`, `Content-Type: application/json`
- Body: `SAOAIRequest` encoded as JSON
For OpenAI, use the standard Responses API endpoint:
// Configure OpenAI
let openaiConfig = SAOAIOpenAIConfiguration(
apiKey: "sk-your-openai-api-key",
organization: "org-your-organization" // Optional
)

OpenAI Responses API Endpoint:
- URL: `https://api.openai.com/v1/responses`
- Headers: `Authorization: Bearer <OPENAI_API_KEY>`, `Content-Type: application/json`
- Body: `SAOAIRequest` encoded as JSON
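Putting those endpoint details together, a hand-rolled OpenAI request looks roughly like this (a sketch; `request` is any `SAOAIRequest` you have built):

import Foundation

// Build the OpenAI Responses API call from the URL and headers listed above.
var urlRequest = URLRequest(url: URL(string: "https://api.openai.com/v1/responses")!)
urlRequest.httpMethod = "POST"
urlRequest.setValue("Bearer sk-your-openai-api-key", forHTTPHeaderField: "Authorization")
urlRequest.setValue("application/json", forHTTPHeaderField: "Content-Type")
urlRequest.httpBody = try JSONEncoder().encode(request)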
Here's a complete example using URLSession with Azure OpenAI:
import Foundation
import SwiftAzureOpenAI
func sendResponsesRequest() async throws -> APIResponse<SAOAIResponse> {
let config = SAOAIAzureConfiguration(
endpoint: "https://your-resource.openai.azure.com",
apiKey: "your-api-key",
deploymentName: "gpt-4o-mini"
)
let request = SAOAIRequest(
model: config.deploymentName,
input: [
SAOAIMessage(
role: .user,
content: [.inputText(.init(text: "Hello!"))]
)
]
)
let json = try JSONEncoder().encode(request)
var urlRequest = URLRequest(url: config.baseURL)
urlRequest.httpMethod = "POST"
urlRequest.httpBody = json
// Apply configuration headers
for (key, value) in config.headers {
urlRequest.setValue(value, forHTTPHeaderField: key)
}
let (data, response) = try await URLSession.shared.data(for: urlRequest)
guard let httpResponse = response as? HTTPURLResponse else {
    throw SAOAIError.networkError(URLError(.badServerResponse))
}
return try handleResponse(data: data, httpResponse: httpResponse)
}

do {
// ... perform request and decode ...
} catch let error as SAOAIError {
print(error.localizedDescription)
} catch let error as DecodingError {
throw SAOAIError.decodingError(error)
} catch {
throw SAOAIError.networkError(error)
If the server returns a structured error payload, decode into `ErrorResponse` and surface it via `.apiError(ErrorResponse)`.
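A small sketch of that mapping (the fallback case is a stand-in; choose whichever `SAOAIError` case fits your client):

import Foundation
import SwiftAzureOpenAI

// Convert a failed HTTP response body into a typed SAOAIError.
func mapFailure(data: Data, response: HTTPURLResponse) -> SAOAIError {
    if let payload = try? JSONDecoder().decode(ErrorResponse.self, from: data) {
        return .apiError(payload) // structured, server-reported error
    }
    return .networkError(URLError(.badServerResponse)) // opaque failure fallback
}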
Run the unit tests with:

swift test

For comprehensive live testing with Azure OpenAI or OpenAI services, see the Live API Testing Guide.
Quick setup for live testing:
# Azure OpenAI
export AZURE_OPENAI_ENDPOINT="https://your-resource.openai.azure.com"
export COPILOT_AGENT_AZURE_OPENAI_API_KEY="your-azure-api-key"
export AZURE_OPENAI_DEPLOYMENT="your-deployment-name"
# Or OpenAI
export OPENAI_API_KEY="sk-your-openai-api-key"

Then create a simple test client:
import SwiftAzureOpenAI
import Foundation
// Example test function
func testAzureOpenAI() async throws {
guard let endpoint = ProcessInfo.processInfo.environment["AZURE_OPENAI_ENDPOINT"],
let apiKey = ProcessInfo.processInfo.environment["COPILOT_AGENT_AZURE_OPENAI_API_KEY"] ?? ProcessInfo.processInfo.environment["AZURE_OPENAI_API_KEY"],
let deployment = ProcessInfo.processInfo.environment["AZURE_OPENAI_DEPLOYMENT"] else {
print("Azure OpenAI environment variables not set")
return
}
let config = SAOAIAzureConfiguration(
endpoint: endpoint,
apiKey: apiKey,
deploymentName: deployment
)
let request = SAOAIRequest(
model: deployment,
input: [
SAOAIMessage(
role: .user,
content: [.inputText(.init(text: "Hello, Azure OpenAI!"))]
)
],
maxOutputTokens: 50
)
// Implement your HTTP client logic here
print("Request configured successfully")
}

A complete end-to-end File API example:

import SwiftAzureOpenAI
func fileAPIExample() async throws {
// Configure client
let config = SAOAIAzureConfiguration(
endpoint: "https://your-resource.openai.azure.com",
apiKey: "your-api-key",
deploymentName: "gpt-4o",
apiVersion: "preview"
)
let client = SAOAIClient(configuration: config)
// Upload a file
let fileData = try Data(contentsOf: URL(fileURLWithPath: "document.pdf"))
let file = try await client.files.create(
file: fileData,
filename: "document.pdf",
purpose: .assistants // Use .assistants as workaround for .userData
)
print("Uploaded file: \(file.id)")
// Use the file in a conversation
let fileInput = SAOAIInputContent.inputFile(.init(fileId: file.id))
let textInput = SAOAIInputContent.inputText(.init(text: "Summarize this document"))
let message = SAOAIMessage(
role: .user,
content: [fileInput, textInput]
)
let response = try await client.responses.create(
model: "gpt-4o",
input: [.message(message)]
)
print("Summary: \(response.outputText ?? "")")
// List all files
let fileList = try await client.files.list()
print("Found \(fileList.data.count) files")
// Delete the file when done
let deleteResult = try await client.files.delete(file.id)
print("File deleted: \(deleteResult.deleted)")
}

For large files, you can stream file content in chunks rather than loading the entire file into memory:
// Stream file content for large files
let stream = client.files.streamContent("file-abc123")
// Process chunks as they arrive
for try await chunk in stream {
// Process data chunk (e.g., write to file, parse incrementally)
print("Received chunk of \(chunk.count) bytes")
// Example: Write to local file
// fileHandle.write(chunk)
}
print("File download completed")func directFileExample() async throws {
let config = SAOAIAzureConfiguration(
endpoint: "https://your-resource.openai.azure.com",
apiKey: "your-api-key",
deploymentName: "gpt-4o",
apiVersion: "preview"
)
let client = SAOAIClient(configuration: config)
// Load image file
let imageData = try Data(contentsOf: URL(fileURLWithPath: "chart.png"))
let base64Image = imageData.base64EncodedString()
// Include file directly in request (no upload needed)
let fileInput = SAOAIInputContent.inputFile(.init(
filename: "chart.png",
base64Data: base64Image,
mimeType: "image/png"
))
let textInput = SAOAIInputContent.inputText(.init(text: "What does this chart show?"))
let message = SAOAIMessage(
role: .user,
content: [fileInput, textInput]
)
let response = try await client.responses.create(
model: "gpt-4o",
input: [.message(message)]
)
print("Chart analysis: \(response.outputText ?? "")")
}

This project is licensed under the MIT License - see the LICENSE file for details.
- Documentation: project README (this file) and the docs/ directory
- Live API Testing: Live API Testing Guide
- Internal Development: Contributing Guide and CI/CD Documentation
- Issues: GitHub Issues (internal development tracking)
- Azure OpenAI Responses API: Official Documentation
Note: This package is under active internal development and not officially affiliated with OpenAI or Microsoft. It provides Swift-native data models and client utilities designed for the Azure/OpenAI Responses API for internal use and testing.