release: 1.37.0 #1567

Merged · 4 commits · Jul 22, 2024
2 changes: 1 addition & 1 deletion .release-please-manifest.json
@@ -1,3 +1,3 @@
{
".": "1.36.1"
".": "1.37.0"
}
4 changes: 2 additions & 2 deletions .stats.yml
@@ -1,2 +1,2 @@
-configured_endpoints: 64
-openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/openai-518ca6c60061d3e8bc0971facf40d752f2aea62e3522cc168ad29a1f29cab3dd.yml
+configured_endpoints: 68
+openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/openai-77cfff37114bc9f141c7e6107eb5f1b38d8cc99bc3d4ce03a066db2b6b649c69.yml
18 changes: 18 additions & 0 deletions CHANGELOG.md
@@ -1,5 +1,23 @@
# Changelog

## 1.37.0 (2024-07-22)

Full Changelog: [v1.36.1...v1.37.0](https://github.com/openai/openai-python/compare/v1.36.1...v1.37.0)

### Features

* **api:** add uploads endpoints ([#1568](https://github.com/openai/openai-python/issues/1568)) ([d877b6d](https://github.com/openai/openai-python/commit/d877b6dabb9b3e8da6ff2f46de1120af54de398d))


### Bug Fixes

* **cli/audio:** handle non-json response format ([#1557](https://github.com/openai/openai-python/issues/1557)) ([bb7431f](https://github.com/openai/openai-python/commit/bb7431f602602d4c74d75809c6934a7fd192972d))


### Documentation

* **readme:** fix example snippet imports ([#1569](https://github.com/openai/openai-python/issues/1569)) ([0c90af6](https://github.com/openai/openai-python/commit/0c90af6412b3314c2257b9b8eb7fabd767f32ef6))

## 1.36.1 (2024-07-20)

Full Changelog: [v1.36.0...v1.36.1](https://github.com/openai/openai-python/compare/v1.36.0...v1.36.1)
4 changes: 2 additions & 2 deletions README.md
@@ -228,7 +228,7 @@ List methods in the OpenAI API are paginated.
This library provides auto-paginating iterators with each list response, so you do not have to request successive pages manually:

```python
-import openai
+from openai import OpenAI

client = OpenAI()

@@ -246,7 +246,7 @@ Or, asynchronously:

```python
import asyncio
-import openai
+from openai import AsyncOpenAI

client = AsyncOpenAI()

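Both hunks just swap a module import for a client-class import. For context, a minimal runnable version of the corrected synchronous snippet; the job-listing body is illustrative, since the diff only shows the import line:

```python
from openai import OpenAI

client = OpenAI()

# Iterating the returned page object fetches follow-up pages on demand,
# so no manual cursor handling is needed. The endpoint choice is illustrative.
for job in client.fine_tuning.jobs.list(limit=20):
    print(job.id)
```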
26 changes: 26 additions & 0 deletions api.md
@@ -415,3 +415,29 @@ Methods:
- <code title="get /batches/{batch_id}">client.batches.<a href="./src/openai/resources/batches.py">retrieve</a>(batch_id) -> <a href="./src/openai/types/batch.py">Batch</a></code>
- <code title="get /batches">client.batches.<a href="./src/openai/resources/batches.py">list</a>(\*\*<a href="src/openai/types/batch_list_params.py">params</a>) -> <a href="./src/openai/types/batch.py">SyncCursorPage[Batch]</a></code>
- <code title="post /batches/{batch_id}/cancel">client.batches.<a href="./src/openai/resources/batches.py">cancel</a>(batch_id) -> <a href="./src/openai/types/batch.py">Batch</a></code>

# Uploads

Types:

```python
from openai.types import Upload
```

Methods:

- <code title="post /uploads">client.uploads.<a href="./src/openai/resources/uploads/uploads.py">create</a>(\*\*<a href="src/openai/types/upload_create_params.py">params</a>) -> <a href="./src/openai/types/upload.py">Upload</a></code>
- <code title="post /uploads/{upload_id}/cancel">client.uploads.<a href="./src/openai/resources/uploads/uploads.py">cancel</a>(upload_id) -> <a href="./src/openai/types/upload.py">Upload</a></code>
- <code title="post /uploads/{upload_id}/complete">client.uploads.<a href="./src/openai/resources/uploads/uploads.py">complete</a>(upload_id, \*\*<a href="src/openai/types/upload_complete_params.py">params</a>) -> <a href="./src/openai/types/upload.py">Upload</a></code>

## Parts

Types:

```python
from openai.types.uploads import UploadPart
```

Methods:

- <code title="post /uploads/{upload_id}/parts">client.uploads.parts.<a href="./src/openai/resources/uploads/parts.py">create</a>(upload_id, \*\*<a href="src/openai/types/uploads/part_create_params.py">params</a>) -> <a href="./src/openai/types/uploads/upload_part.py">UploadPart</a></code>
2 changes: 1 addition & 1 deletion pyproject.toml
@@ -1,6 +1,6 @@
[project]
name = "openai"
version = "1.36.1"
version = "1.37.0"
description = "The official Python library for the openai API"
dynamic = ["readme"]
license = "Apache-2.0"
8 changes: 8 additions & 0 deletions src/openai/_client.py
@@ -58,6 +58,7 @@ class OpenAI(SyncAPIClient):
    fine_tuning: resources.FineTuning
    beta: resources.Beta
    batches: resources.Batches
    uploads: resources.Uploads
    with_raw_response: OpenAIWithRawResponse
    with_streaming_response: OpenAIWithStreamedResponse

@@ -143,6 +144,7 @@ def __init__(
        self.fine_tuning = resources.FineTuning(self)
        self.beta = resources.Beta(self)
        self.batches = resources.Batches(self)
        self.uploads = resources.Uploads(self)
        self.with_raw_response = OpenAIWithRawResponse(self)
        self.with_streaming_response = OpenAIWithStreamedResponse(self)

@@ -270,6 +272,7 @@ class AsyncOpenAI(AsyncAPIClient):
    fine_tuning: resources.AsyncFineTuning
    beta: resources.AsyncBeta
    batches: resources.AsyncBatches
    uploads: resources.AsyncUploads
    with_raw_response: AsyncOpenAIWithRawResponse
    with_streaming_response: AsyncOpenAIWithStreamedResponse

@@ -355,6 +358,7 @@ def __init__(
        self.fine_tuning = resources.AsyncFineTuning(self)
        self.beta = resources.AsyncBeta(self)
        self.batches = resources.AsyncBatches(self)
        self.uploads = resources.AsyncUploads(self)
        self.with_raw_response = AsyncOpenAIWithRawResponse(self)
        self.with_streaming_response = AsyncOpenAIWithStreamedResponse(self)

@@ -483,6 +487,7 @@ def __init__(self, client: OpenAI) -> None:
        self.fine_tuning = resources.FineTuningWithRawResponse(client.fine_tuning)
        self.beta = resources.BetaWithRawResponse(client.beta)
        self.batches = resources.BatchesWithRawResponse(client.batches)
        self.uploads = resources.UploadsWithRawResponse(client.uploads)


class AsyncOpenAIWithRawResponse:
@@ -498,6 +503,7 @@ def __init__(self, client: AsyncOpenAI) -> None:
        self.fine_tuning = resources.AsyncFineTuningWithRawResponse(client.fine_tuning)
        self.beta = resources.AsyncBetaWithRawResponse(client.beta)
        self.batches = resources.AsyncBatchesWithRawResponse(client.batches)
        self.uploads = resources.AsyncUploadsWithRawResponse(client.uploads)


class OpenAIWithStreamedResponse:
@@ -513,6 +519,7 @@ def __init__(self, client: OpenAI) -> None:
        self.fine_tuning = resources.FineTuningWithStreamingResponse(client.fine_tuning)
        self.beta = resources.BetaWithStreamingResponse(client.beta)
        self.batches = resources.BatchesWithStreamingResponse(client.batches)
        self.uploads = resources.UploadsWithStreamingResponse(client.uploads)


class AsyncOpenAIWithStreamedResponse:
@@ -528,6 +535,7 @@ def __init__(self, client: AsyncOpenAI) -> None:
        self.fine_tuning = resources.AsyncFineTuningWithStreamingResponse(client.fine_tuning)
        self.beta = resources.AsyncBetaWithStreamingResponse(client.beta)
        self.batches = resources.AsyncBatchesWithStreamingResponse(client.batches)
        self.uploads = resources.AsyncUploadsWithStreamingResponse(client.uploads)


Client = OpenAI
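In effect, the new resource is wired into both client classes and every response-wrapper variant. A quick sketch of what that exposes, using only names visible in the hunks above:

```python
from openai import OpenAI

client = OpenAI()

# Reachable directly on the client...
assert client.uploads is not None

# ...and mirrored on the raw- and streaming-response wrappers.
assert client.with_raw_response.uploads is not None
assert client.with_streaming_response.uploads is not None
```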
2 changes: 1 addition & 1 deletion src/openai/_version.py
@@ -1,4 +1,4 @@
# File generated from our OpenAPI spec by Stainless. See CONTRIBUTING.md for details.

__title__ = "openai"
__version__ = "1.36.1" # x-release-please-version
__version__ = "1.37.0" # x-release-please-version
52 changes: 33 additions & 19 deletions src/openai/cli/_api/audio.py
@@ -1,12 +1,14 @@
from __future__ import annotations

+import sys
from typing import TYPE_CHECKING, Any, Optional, cast
from argparse import ArgumentParser

from .._utils import get_client, print_model
from ..._types import NOT_GIVEN
from .._models import BaseModel
from .._progress import BufferReader
+from ...types.audio import Transcription

if TYPE_CHECKING:
    from argparse import _SubParsersAction
@@ -65,30 +67,42 @@ def transcribe(args: CLITranscribeArgs) -> None:
        with open(args.file, "rb") as file_reader:
            buffer_reader = BufferReader(file_reader.read(), desc="Upload progress")

-        model = get_client().audio.transcriptions.create(
-            file=(args.file, buffer_reader),
-            model=args.model,
-            language=args.language or NOT_GIVEN,
-            temperature=args.temperature or NOT_GIVEN,
-            prompt=args.prompt or NOT_GIVEN,
-            # casts required because the API is typed for enums
-            # but we don't want to validate that here for forwards-compat
-            response_format=cast(Any, args.response_format),
+        model = cast(
+            "Transcription | str",
+            get_client().audio.transcriptions.create(
+                file=(args.file, buffer_reader),
+                model=args.model,
+                language=args.language or NOT_GIVEN,
+                temperature=args.temperature or NOT_GIVEN,
+                prompt=args.prompt or NOT_GIVEN,
+                # casts required because the API is typed for enums
+                # but we don't want to validate that here for forwards-compat
+                response_format=cast(Any, args.response_format),
+            ),
         )
-        print_model(model)
+        if isinstance(model, str):
+            sys.stdout.write(model + "\n")
+        else:
+            print_model(model)

    @staticmethod
    def translate(args: CLITranslationArgs) -> None:
        with open(args.file, "rb") as file_reader:
            buffer_reader = BufferReader(file_reader.read(), desc="Upload progress")

-        model = get_client().audio.translations.create(
-            file=(args.file, buffer_reader),
-            model=args.model,
-            temperature=args.temperature or NOT_GIVEN,
-            prompt=args.prompt or NOT_GIVEN,
-            # casts required because the API is typed for enums
-            # but we don't want to validate that here for forwards-compat
-            response_format=cast(Any, args.response_format),
+        model = cast(
+            "Transcription | str",
+            get_client().audio.translations.create(
+                file=(args.file, buffer_reader),
+                model=args.model,
+                temperature=args.temperature or NOT_GIVEN,
+                prompt=args.prompt or NOT_GIVEN,
+                # casts required because the API is typed for enums
+                # but we don't want to validate that here for forwards-compat
+                response_format=cast(Any, args.response_format),
+            ),
         )
-        print_model(model)
+        if isinstance(model, str):
+            sys.stdout.write(model + "\n")
+        else:
+            print_model(model)
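The motivation for the new cast and branch: with a non-JSON `response_format` (e.g. `text`, `srt`, `vtt`) the endpoint returns raw text, so at runtime the SDK hands back a plain `str` that `print_model` cannot render. A minimal sketch of the same behavior outside the CLI, assuming an illustrative filename:

```python
from openai import OpenAI

client = OpenAI()

with open("speech.mp3", "rb") as audio_file:  # illustrative filename
    result = client.audio.transcriptions.create(
        file=audio_file,
        model="whisper-1",
        response_format="text",  # non-JSON: the API responds with raw text
    )

# Mirrors the CLI fix: plain strings are printed as-is, models are rendered.
if isinstance(result, str):
    print(result)
else:
    print(result.text)
```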
14 changes: 14 additions & 0 deletions src/openai/resources/__init__.py
@@ -56,6 +56,14 @@
    BatchesWithStreamingResponse,
    AsyncBatchesWithStreamingResponse,
)
from .uploads import (
    Uploads,
    AsyncUploads,
    UploadsWithRawResponse,
    AsyncUploadsWithRawResponse,
    UploadsWithStreamingResponse,
    AsyncUploadsWithStreamingResponse,
)
from .embeddings import (
    Embeddings,
    AsyncEmbeddings,
@@ -156,4 +164,10 @@
"AsyncBatchesWithRawResponse",
"BatchesWithStreamingResponse",
"AsyncBatchesWithStreamingResponse",
"Uploads",
"AsyncUploads",
"UploadsWithRawResponse",
"AsyncUploadsWithRawResponse",
"UploadsWithStreamingResponse",
"AsyncUploadsWithStreamingResponse",
]
6 changes: 6 additions & 0 deletions src/openai/resources/chat/completions.py
@@ -171,6 +171,7 @@ def create(
exhausted.
- If set to 'default', the request will be processed using the default service
tier with a lower uptime SLA and no latency guarantee.
- When not set, the default behavior is 'auto'.

When this parameter is set, the response body will include the `service_tier`
utilized.
@@ -366,6 +367,7 @@ def create(
exhausted.
- If set to 'default', the request will be processed using the default service
tier with a lower uptime SLA and no latency guarantee.
- When not set, the default behavior is 'auto'.

When this parameter is set, the response body will include the `service_tier`
utilized.
@@ -554,6 +556,7 @@ def create(
exhausted.
- If set to 'default', the request will be processed using the default service
tier with a lower uptime SLA and no latency guarantee.
- When not set, the default behavior is 'auto'.

When this parameter is set, the response body will include the `service_tier`
utilized.
@@ -817,6 +820,7 @@ async def create(
exhausted.
- If set to 'default', the request will be processed using the default service
tier with a lower uptime SLA and no latency guarantee.
- When not set, the default behavior is 'auto'.

When this parameter is set, the response body will include the `service_tier`
utilized.
@@ -1012,6 +1016,7 @@ async def create(
exhausted.
- If set to 'default', the request will be processed using the default service
tier with a lower uptime SLA and no latency guarantee.
- When not set, the default behavior is 'auto'.

When this parameter is set, the response body will include the `service_tier`
utilized.
@@ -1200,6 +1205,7 @@ async def create(
exhausted.
- If set to 'default', the request will be processed using the default service
tier with a lower uptime SLA and no latency guarantee.
- When not set, the default behavior is 'auto'.

When this parameter is set, the response body will include the `service_tier`
utilized.
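For reference, the docstring above describes the `service_tier` request parameter, which is passed straight through `create`. A minimal sketch, with an illustrative model and prompt:

```python
from openai import OpenAI

client = OpenAI()

completion = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model
    messages=[{"role": "user", "content": "Say hello."}],
    service_tier="auto",  # per the new doc line, omitting this behaves the same
)

# When service_tier is set, the response reports the tier actually used.
print(completion.service_tier)
```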
33 changes: 33 additions & 0 deletions src/openai/resources/uploads/__init__.py
@@ -0,0 +1,33 @@
# File generated from our OpenAPI spec by Stainless. See CONTRIBUTING.md for details.

from .parts import (
    Parts,
    AsyncParts,
    PartsWithRawResponse,
    AsyncPartsWithRawResponse,
    PartsWithStreamingResponse,
    AsyncPartsWithStreamingResponse,
)
from .uploads import (
    Uploads,
    AsyncUploads,
    UploadsWithRawResponse,
    AsyncUploadsWithRawResponse,
    UploadsWithStreamingResponse,
    AsyncUploadsWithStreamingResponse,
)

__all__ = [
    "Parts",
    "AsyncParts",
    "PartsWithRawResponse",
    "AsyncPartsWithRawResponse",
    "PartsWithStreamingResponse",
    "AsyncPartsWithStreamingResponse",
    "Uploads",
    "AsyncUploads",
    "UploadsWithRawResponse",
    "AsyncUploadsWithRawResponse",
    "UploadsWithStreamingResponse",
    "AsyncUploadsWithStreamingResponse",
]