
Commit e043d7b

apcha-oai authored and stainless-app[bot] committed
chore: fix dangling comment
1 parent 25cbb74 commit e043d7b

File tree

1 file changed (+0 −60 lines)

src/openai/resources/audio/transcriptions.py

Lines changed: 0 additions & 60 deletions
@@ -103,66 +103,6 @@ def create(
         timeout: float | httpx.Timeout | None | NotGiven = not_given,
     ) -> TranscriptionVerbose: ...
 
-              model's confidence in the transcription. `logprobs` only works with
-              response_format set to `json` and only with the models `gpt-4o-transcribe` and
-              `gpt-4o-mini-transcribe`. This field is not supported when using
-              `gpt-4o-transcribe-diarize`.
-
-          known_speaker_names: Optional list of speaker names that correspond to the audio samples provided in
-              `known_speaker_references[]`. Each entry should be a short identifier (for
-              example `customer` or `agent`). Up to 4 speakers are supported.
-
-          known_speaker_references: Optional list of audio samples (as
-              [data URLs](https://developer.mozilla.org/en-US/docs/Web/HTTP/Basics_of_HTTP/Data_URLs))
-              that contain known speaker references matching `known_speaker_names[]`. Each
-              sample must be between 2 and 10 seconds, and can use any of the same input audio
-              formats supported by `file`.
-
-          language: The language of the input audio. Supplying the input language in
-              [ISO-639-1](https://en.wikipedia.org/wiki/List_of_ISO_639-1_codes) (e.g. `en`)
-              format will improve accuracy and latency.
-
-          prompt: An optional text to guide the model's style or continue a previous audio
-              segment. The
-              [prompt](https://platform.openai.com/docs/guides/speech-to-text#prompting)
-              should match the audio language. This field is not supported when using
-              `gpt-4o-transcribe-diarize`.
-
-          response_format: The format of the output, in one of these options: `json`, `text`, `srt`,
-              `verbose_json`, `vtt`, or `diarized_json`. For `gpt-4o-transcribe` and
-              `gpt-4o-mini-transcribe`, the only supported format is `json`. For
-              `gpt-4o-transcribe-diarize`, the supported formats are `json`, `text`, and
-              `diarized_json`, with `diarized_json` required to receive speaker annotations.
-
-          stream: If set to true, the model response data will be streamed to the client as it is
-              generated using
-              [server-sent events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events#Event_stream_format).
-              See the
-              [Streaming section of the Speech-to-Text guide](https://platform.openai.com/docs/guides/speech-to-text?lang=curl#streaming-transcriptions)
-              for more information.
-
-              Note: Streaming is not supported for the `whisper-1` model and will be ignored.
-
-          temperature: The sampling temperature, between 0 and 1. Higher values like 0.8 will make the
-              output more random, while lower values like 0.2 will make it more focused and
-              deterministic. If set to 0, the model will use
-              [log probability](https://en.wikipedia.org/wiki/Log_probability) to
-              automatically increase the temperature until certain thresholds are hit.
-
-          timestamp_granularities: The timestamp granularities to populate for this transcription.
-              `response_format` must be set `verbose_json` to use timestamp granularities.
-              Either or both of these options are supported: `word`, or `segment`. Note: There
-              is no additional latency for segment timestamps, but generating word timestamps
-              incurs additional latency. This option is not available for
-              `gpt-4o-transcribe-diarize`.
-
-          extra_headers: Send extra headers
-
-          extra_query: Add additional query parameters to the request
-
-          extra_body: Add additional JSON properties to the request
-    ) -> Transcription: ...
-
 @overload
 def create(
     self,
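
For context, the deleted lines were a stray fragment of the `create` docstring left behind by an earlier edit; the parameters they describe remain part of the transcriptions API. Below is a minimal sketch of how those parameters fit together, assuming the openai Python client: the file paths, speaker labels, and the `to_data_url` helper are illustrative placeholders, while the parameter names and constraints come from the docstring text above.

```python
import base64

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def to_data_url(path: str) -> str:
    """Encode a short reference clip (2-10 seconds) as a data URL."""
    with open(path, "rb") as f:
        return "data:audio/wav;base64," + base64.b64encode(f.read()).decode()


# Plain transcription. Per the docstring: `language` (ISO-639-1) and `prompt`
# improve accuracy, and `response_format` must be `verbose_json` to use
# timestamp_granularities; word timestamps add latency, segment ones do not.
with open("meeting.wav", "rb") as audio:  # placeholder path
    transcript = client.audio.transcriptions.create(
        model="whisper-1",
        file=audio,
        language="en",
        prompt="Quarterly earnings call.",  # should match the audio language
        response_format="verbose_json",
        temperature=0.2,
        timestamp_granularities=["segment", "word"],
    )
print(transcript.text)

# Speaker-annotated transcription. Per the docstring: `diarized_json` is
# required to receive speaker annotations, up to 4 known speakers may be
# named, and each reference clip must be between 2 and 10 seconds long.
with open("support_call.wav", "rb") as audio:  # placeholder path
    diarized = client.audio.transcriptions.create(
        model="gpt-4o-transcribe-diarize",
        file=audio,
        response_format="diarized_json",
        known_speaker_names=["agent", "customer"],
        known_speaker_references=[
            to_data_url("agent_sample.wav"),      # placeholder clips
            to_data_url("customer_sample.wav"),
        ],
    )
```

Streaming (`stream=True`) takes the same shape; as the removed text notes, it is ignored for `whisper-1`.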

0 commit comments
