This is a no-nonsense async Scala client for the OpenAI API, supporting all available endpoints and params, including streaming, the newest ChatGPT completion, and voice routines (as defined here), provided in a single, convenient service called OpenAIService. The supported calls are:
- Models: listModels, and retrieveModel
- Completions: createCompletion
- Chat Completions: createChatCompletion - new 🔥
- Edits: createEdit
- Images: createImage, createImageEdit, and createImageVariation
- Embeddings: createEmbeddings
- Audio: createAudioTranscription - new 🔥, and createAudioTranslation - new 🔥
- Files: listFiles, uploadFile, deleteFile, retrieveFile, and retrieveFileContent
- Fine-tunes: createFineTune, listFineTunes, retrieveFineTune, cancelFineTune, listFineTuneEvents, and deleteFineTuneModel
- Moderations: createModeration
Note that, in order to be consistent with the OpenAI API naming, the service function names match the API endpoint titles/descriptions exactly, in camelCase.
Also, we aimed for the lib to be self-contained, with the fewest dependencies possible; therefore, we ended up using only two libs at the top level: play-ahc-ws-standalone and play-ws-standalone-json. Additionally, if dependency injection is required, we use the scala-guice lib as well.
✔️ Important: this is a "community-maintained" library and, as such, has no relation to the OpenAI company.
👉 Check out an article about the lib/client on Medium.
The currently supported Scala versions are 2.12, 2.13, and 3. Note that an optional module openai-scala-guice is available only for Scala 2.12 and 2.13.
To pull the library you have to add the following dependency to your build.sbt:

```scala
"io.cequence" %% "openai-scala-client" % "0.3.2"
```
or to pom.xml (if you use Maven):

```xml
<dependency>
    <groupId>io.cequence</groupId>
    <artifactId>openai-scala-client_2.12</artifactId>
    <version>0.3.2</version>
</dependency>
```
If you want streaming support, use `"io.cequence" %% "openai-scala-client-stream" % "0.3.2"` instead.
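As a concrete sketch, the relevant build.sbt lines for the streaming variant might look like the following (the Scala version here is an assumption; any of the supported 2.12, 2.13, or 3 should work):

```scala
// build.sbt - a minimal sketch, not the project's canonical build file
// Assumes Scala 2.13; any supported version (2.12, 2.13, 3) should work
scalaVersion := "2.13.10"

// The -stream artifact replaces the core one when streaming is needed
libraryDependencies += "io.cequence" %% "openai-scala-client-stream" % "0.3.2"
```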
Config

- Env. variables: `OPENAI_SCALA_CLIENT_API_KEY` and optionally also `OPENAI_SCALA_CLIENT_ORG_ID` (if you have one)
- File config (default): openai-scala-client.conf
I. Obtaining OpenAIService
First you need to provide an implicit execution context as well as an Akka materializer, e.g., as:

```scala
implicit val ec = ExecutionContext.global
implicit val materializer = Materializer(ActorSystem())
```

Then you can obtain a service in one of the following ways.
- Default config (expects env. variable(s) to be set as defined in the Config section)

```scala
val service = OpenAIServiceFactory()
```

- Custom config

```scala
val config = ConfigFactory.load("path_to_my_custom_config")
val service = OpenAIServiceFactory(config)
```

- Without config
```scala
val service = OpenAIServiceFactory(
  apiKey = "your_api_key",
  orgId = Some("your_org_id") // if you have one
)
```

✔️ Important: If you want streaming support, use OpenAIServiceStreamedFactory from the openai-scala-client-stream lib instead of OpenAIServiceFactory (in the three examples above). Three additional functions - createCompletionStreamed, createChatCompletionStreamed, and listFineTuneEventsStreamed - provided by OpenAIServiceStreamedExtra will then be available.
- Via dependency injection (requires the openai-scala-guice lib)

```scala
class MyClass @Inject() (openAIService: OpenAIService) {...}
```

II. Calling functions
Full documentation of each call, with its respective inputs and settings, is provided in OpenAIService. Since all the calls are async, they return responses wrapped in `Future`.
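Because everything returns a `Future`, calls chain with the standard combinators, e.g., a for-comprehension. A minimal sketch, using stand-in functions in place of real service calls (the names mirror the service methods, but the bodies are fabricated for illustration and make no network calls):

```scala
import scala.concurrent.{Await, ExecutionContext, Future}
import scala.concurrent.duration._

object ChainingDemo {
  implicit val ec: ExecutionContext = ExecutionContext.global

  // Stand-ins for service.listModels / service.createCompletion;
  // the real calls go over the network and need an API key
  def listModels: Future[Seq[String]] = Future.successful(Seq("text-davinci-003"))
  def createCompletion(prompt: String): Future[String] = Future.successful(s"echo: $prompt")

  // A for-comprehension starts the second call only after the first completes
  val result: Future[String] =
    for {
      models <- listModels
      reply  <- createCompletion(s"Using ${models.head}")
    } yield reply

  def main(args: Array[String]): Unit =
    // Await only for demonstration; prefer non-blocking composition in real code
    println(Await.result(result, 5.seconds))
}
```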
Examples:
- List models
service.listModels.map(models =>
models.foreach(println)
)- Retrieve model
service.retrieveModel(ModelId.text_davinci_003).map(model =>
println(model.getOrElse("N/A"))
)- Create completion
```scala
val text = """Extract the name and mailing address from this email:
  |Dear Kelly,
  |It was great to talk to you at the seminar. I thought Jane's talk was quite good.
  |Thank you for the book. Here's my address 2111 Ash Lane, Crestview CA 92002
  |Best,
  |Maya
""".stripMargin

service.createCompletion(text).map(completion =>
  println(completion.choices.head.text)
)
```

- Create completion with a custom setting
```scala
val text = """Extract the name and mailing address from this email:
  |Dear Kelly,
  |It was great to talk to you at the seminar. I thought Jane's talk was quite good.
  |Thank you for the book. Here's my address 2111 Ash Lane, Crestview CA 92002
  |Best,
  |Maya
""".stripMargin

service.createCompletion(
  text,
  settings = CreateCompletionSettings(
    model = ModelId.text_davinci_001,
    max_tokens = Some(1500),
    temperature = Some(0.9),
    presence_penalty = Some(0.2),
    frequency_penalty = Some(0.2)
  )
).map(completion =>
  println(completion.choices.head.text)
)
```

- Create completion with streaming and a custom setting
```scala
val source = service.createCompletionStreamed(
  prompt = "Write me a Shakespeare poem about two cats playing baseball in Russia using at least 2 pages",
  settings = CreateCompletionSettings(
    model = ModelId.text_davinci_003,
    max_tokens = Some(1500),
    temperature = Some(0.9),
    presence_penalty = Some(0.2),
    frequency_penalty = Some(0.2)
  )
)

source.map(completion =>
  println(completion.choices.head.text)
).runWith(Sink.ignore)
```

(For this to work you need to use OpenAIServiceStreamedFactory from the openai-scala-client-stream lib.)
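Instead of printing each chunk, a streamed `Source` can also be folded into the full text with ordinary Akka Streams operators. A sketch with a stand-in `Source` of string chunks in place of the real streamed response:

```scala
import akka.actor.ActorSystem
import akka.stream.scaladsl.{Sink, Source}
import scala.concurrent.Await
import scala.concurrent.duration._

object StreamFoldDemo {
  // An implicit ActorSystem provides the materializer (Akka >= 2.6)
  implicit val system: ActorSystem = ActorSystem("demo")

  // Stand-in for the chunks a streamed completion would deliver
  val chunks = Source(List("Shall ", "I ", "compare ", "thee"))

  // Fold the streamed pieces into one string instead of printing each
  def collectAll(): String =
    Await.result(chunks.runWith(Sink.fold("")(_ + _)), 5.seconds)
}
```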
- Create chat completion

```scala
val createChatCompletionSettings = CreateChatCompletionSettings(
  model = ModelId.gpt_3_5_turbo
)

val messages: Seq[MessageSpec] = Seq(
  MessageSpec(role = ChatRole.System, content = "You are a helpful assistant."),
  MessageSpec(role = ChatRole.User, content = "Who won the world series in 2020?"),
  MessageSpec(role = ChatRole.Assistant, content = "The Los Angeles Dodgers won the World Series in 2020."),
  MessageSpec(role = ChatRole.User, content = "Where was it played?")
)

service.createChatCompletion(
  messages = messages,
  settings = createChatCompletionSettings
).map { chatCompletion =>
  println(chatCompletion.choices.head.message.content)
}
```
- Wen Scala 3?

  Feb 2023. You are right; we chose the shortest month to do so :) Done!
- I got a timeout exception. How can I change the timeout setting?

  You can do it either by passing the `timeouts` param to `OpenAIServiceFactory` or, if you use your own configuration file, you can simply add it there as:

  ```
  openai-scala-client {
    timeouts {
      requestTimeoutSec = 200
      readTimeoutSec = 200
      connectTimeoutSec = 5
      pooledConnectionIdleTimeoutSec = 60
    }
  }
  ```
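  Independently of raising the limits, a timed-out `Future` can also be handled at the call site with the standard `recover` combinator. A sketch using a stand-in failed `Future` in place of a real service call:

  ```scala
  import java.util.concurrent.TimeoutException
  import scala.concurrent.{ExecutionContext, Future}

  object TimeoutDemo {
    implicit val ec: ExecutionContext = ExecutionContext.global

    // Stand-in for a service call that failed with a timeout
    val call: Future[String] = Future.failed(new TimeoutException("request timed out"))

    // Recover to a fallback value instead of letting the exception propagate
    val safe: Future[String] = call.recover { case _: TimeoutException => "fallback" }
  }
  ```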
- I got an exception like `com.typesafe.config.ConfigException$UnresolvedSubstitution: openai-scala-client.conf @ jar:file:.../io/cequence/openai-scala-client_2.13/0.0.1/openai-scala-client_2.13-0.0.1.jar!/openai-scala-client.conf: 4: Could not resolve substitution to a value: ${OPENAI_SCALA_CLIENT_API_KEY}`. What should I do?

  Set the env. variable `OPENAI_SCALA_CLIENT_API_KEY`. If you don't have one, register here.
- It all looks cool. Can I chat with you about your research and development?

  Just shoot us an email at [email protected].
This library is available and published as open source under the terms of the MIT License.
This project is open-source and welcomes any contribution or feedback (here).
Development of this library has been supported by Cequence.io - "The future of contracting".
Created and maintained by Peter Banda.