Great question!

To stream chat completions with the openai Python package, set stream=True in the request and then iterate over the chunks the API yields.

Here's how you can do it with the pre-1.0 openai SDK (in openai>=1.0 the equivalent call is client.chat.completions.create):

```python
import openai

openai.api_key = "your-api-key"

# stream=True makes the API return an iterator of chunks
# instead of a single completed response.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello!"}],
    stream=True,
)

for chunk in response:
    # Each chunk carries a partial message in choices[0].delta;
    # the "content" key can be absent (e.g. in the first and last chunks),
    # so fall back to an empty string.
    content = chunk["choices"][0]["delta"].get("content", "")
    print(content, end="", flush=True)
```

This will print the generated message token-by-token in real time.
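If you also want the complete reply at the end (not just the live printout), you can collect the deltas as they arrive. Here is a minimal, self-contained sketch of that pattern; the hard-coded chunk dicts below are stand-ins for what the API would actually stream, with shapes mirroring the chat-completion delta format:

```python
# Hypothetical stand-in chunks, shaped like streamed chat-completion deltas.
chunks = [
    {"choices": [{"delta": {"role": "assistant"}}]},  # first chunk: role only
    {"choices": [{"delta": {"content": "Hello"}}]},
    {"choices": [{"delta": {"content": ", world"}}]},
    {"choices": [{"delta": {"content": "!"}}]},
    {"choices": [{"delta": {}}]},                     # final chunk: empty delta
]

parts = []
for chunk in chunks:
    # Same access pattern as the streaming loop above.
    content = chunk["choices"][0]["delta"].get("content", "")
    print(content, end="", flush=True)
    parts.append(content)

print()  # newline once the stream finishes
full_reply = "".join(parts)
```

After the loop, full_reply holds the assembled message ("Hello, world!" for the stand-in chunks above).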

Let me know if that works — and feel free to mark this as the answer if it helps! ✅

Answer selected by Istituto-freudinttheprodev