

Streaming Response #785

@uggy-ug

Description

Hi Telegram team,

In the recent update you mentioned that bots can now stream responses as they’re generated, which is especially useful for AI chatbots.

However, as far as I can tell, the only practical way to simulate streaming today is to repeatedly call editMessageText while receiving partial output from an LLM, roughly like the sketch below. That approach quickly runs into API rate limits and requires complex throttling logic.
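
For context, here is a minimal sketch of the workaround I mean, in plain Python with `requests`. The token placeholder, the one-second edit interval, and the `llm_chunks` iterable are assumptions on my part; sendMessage and editMessageText themselves are the standard Bot API methods:

```python
import time
import requests

TOKEN = "123456:ABC..."  # hypothetical placeholder, not a real token
API = f"https://api.telegram.org/bot{TOKEN}"

EDIT_INTERVAL = 1.0  # seconds between edits; an assumed value, tuned by trial and error


def stream_to_telegram(chat_id: int, llm_chunks) -> None:
    """Accumulate partial LLM output and edit one Telegram message in place.

    llm_chunks is assumed to be any iterable of partial-text strings
    (e.g. tokens or deltas coming back from an LLM streaming API).
    """
    buffer = ""
    last_sent = ""
    last_edit = 0.0
    message_id = None

    for chunk in llm_chunks:
        buffer += chunk
        if not buffer:
            continue  # sendMessage rejects empty text
        now = time.monotonic()

        if message_id is None:
            # First chunk: create the message that will be edited in place.
            r = requests.post(f"{API}/sendMessage",
                              json={"chat_id": chat_id, "text": buffer})
            message_id = r.json()["result"]["message_id"]
            last_sent, last_edit = buffer, now
        elif now - last_edit >= EDIT_INTERVAL and buffer != last_sent:
            # Throttle edits: too-frequent calls return 429 Too Many Requests,
            # and editing with unchanged text returns a 400 error.
            requests.post(f"{API}/editMessageText",
                          json={"chat_id": chat_id,
                                "message_id": message_id,
                                "text": buffer})
            last_sent, last_edit = buffer, now

    # Final edit so the complete text is always shown at the end.
    if message_id is not None and buffer != last_sent:
        requests.post(f"{API}/editMessageText",
                      json={"chat_id": chat_id,
                            "message_id": message_id,
                            "text": buffer})
```

Even with this kind of throttling, bursts across many concurrent chats still produce 429 responses with a retry_after, which is exactly the complexity I would like to avoid.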

Could you please clarify whether the new “streaming response” feature introduces any new backend mechanism or protocol to handle continuous output more efficiently (i.e., without hitting rate limits)?
If not, could you elaborate on the recommended implementation pattern on both sides (bot backend and Telegram client) to achieve smooth streaming behavior?

The announcement also states that developers need to enable these features, but it then refers to the Bot API page, which provides no explanation. BotFather does not give any clues in this regard either.

Thanks a lot for your work and for clarifying this!

