feat(genai): Demonstrate how to get token count #13428


Open: wants to merge 1 commit into main

Conversation

kaycebasques

Checklist

@kaycebasques requested review from a team as code owners on June 13, 2025 18:45
@product-auto-label bot added the samples label (Issues that are directly related to samples) on Jun 13, 2025
Contributor

@gemini-code-assist bot left a comment

Summary of Changes

Hello @kaycebasques, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

I've made a small update to the counttoken_compute_with_txt.py sample. The change adds a line to print the actual token count derived from the response, making the output clearer for users running the sample. I also updated the example output comment to match.

Highlights

  • Display Token Count: Added a print statement to explicitly show the computed token count by getting the length of the tokens list from the response object.
  • Update Example Output: Modified the inline comment showing the example output to include the line printing the token count, reflecting the change (see the sketch after this list).
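
For orientation, here is a minimal, self-contained sketch of what the updated sample roughly looks like after this change, assuming the google-genai Python SDK. The client setup, model name, and prompt are assumptions for illustration; only the final print line comes from this PR's diff.

```python
# Minimal sketch of the updated counttoken_compute_with_txt.py flow (not the
# verbatim sample). Assumptions: the google-genai Python SDK, credentials
# configured via the environment, and placeholder model/prompt values.
from google import genai

client = genai.Client()

# compute_tokens returns a response whose tokens_info lists the tokens
# computed for each piece of content.
response = client.models.compute_tokens(
    model="gemini-2.0-flash-001",  # placeholder model name, not from the diff
    contents="What's the longest word in the English language?",  # placeholder prompt
)

print(response)
# The line added by this PR: report the token count for the first content part.
print(f"token count: {len(response.tokens_info[0].tokens)}")
```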
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point in your pull request by creating an issue comment (i.e., a comment on the pull request page) using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands.

Feature | Command | Description
Code Review | /gemini review | Performs a code review for the current pull request in its current state.
Pull Request Summary | /gemini summary | Provides a summary of the current pull request in its current state.
Comment | @gemini-code-assist | Responds in comments when explicitly tagged, both in issue comments and review comments.
Help | /gemini help | Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist is currently in preview and may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments to provide feedback.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double-check it and use code with caution.

@kaycebasques changed the title from "Demonstrate how to get token count" to "feat(genai): Demonstrate how to get token count" on Jun 13, 2025
Contributor

@gemini-code-assist bot left a comment

Code Review

This pull request demonstrates how to retrieve and print a token count from the compute_tokens method's response. The suggestion focuses on making the token calculation more robust by summing tokens from all parts in response.tokens_info, which handles potential edge cases more gracefully.

```diff
@@ -25,12 +25,14 @@ def compute_tokens_example() -> int:
     )

     print(response)
+    print(f"token count: {len(response.tokens_info[0].tokens)}")
```
Contributor
Severity: medium

To handle cases where response.tokens_info might be empty or contain multiple elements, consider summing the token counts from all TokensInfo objects within response.tokens_info for a more robust approach.

```python
print(f"token count: {sum(len(info.tokens) for info in response.tokens_info)}")
```

Author

I agree that the suggestion is a more robust approach, but this is a pretty simple example. I don't think we need to complicate it with the fully robust solution. The goal here is to understand basic usage.

Contributor

@kaycebasques could you update the approach so that it does handle an empty response? The basic usage is the goal, and if it fails for whatever reason, having the sample address it encourages the recommended practice.
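
One way the sample could address this, sketched here as a possibility rather than the final change, is the summation suggested above, which yields 0 instead of raising an IndexError when tokens_info is empty:

```python
# Hedged sketch of the reviewer-suggested variant: summing over every
# TokensInfo entry avoids indexing into a possibly empty tokens_info list.
token_count = sum(len(info.tokens) for info in response.tokens_info)
print(f"token count: {token_count}")
```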

@glasnt added the kokoro:force-run label (Add this label to force Kokoro to re-run the tests) on Jun 15, 2025
@kokoro-team removed the kokoro:force-run label on Jun 15, 2025
Labels
samples: Issues that are directly related to samples.