
prompt is too long? #122

Open
@cipherome-minkim

Description

I'm trying to use claude-sonnet-4-20250514 with the following settings in call_llm.py:

...
response = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=16000,
    thinking={
        "type": "enabled",
        "budget_tokens": 10000
    },
    messages=[
        {"role": "user", "content": prompt}
    ]
)
...

After a bit of "Identifying abstractions using LLM..."

I get an exception stack trace with the following error:

anthropic.BadRequestError: Error code: 400 - {'type': 'error', 'error': {'type': 'invalid_request_error', 'message': 'prompt is too long: 211230 tokens > 200000 maximum'}}

211,230 tokens seems like a lot... I don't think I understand how the max_tokens setting is used when composing the prompt.

Can anyone help me understand which parameters I need to tweak to get it to process a repo? I'm not working with a huge repo (just under 500 files fetched).
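In the meantime, this is roughly how I'm sanity-checking the prompt size before the API call. A minimal sketch only: the ~4-characters-per-token ratio and the `estimate_tokens` / `truncate_to_budget` helpers are my own rough assumptions, not part of this project or the Anthropic SDK, and the real tokenizer may count quite differently.

```python
# Rough pre-flight check on prompt size before calling messages.create.
# Assumption: ~4 characters per token on average (a common rough heuristic
# for English text; actual tokenization can differ significantly).

CONTEXT_WINDOW = 200_000    # input limit reported in the 400 error message
MAX_OUTPUT_TOKENS = 16_000  # the max_tokens value passed to messages.create

def estimate_tokens(text: str) -> int:
    """Very rough token estimate: ~4 characters per token."""
    return len(text) // 4

def truncate_to_budget(text: str, budget_tokens: int) -> str:
    """Trim text so its estimated token count fits within the budget."""
    return text[: budget_tokens * 4]

# max_tokens limits the *generated output*, not the input, but leaving
# headroom for it keeps the request comfortably inside the window.
input_budget = CONTEXT_WINDOW - MAX_OUTPUT_TOKENS

prompt = "x" * 1_000_000  # stand-in for a prompt built from ~500 files
if estimate_tokens(prompt) > input_budget:
    prompt = truncate_to_budget(prompt, input_budget)

print(estimate_tokens(prompt) <= input_budget)  # True
```

Blind truncation loses file content, of course; it just confirms whether the prompt, not the settings, is what blows past the 200k limit.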

This is how I'm running the pipeline.

python main.py --dir /tmp/test-repo --include "*.java" "*.kt" --exclude ".github/*" -s 50000

Thanks in advance.
