feat(anthropic): add Messages.create sync instrumentation #4034
base: main
Conversation
aabmass left a comment
A few suggestions, but looks pretty good, thanks
    GenAIAttributes.GEN_AI_REQUEST_MODEL: kwargs.get("model"),
    GenAIAttributes.GEN_AI_REQUEST_MAX_TOKENS: kwargs.get("max_tokens"),
    GenAIAttributes.GEN_AI_REQUEST_TEMPERATURE: kwargs.get("temperature"),
    GenAIAttributes.GEN_AI_REQUEST_TOP_P: kwargs.get("top_p"),
    GenAIAttributes.GEN_AI_REQUEST_TOP_K: kwargs.get("top_k"),
    GenAIAttributes.GEN_AI_REQUEST_STOP_SEQUENCES: kwargs.get(
        "stop_sequences"
    ),
For extracting parameters from the untyped kwargs dict, can you define a function with the method signature copied from the Anthropic code being instrumented (and add a link to that code in a comment) and call it with *args, **kwargs? IMO it helps the reader and lets the type checker know the expected types.
Example from Vertex:
Lines 100 to 102 in 4531513
    return GenerateContentParams(
        model=request.model,
        contents=request.contents,
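The pattern the reviewer describes can be sketched as follows. The helper name and the trimmed-down signature are hypothetical stand-ins for the upstream Messages.create signature, not the PR's actual code:

```python
from typing import Any, Optional


# Hypothetical sketch: mirror the (trimmed) upstream Messages.create
# signature so the type checker knows what the untyped **kwargs contain.
# In real code, link to the Anthropic source in a comment here.
def _extract_params(
    *,
    model: Optional[str] = None,
    max_tokens: Optional[int] = None,
    temperature: Optional[float] = None,
    top_p: Optional[float] = None,
    top_k: Optional[int] = None,
    stop_sequences: Optional[list] = None,
    **_ignored: Any,  # swallow parameters the instrumentation ignores
) -> dict:
    """Pick out the request parameters the span attributes need."""
    return {
        "model": model,
        "max_tokens": max_tokens,
        "temperature": temperature,
        "top_p": top_p,
        "top_k": top_k,
        "stop_sequences": stop_sequences,
    }


# Called with the wrapped function's *args/**kwargs:
params = _extract_params(model="claude-3-haiku", max_tokens=64, stream=False)
```

Because the signature is keyword-only, extra arguments such as `stream` fall through to `**_ignored` instead of raising.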
Awesome suggestion!! Added this snippet as we have in Vertex AI.
...opentelemetry-instrumentation-anthropic/src/opentelemetry/instrumentation/anthropic/utils.py
    span_attributes = {**get_llm_request_attributes(kwargs, instance)}

    span_name = f"{span_attributes[GenAIAttributes.GEN_AI_OPERATION_NAME]} {span_attributes[GenAIAttributes.GEN_AI_REQUEST_MODEL]}"
    with tracer.start_as_current_span(
        name=span_name,
        kind=SpanKind.CLIENT,
        attributes=span_attributes,
        end_on_exit=False,
    ) as span:
        try:
            result = wrapped(*args, **kwargs)

            if span.is_recording():
                _set_response_attributes(span, result)

            span.end()
            return result

        except Exception as error:
            handle_span_exception(span, error)
            raise
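A stdlib-only sketch of the control flow above: because the span is started with `end_on_exit=False`, the wrapper must end it explicitly on both the success and the error path. `FakeSpan` and `traced_call` are illustrative stand-ins, not the PR's code:

```python
class FakeSpan:
    """Stand-in for an OTel span; records attribute and end() calls."""

    def __init__(self):
        self.attributes = {}
        self.ended = False

    def set_attribute(self, key, value):
        self.attributes[key] = value

    def end(self):
        self.ended = True


def traced_call(span, wrapped, *args, **kwargs):
    # Success path: set response attributes, end the span, return result.
    # Error path: record the error type, end the span, re-raise.
    try:
        result = wrapped(*args, **kwargs)
        span.set_attribute("gen_ai.response.model", result["model"])
        span.end()
        return result
    except Exception as error:
        span.set_attribute("error.type", type(error).__qualname__)
        span.end()
        raise


span = FakeSpan()
out = traced_call(span, lambda **kw: {"model": kw["model"]}, model="claude-3")
```

Ending the span manually (rather than via the context manager) is what would later let a streaming variant keep the span open past the wrapped call's return.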
@keith-decker can TelemetryHandler cover all of the common code in this PR? This seems like a great PR to start integrating GenAI util with
@aabmass @keith-decker I ported the code to use TelemetryHandler. I see that it is emitting only span attributes. Do we need to implement a log event emitter in TelemetryHandler? I see we have a PR for emitting log events here. I am going to update this codebase to emit logs once that PR is merged.
- Replaced direct tracer and logger initialization with TelemetryHandler in AnthropicInstrumentor.
- Updated messages_create function to utilize TelemetryHandler for LLM invocation tracking.
- Removed unused span handling functions and attributes to streamline code.
- Enhanced attribute capturing for LLM requests and responses.

- Removed unused trace import from conftest.py and updated tracer_provider fixture to return the provider instead of yielding it.
- Enhanced test_instrument_with_no_providers to ensure proper instrument/uninstrument cycle while passing providers for a clean test environment.

- Improved readability by restructuring the assignment of request_model in the messages_create function.
- Ensured consistent handling of model attributes from both the attributes dictionary and kwargs.

…s function
- Updated the return type to include AttributeValue for better type safety.
- Expanded the docstring to detail the OpenTelemetry semantic conventions for LLM requests and server attributes.
- Included descriptions for GenAI attributes and server attributes to improve clarity and documentation.
Force-pushed from e011a91 to 643a282
Description
Implements OpenTelemetry instrumentation for Anthropic's Messages.create API (sync, non-streaming). This adds automatic tracing for Anthropic SDK calls, capturing GenAI semantic convention attributes for observability.

P.S.: LLM help was used for writing the code.
What's Implemented
- patch.py: Wrapper function for Messages.create that creates spans with request/response attributes
- utils.py: Helper functions for attribute extraction, error handling, and content capture configuration
- __init__.py: Wiring for patching/unpatching via wrapt

Semantic Convention Attributes Captured
Request:
- gen_ai.operation.name (chat)
- gen_ai.system (anthropic)
- gen_ai.request.model
- gen_ai.request.max_tokens
- gen_ai.request.temperature
- gen_ai.request.top_p
- gen_ai.request.top_k
- gen_ai.request.stop_sequences
- server.address

Response:
- gen_ai.response.id
- gen_ai.response.model
- gen_ai.response.finish_reasons
- gen_ai.usage.input_tokens
- gen_ai.usage.output_tokens

Error:
- error.type (on exceptions)

Fixes #3949
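As an illustration of how the request attributes listed above map from Messages.create kwargs into a span-attribute dict, a stdlib-only sketch (the helper name is hypothetical; the real code uses GenAIAttributes constants rather than string keys):

```python
# Illustrative only: derive span attributes from call kwargs, dropping
# None values, since OTel attribute values must be non-None.
def request_attributes(kwargs):
    mapping = {
        "gen_ai.operation.name": "chat",
        "gen_ai.system": "anthropic",
        "gen_ai.request.model": kwargs.get("model"),
        "gen_ai.request.max_tokens": kwargs.get("max_tokens"),
        "gen_ai.request.temperature": kwargs.get("temperature"),
        "gen_ai.request.top_p": kwargs.get("top_p"),
        "gen_ai.request.top_k": kwargs.get("top_k"),
        "gen_ai.request.stop_sequences": kwargs.get("stop_sequences"),
    }
    return {k: v for k, v in mapping.items() if v is not None}


attrs = request_attributes({"model": "claude-3", "max_tokens": 100})
```

Parameters the caller did not pass simply never appear on the span, rather than showing up as null attributes.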
Type of change
How Has This Been Tested?
Ran the full test suite with VCR cassettes for mocked API responses:
Test cases:
Results: 15 tests passed (8 new + 7 existing instrumentor tests)
Does This PR Require a Core Repo Change?
Checklist: