
Getting access to raw response object in a Guardrail? #32


Closed

Manouchehri opened this issue Jun 5, 2025 · 6 comments

@Manouchehri (Contributor)

Inside a Guardrail, is there a way to access the raw response object? For example, I am trying to read prompt_filter_results in my Guardrail. Example response object:

{
  "choices": [
    {
      "content_filter_results": {},
      "finish_reason": "tool_calls",
      "index": 0,
      "logprobs": null,
      "message": {
        "annotations": [],
        "content": null,
        "refusal": null,
        "role": "assistant",
        "tool_calls": [
          {
            "function": {
              "arguments": "{}",
              "name": "transfer_to_History_Tutor"
            },
            "id": "call_UsPG83icg4t0pZsf1uQWI97W",
            "type": "function"
          }
        ]
      }
    }
  ],
  "created": 1749158752,
  "id": "chatcmpl-BfCNsjM9bfc3k7Dj3vnGZlN6Zmp5V",
  "model": "gpt-4.1-nano-2025-04-14",
  "object": "chat.completion",
  "prompt_filter_results": [
    {
      "prompt_index": 0,
      "content_filter_results": {
        "hate": {
          "filtered": false,
          "severity": "low"
        },
        "jailbreak": {
          "filtered": false,
          "detected": true
        },
        "self_harm": {
          "filtered": false,
          "severity": "safe"
        },
        "sexual": {
          "filtered": false,
          "severity": "safe"
        },
        "violence": {
          "filtered": false,
          "severity": "safe"
        }
      }
    }
  ],
  "system_fingerprint": "fp_68472df8fd",
  "usage": {
    "completion_tokens": 15,
    "completion_tokens_details": {
      "accepted_prediction_tokens": 0,
      "audio_tokens": 0,
      "reasoning_tokens": 0,
      "rejected_prediction_tokens": 0
    },
    "prompt_tokens": 200,
    "prompt_tokens_details": {
      "audio_tokens": 0,
      "cached_tokens": 0
    },
    "total_tokens": 215
  }
}
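For context, the kind of check being attempted against that raw payload looks roughly like this (a sketch; RawChatCompletion and hasJailbreak are illustrative names, not SDK types):

```typescript
// Minimal shape of the Azure OpenAI prompt filter annotations shown above
// (only the fields used here are modeled).
type RawChatCompletion = {
  prompt_filter_results?: Array<{
    prompt_index: number;
    content_filter_results: Record<
      string,
      { filtered: boolean; detected?: boolean; severity?: string }
    >;
  }>;
};

// True if any prompt in the raw payload was flagged as a jailbreak attempt.
function hasJailbreak(raw: RawChatCompletion): boolean {
  return (raw.prompt_filter_results ?? []).some(
    (r) => r.content_filter_results.jailbreak?.detected === true,
  );
}
```

On the payload above, hasJailbreak would return true, since the jailbreak entry has detected: true.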
@Manouchehri added the question (Further information is requested) label on Jun 5, 2025
@dkundel-openai (Collaborator)

Hey @Manouchehri, not today, but I think it's a good feature request! We'll have to see how best to do it, since the Chat Completions response gets converted into a different format before it reaches any guardrail execution, so it's not as simple as exposing it directly.

@Manouchehri (Contributor, Author)

Hmm, is it possible to do this with a hook, adding the raw response to the context? (I tried that and failed.)

@dkundel-openai (Collaborator)

I think what we should do is move all the optional fields into providerData properties on the messages, and then expose the full ModelResponse to the output guardrail as a separate field.

github-actions (bot)

This issue is stale because it has been open for 7 days with no activity.

github-actions bot added the stale label on Jun 13, 2025
@Manouchehri (Contributor, Author)

Not stale. :P

@dkundel-openai (Collaborator)

This should be fixed in the 0.0.8 release today; the change was made in #104.

There is now an additional details object that contains the modelResponse, which in turn contains the providerData of the raw Chat Completions response. It's a bit convoluted, but you should be able to access it via details.modelResponse.providerData.prompt_filter_results.
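Based on that path, an output guardrail could pull out the filter results roughly like this (a sketch under the stated 0.0.8 behavior; the GuardrailDetails interface and getPromptFilterResults helper are illustrative, not the SDK's own types):

```typescript
// Minimal shape of the guardrail `details` argument described above.
// Only the fields needed here are modeled; the real SDK types are richer.
interface PromptFilterEntry {
  prompt_index: number;
  content_filter_results: Record<
    string,
    { filtered: boolean; detected?: boolean; severity?: string }
  >;
}

interface GuardrailDetails {
  modelResponse?: {
    providerData?: {
      prompt_filter_results?: PromptFilterEntry[];
    };
  };
}

// Walks details.modelResponse.providerData.prompt_filter_results and
// returns the entries, or an empty array if any link in the chain is absent.
function getPromptFilterResults(details: GuardrailDetails): PromptFilterEntry[] {
  return details.modelResponse?.providerData?.prompt_filter_results ?? [];
}
```

Inside the guardrail you could then trip on, say, any entry whose jailbreak annotation has detected set to true. The optional chaining matters: non-Azure providers won't populate prompt_filter_results at all.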
