
Ensure SDG can take advantage of vLLM's guided decoding #527

@bbrowning

Description


Is your feature request related to a problem? Please describe.
We have some upcoming work, such as annotations, that requires structured output from vLLM. Below is an example of the vLLM parameters we need to be able to pass from a pipeline into the vLLM backend to enable guided decoding. The key part is the guided_choice section, which must be passed through to vLLM to enable guided_choice decoding as documented at https://docs.vllm.ai/en/latest/features/structured_outputs.html

version: "1.0"
blocks:
  - name: gen_responses
    type: LLMBlock
    config:
      config_path: detailed_description_icl.yaml
      output_cols:
        - output
      gen_kwargs:
        max_tokens: 5
        temperature: 0
        extra_body:
          guided_choice:
            - "joy"
            - "sadness"
            - "anger"
            - "fear"
            - "love"

Describe the solution you'd like

Pipelines defined like the one above should cause vLLM to enable guided_choice decoding, coercing each response into one of the listed options.
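To sketch what "passing extra_body through" means in practice (illustrative only, not SDG's actual implementation; the model name and prompt below are placeholders): fields nested under extra_body need to end up at the top level of the JSON request body sent to vLLM's OpenAI-compatible endpoint, since that is where vLLM reads guided_choice from.

```python
import json

# Hypothetical sketch of the request-body merge, not SDG's real code:
# gen_kwargs from the pipeline YAML, with extra_body split out.
gen_kwargs = {"max_tokens": 5, "temperature": 0}
extra_body = {"guided_choice": ["joy", "sadness", "anger", "fear", "love"]}

payload = {
    "model": "my-model",  # placeholder model name
    "messages": [
        {"role": "user", "content": "What emotion does this text express?"}
    ],
    **gen_kwargs,
    # extra_body fields are merged into the top level of the request body,
    # which is where vLLM picks up guided_choice.
    **extra_body,
}
print(json.dumps(payload, indent=2))
```

With the official OpenAI Python client, the same effect comes from passing `extra_body={"guided_choice": [...]}` to `client.chat.completions.create(...)`; the client merges those fields into the outgoing request body verbatim, so no vLLM-specific client code is needed.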

Metadata


Assignees

No one assigned

Labels

enhancement (New feature or request)
