Support for GPT-4? #11

@benthamite

The package works fine with codegpt-model set to the default value ("text-davinci-003"). However, it fails when I set it to "gpt-4". OpenAI granted me access to the GPT-4 models yesterday, so I don't think this is an access problem; and since it also fails when I set codegpt-model to "gpt-3.5-turbo", I assume the issue is with the package itself. (See this page for a list of available models.)
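
For reference, this is the only configuration change involved (codegpt-model is the user option described above):

  ;; Works with the default completion model:
  (setq codegpt-model "text-davinci-003")

  ;; Fails with "Internal error: 404" (see the backtrace below);
  ;; "gpt-3.5-turbo" fails the same way:
  (setq codegpt-model "gpt-4")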

If GPT-4 is not currently supported, are there plans to support it soon? Thanks.

Backtrace:

Debugger entered--Lisp error: (error "Internal error: 404")
  error("Internal error: %s" 404)
  openai--handle-error(#s(request-response :status-code 404 :history nil :data ((error (message . "This is a chat model and not supported in the v1/completions endpoint. Did you mean to use v1/chat/completions?") (type . "invalid_request_error") (param . "model") (code))) :error-thrown (error http 404) :symbol-status error :url "https://api.openai.com/v1/completions" :done-p nil :settings (:error #<subr F616e6f6e796d6f75732d6c616d626461_anonymous_lambda_11> :type "POST" :headers (("Content-Type" . "application/json") ("Authorization" . "Bearer [API Key]")) :data "{\"model\":\"gpt-3.5-turbo\",\"prompt\":\"How are you?\\n\\n\\n\\n\",\"max_tokens\":4000,\"temperature\":1.0}" :parser json-read :success #f(compiled-function (&rest rest) #<bytecode -0x143b641711f5f9d9>) :url "https://api.openai.com/v1/completions" :response #1 :encoding utf-8) :-buffer #<killed buffer> :-raw-header "HTTP/2 404 \ndate: Sat, 18 Mar 2023 01:06:08 GMT\ncontent-type: application/json\ncontent-length: 227\naccess-control-allow-origin: *\nopenai-organization: [User ID]\nopenai-processing-ms: 103\nopenai-version: 2020-10-01\nstrict-transport-security: max-age=15724800; includeSubDomains\nx-ratelimit-limit-requests: 3500\nx-ratelimit-limit-tokens: 90000\nx-ratelimit-remaining-requests: 3499\nx-ratelimit-remaining-tokens: 85999\nx-ratelimit-reset-requests: 17ms\nx-ratelimit-reset-tokens: 2.666s\nx-request-id: add53e20bfda8e445412a38d0147869b\n" :-timer nil :-backend curl))
  #<subr F616e6f6e796d6f75732d6c616d626461_anonymous_lambda_11>(:data ((error (message . "This is a chat model and not supported in the v1/completions endpoint. Did you mean to use v1/chat/completions?") (type . "invalid_request_error") (param . "model") (code))) :symbol-status error :error-thrown (error http 404) :response #s(request-response :status-code 404 :history nil :data ((error (message . "This is a chat model and not supported in the v1/completions endpoint. Did you mean to use v1/chat/completions?") (type . "invalid_request_error") (param . "model") (code))) :error-thrown (error http 404) :symbol-status error :url "https://api.openai.com/v1/completions" :done-p nil :settings (:error #<subr F616e6f6e796d6f75732d6c616d626461_anonymous_lambda_11> :type "POST" :headers (("Content-Type" . "application/json") ("Authorization" . "Bearer [API Key]")) :data "{\"model\":\"gpt-3.5-turbo\",\"prompt\":\"How are you?\\n\\n\\n\\n\",\"max_tokens\":4000,\"temperature\":1.0}" :parser json-read :success #f(compiled-function (&rest rest) #<bytecode -0x143b641711f5f9d9>) :url "https://api.openai.com/v1/completions" :response #8 :encoding utf-8) :-buffer #<killed buffer> :-raw-header "HTTP/2 404 \ndate: Sat, 18 Mar 2023 01:06:08 GMT\ncontent-type: application/json\ncontent-length: 227\naccess-control-allow-origin: *\nopenai-organization: [User ID]\nopenai-processing-ms: 103\nopenai-version: 2020-10-01\nstrict-transport-security: max-age=15724800; includeSubDomains\nx-ratelimit-limit-requests: 3500\nx-ratelimit-limit-tokens: 90000\nx-ratelimit-remaining-requests: 3499\nx-ratelimit-remaining-tokens: 85999\nx-ratelimit-reset-requests: 17ms\nx-ratelimit-reset-tokens: 2.666s\nx-request-id: add53e20bfda8e445412a38d0147869b\n" :-timer nil :-backend curl))
  apply(#<subr F616e6f6e796d6f75732d6c616d626461_anonymous_lambda_11> (:data ((error (message . "This is a chat model and not supported in the v1/completions endpoint. Did you mean to use v1/chat/completions?") (type . "invalid_request_error") (param . "model") (code))) :symbol-status error :error-thrown (error http 404) :response #s(request-response :status-code 404 :history nil :data ((error (message . "This is a chat model and not supported in the v1/completions endpoint. Did you mean to use v1/chat/completions?") (type . "invalid_request_error") (param . "model") (code))) :error-thrown (error http 404) :symbol-status error :url "https://api.openai.com/v1/completions" :done-p nil :settings (:error #<subr F616e6f6e796d6f75732d6c616d626461_anonymous_lambda_11> :type "POST" :headers (("Content-Type" . "application/json") ("Authorization" . "Bearer [API Key]")) :data "{\"model\":\"gpt-3.5-turbo\",\"prompt\":\"How are you?\\n\\n\\n\\n\",\"max_tokens\":4000,\"temperature\":1.0}" :parser json-read :success #f(compiled-function (&rest rest) #<bytecode -0x143b641711f5f9d9>) :url "https://api.openai.com/v1/completions" :response #10 :encoding utf-8) :-buffer #<killed buffer> :-raw-header "HTTP/2 404 \ndate: Sat, 18 Mar 2023 01:06:08 GMT\ncontent-type: application/json\ncontent-length: 227\naccess-control-allow-origin: *\nopenai-organization: [User ID]\nopenai-processing-ms: 103\nopenai-version: 2020-10-01\nstrict-transport-security: max-age=15724800; includeSubDomains\nx-ratelimit-limit-requests: 3500\nx-ratelimit-limit-tokens: 90000\nx-ratelimit-remaining-requests: 3499\nx-ratelimit-remaining-tokens: 85999\nx-ratelimit-reset-requests: 17ms\nx-ratelimit-reset-tokens: 2.666s\nx-request-id: add53e20bfda8e445412a38d0147869b\n" :-timer nil :-backend curl)))
  request--callback(#<killed buffer> :error #<subr F616e6f6e796d6f75732d6c616d626461_anonymous_lambda_11> :type "POST" :headers (("Content-Type" . "application/json") ("Authorization" . "Bearer [API Key]")) :data "{\"model\":\"gpt-3.5-turbo\",\"prompt\":\"How are you?\\n\\n\\n\\n\",\"max_tokens\":4000,\"temperature\":1.0}" :parser json-read :success #f(compiled-function (&rest rest) #<bytecode -0x143b641711f5f9d9>) :url "https://api.openai.com/v1/completions" :response #s(request-response :status-code 404 :history nil :data ((error (message . "This is a chat model and not supported in the v1/completions endpoint. Did you mean to use v1/chat/completions?") (type . "invalid_request_error") (param . "model") (code))) :error-thrown (error http 404) :symbol-status error :url "https://api.openai.com/v1/completions" :done-p nil :settings (:error #<subr F616e6f6e796d6f75732d6c616d626461_anonymous_lambda_11> :type "POST" :headers (("Content-Type" . "application/json") ("Authorization" . "Bearer [API Key]")) :data "{\"model\":\"gpt-3.5-turbo\",\"prompt\":\"How are you?\\n\\n\\n\\n\",\"max_tokens\":4000,\"temperature\":1.0}" :parser json-read :success #f(compiled-function (&rest rest) #<bytecode -0x143b641711f5f9d9>) :url "https://api.openai.com/v1/completions" :response #17 :encoding utf-8) :-buffer #<killed buffer> :-raw-header "HTTP/2 404 \ndate: Sat, 18 Mar 2023 01:06:08 GMT\ncontent-type: application/json\ncontent-length: 227\naccess-control-allow-origin: *\nopenai-organization: [User ID]\nopenai-processing-ms: 103\nopenai-version: 2020-10-01\nstrict-transport-security: max-age=15724800; includeSubDomains\nx-ratelimit-limit-requests: 3500\nx-ratelimit-limit-tokens: 90000\nx-ratelimit-remaining-requests: 3499\nx-ratelimit-remaining-tokens: 85999\nx-ratelimit-reset-requests: 17ms\nx-ratelimit-reset-tokens: 2.666s\nx-request-id: add53e20bfda8e445412a38d0147869b\n" :-timer nil :-backend curl) :encoding utf-8)
  apply(request--callback #<killed buffer> (:error #<subr F616e6f6e796d6f75732d6c616d626461_anonymous_lambda_11> :type "POST" :headers (("Content-Type" . "application/json") ("Authorization" . "Bearer [API Key]")) :data "{\"model\":\"gpt-3.5-turbo\",\"prompt\":\"How are you?\\n\\n\\n\\n\",\"max_tokens\":4000,\"temperature\":1.0}" :parser json-read :success #f(compiled-function (&rest rest) #<bytecode -0x143b641711f5f9d9>) :url "https://api.openai.com/v1/completions" :response #s(request-response :status-code 404 :history nil :data ((error (message . "This is a chat model and not supported in the v1/completions endpoint. Did you mean to use v1/chat/completions?") (type . "invalid_request_error") (param . "model") (code))) :error-thrown (error http 404) :symbol-status error :url "https://api.openai.com/v1/completions" :done-p nil :settings #3 :-buffer #<killed buffer> :-raw-header "HTTP/2 404 \ndate: Sat, 18 Mar 2023 01:06:08 GMT\ncontent-type: application/json\ncontent-length: 227\naccess-control-allow-origin: *\nopenai-organization: [User ID]\nopenai-processing-ms: 103\nopenai-version: 2020-10-01\nstrict-transport-security: max-age=15724800; includeSubDomains\nx-ratelimit-limit-requests: 3500\nx-ratelimit-limit-tokens: 90000\nx-ratelimit-remaining-requests: 3499\nx-ratelimit-remaining-tokens: 85999\nx-ratelimit-reset-requests: 17ms\nx-ratelimit-reset-tokens: 2.666s\nx-request-id: add53e20bfda8e445412a38d0147869b\n" :-timer nil :-backend curl) :encoding utf-8))
  request--curl-callback("https://api.openai.com/v1/completions" #<process request curl> "finished\n")
  apply(request--curl-callback ("https://api.openai.com/v1/completions" #<process request curl> "finished\n"))
  #f(compiled-function (&rest args2) #<bytecode 0x18e23160535de8dd>)(#<process request curl> "finished\n")
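
The error in the response body points at the cause: chat models such as "gpt-3.5-turbo" and "gpt-4" are only served by the /v1/chat/completions endpoint, which takes a "messages" array rather than the "prompt" string the package currently posts to /v1/completions. Below is a minimal request.el sketch (the same library the package already uses) of the request shape those models expect, based on OpenAI's documented chat API rather than on this package's code; the Authorization header is a placeholder:

  ;; Hypothetical sketch, not code from this package: the same question sent
  ;; to the chat endpoint that "gpt-4" and "gpt-3.5-turbo" require.  Note the
  ;; "messages" array in place of the "prompt" string used by /v1/completions.
  (require 'request)
  (require 'json)
  (require 'cl-lib)

  (request "https://api.openai.com/v1/chat/completions"
    :type "POST"
    :headers '(("Content-Type" . "application/json")
               ("Authorization" . "Bearer [API Key]")) ; placeholder, as in the backtrace
    :data (json-encode
           '(("model" . "gpt-4")
             ("messages" . [(("role" . "user")
                             ("content" . "How are you?"))])))
    :parser 'json-read
    :success (cl-function
              (lambda (&key data &allow-other-keys)
                ;; The reply lives under choices[0].message.content
                ;; instead of choices[0].text.
                (let* ((choice (elt (alist-get 'choices data) 0))
                       (msg    (alist-get 'message choice)))
                  (message "%s" (alist-get 'content msg))))))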
