This repository was archived by the owner on Jul 4, 2025. It is now read-only.

enhancement: implement a CLI command/flag for most parameters in the .cortexrc #1897

@ramonpzg

Description

At the moment, Cortex has minimal support for editing its own configuration file via the CLI or HTTP API. This makes it challenging for developers who want to deploy Cortex on a VM in the cloud, or in any environment where the ultimate goal is to let Cortex talk to other tools via its server. The generated .cortexrc file currently contains the following parameters:

logFolderPath: /home/user/cortexcpp
logLlamaCppPath: ./logs/cortex.log
logTensorrtLLMPath: ./logs/cortex.log
logOnnxPath: ./logs/cortex.log
dataFolderPath: /home/user/cortexcpp
maxLogLines: 100000
apiServerHost: 127.0.0.1
apiServerPort: 39281
checkedForUpdateAt: 1740630061
checkedForLlamacppUpdateAt: 1740628158149
latestRelease: v1.0.10
latestLlamacppRelease: v0.1.49
huggingFaceToken: "<redacted>"
gitHubUserAgent: ""
gitHubToken: ""
llamacppVariant: linux-amd64-avx2-cuda-12-0
llamacppVersion: v0.1.49
enableCors: true
allowedOrigins:
  - http://localhost:39281
  - http://127.0.0.1:39281
  - http://0.0.0.0:39281
proxyUrl: ""
verifyProxySsl: true
verifyProxyHostSsl: true
proxyUsername: ""
proxyPassword: ""
noProxy: example.com,::1,localhost,127.0.0.1
verifyPeerSsl: true
verifyHostSsl: true
sslCertPath: ""
sslKeyPath: ""
supportedEngines:
  - llama-cpp
  - onnxruntime
  - tensorrt-llm
  - python-engine
  - python
checkedForSyncHubAt: 0
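Until such flags exist, the usual workaround is to rewrite .cortexrc on disk before launching the server. A minimal sketch of that workaround (this helper is hypothetical, not part of Cortex; it only handles flat top-level `key: value` entries, not lists like allowedOrigins):

```python
from pathlib import Path

def set_cortexrc_value(path: Path, key: str, value: str) -> None:
    """Rewrite a single top-level `key: value` entry in a .cortexrc file.

    Handles only flat scalar keys; list entries (e.g. allowedOrigins)
    are left untouched.
    """
    out = []
    found = False
    for line in path.read_text().splitlines():
        # Skip indented/list lines; match on the key before the first colon.
        if not line.startswith((" ", "-")) and line.split(":", 1)[0].strip() == key:
            out.append(f"{key}: {value}")
            found = True
        else:
            out.append(line)
    if not found:
        out.append(f"{key}: {value}")
    path.write_text("\n".join(out) + "\n")

# Example: bind the API server to all interfaces before `cortex start`.
# set_cortexrc_value(Path.home() / ".cortexrc", "apiServerHost", "0.0.0.0")
```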

To start a server, we currently only offer three options:

cortex start --port 7777 --loglevel DEBUG --help

Ideally, we would give users the full menu of options for starting the Cortex server with different configurations. For example:

  • logFolderPath --> --logspath </path/to/nirvana>
  • logLlamaCppPath --> --logsllama </path/to/llamaland>
  • logTensorrtLLMPath --> Needs to be removed 🪓
  • logOnnxPath --> --logsonnx </path/to/devsdevsdevs>
  • dataFolderPath --> --datapath </path/to/dataland>
  • maxLogLines --> --loglines <100000>
  • apiServerHost --> --host <0.0.0.0>
  • apiServerPort --> --port <7777> ✅
  • checkedForUpdateAt --> ... Not needed to start the server ☕
  • checkedForLlamacppUpdateAt --> ... Not needed to start the server ☕
  • latestRelease --> ... Not needed to start the server ☕
  • latestLlamacppRelease --> ... Not needed to start the server ☕
  • huggingFaceToken --> --hf-token <token>
  • gitHubUserAgent --> --gh-agent <that-thing>
  • gitHubToken --> --gh-token <that-token>
  • llamacppVariant --> ... Not needed to start the server ☕
  • llamacppVersion --> ... Not needed to start the server ☕
  • enableCors --> --cors 1 (1 = true, 0 = false)
  • allowedOrigins --> --origins <list-of-origins>
  • proxyUrl --> --proxy-url "https://hey.you"
  • verifyProxySsl --> --verify-proxy
  • verifyProxyHostSsl --> --verify-proxy-host
  • proxyUsername --> --proxy-username
  • proxyPassword --> --proxy-password
  • noProxy --> --no-proxy <example.com,::1,localhost,127.0.0.1>
  • verifyPeerSsl --> --verify-ssl-peer
  • verifyHostSsl --> --verify-ssl-host
  • sslCertPath --> --ssl-cert-path
  • sslKeyPath --> --ssl-key-path
  • supportedEngines --> ... Not needed to start the server ☕
  • checkedForSyncHubAt --> ... Not needed to start the server ☕

Starting the server would then look like:

cortex start --host "0.0.0.0" \
    --port 7777 \
    --hf-token "<some-token>" \
    --cors 1 \
    --logspath "/some/interesting/path" \
    ...
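To show the shape of the flag-to-key mapping, here is a rough prototype using Python's argparse. None of these flags exist in Cortex today; the subset of flags, defaults, and the `FLAG_TO_CONFIG_KEY` table are all taken from the proposal above and are illustrative only:

```python
import argparse

# Hypothetical mapping from proposed CLI flags to .cortexrc keys.
FLAG_TO_CONFIG_KEY = {
    "host": "apiServerHost",
    "port": "apiServerPort",
    "hf_token": "huggingFaceToken",
    "cors": "enableCors",
    "logspath": "logFolderPath",
}

def parse_start_args(argv):
    """Parse a subset of the proposed `cortex start` flags and return
    the corresponding .cortexrc-style overrides."""
    parser = argparse.ArgumentParser(prog="cortex start")
    parser.add_argument("--host", default="127.0.0.1")
    parser.add_argument("--port", type=int, default=39281)
    parser.add_argument("--hf-token", dest="hf_token", default="")
    parser.add_argument("--cors", type=int, choices=[0, 1], default=1)
    parser.add_argument("--logspath", default=None)
    args = parser.parse_args(argv)
    # Drop unset flags; translate the rest into config-file keys.
    return {FLAG_TO_CONFIG_KEY[k]: v
            for k, v in vars(args).items() if v is not None}
```

A server implementation would then merge these overrides on top of the values loaded from .cortexrc, so the CLI always wins over the file.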
