Releases: vernesong/mihomo

Mihomo Alpha with Smart Group

27 May 01:12


Pre-release

[Announcement] This branch adds the Smart policy group feature on top of the upstream code; any issues with Smart groups are unrelated to upstream!

Release created at Sun Oct 26 22:19:08 CST 2025
Synchronized with Alpha branch code updates; only the latest version is kept


Which file should I download?
Binary file selector
Docs

Smart Group usage guide

LightGBM Model

27 May 09:40


LightGBM Model Pre-release

Date: 2025-10-05

Usage:

# Enable automatic model updates (default: false)
lgbm-auto-update: true
# Model auto-update interval in hours (default: 72)
lgbm-update-interval: 72
# Model download URL
lgbm-url: "https://github.com/vernesong/mihomo/releases/download/LightGBM-Model/Model.bin"

profile:
  # smart-collector-size: data collection file size limit in MB (default: 100)
  smart-collector-size: 100

proxy-groups:
- name: Smart Group
  type: smart
  # policy-priority: values <1 lower a node's priority, >1 raise it (default: 1); patterns support plain strings and regex
  policy-priority: "Premium:0.9;SG:1.3"
  # uselightgbm: use the LightGBM model to predict weights
  uselightgbm: false
  # collectdata: collect data for model training
  collectdata: false
  # strategy: 'sticky-sessions' or 'round-robin'; 'consistent-hashing' is not supported
  # the default, 'sticky-sessions', gives more stable and smooth node switching while data is being collected
  strategy: sticky-sessions
  # sample-rate: data sampling rate, valid values 0-1 (default: 1)
  sample-rate: 1
  # prefer-asn: look up ASN first and prefer it when selecting nodes (default: false)
  prefer-asn: true
  ...
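The policy-priority string above packs several pattern:factor pairs into one value, where each pattern may be a plain substring or a regex and the factor scales the matched node's priority. As a rough illustration only (this helper is hypothetical, not mihomo's actual parser):

```python
import re

def parse_policy_priority(spec: str) -> list[tuple[str, float]]:
    """Split a 'pattern:factor;pattern:factor' spec into (pattern, factor) pairs."""
    pairs = []
    for item in spec.split(";"):
        if not item.strip():
            continue
        # rpartition keeps any ':' inside the pattern itself intact
        pattern, _, factor = item.rpartition(":")
        pairs.append((pattern, float(factor)))
    return pairs

def priority_factor(node_name: str, pairs: list[tuple[str, float]]) -> float:
    """Return the factor of the first pattern matching the node name, else 1.0."""
    for pattern, factor in pairs:
        # Try plain substring match first, then fall back to regex
        if pattern in node_name or re.search(pattern, node_name):
            return factor
    return 1.0  # default priority

pairs = parse_policy_priority("Premium:0.9;SG:1.3")
print(priority_factor("SG-01", pairs))       # 1.3
print(priority_factor("US-Premium", pairs))  # 0.9
print(priority_factor("JP-02", pairs))       # 1.0
```

With the example spec, a node named "SG-01" would be boosted (1.3) and any "Premium" node demoted (0.9); everything else keeps the default factor of 1.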
  • LightGBM weight prediction (option: uselightgbm: true) requires a Model.bin file in the home directory (.config/mihomo/Model.bin), or /etc/openclash/Model.bin when used with OpenClash
  • Data collection to support training your own weight-prediction models (option: collectdata: true); data is written to .config/mihomo/smart_weight_data.csv in the home directory, or /etc/openclash/smart_weight_data.csv with OpenClash
  • If you train the model yourself, you may use feature transforms, but you must train with LightGBM version 3.3.5

API:

# Show proxy weights
curl -H 'Authorization: Bearer ${secret}' -X GET http://${controller-api}/group/${groupname}/weights
curl -H 'Authorization: Bearer ${secret}' -X GET http://${controller-api}/group/weights

# Flush cache data
curl -H 'Authorization: Bearer ${secret}' -X POST http://${controller-api}/cache/smart/flush
curl -H 'Authorization: Bearer ${secret}' -X POST http://${controller-api}/cache/smart/flush/${configname}

# Block (degrade) a connection's current node, forcing selection of another best node
curl -H 'Authorization: Bearer ${secret}' -X DELETE http://${controller-api}/connections/smart/${id}

# LightGBM model upgrade
curl -H 'Authorization: Bearer ${secret}' -X POST http://${controller-api}/upgrade/lgbm
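The curl commands above can also be issued from any HTTP client. A minimal Python sketch using only the standard library, assuming the same controller address and secret as placeholders (nothing is sent over the network until `urlopen` is called):

```python
import json
import urllib.request

def api_request(controller: str, secret: str, path: str,
                method: str = "GET") -> urllib.request.Request:
    """Build an authenticated request for the mihomo external controller API."""
    return urllib.request.Request(
        f"http://{controller}{path}",
        method=method,
        headers={"Authorization": f"Bearer {secret}"},
    )

# Show weights for one group (GET), flush the smart cache (POST):
req = api_request("127.0.0.1:9090", "secret", "/group/Smart%20Group/weights")
flush = api_request("127.0.0.1:9090", "secret", "/cache/smart/flush", method="POST")

print(req.full_url)                     # http://127.0.0.1:9090/group/Smart%20Group/weights
print(req.get_header("Authorization"))  # Bearer secret
print(flush.get_method())               # POST

# To actually send a request and decode the JSON body:
# with urllib.request.urlopen(req) as resp:
#     weights = json.load(resp)
```

Note that the group name must be URL-encoded (here `Smart%20Group` for "Smart Group"), exactly as it would be in the curl form.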

Prediction performance (Large)

RMSE: 0.080149 MAE: 0.040632 R²: 0.934709

train_time=2025-10-05 17:35:42
rmse=0.080149
mae=0.040632
r2=0.934709
objective=regression; metric=rmse; verbosity=-1; boosting_type=gbdt
num_leaves=192; learning_rate=0.01; max_depth=12; max_bin=127
num_boost_round=1800; early_stopping_rounds=200; min_child_samples=2928
bagging_fraction=0.6274629722391256; feature_fraction=0.629643701208391
lambda_l1=0.28786675811749307; lambda_l2=3.476332518783713; min_split_gain=0.5963275240701651
bagging_freq=8; device=gpu; num_threads=2; gpu_use_dp=False; gpu_platform_id=0; gpu_device_id=0
data_shape=(2108278, 27)
weight_min=0.174000
weight_max=1.752602
weight_mean=0.812941
weight_std=0.313670
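For anyone retraining the model from collected data, the hyperparameters in the training log above translate directly into a LightGBM params dict. A sketch of that mapping follows; the training call itself is shown only as a comment because it requires the lightgbm package (version 3.3.5, per the notes above) and a prepared dataset from smart_weight_data.csv:

```python
# Hyperparameters copied from the training log above (GPU/thread settings omitted).
params = {
    "objective": "regression",
    "metric": "rmse",
    "verbosity": -1,
    "boosting_type": "gbdt",
    "num_leaves": 192,
    "learning_rate": 0.01,
    "max_depth": 12,
    "max_bin": 127,
    "min_child_samples": 2928,
    "bagging_fraction": 0.6274629722391256,
    "feature_fraction": 0.629643701208391,
    "lambda_l1": 0.28786675811749307,
    "lambda_l2": 3.476332518783713,
    "min_split_gain": 0.5963275240701651,
    "bagging_freq": 8,
}

# With lightgbm 3.3.5 installed, training would look roughly like:
# import lightgbm as lgb
# train_set = lgb.Dataset(features, label=weights)  # built from smart_weight_data.csv
# model = lgb.train(params, train_set,
#                   num_boost_round=1800,
#                   valid_sets=[valid_set],
#                   early_stopping_rounds=200)
# model.save_model("Model.bin")

print(len(params))  # 15 tunable parameters
```

Note that num_boost_round and early_stopping_rounds are passed to the train call rather than the params dict, matching how they appear in the log.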

(Attached plots: feature_importance, error_by_weight_range, residuals, actual_vs_predicted)