Description
```
$ python3 -m convert_llama_ckpt --base-model-path /llama2-7b-hf/ --pax-model-path pax_7B/ --model-size 7b
Loading the base model from /llama2-7b-hf/
Traceback (most recent call last):
  File "/opt/conda/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/opt/conda/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/home/shivajid/convert_llama_ckpt.py", line 210, in <module>
    convert(args.base_model_path, args.pax_model_path, args.model_size)
  File "/home/shivajid/convert_llama_ckpt.py", line 96, in convert
    'emb_var': np.concatenate([var['tok_embeddings.weight'].type(torch.float16).numpy() for var in pytorch_vars], axis=1)[:vocab,:]
ValueError: need at least one array to concatenate
```
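For reference, this `ValueError` is raised whenever `np.concatenate` receives an empty sequence — i.e. `pytorch_vars` ended up empty, presumably because no original-format Llama checkpoint shards were found in the directory. A minimal sketch reproducing the failure (the empty `pytorch_vars` is hypothetical, standing in for no shards being loaded):

```python
import numpy as np

# Hypothetical: no checkpoint shards were loaded, so the list
# comprehension over pytorch_vars produces an empty list.
pytorch_vars = []
try:
    np.concatenate(
        [var["tok_embeddings.weight"] for var in pytorch_vars], axis=1
    )
except ValueError as e:
    print(e)  # need at least one array to concatenate
```

Note that the path above (`/llama2-7b-hf/`) suggests Hugging Face-format weights, while a converter indexing `tok_embeddings.weight` expects the original Meta checkpoint layout — that mismatch would leave `pytorch_vars` empty.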
Can you please help? I am pointing `--base-model-path` at the Llama 2 weights.