Non-ASCII characters in prompt crash the tokenizer  #382

Closed

@stduhpf

Description

Using any non-ASCII character in the prompt always causes the program to crash during prompt processing. The crash seems to be happening during this function call:

std::vector<int> curr_tokens = tokenizer.encode(curr_text, on_new_token_cb);

> ./build/bin/sd -m ../ComfyUI/models/checkpoints/dreamshaper_8LCM.q8_0.gguf --cfg-scale 1 --steps 8 --sampling-method lcm --seed 42 -p "Un petit chaton très mignon" -v
Option: 
    n_threads:         12
    mode:              txt2img
    model_path:        ../ComfyUI/models/checkpoints/dreamshaper_8LCM.q8_0.gguf
    wtype:             unspecified
    clip_l_path:
    t5xxl_path:
    diffusion_model_path:
    vae_path:
    taesd_path:
    esrgan_path:
    controlnet_path:
    embeddings_path:
    stacked_id_embeddings_path:
    input_id_images_path:
    style ratio:       20.00
    normzalize input image :  false
    output_path:       output.png
    init_img:
    control_image:
    clip on cpu:       false
    controlnet cpu:    false
    vae decoder on cpu:false
    strength(control): 0.90
    prompt:            Un petit chaton très mignon
    negative_prompt:
    min_cfg:           1.00
    cfg_scale:         1.00
    guidance:          3.50
    clip_skip:         -1
    width:             512
    height:            512
    sample_method:     lcm
    schedule:          default
    sample_steps:      8
    strength(img2img): 0.75
    rng:               cuda
    seed:              42
    batch_count:       1
    vae_tiling:        false
    upscale_repeats:   1
System Info:
    BLAS = 0
    SSE3 = 1
    AVX = 1
    AVX2 = 1
    AVX512 = 0
    AVX512_VBMI = 0
    AVX512_VNNI = 0
    FMA = 1
    NEON = 0
    ARM_FMA = 0
    F16C = 1
    FP16_VA = 0
    WASM_SIMD = 0
    VSX = 0
[DEBUG] stable-diffusion.cpp:180  - Using CPU backend
[INFO ] stable-diffusion.cpp:195  - loading model from '../ComfyUI/models/checkpoints/dreamshaper_8LCM.q8_0.gguf'
[INFO ] model.cpp:790  - load ../ComfyUI/models/checkpoints/dreamshaper_8LCM.q8_0.gguf using gguf format
[DEBUG] model.cpp:807  - init from '../ComfyUI/models/checkpoints/dreamshaper_8LCM.q8_0.gguf'
WARNING: Behavior may be unexpected when allocating 0 bytes for ggml_calloc!
[INFO ] stable-diffusion.cpp:235  - Version: SD 1.x
[INFO ] stable-diffusion.cpp:266  - Weight type:                 q8_0
[INFO ] stable-diffusion.cpp:267  - Conditioner weight type:     q8_0
[INFO ] stable-diffusion.cpp:268  - Diffsuion model weight type: q8_0
[INFO ] stable-diffusion.cpp:269  - VAE weight type:             q8_0
[DEBUG] stable-diffusion.cpp:271  - ggml tensor size = 400 bytes
[DEBUG] clip.hpp:171  - vocab size: 49408
[DEBUG] clip.hpp:182  -  trigger word img already in vocab
[DEBUG] ggml_extend.hpp:1045 - clip params backend buffer size =  125.20 MB(RAM) (196 tensors)
[DEBUG] ggml_extend.hpp:1045 - unet params backend buffer size =  1398.81 MB(RAM) (686 tensors)
[DEBUG] ggml_extend.hpp:1045 - vae params backend buffer size =  94.47 MB(RAM) (140 tensors)
[DEBUG] stable-diffusion.cpp:398  - loading weights
[DEBUG] model.cpp:1520 - loading tensors from ../ComfyUI/models/checkpoints/dreamshaper_8LCM.q8_0.gguf
[INFO ] model.cpp:1675 - unknown tensor 'cond_stage_model.logit_scale | f16 | 1 [1, 1, 1, 1, 1]' in model file
[INFO ] model.cpp:1675 - unknown tensor 'cond_stage_model.text_projection | q8_0 | 2 [768, 768, 1, 1, 1]' in model file
[INFO ] stable-diffusion.cpp:482  - total params memory size = 1618.48MB (VRAM 0.00MB, RAM 1618.48MB): clip 125.20MB(RAM), unet 1398.81MB(RAM), vae 94.47MB(RAM), controlnet 0.00MB(VRAM), pmid 0.00MB(RAM)
[INFO ] stable-diffusion.cpp:501  - loading model from '../ComfyUI/models/checkpoints/dreamshaper_8LCM.q8_0.gguf' completed, taking 4.04s
[INFO ] stable-diffusion.cpp:528  - running in eps-prediction mode
[DEBUG] stable-diffusion.cpp:563  - finished loaded file
[DEBUG] stable-diffusion.cpp:1369 - txt2img 512x512
[DEBUG] stable-diffusion.cpp:1118 - prompt after extract and remove lora: "Un petit chaton très mignon"     
[INFO ] stable-diffusion.cpp:646  - Attempting to apply 0 LoRAs
[INFO ] stable-diffusion.cpp:1123 - apply_loras completed, taking 0.00s
[DEBUG] conditioner.hpp:325  - parse 'Un petit chaton très mignon' to [['Un petit chaton très mignon', 1], ]
Segmentation fault
