<details>
<summary>Loading a Local Image</summary>

Images can be passed as base64 encoded data URIs. The following example demonstrates how to do this.

```python
import base64

def image_to_base64_data_uri(file_path):
    with open(file_path, "rb") as img_file:
        base64_data = base64.b64encode(img_file.read()).decode('utf-8')
        return f"data:image/png;base64,{base64_data}"

# Replace 'file_path.png' with the actual path to your PNG file
file_path = 'file_path.png'
data_uri = image_to_base64_data_uri(file_path)

messages = [
    {"role": "system", "content": "You are an assistant who perfectly describes images."},
    {
        "role": "user",
        "content": [
            {"type": "image_url", "image_url": {"url": data_uri}},
            {"type": "text", "text": "Describe this image in detail please."}
        ]
    }
]
```

</details>
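As a quick sanity check (a sketch, not part of the library's API), you can verify that the helper produces a well-formed data URI that round-trips back to the original bytes. The snippet below writes a few placeholder PNG header bytes to a temporary file and checks the prefix and payload:

```python
import base64
import os
import tempfile

def image_to_base64_data_uri(file_path):
    # Same helper as above: read the file and wrap it in a PNG data URI
    with open(file_path, "rb") as img_file:
        base64_data = base64.b64encode(img_file.read()).decode('utf-8')
        return f"data:image/png;base64,{base64_data}"

# Placeholder bytes (the PNG file signature), just for the round-trip check
png_bytes = b"\x89PNG\r\n\x1a\n"
with tempfile.NamedTemporaryFile(suffix=".png", delete=False) as tmp:
    tmp.write(png_bytes)
    path = tmp.name

data_uri = image_to_base64_data_uri(path)
os.unlink(path)

assert data_uri.startswith("data:image/png;base64,")
# The base64 payload decodes back to the original bytes
assert base64.b64decode(data_uri.split(",", 1)[1]) == png_bytes
```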
### Speculative Decoding

`llama-cpp-python` supports speculative decoding, which allows the model to generate completions based on a draft model.
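A minimal sketch of enabling this (assuming the `LlamaPromptLookupDecoding` draft model from `llama_cpp.llama_speculative` and a placeholder model path; it requires a local GGUF model file to actually run, so treat it as a configuration example):

```python
from llama_cpp import Llama
from llama_cpp.llama_speculative import LlamaPromptLookupDecoding

# Prompt-lookup decoding drafts candidate tokens from the prompt itself,
# so no second model file is needed for the draft model.
llama = Llama(
    model_path="path/to/model.gguf",  # placeholder: path to your GGUF model
    draft_model=LlamaPromptLookupDecoding(num_pred_tokens=10),  # tokens drafted per step
)
```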