Commit b3e358d

docs: Add example of local image loading to README
1 parent afe1e44 commit b3e358d

File tree: 1 file changed (+32, -0 lines)


README.md

Lines changed: 32 additions & 0 deletions
````diff
@@ -468,6 +468,38 @@ Then you'll need to use a custom chat handler to load the clip model and process
 )
 ```

+<details>
+<summary>Loading a Local Image</summary>
+
+Images can be passed as base64 encoded data URIs. The following example demonstrates how to do this.
+
+```python
+import base64
+
+def image_to_base64_data_uri(file_path):
+    with open(file_path, "rb") as img_file:
+        base64_data = base64.b64encode(img_file.read()).decode('utf-8')
+        return f"data:image/png;base64,{base64_data}"
+
+# Replace 'file_path.png' with the actual path to your PNG file
+file_path = 'file_path.png'
+data_uri = image_to_base64_data_uri(file_path)
+
+messages = [
+    {"role": "system", "content": "You are an assistant who perfectly describes images."},
+    {
+        "role": "user",
+        "content": [
+            {"type": "image_url", "image_url": {"url": data_uri }},
+            {"type" : "text", "text": "Describe this image in detail please."}
+        ]
+    }
+]
+
+```
+
+</details>
+
 ### Speculative Decoding

 `llama-cpp-python` supports speculative decoding which allows the model to generate completions based on a draft model.
````
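For readers trying out the new section, here is a minimal, hypothetical sketch of how the data URI built in the added example would be passed to the model, assuming the `Llava15ChatHandler` multimodal setup referenced in the hunk header; the model, CLIP, and image paths below are placeholders and are not part of this commit:

```python
# Sketch only: assumes the LLaVA multimodal setup from the surrounding README
# section; model paths and the image path are placeholders.
import base64

from llama_cpp import Llama
from llama_cpp.llama_chat_format import Llava15ChatHandler


def image_to_base64_data_uri(file_path):
    # Same helper as in the diff above: read the image and wrap it in a data URI.
    with open(file_path, "rb") as img_file:
        base64_data = base64.b64encode(img_file.read()).decode('utf-8')
        return f"data:image/png;base64,{base64_data}"


data_uri = image_to_base64_data_uri("file_path.png")  # placeholder path

chat_handler = Llava15ChatHandler(clip_model_path="path/to/llava/mmproj.bin")
llm = Llama(
    model_path="path/to/llava/llama-model.gguf",
    chat_handler=chat_handler,
    n_ctx=2048,       # larger context window to fit the image embedding
    logits_all=True,  # the README's multimodal example enables this for the LLaVA handler
)

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are an assistant who perfectly describes images."},
        {
            "role": "user",
            "content": [
                {"type": "image_url", "image_url": {"url": data_uri}},
                {"type": "text", "text": "Describe this image in detail please."},
            ],
        },
    ]
)
print(response["choices"][0]["message"]["content"])
```

This mirrors the existing multimodal example earlier in the README, with the remote image URL replaced by the locally built data URI.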

0 commit comments
