# Informers

🔥 Fast transformer inference for Ruby
For non-ONNX models, check out Transformers.rb 🙂
## Installation

Add this line to your application’s Gemfile:

```ruby
gem "informers"
```

## Embedding
- sentence-transformers/all-MiniLM-L6-v2
- sentence-transformers/multi-qa-MiniLM-L6-cos-v1
- sentence-transformers/all-mpnet-base-v2
- sentence-transformers/paraphrase-MiniLM-L6-v2
- mixedbread-ai/mxbai-embed-large-v1
- Supabase/gte-small
- intfloat/e5-base-v2
- nomic-ai/nomic-embed-text-v1
- BAAI/bge-base-en-v1.5
- jinaai/jina-embeddings-v2-base-en
- Snowflake/snowflake-arctic-embed-m-v1.5
## Reranking
- mixedbread-ai/mxbai-rerank-base-v1
- jinaai/jina-reranker-v1-turbo-en
- BAAI/bge-reranker-base
- Xenova/ms-marco-MiniLM-L-6-v2
### sentence-transformers/all-MiniLM-L6-v2

```ruby
sentences = ["This is an example sentence", "Each sentence is converted"]
model = Informers.pipeline("embedding", "sentence-transformers/all-MiniLM-L6-v2")
embeddings = model.(sentences)
```

### sentence-transformers/multi-qa-MiniLM-L6-cos-v1

```ruby
query = "How many people live in London?"
docs = ["Around 9 Million people live in London", "London is known for its financial district"]
model = Informers.pipeline("embedding", "sentence-transformers/multi-qa-MiniLM-L6-cos-v1")
query_embedding = model.(query)
doc_embeddings = model.(docs)
scores = doc_embeddings.map { |e| e.zip(query_embedding).sum { |d, q| d * q } }
doc_score_pairs = docs.zip(scores).sort_by { |d, s| -s }
```

Since this model produces normalized embeddings, the dot product above is equivalent to cosine similarity.

### sentence-transformers/all-mpnet-base-v2

```ruby
sentences = ["This is an example sentence", "Each sentence is converted"]
model = Informers.pipeline("embedding", "sentence-transformers/all-mpnet-base-v2")
embeddings = model.(sentences)
```

### sentence-transformers/paraphrase-MiniLM-L6-v2

```ruby
sentences = ["This is an example sentence", "Each sentence is converted"]
model = Informers.pipeline("embedding", "sentence-transformers/paraphrase-MiniLM-L6-v2")
embeddings = model.(sentences, normalize: false)
```

### mixedbread-ai/mxbai-embed-large-v1

```ruby
query_prefix = "Represent this sentence for searching relevant passages: "
input = [
  "The dog is barking",
  "The cat is purring",
  query_prefix + "puppy"
]
model = Informers.pipeline("embedding", "mixedbread-ai/mxbai-embed-large-v1")
embeddings = model.(input)
```

### Supabase/gte-small

```ruby
sentences = ["That is a happy person", "That is a very happy person"]
model = Informers.pipeline("embedding", "Supabase/gte-small")
embeddings = model.(sentences)
```

### intfloat/e5-base-v2

```ruby
doc_prefix = "passage: "
query_prefix = "query: "
input = [
  doc_prefix + "Ruby is a programming language created by Matz",
  query_prefix + "Ruby creator"
]
model = Informers.pipeline("embedding", "intfloat/e5-base-v2")
embeddings = model.(input)
```

### nomic-ai/nomic-embed-text-v1

```ruby
doc_prefix = "search_document: "
query_prefix = "search_query: "
input = [
  doc_prefix + "The dog is barking",
  doc_prefix + "The cat is purring",
  query_prefix + "puppy"
]
model = Informers.pipeline("embedding", "nomic-ai/nomic-embed-text-v1")
embeddings = model.(input)
```

### BAAI/bge-base-en-v1.5

```ruby
query_prefix = "Represent this sentence for searching relevant passages: "
input = [
  "The dog is barking",
  "The cat is purring",
  query_prefix + "puppy"
]
model = Informers.pipeline("embedding", "BAAI/bge-base-en-v1.5")
embeddings = model.(input)
```

### jinaai/jina-embeddings-v2-base-en

```ruby
sentences = ["How is the weather today?", "What is the current weather like today?"]
model = Informers.pipeline("embedding", "jinaai/jina-embeddings-v2-base-en", model_file_name: "../model")
embeddings = model.(sentences)
```

### Snowflake/snowflake-arctic-embed-m-v1.5

```ruby
query_prefix = "Represent this sentence for searching relevant passages: "
input = [
  "The dog is barking",
  "The cat is purring",
  query_prefix + "puppy"
]
model = Informers.pipeline("embedding", "Snowflake/snowflake-arctic-embed-m-v1.5")
embeddings = model.(input, model_output: "sentence_embedding", pooling: "none")
```

### mixedbread-ai/mxbai-rerank-base-v1

```ruby
query = "How many people live in London?"
docs = ["Around 9 Million people live in London", "London is known for its financial district"]
model = Informers.pipeline("reranking", "mixedbread-ai/mxbai-rerank-base-v1")
result = model.(query, docs)
```

### jinaai/jina-reranker-v1-turbo-en

```ruby
query = "How many people live in London?"
docs = ["Around 9 Million people live in London", "London is known for its financial district"]
model = Informers.pipeline("reranking", "jinaai/jina-reranker-v1-turbo-en")
result = model.(query, docs)
```

### BAAI/bge-reranker-base

```ruby
query = "How many people live in London?"
docs = ["Around 9 Million people live in London", "London is known for its financial district"]
model = Informers.pipeline("reranking", "BAAI/bge-reranker-base")
result = model.(query, docs)
```

### Xenova/ms-marco-MiniLM-L-6-v2

```ruby
query = "How many people live in London?"
docs = ["Around 9 Million people live in London", "London is known for its financial district"]
model = Informers.pipeline("reranking", "Xenova/ms-marco-MiniLM-L-6-v2")
result = model.(query, docs)
```

### Other

The model must include a `.onnx` file (example). If the file is not at `onnx/model.onnx`, use the `model_file_name` option to specify the location.
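As a sketch, here is what that might look like for a model whose weights live outside the default directory. The repository name and file location below are hypothetical; judging from the jinaai example above, `model_file_name` appears to be resolved relative to the `onnx/` directory and given without the `.onnx` extension.

```ruby
# Hypothetical repo storing its weights at weights/model.onnx
# instead of the default onnx/model.onnx
model = Informers.pipeline("embedding", "your-org/your-model", model_file_name: "../weights/model")
```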
## Pipelines

### Embedding

```ruby
embed = Informers.pipeline("embedding")
embed.("We are very happy to show you the 🤗 Transformers library.")
```

### Reranking

```ruby
rerank = Informers.pipeline("reranking")
rerank.("Who created Ruby?", ["Matz created Ruby", "Another doc"])
```

### Named-entity recognition

```ruby
ner = Informers.pipeline("ner")
ner.("Ruby is a programming language created by Matz")
```

### Sentiment analysis

```ruby
classifier = Informers.pipeline("sentiment-analysis")
classifier.("We are very happy to show you the 🤗 Transformers library.")
```

### Question answering

```ruby
qa = Informers.pipeline("question-answering")
qa.("Who invented Ruby?", "Ruby is a programming language created by Matz")
```

### Zero-shot classification

```ruby
classifier = Informers.pipeline("zero-shot-classification")
classifier.("text", ["label1", "label2", "label3"])
```

### Text generation

```ruby
generator = Informers.pipeline("text-generation")
generator.("I enjoy walking with my cute dog,")
```

### Text-to-text generation

```ruby
text2text = Informers.pipeline("text2text-generation")
text2text.("translate from English to French: I'm very happy")
```

### Translation

```ruby
translator = Informers.pipeline("translation", "Xenova/nllb-200-distilled-600M")
translator.("जीवन एक चॉकलेट बॉक्स की तरह है।", src_lang: "hin_Deva", tgt_lang: "fra_Latn")
```

### Summarization

```ruby
summarizer = Informers.pipeline("summarization")
summarizer.("Many paragraphs of text")
```

### Fill mask

```ruby
unmasker = Informers.pipeline("fill-mask")
unmasker.("Paris is the [MASK] of France.")
```

### Feature extraction

```ruby
extractor = Informers.pipeline("feature-extraction")
extractor.("We are very happy to show you the 🤗 Transformers library.")
```

Note: ruby-vips is required to load images.
### Image classification

```ruby
classifier = Informers.pipeline("image-classification")
classifier.("image.jpg")
```

### Zero-shot image classification

```ruby
classifier = Informers.pipeline("zero-shot-image-classification")
classifier.("image.jpg", ["label1", "label2", "label3"])
```

### Image segmentation

```ruby
segmenter = Informers.pipeline("image-segmentation")
segmenter.("image.jpg")
```

### Object detection

```ruby
detector = Informers.pipeline("object-detection")
detector.("image.jpg")
```

### Zero-shot object detection

```ruby
detector = Informers.pipeline("zero-shot-object-detection")
detector.("image.jpg", ["label1", "label2", "label3"])
```

### Depth estimation

```ruby
estimator = Informers.pipeline("depth-estimation")
estimator.("image.jpg")
```

### Image-to-image

```ruby
upscaler = Informers.pipeline("image-to-image")
upscaler.("image.jpg")
```

### Image feature extraction

```ruby
extractor = Informers.pipeline("image-feature-extraction")
extractor.("image.jpg")
```

Note: ffmpeg is required to load audio files.
### Audio classification

```ruby
classifier = Informers.pipeline("audio-classification")
classifier.("audio.wav")
```

### Image captioning

```ruby
captioner = Informers.pipeline("image-to-text")
captioner.("image.jpg")
```

### Document question answering

```ruby
qa = Informers.pipeline("document-question-answering")
qa.("image.jpg", "What is the invoice number?")
```

Specify a variant of the model if available (`fp32`, `fp16`, `int8`, `uint8`, `q8`, `q4`, `q4f16`, or `bnb4`):

```ruby
Informers.pipeline("embedding", "Xenova/all-MiniLM-L6-v2", dtype: "fp16")
```

Specify a device (`cpu`, `cuda`, or `coreml`):

```ruby
Informers.pipeline("embedding", device: "cuda")
```

Note: Follow these instructions for `cuda`.
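These pipeline options can be combined as keyword arguments; as a sketch, assuming ONNX Runtime's CUDA provider is set up and the model offers an fp16 variant:

```ruby
Informers.pipeline("embedding", "Xenova/all-MiniLM-L6-v2", dtype: "fp16", device: "cuda")
```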
Specify ONNX Runtime session options:

```ruby
Informers.pipeline("embedding", session_options: {log_severity_level: 2})
```

## Credits

This library was ported from Transformers.js and is available under the same license.
## Upgrading

Task classes have been replaced with the `pipeline` method.

```ruby
# before
model = Informers::SentimentAnalysis.new("sentiment-analysis.onnx")
model.predict("This is super cool")
# after
model = Informers.pipeline("sentiment-analysis")
model.("This is super cool")
```

## History

View the changelog.
## Contributing

Everyone is encouraged to help improve this project. Here are a few ways you can help:
- Report bugs
- Fix bugs and submit pull requests
- Write, clarify, or fix documentation
- Suggest or add new features
To get started with development:

```sh
git clone https://github.com/ankane/informers.git
cd informers
bundle install
bundle exec rake download:files
bundle exec rake test
```