README.md: 3 additions & 3 deletions
@@ -450,7 +450,7 @@ The simplest way to use embeddings for search is as follows:
* Embed each chunk using a 'doc' model (e.g., `text-search-curie-doc-001`)
* Store those embeddings in your own database or in a vector search provider like [Pinecone](https://www.pinecone.io) or [Weaviate](https://weaviate.io)
* At the time of the search (live compute):
-  * Embed the search query using the correponding 'query' model (e.g. `text-search-curie-query-001`)
+  * Embed the search query using the corresponding 'query' model (e.g. `text-search-curie-query-001`)
* Find the closest embeddings in your database
* Return the top results, ranked by cosine similarity
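The search-time steps above can be sketched in plain NumPy. Here `query_embedding` and `doc_embeddings` are stand-ins for vectors you would get from the 'query' and 'doc' models respectively, and `search` is a hypothetical helper name, not part of any library:

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine similarity = dot product of the L2-normalized vectors
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def search(query_embedding, doc_embeddings, top_k=3):
    # Rank the stored 'doc' embeddings by cosine similarity to the query embedding
    scores = [cosine_similarity(query_embedding, d) for d in doc_embeddings]
    ranked = sorted(range(len(doc_embeddings)), key=lambda i: scores[i], reverse=True)
    return ranked[:top_k]  # indices of the top results
```

In practice the document embeddings would be precomputed and stored (e.g. in Pinecone or Weaviate), so only the query needs to be embedded at search time.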
@@ -470,7 +470,7 @@ Similar to search, these cosine similarity scores can either be used on their ow
Although OpenAI's embedding model weights cannot be fine-tuned, you can still use training data to customize embeddings to your application.

- In the following notebook, we provide an example method for customizing your embeddings using training data. The idea of the method is to train a custom matrix to multiply embedding vectors by in order to get new customized embeddings. With good training data, this custom matrix will highlight the features relevant to your training labels and suppress the rest. You can equivalently consider the matrix mulitplication as (a) a modification of the embeddings or (b) a modification of the distance function used to measure the distances between embeddings.
+ In the following notebook, we provide an example method for customizing your embeddings using training data. The idea of the method is to train a custom matrix to multiply embedding vectors by in order to get new customized embeddings. With good training data, this custom matrix will highlight the features relevant to your training labels and suppress the rest. You can equivalently consider the matrix multiplication as (a) a modification of the embeddings or (b) a modification of the distance function used to measure the distances between embeddings.
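The equivalence the paragraph states, that multiplying embeddings by a matrix is the same as changing the inner product used to compare them, can be checked numerically. `M` below is a random stand-in for the trained matrix, not the notebook's actual training method:

```python
import numpy as np

rng = np.random.default_rng(0)
emb_a = rng.normal(size=4)  # placeholder embedding vectors
emb_b = rng.normal(size=4)
M = rng.normal(size=(4, 4))  # stand-in for the learned custom matrix

# (a) transform each embedding with M, then take the ordinary dot product
view_a = (emb_a @ M) @ (emb_b @ M)

# (b) keep the embeddings, but compare them with the modified inner product M @ M.T
view_b = emb_a @ (M @ M.T) @ emb_b

assert np.isclose(view_a, view_b)  # the two views give identical similarities
```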