
Commit aabbdbe

Merge pull request openai#40 from pitmonticone/main
Fix a few typos
2 parents 5e66437 + 0009da6

File tree

1 file changed (+3, -3)


README.md

Lines changed: 3 additions & 3 deletions
```diff
@@ -450,7 +450,7 @@ The simplest way to use embeddings for search is as follows:
 * Embed each chunk using a 'doc' model (e.g., `text-search-curie-doc-001`)
 * Store those embeddings in your own database or in a vector search provider like [Pinecone](https://www.pinecone.io) or [Weaviate](https://weaviate.io)
 * At the time of the search (live compute):
-  * Embed the search query using the correponding 'query' model (e.g. `text-search-curie-query-001`)
+  * Embed the search query using the corresponding 'query' model (e.g. `text-search-curie-query-001`)
   * Find the closest embeddings in your database
   * Return the top results, ranked by cosine similarity
```
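
The search recipe in this hunk maps to only a few lines of code. Below is a minimal sketch, not part of the diff, assuming the pre-v1 `openai` Python client that these `text-search-curie-*` engines shipped alongside, plus NumPy; the chunk texts and query are invented examples.

```python
import numpy as np
import openai  # assumes the pre-v1 openai client contemporary with these models

def embed(texts, engine):
    # One batched API call; returns a (len(texts), d) matrix of embeddings
    resp = openai.Embedding.create(input=texts, engine=engine)
    return np.array([item["embedding"] for item in resp["data"]])

# Batch compute (offline): embed each chunk with the 'doc' model
chunks = ["OpenAI trains large language models.", "Pinecone hosts vector indexes."]
doc_vectors = embed(chunks, engine="text-search-curie-doc-001")

# Live compute: embed the query with the corresponding 'query' model
query_vector = embed(["who trains language models?"], engine="text-search-curie-query-001")[0]

# Find the closest embeddings and rank the results by cosine similarity
scores = (doc_vectors @ query_vector) / (
    np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(query_vector)
)
for i in np.argsort(-scores):
    print(f"{scores[i]:.3f}  {chunks[i]}")
```

In production, the offline step would write `doc_vectors` to a vector store such as Pinecone or Weaviate rather than holding them in memory.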

```diff
@@ -470,7 +470,7 @@ Similar to search, these cosine similarity scores can either be used on their ow
 
 Although OpenAI's embedding model weights cannot be fine-tuned, you can still use training data to customize embeddings to your application.
 
-In the following notebook, we provide an example method for customizing your embeddings using training data. The idea of the method is to train a custom matrix to multiply embedding vectors by in order to get new customized embeddings. With good training data, this custom matrix will highlight the features relevant to your training labels and suppress the rest. You can equivalently consider the matrix mulitplication as (a) a modification of the embeddings or (b) a modification of the distance function used to measure the distances between embeddings.
+In the following notebook, we provide an example method for customizing your embeddings using training data. The idea of the method is to train a custom matrix to multiply embedding vectors by in order to get new customized embeddings. With good training data, this custom matrix will highlight the features relevant to your training labels and suppress the rest. You can equivalently consider the matrix multiplication as (a) a modification of the embeddings or (b) a modification of the distance function used to measure the distances between embeddings.
 
 * [Customizing_embeddings.ipynb](examples/Customizing_embeddings.ipynb)
```
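
The equivalence claimed in the changed line, between (a) transforming the embeddings and (b) changing the distance function, can be checked numerically. A minimal NumPy sketch follows; the matrix `W` is random here purely as a stand-in for the matrix the notebook would train.

```python
import numpy as np

rng = np.random.default_rng(0)

d = 4
a, b = rng.normal(size=d), rng.normal(size=d)  # stand-ins for two embedding vectors
W = rng.normal(size=(d, d))                    # stand-in for the trained custom matrix

def cosine(u, v):
    return (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))

# View (a): transform the embeddings, then use plain cosine similarity
sim_a = cosine(W @ a, W @ b)

# View (b): keep the embeddings and fold W into the metric via M = W^T W
M = W.T @ W
sim_b = (a @ M @ b) / (np.sqrt(a @ M @ a) * np.sqrt(b @ M @ b))

print(np.isclose(sim_a, sim_b))  # True: the two views are the same computation
```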

```diff
@@ -523,7 +523,7 @@ Code explanation can be applied to many use cases:
 * Generating in-code documentation (e.g., Python docstrings, git commit messages)
 * Generating out-of-code documentation (e.g., man pages)
 * In an interactive code exploration tool
-* Communicating program results back to users via a natural langauge interface
+* Communicating program results back to users via a natural language interface
 
 An example prompt for explaining code with `code-davinci-002`:
```
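
The hunk cuts off before the README's example prompt, so its exact wording is not shown here. The sketch below is only a hypothetical shape for such a prompt, assuming the pre-v1 `openai` Completions API; the function being explained is invented.

```python
import openai  # pre-v1 client, matching the era of code-davinci-002

# Hypothetical function to explain; the real README's prompt text is not in this diff
code = "def mean(xs):\n    return sum(xs) / len(xs)\n"
prompt = (
    "# Python code:\n"
    f"{code}\n"
    "# Explanation of what the code above does, in plain English:\n"
)

response = openai.Completion.create(
    engine="code-davinci-002",
    prompt=prompt,
    max_tokens=120,
    temperature=0,  # deterministic output suits explanation tasks
)
print(response["choices"][0]["text"].strip())
```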
