Commit d9dfc1f

Merge pull request openai#64 from Ygnys/main
fixing typos
2 parents 3302d1b + aeec6d9 commit d9dfc1f

1 file changed: +6 −6 lines changed


README.md

Lines changed: 6 additions & 6 deletions
@@ -139,7 +139,7 @@ Ted Chiang
 
 ### Completion prompt example
 
-Completion-style prompts take advantage of how large language models try to write text they think is mostly likely to come next. To steer the model, try beginning a pattern or sentence that will be completed by the output you want to see. Relative to direct instructions, this mode of steering large language models can take more care and experimentation. In addition, the models won't necessarily know where to stop, so you will often need stop sequences or post-processing to cut off text generated beyond the desired output.
+Completion-style prompts take advantage of how large language models try to write text they think is most likely to come next. To steer the model, try beginning a pattern or sentence that will be completed by the output you want to see. Relative to direct instructions, this mode of steering large language models can take more care and experimentation. In addition, the models won't necessarily know where to stop, so you will often need stop sequences or post-processing to cut off text generated beyond the desired output.
 
 Example completion prompt:
 
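The stop-sequence advice in the paragraph above is easy to make concrete. Below is a minimal sketch, assuming the v0.x `openai` Python package with `OPENAI_API_KEY` set in the environment; the prompt text, model choice, and parameters are illustrative, not part of the README:

```python
import openai  # v0.x-style client; reads OPENAI_API_KEY from the environment

# Begin a pattern for the model to complete, and use a stop sequence so
# generation is cut off at the end of the single line we want.
prompt = "Q: What is the capital of France?\nA:"

response = openai.Completion.create(
    model="text-davinci-003",
    prompt=prompt,
    max_tokens=50,
    temperature=0,
    stop=["\n"],  # stop sequence: halt at the first newline
)
print(response["choices"][0]["text"].strip())
```

Without the `stop` argument, the model may keep generating further Q/A pairs past the answer we wanted.
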
@@ -207,7 +207,7 @@ For more prompt examples, visit [OpenAI Examples][OpenAI Examples].
 
 In general, the input prompt is the best lever for improving model outputs. You can try tricks like:
 
-* **Give more explicit instructions.** E.g., if you want the output to be a comma separated list, ask it to return a comma separated list. If you want it to say "I don't know" when the it doesn't know the answer, tell it 'Say "I don't know" if you do not know the answer.'
+* **Give more explicit instructions.** E.g., if you want the output to be a comma separated list, ask it to return a comma separated list. If you want it to say "I don't know" when it doesn't know the answer, tell it 'Say "I don't know" if you do not know the answer.'
 * **Supply better examples.** If you're demonstrating examples in your prompt, make sure that your examples are diverse and high quality.
 * **Ask the model to answer as if it was an expert.** Explicitly asking the model to produce high quality output or output as if it was written by an expert can induce the model to give higher quality answers that it thinks an expert would write. E.g., "The following answer is correct, high-quality, and written by an expert."
 * **Prompt the model to write down the series of steps explaining its reasoning.** E.g., prepend your answer with something like "[Let's think step by step](https://arxiv.org/pdf/2205.11916v1.pdf)." Prompting the model to give an explanation of its reasoning before its final answer can increase the likelihood that its final answer is consistent and correct.

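The last tip in the hunk above combines naturally with explicit instructions. A minimal sketch, again assuming the v0.x `openai` package; the question is the standard example from the linked step-by-step paper, and the parameters are illustrative:

```python
import openai  # v0.x-style client, assumed for illustration

# Append "Let's think step by step." so the model writes out its
# reasoning before arriving at the final answer.
prompt = (
    "Q: A juggler can juggle 16 balls. Half of the balls are golf balls, "
    "and half of the golf balls are blue. How many blue golf balls are "
    "there?\n"
    "A: Let's think step by step."
)

response = openai.Completion.create(
    model="text-davinci-003",
    prompt=prompt,
    max_tokens=150,
    temperature=0,
)
print(response["choices"][0]["text"])
```
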
@@ -262,7 +262,7 @@ In general, writing can work with any style of prompt. Experiment to see what wo
 One capability of large language models is distilling information from a piece of text. This can include:
 
 * Answering questions about a piece of text, e.g.:
-  * Querying an knowledge base to help people look up things they don't know
+  * Querying a knowledge base to help people look up things they don't know
   * Querying an unfamiliar document to understand what it contains
   * Querying a document with structured questions in order to extract tags, classes, entities, etc.
 * Summarizing text, e.g.:

@@ -301,7 +301,7 @@ Output:
 One
 ```
 
-If the text you wish to ask about is longer than the token limit (~4,000 tokens for `text-davinci-003` and ~2,000 tokens for earlier models), we recommending splitting the text into smaller pieces, ranking them by relevance, and then asking the most-relevant-looking pieces.
+If the text you wish to ask about is longer than the token limit (~4,000 tokens for `text-davinci-003` and ~2,000 tokens for earlier models), we recommend splitting the text into smaller pieces, ranking them by relevance, and then asking the most-relevant-looking pieces.
 
 #### Summarization
 
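The split-and-rank approach from the corrected sentence above can be sketched with embeddings. In the snippet below, `embed` and `rank_chunks` are hypothetical helpers, not cookbook code; it assumes the v0.x `openai` package, `numpy`, and the `text-embedding-ada-002` model, and it uses crude fixed-size character chunks where a real system might split on paragraphs or count tokens with a tokenizer:

```python
import numpy as np
import openai  # v0.x-style client, assumed for illustration

def embed(texts):
    # text-embedding-ada-002 returns one embedding vector per input string
    response = openai.Embedding.create(model="text-embedding-ada-002", input=texts)
    return [np.array(item["embedding"]) for item in response["data"]]

def rank_chunks(question, document, chunk_size=1000, top_n=3):
    """Split a long document into pieces, then rank the pieces by
    similarity to the question (hypothetical helper)."""
    chunks = [document[i:i + chunk_size] for i in range(0, len(document), chunk_size)]
    q_emb = embed([question])[0]
    chunk_embs = embed(chunks)
    # ada-002 embeddings are unit-length, so the dot product equals
    # the cosine similarity
    scores = [float(np.dot(q_emb, c)) for c in chunk_embs]
    ranked = sorted(zip(scores, chunks), key=lambda pair: pair[0], reverse=True)
    return [chunk for _, chunk in ranked[:top_n]]
```

The top-ranked pieces can then be placed into the question-answering prompt in place of the full document.
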
@@ -423,7 +423,7 @@ Output:
 
 Tips for translation:
 
-* Performance is best on the most common languages
+* Performance is best in the most common languages
 * We've seen better performance when the instruction is given in the final language (so if translating into French, give the instruction `Traduire le texte de l'anglais au français.` rather than `Translate the following text from English to French.`)
 * Backtranslation (as described [here](https://arxiv.org/abs/2110.05448)) can also increase performance
 * Text with colons and heavy punctuation can trip up the instruction-following models, especially if the instruction is using colons (e.g., `English: {english text} French:`)

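Two of the tips above fit in a single prompt. A minimal sketch, assuming the v0.x `openai` package (the example sentence and parameters are illustrative): the instruction is given in French, the target language, and the colon-heavy `English:`/`French:` framing that the last bullet warns about is avoided:

```python
import openai  # v0.x-style client, assumed for illustration

# Instruction in the target language, with no colon-labeled framing.
prompt = "Traduire le texte de l'anglais au français.\n\nHow are you today?\n\n"

response = openai.Completion.create(
    model="text-davinci-003",
    prompt=prompt,
    max_tokens=100,
    temperature=0,
)
print(response["choices"][0]["text"].strip())
```
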
@@ -456,7 +456,7 @@ The simplest way to use embeddings for search is as follows:
 
 An example of how to use embeddings for search is shown in [Semantic_text_search_using_embeddings.ipynb](examples/Semantic_text_search_using_embeddings.ipynb).
 
-In more advanced search systems, the the cosine similarity of embeddings can be used as one feature among many in ranking search results.
+In more advanced search systems, the cosine similarity of embeddings can be used as one feature among many in ranking search results.
 
 #### Recommendations
 
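As a sketch of that last point, cosine similarity is a one-liner over embedding vectors, and blending it with other ranking signals is just a weighted sum. The feature names and weights below are hypothetical, chosen only for illustration:

```python
import numpy as np

def cosine_similarity(a, b):
    # Dot product of the vectors divided by the product of their norms;
    # ranges from -1 to 1 (1 = pointing in the same direction).
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def rank_score(query_emb, doc_emb, keyword_score, freshness_score):
    # Hypothetical ranker: embedding similarity is one feature among many.
    # The weights are illustrative, not tuned values.
    return (
        0.6 * cosine_similarity(query_emb, doc_emb)
        + 0.3 * keyword_score
        + 0.1 * freshness_score
    )
```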