
Commit 62b51ca

feat: openai wandb sync (openai#64)
* feat: log fine_tune with wandb
* feat: ensure we are logged in
* feat: cli wandb namespace
* feat: add fine_tuned_model to summary
* feat: log training & validation files
* feat: re-log if was not successful or force
* doc: add docstring
* feat: set wandb api only when needed
* fix: train/validation files are inputs
* feat: rename artifact type
* feat: improve config logging
* feat: log all jobs by default
* feat: log job details
* feat: log -> sync
* feat: cli wandb log -> sync
* fix: validation_files not always present
* feat: format created_at + style
* feat: log number of training/validation samples
* feat(wandb): avoid download if file already synced
* feat(wandb): add number of items to metadata
* fix(wandb): allow force sync
* feat(wandb): job -> fine-tune
* refactor(wandb): use show_individual_warnings
* feat(wandb): Logger -> WandbLogger
* feat(wandb): retrieve number of items from artifact
* doc(wandb): add link to documentation
1 parent f288b00 commit 62b51ca

File tree

4 files changed: +351 -1 lines changed


README.md (+7)

````diff
@@ -76,6 +76,7 @@ search = openai.Engine(id="deployment-namme").search(documents=["White House", "
 # print the search
 print(search)
 ```
+
 Please note that for the moment, the Microsoft Azure endpoints can only be used for completion and search operations.
 
 ### Command-line interface
@@ -142,6 +143,12 @@ Examples of fine tuning are shared in the following Jupyter notebooks:
 - [Step 2: Creating a synthetic Q&A dataset](https://github.com/openai/openai-python/blob/main/examples/finetuning/olympics-2-create-qa.ipynb)
 - [Step 3: Train a fine-tuning model specialized for Q&A](https://github.com/openai/openai-python/blob/main/examples/finetuning/olympics-3-train-qa.ipynb)
 
+Sync your fine-tunes to [Weights & Biases](https://wandb.me/openai-docs) to track experiments, models, and datasets in your central dashboard with:
+
+```bash
+openai wandb sync
+```
+
 For more information on fine tuning, read the [fine-tuning guide](https://beta.openai.com/docs/guides/fine-tuning) in the OpenAI documentation.
 
 ## Requirements
````
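For reference, here are a few invocations of the new sub-command, sketched from the flags registered in `openai/cli.py` below; the fine-tune id is a made-up placeholder:

```bash
# Sync every fine-tune to the default "GPT-3" project
openai wandb sync

# Sync one fine-tune by id (placeholder id) into a specific project and entity
openai wandb sync --id ft-abc123 --project my-project --entity my-team

# Re-sync the two most recent fine-tunes, overwriting their existing wandb runs
openai wandb sync -n 2 --force
```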

openai/_openai_scripts.py (+3 -1)

```diff
@@ -4,7 +4,7 @@
 import sys
 
 import openai
-from openai.cli import api_register, display_error, tools_register
+from openai.cli import api_register, display_error, tools_register, wandb_register
 
 logger = logging.getLogger()
 formatter = logging.Formatter("[%(asctime)s] %(message)s")
@@ -39,9 +39,11 @@ def help(args):
     subparsers = parser.add_subparsers()
     sub_api = subparsers.add_parser("api", help="Direct API calls")
     sub_tools = subparsers.add_parser("tools", help="Client side tools for convenience")
+    sub_wandb = subparsers.add_parser("wandb", help="Logging with Weights & Biases")
 
     api_register(sub_api)
     tools_register(sub_tools)
+    wandb_register(sub_wandb)
 
     args = parser.parse_args()
     if args.verbosity == 1:
```
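The entry point delegates to per-namespace `*_register` hooks, each of which attaches its own sub-commands and a `func` default to the sub-parser it receives. A self-contained toy version of that wiring (the `func` body here is a stand-in, not the real sync logic):

```python
import argparse

def wandb_register(parser):
    # Mirrors the hook pattern above: the module owns its sub-commands
    # and binds a handler via set_defaults(func=...).
    subparsers = parser.add_subparsers(title="wandb")
    sub = subparsers.add_parser("sync")
    sub.add_argument("-i", "--id", help="The id of the fine-tune job (optional)")
    sub.set_defaults(func=lambda args: print(f"would sync: {args.id or 'all fine-tunes'}"))

parser = argparse.ArgumentParser(prog="openai")
subparsers = parser.add_subparsers()
sub_wandb = subparsers.add_parser("wandb", help="Logging with Weights & Biases")
wandb_register(sub_wandb)

args = parser.parse_args(["wandb", "sync", "--id", "ft-123"])
args.func(args)  # prints: would sync: ft-123
```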

openai/cli.py (+51)

```diff
@@ -19,6 +19,7 @@
     write_out_file,
     write_out_search_file,
 )
+import openai.wandb_logger
 
 
 class bcolors:
@@ -535,6 +536,19 @@ def prepare_data(cls, args):
         )
 
 
+class WandbLogger:
+    @classmethod
+    def sync(cls, args):
+        resp = openai.wandb_logger.WandbLogger.sync(
+            id=args.id,
+            n_fine_tunes=args.n_fine_tunes,
+            project=args.project,
+            entity=args.entity,
+            force=args.force,
+        )
+        print(resp)
+
+
 def tools_register(parser):
     subparsers = parser.add_subparsers(
         title="Tools", help="Convenience client side tools"
@@ -954,3 +968,40 @@ def help(args):
     sub = subparsers.add_parser("fine_tunes.cancel")
     sub.add_argument("-i", "--id", required=True, help="The id of the fine-tune job")
     sub.set_defaults(func=FineTune.cancel)
+
+
+def wandb_register(parser):
+    subparsers = parser.add_subparsers(
+        title="wandb", help="Logging with Weights & Biases"
+    )
+
+    def help(args):
+        parser.print_help()
+
+    parser.set_defaults(func=help)
+
+    sub = subparsers.add_parser("sync")
+    sub.add_argument("-i", "--id", help="The id of the fine-tune job (optional)")
+    sub.add_argument(
+        "-n",
+        "--n_fine_tunes",
+        type=int,
+        default=None,
+        help="Number of most recent fine-tunes to log when an id is not provided. By default, every fine-tune is synced.",
+    )
+    sub.add_argument(
+        "--project",
+        default="GPT-3",
+        help="""Name of the project where you're sending runs. By default, it is "GPT-3".""",
+    )
+    sub.add_argument(
+        "--entity",
+        help="Username or team name where you're sending runs. By default, your default entity is used, which is usually your username.",
+    )
+    sub.add_argument(
+        "--force",
+        action="store_true",
+        help="Forces logging and overwrite existing wandb run of the same fine-tune.",
+    )
+    sub.set_defaults(force=False)
+    sub.set_defaults(func=WandbLogger.sync)
```
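The fourth changed file, `openai/wandb_logger.py`, is not displayed on this page. Purely as a hypothetical sketch inferred from the call site above (not the actual implementation), the sync entry point could look like the following; `openai.FineTune.list`/`retrieve` are this library version's fine-tune endpoints, while the wandb run layout and the return message are assumptions:

```python
import openai
import wandb

class WandbLogger:
    @classmethod
    def sync(cls, id=None, n_fine_tunes=None, project="GPT-3", entity=None, force=False):
        # Hypothetical sketch: fetch one fine-tune by id, or the most recent ones.
        if id is not None:
            fine_tunes = [openai.FineTune.retrieve(id=id)]
        else:
            fine_tunes = openai.FineTune.list().data
            if n_fine_tunes is not None:
                fine_tunes = fine_tunes[-n_fine_tunes:]

        for fine_tune in fine_tunes:
            # Reusing the fine-tune id as the wandb run id means a re-sync
            # resumes the same run instead of creating a duplicate; `force`
            # would gate re-logging of already-synced runs (omitted here).
            run = wandb.init(
                project=project,
                entity=entity,
                id=fine_tune["id"],
                resume="allow",
            )
            run.config.update(fine_tune.get("hyperparams", {}))
            if fine_tune.get("fine_tuned_model"):
                run.summary["fine_tuned_model"] = fine_tune["fine_tuned_model"]
            run.finish()
        return "wandb sync completed"  # placeholder message, not the real output
```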
