Build failures in translate/automl #4353

Closed
tmatsuo opened this issue Jul 22, 2020 · 2 comments · Fixed by #4360
Assignees: tmatsuo
Labels:
api: translate (Issues related to the Cloud Translation API)
priority: p1 (Important issue which blocks shipping the next release; will be fixed prior to next release)
type: bug (Error or flaw in code with unintended results or allowing sub-optimal usage patterns)

Comments

@tmatsuo
Contributor

tmatsuo commented Jul 22, 2020

It seems like a deterministic failure.

Python 3.6 build
Python 3.7 build
Python 3.8 build

Log:

_________________________ test_model_list_get_evaluate _________________________
Traceback (most recent call last):
  File "/workspace/translate/automl/model_test.py", line 80, in test_model_list_get_evaluate
    assert "evaluation_metric" in out
AssertionError: assert 'evaluation_metric' in 'name: "projects/1012616486416/locations/us-central1/models/VAR6465236123462402048/modelEvaluations/452876416420741254"\ncreate_time {\n  seconds: 1595365133\n  nanos: 496593000\n}\nevaluated_example_count: 30\n\n'

BTW, Build Cop Bot didn't report them. I filed:
googleapis/repo-automation-bots#727

tmatsuo added the api: translate, priority: p1, and type: bug labels Jul 22, 2020
tmatsuo self-assigned this Jul 22, 2020
@tmatsuo
Contributor Author

tmatsuo commented Jul 22, 2020

First step: this also reproduces locally on my workstation.
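
For reference, reproducing the failure locally comes down to running the failing test directly; a minimal sketch (the repo's nox sessions may wrap this differently, and the path follows the traceback above):

```python
import pytest

# Run only the failing sample test from within translate/automl/.
# Equivalent to `pytest model_test.py -k test_model_list_get_evaluate` on the CLI.
pytest.main(["model_test.py", "-k", "test_model_list_get_evaluate"])
```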

@tmatsuo
Contributor Author

tmatsuo commented Jul 22, 2020

I think the API response changed the field name from evaluation_metric to evaluated_example_count.
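
If that diagnosis is right, the sample test needs its assertion updated to check a field the API still returns. A minimal sketch of what that could look like, assuming the test captures the sample's printed ModelEvaluation via pytest's capsys fixture as the traceback suggests (the actual fix landed in #4360 and may differ):

```python
def test_model_list_get_evaluate(capsys):
    # list_model_evaluations(...) stands in for the sample call under test
    # (hypothetical name; see translate/automl/ for the real sample code).
    list_model_evaluations(PROJECT_ID, MODEL_ID)
    out, _ = capsys.readouterr()
    # The printed evaluation no longer contains "evaluation_metric",
    # so assert on a field that is still present in the response:
    assert "evaluated_example_count" in out
```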

tmatsuo pushed a commit to tmatsuo/python-docs-samples that referenced this issue Jul 23, 2020
engelke pushed a commit that referenced this issue Jul 23, 2020
* fix(translate): fix a broken test

fixes #4353

* use uuid

* fix builds
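
The "use uuid" item in the commit above points at a common hardening step for these samples: suffix test resource names with a random value so repeated or concurrent CI runs don't collide. A minimal sketch of that pattern with a hypothetical dataset name (the actual change is in the linked PR):

```python
import uuid

# Hypothetical example: give each test run its own resource name so parallel
# or re-run builds don't trip over an existing AutoML resource.
DATASET_DISPLAY_NAME = f"test_dataset_{uuid.uuid4().hex[:8]}"
```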
telpirion pushed a commit that referenced this issue Nov 16, 2022
arbrown pushed a commit that referenced this issue Nov 17, 2022
dandhlee pushed a commit that referenced this issue Nov 17, 2022
dandhlee pushed a commit that referenced this issue Nov 18, 2022