
Conversation

@tgross (Contributor) commented May 31, 2017

This PR updates ContainerPilot to the 3.0.0-RC1 release candidate. Our Jenkins job is running the tests again, but I'm expecting I might have to do some tweaking to get them to pass, so let's consider this WIP for now.

@tgross force-pushed the containerpilot3-dev branch 2 times, most recently from c476467 to 4a12566 on May 31, 2017 15:50
@tgross (Contributor, Author) commented May 31, 2017

I was having some trouble with the JSON5 parser in test.py, but it turned out to be a typo in the ContainerPilot config that CP handles fine but that pyjson5 barfs on with a really poor error message. I think the rest of the fixes are just updating the manage.py application to use the new config syntax (jobs vs. services, etc.).
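For anyone following along: the v3 config replaces v2's separate `services`/`backends` blocks with a single `jobs` array. The snippet below is a rough schematic of that shape, not this repo's actual config; the `exec` and `health` commands shown are hypothetical placeholders:

```json5
{
  consul: "localhost:8500",
  jobs: [
    {
      name: "mysql",                         // registered as a Consul service
      exec: "python /usr/local/bin/manage.py",   // hypothetical entrypoint
      port: 3306,
      health: {
        exec: "python /usr/local/bin/manage.py health",  // hypothetical check
        interval: 10,   // seconds between health checks
        ttl: 25         // Consul TTL for the health record
      }
    }
  ]
}
```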

@tgross (Contributor, Author) commented Jun 1, 2017

OK, after some fixes the tests are still failing, but right now that looks to be for the same reason master is failing: all the keys/account IDs are fouled up from when we moved the Jenkins server around. Will try to get that fixed next.

Edit: Nope, not the keys, but CNS not being correctly registered. I think this is just a bug in the domains we're giving it; the test rig says to use triton.zone but the error message is talking about cns.joyent.com, so that's probably the source of the problem.

Nope:

2017/06/01 15:16:39     2017/06/01 15:16:39 [INFO] agent: (LAN) joining: [mysql-consul.svc.9df26e60-4bc4-eca9-db82-a8ecb0dec126.us-east-1.triton.zone]
2017/06/01 15:16:48     2017/06/01 15:16:48 [WARN] manager: No servers available
2017/06/01 15:16:48     2017/06/01 15:16:48 [ERR] agent: failed to sync remote state: No known Consul servers
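When debugging this kind of failure, it helps to separate "the CNS name doesn't resolve" from "the Consul servers are unreachable". A small helper for the first half (this is a debugging sketch, not part of the repo's test rig):

```python
import socket

def resolve_or_none(name: str):
    """Return the first A record for name, or None if it doesn't resolve."""
    try:
        return socket.gethostbyname(name)
    except socket.gaierror:
        return None

# The agent joins whatever this resolves to; if it comes back None, the
# problem is CNS registration (or the search domain), not the Consul
# servers themselves.
print(resolve_or_none(
    "mysql-consul.svc.9df26e60-4bc4-eca9-db82-a8ecb0dec126"
    ".us-east-1.triton.zone"))
```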

@tgross (Contributor, Author) commented Jun 1, 2017

Closing this PR till the bugs are fixed.

@tgross closed this Jun 1, 2017
@tgross reopened this Jun 1, 2017
@tgross force-pushed the containerpilot3-dev branch from 1b24b76 to 9202b92 on June 1, 2017 18:38
@tgross (Contributor, Author) commented Jun 1, 2017

I've got the tests all running at this point, except that they're running into the known problem that the blueprint randomly doesn't work, as described in #81. I'm going to rework the manage.py application to take advantage of the different way CPv3 handles concurrent applications, but I'll do that under a separate branch of work. This should be ready to merge.

@tgross (Contributor, Author) commented Jun 1, 2017

Passing build looks like this (ref https://product-ci.joyent.us/job/autopilotpattern-mysql-containerpilot-v3/18/console):

----------------------------------------------------------------------
MySQLStackTest.test_replication_and_failover
----------------------------------------------------------------------
elapsed  | task
15.52712 | docker-compose -f docker-compose.yml -p my stop
19.35147 | docker-compose -f docker-compose.yml -p my rm -f
120.5100 | docker-compose -f docker-compose.yml -p my up -d
2.588850 | docker-compose -f docker-compose.yml -p my ps
0.683031 | docker inspect my_consul_1
37.36935 | wait_for_service: mysql-primary 1
1.613791 | docker-compose -f docker-compose.yml -p my ps -q mysql
3.335879 | docker exec adfe29bade5dc0c58c78eb08528b796adaa03569e666ca5ee0e2bde5a2768393 ip -o addr
5.096567 | assert_consul_correctness: 
56.25547 | docker-compose -f docker-compose.yml -p my scale mysql=3
6.514008 | wait_for_service: mysql 2
3.376027 | docker-compose -f docker-compose.yml -p my ps -q mysql
2.873407 | docker exec adfe29bade5dc0c58c78eb08528b796adaa03569e666ca5ee0e2bde5a2768393 ip -o addr
2.487378 | docker exec 8fcdd09e0ea0670c8f19f15ae6756de1ea27bae57d25e4dfe822cc21e02babe4 ip -o addr
2.930396 | docker exec 05afbf5d0af0cdeee936d3dac917cc9c69b3a004a9024259f6d1f6354becc927 ip -o addr
11.80958 | assert_consul_correctness: 
3.516325 | docker exec my_mysql_1 mysql -u mytestuser -p23DQS1KKC3 --vertical -e CREATE TABLE tbl1 (field1 INT, field2 VARCHAR(36)); mytestdb
2.788428 | docker exec my_mysql_1 mysql -u mytestuser -p23DQS1KKC3 --vertical -e INSERT INTO tbl1 (field1, field2) VALUES (1, "66682fc3-f394-4eac-b7ab-1051a1831e37"); mytestdb
3.077391 | docker exec my_mysql_1 mysql -u mytestuser -p23DQS1KKC3 --vertical -e INSERT INTO tbl1 (field1, field2) VALUES (1, "c2da4c6a-548e-4152-8024-d77df0cceb7d"); mytestdb
2.488188 | docker exec 05afbf5d0af0 mysql -u mytestuser -p23DQS1KKC3 --vertical -e SELECT * FROM tbl1 WHERE `field1`=1; mytestdb
2.190726 | docker exec 8fcdd09e0ea0 mysql -u mytestuser -p23DQS1KKC3 --vertical -e SELECT * FROM tbl1 WHERE `field1`=1; mytestdb
10.86911 | docker stop my_mysql_1
68.69553 | wait_for_service: mysql-primary 1
4.152424 | docker-compose -f docker-compose.yml -p my ps -q mysql
1.566462 | docker exec adfe29bade5dc0c58c78eb08528b796adaa03569e666ca5ee0e2bde5a2768393 ip -o addr
2.503694 | docker exec 8fcdd09e0ea0670c8f19f15ae6756de1ea27bae57d25e4dfe822cc21e02babe4 ip -o addr
2.861315 | docker exec 05afbf5d0af0cdeee936d3dac917cc9c69b3a004a9024259f6d1f6354becc927 ip -o addr
11.22180 | assert_consul_correctness: 
0.068317 | wait_for_service: mysql 1
2.718430 | docker-compose -f docker-compose.yml -p my ps -q mysql
2.192393 | docker exec adfe29bade5dc0c58c78eb08528b796adaa03569e666ca5ee0e2bde5a2768393 ip -o addr
2.243974 | docker exec 8fcdd09e0ea0670c8f19f15ae6756de1ea27bae57d25e4dfe822cc21e02babe4 ip -o addr
4.134433 | docker exec 05afbf5d0af0cdeee936d3dac917cc9c69b3a004a9024259f6d1f6354becc927 ip -o addr
11.42509 | assert_consul_correctness: 
3.759254 | docker exec 05afbf5d0af0 mysql -u mytestuser -p23DQS1KKC3 --vertical -e INSERT INTO tbl1 (field1, field2) VALUES (1, "dd639338-7622-441e-8fdc-82c02e86e993"); mytestdb
2.242756 | docker exec 8fcdd09e0ea0 mysql -u mytestuser -p23DQS1KKC3 --vertical -e SELECT * FROM tbl1 WHERE `field1`=1; mytestdb
.
----------------------------------------------------------------------
Ran 1 test in 397.593s

OK

@jwreagor left a comment


LGTM; just had one question, but it's not important.

      - 127.0.0.1
    labels:
-     - triton.cns.services=mysql-consul
+     - triton.cns.services=mc


The naming stood out here. Is mc better than mysql-consul?

@tgross (Contributor, Author) replied:

No, I agree it's a terrible name. 😀

But unfortunately the DNS records that Triton CNS creates are "too long" despite being within spec, and this causes the application to be unable to resolve the CNS name for Consul (despite having worked at one point in time).

I've been unable to track down the problem; it's been kicked around between CNS, the Python client, and/or Google's DNS servers, which are the search domain for Triton-deployed applications. The shorter name hacks around it until such time as we can use the Triton search domain.

ref:
https://devhub.joyent.com/jira/browse/DOCKER-898
https://devhub.joyent.com/jira/browse/OPS-2555
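For reference, "within spec" here means the RFC 1035 length limits: at most 63 octets per label and 253 characters for the full name in text form. A quick sketch checking the CNS name from the failing Consul join log above:

```python
# Check a DNS name against RFC 1035 length limits: each label at most
# 63 octets, and the whole name at most 253 characters in text form
# (255 octets once length bytes and the root label are encoded).
def check_dns_limits(name: str) -> dict:
    stripped = name.rstrip(".")
    labels = stripped.split(".")
    return {
        "name_length": len(stripped),
        "longest_label": max(len(label) for label in labels),
        "name_ok": len(stripped) <= 253,
        "labels_ok": all(len(label) <= 63 for label in labels),
    }

# The CNS-generated name from the Consul join log in this thread:
cns_name = ("mysql-consul.svc.9df26e60-4bc4-eca9-db82-a8ecb0dec126"
            ".us-east-1.triton.zone")
print(check_dns_limits(cns_name))
# The name is 75 characters with a longest label of 36 (the account
# UUID), so it's comfortably within spec -- which is why the failure
# got kicked around between CNS, the client, and the resolvers.
```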


Ah good to know.

@tgross tgross merged commit 04832fd into master Jun 12, 2017