Cloud Bigtable scaling tool should consider storage utilization #1695

Closed
@hegemonic

Description

In which file did you encounter the issue?

https://github.com/GoogleCloudPlatform/python-docs-samples/tree/master/bigtable/metricscaler

Did you change the file? If so, how?

No

Describe the issue

The Cloud Bigtable scaling tool does not consider storage utilization per node when it resizes a cluster. It's already a best practice to look at storage utilization, and it will become a hard requirement on 2018-10-01, when we'll prevent users from exceeding the maximum utilization per node.

We should update the tool to read the metric `bigtable.googleapis.com/cluster/storage_utilization` and factor it into the calculation of how many nodes to add or remove.
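As a hypothetical sketch of how the calculation could work (the function names and the 70% ceiling are illustrative assumptions, not part of the actual metricscaler tool): compute the minimum node count that keeps per-node storage utilization under the limit, then take the larger of the CPU-based and storage-based targets so a scale-down never violates the storage constraint.

```python
import math

# Assumed target ceiling for per-node storage utilization (illustrative).
MAX_STORAGE_UTILIZATION = 0.7


def nodes_for_storage(current_nodes, storage_utilization,
                      max_utilization=MAX_STORAGE_UTILIZATION):
    """Return the minimum node count that keeps per-node storage
    utilization at or below max_utilization.

    storage_utilization is the current per-node value reported by
    bigtable.googleapis.com/cluster/storage_utilization (0.0-1.0).
    """
    if storage_utilization <= max_utilization:
        return current_nodes
    # Total stored data, expressed in units of one node's capacity.
    total_storage = storage_utilization * current_nodes
    return math.ceil(total_storage / max_utilization)


def scale_decision(current_nodes, cpu_based_nodes, storage_utilization):
    """Combine the CPU-based target with the storage-based floor:
    whichever metric demands more nodes wins."""
    return max(cpu_based_nodes,
               nodes_for_storage(current_nodes, storage_utilization))
```

For example, a 3-node cluster at 90% storage utilization holds 2.7 node-capacities of data, so keeping utilization at or below 70% requires at least 4 nodes, even if CPU load alone would allow scaling down.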

GoogleCloudPlatform/cloud-bigtable-examples#305 tracks the same issue, but for the HBase client for Java.

Metadata

Labels

api: bigtable - Issues related to the Bigtable API.
type: cleanup - An internal cleanup or hygiene concern.
