[UPGRADE] Docker cassandra image to 5.0.4 #2780
base: master
Conversation
Obviously some issues with the test Docker Cassandra startup; will investigate.
The dateof function has been removed in Cassandra 5.0; we should use toTimestamp now.
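For anyone hitting the same failure, a sketch of the change in CQL (the table and column names are hypothetical, not the actual James schema; only the function names come from the discussion above):

```cql
-- Before: dateof() was removed in Cassandra 5.0
SELECT dateof(message_id) FROM messages WHERE id = ?;

-- After: toTimestamp() is the replacement, and has been available since
-- Cassandra 2.2, so the rewrite stays compatible with older clusters too
SELECT toTimestamp(message_id) FROM messages WHERE id = ?;
```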
From the Java Cassandra driver JIRA: https://issues.apache.org/jira/browse/CASSJAVA-89 The issue has been resolved and merged, and should be available in the next driver release, 4.19.1: apache/cassandra-java-driver#2029 Until then we are stuck, I'm afraid, in terms of compatibility.
The culprit on our side is our compression usage: https://github.com/apache/james-project/blob/master/server/blob/blob-cassandra/src/main/java/org/apache/james/blob/cassandra/cache/CassandraBlobCacheDataDefinition.java#L38
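For context, this is the kind of per-table compression tuning involved. The table definition below is illustrative only (not the actual James schema), and the chunk length value is an example, not a recommendation:

```cql
-- Illustrative only: per-table compression option of the kind the blob cache
-- data definition applies through the Java driver's schema builder.
CREATE TABLE blob_cache_example (
    id uuid PRIMARY KEY,
    data blob
) WITH compression = {'class': 'LZ4Compressor', 'chunk_length_in_kb': 8};
```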
I just read the PR. From what I understand, with our current code and the potential 4.19.1 driver upgrade, it would work with Cassandra 5.0 and above. But I am afraid it would not work with existing deployments running Cassandra < 5.0.
And the driver introduced a few deprecated methods to keep using the deprecated option names, cf. https://github.com/apache/cassandra-java-driver/pull/2029/files#r2170601291 So it seems there is no silver bullet for both Cassandra prior to 5.0 and Cassandra 5.0 and above. IMO we may
@quantranhong1999 Good remarks, actually. But would there be a problem? The tables would already exist after a migration; wouldn't it be fair to think the compression option on the table would be migrated as well? The problem would only occur if we set up a new Cassandra 4 cluster with the updated Java driver version... but wouldn't we want to just run Cassandra 5 on a new setup right away?
I want to test whether there are other issues with Cassandra 5 and our tests, so I removed that compression option for now in a no_merge commit.
Yes, I think the table should be migrated when upgrading to Cassandra 5.0.
I am not sure the community would upgrade to Cassandra 5.0 soon. Then the 4.19.1 driver would be a breaking change for existing Cassandra 4.x deployments, for example, no?
The discussion would benefit from community opinion, I agree. I'm waiting to see whether other issues pop up before potentially starting a thread on the topic. But you are right: we will need either to drop the compression on that table (to keep it simple, if we judge it's no big deal), or to add an option (I was thinking in blob.properties?) for retro-compatibility with Cassandra < 5. This kind of breaking change sounds really weird and unnecessary as well... but we can't do much about it.
+1
IMO we can document the optimal chunk size setting for Cassandra 4 and 5 and how to apply it. We can then rely on the user to apply it manually if they care about performance. (We would explain that we do not apply it for compatibility reasons.) IMO retro-compatibility is way more desirable than a few percent of performance when an easy setting is left undone.
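To document that manual step, an ALTER TABLE sketch (hypothetical table name; the chunk length is an example value, the actual recommendation would come out of the perf tests):

```cql
-- Run manually by the operator who cares about the extra performance;
-- James itself would no longer set this option, for compatibility reasons.
ALTER TABLE blob_cache_example
    WITH compression = {'class': 'LZ4Compressor', 'chunk_length_in_kb': 8};
```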
@chibenwa sounds reasonable to me
…ion, CassandraMailboxDataDefinition, CassandraMessageDataDefinition, CassandraMessageFastViewProjectionDataDefinition
With the update to Cassandra 5, some tests are failing with an OOM Docker container crash.
Was green btw, so I cleaned up the git history, created a proper JIRA ticket, and added some upgrade instructions.
Need to perf test. There is no driver upgrade; 4.19.0 is still the latest one.