
Conversation

@Arsnael (Contributor) commented Jul 29, 2025

Need to perf test. There is no driver upgrade; 4.19.0 is still the latest one.

@Arsnael (Contributor, Author) commented Jul 29, 2025

Obviously some issues with the Cassandra Docker startup in tests; will investigate.

@Arsnael (Contributor, Author) commented Jul 30, 2025

The dateof function has been removed in Cassandra 5.0; we should use toTimestamp now.
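For illustration, a minimal sketch of that change with the driver's query builder (the table and column names here are invented, not James's actual schema). toTimestamp already exists on Cassandra 4.x, so the rewritten query should work on both versions:

```java
import static com.datastax.oss.driver.api.querybuilder.QueryBuilder.bindMarker;
import static com.datastax.oss.driver.api.querybuilder.QueryBuilder.selectFrom;

import com.datastax.oss.driver.api.querybuilder.select.Select;
import com.datastax.oss.driver.api.querybuilder.select.Selector;

public class DateOfToTimestamp {
    // Before (rejected by Cassandra 5.0): SELECT dateof(id) FROM events WHERE key = ?
    // After (works on 4.x and 5.0):       SELECT toTimestamp(id) FROM events WHERE key = ?
    static Select timestampOfTimeUuid() {
        return selectFrom("events")
            .function("toTimestamp", Selector.column("id"))
            .whereColumn("key")
            .isEqualTo(bindMarker());
    }
}
```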

@Arsnael (Contributor, Author) commented Jul 30, 2025

Actual blocker: https://ci-builds.apache.org/job/james/job/ApacheJames/job/PR-2780/6/testReport/junit/org.apache.james.blob.cassandra.cache/CachedBlobStoreTest/___/

From the Cassandra Java driver JIRA: https://issues.apache.org/jira/browse/CASSJAVA-89

The issue has been resolved and merged, and the fix should be available in the next driver release, 4.19.1: apache/cassandra-java-driver#2029

Until then, I'm afraid we are stuck in terms of compatibility.


@quantranhong1999 (Member) commented

> The issue has been resolved and merged, and the fix should be available in the next driver release, 4.19.1: apache/cassandra-java-driver#2029
> Until then, I'm afraid we are stuck in terms of compatibility.

I just read the PR. From what I understand, with our current code and the potential 4.19.1 driver upgrade, it would work with Cassandra 5.0 and above. But I am afraid it would not work with existing deployments running Cassandra < 5.0.

The culprit on our side is our compression usage: https://github.com/apache/james-project/blob/master/server/blob/blob-cassandra/src/main/java/org/apache/james/blob/cassandra/cache/CassandraBlobCacheDataDefinition.java#L38

The withCompression method we are using seems to have switched from emitting the chunk_length_kb option (which works with Cassandra 4.x) to the chunk_length_in_kb option (which works only with Cassandra 5.0).

And the driver introduced a few deprecated methods to keep using the deprecated chunk_length_kb option with Cassandra versions prior to 5.0.

cf https://github.com/apache/cassandra-java-driver/pull/2029/files#r2170601291

So it seems there is no silver bullet covering both Cassandra versions before 5.0 and 5.0 and above.

IMO we may:

  • Option 1: avoid setting the chunk option, to preserve compatibility regardless of the Cassandra version (see the sketch after this list)
  • Option 2: add a JVM property for the Cassandra version in use (not sure the chunk option would be worth the complication)
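To make Option 1 concrete, here is a rough sketch (the table layout is invented, not the actual blob cache schema): the no-argument compression variant of the driver's schema builder should emit only the compressor class, leaving the chunk length to the server default on any Cassandra version.

```java
import static com.datastax.oss.driver.api.querybuilder.SchemaBuilder.createTable;

import com.datastax.oss.driver.api.core.type.DataTypes;
import com.datastax.oss.driver.api.querybuilder.schema.CreateTableWithOptions;

public class BlobCacheTableSketch {
    // Option 1: do not set any chunk length, so the generated CQL contains
    // only {'class': 'LZ4Compressor'} and never mentions chunk_length_kb
    // (removed in 5.0) or chunk_length_in_kb.
    static CreateTableWithOptions createWithoutChunkOption() {
        return createTable("blob_cache")
            .ifNotExists()
            .withPartitionKey("id", DataTypes.TEXT)
            .withColumn("data", DataTypes.BLOB)
            .withLZ4Compression();
    }
}
```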

@Arsnael (Contributor, Author) commented Jul 30, 2025

@quantranhong1999 Good remarks, actually. But would there be a problem in practice?

The tables would already exist after a migration; wouldn't it be fair to assume the compression option on the table would be migrated as well? The problem would only occur if we set up a new Cassandra 4 cluster with the updated Java driver version... but wouldn't we want to just run Cassandra 5 on a new setup right away?

@Arsnael (Contributor, Author) commented Jul 31, 2025

I want to test whether there are other issues with Cassandra 5 and our tests, so I removed that compression option for now in a no_merge commit.

@quantranhong1999 (Member) commented

> The tables would already exist after a migration; wouldn't it be fair to assume the compression option on the table would be migrated as well?

Yes, I think the table should be migrated when upgrading to Cassandra 5.0.

> The problem would only occur if we set up a new Cassandra 4 cluster with the updated Java driver version... but wouldn't we want to just run Cassandra 5 on a new setup right away?

I am not sure the community will upgrade to Cassandra 5.0 soon. Wouldn't the 4.19.1 driver then be a breaking change for existing Cassandra 4.x deployments?
(Unless we remove the compression options, as in your latest commit, for example.)

@Arsnael (Contributor, Author) commented Jul 31, 2025

I agree the discussion would benefit from community opinion. I'm waiting to see whether other issues pop up before potentially starting a thread on the topic.

But you are right: we will either need to drop the compression on that table, to keep it simple, if we judge it's no big deal, or add an option (I was thinking in blob.properties?) for retro-compatibility with Cassandra < 5. A rough sketch of the latter is below.
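Roughly, such an option could look like this sketch (the blob.properties flag and the helper method are hypothetical; nothing of the sort exists in James today):

```java
import com.datastax.oss.driver.api.querybuilder.schema.CreateTableWithOptions;

public class CompressionCompatibility {

    private static final int CHUNK_LENGTH_IN_KB = 8; // illustrative value

    // cassandraV4Compatibility would come from a new (hypothetical)
    // blob.properties entry, e.g. cassandra.v4.compatibility.mode=true.
    static CreateTableWithOptions applyCompression(
            CreateTableWithOptions table, boolean cassandraV4Compatibility) {
        return cassandraV4Compatibility
            // No chunk option at all: accepted by every Cassandra version.
            ? table.withLZ4Compression()
            // With driver >= 4.19.1 this should emit chunk_length_in_kb,
            // which per the discussion above targets Cassandra 5.0+.
            : table.withLZ4Compression(CHUNK_LENGTH_IN_KB);
    }
}
```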

This kind of breaking change feels really weird and unnecessary as well... but we can't do much about it.

@chibenwa (Contributor) commented

> I agree the discussion would benefit from community opinion.

+1

@chibenwa (Contributor) commented

IMO we can document the optimal chunk size setting for Cassandra 4 and 5, and how to apply it; see the example after this comment.

We can then rely on users to apply it manually if they care about performance. (We would explain that we do not apply it ourselves for compatibility reasons.)

IMO retro-compatibility is far more desirable than the few percent of performance lost when an easy setting is not applied.
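For example, the upgrade documentation could give operators a plain CQL statement to run in cqlsh; a sketch of what that could look like (keyspace, table, and chunk size are illustrative):

```java
public class ManualChunkSizeTuning {
    // CQL an operator could run manually in cqlsh. Keyspace, table, and chunk
    // size are illustrative; the option spelling must match what the deployed
    // Cassandra version accepts, as discussed above.
    static final String TUNE_BLOB_CACHE_COMPRESSION =
        "ALTER TABLE james.blob_cache WITH compression = "
            + "{'class': 'LZ4Compressor', 'chunk_length_in_kb': 8}";
}
```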

@Arsnael (Contributor, Author) commented Aug 1, 2025

@chibenwa Sounds reasonable to me.

Arsnael added 4 commits August 1, 2025 15:20
…ion, CassandraMailboxDataDefinition, CassandraMessageDataDefinition, CassandraMessageFastViewProjectionDataDefinition
With the update to Cassandra 5, some tests are failing with an OOM Docker container crash
@Arsnael (Contributor, Author) commented Aug 1, 2025

The build was green, btw, so I cleaned up the git history, created a proper JIRA ticket, and added some upgrade instructions.
