Conversation

@LivanKov (Owner)

Pull Request for Assignment 4
Summary:

  • Implemented all missing features from the previous assignment
  • Implemented a thermostat, along with thorough unit tests
  • Enabled running the Rayleigh-Taylor instability simulation
  • Implemented input for gravity, epsilon, and delta
  • Implemented Periodic Boundary Conditions via the "Ghost Particle" method (a brief sketch follows this list)
  • Implemented a class for "checkpointing"
  • Ran the "falling drop" simulations
  • Used GNU gprof for performance assessment
  • Main optimizations currently come from optimizing the naive algorithms and from prioritizing in-place object construction
  • Refactored for a more structured codebase
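A minimal sketch of the ghost-particle approach to periodic boundaries, restricted to a single periodic axis for brevity; `Particle`, `addGhosts`, and the field names are hypothetical and not taken from the project:

```cpp
#include <array>
#include <vector>

struct Particle {
  std::array<double, 3> x{};  // position
  bool is_ghost = false;
};

// For every particle within the cutoff of a periodic boundary, insert a
// translated copy ("ghost") on the opposite side, so neighbours across the
// boundary see it within the cutoff. Ghosts are rebuilt every iteration and
// only contribute forces; they are never integrated themselves.
std::vector<Particle> addGhosts(const std::vector<Particle> &real,
                                double domain_x, double cutoff) {
  std::vector<Particle> all = real;
  for (const auto &p : real) {
    if (p.x[0] < cutoff) {                    // near the left boundary
      Particle g = p;
      g.x[0] += domain_x;                     // image beyond the right boundary
      g.is_ghost = true;
      all.push_back(g);
    } else if (p.x[0] > domain_x - cutoff) {  // near the right boundary
      Particle g = p;
      g.x[0] -= domain_x;
      g.is_ghost = true;
      all.push_back(g);
    }
  }
  return all;
}
```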

@manishmishra6016 (Collaborator) left a comment

Note
This is only feedback regarding the code directly. Grade feedback will be on Moodle once we are through with all groups.


Thank you for your submission 💪
I think you were able to meet most of the objectives in terms of implementation. I would have liked to see more detailed profiling results and the effect of the optimizations you carried out. Also, please avoid leaving a lot of dead code, and provide comments on the tests.

I am currently trying to build the code on the Linux cluster and it doesn't seem to work right away. Can you confirm that you were able to build this exact version of the code, or did you need to make some adjustments (to the CMake file, for instance)? I haven't looked at it closely yet, but if you have any comments that could be helpful, I would appreciate it.

Have a look at my comments and see if they make sense. Feel free to reply or clarify anything if needed. I wish you good luck with the final worksheet. 😄

Cheers,
Manish

- **-o ../output/simulation_output** overrides the `output_path` specified in the XML file to set the output file path.
- **-t 5** overrides the `write_frequence` specified in the XML file, so that an output file is written every 5 iterations.

### EML Example

Suggested change
### EML Example
### XML Example

* @param delta_t The delta_t value to write.
* @param t_end The t_end value to write.
*/
static void writeCheckpoint(LinkedCellContainer &particles,

So, it seems like checkpointing will not work with DirectSum. Also, you removed the ParticleContainer interface, so both DirectSum and LinkedCells are independent now; otherwise, you could have used something like a shared_ptr here to make this work with both.

@LivanKov (Owner, Author)

LinkedCellContainer serves as a wrapper for the DirectSumContainer.
It can still have the exact same functionality by omitting the "cells" array.
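For illustration, a minimal sketch of the shared-interface idea mentioned above, where checkpointing is written against a common base class so it accepts either container. All names and signatures here (ParticleContainer, the writeCheckpoint parameters) are assumptions for the sketch, not the project's current code:

```cpp
#include <cstddef>
#include <iostream>
#include <vector>

struct Particle { /* position, velocity, force, ... */ };

// Hypothetical common interface (the project removed its ParticleContainer).
class ParticleContainer {
public:
  virtual ~ParticleContainer() = default;
  virtual std::size_t size() const = 0;
  virtual const std::vector<Particle> &particles() const = 0;
};

class DirectSumContainer : public ParticleContainer {
public:
  std::size_t size() const override { return particles_.size(); }
  const std::vector<Particle> &particles() const override { return particles_; }
private:
  std::vector<Particle> particles_;
};

class LinkedCellContainer : public ParticleContainer {
public:
  std::size_t size() const override { return particles_.size(); }
  const std::vector<Particle> &particles() const override { return particles_; }
private:
  std::vector<Particle> particles_;
  // cell bookkeeping omitted
};

// Checkpointing written against the interface accepts either container.
void writeCheckpoint(const ParticleContainer &particles, double delta_t, double t_end) {
  // Real serialization elided; only the container-agnostic access matters here.
  std::cout << "checkpoint: " << particles.size() << " particles, delta_t="
            << delta_t << ", t_end=" << t_end << "\n";
}
```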

Comment on lines +94 to +102
/* for (auto& p : particles_) {
auto& current_velocity = p.getV();
std::array<double, 3> new_velocity{};
for (size_t i = 0; i < dimensions_; ++i) {
new_velocity[i] = current_velocity[i] * scaling_factor_;
}
p.updateV(new_velocity);
} */

dead code

Comment on lines +163 to +183
/*for (auto &particle : particles_) {
auto mass = particle.getM();
if(mass <= 0) {
Logger::getInstance().error("Mass of particle must be positive");
return;
}
// factor for the Maxwell-Boltzmann distribution
double average_velocity = std::sqrt(initial_temperature_ / mass);
// Generate random velocity for the particle
std::array<double, 3> random_velocity =
maxwellBoltzmannDistributedVelocity(average_velocity, dimensions_);
auto current_velocity = particle.getV();
for (size_t i = 0; i < dimensions_; ++i) {
current_velocity[i] += random_velocity[i];
}
particle.updateV(current_velocity);
} */

dead code

Comment on lines +226 to +234
/*for (auto& p : particles_) {
auto& current_velocity = p.getV();
std::array<double, 3> new_velocity{};
for (size_t i = 0; i < 3; ++i) {
new_velocity[i] = current_velocity[i] * scaling_factor_;
}
p.updateV(new_velocity);
} */

dead code - just FYI, if you want to keep another version of the code, the best practice is to simply keep it on another branch. And if you no longer use it, just remove it; you can always access it via the older commit IDs.


size_t molecules_this_iteration = particles.size();

Calculation<Position>::run(particles, params_.time_delta, option);

When you read from the checkpoint, there is no force available from the previous step. So, you need to compute it once before starting the main loop. Currently, the very first step (position update) after reading the checkpoint file is missing the force contribution.
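A minimal sketch of the ordering described here, with a placeholder force kernel; `computeForces`, `runFromCheckpoint`, and the `Particle` fields are hypothetical names, not the project's API:

```cpp
#include <array>
#include <vector>

struct Particle {
  std::array<double, 3> x{}, v{}, f{}, old_f{};
  double m = 1.0;
};

// Stand-in force kernel; the real project computes Lennard-Jones forces here.
void computeForces(std::vector<Particle> &particles) {
  for (auto &p : particles) {
    p.old_f = p.f;
    p.f = {0.0, 0.0, -p.m * 9.81};  // placeholder: gravity only
  }
}

void runFromCheckpoint(std::vector<Particle> &particles, double dt, double t_end) {
  // A checkpoint holds positions and velocities but no force from the
  // previous step, so compute forces once before the first position update.
  computeForces(particles);

  for (double t = 0.0; t < t_end; t += dt) {
    for (auto &p : particles)               // position update (Störmer-Verlet)
      for (int i = 0; i < 3; ++i)
        p.x[i] += dt * p.v[i] + dt * dt / (2.0 * p.m) * p.f[i];

    computeForces(particles);               // new forces for this step

    for (auto &p : particles)               // velocity update
      for (int i = 0; i < 3; ++i)
        p.v[i] += dt / (2.0 * p.m) * (p.old_f[i] + p.f[i]);
  }
}
```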

Comment on lines +10 to +19
if (SimParams::enable_v_threshold) {
// Clamp velocity based on absolute threshold (positive and negative)
for (int i = 0; i < 3; ++i) {
if (v[i] > SimParams::v_threshold) {
v[i] = SimParams::v_threshold; // Positive threshold
} else if (v[i] < -SimParams::v_threshold) {
v[i] = -SimParams::v_threshold; // Negative threshold
}
}
}

Why is this required?
This would destroy the symplectic nature of the time integration scheme we use, so I am wondering what the use case was here.
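For reference, the standard Velocity-Störmer-Verlet update (textbook form, not quoted from this codebase); clamping the velocity after this update inserts an extra, non-Hamiltonian step, which is what breaks the symplectic property:

$$x_i(t+\Delta t) = x_i(t) + \Delta t\, v_i(t) + \frac{(\Delta t)^2}{2 m_i} F_i(t)$$

$$v_i(t+\Delta t) = v_i(t) + \frac{\Delta t}{2 m_i}\left(F_i(t) + F_i(t+\Delta t)\right)$$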

Comment on lines +37 to +41
if (it->first->getSigma() == it->second->getSigma()) {
sigma = it->first->getSigma();
} else {
sigma = (it->first->getSigma() + it->second->getSigma()) / 2;
}

You could try some optimization here if time permits, as discussed in the last review: for example, precomputing the possible combinations and using them directly instead of computing them for all pairs separately.
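A rough sketch of the precomputation suggested here, assuming each particle carries a small integer type index; `LJParams`, `precomputeSigma`, and the `getType()` accessor are hypothetical names:

```cpp
#include <cstddef>
#include <vector>

struct LJParams { double sigma; double epsilon; };

// Mix sigma once per pair of particle *types* (Lorentz rule: arithmetic mean)
// instead of branching and averaging for every particle pair in the hot loop.
// A second table can hold pre-mixed epsilon values (geometric mean) the same way.
std::vector<std::vector<double>> precomputeSigma(const std::vector<LJParams> &types) {
  const std::size_t n = types.size();
  std::vector<std::vector<double>> mixed(n, std::vector<double>(n));
  for (std::size_t a = 0; a < n; ++a)
    for (std::size_t b = 0; b < n; ++b)
      mixed[a][b] = 0.5 * (types[a].sigma + types[b].sigma);
  return mixed;
}

// In the pair loop the branch then becomes a plain lookup, e.g.:
//   double sigma = mixed_sigma[p1->getType()][p2->getType()];
```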

EXPECT_EQ(upper_right_actual_ids, upper_right_expected_ids) << "Upper right neighbor IDs do not match expected values";
}
*/

dead code - why did you comment out these tests?

}
*/

TEST_F(PeriodicBoundaryTest, PeriodicTransitionTest) {

No comment is provided about what this test is actually doing. Briefly describe the goal of the test and the expected output.

@manishmishra6016 (Collaborator)

Just FYI,
I was able to compile on the cluster with some minor changes to the CMake file.

I added `find_package(XercesC REQUIRED)`
and changed
`target_link_libraries(MolSim PUBLIC xerces-c)` to `target_link_libraries(MolSim PUBLIC XercesC::XercesC)`.

With these, it works seamlessly with `cmake ..` followed by `make` in the build folder.

I was not able to compile with the Intel compiler, however. I think this is due to the use of the C++20 "concepts" feature (and maybe some others as well).

@LivanKov (Owner, Author)

> Just FYI, I was able to compile on the cluster with some minor changes to the CMake file. I added `find_package(XercesC REQUIRED)` and changed `target_link_libraries(MolSim PUBLIC xerces-c)` to `target_link_libraries(MolSim PUBLIC XercesC::XercesC)`. With these, it works seamlessly with `cmake ..` followed by `make` in the build folder. I was not able to compile with the Intel compiler, however; I think this is due to the use of the C++20 "concepts" feature (and maybe some others as well).

Interesting, I wasn't the one doing the benchmarks on the cluster, so I'd have to clarify that further with my teammates. From what I've gathered, however, they had no issues with compiling and running the program.
As far as the concepts go, they aren't strictly necessary, but they do enforce slightly stricter behaviour for templated classes, which I like, so I decided to leave them in.
Also, if I recall correctly, it was clarified during our last meeting that the updated cluster we were supposed to use would generally have no problems compiling code with more modern features.
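For illustration only, a generic example of how a C++20 concept constrains a templated class; `ParticleStorage` and `Solver` are made-up names, not taken from this codebase:

```cpp
#include <concepts>
#include <cstddef>

// A concept describing the minimal interface a storage type must provide.
template <typename C>
concept ParticleStorage = requires(C c) {
  { c.size() } -> std::convertible_to<std::size_t>;
  c.begin();
  c.end();
};

// The constraint rejects unsuitable types at the template boundary with a
// readable diagnostic instead of failing deep inside the template body.
template <ParticleStorage Container>
class Solver {
public:
  explicit Solver(Container &c) : container_(c) {}
  std::size_t count() const { return container_.size(); }

private:
  Container &container_;
};

// Solver<std::vector<double>> instantiates fine; Solver<int> is rejected
// immediately because int does not satisfy ParticleStorage.
```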

If you have any more questions/complaints, please let me know.

Cheers

@LivanKov LivanKov merged commit f7a58d9 into master Jan 19, 2025
2 checks passed
@LivanKov LivanKov deleted the dev-worksheet_4 branch January 19, 2025 19:02