
Conversation

@MakisH (Member) commented Jun 27, 2022

It looks like users do not really understand the issue with MPI ports, nor how much of a compromise sockets are.
This adds a reference to Benjamin's dissertation (scaling results), but we may need to add even more information.

@uekerman what else could we improve here?

@uekerman (Member) left a comment


Looks good. I added a bit more.

@MakisH (Member, Author) commented Jun 27, 2022

@uekerman I added a few more clarifications, based on our discussion.

@Scriptkiddi (Contributor)

Would it be possible to add a quantification for very large coupling meshes, e.g. "very large coupling meshes (> 1,000,000 vertices)", to help people decide quickly which category they fall into?
I would guess that different fields have different understandings of what a very large mesh is.

@fsimonis (Member)

@Scriptkiddi The basic guideline is to use sockets.

The size of the communicated mesh depends on many factors; vertex count is only part of the story. Mesh connectivity, defined mappings, defined and exchanged data, watch integrals, and filtering strategies all affect the communicated size.

You will need to use the built-in profiling, or do your own.

The general guideline for the built-in profiling (currently missing from the docs) is:

  1. Start using sockets.
  2. Enable sync mode in the configuration.
  3. Run your full test case for a few time steps.
  4. Establish your baselines:
    • For the mesh transfer, check the event partition.sendGlobalMesh in initialize.
    • For the data transfer, check the events m2n.sendData and m2n.receiveData in advance.
  5. Figure out whether these impact the overall simulation time. If not, you are done.
  6. Change to <m2n:mpi> and rerun the case (see the configuration sketch after this list).
  7. Compare the same events as in step 4 against your established baseline and select the preferred method.
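For concreteness, here is a minimal sketch of the configuration parts this guideline touches, assuming a preCICE v2-style precice-config.xml. The participant names "Fluid" and "Solid" are placeholders, and the sync-mode attribute is my recollection of how synchronized events were enabled at the time; treat it as an assumption and check the configuration reference of your preCICE version.

    <!-- Minimal sketch, not a complete configuration. Participant names
         "Fluid" and "Solid" are placeholders. -->
    <solver-interface dimensions="3" sync-mode="1">
      <!-- Step 2: sync mode synchronizes the participants so that the event
           timings are comparable (assumption: the attribute name and its
           placement may differ between preCICE versions). -->

      <!-- Steps 1-5: establish the baseline with sockets. -->
      <m2n:sockets from="Fluid" to="Solid" exchange-directory=".." />

      <!-- Step 6: switch to MPI ports and rerun the same case:
           <m2n:mpi from="Fluid" to="Solid" />                     -->

      <!-- participants, meshes, data, and coupling scheme omitted -->
    </solver-interface>

Only the m2n line changes between the two runs, so the comparison in step 7 isolates the cost of the communication method.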

@uekerman (Member)

Would it be possible to add a quantification for very large coupling meshes?

It mainly depends on how this compares to the computational cost of your solver. Good point, but hard to quantify 😕

@MakisH merged commit 7401104 into master on Jul 5, 2022
@MakisH deleted the mpi-ports-clarifications branch on Jul 5, 2022, 12:05
