
Computational Modeling of a Multiphase Flow Separator
Poster · December 2017
DOI: 10.13140/RG.2.2.13959.70561
Available at: https://www.researchgate.net/publication/321528523



CFD Modeling of a Multiphase Gravity Separator Vessel
Gautham Narayan*, Rooh Khurram*, Ehab Elsaadawy†
King Abdullah University of Science and Technology*; Saudi Aramco R&DC†

Abstract

The poster highlights a CFD study that incorporates a combined Eulerian multi-fluid multiphase model and a Population Balance Model (PBM) to study the flow inside a typical multiphase gravity separator vessel (GSV) found in the oil and gas industry. The simulations were performed using the ANSYS Fluent CFD package running on KAUST's supercomputer, Shaheen. A highlight of a scalability study is also presented, together with the effect of I/O bottlenecks and of using the Hierarchical Data Format (HDF5) for collective and independent parallel reading of the case file. This work is an outcome of a research collaboration on an Aramco project on Shaheen.

Geometry

[Figure: GSV geometry, showing the mixed mass flow inlet, perforated baffle plate, weir, gas outlet, oil outlet, and water outlet]

Multiphase Flow Results

[Figure: Contour plots of oil fraction at t = 0, 0.5, 3, and 30 s]

Separator Case - Compute and I/O Performance

HDF5 case file read performance in Fluent, 16 GB case file (140 million cell 3D CFD grid), without compression:

  Fluent reading mode        Read time (seconds)
  1 - Host Mode              166.2
  2 - Node0                  69
  3 - Parallel Independent   42.7
  4 - Parallel Collective    54

The compute part scales very well on Shaheen. The I/O performance numbers were obtained using the HDF5 (Hierarchical Data Format) read/write modes in Fluent v17. The Independent mode offers the highest speedup, but the I/O does not scale as the core count increases; for large core counts, I/O becomes a bottleneck. KSL is discussing a possible RFE (request for enhancement) with ANSYS.
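The ~4x figure quoted in the conclusions follows directly from the read-time chart. A minimal sketch of the arithmetic (plain Python; the mode-to-time mapping assumes, per the text, that Independent is the fastest parallel mode):

```python
# Read times (seconds) for the 16 GB, 140 million cell case file,
# taken from the chart on the poster.
read_times = {
    "Host Mode": 166.2,
    "Node0": 69.0,
    "Parallel Independent": 42.7,
    "Parallel Collective": 54.0,
}

# Speedup of each mode relative to the serial host-mode read.
baseline = read_times["Host Mode"]
speedups = {mode: baseline / t for mode, t in read_times.items()}

for mode, s in speedups.items():
    print(f"{mode}: {s:.1f}x")
# Parallel Independent works out to roughly 3.9x, i.e. the ~4x
# improvement cited in the conclusions.
```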

Unstructured Grid

Mesh information:
• 700,000 tetrahedral elements
• Maximum face size: 0.1 m
• Minimum size: 0.01 m
• Growth rate: 1.2

[Figure: Vector plot of normalized velocity, showing the mixed mass flow inlet, baffle plate, oil-water stratification, gas outlet, oil outlet, and water outlet]

Burst Buffer: I/O Speedup

Fluent F1 race car test case: 140 million cell mesh; 21 GB output file.

[Figure: I/O time (seconds) for three repeated tests on the burst buffer vs. Lustre (left), and on Lustre vs. 1 and 10 burst buffer nodes (right)]
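On Cray systems such as Shaheen, burst buffer allocations are typically requested through DataWarp job-script directives and then used by staging files into the fast tier before the solver reads them. The following is a generic, hypothetical stage-in sketch, not the Shaheen workflow itself; the directory names and the `stage_in` helper are illustrative stand-ins:

```python
import shutil
import tempfile
from pathlib import Path

def stage_in(src: Path, fast_tier: Path) -> Path:
    """Copy a case file from the slow parallel file system into the
    fast (burst buffer) tier, returning the staged path."""
    dst = fast_tier / src.name
    shutil.copy2(src, dst)
    return dst

# Temporary directories standing in for Lustre and the burst buffer.
lustre = Path(tempfile.mkdtemp(prefix="lustre_"))
bb = Path(tempfile.mkdtemp(prefix="bb_"))

# Dummy payload standing in for a Fluent .cas.h5 case file
# (not a real HDF5 file, just bytes to move).
case = lustre / "separator.cas.h5"
case.write_bytes(b"\x89HDF\r\n" + b"0" * 1024)

staged = stage_in(case, bb)
print(f"staged {case.name} into {staged.parent.name}")
```

The solver would then be pointed at the staged copy, with results staged back out to Lustre at job end.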
Computational Bottleneck

In order to speed up I/O, performance was analysed on the burst buffer, an SSD-based file system integrated within the Aries network on Shaheen. A 20-30% I/O improvement is observed, and the results are repeatable (figure on left). Oversubscription of burst buffer nodes showed modest scalability in I/O (figure on right). During our tests we discovered (courtesy of KSL scientist Georgios Markomanolis) that HDF5-based parallel I/O does not work on the burst buffer. This issue will also be discussed with ANSYS.

GSV Dimensions
  Length           45.50 m
  Diameter          4.26 m
  Weir height       2.00 m

Directional Porosity
  Porosity                      0.3
  Axial resistance coefficient  1000 m^-2
  Radial resistance coefficient 10000 m^-2

Inlet mass flow
  Oil    1.8x10^5 bbl / 8.4%
  Water  5.4x10^4 bbl / 2.5%
  Gas    1.9x10^6 bbl / 89%

Multi-Phase Model
• Euler-Euler multiphase model
• Four phases:
  ▪ Oil - primary phase
  ▪ Water - two secondary phases for the population balance model
  ▪ Gas

Population Balance Model (PBM)
• PBM solved using the Inhomogeneous Discrete Method (IDM)
• Two water phases of 16 bins each
• Default settings for aggregation and breakage

Inlet Droplet Distribution
• Log-normal distribution
• Mean diameter: 100 microns
• Standard deviation: 33 microns

Initial condition
Fluent patch-based initialization.

[Figure: Patched initial stratification: gas layer above the weir (height 2.0 m), oil layer (1.4 m), water layer (1.0 m)]

Compute Performance Timer:

150 iterations on 2048 compute cores:
  Average wall-clock time per iteration: 4.407 sec
  Global reductions per iteration: 196 ops
  Global reductions time per iteration: 0.000 sec (0.0%)
  Message count per iteration: 7462618 messages
  Data transfer per iteration: 66972.895 MB
  LE solves per iteration: 25 solves
  LE wall-clock time per iteration: 0.784 sec (17.8%)
  LE global solves per iteration: 2 solves
  LE global wall-clock time per iteration: 0.012 sec (0.3%)
  Total wall-clock time: 661.027 sec

150 iterations on 16384 compute cores:
  Average wall-clock time per iteration: 0.589 sec
  Global reductions per iteration: 196 ops
  Global reductions time per iteration: 0.000 sec (0.0%)
  Message count per iteration: 70030690 messages
  Data transfer per iteration: 155328.436 MB
  LE solves per iteration: 25 solves
  LE wall-clock time per iteration: 0.144 sec (24.4%)
  LE global solves per iteration: 2 solves
  LE global wall-clock time per iteration: 0.021 sec (3.5%)
  Total wall-clock time: 88.359 sec

Scalability: ANSYS Benchmarks
• Cavity flow: 0.5M nodes
• Sedan: 4M nodes
• F1 Racecar: 140M nodes

Conclusions:

A GSV model has been developed and preliminary results obtained for oil separation. The overall framework is scalable on Shaheen, although for large core counts I/O becomes a bottleneck. Exploring various read/write options and burst buffer technology helped identify optimal parameters for a faster I/O rate. The results show that a significant improvement (~4x) in I/O performance can be achieved by using HDF5 file formats for large cases in Fluent, and an additional 30% I/O improvement can be obtained by using the burst buffer on Shaheen. A further I/O improvement request will be sent to ANSYS.
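The compute performance timers support the claim that the compute part scales well. A minimal sketch of the arithmetic, using the reported wall-clock totals for 150 iterations:

```python
# Timer totals taken from the poster's compute performance box.
cores_small, cores_large = 2048, 16384
t_small, t_large = 661.027, 88.359  # seconds for 150 iterations

speedup = t_small / t_large         # observed speedup
ideal = cores_large / cores_small   # ideal speedup for an 8x core increase
efficiency = speedup / ideal        # parallel (strong-scaling) efficiency

print(f"speedup: {speedup:.2f}x (ideal {ideal:.0f}x)")
print(f"parallel efficiency: {efficiency:.1%}")
# roughly 7.48x observed against an ideal 8x, i.e. ~93% efficiency
```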


Future Work:
• Fully understand the flow structures around the momentum breaker.
• Conduct parametric studies on the breaker and weir.
• Set up an intuitive graphical workflow for design optimization.
• Continue collaborative I/O improvement work with ANSYS.

References:
[1] Vilagines, R. D., & Akhras, A. R., "Three-Phase Flows Simulation for Improving Design of Gravity Separation Vessels," SPE Annual Technical Conference and Exhibition. DOI: 10.2118/134090-MS.
[2] Manoj Kumar Vani (ANSYS), "HPC Scale-up Test on Shaheen: Gravity Separator," 2016.
