FEM Assembly and Solver Benchmarks

It is both interesting and useful to compare simulation tools and the performance of different implementations. In the following we have performed three basic tests relevant to FEM simulation codes: sparse matrix-vector multiplication, finite element matrix assembly, and solving the Poisson equation on a unit square, with the following five different finite element solver implementations:

  • FEATool Multiphysics - the FEATool toolbox written in Matlab / Octave m-script code
  • FEAT2D - a Fortran 77 Finite Element library used in the FeatFlow CFD code
  • Julia - a very new programming language that aims to achieve close to the performance of C while at the same time being very easy to program and use (perhaps as if Matlab were reimplemented from scratch today)
  • FEniCS Project - a computing platform for solving partial differential equations (PDEs) with high-level Python and C++ interfaces
  • SFEA (Stencil based Finite Element Analysis) - an experimental high-performance stencil-based FEM solver written in Fortran 90 which comes very close to the memory bandwidth limit and thus gives an indication of the upper performance limit

All tests were performed on a single core (serial mode) of a desktop system with an Intel Core i7-3820 CPU running Linux, and the Fortran codes were compiled with the Intel Fortran compiler.

Test 1 - Sparse Matrix Vector Product

1/h     FEAT2D      FEATool           Julia     SFEA
        (Fortran)   (Matlab/Octave)   (Julia)   (Fortran)
128     0           0.002             0.03      0
256     0.001       0.002             0.031     0
512     0.005       0.006             0.034     0.001
1024    0.024       0.021             0.05      0.002
2048    0.197       0.085             0.08      0.009
4096    -           0.411             0.25      0.034

For the sparse matrix-vector product benchmark we can see that FEAT2D (CSR sparse format), FEATool (Matlab sparse format), and Julia (CSC sparse format) perform similarly, especially for larger grid sizes, while the stencil-based SFEA approach is about an order of magnitude faster. FEniCS does not seem to support matrix and vector operations at a high level, so no FEniCS data is included here.
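
As a rough illustration of what such a test involves, below is a minimal sketch in Python/SciPy (not the code used for any of the benchmarks above) that times a CSR sparse matrix-vector product, using a 5-point Laplacian on an n x n grid as a stand-in for the FEM system matrices:

```python
# Minimal sketch (not the benchmark code): time a CSR sparse matrix-vector
# product for a 5-point Laplacian on an n x n grid, as a stand-in for the
# FEM system matrices used in the tests above.
import time
import numpy as np
import scipy.sparse as sp

def laplacian_2d(n):
    """Standard 5-point finite difference Laplacian on an n x n grid (CSR)."""
    T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
    I = sp.identity(n)
    return (sp.kron(I, T) + sp.kron(T, I)).tocsr()

for n in [128, 256, 512, 1024]:
    A = laplacian_2d(n)
    x = np.random.rand(A.shape[0])
    t0 = time.perf_counter()
    y = A @ x                                  # CSR sparse matrix-vector product
    print(f"1/h = {n:5d}: {time.perf_counter() - t0:.4f} s")
```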

Test 2 - Finite Element Matrix Assembly

1/h     FEAT2D      FEATool           FEniCS         Julia
        (Fortran)   (Matlab/Octave)   (Python/C++)   (Julia)
128     0.05        0.05              0.05           0.24
256     0.13        0.14              0.12           0.57
512     0.42        0.43              0.31           1.7
1024    1.5         1.7               1.1            6.5
2048    6           7                 5              26
4096    24          -                 -              105

Here, surprisingly, the vectorized and optimized FEATool Matlab code is actually just as fast as the FEAT2D Fortran code (which is a very good and efficient reference implementation). FEniCS is a little faster, but this is to be expected since FEAT2D and FEATool both use quadrilateral shape functions, which are more expensive to assemble than the linear triangular ones used by FEniCS and Julia. The performance of Julia is unfortunately not very good here, but this could be due to a non-optimized implementation. The SFEA code is not included since, being stencil based, its FEM matrix assembly costs virtually nothing.
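
To illustrate what is meant by vectorized assembly, the following sketch (written in Python/NumPy rather than Matlab, and not taken from any of the benchmarked codes) computes all P1 triangle element stiffness matrices at once and scatters them into the global sparse matrix in a single call, which is the same basic idea that makes the FEATool m-script assembly competitive with compiled code:

```python
# Minimal sketch of vectorized P1 (linear triangle) stiffness matrix assembly
# for the Laplacian; illustrative only, not taken from any of the benchmarked codes.
import numpy as np
import scipy.sparse as sp

def p1_stiffness(nodes, tris):
    """nodes: (n_nodes, 2) coordinates, tris: (n_tris, 3) vertex indices."""
    x = nodes[tris, 0]                         # (n_tris, 3) x-coordinates per triangle
    y = nodes[tris, 1]                         # (n_tris, 3) y-coordinates per triangle
    # Coefficients of the P1 basis gradients: grad(phi_i) = (b_i, c_i) / (2*area).
    b = np.stack([y[:, 1] - y[:, 2], y[:, 2] - y[:, 0], y[:, 0] - y[:, 1]], axis=1)
    c = np.stack([x[:, 2] - x[:, 1], x[:, 0] - x[:, 2], x[:, 1] - x[:, 0]], axis=1)
    area = 0.5 * ((x[:, 1] - x[:, 0]) * (y[:, 2] - y[:, 0])
                  - (x[:, 2] - x[:, 0]) * (y[:, 1] - y[:, 0]))
    # All 3x3 element matrices at once: K_ij = (b_i*b_j + c_i*c_j) / (4*area).
    Ke = (b[:, :, None] * b[:, None, :] + c[:, :, None] * c[:, None, :]) \
         / (4.0 * area[:, None, None])
    rows = np.repeat(tris, 3, axis=1).ravel()  # global row index of each local entry
    cols = np.tile(tris, (1, 3)).ravel()       # global column index of each local entry
    n = nodes.shape[0]
    # coo_matrix sums duplicate (row, col) entries, i.e. performs the scatter-add.
    return sp.coo_matrix((Ke.ravel(), (rows, cols)), shape=(n, n)).tocsr()

# Example: two-triangle mesh of the unit square.
nodes = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
tris = np.array([[0, 1, 2], [0, 2, 3]])
print(p1_stiffness(nodes, tris).toarray())
```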

Test 3 - Linear Solver for the Poisson Equation

1/h     FEAT2D           FEATool                    FEniCS                Julia        SFEA
        (Fortran, GMG)   (Matlab/Octave, UMFPACK)   (Python/C++, PETSc)   (CHOLMOD?)   (Fortran, GMG)
128     0.025            0.368                      0.19                  0.18         0.049
256     0.05             1.718                      0.79                  0.322        0.064
512     0.21             7.406                      4.65                  0.9          0.11
1024    1.1              47.646                     34.1                  3.4          0.16
2048    6                338.14                     -                     14.6         0.49
4096    63               -                          -                     83           2.9
8192    -                -                          -                     -            20

In the final test the Poisson equation is solved on the unit square with a unit source term and homogeneous Dirichlet boundary conditions everywhere. The default linear solver of each code is used throughout the tests. FEAT2D and SFEA employ a geometric multigrid (GMG) solver, FEATool (Matlab/Octave) uses UMFPACK, and FEniCS uses PETSc. From the timings we can see that the UMFPACK and PETSc direct sparse solvers have about the same performance, with a slight advantage for PETSc (although it failed for the 1/h=2048 grid). As the problem size increases we can see that the GMG solvers scale significantly better than the direct solvers, with the stencil-based SFEA approach being about an order of magnitude faster still (about 700 times faster than UMFPACK on the 1/h=2048 grid with 4.2 million unknowns). It is not quite clear which solver Julia uses, but since its performance figures are on par with the FEAT2D GMG solver, we suspect it detects that the problem can be solved faster with a Cholesky factorization and uses a corresponding solver.
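
For a concrete picture of the direct-versus-iterative distinction discussed here, the sketch below (plain Python/SciPy, not one of the benchmarked solvers, and using unpreconditioned CG rather than geometric multigrid) solves the same kind of discrete Poisson problem with both a sparse direct factorization and an iterative Krylov solver:

```python
# Minimal sketch (not from the benchmarked codes): solve the discrete Poisson
# problem -Laplace(u) = 1 with homogeneous Dirichlet conditions on the unit
# square, comparing a sparse direct solve (SuperLU, or UMFPACK if
# scikits.umfpack is installed) with unpreconditioned CG. A geometric
# multigrid solver, as used by FEAT2D and SFEA, scales better still.
import time
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def poisson_2d(n):
    """5-point Laplacian for the interior unknowns of a grid with h = 1/n."""
    m = n - 1                                  # interior points per direction
    T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(m, m))
    I = sp.identity(m)
    A = (sp.kron(I, T) + sp.kron(T, I)).tocsc()
    b = np.full(m * m, 1.0 / n**2)             # unit source term scaled by h^2
    return A, b

for n in [128, 256, 512]:
    A, b = poisson_2d(n)
    t0 = time.perf_counter()
    u_direct = spla.spsolve(A, b)              # sparse direct factorization and solve
    t1 = time.perf_counter()
    u_cg, info = spla.cg(A, b)                 # unpreconditioned conjugate gradients
    t2 = time.perf_counter()
    print(f"1/h = {n:4d}: direct {t1 - t0:.3f} s, cg {t2 - t1:.3f} s (info={info})")
```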

Summary

We have looked at how five different finite element implementations perform on three important test cases. For sparse matrix-vector operations the codes performed very similarly, with the exception of the stencil-based approach which was an order of magnitude faster.

For matrix assembly it was quite surprising how good the performance of a properly vectorized and optimized Matlab implementation can be, matching that of a fast Fortran implementation.

Regarding the solvers, GMG unsurprisingly beat the direct solvers as the grid sizes increased. Here, too, the stencil-based approach was faster by an order of magnitude or more.

To conclude, one can say that it is entirely possible to write high-performance FEM simulation codes in many programming languages. What seems more important for performance is the choice of data structures (sparse vs stencil) and algorithms (direct vs iterative solvers). The outlier might be Julia, for which it currently isn't fully clear how it will perform, but being a very new language it certainly shows a lot of potential.

Category: solver

Tags: assembly benchmark fenics fortran julia matlab
