Rendering FLEXI results in Blender

Blender is a powerful open-source suite for 3D modelling, animation and rendering. It is not a post-processing software in the CFD sense, but it can create beautiful visualizations to show off your results. Rendering FLEXI simulation data in Blender is a little involved, but totally worth it. Here are some samples:

In this post, we are going to show you how to take post-processed data from a CFD solver and render it in Blender, allowing you to present your simulation results in an eye-catching way. We use data produced by FLEXI and post-process it with ParaView, but the procedure is applicable to other toolchains as well. The focus is on how to use Blender to render already post-processed data, so there will be no step-by-step instructions on how to obtain the data – only a rough overview will be given in that regard.

We will split the discussion into two parts: In the first part, we will show you how to get the data into Blender, how to set up a simple scene and how to generate a static rendering. By static we mean that the object in the scene, i.e. the CFD solution data, is fixed and does not change during the rendering process – think of a single time step / solution file from your CFD data. In the second part, we will then discuss how to generate a dynamic rendering – when you want to render a series of images with time-dependent CFD data – think of rendering every time step of the CFD solution to generate a cool movie!

We provide the data used in this tutorial (the configuration and mesh used for the calculations done by FLEXI, the state files used to post process the results in ParaView, the layout files in Blender and the scripts used to automate the whole process) in a git repository on GitHub. Feel free to download the files if you want to follow along during the tutorial.


Part I: Getting started and rendering a static scene

As our example, we will simulate the flow around a rather special object: The head of a monkey, affectionately called Suzanne, which is the unofficial mascot of Blender:

The Suzanne head


Running the Simulation

We just want to provide some details about how we set up the Suzanne simulation in case you want to repeat it. To generate the mesh, we first exported the head from Blender in .stl format. You probably need to clean up the mesh a bit: make sure it is one continuous mesh (by default the eye sockets are separate from the main part of the head) and that there are no double vertices. The actual hexahedral mesh around the head was then created using Hexpress. The parameters of the simulation were set up in such a way that we achieve a Mach number of 0.2 and a Reynolds number of several hundred, which leads to a laminar but non-stationary flow in the wake of the head.

Post-process the Data

As already mentioned, Blender is not a post-processing software. You need to create the actual visualization with the post-processing tool of your choice – any tool is fine as long as it provides a suitable output format (more on that in a second).

We are going to visualize the streamlines around the head using ParaView, employing the plugin that is shipped with FLEXI. We place a spherical source for the streamlines in front of the monkey head and apply the “Tube” filter to give them some thickness – that looks a lot nicer. We choose to color the streamlines by the magnitude of the velocity. This is what the result (without the monkey head, which we will insert in Blender later) looks like in ParaView:

Our scene in ParaView – a few streamlines with the tube filter applied.

Not bad, but we want to make that look better!

Exporting the data

Blender supports several data formats for input, and there are two common formats that ParaView can export: X3D and PLY. These two are fundamentally different, and one of them may be more suitable depending on what you are trying to achieve.

The X3D export is available after choosing File->Export Scene from the ParaView menu. You need to change the file type to X3D or X3DB (the binary version of X3D). The scene export means that everything you see in ParaView will be exported, including e.g. visible planes and arrows from slices or glyphs. Additionally, some things you don’t see will also be exported, specifically several default lights and a camera position.

PLY export is done using File->Save Data. This will only save the data of the object that is currently selected in the pipeline browser! Also, the PLY export only works with polygonal data. If the PLY option is not available, that means your currently selected object is not made of polygonal data. No additional data like lights or camera positions will be exported.

Both formats are able to export the current coloring of your object, but for PLY you need to make sure to check the corresponding option in the output dialog.

Importing into Blender

Now it’s time to start up Blender. We are going to start with a clean scene, so select everything by pressing A twice, then press X and confirm using the left mouse button (LMB). Import your data using File->Import and selecting either Stanford (.ply) or X3D Extensible 3D. If you imported a scene, the Outliner in the top-right corner will show that, next to your actual geometry (which is called Shape_IndexedFaceSet), a camera (Viewport) and several lights (DirectLight) were imported as well. Go ahead and delete those, too. In case you imported a PLY file, only the actual object is imported, with a name that depends on your file name.

Imported .x3d file from ParaView. Notice the additional objects in the top-right corner.

Choosing a Rendering Engine

Blender supports several rendering engines – the engine is the part of the program that will actually draw the final picture of your scene. There are different workflows with each of the engines, and in this tutorial we are using the so-called Cycles engine. It is the most modern of the engines, but not the default one. You need to change the engine in the middle of the menu bar – where it says Blender Render, switch to Cycles Render.
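If you prefer scripting, the setup steps so far can also be done from Blender’s Python console. This is a minimal sketch, assuming Blender 2.7x; the file name streamlines.x3d is a placeholder for your own export, and it must be run inside Blender (the bpy module is not available in a plain Python interpreter):

```python
import bpy

# Delete everything in the startup scene
bpy.ops.object.select_all(action='SELECT')
bpy.ops.object.delete(use_global=False)

# Import the ParaView export (use bpy.ops.import_mesh.ply(...) for PLY files)
bpy.ops.import_scene.x3d(filepath="streamlines.x3d")

# Remove the default lights and camera that an X3D scene export brings along
bpy.ops.object.select_all(action='DESELECT')
for obj in bpy.data.objects:
    if obj.name.startswith(('DirectLight', 'Viewp')):
        obj.select = True
bpy.ops.object.delete(use_global=False)

# Switch the render engine from Blender Render to Cycles
bpy.context.scene.render.engine = 'CYCLES'
```

This mirrors the manual steps exactly and will come in handy for the scripted workflow in Part II.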

Camera Manipulation

To move around in the scene, you can press the middle mouse button (MMB) to rotate around the current center of the screen. The mouse wheel lets you zoom in and out and you can pan around by pressing the Shift key and the MMB at the same time.

Object Manipulation

In case you want to scale or rotate your imported object (exporting from ParaView will mess up the axis orientations), select your imported geometry using the right mouse button (RMB). You can rotate objects by pressing R, optionally followed by the axis you want to rotate about – e.g. pressing R and then X lets you rotate around the x-axis. To scale an object, press S. Translating an object is done by pressing the G key, again optionally followed by the axis you want to translate along.

All actions have to be confirmed using the LMB once you are satisfied, or press ESC to abort. A nice trick is to hold the Ctrl key while doing any of the above (and many more) operations, which will snap to “nice” values, e.g. multiples of 5 degrees for the rotation.

Adding Light

Since we deleted everything, our scene does not contain a light source at the moment, which makes actually seeing stuff pretty hard. We can change that by pressing Shift-A to open up the Add menu and then choosing one of the objects under the Lamp category. The different lights should be pretty self-explanatory; we choose an Area-type light for our scene. After inserting the light, you can change its properties in the bottom-right corner. In the Data tab of the Properties view, you can change the size of the Area lamp using the corresponding slider. Crank it up to see that it is actually a rectangular light source. You can also tweak the color and the strength of the light here.

The imported streamlines after a rotation with an added Area-type lamp.

You can translate and rotate the light to aim it at the part of the scene that you want to illuminate. Of course you can also add additional light sources to illuminate different parts of the scene. With a light source in place, we can now give our scene a first render. Press Shift-Z to change the current view from 3D to Rendered and see a preview of what your scene will look like.

A rendered view of the early scene, accessed by pressing Shift-Z.

Adding a Material

So the light seems to be working, but our streamlines are all dull and white, not colored like they were in ParaView. This is because we first need to add a material to the streamlines! To do this, split the main view by clicking and dragging the diagonal lines in the top-right corner of the 3D view to the left. You now have two views in the middle. Switch the right one to the Node Editor using the menu in its bottom-left corner.

How to access the Node Editor view.

Make sure to select your imported geometry and in the Properties view on the bottom-right, switch to the Material tab. Now click on Use Nodes, and a default material will be assigned to the geometry. The Node Editor allows you to create your material in a kind of flowchart, combining different types of materials with each other to achieve the desired look. The default material is only diffuse, meaning it scatters the light. You can see a Diffuse BSDF shader connected to the Surface node of the Material output in the Node Editor.

The default material in the Node Editor.

In reality most materials are not only diffuse, but also have a little bit of gloss to them. This can be achieved by mixing a glossy and a diffuse shader. To do this, press Shift-A to add another node in the Node Editor, then choose Shader->Glossy BSDF. To combine two shaders, we also need to add a Mix Shader in the same way. Using drag-and-drop, we can create connections from output sockets to input sockets and build our material this way. The result could look like this:

A basic material with a Diffuse and a Glossy shader mixed together.


In the Mix shader, the Fac value determines the relative strength of the two inputs – 0 being only the first input. You can set a uniform color using the corresponding properties, but of course we want to use the coloring exported from ParaView.
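The mix itself is just a linear blend between the two inputs. A small sketch of the math in plain Python (nothing Blender-specific), to make the Fac semantics concrete:

```python
def mix(first, second, fac):
    """Blend two RGB tuples the way the Mix Shader's Fac input does:
    fac = 0 returns only the first input, fac = 1 only the second."""
    return tuple((1.0 - fac) * a + fac * b for a, b in zip(first, second))

diffuse_color = (0.8, 0.1, 0.1)   # a mostly-red diffuse contribution
glossy_color  = (1.0, 1.0, 1.0)   # a white glossy highlight

print(mix(diffuse_color, glossy_color, 0.0))   # pure diffuse
print(mix(diffuse_color, glossy_color, 0.2))   # mostly diffuse, a bit of gloss
```

In the real shader the blend happens on the two shaders’ light contributions rather than on flat color values, but the role of Fac is the same.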

The coloring has been imported as an attribute called Col. We can use this information by adding a node from Input->Attribute and then setting the Name value to Col, afterwards connecting the Color output to the corresponding inputs of the shaders.

A material using the imported coloring.


If you now look at the rendered view, you can actually see the colors as they were exported from ParaView!

A rendered view of our object with the imported colors.
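If you want this material to be reproducible, the same node setup can also be built through Blender’s Python API. A minimal sketch, assuming Blender 2.7x with Cycles, an imported object selected, and a material name (StreamlineMat) of our own choosing; it must run inside Blender:

```python
import bpy

mat = bpy.data.materials.new("StreamlineMat")
mat.use_nodes = True                       # creates Diffuse BSDF + Material Output
nodes = mat.node_tree.nodes
links = mat.node_tree.links

# The coloring exported from ParaView lives in the vertex-color attribute "Col"
attr = nodes.new('ShaderNodeAttribute')
attr.attribute_name = 'Col'

diffuse = nodes['Diffuse BSDF']            # default node created by use_nodes
glossy  = nodes.new('ShaderNodeBsdfGlossy')
mix     = nodes.new('ShaderNodeMixShader')
mix.inputs['Fac'].default_value = 0.2      # mostly diffuse, a bit of gloss

# Wire the attribute color into both shaders, then mix them into the output
links.new(attr.outputs['Color'], diffuse.inputs['Color'])
links.new(attr.outputs['Color'], glossy.inputs['Color'])
links.new(diffuse.outputs['BSDF'], mix.inputs[1])
links.new(glossy.outputs['BSDF'],  mix.inputs[2])
links.new(mix.outputs['Shader'], nodes['Material Output'].inputs['Surface'])

# Assign the material to the currently selected object
bpy.context.object.data.materials.append(mat)
```

This is the scripted equivalent of the node graph shown in the screenshots above.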

The Node Editor is a powerful tool which can be used to create almost any material, but it requires some trial and error at first. A common shader that you might also put into the mix is the Emission shader, which allows the material to emit light on its own.

Setting a camera

Until now we only rendered whatever view was currently set, but the final render(s) will be done from one or multiple cameras that we have to place. You can add a camera using Shift-A. The camera can be manipulated in the same way as any object, and to see the view of the current camera, press 0 on the numpad. A handy shortcut is Ctrl-Alt-Numpad 0 to set the current view as the camera view.

A final render can be produced by pressing F12.

Finishing the scene

This should give you an overview of the steps needed to import and render your scientific visualizations in Blender. A lot of helpful resources about Blender itself can be found online – it is unfortunately not an easy program to start with.

A lot of the learning curve is simply trying different things and seeing how they look. There are a lot of options and possibilities to make the final render look amazing. Starting from the simple scene created above, we first added the monkey head to actually see our geometry. Using the Subsurf modifier makes it look smooth. If you add a reflective plane at the bottom of the scene, you get some interesting reflections. Also, making the streamlines glow a bit helps a lot. In the end, your final render could look something like this:

Final render of our scene – the streamlines around the Suzanne head.

Part II: Scripting and rendering a dynamic scene

You now know how to get CFD results into Blender, how to generate a simple scene and how to create a nice-looking image for the title page of your PhD thesis – and we have not even dived into advanced features like camera motion yet. In theory, applying this knowledge to a time series of CFD data to generate a movie would be possible, but it is very tedious. We can do a lot better with scripting, of course! Blender has a nice Python API, and we will use it to our advantage to make “rendering a movie” pretty easy. In the following, when we talk about “dynamic” rendering, we refer to a series of data files that are rendered in the same scene – the typical application is that you have a series of solution files / state files at consecutive time steps and want to make a movie out of them. Note that the setup of the scene itself (lights, cameras, other effects) can of course also be either static or dynamic in this process.

The workflow builds on what we have discussed above, with a few modifications. We will first outline the general steps, and then give the details in an example below.

Toolchain for dynamic rendering

The steps shown in the schematic above are:

  1. Use FLEXI (or any other CFD code, although we have a clear favorite, of course! 😉 ) to generate and save time snapshots of the solution.
  2. Provide the snapshots in a ParaView-readable format or use the FLEXI-ParaView plugin to directly access the h5 solution files. If you do not use FLEXI or wish to use the standalone visualization tools, just ensure that you can load the solution data into ParaView, i.e.  convert them into .vtu or .vtk files.
  3. Prepare the ParaView statefile, i.e. postprocess the data. Load one of your temporal snapshots into ParaView, and do all the postprocessing you wish to do. For example, extract streamlines as we did for Suzanne above or compute and color isosurfaces to visualize flow features. Remember that Blender itself is not a postprocessor, it will just make the postprocessed data look fabulous. So “what you see” has to be defined in ParaView, “how it looks” is what Blender helps you with. Once you are satisfied with the postprocessing, save the ParaView state file (e.g. as layout.pvsm).
  4. Next, run the provided python script to generate the PLY or X3D for all snapshots. This script feeds the files to ParaView, applies the layout provided and exports the data in the format selected.
  5. Now you should have all the snapshot data as either X3D or PLY files. Import one of the files into Blender for reference, and generate the Blender scene, i.e. arrange cameras, lights, material properties and so on. Save the scene as a .blend file.
  6. Finally, use the provided script to have Blender render all the snapshot data consecutively and generate a series of images from your scene. Combine the series of images into a movie with an encoder of your choice… et voilà!

Now that we have a general idea of how to do the dynamic rendering, let’s run through the toolchain step by step. As an example, we choose the flow over a circular cylinder at Mach 0.2 and Re 200, which should give some nice vortex shedding. Again, if you want to repeat the simulation, all the necessary files are shipped in the git repository. Since we want to get a nice smooth video, we choose a high output frequency by setting the analyze_dt parameter in the ini file. Convert all the solution files to a ParaView-readable format or just use FLEXI‘s plugin to load the h5 files directly. Note that the toolchain relies on the time stamp being part of the file name for all steps below, for example as in Cylinder_Re200_Ma0.2_State_0000533.100000000.h5. If you use the FLEXI framework, you do not need to care about that – it is done automatically. If you are using some other CFD tool, please make sure that the solution files include a time stamp. Opening one snapshot in ParaView and setting some nice values for the Q-criterion gives us something like the screenshot below. Click on the image to see a brief video – done in ParaView – of what the flow looks like.
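Since every downstream step walks over the snapshot files, it helps to see how the time stamp can be recovered from such a file name. A small plain-Python sketch; the regular expression assumes the `..._State_<time>.h5` pattern shown above:

```python
import re

# FLEXI state files carry the solution time in the name, e.g.
# Cylinder_Re200_Ma0.2_State_0000533.100000000.h5
STAMP = re.compile(r'_State_(\d+\.\d+)\.h5$')

def solution_time(filename):
    """Extract the solution time stamp from a FLEXI state-file name."""
    match = STAMP.search(filename)
    if match is None:
        raise ValueError("no time stamp in %s" % filename)
    return float(match.group(1))

def sort_snapshots(filenames):
    """Order the snapshots chronologically for the export/render loop."""
    return sorted(filenames, key=solution_time)
```

A plain alphabetical sort would also work here because the time is zero-padded, but parsing the stamp explicitly is robust against differing pad widths.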


If you are satisfied with the postprocessing, save the ParaView statefile as cylinder.pvsm. By doing this, you have already completed steps 1-3 of the workflow. We recommend that you store all ParaView-readable files (either FLEXI h5 or vtu) in a separate folder named “states”, but that is totally up to you. For our example, the contents of the states folder look something like this:

Now, as step 4, run the provided script to apply this statefile to all the stored solution files and extract the objects to be rendered in Blender. A typical call looks like this:
python -l cylinder.pvsm -r $MYHOME/flexi/build/lib/ -x x3d -f x3d $PWD/states/*h5

The script is provided by us in the git repository – for it to work, ParaView’s batch application pvbatch must be callable from wherever you run it. Calling the script with --help gives you an overview of the available options. The ones we use here are:

  • -l: the ParaView layout or state file that defines the postprocessing steps and data to export
  • -r (optional): the path to FLEXI‘s ParaView plugin
  • -x (optional): if not set, the output will be in the PLY format, otherwise X3D can be set as above
  • -f: the output folder for the PLY / X3D files. This folder will be created if it does not exist
  • the last argument is the path to the files to be processed, i.e. to the contents of the states folder
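To give an idea of what such an export loop does internally, here is a rough sketch using ParaView’s Python interface. This is a sketch under assumptions, not the actual script: it only runs under pvbatch, it assumes the reader is the active source after loading the state, and the property holding the file name may differ with your reader – the script in the repository takes care of these details.

```python
# Sketch only: run under ParaView's pvbatch, not a plain Python interpreter.
import glob
import os

from paraview.simple import (ExportView, GetActiveSource,
                             GetActiveView, LoadState)

LoadState('cylinder.pvsm')      # load the saved postprocessing pipeline
reader = GetActiveSource()      # assumption: the reader is the active source

if not os.path.isdir('x3d'):    # output folder, as with the -f option
    os.makedirs('x3d')

for snapshot in sorted(glob.glob('states/*.h5')):
    reader.FileName = [snapshot]   # point the pipeline at this snapshot
    name = os.path.basename(snapshot).replace('.h5', '.x3d')
    ExportView(os.path.join('x3d', name), view=GetActiveView())
```

For PLY output, `SaveData` on the selected pipeline object would take the place of `ExportView`.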

Depending on the size of your computation and the number of snapshots to process, this might take some time. Once the conversion is done, the contents of the x3d folder should look similar to this:

Let us import one of those files into Blender to make sure everything worked and to set up the Blender scene. To do this, start Blender, delete all the objects from the startup scene and import an X3D file via File->Import->X3D. Change the render engine to Cycles and delete all the DirectLight and Viewport objects as before.


After this, the scene should look something like this:


Now set up the lights as described above, and press Shift-Z to check if it all worked.

To define the material you wish to apply, select the wake structures, click on the Material icon below and then on the plus icon to create a new material slot. You can change the material name by clicking on it.

We have renamed the material to NEW_MAT and assigned it to the vortex structures. This is what it should look like:

You can now set up the material as discussed above. Once you are satisfied, make sure to hit the F button next to the material name slot. This prevents Blender from deleting the material while it is not assigned to any object – since objects are imported and deleted for each snapshot during the dynamic rendering, the material would otherwise get lost. Also delete the Shape_IndexedFaceSet objects.


Save the final result as scene.blend. That completes step 5 of the toolchain. Of course you can do a lot more than what we have described here – adding multiple or moving cameras is also supported, for example. Now on to the final step, the rendering itself. This step entails loading the scene.blend file in Blender, importing the relevant geometries, rendering the scene, saving the image and repeating the process for all frames of the movie you are trying to make. Luckily, this is exactly what the script we provide in the git repository does for you. In order to set it up for your render project, edit the user parameters defined at the top of the file.

The most important options you should set are:

  • importFolder: the location of the X3D or PLY files
  • matName: the name of the material you wish to apply to the imported geometry
  • outputFolder: where the images go
  • rendersamples: the number of samples / rays for each rendering tile. For a quick first rendering, 25 is usually good; high-quality renderings without artifacts may require considerably more
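To demystify what the render script does with these parameters, its core loop can be sketched with Blender’s Python API roughly like this. A simplified sketch, assuming Blender 2.7x; it only runs inside Blender (e.g. via `blender -b scene.blend --python ...`), and the folder and material names simply reuse the options above:

```python
import glob
import os

import bpy

importFolder = 'x3d'       # where the exported snapshot files live
matName      = 'NEW_MAT'   # the material saved with the F button above
outputFolder = 'render'    # where the rendered images go

snapshots = sorted(glob.glob(os.path.join(importFolder, '*.x3d')))
for frame, path in enumerate(snapshots):
    bpy.ops.import_scene.x3d(filepath=path)

    # Assign the saved material to the freshly imported geometry
    for obj in bpy.data.objects:
        if obj.name.startswith('Shape_IndexedFaceSet'):
            obj.data.materials.append(bpy.data.materials[matName])

    # Render this snapshot to a numbered still image
    bpy.context.scene.render.filepath = os.path.join(outputFolder,
                                                     '%04d.png' % frame)
    bpy.ops.render.render(write_still=True)

    # Remove everything the X3D import brought in before the next snapshot
    bpy.ops.object.select_all(action='DESELECT')
    for obj in bpy.data.objects:
        if obj.name.startswith(('Shape_IndexedFaceSet', 'DirectLight', 'Viewp')):
            obj.select = True
    bpy.ops.object.delete(use_global=False)
```

The actual script in the repository adds error handling and the remaining options, but this is the import–render–delete cycle at its heart.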

With all set up, we can now finally start the rendering!

blender -b scene.blend --python

The rendering should now start! Note that depending on what type of hardware you are working on, you can tell Blender which rendering device to use (via the settings in the .blend file). Click on the icon below to choose either CPU or GPU rendering.



In the same menu bar, you can set the size of the tiles, i.e. the size of the rendering stencil, under “Performance”. Depending on whether you render on a GPU or a CPU, this can strongly influence the rendering speed. The general rule of thumb is to use small powers of 2 for the CPU (16, 32) and larger ones for the GPU (256, 512). You might play around with this and find out for yourself what works best!
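As a sketch, that rule of thumb can be written down as a tiny helper; the returned values are starting points for experimentation, not hard rules:

```python
def suggested_tile_size(device):
    """Starting-point Cycles tile edge length: small powers of two
    keep all CPU cores busy, larger tiles reduce GPU overhead."""
    return 32 if device.upper() == 'CPU' else 256

print(suggested_tile_size('CPU'))
print(suggested_tile_size('GPU'))
```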

Once the rendering is done, you can generate a movie out of the images by a method of your choice, e.g. with MEncoder. A very basic video can be generated with

mencoder "mf://*.png" -mf fps=30:type=png -o cylinder.avi -ovc x264

Here is what your final result should look like. Click on the image to start the movie.


Or if you want to be a little bit more artsy:

Good luck and have fun! If you have any comments or questions, please post below.



We would love to hear from you!