Mesh I/O

Hexed has a few functions for input and output of mesh files. This is useful both for saving the simulation state to be restarted later, and for interacting with other meshers and/or solvers. Because, to my knowledge, no existing file format is capable of representing all the information in a Hexed mesh, including hanging nodes, the octree hierarchy, and face warping, Hexed primarily uses its own custom file format, described below. If you wish to use Hexed as a mesher for your own solver, or to use the Hexed solver with a custom mesh, your best bet is probably to write a parser for the native format. Hexed also has limited support for exporting meshes in the native formats of other solvers (see below), but such exports lose information. Currently, importing files from other formats is not supported.

Native file format

The native file format for Hexed meshes is based on HDF5. One could argue that it doesn't even deserve to be called a file format—a Hexed mesh file is an ordinary HDF5 file that you can interact with via the HDF5 library or any of the other HDF tools, but with a specific structure. Before launching into the details of the file format itself, we should explain conceptually how Hexed represents its mesh.

Mesh representation

Elements

A Hexed mesh contains a collection of elements, each of which is either a line segment, a quadrilateral, or a hexahedron. The shape of each element is defined by a parametric mapping \( \vec{x}(\vec{\xi}) \), where \( \vec{\xi} \) is the vector of reference coordinates in the reference element (which is always the unit interval, square, or cube) and \( \vec{x} \) is the vector of physical coordinates in the physical element (the actual element with its real size and shape). In Hexed, reference coordinates go from 0 to 1, not -1 to 1 as they do in some formulations. I.e., \( \xi_i \in [0, 1]\ :\ \forall i \). For an example, see the plot below. The mapping \( \vec{x}(\vec{\xi}) \) is always a polynomial.

Element in reference space with isocontours of physical coordinates.
Element in physical space with isocontours of reference coordinates.

In a Hexed mesh, many of the elements are perfect squares/cubes. That is, \( \vec{x}(\vec{\xi}) = a \vec{\xi} + \vec{b} \) for scalar constant \( a \) and vector constant \( \vec{b} \). Such elements are referred to as Cartesian elements. Any elements that do not fit this description are called deformed. Hexed makes this distinction in order to exploit time- and memory-saving simplifications of the numerical scheme for Cartesian elements. In many of the deformed elements, \( \vec{x} \) is a degree 1 (a.k.a. multilinear) polynomial that can be obtained by multilinear interpolation of the vertex coordinates. (Some people might be tempted to call this a "linear" polynomial, but it is technically not linear.) However, elements that are extruded to the surface geometry will have \( \vec{x} \) of full degree \( p \) in order to better fit the surface by warping their faces. As described in our most recent publications, face warping is represented by a polynomial on each face which essentially gives the deviation of the warped face from the unwarped face.
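
For instance, a (made-up) Cartesian element with edge length 0.25 and minimum vertex at (1, 2, 0.5) would map reference coordinates to physical coordinates by nothing more than a scaling and a shift:

import numpy as np

a = 0.25                      # edge length of the (made-up) Cartesian element
b = np.array([1.0, 2.0, 0.5]) # physical coordinates of its minimum vertex
def cartesian_position(xi):
    """ physical coordinates of the reference point xi, where each component of xi is in [0, 1] """
    return a*np.asarray(xi) + b

print(cartesian_position([0., 0., 0.])) # the minimum vertex, (1, 2, 0.5)
print(cartesian_position([1., 1., 1.])) # the opposite vertex, (1.25, 2.25, 0.75)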

The following two sections explain the calculation of the coordinate transformation in more detail. As is often the case for computational science, the procedure is more neatly represented in code than in equations. So, some Python examples are included to demonstrate the process in 3D. Of course, you can also look at the real implementations in C++, but they are more complex due to the low-level language constructs, the dimensionality-independent formulation, and the context of how they fit into larger algorithms. I hope the Python snippets make it a bit clearer what the actual calculations are.

Vertex deformation

In Hexed, all field data, including the physical coordinates \( \vec{x} \), is represented by a degree \( p \) polynomial in each element in a nodal basis. Defining the physical coordinates, or any other field, means computing their values at the element's quadrature points, which are chosen to be the tensor product Gauss-Legendre nodes mapped to [0, 1] (in reference space). Quadrature points are also arranged in a row-major array format ("z"-index changes the fastest). (Technically, because the physical coordinates at the quadrature points aren't directly used by the computational kernel, they aren't permanently stored, but they are still computed temporarily.) As mentioned above, for an element without face warping, the physical coordinates are defined by degree 1 interpolation between the coordinates of the vertices. The vertices of each element are ordered in a row-major, array-based manner (the last coordinate changes the fastest). For example, the vertices of the unit square are {(0, 0), (0, 1), (1, 0), (1, 1)} and the vertices of the unit cube are {(0, 0, 0), (0, 0, 1), (0, 1, 0), (0, 1, 1), (1, 0, 0), (1, 0, 1), (1, 1, 0), (1, 1, 1)}. To further clarify, the following Python code snippet illustrates an example of computing quadrature point coordinates for a 3D element without face warping. For the actual implementation of this calculation in C++, see hexed::Deformed_element::position in src/Deformed_element.cpp.

import numpy as np
from sympy.integrals.quadrature import gauss_legendre
import matplotlib.pyplot as plt
degree = 3 # degree p of the polynomials
row_size = degree + 1 # computationally, it is usually more convenient to work in terms of the size of each row of quadrature points
nodes, weights = gauss_legendre(row_size, 20) # fetch the Gauss-Legendre nodes
nodes = (np.array(nodes).astype(np.float64) + 1)/2 # map from [-1, 1] (the standard convention) to [0, 1] (the Hexed convention)
def physical_qpoint_coords(vertex_coords):
    """ compute the physical coordinates of the element quadrature points from the coordinates of the element vertices """
    phys_coords = np.zeros((3, row_size**3))
    for i in range(row_size):
        for j in range(row_size):
            for k in range(row_size):
                # start with the coordinates of all vertices, and then perform linear interpolation one dimension at a time
                qpoint_coords = vertex_coords + 0
                for i_dimension in range(3):
                    node = nodes[(i, j, k)[i_dimension]]
                    # perform linear interpolation along `i_dimension`, reducing the number of points left to interpolate by a factor of 2
                    qpoint_coords = (1 - node)*qpoint_coords[:qpoint_coords.shape[0]//2, :] + node*qpoint_coords[qpoint_coords.shape[0]//2:, :]
                # having performed linear interpolation three times (trilinear interpolation),
                # we have now gone from 8 points to just 1 point with 3 values (the 3 coordinates)
                phys_coords[:, ((i*row_size) + j)*row_size + k] = qpoint_coords[0, :]
    return phys_coords
# vertices for an example element which is a frustum of a pyramid (the top/+z face is shrunk by 0.2 in the x and y directions)
example_vertices = np.array([
    [0., 0., 0.],
    [.2, .2, 1.],
    [0., 1., 0.],
    [.2, .8, 1.],
    [1., 0., 0.],
    [.8, .2, 1.],
    [1., 1., 0.],
    [.8, .8, 1.],
])
# plot the vertices
ax = plt.gcf().add_subplot(projection = "3d")
ax.scatter(example_vertices[:, 0], example_vertices[:, 1], example_vertices[:, 2])
# compute the quadrature point positions of this example element
example_qpoints = physical_qpoint_coords(example_vertices)
# plot the quadrature points
ax.scatter(example_qpoints[0, :], example_qpoints[1, :], example_qpoints[2, :])
ax.set_xlabel("$x_0$")
ax.set_ylabel("$x_1$")
ax.set_zlabel("$x_2$")
plt.show()

Face warping

The face warping is represented as described in our 2024 SciTech meshing paper, with two exceptions:

  1. The warping function is stored at Gauss-Legendre points instead of Gauss-Lobatto. They are still originally computed on Gauss-Lobatto points to ensure continuity between elements, but they are then mapped to Gauss-Legendre points, for consistency with the field polynomials.
  2. Warping functions can be applied to any face, not just the +x face. In fact, for simplicity, warping values are stored for all faces of all deformed elements, even though they will only be nonzero for at most one face on each element, or in the case that you have anisotropic layers, sometimes two opposite faces.

These distinctions do not constitute inconsistencies with the paper. The underlying mathematics is completely equivalent, but the best way to represent that math in equations is not the same as the best way to represent it in code. In any case, the warping function is represented as an adjustment to the unwarped position of the face quadrature points. Note that the face quadrature points are an array of Gauss-Legendre points on the faces, which is not a subset of the interior quadrature points—extrapolation is required to obtain values of field variables at face quadrature points. As in the publication, the face warping is a one-dimensional adjustment in the direction of increasing face-normal reference coordinate. Specifically, if \( \vec{x}_- \) and \( \vec{x}_+ \) are the positions of a matching pair of quadrature points on the negative and positive-facing x, y, or z faces, and \( w_- \) and \( w_+ \) are the warping functions at those faces, then the warping can be formulated as

\[ \vec{x}_{-, \text{warped}} = \vec{x}_- + w_-(\vec{x}_+ - \vec{x}_-) \]

\[ \vec{x}_{+, \text{warped}} = \vec{x}_+ + w_+(\vec{x}_+ - \vec{x}_-) \]

and the corresponding adjustments to the interior quadrature points are computed by linear interpolation between the adjustments to the faces. The following Python snippet shows an example; the real implementation is in hexed::Deformed_element::position in src/Deformed_element.cpp.

def warped_qpoint_coords(vertex_coords, face_warping):
    """ computes the physical quadrature point coordinates again, but this time with face warping """
    unwarped_coords = physical_qpoint_coords(vertex_coords) # start with the unwarped coordinates; face warping is a perturbation to these
    warped_coords = unwarped_coords + 0 # will adjust these as we go along
    for i_dimension in range(3): # start with warping for -/+ x faces, then y, etc
        # the interior quadrature points are a 3D array, whereas the face quadrature points are a 2D array
        # we need to compute some strides in order to map face quadrature points to the corresponding 1D slice of interior quadrature points
        interior_stride = row_size**(2 - i_dimension) # stride between two interior quadrature points in the interior slice
        # stride between two interior quadrature points corresponding to consecutive rows of face quadrature points
        if (i_dimension == 0): face_row_stride = row_size
        else: face_row_stride = row_size**2
        # stride between two interior quadrature points corresponding to consecutive columns of face quadrature points
        if (i_dimension == 2): face_col_stride = row_size
        else: face_col_stride = 1
        # loop through face quadrature points (do - and + faces for the same dimension simultaneously)
        for face_row in range(row_size):
            for face_col in range(row_size):
                # compute the slice of interior quadrature points
                slice_start = face_row*face_row_stride + face_col*face_col_stride
                interior_slice = slice(slice_start, slice_start + row_size*interior_stride, interior_stride)
                interior_coords = unwarped_coords[:, interior_slice]
                # extrapolate the unwarped coordinates of interior quadrature points to faces
                unwarped_face_coords = np.zeros((3, 2)) # column 0 is - face, column 1 is + face
                for i in range(row_size):
                    for face_sign in [0, 1]: # face_sign == 0 indicates negative-facing face whereas face_sign == 1 indicates positive-facing
                        unwarped_face_coords[:, face_sign] += interior_coords[:, i]*np.array([(face_sign - nodes[j])/(nodes[i] - nodes[j]) for j in range(row_size) if j != i]).prod()
                interp_coefs = [1 - nodes[np.newaxis, :], nodes[np.newaxis, :]]
                # compute the difference between coordinates of corresponding quadrature points on opposite faces (x_+ - x_-)
                diff = unwarped_face_coords[:, [1]] - unwarped_face_coords[:, [0]]
                # interpolate back to interior quadrature points
                for face_sign in [0, 1]:
                    warped_coords[:, interior_slice] += interp_coefs[face_sign]*face_warping[i_dimension, face_sign, face_row*row_size + face_col]*diff
    return warped_coords
# compute an example warping function which is a simple quadratic warping of the -y face
example_warping = np.zeros((3, 2, row_size**2)) # first index is face dimension, second index is face sign, last index is the quadrature point of the face
example_warping[1, 0, :] = 1.5*(nodes[:, np.newaxis]*(1 - nodes[:, np.newaxis])*np.ones(row_size)).flatten()
# plot the vertices
ax = plt.gcf().add_subplot(projection = "3d")
ax.scatter(example_vertices[:, 0], example_vertices[:, 1], example_vertices[:, 2])
# compute the quadrature point positions of this example element
example_qpoints = warped_qpoint_coords(example_vertices, example_warping)
# plot the quadrature points
ax.scatter(example_qpoints[0, :], example_qpoints[1, :], example_qpoints[2, :])
ax.set_xlabel("$x_0$")
ax.set_ylabel("$x_1$")
ax.set_zlabel("$x_2$")
plt.show()

Tree

Tree refinement (bintree in 1D, quadtree in 2D, and octree in 3D) is a key concept in the Hexed meshing scheme. This is represented by a recursive data structure (implemented as hexed::Tree), where each Tree object optionally contains pointers to 2/4/8 other Tree objects representing its children and/or a pointer to another Tree object representing its parent. The child pointers are in the same row-major array order as the vertices of an element, according to their spatial position. A tree with no parent is called the root (the tree structure as a whole naturally contains exactly one of these), and a tree with no children is called a leaf. The refinement level of a Tree object is the number of generations it is from the root. I.e., the root will have refinement level 0, its children will have refinement level 1, etc.
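
To illustrate the structure, here is a toy Python sketch (not the actual hexed::Tree interface, which is written in C++):

class ToyTree:
    """ minimal bintree/quadtree/octree node for illustration; not the hexed::Tree API """
    def __init__(self, n_dim, parent = None):
        self.n_dim = n_dim
        self.parent = parent # None for the root
        self.children = []   # empty for a leaf; otherwise 2**n_dim children in row-major order
    def refinement_level(self):
        # number of generations from the root: 0 for the root, 1 for its children, etc.
        return 0 if self.parent is None else self.parent.refinement_level() + 1
    def refine(self):
        # create 2/4/8 children, ordered like the vertices of an element (last coordinate changes fastest)
        self.children = [ToyTree(self.n_dim, parent = self) for i_child in range(2**self.n_dim)]
    def is_leaf(self):
        return not self.children

root = ToyTree(3)
root.refine()
root.children[0].refine()
print(root.children[0].children[7].refinement_level()) # prints 2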

A Tree can also contain a pointer to a computational element, as described above. The Hexed meshing scheme involves starting with a Cartesian tree mesh, cutting holes in it, and extruding new elements, meaning that not all Trees have elements, and not all elements have Trees. Tree objects are implicitly associated with a box in physical space, which, if the Tree is associated with a Cartesian element, will coincide with that element. If the Tree is associated with a deformed element, that element will usually be close to, but not exactly coincide with, the Tree. To fully define the mapping from Tree objects to physical space, the tree as a whole also has an origin, which is the physical coordinates of the minimum (least x, y, and z) vertex of the root, and also a root size, which is the edge length of the root. The nominal position of a Tree is a vector of 1/2/3 non-negative integers representing its displacement relative to the tree origin in units of that Tree object's edge length (not the root size). The association between elements and Trees leads to elements also having a refinement level and nominal position, which are equal to those of their Tree, if they have one. Extruded elements without Trees still have a refinement level and nominal position, which are computed based on their relation to the element they were extruded from (their extrusion parent). E.g., if you extrude an element from the +x face of its extrusion parent, then the x-coordinate of its nominal position will be 1 greater than that of its extrusion parent.
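
For example, the box associated with a Tree can be recovered from the tree origin, root size, refinement level, and nominal position as in the following sketch (the function name and arguments are mine, not part of Hexed):

import numpy as np

def tree_box(origin, root_size, refinement_level, nominal_position):
    """ minimum and maximum corners of the box implicitly associated with a Tree """
    edge_length = root_size/2**refinement_level # each refinement level halves the edge length
    min_corner = np.array(origin) + edge_length*np.array(nominal_position) # nominal position is in units of this Tree's edge length
    return min_corner, min_corner + edge_length

# for a tree with origin (0, 0, 0) and root size 8, a refinement level 2 Tree at nominal position (1, 3, 0)
print(tree_box([0., 0., 0.], 8., 2, [1, 3, 0])) # box from (2, 6, 0) to (4, 8, 2)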

Connections

Any mesh representation must have some way to indicate the connections between elements. Although you could, in principle, infer this information from the vertex list of the elements and the tree structure, Hexed identifies the connections explicitly in both the hexed::Accessible_mesh object and the mesh file. If you are trying to read a Hexed mesh into your own solver, you may opt to simply ignore the connection data and just use the element/vertex data. Furthermore, Hexed does not allow completely arbitrary connections between elements—it only supports the manners of connections required to generate a tree mesh and to extrude elements from it.

A conformal connection (one between two elements of the same refinement level) is specified by 6 pieces of data:

  1. a pointer to the first element
  2. a pointer to the second element
  3. the index of the dimension of the face of the first element that is being connected (0 if it is an x-face, 1 for y, 2 for z)
  4. the dimension index of the second face
  5. the sign of the face of the first element (0 if the face faces in the negative x/y/z direction, 1 if it faces in the positive direction)
  6. the sign of the second face

Either the dimensions or the signs of the two faces involved must be different. Connecting the +x face of one element to the +x face of another, for example, is not allowed. Of course, connecting two faces of the same element is also not allowed. This connection format doesn't explicitly specify the orientation of the faces that are connected (which vertex maps to which vertex). Instead, it is always assumed to be what you might call the "laziest possible orientation". Consider each element restored to its original Cartesian shape and location (coincident with its Tree, if it has one). Now, merge the vertices of the connected faces while only changing the coordinates corresponding to the dimensions of the connected faces. So, for example, if you are connecting a y-face to a z-face, then merge the vertices without changing the x-coordinate. In other words, connect the faces by only stretching the elements and not twisting them. Of course, in the actual mesh, the vertex coordinates are arbitrary—the principle of moving vertices without changing one of the coordinates is just a thought experiment to explain what I mean by the "laziest possible orientation". Note also that if you're just trying to read a Hexed mesh into your code, you don't necessarily have to worry about this orientation problem, since you can compare the vertices of the neighboring elements and see which are the same. However, if you're trying to create a mesh file for Hexed, you will need to make sure the orientation condition is satisfied.
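
If you are reading a Hexed mesh into your own code and want to check connections against shared vertices, a helper along the following lines (the function name is mine) gives the local indices, in row-major order, of the vertices on a given face; the corresponding rows of the element vertex lists can then be compared to see which vertices have been merged:

def face_vertex_indices(n_dim, face_dimension, face_sign):
    """ local indices (into an element's row-major vertex list) of the vertices on one face """
    indices = []
    for i_vertex in range(2**n_dim):
        # in row-major vertex ordering, the 0/1 position along `face_dimension` is this bit of the index
        if ((i_vertex >> (n_dim - 1 - face_dimension)) & 1) == face_sign:
            indices.append(i_vertex)
    return indices

# e.g. the local vertex indices on the +y face (dimension 1, sign 1) of a hexahedron
print(face_vertex_indices(3, 1, 1)) # [2, 3, 6, 7]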

A refined connection connects a coarse element with some elements that are 1 level finer. You cannot create a refined connection with elements that differ by more than 1 refinement level. A refined connection can accept up to 1/2/4 fine elements in 1/2/3 dimensions. It can also accept half as many fine elements, causing the remaining ones to be "stretched" by a factor of 2 to cover the coarse face (see the sketch after the following list). This occurs when you have extruded elements of different refinement levels. As a result, in 2D sometimes you will have one coarse element connected to one finer element, but this is still considered to be a refined connection, not a conformal one, since the elements have different refinement levels. Thus, a refined connection is specified by 11 pieces of data:

  1. a pointer to the coarse element
  2. up to 4 pointers to the fine elements
    • The fine elements are in row-major array order. That is, elements that come first in the list of child pointers of their parent tree will come first in the fine element list of a refined connection.
  3. boolean flag indicating whether the fine elements are stretched along the lesser dimension, i.e. the first of the (up to two) indices of the array of fine elements.
    • If this is true, there will be half as many fine elements as there otherwise would be.
  4. boolean flag indicating whether the fine elements are stretched along the greater dimension, i.e. the second of the two indices of the array of fine elements, and only applicable in 3D.
    • If this is true, there will be half as many fine elements as there otherwise would be.
  5. dimension index of the face of the coarse element that is being connected
  6. dimension index of the fine faces
    • This is only a single number. It is assumed that the same face of all the fine elements is being connected.
  7. sign of the coarse face
  8. sign of the fine faces
    • again, only one number for all the fine elements
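
As a small illustration of how the stretching flags determine how many of the fine element slots are used (the function name is mine):

def n_fine_elements(n_dim, stretch_lesser, stretch_greater):
    """ number of fine element slots actually used by a refined connection """
    n_fine = 2**(n_dim - 1) # 1/2/4 in 1/2/3 dimensions when nothing is stretched
    if stretch_lesser: n_fine //= 2
    if stretch_greater: n_fine //= 2 # only possible in 3D
    return n_fine

print(n_fine_elements(3, False, False)) # 4
print(n_fine_elements(3, True, False))  # 2
print(n_fine_elements(2, True, False))  # 1 (one coarse to one fine, but still a refined connection)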

Finally, a boundary connection specifies that a face of an element is on a domain boundary. This is specified by 4 pieces of data:

  1. pointer to the element whose face is being connected
  2. pointer to the boundary condition
  3. dimension index of the face
  4. sign of the face

File format

A Hexed mesh file is an HDF5 file, with a name ending in .mesh.h5 by convention. Note that some of the attributes in this file aren't conceptually part of the mesh, but they are numerical scheme parameters that affect how much memory the mesh object needs to allocate. Its contents are the following:

group /:

  • attribute n_dim (1 \(\times\) 1 int): number of dimensions
  • attribute row_size (1 \(\times\) 1 int): polynomial degree + 1
  • attribute n_forcing (1 \(\times\) 1 int): number of artificial viscosity forcing functions (numerical scheme parameter). Usually 4.
  • attribute n_stage (1 \(\times\) 1 int): number of time integration stages (numerical scheme parameter). Usually 2.
  • attribute n_var (1 \(\times\) 1 int): number of physical flow variables (numerical scheme parameter). Usually n_dim + 2.
  • attribute root_size (1 \(\times\) 1 double): edge length of the root tree element
  • attribute version_major (1 \(\times\) 1 int): major version of the Hexed executable used to create this file (if applicable).
  • attribute version_minor (1 \(\times\) 1 int): minor version of the Hexed executable used to create this file (if applicable).
  • attribute version_patch (1 \(\times\) 1 int): patch version of the Hexed executable used to create this file (if applicable).
  • group vertices:
    • dataset position (n \(\times\) 3): Coordinates of the vertices. There are always 3 columns, even for 1D and 2D meshes; trailing coordinates are set to 0.
  • group elements:
    • dataset is_deformed (n \(\times\) 1 bool) One entry for each element. If that element is perfectly Cartesian, the value is 0. If it is deformed, the value is 1.
    • dataset nominal_position (n \(\times\) n_dim int) one entry for each element giving its nominal position
    • dataset refinement_level (n \(\times\) 1 int) one entry for each element giving its refinement level
    • dataset aniso_refinement_level (n \(\times\) 1 int) one entry for each element giving its anisotropic refinement level. The first extruded element will have an anisotropic refinement level of 1, and if any layer splits are performed, each subsequent element will have an anisotropic refinement level 1 higher than the previous (but the same refinement level).
    • dataset vertices (n \(\times\) \( 2^\verb|n_dim| \) int) One row for each element giving the indices (in the /vertices/position dataset) of its vertices in standard row-major array order. For elements that are connected, these will actually point to the same vertices, so if you are reading a mesh file you can use this to infer the connectivity information, if you choose (although hanging nodes might make that tricky).
    • group face_warping:
      • dataset element_indices (n \(\times\) 1 int) One entry for each deformed element. Each entry is the index of the element (in the datasets is_deformed, nominal_position, etc.) that this warping refers to (the node_adjustments dataset gives the actual warping values). The entry of is_deformed at that index must be 1. There should not be multiple face warping entries referring to the same element.
      • dataset node_adjustments (n \(\times\) \( \verb|n_dim| \cdot 2 \cdot \verb|row_size|^{\verb|n_dim| - 1} \) double) Face warping values for the element specified by the corresponding entry in the element_indices dataset. These warping values behave as described in Face warping, where the row-major n_dim \(\times\) 2 \(\times\) \( \verb|row_size|^{\verb|n_dim| - 1} \) array of values for each face (the face dimension index changes the slowest) has been flattened into a single array of values for all faces.
  • group connections:
    • dataset conformal (n \(\times\) 6 int): Each row provides the data for 1 conformal connection. In each row, entries 0 and 1 are the indices of the elements participating in the connection. Entries 2 and 3 are the dimension of the faces involved, and entries 4 and 5 are the signs of the faces (0|1). If either of the elements are marked as Cartesian (via /elements/is_deformed), then there are additional restrictions: both face dimensions must be the same, the first face sign must be 1, and the second face sign must be 0. If you're writing a mesh file and that is inconvenient, you can just mark all elements as deformed, although this can add a modest performance overhead in the solver.
    • dataset refined (n \(\times\) 11 int): Each row provides the data for 1 refined connection. In each row, entry 0 is the index of the coarse element. Entries 1, 2, 3, 4 are the indices of the fine elements in the connection. If there are less than 4 fine elements (because the mesh is 2D or there is stretching) then the unused entries are set to -1. Entries 5 and 6 indicate whether the fine elements are stretched in the first and second (if applicable) face dimensions. Entries 7 and 8 are the dimension indices of the pertinent faces of the coarse and fine elements, respectively. Entries 9 and 10 are the signs of the faces of the coarse and fine elements (0|1). As is the case for conformal connections, if any of the elements involved are Cartesian, the face dimensions must be the same. However, the Cartesian restriction of face signs does not apply (because the coarse element is required to come first, regardless of which face is being connected).
    • dataset boundary (n \(\times\) 4 int): Each row provides the data for 1 boundary connection. Entry 0 is the index of the element being connected. Entry 1 is the index of the boundary condition being applied. Entry 2 is the face dimension index. Entry 3 is the face sign (0|1). The mesh file does not record what the boundary conditions are—those are specific to each simulation. However, there is a convention for mapping boundary condition indices to input file parameters:
      • Boundary conditions [0, 2*n_dim) are the extremal boundary conditions. In particular, extremal_bcIJ is index 2*I + J (the order of mesh extremes is analogous to the order of element faces).
      • Boundary condition 2*n_dim, if it exists, is the surface_bc.
  • group tree:
    • dataset origin (1 \(\times\) n_dim double): origin of the tree
    • dataset children (n \(\times\) \( 2 + 2^\verb|n_dim| \) int): Represents the actual topology of the tree via the child lists, plus a little extra data that gets tacked on. Each row represents one hexed::Tree object. Entry 0 in the row is the index of the computational element (i.e. a row in the /elements datasets) associated with this tree, if there is one. If the tree does not have an element, then entry 0 is set to -1. Entry 1 is the flood fill status of the tree (-1 ⇒ unprocessed | 0 ⇒ deleted | 1 ⇒ in domain). The remaining entries are the indices of the children of the tree, if they exist, and are otherwise set to -1. So, if a tree is a leaf, all entries after 1 will be -1. If, for example, entry 2 were equal to 946, that would mean that you can find information about the first child of this tree on row 946 of the same dataset. Note that there is no information about parents, since a single recursive traversal through the children dataset is sufficient to fully reconstruct the tree structure, including parent relations.
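
If you just want to poke around in a mesh file, the standard HDF5 tools (e.g. h5dump) or the h5py Python package work fine. The following sketch reads a few of the datasets listed above; the file name is a placeholder and the variable names are mine:

import h5py
import numpy as np

with h5py.File("example.mesh.h5", "r") as mesh:
    # scalar attributes (they may come back as 1x1 arrays, hence item())
    n_dim = int(np.asarray(mesh.attrs["n_dim"]).item())
    row_size = int(np.asarray(mesh.attrs["row_size"]).item())
    print(f"{n_dim}D mesh, polynomial degree {row_size - 1}")
    # vertex and element data
    vertex_position = np.array(mesh["vertices/position"])  # n x 3 (trailing coordinates are 0)
    element_vertices = np.array(mesh["elements/vertices"])  # n x 2**n_dim indices into vertex_position
    refinement_level = np.array(mesh["elements/refinement_level"])
    # face warping, reshaped to the n_dim x 2 x row_size**(n_dim - 1) layout described in Face warping
    warped_elements = np.array(mesh["elements/face_warping/element_indices"])
    node_adjustments = np.array(mesh["elements/face_warping/node_adjustments"])
    node_adjustments = node_adjustments.reshape((-1, n_dim, 2, row_size**(n_dim - 1)))
    # connection data
    conformal = np.array(mesh["connections/conformal"]) # n x 6
    boundary = np.array(mesh["connections/boundary"])   # n x 4; column 1 is the boundary condition index
    print(f"{element_vertices.shape[0]} elements, {conformal.shape[0]} conformal connections")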

For the implementation of reading and writing a mesh file, see hexed::Accessible_mesh::read_file and hexed::Accessible_mesh::write in src/Accessible_mesh.cpp.

Export formats

So far, Hexed has limited support for exporting the mesh into other formats, which it can write but not read. The challenge is finding formats that support hanging nodes, which are critical to the Hexed meshing scheme. Implementing an algorithm to remove the hanging nodes (only for exporting) may be of interest in the future, but that is a nontrivial task.

Currently, the only export format supported is OpenFOAM's PolyMesh. Technically, the hanging nodes are implemented using polyhedral elements with more than 6 faces. At this time, only 3D is supported: OpenFOAM is an inherently 3D solver, so 2D meshes would have to be extruded into a 1-cell-thick slab, a capability that I have not yet implemented.

I have been considering adding support for ANSYS Fluent case files, which I believe support genuine hanging nodes. If this would be useful to you, or you know of another format that you believe would work, feel free to discuss it with me so that I can factor it into future development plans.