A DIFFUSE-MORPHING ALGORITHM FOR SHAPE OPTIMIZATION

Shape optimization typically involves geometries characterized by several dozen design variables, with a possibly large number of explicit or implicit constraints restricting the design space to admissible shapes. In this work, instead of working with parametrized CAD models, the idea is to interpolate between admissible instances of finite element/CFD meshes. We show that a properly chosen surrogate model can replace the numerous geometry-based design variables with a more compact set that permits a global understanding of the admissible shapes spanning the design domain, thus reducing the size of the optimization problem. To this end, we present a two-level mesh parametrization approach for the design domain geometry, based on Diffuse Approximation in a properly chosen locally linearized space, and replace the geometry-based variables with the smallest set of variables needed to represent a manifold of admissible shapes to a chosen precision. We demonstrate this approach on the problem of designing the section of an air-conditioning (A/C) duct to maximize its permeability, evaluated using CFD.


INTRODUCTION
Shape optimization may be viewed as the task of combining a parameterized geometric model with a numerical simulation code in order to predict the geometric state that minimizes a given cost function while respecting a set of equality/inequality constraints. In this paper we consider the task of shape/mesh interpolation, or hypothesizing the structure that lies between shape/mesh instances given by a sequence of parameter values. The need for this arose during the development of multidisciplinary optimization techniques, because CAD-parameterized models involved in automated computing chains suffered from excessive design space dimensionality, eventually leading to crashes of either the mesh generator or the solver. This phenomenon is due to the difficulty of expressing all the technological and common-sense constraints (needed to convert a set of geometric parameters to an admissible shape) within existing parameterization methods. Most current approaches to shape parameterization require hand-constructed CAD models. We are interested in developing an alternative approach in which the interpolation system builds up structural shapes automatically by learning from existing examples. One of the central components of this kind of learning is the abstract problem of inducing a smooth nonlinear constraint manifold from a set of examples, called "manifold learning" by Bregler [2], who developed approaches closely related to neural networks for this purpose. [8] proposed a similar approach in the domain of Reduced Order Modeling (ROM) for complex flow problems. In this paper we apply manifold learning to the shape interpolation problem to develop a parametrization scheme tailored to the structural optimization problem (e.g. airplane wing, A/C duct, engine inlet, etc.). Several techniques [5,26] have been used to replace a complicated numerical model by a lower-order meta-model, usually based on polynomial response surface methodology (RSM), kriging, least-squares regression
and moving least squares [4]. Surrogate functions and reduced-order meta-models have also been used in the field of control systems to reduce the order of the overall transfer function [26]. A very popular physics-based meta-modeling technique consists of carrying out the approximation on the full vector fields using PCA and Galerkin projection [1], in CFD [21,27] as well as in structural analysis [12]; it has been successfully applied to a number of areas such as flow modeling [23,13], optimal flow control [21], aerodynamic design optimization [18,11] and structural mechanics [14]. In [7], a snapshot-weighting scheme was introduced using vector sensitivities as system snapshots to compute a robust reduced-order model well suited to optimization. [8] also demonstrated a goal-oriented local POD approach that is computationally less expensive than a global POD approach. However, we have not observed much, if any, research into using decomposition-based surrogate models to reduce the dimensionality of the design domain in shape optimization, or for that matter in structural optimization of any type. This area, we feel, is promising considering the obvious advantages of having far fewer parameters describing the domain: easier visualization, more flexibility in the choice of admissible shapes, better applicability of gradient-based solvers due to the reduced dimensionality and thus a reduction in the overall size of the optimization, and of course a separation between the CAD and optimization phases in system design: the optimization group receives a protocol to reparametrize structural shapes from a set of admissible shapes/meshes generated by the CAD group, applying the presented algorithm (or a variant thereof) to obtain the new set of design variables. In this paper, we present what can best be described as a manifold learning approach combining Diffuse Approximation and Principal Component Analysis, whose performance is easily compared to that of simple
linear interpolation, classical morphing [25] and a posteriori mesh parametrization [9]. We propose a four-step "a posteriori" reparametrization approach to reduce the number of design variables needed to describe the shape of a structure:
- Pixelization: the protocol first uses the method of snapshots to generate M admissible shapes (or reads a set of structural meshes) sweeping the design space. In order to obtain an indicator function for the design domain, a step called "pixelization" is then performed by mapping the snapshot boundaries/edges onto a reference grid with a certain resolution, the result being stored as a binary array S_i of 0's and 1's, as is typically done in image storage/manipulation [15].
- Decomposition of the M snapshots by Principal Component Analysis.
- Two-level dimensionality reduction: in the first reduction phase, the snapshot "pixel arrays" ("voxels" in 3D) are reduced to obtain a small number of dominant basis vectors (φ_1 ... φ_m) spanning the physical design domain; the vector of coefficients ᾱ ∈ R^m, m << M, is then obtained by projecting a structural shape onto the basis Φ. In the second reduction phase, the coefficients α_1 ... α_m corresponding to the snapshots are analyzed to understand the shape of the feasible region, allowing us to deduce the true dimensionality of the physical design domain. A Diffuse Approximation performed in the α-space gives the final minimal set of parameters t_1 ... t_p, p ≤ m; our approach thus involves a two-level model reduction. Since these new variables have been obtained from an "a posteriori" sweep of the design domain followed by decomposition, they can be used directly in an optimization algorithm to obtain the optimal shape (pixel array) for a given performance objective.
- Shape interpolation to obtain a smooth structural shape from t̄.
The methodology is described in the next section along with the overall algorithm; the test case from the automotive field and the numerical model used to calculate the objective function are then described in section 4. The optimization problem is formally presented in section 5. Section 6 presents results with a discussion of the different stages, and we close with a discussion of possible future work.

Creation of snapshots
We build the parametrization scheme after studying the full range of admissible shapes (i.e. snapshots [10]) constituting the design domain. For structural optimization problems with a fixed topology, these admissible shapes could be obtained in a Lagrangian description by sampling the geometry-based design variables within their feasible range X̄ ∈ [L̄B, ŪB] ⊂ R^N, or simply from the finite set of points describing the edges/boundaries of a series of CFD meshes/grid points for an initial random sampling of M designs.

Pixelization of snapshots
This step refers to mapping the edges/boundaries of each snapshot onto a reference grid and storing the result as a binary array [15]. This is typically performed by finding the cells (in the reference grid) penetrated by the edges/boundary of the structure or mesh, assigning a value of 1 to these boundary cells as well as to all the cells inside the boundary, as shown in figure 1, and 0 to the cells outside the boundary, thus allowing us to store the pixel maps as arrays (S_i ∈ R^{N_c}, i = 1..M) of 1s and 0s. Naturally, pixelization captures the actual shape better at higher grid resolutions.

Figure 1. Reference grid mapping for pixel map: plate with circular hole
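As a concrete illustration of this step, the sketch below (in Python, using a hypothetical circular shape on an n × n reference grid over the unit square) builds the binary snapshot array S_i by marking the cells whose centers fall inside the shape:

```python
import numpy as np

def pixelize_circle(cx, cy, r, n=64):
    # Reference grid over [0, 1]^2: cells whose centers fall inside the
    # circle are marked 1, all others 0, giving the binary snapshot S_i.
    xs = (np.arange(n) + 0.5) / n           # cell-center coordinates
    X, Y = np.meshgrid(xs, xs)
    inside = (X - cx) ** 2 + (Y - cy) ** 2 <= r ** 2
    return inside.astype(np.uint8).ravel()  # flattened pixel map, length n*n

S = pixelize_circle(0.5, 0.5, 0.25, n=64)
```

At this resolution the fraction of 1-cells approximates the disc area to within roughly one cell layer along the boundary, illustrating the resolution remark above.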

Principal Components Analysis
This is the first phase of model reduction. We first calculate the deviation matrix D_S for the snapshots:

D_S = [S_1 − S̄, S_2 − S̄, ..., S_M − S̄],    S̄ = (1/M) Σ_{i=1}^{M} S_i,

where M << N_c, M is the number of snapshots, S_i is the i-th individual snapshot binary array (pixel map) and S̄ is the mean of all the snapshots. Next, the covariance matrix C_v is calculated, allowing us to express any S_j in terms of the eigenvectors φ_i of C_v:

S_j = S̄ + Σ_{i=1}^{M} α_ij φ_i

for the j-th pixel map. In the first reduction phase, we limit the basis to the first m << M most "energetic" modes, i.e. the modes capturing a fraction (1 − ε) of the total eigenvalue energy (equation (4)).
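The decomposition and truncation described above can be sketched as a minimal method-of-snapshots implementation; the function name `pod_basis` and the energy threshold are illustrative choices, not from the paper:

```python
import numpy as np

def pod_basis(snapshots, energy=0.99):
    # snapshots: (M, Nc) pixel arrays, M << Nc, so we work with the
    # "small" M x M covariance of the deviation matrix D_S (method of snapshots).
    S_mean = snapshots.mean(axis=0)
    D = (snapshots - S_mean).T               # deviation matrix, Nc x M
    Cv = D.T @ D                             # M x M covariance
    lam, V = np.linalg.eigh(Cv)
    order = np.argsort(lam)[::-1]            # sort modes by decreasing energy
    lam, V = lam[order], V[:, order]
    frac = np.cumsum(lam) / lam.sum()
    m = int(np.searchsorted(frac, energy)) + 1   # smallest m reaching the energy
    Phi = (D @ V[:, :m]) / np.sqrt(lam[:m])      # orthonormal spatial modes
    alpha = (snapshots - S_mean) @ Phi           # coefficients of each snapshot
    return S_mean, Phi, alpha
```

Projecting a snapshot onto the truncated basis and adding the mean back gives the reduced representation used throughout the rest of the paper.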

Model reduction and design domain dimensionality
Equation (4) alone does not provide a sufficient basis for establishing the value of m, as one needs to specify the threshold value for ε. Moreover, the α_ij may not be taken as design variables without taking into account the possible relationships between them that are required to render feasible shapes.

Figure 2. Feasible region for a plate with a circular hole of varying radius
Let us consider the same system (a plate with a circular hole, R_min ≤ r ≤ R_max). Ignoring the fact that the dimensionality is 1, we construct 50 random snapshots by varying the radius r. Pixelization and PCA are then performed in succession, giving a set of α's corresponding to each snapshot. As illustrated in figure 2, the α's form a set of one-dimensional manifolds, clearly indicating that the design domain is parametrized by ONE single parameter t, which in this case happens to be the hole radius (in the general case we obtain a vector t̄ ∈ R^p, p ≤ m), i.e. α_1 = α_1(t), α_2 = α_2(t), .... These manifolds are easily obtained by performing a Diffuse Approximation [4,20] over all the ᾱ_1 ... ᾱ_M obtained from snapshots S_1 to S_M. Furthermore, the curves of α_1, α_2, ... vs t may be interpreted as possible "constraints" (direct geometric constraints, technological constraints, etc. that are difficult to express mathematically) on the geometric parameters X̄ (here simply r) in the α-space, since points lying outside the manifolds produce inadmissible shapes as shown. Thus, in the second reduction phase, we locally introduce the parametric expression of the α-manifolds.

Manifold approximation and updating
We present here a formal approach to locally identify the system dimensionality from the α-manifolds. Consider a system of M pixel snapshots converted to the PCA space, retaining m < M coefficients, thus giving a set of points ᾱ_1, ..., ᾱ_M ∈ R^m. We would like an algorithm that:
1. Detects the "true" dimensionality (p ≤ m) from the local rank of the α-manifold in the vicinity of the evaluation point, so that the feasible region may (locally) be expressed in terms of p parameters.
2. Constrains the evaluation point (ᾱ_ev) to stay on the feasible region of admissible shapes during the course of the optimization.

Local Rank Detection of α-manifold
To locally detect the dimensionality of the α_1 ... α_m hyper-surface in the neighborhood of ᾱ_ev, we first establish the local neighborhood. This may be done in the original geometric space (if available) or, if the original parameters are unavailable, which is the situation this approach is intended for, by using the α values, provided the neighborhood is sufficiently dense. So if β̄_1 ... β̄_nbd are the neighboring points in α-space, we next use a polynomial basis centered around the evaluation point (equation (5)). To demonstrate this, we consider again the M snapshots for a plate with a circular hole of varying radius. Considering the first three modes, the points corresponding to the snapshots are shown to the left in figure 3. We assemble the moment matrix A and calculate its rank, which equals 1 (only one significant singular value, as seen in the middle of figure 3); the dimensionality of the plate with a circular hole of varying radius is thus p = 1, allowing us to parametrize the curve with a single parameter.

The idea is to bring the current design point given by the optimization algorithm in subsequent iterations down to the surface that locally represents the manifold of admissible shapes. The local surface tangent to the manifold is defined with respect to a tangent plane that is iteratively updated. To achieve this, we use a Diffuse Approximation-based manifold "walking" scheme consisting of the following steps, shown in figure 4.
1. Let P_i be the evaluation point (on the α-manifold), and P^0_{i+1} the new candidate point (to be brought back onto the manifold/feasible region). We first establish the neighborhood β̄_1 ... β̄_nbd of P^0_{i+1}.
2. Calculate the centroid β̄_m = (Σ_{i=1}^{nbd} β̄_i)/nbd.
3. Find the centroidal plane for the neighborhood from the eigenvectors of the covariance matrix C_nbd, the first eigenvector representing the plane normal.
4. Project the evaluation point as well as the neighborhood points into the local coordinate system v̄_1, v̄_2, ... (origin at the centroid β̄_m) to get the local coordinates h, t_1, t_2, ..., t_p, where h is the height over the centroidal plane (written here for a general point ᾱ).
5. Perform a diffuse approximation over the nbd points to obtain the local surface h = h(t_1 ... t_p), using a polynomial basis P centered around ᾱ_ev with a weighting matrix W.
6. We then project the point P^0_{i+1} onto this tangent plane to get the adjusted evaluation point P^1_{i+1}, and repeat the process, finding a new neighborhood, a new tangent plane and a new projection point P^2_{i+1}, until the evaluation point stops changing at P^f_{i+1}. In other words, we "walk" along the surface of the α-manifold to ensure that we stay in the domain of feasible solutions. To illustrate the approach, consider the α coefficients obtained for a plate with two circular holes of varying radii r_1 and r_2, with M = 1000 snapshots, shown in figure 5. We see clearly from the shape of the manifolds that all the parameters α_1 ... α_M are governed by the two underlying parameters, here the two radii.
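The walking scheme of steps 1-6 can be sketched as follows for the codimension-one case, where a single height coordinate h is projected out at each iteration; the neighborhood size k and the pure tangent-plane projection (omitting the higher-order diffuse surface of step 5) are simplifying assumptions of this sketch:

```python
import numpy as np

def walk_to_manifold(point, cloud, k=12, iters=20, tol=1e-9):
    # Steps 1-2: nearest alpha-points and their centroid; step 3: the
    # centroidal plane from the neighborhood (here via SVD); steps 4/6:
    # project the candidate, removing its height h over the plane, and repeat.
    p = np.asarray(point, dtype=float).copy()
    for _ in range(iters):
        d = np.linalg.norm(cloud - p, axis=1)
        nbd = cloud[np.argsort(d)[:k]]
        c = nbd.mean(axis=0)
        _, _, Vt = np.linalg.svd(nbd - c, full_matrices=False)
        normal = Vt[-1]                        # smallest-variance direction
        p_new = p - np.dot(p - c, normal) * normal
        if np.linalg.norm(p_new - p) < tol:    # evaluation point stopped moving
            return p_new
        p = p_new
    return p
```

On a synthetic one-dimensional manifold (a circle of α-points in R^2), a candidate point off the manifold is walked back to within a fraction of the local neighborhood size of the circle.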

Shape interpolation
In this step, performed at every single function call within the optimization subroutine, we recreate the structural shape for an arbitrary design point t̄.

PCA reconstruction
In the first step, the α coefficients are obtained from the values of t̄ (the location on the α-manifold), and the pixel map is reconstructed as S = S̄ + Σ_{i=1}^{m} α_i φ_i. While the previous section suggests that staying on the α-manifold guarantees an admissible solution map of 0's and 1's, the processes of averaging snapshots, singular value decomposition and truncation can still deliver intermediate (grayscale) values around the boundaries during the optimization. The next step deals with this problem in case it surfaces.
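The reconstruction and the appearance of grayscale values can be illustrated on stand-in binary snapshots (random data here, purely for illustration):

```python
import numpy as np

# Stand-in binary snapshots; a real run would use pixelized admissible shapes.
rng = np.random.default_rng(1)
M, Nc, m = 20, 100, 3
snaps = (rng.random((M, Nc)) > 0.5).astype(float)

# Truncated PCA basis of m modes and reconstruction of the first snapshot.
S_mean = snaps.mean(axis=0)
U, s, Vt = np.linalg.svd(snaps - S_mean, full_matrices=False)
Phi = Vt[:m].T                              # Nc x m truncated basis
alpha = (snaps[0] - S_mean) @ Phi           # coefficients of snapshot 0
S_rec = S_mean + Phi @ alpha                # reconstructed pixel map

# Truncation leaves grayscale values strictly between 0 and 1:
has_grayscale = bool(np.any((S_rec > 0.05) & (S_rec < 0.95)))
```

The reconstructed map is the orthogonal projection of the snapshot onto the truncated basis, so it is never farther from the snapshot than the mean alone, yet its entries are generally no longer binary, which is what the density filter below must handle.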

Density filtering
Due to the truncation of the basis vectors, it is possible that in the course of the optimization we pass through points slightly outside the feasible surfaces. A typical example is presented in figure 7, showing the conversion from pixel map to boundary pixels. In this simple example, we use a worst-case pixel map with values of 0, 0.5 and 1. Here there is a single boundary/edge to be located, but in complex shapes the density filter needs to allow the user to capture every possible edge. For the purposes of shape optimization, the authors have found Canny's algorithm [6] to be an appropriate density filter, with the pretreatment presented below. Any density gradient-based filter can be thrown off by situations like the one shown in figure 7, where the filter detects two boundaries, since both edges (the real one as well as the false one) represent an equally strong gradient, no matter the filter threshold. One would expect that a sufficiently diverse family of snapshots, a grid of sufficient resolution and a good optimization algorithm would limit the occurrence of such situations, but we may still need to deal with them during intermediate stages of the optimization. In order to distinguish between two gradients of the same strength based on the actual density values, we recommend pretreating each element of the snapshot S_old according to equation (10), which attenuates the gradients lying further away from values of 1 (since the true boundary should be close to the front corresponding to a value of 1 in the approach presented in this paper),
where h and b are constants that can be adjusted according to the type of problem being reduced. This is a simple but effective pretreatment, somewhat inspired by grayscale suppression methods in topology optimization [24,19].
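Equation (10) itself is not reproduced in this excerpt; the sketch below uses a hypothetical single-exponent power law as a stand-in pretreatment with the same intent, attenuating densities far from 1 so that the false front at 0.5 produces a weaker gradient than the true boundary:

```python
import numpy as np

def pretreat(S_old, b=3.0):
    # Hypothetical stand-in for equation (10): a power law compresses values
    # far below 1, so gradients away from the value-1 front are attenuated.
    return np.asarray(S_old, dtype=float) ** b

# Worst-case row of the pixel map of figure 7: values 0, 0.5 and 1 produce
# two equally strong gradients (0.5 each) before pretreatment.
row = np.array([0.0, 0.0, 0.5, 0.5, 1.0, 1.0])
treated = pretreat(row)
false_jump = treated[2] - treated[1]   # gradient at the false front
true_jump = treated[4] - treated[3]    # gradient at the true boundary
```

After pretreatment the true edge carries a much stronger jump than the false one, so a gradient-threshold filter such as Canny's can separate them.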

Pixel map boundary and Moving least squares smoothing
The next step is to locate the coordinates of the corner points/vertices of the boundary pixels using one of various possible methods [3], as seen in figure 8 for a portion of the pixel map. Next, a local moving least squares approximation using radial basis weighting functions [20,4] is performed to construct the boundaries/edges from the vertices of the (reconstructed) pixel map,
where l is the parameter representing the curve.
The coefficients a_x(l) and a_y(l) are not constant over the domain but depend on the values of the design variables, and are chosen to minimize the functionals J_x(a) and J_y(a) defined by

J_x(a) = Σ_{i=1}^{N_n} w_i(l_i, l) [p(l_i)^T a_x(l) − x_i]^2,    J_y(a) = Σ_{i=1}^{N_n} w_i(l_i, l) [p(l_i)^T a_y(l) − y_i]^2,

where N_n is the number of vertices in the local neighborhood of each evaluation point used to describe the reconstructed pixel map, located by the marching cubes approach [3].
A typical radial-basis form for the weighting function is w_i(l_i, l) = exp(−(l_i − l)^2), but one could also choose an interpolating polynomial form to better capture a somewhat irregular shape due to local effects.
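A minimal moving least squares evaluator along the curve parameter l, with a Gaussian radial weight of the form above and a quadratic basis, might look like this; the bandwidth constant c is an illustrative choice:

```python
import numpy as np

def mls_point(l, l_nodes, x_nodes, c=16.0):
    # Weighted local least squares with a quadratic basis centered at l and
    # Gaussian radial weights w_i = exp(-c (l_i - l)^2). Returns the fitted
    # coordinate at l, i.e. the constant term of the centered basis.
    P = np.vander(l_nodes - l, 3, increasing=True)   # [1, (l_i-l), (l_i-l)^2]
    w = np.exp(-c * (l_nodes - l) ** 2)
    A = P.T @ (w[:, None] * P)                       # moment matrix P^T W P
    b = P.T @ (w * x_nodes)
    return np.linalg.solve(A, b)[0]

l_nodes = np.linspace(0.0, 1.0, 50)                  # vertex curve parameters
```

Because the basis is quadratic, vertex data sampled from any polynomial of degree at most 2 is reproduced exactly, whatever the weights; for general boundary data the weight bandwidth sets the smoothness/precision trade-off discussed later.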

Objective function evaluation
The reconstructed shape is meshed and the numerical analysis is performed using a method chosen according to the disciplines involved, i.e. CFD/Navier-Stokes for incompressible flows [17,16], FEA for structural analysis [22], etc. The only difference is that instead of obtaining X̄_opt directly, we attempt to find the final governing parameters t̄_opt and thus the coefficients ᾱ(t̄_opt) that optimize the performance objective. An important phase here is remeshing the surfaces obtained in section 2.4. Chappuis et al. [9] developed an approach for calculating principal curvatures from an existing mesh or shape using a secondary local model with Diffuse Interpolation, and then using these curvatures to identify shape primitives such as cylinders, tori, etc. for the purpose of meshing. In this paper, we have calculated the curvature energy for the reconstructed edges and used it to position the nodes/blocks for meshing the new shape.

Algorithm for overall procedure
The complete algorithm is shown in figure 9.

Test case: air-conditioning duct
The inlet and outlet portions of the air-conditioning duct have fixed geometries, while the middle portion allows for modification of the shape and thus of the performance of the duct. The duct geometry, as shown in figure 10, is completely described by the relative positions of points P1 to P11. P1 to P4 and P9 to P11 are assumed fixed, and the positions of P5, P6, P7 and P8 determine the geometry of the portion of the duct critical to performance. The parameters locating P5 to P8 are obtained by the geometric constructions shown. Parameters X1 to X5 locate P5 to P8, while parameters a1 to a4 and b1 to b4 allow us to draw Bezier curves passing through these points, tracing out the whole geometry of the curved portion of the duct. The geometry of the curved portion of the duct in 2D (the only variable part of the duct) may thus be characterized by 13 parameters in all: X1 to X5, a1 to a4 and b1 to b4; the design, and thus the performance, can be changed by altering these parameters. In order to retain a laminar flow, and for other design considerations, there are upper and lower bounds on these 13 parameters and thus on the possible designs for the duct. The design variables are thus X_1, ..., X_5 (for P_1..P_8) and a_1, ..., a_4, b_1, ..., b_4 (for the Bezier curves).

Figure 10. Duct geometry showing four different regions

CFD model and mesh
Since the Reynolds number for this configuration is typically low, the air flow is modeled using OpenFOAM for incompressible 2D laminar flow. For every possible design resulting from a particular choice of the 13 parameters described in section 4.1, we set up a CFD grid with 39000 grid points and 17250 hexahedral cells. The physical domain is split into 23 blocks for meshing purposes. The boundary conditions set the pressure at the duct outlet to 0 (atmospheric pressure) and the flow speed along the walls (straight as well as curved portions) to 0. The CFD analysis is run for 500 iterations to ensure convergence; the converged pressure and velocity fields in the duct are obtained for each hexahedral cell and are assumed to be evaluated at the midpoint of each cell for post-processing and surrogate function calls. The performance-related objective function (permeability) may be evaluated directly from the pressure/velocity fields (both CFD and surrogate) for each design.

Optimization problem
The optimization problem in the geometric space may be written as

max_X̄ P_flow(X̄)   subject to   L̄B ≤ X̄ ≤ ŪB,

where P_flow = flow permeability = 1/(pressure loss from inlet to outlet) = 1/(P_inlet − P_outlet), and ŪB and L̄B are the upper and lower bounds on the 13 design variables. Once we switch to the reduced space, we can express the objective function as a function of the PCA coefficients (α_i) and hence of the new design variables t̄, so the optimization problem becomes

max_t̄ P_flow(S(ᾱ(t̄)))   subject to   g_i(ᾱ) ≤ 0, i ∈ [1, N],   h(ᾱ) = 0.

The optimization problem is now one of finding the pixelized shape S(ᾱ(t̄_opt)). The constraints g_i, i ∈ [1, N], are obtained by transferring the bounds ŪB and L̄B to the α-space, while h represents the feasible region (the set of manifolds in ᾱ-space). Both g_i and h are taken into account implicitly by the Diffuse Approximation-based approach outlined earlier, which allows the α's to be expressed locally as functions of the final parameters t_1, ..., t_p.
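Since the reduced problem has only p = 2 design variables, even a brute-force sweep of the t-space is tractable; the sketch below uses a toy stand-in objective in place of the CFD-based permeability (the function names and the peak location are illustrative assumptions):

```python
import numpy as np

def optimize_reduced(p_flow, bounds, n=41):
    # Brute-force sweep of the 2-parameter t-space; the box bounds play the
    # role of the constraints g_i transferred from the geometric space.
    t1 = np.linspace(bounds[0][0], bounds[0][1], n)
    t2 = np.linspace(bounds[1][0], bounds[1][1], n)
    T1, T2 = np.meshgrid(t1, t2)
    vals = p_flow(T1, T2)
    i = np.unravel_index(np.argmax(vals), vals.shape)
    return np.array([T1[i], T2[i]]), vals[i]

# Toy stand-in for the CFD-based permeability, peaked at t = (0.2, -0.4):
p = lambda a, b: 1.0 / (1.0 + (a - 0.2) ** 2 + (b + 0.4) ** 2)
t_opt, best = optimize_reduced(p, [(-1.0, 1.0), (-1.0, 1.0)])
```

In practice each objective evaluation would reconstruct the shape S(ᾱ(t̄)) and call the CFD solver, and a gradient-based or surrogate-assisted optimizer would replace the sweep; the structure of the reduced problem is the same.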

Results and discussion
Since the mapping to the new space is highly nonlinear, we need to ensure that we stay in the feasible region during the course of the optimization. Clearly there will be some loss of accuracy due to the pixelization and PCA phases [10] that produced the intermediate design variables (α_1 ... α_m), and due to the reconstruction that recovers the duct geometry from the α's; the authors' contention, confirmed by observation, is that this error is consistent across designs and does not seriously affect the results of the optimization. The effect of the error can in any case be estimated easily by reconstructing the original snapshot geometries, after pixelization and decomposition, using the truncated basis. In the problem solved here, we have used only M = 102 snapshots. If M were chosen higher, e.g. 1000 or more snapshots using a Latin Hypercube Sampling between the bounds, the quality of the truncation with, say, m = 5 modes would improve. In this paper, we have focused on the approach of two-level model reduction using basis truncation and the α-space Diffuse Approximation.

Dimensionality and model reduction
Figure 11 shows the dimensionality deduction approach first presented in section 3. After analyzing the individual snapshots in α-space, it is clear from the set of 2D surfaces obtained that the behavior of the various α's is governed by just TWO parameters (say t_1 and t_2) that can easily be found by a Diffuse Approximation [20,4] over the retained α's. The feasible regions are represented by the α-manifolds and, as explained in section 3, staying on the manifold ensures an admissible solution, even though we may need to invoke the density filter from time to time during the optimization. This also means that ᾱ = [α_1(t_1, t_2), α_2(t_1, t_2), ..., α_m(t_1, t_2)] when using a truncated basis of size m. Thus ᾱ_opt = ᾱ(t_1^opt, t_2^opt), transferring the problem into the t-space, where t̄ are the final parameters controlling the overall design domain.

Interpolation of duct shape
This is an important step consisting of first using the α coefficients to determine the pixel map S representing the shape of the duct; passing the reconstructed pixel map S (if grayscale is present for whatever reason) through a Canny density filter, as explained previously, to get the boundary/edge pixels; applying "marching cubes" to extract the vertices of the boundary/edge pixels; and finally obtaining the actual smooth (reconstructed) geometric shape of the duct using the moving least squares approximation described in section 3. The weighting function needs to be chosen carefully when using a global (or local) Moving Least Squares (Diffuse Approximation), since a delicate balance between smoothness and precision is required and the duct geometry is composed of different types of sections. Since the boundary curves will next be used to create a CFD mesh, the authors feel it is better to err somewhat in favor of smoothness and to control precision by increasing the resolution. Figure 12 shows an enlarged view of the MLS smoothing in the two curved portions of the duct. Figure 13 shows the quality of reconstruction as the number of modes retained after truncation increases. As expected, the accuracy increases with the additional modes retained, but the gain in precision drops off quickly, as explained in section 3. Finally, even the small loss of accuracy is consistent across all designs, so by building a response surface between the X_i and α_i, or simply by inspecting the optimal shape obtained, one could easily extract the optimal design variables in the geometric design space (X̄_opt) from the optimal variables ᾱ_opt. Of course, regardless of the number m of modes retained, these are all expressed as functions of the true design variables, i.e. α_1(t_1, t_2) ... α_m(t_1, t_2), via the diffuse approximation.

Figure 12. Vertices and moving least squares curve (enlarged)
Figure 13. Reconstruction precision with increasing modes and truncation error

Optimization in the reduced-space
The goal is first to perform the optimization in the reduced space to get t̄_opt, then to calculate ᾱ(t̄_opt), and next to estimate X̄_opt (the original geometric parameters) from the values of ᾱ(t̄_opt), either by inspection of the optimized shape S(ᾱ(t̄_opt)) or using an RSM between the X_i and α_i. The permeability P_flow for every possible design was calculated as the inverse of the total pressure drop across the duct length (inlet to outlet). The next step was to obtain the optimal shape using 5-8 modes, followed by identification, by response surface methodology, of the values of the original 13 geometric design parameters for each ᾱ_opt, i.e. getting X̄(ᾱ_opt). The optimal solution obtained has been added to figure 10 and, as expected, it lies on the edge of the constraint/feasible region. This is followed by a "reverse look-up" (Table 1), projecting the pixel array obtained by shape generation, meshing and then pixelization onto the truncated basis of m modes:

ᾱ_rev = ᾱ_rev(S(ᾱ_opt)),

where ᾱ_opt are the optimal coefficients using m modes and ᾱ_rev the coefficients obtained by reverse identification from X̄(ᾱ_opt). This reverse look-up of α coefficients from the identified geometric parameters X̄(ᾱ_opt) is needed to account for the error introduced by the moving least squares approximation used to map the ᾱ_opt (m coefficients) to X̄(ᾱ_opt).
The velocity fields for the optimal shapes obtained using 5-8 modes are shown in figure 14, and the values of ᾱ_opt, X̄(ᾱ_opt) and ᾱ_rev are shown in Table 2, where the pressure drop needs to be minimized for optimal performance. The velocity field becomes increasingly regular as additional modes are retained. We also note that ᾱ_rev(S(ᾱ_opt)) and ᾱ_opt are fairly close to each other, the slight discrepancy being due to the error introduced by the moving least squares response surface between the α's and the geometric parameters. This error can be completely or mostly avoided by one of two approaches:
1. Using direct identification to extract the geometric design parameters directly from the structural shape/mesh created using the ᾱ_opt with m modes.
2. Using the ᾱ_opt to plot the structural shape directly, which can then be used for design purposes without first extracting the geometric design parameters. This can and should be the method of choice, but the RSM between the two sets of design variables is directly programmable in most cases and always an option for the design engineer.

Conclusions
In this paper, the authors have introduced an "a posteriori" scheme with a two-level model reduction to replace the geometry-based variables with a more compact and normalized set of variables, and to replace the higher-dimensional design space with a new design space of lower dimension. The overall interpolation technique is nonlinear and is constrained to produce only shapes from an abstract manifold in shape space induced by learning. The non-varying zones used for boundary conditions are naturally preserved, and additional constraints may be imposed using constrained versions of Proper Orthogonal Decomposition. Since the approach is "a posteriori", it is clearly dependent on the information contained in the snapshot database, and thus on the initial sampling used to create the database. While the analysis of the α-manifolds using a diffuse approximation is the main reduction phase, the truncation to m modes is also important to reduce computational effort. The results showed how the geometry could be described very closely even with 5 modes (ultimately depending on 2 parameters) and a modest snapshot database of 102 samples; the accuracy of the truncation can be directly improved by increasing the sampling size. One could also combine this approach with a surrogate model for the numerical analysis to greatly increase performance and reduce overall computation time.
The presented methodology has a few possible areas of improvement. The first is resolving the difficulty of setting upper and lower bounds on the α-based design variables. The second is the treatment of possible degenerate cases of the structural shape. The third is studying the efficacy of the Canny filter and the pretreatment in "bounding" 3D pixel maps.
In equation (5), an appropriate weighting function is used, for example the Gaussian w(d) = exp(−c d^2), to assemble the moment matrix A = P^T W P, where W is the diagonal matrix whose elements correspond to the weighted contributions of the nodes β̄_1 ... β̄_nbd. The local rank of the manifold is then detected by calculating the singular values of the moment matrix A; this gives the dimensionality p ≤ m.

Figure 3. Detecting dimensionality for a plate with a circular hole of varying radius

Figure 4. Walking the evaluation point along the α-manifold using diffuse approximation and tangent plane construction

Figure 5. α-manifolds for plate with two circular holes of fixed centers and independently varying radii

Figure 6. Diffuse α-manifold walking for the plate with two circular holes example

Figure 7. Canny filter for 2D case with and without proposed pretreatment

Figure 8. Finding the pixel map boundary

Figure 9. Schematic diagram for the overall algorithm