The post Collaboration between Neural Concept and PSA appeared first on Neural Concept.
With this new step, PSA aims to accelerate design cycles, the time between the ideation of a new design and the start of production. Another target is to optimize the performance of the next generations of vehicles, including greater autonomy and improved passenger comfort.
Figure 1: In this benchmark case, we compared the accuracy of a Geometric CNN to a production-level Gaussian Process. The dataset comprised 800 samples of geometries described by up to 22 parameters.
The benchmark study compared Geometric CNNs to Gaussian Process regression models specifically tuned for production-level simulations. It showed that the Geometric CNN, even though it has no access to the parametric description (and is therefore much more broadly applicable than the Gaussian Process), still outperforms the standard methods by a large margin.
Neural Concept Shape is a high-end deep learning software that understands 3D shapes (CAD) and learns how they interact with the laws of physics (CAE). It can emulate full-fledged simulators, delivering predictions in approximately 30 ms versus minutes to hours (or even days) for classic simulators. In other words, engineers can use Neural Concept Shape to explore, manually or automatically, a virtually unlimited number of designs without calling back the resource- and time-consuming simulator.
NCS links designers and simulation experts within the company, cutting lengthy iterations between teams. This dramatically accelerates R&D cycles, enhances product performance, and helps solve the most difficult engineering challenges.
The post Miniswys uses Neural Concept Shape for the design optimization of customized ultrasonic actuators appeared first on Neural Concept.
However, each application has its own set of requirements, and small design variations can lead to completely different modal behavior. In this context, Miniswys and Neural Concept have successfully collaborated over the past months to build a 3D Deep Learning based surrogate model. It provides an instantaneous and precise estimate of the dynamic behavior of these actuators from geometric and/or boundary-condition variations.
Using Neural Concept Shape, Miniswys can explore many different design iterations very quickly, without running the full-fledged simulator at every step. Ultimately, Miniswys can explore the design space extensively and find innovative geometries that outperform the classic ones, while drastically reducing the cost and time of the research and development phase. After using the tool, Raphaël Hoesli, CTO of Miniswys and directly involved in the project, expressed his satisfaction in the following words:
“Neural Concept Shape enables us to be much more efficient to design products meeting our customers’ requirements. The feedback from our design iterations is so fast that Miniswys’ engineers can see the evolution of the performance quasi instantaneously while changing the design parameters. In other words, slow iterations are replaced by quick predictions which give us the possibility to intuitively improve the performances of our actuators.”
These successful results encouraged Miniswys to continue using Neural Concept Shape and to leverage this surrogate model for shape optimization of piezo actuators.
Figure 1: Example of a Miniswys linear ultrasonic actuator.
Figure 2: Comparison between the simulation and the neural network prediction on a test sample.
The post Neural Concept SA and EPFL CVLab at NeurIPS 2020: “MeshSDF: Differentiable Iso-Surface Extraction.” appeared first on Neural Concept.
During the session, the team will introduce a differentiable way to produce explicit surface mesh representations from Deep Signed Distance Functions, removing the limitations of the Marching Cubes algorithm. The key insight is that, by reasoning about how implicit field perturbations impact local surface geometry, one can differentiate the 3D location of surface samples with respect to the underlying deep implicit field. The team exploits this to define MeshSDF, an end-to-end differentiable mesh representation that can vary its topology. They validate their theoretical insight with two different applications: Single-View Reconstruction via Differentiable Rendering and Physically-Driven Shape Optimization.
The Neural Information Processing Systems Foundation is a non-profit corporation whose purpose is to foster the exchange of research on neural information processing systems in their biological, technological, mathematical and theoretical aspects. The Conference on Neural Information Processing Systems is the main venue where the most groundbreaking scientific publications in machine learning are published every year, with more than 10,000 attendees.
NeurIPS 2020 is held virtually from Sunday, December 6 through Saturday, December 12, 2020.
Read the full paper at: https://arxiv.org/abs/2006.03997
The post NUMECA uses Neural Concept’s Deep Learning platform to study the nature of turbulence appeared first on Neural Concept.
The large-scale availability of High-Performance Computing (HPC) opens the door to a truly novel approach to turbulence model development: exploiting Artificial Intelligence (AI) and Machine Learning (ML) techniques applied to a database of high-fidelity, scale-resolving simulations of test cases that contain most features of separated flow regions or complex 3D flows. Figure 1 shows an example of a flow field that is used as the basis for the turbulence modelling task.
Figure 1: Flow structure of the T161 cascade. Typical flow field used as input for the turbulence model improvement.
The huge amount of data generated in these simulations requires a new approach to data mining. This is where Neural Concept brings in its Deep Learning based toolchain to analyse the very large amounts of data provided by 3D scale-resolving simulations.
Using Neural Concept’s Geometry-based Variational Auto-Encoders (VAE), NUMECA was able to gain first insights into correlations between tens of statistically averaged flow variables. The VAE first compresses the data in a physically meaningful way into so-called ‘embeddings’ and then reconstructs the original input from the compressed data. This is done to very high accuracy, which makes it possible to use the ML model as a replacement, a so-called surrogate, for the original data. The advantages are much easier handling of the data and the possibility of exploiting data-mining and analysis techniques that help to understand the physics in the data.
Figure 2 shows an example of the possible analysis. The colors of the symbols in the 2D plot correspond to the value of the ‘embedding’ and are the same in the 3D view (left) and in the 2D plot (right). Points of the same color have the same value for all the considered physical quantities, and the 3D view, colored by the embedding value, gives one global statistical representation of several physical quantities over the investigated domain. Together, the two plots provide a new perspective on the flow behaviour via the machine learning model. Figure 2 shows snapshots of the views used in the Graphical User Interface.
Figure 2: (top) Structure found by the ML model. (bottom) Statistical analysis of quantities
The post Collaboration of Neural Concept and Bosch on successful applications of 3D Deep Learning based surrogate models appeared first on Neural Concept.
More specifically, we achieved promising results on E-Drive motor housing simulations. Bosch Research engineers trained a deep Geometric Convolutional Neural Network (GCNN) to accurately emulate, within a few milliseconds, the full-fledged Finite Element software.
These successful results encouraged Bosch Research to continue the collaboration with Neural Concept on a further application of shape design optimization.
“For the considered application, NCS performs clearly better than currently used surrogate models and therefore we see the potential of NCS for more use-cases.”
–Roland Schirrmacher, Structural Dynamics and Acoustics engineer at Bosch
The post Top 100 Swiss Startup Award appeared first on Neural Concept.
Neural Concept promises to keep moving forward with a creative mindset and passion towards a new generation of CAD and CAE tools boosted by Deep Learning, to fulfill the expectations of those who trust and support our technology. You can find more details on our Deep Learning engineering software on our page here.
For the complete list of the Top 100 Swiss Startups, please check here.
The post On Deep Learning and Multi-objective Shape Optimization appeared first on Neural Concept.
In such cases, we usually refer to the objective function as a black-box function: when queried at given input locations, it answers back with the true objective values we are interested in. But in engineering design problems, querying the function can be very expensive! A single function evaluation can take minutes, hours, or even days. One would therefore want to be highly selective in choosing the input locations to query. One technique widely used to help is Bayesian optimization. It works by constructing, or learning, a probabilistic model of the objective function, called the surrogate model, which is then searched efficiently with an acquisition function before candidate samples are chosen for evaluation on the real objective function.
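This loop can be sketched with a toy Gaussian Process surrogate and an Expected Improvement acquisition function. Everything here (the 1-D objective, the kernel length scale, the grid of candidate points) is an illustrative stand-in for a real, expensive simulation:

```python
import numpy as np
from math import erf, sqrt

def rbf_kernel(a, b, length=0.3):
    # squared-exponential kernel between two sets of 1-D inputs
    d = a.reshape(-1, 1) - b.reshape(1, -1)
    return np.exp(-0.5 * (d / length) ** 2)

def gp_posterior(X, y, Xq, noise=1e-6):
    # Gaussian Process posterior mean and standard deviation at query points Xq
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    Ks = rbf_kernel(X, Xq)
    K_inv = np.linalg.inv(K)
    mu = Ks.T @ K_inv @ y
    var = np.ones(len(Xq)) - np.sum(Ks * (K_inv @ Ks), axis=0)  # k(x, x) = 1
    return mu, np.sqrt(np.maximum(var, 1e-12))

def norm_pdf(z):
    return np.exp(-0.5 * z ** 2) / sqrt(2 * np.pi)

def norm_cdf(z):
    return 0.5 * (1.0 + np.vectorize(erf)(z / sqrt(2)))

def expected_improvement(mu, sigma, best):
    # EI acquisition for minimisation: trades off low mean and high uncertainty
    z = (best - mu) / sigma
    return (best - mu) * norm_cdf(z) + sigma * norm_pdf(z)

def objective(x):
    # stand-in for an expensive black-box simulation
    return np.sin(3 * x) + 0.5 * x

X = np.array([0.1, 0.5, 0.9])          # initial design of experiments
y = objective(X)
grid = np.linspace(0.0, 2.0, 200)      # candidate input locations
for _ in range(10):
    mu, sigma = gp_posterior(X, y, grid)
    x_next = grid[np.argmax(expected_improvement(mu, sigma, y.min()))]
    X = np.append(X, x_next)
    y = np.append(y, objective(x_next))
print("best design:", X[y.argmin()], "objective:", y.min())
```

Each iteration spends one "expensive" evaluation where the acquisition function, not a blind grid, says it is most worthwhile.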
The video embedded in this article provides a glimpse of how the concepts covered here fit together and are made easily accessible to the engineer through a simple user interface. Though it treats a rather synthetic example, the video gives a first glance of the online-learning loop automation possible with the NCS software.
In summary, given a black-box function, an efficient search strategy of its input space to find the optimum boils down to answering the two following questions: how do we model the function from a limited number of evaluations (the modeling question), and how do we choose the next input locations to evaluate (the exploration question)?
The surrogate model we are after is one we can use to make reliable predictions about the latent function while also maintaining a measure of uncertainty over those predictions. That is why Gaussian Processes (GPs) have largely been used to answer the modeling question. GP-based surrogate models provide a nice and flexible mechanism for learning continuous functions by interpolating observations. The confidence intervals GPs provide can also be used to assess whether the predictions should be refined in some regions of interest. GPs are very general and enjoy a neat mathematical foundation. They have, however, the shortcoming of being bound to a rather small number of parameters describing the input space. In other words, GPs lose their efficiency in high-dimensional spaces, when the number of features exceeds a few dozen.
As for the exploration question, for a relatively low-dimensional input space both grid and random search strategies can do the job. But for most realistic optimization problems, the dimensionality of the input space is so high that grid search quickly becomes intractable, and random search may not be the best option to adopt. A combination of evolutionary algorithms with a heuristic local search is often used to efficiently search the input space and acquire new samples to evaluate.
The non-convexity of the objective is another challenge faced in optimization problems. When constructing the surrogate, one should be careful about premature convergence and the possibility of getting stuck in a local rather than the global optimum.
Most real-world optimization applications are formulated as multi-objective optimization problems, where we seek to optimize several criteria simultaneously while taking the imposed constraints into account. In that case, instead of finding a single ultimate optimum, the goal of the optimization is to recover the Pareto front of the different objectives or, in certain cases, to identify Pareto-optimal points only in a subset of the Pareto front.
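Recovering the Pareto front from a set of already-evaluated designs reduces to filtering out dominated points. A minimal sketch, with a hypothetical bi-objective (drag, weight) dataset:

```python
import numpy as np

def pareto_front(points):
    """Indices of the non-dominated points, all objectives to be minimised.

    A point dominates another if it is no worse in every objective and
    strictly better in at least one.
    """
    pts = np.asarray(points, dtype=float)
    keep = np.ones(len(pts), dtype=bool)
    for i in range(len(pts)):
        if not keep[i]:
            continue  # anything a dominated point dominates was already removed
        dominated = np.all(pts >= pts[i], axis=1) & np.any(pts > pts[i], axis=1)
        keep &= ~dominated
    return np.where(keep)[0]

# hypothetical bi-objective data: (drag, weight) for five candidate designs
designs = np.array([[1.0, 5.0],
                    [2.0, 3.0],
                    [3.0, 1.0],
                    [2.5, 3.5],
                    [4.0, 4.0]])
front = pareto_front(designs)
print("Pareto-optimal designs:", front)  # designs 0, 1 and 2
```

Designs 3 and 4 are both dominated by design 1, so the front consists of the three trade-off solutions no single design can improve on in both objectives at once.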
Another equally important aspect to consider when tackling a global optimization problem is the possibility of exploiting multi-fidelity evaluations. When the number of high-fidelity evaluations at our disposal is limited, as is often the case for very complex functions, we can kick-start the surrogate model training with lower-fidelity evaluations to first explore which regions of the search space to query further, while sparing the high-fidelity evaluations for the regions where high-accuracy predictions matter most. Constructing the surrogate model from multi-fidelity evaluations provides a nice trade-off between prediction accuracy and computational efficiency. Furthermore, by blending evaluations coming from different sources, it contributes to a more interoperable optimization process. Learning the surrogate model from multi-fidelity evaluations goes hand in hand with uncertainty quantification: the lower the fidelity of an evaluation, the wider the confidence interval around the prediction, and vice versa. In the context of active learning, these confidence intervals, or variance maps, are essential for planning further sample acquisition.
Multi-objective optimization has a multitude of applications in the realm of numerical simulations. 3D shape design optimization is a particularly interesting domain for such applications. The examples are numerous, from optimizing the aero- or hydrodynamic characteristics of a design through computational fluid dynamics (CFD) to ensuring proper solid-material rigidity through structural analysis.
If you are a mechanical engineer working on the design of a propeller blade, or perhaps perfecting the profile of an airplane wing, you will almost certainly refer to Computational Fluid Dynamics (CFD) simulations as a cheaper proxy for solving the Navier-Stokes equations. You might equally be a medical engineer carefully calibrating the design of a cardiovascular pump; CFD simulations would again be among your most important tools for ensuring optimal fluid flow through your design. In these two scenarios, and numerous others, CFD simulations prove very important, especially considering the reduced need for physical tests they bring. When it comes to computational complexity, however, CFD simulations are among the most expensive function evaluations: a full-blown high-fidelity simulation may very well last for a few days. As an engineer with a strong problem-solving mindset, would you always wait for every single simulation result? Most probably you wouldn't. You'd rather build a surrogate!
A very similar process applies to other engineering disciplines such as mechanical and electromagnetic engineering. In mechanical structural analysis, for example, Finite Element (FE) solvers are widely used to predict design performance characteristics such as part distortions due to internal stresses. FE solvers are based on the Finite Element Method, which approximates the solution of a solid-material analysis by discretizing the continuous body of the design into a large but finite number of elements and solving the problem in the domain of each element, together with the corresponding boundary conditions. The smaller the element size, the more accurate, but also the more expensive, the simulation results.
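As a minimal illustration of that accuracy/cost trade-off, the sketch below assembles a 1-D linear-element FE model of a bar fixed at one end under a uniform axial load (the geometry, load and material values are illustrative, not from any real case) and shows the discretization error shrinking as the elements get smaller:

```python
import numpy as np

def fem_bar(n_elems, L=1.0, EA=1.0, q=1.0):
    """1-D bar fixed at x=0 under a uniform distributed axial load q.
    Linear two-node elements; returns node coordinates and displacements."""
    n_nodes = n_elems + 1
    x = np.linspace(0.0, L, n_nodes)
    h = L / n_elems
    K = np.zeros((n_nodes, n_nodes))
    f = np.zeros(n_nodes)
    ke = (EA / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])  # element stiffness
    fe = q * h / 2.0 * np.array([1.0, 1.0])               # consistent load vector
    for e in range(n_elems):
        K[e:e + 2, e:e + 2] += ke
        f[e:e + 2] += fe
    # Dirichlet boundary condition u(0) = 0: solve on the free dofs only
    u = np.zeros(n_nodes)
    u[1:] = np.linalg.solve(K[1:, 1:], f[1:])
    return x, u

def midpoint_error(n_elems):
    # compare the FE interpolation to the exact quadratic solution
    # u(x) = q/(2 EA) * (2 L x - x^2), here with L = EA = q = 1
    x, u = fem_bar(n_elems)
    xm = 0.5 * (x[:-1] + x[1:])           # element midpoints
    u_fem = 0.5 * (u[:-1] + u[1:])        # linear interpolation between nodes
    u_exact = 0.5 * (2 * xm - xm ** 2)
    return np.max(np.abs(u_fem - u_exact))

for n in (4, 8, 16):
    print(n, "elements -> max midpoint error", midpoint_error(n))
```

Each halving of the element size cuts the interpolation error by roughly a factor of four, the expected second-order behavior of linear elements, while the system to solve grows accordingly.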
The above examples represent only a small fraction of the applications of surrogate modeling for shape optimization. Engineers are clearly not oblivious to the great potential of surrogate models for accelerating 3D simulations and optimizing their designs. Yet, with the limitations imposed by the parametrization requirements of most current GP-based surrogate models, it is still rather difficult to harness the full benefits of surrogate models in fully automated optimization loops.
More recently, a new family of surrogate models has been gaining traction in scientific engineering circles. By training a deep neural network to learn the surrogate model, one can not only overcome the low-dimensionality limitation of Gaussian-based surrogates but, in some cases, also obtain superior prediction accuracy [4]. The Deep Learning approach is also more interoperable. Dropping the handcrafted shape-parameterization requirement means that we can train the network on arbitrary shapes, exploiting the capabilities of transfer learning and leveraging geometries from different datasets that were otherwise locked in project silos, each with a distinct regressor trained exclusively on it.
Not very surprisingly, though, this new technology comes with its own set of challenges. At first sight, 3D Convolutional Neural Networks might appear to be the right candidate for the task. However, their large memory footprint makes it difficult to fit all the data required for training into memory, even on modern GPUs. A possible mitigation, one that compromises accuracy, is training the network on a relatively coarse discretization of the volumetric input, i.e. voxels. A better alternative is a newer CNN architecture, the Geodesic Convolutional Neural Network (GCNN) in particular. GCNNs learn directly from the surface-mesh representation of 3D geometries and can thus considerably reduce the computing requirements of plain CNNs. Besides their enhanced usability and interoperability, GCNN-based surrogate models offer another important advantage over Gaussian Processes and other parametrized surrogates in the context of 3D shape optimization. Because the network is trained directly on the surface mesh of the shape, the gradient calculations can be back-propagated down to the original vertices, enabling free-form first-order optimization of the shape. This can be very useful when embedded in a multi-objective optimization process, though one still has to integrate proper constraints to guarantee smoothness and preserve other design requirements.
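The idea of pushing gradients of an objective down to the mesh vertices, under a smoothness constraint, can be illustrated on a toy 2-D polygon. The "physics" objective here is just the distance of each vertex from the unit circle, a deliberately simple stand-in for a real simulation-driven loss:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
# noisy closed polygon around the unit circle; the vertices are the design variables
radii = 1.0 + 0.3 * rng.standard_normal(n)
V = np.stack([np.cos(theta), np.sin(theta)], axis=1) * radii[:, None]

def loss_and_grad(V, lam=0.5):
    # "physics" objective: squared distance of each vertex from the unit circle
    r = np.linalg.norm(V, axis=1)
    fit = np.sum((r - 1.0) ** 2)
    g_fit = 2.0 * (r - 1.0)[:, None] * V / r[:, None]
    # smoothness regulariser: discrete Laplacian of the closed polyline
    lap = np.roll(V, 1, axis=0) - 2.0 * V + np.roll(V, -1, axis=0)
    smooth = np.sum(lap ** 2)
    g_smooth = 2.0 * (np.roll(lap, 1, axis=0) - 2.0 * lap + np.roll(lap, -1, axis=0))
    return fit + lam * smooth, g_fit + lam * g_smooth

L0, _ = loss_and_grad(V)
for _ in range(500):
    L, g = loss_and_grad(V)
    V = V - 0.05 * g   # first-order update applied directly to the vertices
print("loss:", L0, "->", L)
```

No parametrization is ever defined: every vertex moves freely, and the Laplacian term plays the role of the smoothness constraint mentioned above.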
The above-mentioned Geodesic Convolutional Neural Networks form the core architecture of the surrogate models provided by Neural Concept Shape, the first software interface of its kind dedicated to 3D Deep Learning shape optimization.
This article presented a very brief, high-level overview of multi-objective global function optimization and the benefits one can unlock by using deep learning approaches to construct the surrogate model used in the optimization process. The ideas presented here are inspired by the optimization framework used in the Neural Concept Shape software. Some interesting use cases can be found by browsing our website. Further reading materials and related articles are listed below.
[1] Deep Fluids: A Generative Network for Parameterized Fluid Simulations. https://arxiv.org/pdf/1806.02071.pdf
[2] Deep learning for mechanical property evaluation. http://news.mit.edu/2020/deep-learning-mechanical-property-metallic-0316
[3] Learning to Simulate Complex Physics with Graph Networks. https://arxiv.org/abs/2002.09405
[4] Geodesic Convolutional Shape Optimization. https://arxiv.org/abs/1802.04016
The post 10 SWISS ENGINEERING STARTUPS TO WATCH IN 2020 appeared first on Neural Concept.
You can see the whole article at https://www.venturelab.ch/10-Swiss-Engineering-Startups-to-Watch-in-2020
The post In-graph training loop appeared first on Neural Concept.
As a follow-up to this previous test, we compared the performance of an in-graph training loop (https://www.tensorflow.org/guide/function#advanced_example_an_in-graph_training_loop) with a Python loop and a model with an annotated __call__ function.
We performed training with 100, 200, 400, 800, and 1600 steps on the naval dataset with the following config:
It can be seen that the Python loop has some upfront initial cost, but over time it runs faster than the in-graph training loop.
The initial cost of the Python loop can be attributed to the fact that a separate trace is performed for each distinct input shape; once all the different shapes have been encountered, there are no further delays and it runs faster than the in-graph loop.
This finding also corroborates the issue https://github.com/tensorflow/tensorflow/issues/35165
To avoid retracing for different shapes, we should provide an input_signature to the model's tf.function. The input_signature is known only once the partial shape information is available, so instead of statically annotating with tf.function, we need to create a tf.function with the right input signature on the fly.
Comparison of training in a Python loop, avoiding the retracing:
As a follow-up to "tf.function retracing vs specifying an input signature", training on the naval dataset was run for 5000 steps with various combinations of annotated tf.functions. The results are as follows:
| # | Model | Outer function | Retracing happens | Mean train step (discarding retracing times) |
|---|-------|----------------|-------------------|----------------------------------------------|
| 1 | tf.function with signature | python function | no | 0.28 |
| 2 | tf.function without signature | train_step is a tf.function with signature | no | 0.22 |
| 3 | tf.function without signature | compute_gradient is a tf.function with signature | no | 0.25 |
| 4 | tf.function without signature | compute_gradient is a tf.function without signature | yes | 0.24 |
| 5 | tf.function without signature | train_step is a tf.function without signature | yes | 0.14 |
Refer to the attached notebook for details.
We can conclude that retracing is an expensive operation, but once the function has been traced for all possible shapes, the compiled function is faster.
Making train_step the tf.function is much faster than making compute_gradient the tf.function.
The post Deep Neural Network in simulations appeared first on Neural Concept.
Indeed, numerical simulation techniques have traditionally relied on solving physically derived equations using finite differences, adding heuristic models when those equations become too complex to solve (turbulence models in fluid mechanics, for example). More recently, Lattice Boltzmann methods have become popular as a means to simulate streaming and collision processes across a limited number of particles. Both classes of techniques remain computationally very expensive (we are talking about hours to days of simulation), and since the simulation must be re-run each time an engineer wishes to change the shape, the design process is slow and costly.
A typical engineering approach is therefore to test only a few designs without a fine-grained search in the space of potential variations. Hence the company is limited by:
Since this is a severe limitation, there have been many attempts at overcoming it, and one of the most famous is reduced-order modeling:
Reduced-Order Modeling (ROM) is a class of Machine Learning approaches used to learn a simplified model of a simulator from data. The method builds a simplification of a high-fidelity dynamical model from a large number of numerical simulations. It preserves essential behavior and dominant effects, in order to reduce the solution time or the storage capacity required by the more complex model. It is applied across a large range of physics and has proven its efficiency for specific applications. It works well when the engineer wants to vary a few well-defined parameters with a specific objective in mind. A good overview of the different techniques is given in this paper: https://www.sciencedirect.com/science/article/abs/pii/S0376042103001131
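A classical linear ROM can be sketched with Proper Orthogonal Decomposition: collect a matrix of simulation snapshots, extract a reduced basis with the SVD, and reconstruct new fields from a handful of coefficients. The synthetic snapshot data below is purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
# snapshot matrix: each column is one "simulation result" (a field on 200 grid
# points, generated here from 3 latent parameters plus a little noise)
x = np.linspace(0.0, 1.0, 200)
modes_true = np.stack([np.sin(np.pi * x),
                       np.sin(2 * np.pi * x),
                       np.sin(3 * np.pi * x)])
coeffs = rng.standard_normal((50, 3))
snapshots = (coeffs @ modes_true).T + 0.01 * rng.standard_normal((200, 50))

# Proper Orthogonal Decomposition: SVD of the snapshot matrix
U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
k = 3                      # keep only the k dominant modes
basis = U[:, :k]           # reduced basis: 200 dofs -> k dofs

# project a new field onto the reduced basis and reconstruct it
new_field = np.array([0.5, -1.0, 2.0]) @ modes_true
reduced = basis.T @ new_field        # k reduced coordinates
reconstructed = basis @ reduced
err = np.linalg.norm(new_field - reconstructed) / np.linalg.norm(new_field)
print("relative reconstruction error:", err)
```

Three numbers now stand in for a 200-point field, which is exactly the compression ROM trades on; the drawbacks discussed next stem from the basis being linear and tied to the snapshot set.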
However, the modeling power of classical ROM methods is limited, and they present several drawbacks:
Deep Neural Networks are an extension of classical Reduced-Order Modeling in which the approximating function is not limited to a simple linear model but is extended through a stack of non-linear operations called layers. A very recent branch of Deep Learning research applies this concept to the processing of geometric information and has been able to overcome the limitations of the more classical reduced-order models. Based on a neural network architecture, it understands 3D shapes and learns how they interact with the laws of physics. Since it uses raw, unprocessed 3D geometries as input, it does not suffer from any of the drawbacks I mentioned above. The engineer can now leverage a historical database (even if the parametrization of a given part has evolved over time!) and integrate experimental data as well. 3D Deep Learning allows a switch from siloed workflows to a common base where information is globally shared and continuously re-used.
It is also orthogonal to the physics: the same technique can tackle a very large range of physical problems. Finally, its overall performance improves over time, as it can be fed with new simulations on the fly.
Conclusion: after these few lines, it seems like 3D Deep Learning is the solution to enhance engineering processes! It is, but there is also a very important step required to exploit the full potential of this tool (and of any machine learning technique): preparing your data so that the model can extract the maximum amount of information from it. In a follow-up article, I will share a few tips and tricks to get the best out of your data when using Machine Learning for numerical simulations. Stay tuned!
The post Deep Neural Network in simulations appeared first on Neural Concept.