The post NCS to enhance the design of turbomachines appeared first on Neural Concept.

In this video, you can see how the user navigates the performance map, evaluating the behavior of a design at specific operating conditions, for different values and views (pressure field, velocity field, etc.). The user can then upload a new geometry and get instantaneous predictions from the model over the whole range of operating conditions.

To see the full presentation by Dirk Wunsch from NUMECA International, click on the following link:

The post Training deep learning models efficiently on the cloud appeared first on Neural Concept.


With Neural Concept Shape, you can use 3D numerical simulations as input to train your deep learning models. Taking aerodynamic simulations as an example, CFD results are usually much larger files than images or text (a single result can reach several hundred GB). Storing a large number of them can therefore become an issue in the long term, as it requires regularly scaling up the hardware infrastructure accordingly.

The engineer may then have difficulty streaming through the files to perform relevant analyses, and this becomes a real limitation and bottleneck in the use of deep learning for engineering applications. At Neural Concept, we are aware of this issue and have addressed it by evaluating different solutions over time.

Our initial setup was an NFS (Network File System) solution. It is very convenient because it allows files to be accessed the same way they would be on the local storage of a machine, and it can be shared by several users in a team. However, this solution does not scale well with the accumulation of data and the increasing number of experiments run by users simultaneously. Moreover, it can quickly become expensive.

As an alternative, we chose to store the data in a secure cloud environment, using FUSE libraries, which allow us to access data as easily as if it were on a local computer while benefiting from a powerful and flexible cloud architecture. FUSE, which stands for Filesystem in Userspace, is an interface for Unix-like operating systems that lets users create their own file systems. Using a FUSE library, it is possible to mount a cloud storage bucket onto the local filesystem, so that applications can access files in cloud storage as if they were on a local file system. Users of Neural Concept Shape can now train models directly from secure cloud storage without any impact on computation speed, as our internal benchmarks confirmed.

*Figure 1: Comparison of training speed. We reach similar training speeds whether the data is stored on NFS or in a secure cloud environment.*

Over the past years, the performance of GPUs has drastically improved, and they are now widely used in various deep learning applications. They allow very fast and efficient computing, with large memory sizes available. Hence, the most modern GPUs are able to tackle complex physics-based deep learning challenges and deal with (very) refined geometries.

With Neural Concept Shape, we use 3D simulation data to train our models, which can mean very heavy files if the simulation is extremely detailed. Most people would tend to think that the main bottleneck when dealing with such data is the GPU itself, but that is not always the case. The main cause of slow-downs (which can be critical for some applications) is sometimes the streaming of the data to the GPU. Indeed, for large files, and especially when the data is fetched over the network from cloud storage, this can drastically slow down the training process. It can then become a real limitation in the use of deep learning for engineering applications.

This is why we use the cache functionality of the TensorFlow data API (https://www.tensorflow.org/api_docs/python/tf/data/Dataset#cache). We cache the dataset to a local SSD disk during the training process, allowing very efficient retrieval of the data. Cached data can be accessed very rapidly because it is stored locally: after the initial pass over the dataset, the data is cached and subsequent iterations run much faster.
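The idea can be sketched in a few lines of plain Python. This is an illustrative stand-in for `tf.data.Dataset.cache`, not the TensorFlow implementation: the first pass over the data pays the full loading cost and writes the samples to a local cache file, and every later pass reads from that file instead.

```python
import os
import pickle
import tempfile

class CachedDataset:
    """Illustrative stand-in for tf.data.Dataset.cache: the first pass pays
    the full loading cost and writes the samples to a local cache file;
    subsequent passes read from that file instead of calling the loader."""

    def __init__(self, load_fn, keys, cache_path=None):
        self.load_fn = load_fn      # expensive loader, e.g. a fetch from cloud storage
        self.keys = keys            # identifiers of the samples to load
        self.cache_path = cache_path or os.path.join(tempfile.mkdtemp(), "cache.pkl")
        self._cached = False

    def __iter__(self):
        if self._cached:
            with open(self.cache_path, "rb") as f:
                yield from pickle.load(f)   # fast: served from local disk
            return
        samples = []
        for key in self.keys:
            sample = self.load_fn(key)      # slow: fetched over the network
            samples.append(sample)
            yield sample
        with open(self.cache_path, "wb") as f:
            pickle.dump(samples, f)
        self._cached = True

calls = []
def slow_load(key):
    calls.append(key)            # record how often the expensive path is taken
    return key * key

dataset = CachedDataset(slow_load, [1, 2, 3])
epoch_1 = list(dataset)          # first epoch: every sample goes through slow_load
epoch_2 = list(dataset)          # later epochs: slow_load is never called again
```

In the real pipeline, `dataset.cache('/local/ssd/path')` plays the role of this class, and the expensive loader is the network fetch from cloud storage.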

*Figure 2: Comparison of training speed with and without the use of a local SSD cache.*

In the graph, we see that after the initial 150 steps (after which the data has been cached to the local SSD disk), the training speed increases and remains steady when using the cache. This enables engineers using Neural Concept Shape to run various experiments very efficiently, even when dealing with a large dataset or very complex simulations.

The post Collaboration between Neural Concept and PSA appeared first on Neural Concept.


With this new step, PSA aims to accelerate design cycles, that is, the time between the ideation of a new design and the start of production. Another target is to optimize the performance of the next generations of vehicles, including longer range and greater passenger comfort.

*Figure 1: In this benchmark case, we compared the accuracy of a Geometric CNN to a production-level Gaussian Process. The dataset was composed of 800 samples of geometries described with up to 22 parameters.*

The benchmark study compared Geometric CNNs to Gaussian-Process based regression models specifically tuned for production-level simulations. It showed that, even though the Geometric CNN does not have access to the parametric description and is therefore much more broadly applicable than the Gaussian Process, it still outperforms the standard methods by a large margin.

Neural Concept Shape is a high-end deep learning software that understands 3D shapes (CAD) and learns how they interact with the laws of physics (CAE). It can emulate full-fledged simulators, giving predictions in approximately 30 ms, versus minutes to hours (or even days) for classic simulators. In other words, engineers can use Neural Concept Shape to explore, manually or automatically, a virtually unlimited number of designs without falling back on the resource- and time-consuming simulator.

*NCS is the link between designers and simulation experts in the company, reducing lengthy iterations between teams. This dramatically accelerates R&D cycles, enhances product performance, and helps solve the most difficult engineering challenges.*

The post Miniswys uses Neural Concept Shape for the design optimization of customized ultrasonic actuators appeared first on Neural Concept.


However, each application has its own set of requirements, and small design variations can lead to completely different modal behavior. In this context, Miniswys and Neural Concept have successfully collaborated over the past months to build a 3D Deep Learning based surrogate model. It provides an instantaneous and precise estimation of the dynamic behavior of these actuators, based on geometric and/or boundary-condition variations.

Using Neural Concept Shape, Miniswys can explore many different design iterations very quickly, without going through the full-fledged simulator at every step. Ultimately, Miniswys can explore the design space extensively to find innovative geometries that outperform the classic ones, while drastically reducing the cost and time of the research and development phase. After using the tool, Raphaël Hoesli, CTO of Miniswys and directly involved in the project, expressed his satisfaction in the following words:

**“Neural Concept Shape enables us to be much more efficient to design products meeting our customers’ requirements. The feedback from our design iterations is so fast that Miniswys’ engineers can see the evolution of the performance quasi instantaneously while changing the design parameters. In other words, slow iterations are replaced by quick predictions which give us the possibility to intuitively improve the performances of our actuators.”**

These successful results encouraged Miniswys to continue using Neural Concept Shape and to leverage this surrogate model for the shape optimization of piezo actuators.

**Figure 1**: Example of Miniswys linear ultrasonic actuator.

**Figure 2:** Comparison between the simulation and the neural network prediction on a test sample.

Neural Concept Shape is a high-end deep learning software that understands 3D shapes (CAD) and learns how they interact with the laws of physics (CAE). It can emulate full-fledged simulators, giving predictions in approximately 30 ms, versus minutes to hours (or even days) for classic simulators. In other words, engineers can use Neural Concept Shape to explore, manually or automatically, a virtually unlimited number of designs without falling back on the resource- and time-consuming simulator. This dramatically accelerates R&D cycles, enhances product performance, and helps solve the most difficult engineering challenges.

The post Neural Concept SA and EPFL CVLab at NeurIPS 2020: “MeshSDF: Differentiable Iso-Surface Extraction.” appeared first on Neural Concept.


During the session, the team will introduce a differentiable way to produce explicit surface mesh representations from Deep Signed Distance Functions, removing a limitation of the Marching Cubes algorithm. The key insight is that, by reasoning about how perturbations of the implicit field impact local surface geometry, one can differentiate the 3D location of surface samples with respect to the underlying deep implicit field. The team exploits this to define MeshSDF, an end-to-end differentiable mesh representation that can vary its topology. They validate their theoretical insight with two different applications: Single-View Reconstruction via Differentiable Rendering and Physically-Driven Shape Optimization.

The Neural Information Processing Systems Foundation is a non-profit corporation whose purpose is to foster the exchange of research on neural information processing systems in their biological, technological, mathematical, and theoretical aspects. The Conference on Neural Information Processing Systems is the main venue where the most groundbreaking machine learning publications appear every year, with more than 10,000 attendees.

NeurIPS 2020 is held virtually, from Sunday, December 6 through Saturday, December 12, 2020.

Read the full paper at: https://arxiv.org/abs/2006.03997

The post NUMECA uses Neural Concept’s Deep Learning platform to study the nature of turbulence appeared first on Neural Concept.

The large-scale availability of High-Performance Computing (HPC) opens the door to a truly novel approach to turbulence model development: exploiting Artificial Intelligence (AI) and Machine Learning (ML) techniques applied to a database of high-fidelity, scale-resolving simulations of test cases that contain most features of separated flow regions or complex 3D flows. Figure 1 shows an example of a flow field that is used as the basis for the turbulence modelling task.

**Figure 1**: Flow structure of the T161 cascade, a typical flow field used as input for the turbulence model improvement.

The huge amount of data generated in these simulations requires a new approach to data mining. This is where **Neural Concept** brings in its Deep Learning based tool chain to analyse the very large amounts of data provided by 3D scale-resolving simulations.

Using Neural Concept’s Geometry-based Variational Auto-Encoders (VAE), NUMECA was able to gain first insights into correlations between tens of statistically averaged flow variables. The VAE first compresses the data in a physically meaningful way into so-called ‘embeddings’ and then reconstructs the original input from the compressed data. This is done to very high accuracy, which allows the ML model to be used as a replacement, a so-called surrogate, for the original data. The advantages are much easier handling of the data and the possibility of exploiting data-mining and analysis techniques that help to understand the physics in the data.
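The VAE itself is a deep, nonlinear model; as a loose illustration of the compress-then-reconstruct idea behind it, here is a linear stand-in (a PCA-style projection on synthetic data, with all shapes and values made up for the example):

```python
import numpy as np

# Synthetic stand-in data: 200 "snapshots" of 50 statistically averaged
# flow variables that are secretly driven by only 3 hidden factors.
rng = np.random.default_rng(0)
hidden = rng.normal(size=(200, 3))
mixing = rng.normal(size=(3, 50))
snapshots = hidden @ mixing                    # shape (200, 50)

# "Encoder": project onto the leading principal directions -> embeddings.
mean = snapshots.mean(axis=0)
_, _, vt = np.linalg.svd(snapshots - mean, full_matrices=False)
basis = vt[:3]                                 # 3-dimensional embedding space
embeddings = (snapshots - mean) @ basis.T      # compressed representation

# "Decoder": reconstruct the original variables from the embeddings alone.
reconstruction = embeddings @ basis + mean
max_error = float(np.max(np.abs(reconstruction - snapshots)))
```

Because the synthetic data really does live in a 3-dimensional subspace, the reconstruction from the embeddings is essentially exact, which is the property that makes the compressed representation usable as a surrogate for the original data.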

Figure 2 shows an example of the possible analysis. The colors of the symbols on the 2D plot correspond to the value of the ‘embedding’ and are the same in the 3D view (left) and in the 2D plot (right). Points of the same color have the same value for all the considered physical quantities, and the 3D view colored by the embedding value gives one global statistical representation of several physical quantities over the investigated domain. Both plots together provide a new perspective on the flow behaviour via the machine learning model. Figure 2 shows snapshots of the views used in the Graphical User Interface.

**Figure 2:** (top) Structure found by the ML model. (bottom) Statistical analysis of quantities

The post Collaboration of Neural Concept and Bosch on successful applications of 3D Deep Learning based surrogate models appeared first on Neural Concept.


In particular, we achieved promising results on E-Drive motor housing simulations. Bosch Research engineers trained a deep Geometric Convolutional Neural Network (GCNN) to accurately emulate, in a few milliseconds, the full-fledged Finite Element software.

These successful results encouraged Bosch Research to continue the collaboration with Neural Concept on a further application of shape design optimization.

*“For the considered application, NCS performs clearly better than currently used surrogate models and therefore we see the potential of NCS for more use-cases.”*

– **Roland Schirrmacher**, Structural Dynamics and Acoustics engineer at Bosch

Neural Concept Shape is a high-end deep learning software that understands 3D shapes (CAD) and learns how they interact with the laws of physics (CAE). It can emulate full-fledged simulators, giving predictions in approximately 30 ms, versus minutes to hours (or even days) for classic simulators. In other words, engineers can use Neural Concept Shape to explore, manually or automatically, a virtually unlimited number of designs without falling back on the resource- and time-consuming simulator. This dramatically accelerates R&D cycles, enhances product performance, and helps solve the most difficult engineering challenges.

The post Top 100 Swiss Startup Award appeared first on Neural Concept.

Neural Concept promises to keep moving forward with a creative mindset and a passion for a new generation of CAD and CAE tools boosted by Deep Learning, to fulfill the expectations of those who trust and support our technology. You can find more details on our Deep Learning engineering software on our page here.

To see the complete list of the Top 100 Swiss Startups, please check here.

The post On Deep Learning and Multi-objective Shape Optimization appeared first on Neural Concept.

In such cases, we usually refer to the objective function as a black-box function: when queried at some input locations, it answers back with the true objective value we are interested in. But in engineering design problems, querying the function can be very expensive! Evaluation times can be on the order of minutes, hours, or even days. One would therefore want to be highly selective in choosing the input locations to query. One technique widely used to help is Bayesian optimization. It works by constructing, or learning, a probabilistic model of the objective function, called the surrogate model, which is then searched efficiently with an acquisition function before candidate samples are chosen for evaluation on the real objective function.
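The loop above can be sketched in a few lines. This is a deliberately toy illustration, not the machinery of a real Bayesian optimizer: the "surrogate" is a crude nearest-neighbour predictor whose uncertainty is just the distance to the closest evaluated point, and the objective, kappa value, and grid are made up.

```python
def expensive_objective(x):
    """Stands in for a costly simulation; true optimum at x = 0.3."""
    return -(x - 0.3) ** 2

def surrogate(x, observed):
    """Toy surrogate: value of the nearest evaluated point, with the
    distance to it as a crude uncertainty estimate."""
    dist, value = min((abs(x - xi), yi) for xi, yi in observed)
    return value, dist

def acquisition(x, observed, kappa=2.0):
    # upper-confidence-bound style: favor good predictions AND unexplored areas
    mean, uncertainty = surrogate(x, observed)
    return mean + kappa * uncertainty

# Start from two expensive evaluations at the boundaries of the input space.
observed = [(0.0, expensive_objective(0.0)), (1.0, expensive_objective(1.0))]
grid = [i / 200 for i in range(201)]           # cheap candidate pool
for _ in range(30):
    # Search the acquisition cheaply over the candidates ...
    x_next = max(grid, key=lambda x: acquisition(x, observed))
    # ... and spend one true, expensive evaluation on the winner.
    observed.append((x_next, expensive_objective(x_next)))

best_x, best_y = max(observed, key=lambda p: p[1])
```

After 30 expensive evaluations, the loop has concentrated its queries around the optimum at x = 0.3, illustrating how the surrogate and the acquisition function together ration the costly calls.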

The video embedded in this article provides a glimpse of how the concepts covered here fit together and are made easily accessible to the engineer through a simple user interface. Though it treats a rather synthetic example, the video gives a first glance at the online learning loop automation possible with the NCS software.

In summary, given a black-box function, an efficient search strategy of its input space to find the optimum boils down to answering the two following questions:

- What surrogate model should I use to substitute the true objective function?
- Given a specific surrogate model, what exploration strategy should I use to explore the different regions of the input space and select the next points to query?

The surrogate model we are after is one that we can use to make reliable predictions about the latent function, but also one that can maintain a measure of uncertainty over these predictions. That is why Gaussian Processes (GPs) have been largely used to answer the modeling question. GP-based surrogate models provide a nice and flexible mechanism for learning continuous functions by interpolating observations. The confidence intervals GPs provide can also be used to assess whether the predictions should be refitted in some regions of interest. GPs are very general and enjoy a neat mathematical foundation. They have, however, the shortcoming of being bound to a rather small number of parameters describing the input space. In other words, GPs lose their efficiency in high-dimensional spaces, when the number of features exceeds a few dozen.
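For concreteness, the GP posterior can be written down in a few lines of NumPy. This is a textbook sketch with an assumed squared-exponential kernel, an assumed length scale, and made-up 1-D data, not a production implementation:

```python
import numpy as np

def rbf_kernel(a, b, length_scale=0.3):
    """Squared-exponential kernel between two sets of 1-D points."""
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / length_scale) ** 2)

def gp_posterior(x_train, y_train, x_test, noise=1e-6):
    """Textbook GP regression: posterior mean and variance at x_test."""
    k = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    k_star = rbf_kernel(x_train, x_test)
    k_star_star = rbf_kernel(x_test, x_test)
    mean = k_star.T @ np.linalg.solve(k, y_train)
    cov = k_star_star - k_star.T @ np.linalg.solve(k, k_star)
    return mean, np.diag(cov)

x_train = np.array([0.0, 0.5, 1.0])
y_train = x_train ** 2                   # latent function observed at 3 points
x_test = np.array([0.5, 0.25, 2.0])      # on-data, between-data, far-away
mean, var = gp_posterior(x_train, y_train, x_test)
```

The posterior interpolates the observations (near-zero variance at x = 0.5), is uncertain between them (x = 0.25), and reverts to the prior far away (x = 2.0), which is exactly the confidence information an acquisition function exploits.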

As for the exploration question, when given a relatively low-dimensional input space, both grid and random search strategies can suffice. But for most realistic optimization problems, the dimensionality of the input space is so high that a grid search quickly becomes intractable, and random search may not be the optimal option to adopt. A combination of evolutionary algorithms with a heuristic local search is often used to efficiently search the input space and acquire new samples to evaluate.

The non-convexity of the objective is also another challenge faced in optimization problems. When constructing the surrogate, one should be careful about premature convergence and the possibility of being stuck in a local rather than the global optimum.

Most real world optimization applications are formulated as multi-objective optimization problems where we seek to simultaneously optimize for multiple criteria and take into account the different constraints imposed. In that case, instead of finding an ultimate optimum, the goal of the optimization is to recover the Pareto front of these different objectives or, in certain cases, to identify Pareto optimal points only in a subset of the Pareto front.
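Recovering the Pareto front from a set of evaluated designs is itself straightforward. A minimal sketch for a two-objective minimization problem (the design values below are made up for illustration):

```python
def pareto_front(points):
    """Return the non-dominated points of a minimization problem: a point
    is dropped if some other point is at least as good on every objective
    and different from it (hence strictly better on at least one)."""
    front = []
    for p in points:
        dominated = any(
            q != p and all(qi <= pi for qi, pi in zip(q, p))
            for q in points
        )
        if not dominated:
            front.append(p)
    return front

# e.g. trading off drag against structural mass (both to be minimized)
designs = [(1.0, 5.0), (2.0, 3.0), (3.0, 4.0), (4.0, 1.0), (2.5, 2.5)]
front = pareto_front(designs)
```

Here (3.0, 4.0) is dominated by (2.0, 3.0) and drops out; the remaining designs form the front among which the final trade-off is made.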

Another equally important aspect to consider when tackling a global optimization problem is the possibility of exploiting multi-fidelity evaluations. When the number of high-fidelity evaluations at our disposal is limited, as is often the case for very complex functions, we can kick-start the surrogate model training with lower-fidelity evaluations to first explore which regions of the search space to query further, while reserving the higher-fidelity evaluations for refining the regions where high-accuracy predictions matter most. Constructing the surrogate model using multi-fidelity evaluations provides a nice trade-off between prediction accuracy and computational efficiency. Furthermore, by blending evaluations coming from different sources, it contributes to a more interoperable approach to the optimization process. Learning the surrogate model from multi-fidelity evaluations goes hand in hand with uncertainty quantification: the lower the fidelity of an evaluation, the wider the confidence interval around the prediction, and vice versa. In the context of active learning, these confidence intervals, or variance maps, constitute an essential part of planning further sample acquisition.
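One common way to blend two fidelity levels is an additive discrepancy model: fit a cheap surrogate on many low-fidelity evaluations, then learn a correction from the few high-fidelity ones. A minimal sketch, where both functions and the linear form of the discrepancy are made-up assumptions for illustration:

```python
import numpy as np

def f_high(x):
    """Expensive, accurate evaluation (e.g. a fine-mesh simulation)."""
    return np.sin(2 * np.pi * x)

def f_low(x):
    """Cheap, biased approximation (e.g. a coarse-mesh simulation)."""
    return np.sin(2 * np.pi * x) - (0.3 * x + 0.1)

# Many cheap evaluations map out the overall trend ...
x_low = np.linspace(0.0, 1.0, 50)
y_low = f_low(x_low)

# ... while a handful of expensive ones calibrate the discrepancy.
x_high = np.array([0.1, 0.4, 0.7, 0.9])
residual = f_high(x_high) - np.interp(x_high, x_low, y_low)
coeffs = np.polyfit(x_high, residual, 1)       # linear discrepancy model

def predict(x):
    # low-fidelity interpolant plus the learned high-fidelity correction
    return np.interp(x, x_low, y_low) + np.polyval(coeffs, x)

xs = np.linspace(0.0, 1.0, 101)
mf_error = float(np.max(np.abs(predict(xs) - f_high(xs))))
lf_error = float(np.max(np.abs(f_low(xs) - f_high(xs))))
```

With only four expensive calls, the corrected surrogate tracks the high-fidelity function far more closely than the raw low-fidelity model does.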

Multi-objective optimization has a multitude of applications in the realm of numerical simulations. 3D shape design optimization is a particularly interesting domain for such applications. The examples here are numerous from the optimization of the aero- or hydrodynamics characteristics of a certain design through computational fluid dynamics (CFD) to ensuring proper solid material rigidity through structural analysis.

If you are a mechanical engineer working on the design of a propeller blade, or perhaps perfecting the profile of an airplane wing, you will almost certainly refer to Computational Fluid Dynamics (CFD) simulations as a cheaper proxy for solving the Navier-Stokes equations. You might equally be a medical engineer carefully calibrating the design of a cardiovascular pump; CFD simulations would again be among your most important tools for ensuring optimal fluid flow through your design. In these two scenarios, and numerous others, CFD simulations prove very important, especially considering the reduced need for physical tests they can yield. When it comes to computational complexity, however, CFD simulations are among the most expensive function evaluations: a full-blown high-fidelity simulation may very well last for a few days. As an engineer with a strong problem-solving mindset, would you always wait for every single simulation result? Most probably you wouldn’t. You’d rather build a surrogate!

A very similar process applies to other engineering disciplines, such as mechanical and electromagnetic engineering. In mechanical structural analysis, for example, Finite Element (FE) solvers are widely used to predict design performance characteristics such as part distortion due to internal stresses. FE solvers are based on the Finite Element Method, which approximates the solution to a solid material analysis by discretizing the continuous body of the design into a large but finite number of elements and solving the problem in the domain of each element along with the corresponding boundary conditions. The smaller the element size, the more accurate, but also the more expensive, the simulation results.

The above examples represent only a small fraction of the applications of surrogate modeling for shape optimization. Engineers are clearly not oblivious to the great potential of surrogate models for accelerating 3D simulations and optimizing their designs. Yet, with the limitations imposed by the parametrization requirements of most current GP-based surrogate models, it is still rather difficult to harness the full benefits of surrogate models in fully automated optimization loops.

More recently, a new family of surrogate models has been gaining traction in scientific engineering circles. By training a deep neural network to learn the surrogate model, one can not only overcome the low-dimensionality limitation of Gaussian-based surrogates but, in some cases, also obtain superior prediction accuracy [4]. The Deep Learning approach is also more interoperable. Dropping the handcrafted shape-parameterization requirement means that the network can be trained on arbitrary shapes, exploiting the capabilities of transfer learning and leveraging geometries from different datasets that were otherwise locked in project silos, with a distinct regressor trained exclusively on each set.

Not very surprisingly, though, this new technology comes with its own set of challenges. At first sight, 3D Convolutional Neural Networks might seem the right candidate for the task. However, their large memory footprint makes it difficult to fit all the data required for their training into memory, even on modern GPUs. A possible mitigation, one that compromises accuracy, is to train the network on a relatively coarse discretization (voxels) of the volumetric input. A better alternative is a newer CNN architecture, in particular the Geodesic Convolutional Neural Network (GCNN). GCNNs learn directly from the surface mesh representation of 3D geometries and can thus considerably reduce the computing requirements of plain CNNs. Besides their enhanced usability and interoperability, GCNN-based surrogate models offer another important advantage over Gaussian Processes and other parametrized surrogates in the context of 3D shape optimization. Because the network is trained directly on the surface mesh of the shape, the gradient calculations can be back-propagated down to the original vertices, enabling free-form first-order optimization of the shape. This can be very useful when embedded in a multi-objective optimization process, though one still has to integrate the proper constraints to guarantee smoothness and preserve other design requirements.

The Geodesic Convolutional Neural Networks mentioned above are the core architecture of the surrogate models provided by Neural Concept Shape, the first software interface of its kind dedicated to 3D Deep Learning shape optimization.

This article presented a brief, high-level overview of multi-objective global function optimization and the benefits one can unlock by using deep learning approaches to construct the surrogate model used in the optimization process. The ideas presented here are inspired by the optimization framework used in the Neural Concept Shape software. Some interesting use cases can be found browsing our website. Further reading materials and related articles are listed below.

[1] Deep Fluids: A Generative Network for Parameterized Fluid Simulations. https://arxiv.org/pdf/1806.02071.pdf

[2] Deep learning for mechanical property evaluation. http://news.mit.edu/2020/deep-learning-mechanical-property-metallic-0316

[3] Learning to Simulate Complex Physics with Graph Networks. https://arxiv.org/abs/2002.09405

[4] Geodesic Convolutional Shape Optimization. https://arxiv.org/abs/1802.04016

The post 10 SWISS ENGINEERING STARTUPS TO WATCH IN 2020 appeared first on Neural Concept.

You can read the whole article at https://www.venturelab.ch/10-Swiss-Engineering-Startups-to-Watch-in-2020
