Neural Concept (https://neuralconcept.com): 3D Deep Learning Software for Enhanced Engineering

Top 100 Swiss Startup Award (16 September 2020)

Neural Concept was listed in the Top 100 Swiss Startup Award 2020, organized by Venturelab on 9 September 2020. It was an honour to be listed alongside the other Swiss startups that contribute to Swiss industry, ranked the most innovative in the world by the GII 2020 innovation ranking.

Neural Concept promises to keep moving forward with a creative mindset and a passion for a new generation of CAD and CAE tools boosted by deep learning, to fulfill the expectations of those who trust and support our technology. You can find more details on our deep-learning engineering software on our page here.

To see the complete list of the Top 100 Swiss Startups, please check here.

On Deep Learning and Multi-objective Shape Optimization (24 April 2020)

Optimization is a fundamental process in many scientific and engineering applications. Optimizing a function means searching its domain for an input that yields the minimum or maximum value of a given objective. When we have access to an analytical expression of the objective function, the answer to the optimization problem may be straightforward. Even when the analytical solution is infeasible to obtain directly, a gradient-based method combined with heuristic search is satisfactory in most cases. But what if we have no direct access to the objective function or its gradients? In many engineering applications, the form of the objective function is complex and intractable to analyze. Evaluating it may require solving a complex PDE, which makes it computationally very expensive.

In such cases, we usually refer to the objective function as a black-box function: queried at some input locations, it answers back with the true objective values we are interested in. But in engineering design problems, querying the function can be very expensive! Function evaluation time can be on the order of minutes, hours, or even days. One would therefore want to be highly selective in choosing which input locations to query. One widely used technique is Bayesian optimization. It works by constructing, or learning, a probabilistic model of the objective function, called the surrogate model, which is then searched efficiently with an acquisition function before candidate samples are chosen for evaluation on the real objective function.
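
To make the loop concrete, here is a minimal sketch of one Bayesian-optimization iteration in Python. It is an illustration only, not the NCS implementation: the toy `objective` stands in for the expensive black box, and the Gaussian-process surrogate with an expected-improvement acquisition is one common choice among many.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def expected_improvement(X, gp, y_best, xi=0.01):
    """Acquisition: expected improvement over the incumbent (minimization)."""
    mu, sigma = gp.predict(X, return_std=True)
    sigma = np.maximum(sigma, 1e-9)
    imp = y_best - mu - xi
    z = imp / sigma
    return imp * norm.cdf(z) + sigma * norm.pdf(z)

# Toy stand-in for the expensive solver (placeholder, 1D input).
objective = lambda x: float(np.sin(3.0 * x[0]) + x[0] ** 2)

# A handful of initial expensive evaluations of the black box.
X_obs = np.random.uniform(0.0, 1.0, size=(5, 1))
y_obs = np.array([objective(x) for x in X_obs])

# Fit the probabilistic surrogate to the observations.
gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
gp.fit(X_obs, y_obs)

# Search the *surrogate* cheaply and pick the most promising next query.
X_cand = np.linspace(0.0, 1.0, 1000).reshape(-1, 1)
x_next = X_cand[np.argmax(expected_improvement(X_cand, gp, y_obs.min()))]
# Only x_next is sent to the expensive objective; the loop then repeats.
```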

https://youtu.be/7xCcmk76xHc

The video embedded in this article provides a glimpse of how the concepts covered here fit together and are made easily accessible to the engineer through a simple user interface. Though it treats a rather synthetic example, the video gives a first glance at the online-learning loop automation possible with the NCS software.

Global optimization in a nutshell

In summary, given a black-box function, an efficient strategy for searching its input space for the optimum boils down to answering the following two questions:

  • What surrogate model should I use as a substitute for the true objective function?
  • Given a specific surrogate model, what exploration strategy should I use to explore the different regions of the input space and select the next points to query?

The surrogate model we are after is one we can use to make reliable predictions about the latent function while also maintaining a measure of uncertainty over those predictions. That is why Gaussian Processes (GPs) have been widely used to answer the modeling question. GP-based surrogate models provide a flexible mechanism for learning continuous functions by interpolating observations. The confidence intervals GPs provide can also be used to assess whether the predictions should be refined in some regions of interest. GPs are very general and enjoy a neat mathematical foundation. However, they have the shortcoming of being bound to a rather small number of parameters describing the input space: GPs lose their efficiency in high-dimensional spaces, when the number of features exceeds a few dozen.

As for the exploration question, given a relatively low-dimensional input space, both grid and random search strategies can suffice. But for most realistic optimization problems, the dimensionality of the input space is so high that grid search quickly becomes intractable, and random search may not be the best option to adopt. A combination of evolutionary algorithms with a heuristic local search is often used to search the input space efficiently and acquire new samples to evaluate, as in the sketch below.
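
As an illustration of that combination, the snippet below runs a toy evolutionary search over a cheap-to-query function (for instance the surrogate's acquisition value); the name `acquisition`, the unit-box bounds, and all hyper-parameters are assumptions made for the example.

```python
import numpy as np

def evolve(acquisition, dim, pop=64, elite=8, gens=50, sigma=0.1, seed=0):
    """Toy evolutionary search: mutate the best candidates, keep the fittest,
    and return the winners as the next points to evaluate on the true objective."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(0.0, 1.0, size=(pop, dim))       # population in the unit box
    for _ in range(gens):
        scores = acquisition(X)                      # cheap, vectorized surrogate queries
        parents = X[np.argsort(scores)[-elite:]]     # keep the best (maximization)
        children = np.repeat(parents, pop // elite, axis=0)
        X = np.clip(children + rng.normal(0.0, sigma, children.shape), 0.0, 1.0)
        sigma *= 0.97                                # shrink mutations: heuristic local search
    return X[np.argsort(acquisition(X))[-elite:]]    # candidates for real evaluation

# Toy acquisition peaked at 0.3 in every coordinate (placeholder).
acquisition = lambda X: -np.sum((X - 0.3) ** 2, axis=1)
print(evolve(acquisition, dim=3))
```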

The non-convexity of the objective is another challenge in optimization problems. When constructing the surrogate, one should be wary of premature convergence and the possibility of getting stuck in a local rather than the global optimum.

Most real-world optimization applications are formulated as multi-objective optimization problems, where we seek to optimize several criteria simultaneously while taking the imposed constraints into account. In that case, instead of finding a single ultimate optimum, the goal of the optimization is to recover the Pareto front of the different objectives or, in certain cases, to identify Pareto-optimal points only in a subset of the Pareto front.
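
Recovering the Pareto front from a set of already-evaluated designs is itself a simple computation. Below is a minimal non-dominated filter, assuming an array of objective values in which every objective is to be minimized.

```python
import numpy as np

def pareto_front(Y):
    """Return the rows of Y (shape (n, k), all objectives minimized) that no
    other point dominates, i.e. the current Pareto-front approximation."""
    keep = np.ones(len(Y), dtype=bool)
    for i, y in enumerate(Y):
        if keep[i]:
            # j is dominated by y if y is no worse everywhere and better somewhere.
            dominated = np.all(Y >= y, axis=1) & np.any(Y > y, axis=1)
            keep &= ~dominated
    return Y[keep]

# Example: a toy trade-off between two objectives, e.g. drag and weight.
Y = np.array([[1.0, 5.0], [2.0, 3.0], [3.0, 4.0], [4.0, 1.0]])
print(pareto_front(Y))  # [3, 4] is dominated by [2, 3] and drops out
```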

Another equally important aspect to consider when tackling a global optimization problem is the possibility of exploiting multi-fidelity evaluations. When the number of high-fidelity evaluations at our disposal is limited, as is often the case for very complex functions, we can kick-start the surrogate model training with lower-fidelity evaluations, using them to explore which regions of the search space to query further while reserving the higher-fidelity evaluations for refining the regions where high-accuracy predictions matter most. Constructing the surrogate model from multi-fidelity evaluations provides a good trade-off between prediction accuracy and computational efficiency. Furthermore, by blending evaluations coming from different sources, it contributes to a more interoperable approach to the optimization process. Learning the surrogate model from multi-fidelity evaluations goes hand in hand with uncertainty quantification: the lower the fidelity of an evaluation, the wider the confidence interval around the prediction, and vice versa. In the context of active learning, these confidence intervals, or variance maps, are an essential input when planning the acquisition of further samples.
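
One simple way to blend fidelities in a single GP surrogate, sketched below on placeholder random data, is to give each sample a noise level that reflects its fidelity, so low-fidelity points get wide confidence intervals and high-fidelity points tight ones. Dedicated multi-fidelity models (co-kriging and friends) are more principled; this is only the minimal version of the idea.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

# Placeholder data: many cheap, coarse evaluations, a few expensive, accurate ones.
X_lo, y_lo = np.random.rand(40, 2), np.random.rand(40)
X_hi, y_hi = np.random.rand(5, 2), np.random.rand(5)

X = np.vstack([X_lo, X_hi])
y = np.concatenate([y_lo, y_hi])
# Per-sample noise encodes fidelity: noisy low-fidelity points widen the
# confidence intervals, high-fidelity points pin the model down.
noise = np.concatenate([np.full(len(y_lo), 1e-2), np.full(len(y_hi), 1e-6)])

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), alpha=noise, normalize_y=True)
gp.fit(X, y)

X_query = np.random.rand(10, 2)
mu, std = gp.predict(X_query, return_std=True)  # std: the variance map used for planning
```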

Numerical simulations and 3D shape optimization

Multi-objective optimization has a multitude of applications in the realm of numerical simulations, and 3D shape design optimization is a particularly interesting domain for them. The examples are numerous, from optimizing the aero- or hydrodynamic characteristics of a design through computational fluid dynamics (CFD) to ensuring proper solid-material rigidity through structural analysis.

If you are a mechanical engineer working on the design of a propeller blade, or perhaps perfecting the profile of an airplane wing, you will almost certainly turn to Computational Fluid Dynamics (CFD) simulations to solve the Navier-Stokes equations numerically. You might just as well be a medical engineer carefully calibrating the design of a cardiovascular pump; CFD simulations would again be among your most important tools for ensuring optimal fluid flow through your design. In these two scenarios, and numerous others, CFD simulations prove very valuable, especially given the reduced need for physical tests they can bring. When it comes to computational cost, however, CFD simulations are among the most expensive function evaluations: a full-blown high-fidelity simulation may well run for days... As an engineer with a strong problem-solving mindset, would you always wait for every single simulation result? Most probably not. You would rather build a surrogate!

A very similar process applies to other engineering disciplines such as mechanical and electromagnetic engineering. In mechanical structural analysis, for example, Finite Element (FE) solvers are widely used to predict design performance characteristics such as part distortion due to internal stresses. FE solvers are based on the Finite Element Method, which approximates the solution of a solid-material analysis by discretizing the continuous body of the design into a large but finite number of elements and solving the problem on the domain of each element together with the corresponding boundary conditions. The smaller the element size, the more accurate, but also the more expensive, the simulation results, as the toy example below illustrates.
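
For intuition, here is a toy 1D finite-element solve (a clamped bar with unit stiffness under a unit distributed load) that makes the element-size trade-off visible; it is purely illustrative and unrelated to any production solver.

```python
import numpy as np

def solve_bar(n_elems, length=1.0, load=1.0):
    """Linear-element FE solution of -u'' = load on [0, length], u(0) = 0."""
    h = length / n_elems
    n_nodes = n_elems + 1
    K = np.zeros((n_nodes, n_nodes))
    f = np.zeros(n_nodes)
    ke = (1.0 / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])  # element stiffness
    fe = 0.5 * load * h * np.array([1.0, 1.0])             # consistent nodal load
    for e in range(n_elems):                               # element-by-element assembly
        K[e:e + 2, e:e + 2] += ke
        f[e:e + 2] += fe
    u = np.zeros(n_nodes)
    u[1:] = np.linalg.solve(K[1:, 1:], f[1:])              # clamp the left end
    return u

for n in (4, 16, 64):                       # refine the mesh
    u = solve_bar(n)
    x = np.linspace(0.0, 1.0, n + 1)
    xm = 0.5 * (x[:-1] + x[1:])             # element midpoints
    u_mid = 0.5 * (u[:-1] + u[1:])          # FE interpolant there
    exact = xm - 0.5 * xm ** 2              # analytic solution of this toy problem
    print(n, np.abs(u_mid - exact).max())   # error shrinks like h**2 while cost grows
```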

The above examples represent only a small fraction of the applications of surrogate modeling for shape optimization. Engineers are clearly not oblivious to the great potential of surrogate models for accelerating 3D simulations and optimizing their designs. Yet, given the limitations imposed by the parametrization requirements of most current GP-based surrogate models, it is still rather difficult to harness the full benefits of surrogate models in fully automated optimization loops.

What about Deep-Learning based surrogates? 

More recently, a new family of surrogate models has been gaining traction in scientific engineering circles. By training a deep neural network to learn the surrogate model, one can not only overcome the low-dimensionality limitation of GP-based surrogates but, in some cases, also obtain superior prediction accuracy [4]. The deep learning approach is also more interoperable. Dropping the handcrafted shape-parameterization requirement means that we can train the network on arbitrary shapes, exploiting the capabilities of transfer learning and leveraging geometries from different datasets that would otherwise be locked in project silos, each with a distinct regressor trained exclusively on it.

Not surprisingly, though, this new technology comes with its own set of challenges. At first sight, 3D Convolutional Neural Networks might appear to be the right candidate for the task. However, their large memory footprint makes it difficult to fit all the data required for their training into memory, even on modern GPUs. A possible mitigation, one that compromises accuracy, is to train the network on a relatively coarse discretization of the volumetric input (voxels). A better alternative is a newer CNN architecture, the Geodesic Convolutional Neural Network (GCNN) in particular. GCNNs learn directly from the surface-mesh representation of 3D geometries and can thus considerably reduce the computing requirements of plain CNNs. Besides their enhanced usability and interoperability, GCNN-based surrogate models offer another important advantage over Gaussian Processes and other parametrized surrogates in the context of 3D shape optimization. Because the network is trained directly on the surface mesh of the shape, it is possible to back-propagate the gradient calculations down to the original vertices, enabling free-form, first-order optimization of the shape, as sketched below. This can be very useful when embedded in a multi-objective optimization process, though one still has to integrate the proper constraints to guarantee smoothness and to preserve other design requirements.
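
The idea can be sketched in a few lines of TensorFlow. Everything below is a stand-in so the sketch runs end to end: the tiny `surrogate` network, random `vertices`, the one-neighbor-per-vertex `neighbor_ids`, and the crude smoothness penalty are all placeholders for the trained GCNN, the real design mesh, and the real design constraints.

```python
import tensorflow as tf

n_vertices = 500
surrogate = tf.keras.Sequential([            # placeholder for a trained mesh network
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1),
])
vertices = tf.Variable(tf.random.normal([n_vertices, 3]))           # the design itself
neighbor_ids = tf.random.uniform([n_vertices], 0, n_vertices, dtype=tf.int32)

opt = tf.keras.optimizers.Adam(learning_rate=1e-3)
for step in range(200):
    with tf.GradientTape() as tape:
        objective = tf.reduce_mean(surrogate(tf.expand_dims(vertices, 0)))  # e.g. predicted drag
        # Crude smoothness penalty standing in for real design constraints.
        smoothness = tf.reduce_mean(
            tf.square(vertices - tf.gather(vertices, neighbor_ids)))
        loss = objective + 0.1 * smoothness
    grads = tape.gradient(loss, [vertices])
    opt.apply_gradients(zip(grads, [vertices]))  # gradients deform the shape directly
```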

The Geodesic Convolutional Neural Networks mentioned above form the core architecture of the surrogate models provided by Neural Concept Shape, the first software interface of its kind dedicated to 3D deep learning shape optimization.

Summary

This article presented a brief, high-level overview of multi-objective global function optimization and the benefits that deep learning approaches can unlock when constructing the surrogate model used in the optimization process. The ideas presented here are inspired by the optimization framework used in the Neural Concept Shape software. Some interesting use cases can be found by browsing our website. Further reading material and related articles are listed below.

Resources

[1] Deep Fluids: A Generative Network for Parameterized Fluid Simulations. https://arxiv.org/pdf/1806.02071.pdf

[2] Deep learning for mechanical property evaluation. http://news.mit.edu/2020/deep-learning-mechanical-property-metallic-0316

[3] Learning to Simulate Complex Physics with Graph Networks. https://arxiv.org/abs/2002.09405

[4] Geodesic Convolutional Shape Optimization. https://arxiv.org/abs/1802.04016

 

10 SWISS ENGINEERING STARTUPS TO WATCH IN 2020 (17 April 2020)

We are happy to announce that we were selected by VentureLab as one of the 10 Swiss engineering start-ups to watch in 2020!

You can read the whole article at https://www.venturelab.ch/10-Swiss-Engineering-Startups-to-Watch-in-2020

In-graph training loop (27 March 2020)

In a previous experiment, we explored the behaviour of, and interaction between, Keras models, tf.function, SavedModels, and tf.data datasets. A summary of that experiment is available here as a notebook.

As a follow-up to that test, we compared the performance of an in-graph training loop (https://www.tensorflow.org/guide/function#advanced_example_an_in-graph_training_loop) with a Python loop driving a model whose __call__ function is annotated with tf.function.
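
For reference, the two styles under comparison look roughly like this (toy model and data here; the actual benchmark used the naval dataset and our own model):

```python
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
opt = tf.keras.optimizers.Adam()
ds = tf.data.Dataset.from_tensor_slices(
    (tf.random.normal([128, 4]), tf.random.normal([128, 1]))).batch(16)

@tf.function
def train_step(x, y):
    with tf.GradientTape() as tape:
        loss = tf.reduce_mean(tf.square(model(x) - y))
    grads = tape.gradient(loss, model.trainable_variables)
    opt.apply_gradients(zip(grads, model.trainable_variables))
    return loss

# (a) Python loop: iteration runs eagerly, each step calls a compiled function.
for x, y in ds:
    train_step(x, y)

# (b) In-graph loop: the dataset iteration itself is compiled into the graph.
@tf.function
def train_epoch(dataset):
    for x, y in dataset:
        train_step(x, y)

train_epoch(ds)
```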

We performed training runs of 100, 200, 400, 800, and 1600 steps on the naval dataset with the following config –

We observed that the Python loop has some upfront initial cost, but over time it runs faster than the in-graph training loop.

The initial cost of the Python loop can be attributed to the fact that a separate trace is performed for each distinct input shape; but once all the different shapes have been encountered, there are no further delays, and it runs faster than the in-graph loop.

This finding is also corroborated by the issue https://github.com/tensorflow/tensorflow/issues/35165

To avoid retracing for different shapes, we should provide an input_signature to the model's tf.function. The input signature is known only once partial shape information is available, so instead of statically annotating with tf.function, we need to create a tf.function with the right input signature on the fly.
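
A minimal sketch of that pattern, with hypothetical shapes and a toy model: wrap the step in a tf.function whose input_signature is built once the (partial) shapes are known, using None for the axes that vary between batches so a single trace covers them all.

```python
import tensorflow as tf

def make_train_step(model, opt, loss_fn, x_spec, y_spec):
    """Build the tf.function on the fly, once the (partial) shapes are known."""
    @tf.function(input_signature=[x_spec, y_spec])
    def train_step(x, y):
        with tf.GradientTape() as tape:
            loss = loss_fn(y, model(x, training=True))
        grads = tape.gradient(loss, model.trainable_variables)
        opt.apply_gradients(zip(grads, model.trainable_variables))
        return loss
    return train_step

# Toy point-set model: (batch, points, 3) -> (batch, 1).
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(1),
])
# None marks the axes whose size changes between batches, so every concrete
# shape matches the same single trace.
train_step = make_train_step(
    model, tf.keras.optimizers.Adam(), tf.keras.losses.MeanSquaredError(),
    tf.TensorSpec([None, None, 3], tf.float32),
    tf.TensorSpec([None, 1], tf.float32))

train_step(tf.random.normal([4, 100, 3]), tf.random.normal([4, 1]))
train_step(tf.random.normal([8, 250, 3]), tf.random.normal([8, 1]))  # no retrace
```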

Comparison of training in a Python loop while avoiding the retracing –

Follow Up

As a follow-up to the comparison of tf.function retracing vs. specifying an input signature, training on the naval dataset was run for 5000 steps with various combinations of tf.function annotations. The results follow.

|   | Model                         | Outer function                                      | Retracing happens | Mean train step (discarding retracing times) |
|---|-------------------------------|-----------------------------------------------------|-------------------|-----------------------------------------------|
| 1 | tf.function with signature    | Python function                                     | no                | 0.28                                          |
| 2 | tf.function without signature | train_step is a tf.function with signature          | no                | 0.22                                          |
| 3 | tf.function without signature | compute_gradient is a tf.function with signature    | no                | 0.25                                          |
| 4 | tf.function without signature | compute_gradient is a tf.function without signature | yes               | 0.24                                          |
| 5 | tf.function without signature | train_step is a tf.function without signature       | yes               | 0.14                                          |

Refer to the attached notebook for details:

We can conclude that retracing is an expensive operation, but once the function has been traced for all the shapes it encounters, the compiled function is faster.

Making train_step the tf.function is much faster than making compute_gradient the tf.function.

Deep Neural Network in simulations (11 March 2020)

Deep learning, and AI in general, has taken the entire field of computer science by storm and has now become the dominant approach to solving a wide array of problems, ranging from winning board games to molecular discovery. However, Computer-Aided Design (CAD) and geometry processing are still mostly based on traditional techniques.

Indeed, numerical simulation has traditionally relied on solving physically derived equations with finite-difference methods, adding heuristic models when the equations become too complex to solve (turbulence models in fluid mechanics, for example). More recently, Lattice Boltzmann methods have become popular as a means of simulating streaming and collision processes across a limited number of particles. Both classes of techniques remain computationally very expensive (we are talking about hours to days per simulation), and since the simulation must be re-run each time an engineer wishes to change the shape, the design process is slow and costly.

A typical engineering approach is therefore to test only a few designs without a fine-grained search in the space of potential variations. Hence the company is limited by:

  1. Its financial resources for the given project 
  2. The time constraints of the project
  3. Its engineers’ experience and cognitive biases during the product development phase.

Since this is a severe limitation, there have been many attempts at overcoming it; one of the most famous is reduced-order modeling.

Reduced-order modeling (ROM) is a class of machine learning approaches used to learn a simplified model of a simulator from data. The method builds a simplification of a high-fidelity dynamical model from a large number of numerical simulations, preserving the essential behaviour and dominant effects in order to reduce the solution time or the storage the more complex model requires. It is applied across a wide range of physics and has proven its efficiency for specific applications. It works well when the engineer wants to vary a few well-defined parameters, with a specific objective in mind. A good overview of the different techniques is given in this paper: https://www.sciencedirect.com/science/article/abs/pii/S0376042103001131. A minimal sketch of one classical flavour follows.
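
As a concrete (if minimal) example of the classical approach, the snippet below builds a proper-orthogonal-decomposition (POD) basis from a snapshot matrix; the data layout and random toy data are assumptions made for illustration.

```python
import numpy as np

def build_pod_basis(snapshots, energy=0.99):
    """POD modes of a snapshot matrix whose columns are full-field simulation
    results; keep enough modes to capture `energy` of the variance."""
    mean = snapshots.mean(axis=1, keepdims=True)
    U, s, _ = np.linalg.svd(snapshots - mean, full_matrices=False)
    cum = np.cumsum(s ** 2) / np.sum(s ** 2)
    r = int(np.searchsorted(cum, energy)) + 1
    return mean, U[:, :r]

def project(field, mean, basis):
    """Reduced coordinates of a full-order field."""
    return basis.T @ (field - mean.ravel())

def reconstruct(coeffs, mean, basis):
    """Approximate full-order field from its reduced coordinates."""
    return mean.ravel() + basis @ coeffs

# Toy usage: 10,000 degrees of freedom, 50 simulation snapshots.
snapshots = np.random.rand(10_000, 50)
mean, basis = build_pod_basis(snapshots)
coeffs = project(snapshots[:, 0], mean, basis)   # cheap reduced representation
approx = reconstruct(coeffs, mean, basis)        # back to the full field
```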

Figure 2: Classical Response Surface built from a reduced-order model

However, the modeling power of classical ROM methods is limited, and they present several drawbacks:

  1. For some industries, the majority of the simulation data is acquired through experiments, with sensors placed at various locations and under various conditions. This data cannot easily be transferred to a reduced-order model: it would require the experimental conditions to always be strictly identical, which is very rarely the case, since it demands total control over the parameters and conditions of the experiment.
  2. When building the reduced-order model, a parameterization is defined and kept throughout the whole project, and the simulations have to be generated using this parametrization. If a company faces new requirements for a given product, or wants to explore new designs, it may be forced to change the parameterization. It would then have to start everything from scratch again and regenerate a batch of simulations to build a new ROM. This creates silos in the company's workflow, where ROMs are built for very specific use cases and are hardly reusable in later applications.
  3. Some applications require simulating very complex phenomena in which discontinuities may appear (transonic flows, for instance). ROMs tend to "smooth" these discontinuities, giving large errors in those specific regions.

Deep neural networks are an extension of classical reduced-order modeling in which the approximating function is not limited to a simple linear model but is built as a stack of non-linear operations, called layers. A very recent branch of deep learning research applies this concept to the processing of geometric information and has been able to overcome the limitations of the more classical reduced-order models. Based on a neural network architecture, it understands 3D shapes and learns how they interact with the laws of physics. Since it takes raw, unprocessed 3D geometries as input, it does not suffer from any of the drawbacks I mentioned above. The engineer can now leverage a historical database (even if the parametrization of a given part has evolved over time!) and integrate experimental data as well. 3D deep learning allows a switch from siloed workflows to a common base where information is globally shared and continuously re-used.
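
The "stack of non-linear operations" amounts to something like the following minimal Keras sketch; the layer sizes and the flat input are illustrative assumptions, not the architecture actually used on raw 3D geometry.

```python
import tensorflow as tf

def make_surrogate(n_inputs, n_outputs):
    """A ROM-style regressor where the linear map is replaced by a stack of
    non-linear layers: the simplest possible deep surrogate."""
    return tf.keras.Sequential([
        tf.keras.Input(shape=(n_inputs,)),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(n_outputs),   # predicted physical quantities
    ])

model = make_surrogate(n_inputs=256, n_outputs=1)
model.compile(optimizer="adam", loss="mse")
# model.fit(simulation_inputs, simulation_targets, epochs=100)  # placeholder data
```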

It is also orthogonal to the physics: the same technique can tackle a very large range of physical problems. Finally, its overall performance can improve over time, as it can be fed new simulations on the fly.

Conclusion: after these few lines, it seems like 3D deep learning is the solution for enhancing engineering processes! It is, but there is also a very important step in exploiting the full potential of this tool (and of any machine learning technique): preparing your data so that the model can extract the maximum amount of information from it. In a follow-up article, I will share a few tips and tricks for getting the best out of your data when using machine learning for numerical simulations. Stay tuned!

Artificial Intelligence meets Aerodynamics – the Ultimate Drone (24 February 2020)

18 months ago, Neural Concept, EPFL (École polytechnique fédérale de Lausanne), senseFly and AirShaper teamed up for an academic research project to apply deep learning to aerodynamics.

Neural Concept Shape was coupled with AirShaper to explore the space of designs, with the ultimate objective of finding more efficient ones. Based on those learnings, our software started identifying trends and improved its understanding of the application. Soon, it started making predictions about what an even better aerodynamic shape could be! The result? A more efficient drone that will fly further on the same battery charge! This was a first research project, and the potential for other applications (cars, transportation, planes, ...) is monumental. Are you also working on an application where aerodynamic improvements can make a difference? Just let us know!

We published a report describing the work that has been done and analyzing the results in more depth; you can read it here.

NeuralSampler: Euclidean Point Cloud Auto-Encoder and Sampler (22 January 2020)

We propose an auto-encoder architecture that can both encode and decode point clouds of arbitrary size, and we demonstrate its effectiveness at upsampling sparse point clouds. Interestingly, we can do so using less than half as many parameters as state-of-the-art architectures while still delivering better performance.

You can download the full publication here.

SP80 will be using Neural Concept Shape (19 December 2019)

In early September 2019, the ultra-aerodynamic bike we helped design broke two speed world records. It is with the same objective in mind that we are pleased to announce our collaboration with SP80: https://sp80.ch/.

The team, composed of EPFL alumni and students, is working to build the world's fastest sailboat. They aim to reach a speed of 80 knots with the sole power of a kite. Reaching such a speed makes aero- and hydrodynamic optimization mandatory. Moreover, the team needs to model quite complex physical phenomena, such as supercavitation, with numerical simulations that can take a lot of time (hours to days). Using Neural Concept Shape, they will be able to reduce this computation time to milliseconds, allowing them to further optimize the design of their sailboat and ultimately gain the precious knots needed to break the world record.

Stay tuned for further updates on our collaboration!

Neural Concept and Airbus presenting their collaboration (12 December 2019)

Neural Concept is collaborating with Airbus to further accelerate their engineering process and to generate new design solutions across many kinds of problems, in areas such as fluid dynamics, structural engineering, and electromagnetics. The first application is Computational Fluid Dynamics, where Neural Concept Shape has been tested and approved by Airbus engineers on external aerodynamics applications.

At the 2019 edition of NeurIPS, you could find Neural Concept and Airbus running a booth together. The Conference on Neural Information Processing Systems (NeurIPS) is one of the main machine learning conferences in the world; it had more than 13,000 registered participants in 2019 and takes place every year in December. It was the opportunity for us to join forces with Airbus engineers to demonstrate applications of Real-Time CFD Simulations with 3D Mesh Convolutional Networks.

Neural Concept Shape is the first deep learning system that understands 3D shapes (CAD) and learns how they interact with the laws of physics (CAE). It is able to emulate full-fledged simulators, giving predictions in approximately 30 ms, versus minutes to hours (or even days) for classic simulators. Neural Concept Shape allows engineers to extract maximum value from their data. In other words, they can use Neural Concept's graphical interface to explore a virtually unlimited number of designs without calling the resource- and time-consuming simulator again.

Poster presented at NeurIPS 2019

On top of this real-time design-space exploration capability, it is also possible to use Neural Concept's optimization library. With this approach, the optimization algorithm can create arbitrarily complex shapes, and the user can easily interact with it manually. Numerical optimization therefore integrates much more smoothly into the whole engineering process and can converge to unique and innovative geometries the engineer may not have thought of.

So stay tuned, and you might see a similar booth at upcoming events!

Neural Concept will be at the NAFEMS Conference (8 October 2019)

NAFEMS is an independent not-for-profit company whose mission is to provide knowledge, international collaboration, and educational opportunities for the use and validation of engineering simulation. NAFEMS is widely held to be the leading independent source of information and training for engineering analysts and designers at all levels. On the 15th and 16th of October, the NAFEMS European Conference: Simulation-Based Optimisation will take place in London.

Neural Concept has been invited to give a presentation there on the 15th of October. Pierre Baqué, our CEO, will speak about 3D deep-learning-based surrogate modeling and optimisation, and its applications and advantages in today's industrial world. The talk will be followed by a Q&A session, so do not hesitate to come and discuss the current and upcoming challenges in simulation with us!
