How to Integrate Simulation with Statistical Analysis to Better Predict Durability

Advanced OEMs and Tier 1 suppliers have relied for years on statistical analysis to predict the probability of a component’s failure under specific driving conditions. But while statistical databases do a great job of recording and cataloguing past performance based on experimental data, what if there were a way to get a glimpse into the future?

Well, there is.

Can Durability be Simulated?

Nearly every OEM and supplier is doing some sort of experimentation to predict durability. But it is the advanced OEMs and suppliers that are applying newer, more sophisticated technologies to do this in a way that is more accurate, more predictive, and more efficient than ever.

Suppose a team of engineers is studying the expected lifetime of a component in a four-door family sedan. They would start by analyzing the statistical data from similar model derivatives (other four-door sedans) and developing inference models. They would look to establish the likelihood of failure for a range of components under particular loading conditions (vibration, structural loading, and thermal loading) in order to identify the probability of component survival (based on drive hours or mileage logged).
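To make the idea of an inference model concrete, here is a minimal sketch of one common approach: fitting a Weibull distribution to observed mileage-to-failure data and using it to estimate the probability of survival at a target mileage. The failure data, mileage values, and parameters are hypothetical and purely illustrative, not tied to any particular OEM's toolchain.

```python
import numpy as np
from scipy.stats import weibull_min

# Hypothetical failure data: mileage (in thousands of km) at which a
# component failed during fleet testing of similar four-door sedans.
failure_mileage = np.array([82, 95, 110, 131, 140, 156, 170, 188, 210, 240])

# Fit a two-parameter Weibull distribution (location fixed at zero),
# a common choice for modeling time- or mileage-to-failure.
shape, loc, scale = weibull_min.fit(failure_mileage, floc=0)

# Probability that the component survives beyond a target mileage.
target = 150  # thousands of km
prob_survival = weibull_min.sf(target, shape, loc=loc, scale=scale)

print(f"Weibull shape={shape:.2f}, scale={scale:.1f}")
print(f"P(survive beyond {target}k km) = {prob_survival:.2%}")
```

In practice the same survival curve would be built per component and per loading condition, but the structure of the calculation stays the same.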

For such a large data set, you can treat many factors, such as physical loading conditions, as constants. But a significant variable is temperature, since a vehicle and its components warm up and vary in temperature over drive times and cycles. In an experiment (and in the data it records for future statistical analysis), you have a simple device to tell you what the temperature of a component is at various intervals, but it won't tell you why it's that temperature. And this could very well be the missing puzzle piece needed to accurately assess durability and predict lifetime.

Thus, the options are these: do more experiments, use the available experimental data to make assumptions and predict what will happen in the real-world driving lifetime of that component, or...hand the statistical data over to the simulation team to apply another critical layer of analysis, thereby virtually removing any mystery that remains.

Teams Exchanging Data to Get Improved Predictions

One of the main advantages a simulation environment has over experimentation alone is that you can understand the impact of conduction, radiation, and convection. In a tool like TAITherm, one can identify the thermal interactions between components in proximity by accounting for all pertinent modes of heat transfer. This ultimately answers the question of “why” a component reaches this temperature or that temperature.
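As a rough illustration of what "accounting for all pertinent modes of heat transfer" means, the sketch below performs a lumped heat balance on a single component: conduction through its mount, convection to the surrounding air, and radiation exchange with a nearby hot surface. This is generic Python, not TAITherm or its API, and every value and parameter name is an illustrative assumption.

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def component_heat_balance(T_comp, T_mount, T_air, T_source,
                           k_cond=2.0, h_conv=15.0, area=0.05, emissivity=0.85):
    """Heat flow into a component (W) from each of the three heat transfer modes.

    All parameters are illustrative placeholders:
      k_cond     - effective conductance through the mounting bracket, W/K
      h_conv     - convective heat transfer coefficient, W/(m^2 K)
      area       - exposed surface area, m^2
      emissivity - surface emissivity for radiation exchange
    Temperatures are in kelvin.
    """
    q_conduction = k_cond * (T_mount - T_comp)
    q_convection = h_conv * area * (T_air - T_comp)
    q_radiation = emissivity * SIGMA * area * (T_source**4 - T_comp**4)
    return q_conduction, q_convection, q_radiation

# Example: a component near an exhaust pipe under low-airflow conditions.
q_cond, q_conv, q_rad = component_heat_balance(
    T_comp=400.0, T_mount=380.0, T_air=330.0, T_source=700.0)

for mode, q in [("conduction", q_cond), ("convection", q_conv), ("radiation", q_rad)]:
    print(f"{mode:>10}: {q:+7.1f} W")
```

Breaking the net heat flow into its modes is exactly what tells you the "why": in this example, radiation from the hot source dominates, which would point toward a shield rather than more airflow.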

Adding “why” to “what temperature” and “when will it get there” gives teams a complete data set that reveals how best to alleviate the problem. In one set of circumstances, the simulation and experimental data may indicate that a heat shield should be placed around the component; another set of data may prescribe moving the component; and a third may call for convection remediation (increasing airflow).

But without the full spectrum of what, when, and why that only comes from integrating simulation and experimental data, engineering teams may waste time, money, and resources either doing additional experimentation or relying on assumptions that later prove faulty. Making changes in a simulation environment not only saves time and money; it may also spare a team from having to change a component’s materials, weight, thickness, or design, all of which carry significant cost and delay.

Moreover, recent advancements in simulation technology allow TAITherm to analyze dynamic profiles. You can analyze street driving and highway driving, simulate complete drive cycles, and then feed this transient analysis back into the broader lifetime analysis. This wasn’t possible only a few short years ago, but the future of analysis is now!
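As one rough illustration of how a transient result might feed back into a lifetime estimate, the sketch below converts a simulated temperature-versus-time profile for a single drive cycle into thermally accelerated "equivalent hours" using an Arrhenius-style acceleration factor, then scales that to a fleet of cycles. The temperature profile, activation energy, reference temperature, and cycle count are all assumptions for illustration; the actual coupling would use the engineering team's own damage model.

```python
import numpy as np

# Hypothetical transient temperature profile (kelvin) for one simulated
# one-hour drive cycle, sampled once per minute.
time_s = np.arange(0, 3600, 60)
temp_K = 330 + 40 * np.clip(np.sin(time_s / 900), 0, None)

# Arrhenius-style acceleration factor relative to a reference temperature.
EA_OVER_K = 6000.0   # activation energy / Boltzmann constant, K (assumed)
T_REF = 350.0        # reference temperature, K (assumed)
accel = np.exp(EA_OVER_K * (1.0 / T_REF - 1.0 / temp_K))

# Thermally "equivalent hours" accumulated during this one-hour cycle.
dt_h = np.full_like(time_s, 60, dtype=float) / 3600.0
equiv_hours_per_cycle = np.sum(accel * dt_h)

# Scale to a hypothetical lifetime of 8,000 such cycles.
print(f"Equivalent hours per cycle : {equiv_hours_per_cycle:.2f}")
print(f"Equivalent hours, 8k cycles: {8000 * equiv_hours_per_cycle:,.0f}")
```

The point is not the specific numbers but the workflow: transient simulation supplies the temperature history, and the statistical lifetime model consumes it.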

Advanced Analysis Made...Easy?!

One of the more attractive core features of TAITherm is its Stoplight Analysis functionality. Users can set temperature thresholds (say, 200 degrees C) and visually flag when a component approaches or exceeds them. Green indicates the component is safely below the threshold, yellow alerts the user that the temperature is approaching a dangerous “gray area,” and a red light indicates that the threshold has been exceeded.

Not only does this make the analysis easier for the user, it also provides the feedback needed to understand what is causing the thermal problem:

  • When does the component approach the “danger zone?”
  • How long does the component exceed that temperature threshold, and by how much?
  • What is the nature of any issues that arise (radiation, conduction, convection)?
  • Will the thermal issues have a negative impact on the component’s durability?
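The stoplight idea is simple enough to express outside the tool as well. The sketch below is a minimal, generic classification (not TAITherm's implementation) of a transient temperature trace against a threshold and a warning band, reporting how long and by how much the threshold was exceeded. The threshold, warning margin, and sample data are illustrative assumptions.

```python
import numpy as np

def stoplight_status(temps_C, threshold_C=200.0, warning_margin_C=20.0):
    """Classify each sample as 'green', 'yellow', or 'red' against a threshold."""
    temps_C = np.asarray(temps_C, dtype=float)
    return np.where(temps_C > threshold_C, "red",
           np.where(temps_C > threshold_C - warning_margin_C, "yellow", "green"))

# Hypothetical transient result sampled once per minute.
temps = [150, 172, 185, 196, 204, 211, 207, 193, 178, 160]
status = stoplight_status(temps)

over = np.asarray(temps) > 200.0
print("Status per sample :", list(status))
print("Minutes over limit:", int(over.sum()))
print("Peak exceedance   :", max(0.0, max(temps) - 200.0), "deg C")
```

Answering the durability question on the last bullet still requires the lifetime model described earlier; the stoplight view simply makes the offending intervals obvious at a glance.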

...Made Even Easier?!

Even better for the user performing this coupled analysis (experimental plus simulation): most of it can be automated using a product like CoTherm.

We will dive further into CoTherm’s unique coupling abilities in a future article. In the meantime, if you have any comments or questions, please do not hesitate to contact us.
