ndCurveMaster

Scientific Solutions for Data Analysis and Curve Fitting

Table of Contents

Optimizing Complex Nonlinear Regression Models
Enhanced Multicollinearity Detection
Preventing Overfitting in Multivariable Models
Q-Q Plot and Normality Tests
Discover Equations from Data

Optimizing Complex Nonlinear Regression Models using Heuristic and Random Search Methods

Fitting complex multivariable nonlinear regression models (3D, 4D, 5D, 6D, and so on) and selecting the best-fitting functions leads to a very large number of possible combinations. Searching through all of them with an exact algorithm is both computationally expensive and time-consuming. ndCurveMaster therefore employs heuristic techniques for curve fitting and data analysis and incorporates machine-learning-based algorithms, such as random search, to address this challenge. In this process, the best complex nonlinear regression models are determined through randomization and iterative searching.

Although these heuristic methods facilitate the discovery of better models, they also mean that even with the same dataset, both the search path and the resulting models can differ from run to run. The repeated searches implemented in the program therefore help the user find a solution very close to the optimal outcome.
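As a rough illustration of the random-search idea (a sketch, not ndCurveMaster's actual algorithm), the code below randomly samples candidate transformations for a single predictor, fits each by ordinary least squares, and keeps the best-scoring model. The candidate pool and function names are hypothetical:

```python
import math
import random

def simple_fit(t, y):
    """Ordinary least squares for y = a + b*t; returns (a, b, rmse)."""
    n = len(t)
    tm, ym = sum(t) / n, sum(y) / n
    sxx = sum((v - tm) ** 2 for v in t)
    sxy = sum((v - tm) * (w - ym) for v, w in zip(t, y))
    b = sxy / sxx
    a = ym - b * tm
    rmse = math.sqrt(sum((a + b * v - w) ** 2 for v, w in zip(t, y)) / n)
    return a, b, rmse

# Hypothetical pool of candidate transformations for one predictor.
transforms = {
    "x":       lambda v: v,
    "x^2":     lambda v: v * v,
    "sqrt(x)": lambda v: math.sqrt(v),
    "ln(x)":   lambda v: math.log(v),
    "exp(x)":  lambda v: math.exp(v),
}

random.seed(0)
x = [0.5 + 0.1 * i for i in range(40)]
y = [2 + 3 * math.sqrt(v) for v in x]  # hidden law: y = 2 + 3*sqrt(x)

best = None
for _ in range(50):  # random search: sample a candidate, fit it, keep the best
    name = random.choice(list(transforms))
    a, b, rmse = simple_fit([transforms[name](v) for v in x], y)
    if best is None or rmse < best[3]:
        best = (name, a, b, rmse)

print("best transform:", best[0], "rmse:", best[3])
```

With 50 random draws over five candidates, the sqrt(x) transformation is found with near certainty and fits the synthetic data exactly; a real search must also handle multiple predictors and far larger candidate pools, which is why repeated randomized runs are useful.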


Enhanced Multicollinearity Detection in ndCurveMaster

ndCurveMaster provides a multicollinearity detection feature to enhance the quality of the models created through the use of:

Variance Inflation Factor (VIF) for Multicollinearity Detection

The VIF index is widely used for detecting multicollinearity (more information can be found on Wikipedia). There is no strict VIF threshold that determines the presence of multicollinearity: values above 10 are often considered an indication of multicollinearity, but even values above 2.5 can be a concern in weaker models. ndCurveMaster calculates the VIF value for each predictor in the model and displays it in the last column of the regression analysis table, as shown below:

ndCurveMaster Variance Inflation Factor VIF

In addition, ndCurveMaster offers a search option for models with a VIF limit value. The user can select the "VIF cannot exceed" checkbox to only display models that do not exceed the selected VIF value. The default VIF limit value is 10, as shown below:

ndCurveMaster VIF button

The user can adjust the limit as needed.
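For reference, the VIF of each predictor can be computed directly from its R² against the remaining predictors. The NumPy sketch below (illustrative code, not ndCurveMaster's implementation) demonstrates the calculation on synthetic data containing one nearly collinear pair:

```python
import numpy as np

def vif(X):
    """VIF_j = 1 / (1 - R_j^2), where R_j^2 comes from regressing
    column j of X on the remaining columns plus an intercept."""
    n, k = X.shape
    out = []
    for j in range(k):
        y = X[:, j]
        others = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(others, y, rcond=None)
        resid = y - others @ beta
        r2 = 1.0 - (resid @ resid) / ((y - y.mean()) ** 2).sum()
        out.append(1.0 / (1.0 - r2))
    return out

rng = np.random.default_rng(1)
x1 = rng.normal(size=200)
x2 = rng.normal(size=200)
x3 = x1 + 0.05 * rng.normal(size=200)  # nearly collinear with x1

vals = vif(np.column_stack([x1, x2, x3]))
print([round(v, 1) for v in vals])
```

The collinear pair (x1, x3) produces VIF values far above the default limit of 10, while the independent predictor x2 stays near 1, so a "VIF cannot exceed 10" filter would reject this model.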

Utilizing the Pearson Correlation Matrix in ndCurveMaster

The Pearson Correlation Matrix window displays Pearson Correlation coefficients between each pair of variables, as shown below:

ndCurveMaster Pearson Correlation matrix

Examining the correlations between variables is the simplest way to detect multicollinearity. As a rule of thumb, an absolute correlation coefficient above 0.7 between two variables indicates the presence of multicollinearity.
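This rule of thumb is easy to reproduce with NumPy's corrcoef. In the illustrative sketch below, x1 and x2 are constructed to be strongly correlated, and the pair is flagged by the 0.7 cutoff:

```python
import numpy as np

rng = np.random.default_rng(2)
x1 = rng.normal(size=100)
x2 = 0.9 * x1 + 0.2 * rng.normal(size=100)  # strongly correlated with x1
x3 = rng.normal(size=100)                   # independent of the others

# Pearson correlation matrix of the three variables
R = np.corrcoef(np.column_stack([x1, x2, x3]), rowvar=False)

# Flag variable pairs whose absolute correlation exceeds 0.7
flags = [(i, j) for i in range(3) for j in range(i + 1, 3) if abs(R[i, j]) > 0.7]
print("flagged pairs:", flags)
```

Only the constructed pair (variables 0 and 1) exceeds the threshold, mirroring what the Pearson Correlation Matrix window makes visible at a glance.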


Preventing Overfitting in Multivariable Nonlinear Regression Models

Overfitting occurs when a statistical model has too many parameters relative to the sample size used to build it. It is a common problem in machine learning and rarely affects simple regression models. However, ndCurveMaster's advanced algorithms make it possible to build complex multivariable models, which can be susceptible to overfitting.

In regression analysis with one independent variable, overfitting can be easily detected by examining the graph:

ndCurveMaster Overfitting Curve

In nonlinear regression analysis with many variables, however, overfitting cannot be detected this way. ndCurveMaster therefore implements an overfitting detection technique based on the test set method. The software randomly selects part of the data for the test set:

ndCurveMaster Overfitting Program Option

and uses the remaining data for regression analysis. Overfitting is detected by comparing the root mean square (RMS) errors of the test set and the entire dataset.
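The test-set comparison can be illustrated outside the program with a deliberately overparameterized polynomial fit (a sketch of the principle, not ndCurveMaster's model family): the training RMS error is essentially zero while the held-out points miss badly.

```python
import numpy as np

x_train = np.linspace(0, 1, 10)
x_test = np.linspace(0.05, 0.95, 10)  # held-out points between training points
rng = np.random.default_rng(3)
y_train = np.sin(2 * np.pi * x_train) + 0.2 * rng.normal(size=10)
y_test = np.sin(2 * np.pi * x_test)

def rms(coef, xs, ys):
    return float(np.sqrt(np.mean((np.polyval(coef, xs) - ys) ** 2)))

# Degree 9 gives as many coefficients as training points, so the noisy
# training data is interpolated exactly -- the signature of overfitting.
coef = np.polyfit(x_train, y_train, 9)
rms_train = rms(coef, x_train, y_train)
rms_test = rms(coef, x_test, y_test)
print("train RMS:", rms_train, "test RMS:", rms_test)
```

Comparing the two RMS errors flags the overfit: the test-set error is orders of magnitude larger than the training error, which is exactly the comparison ndCurveMaster automates.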

The following example shows a multi-variable regression model: Y = a0 + a1*exp(x1) + a2*x2^(-8) + a3*x3^5.6 + a4*x4^(-1) + a5*x5^9 + a6*x6^4.1 + a7*(exp(x6))^3 + a8*x5^16 + a9*(1/2)^x4 + a10*x2^(-6) + a11*x1^1.9 + a12*x3^5.2 + a13*x1^(-11) + a14*(ln(x3))^8 + a15*(exp(x5))^(-1) + a16*x4^1.9 + a17*x6^16 + a18*x2^10 + a19*(exp(x3))^5 + a20*(ln(x6))^2 + a21*x4^(-4) + a22*(exp(x5))^2 + a23*x4^4.2 + a24*x5^15 + a25*x6^15 + a26*x3^12

Standard statistical analysis of the entire dataset may not detect overfitting. However, ndCurveMaster can also check the RMS errors of the test set and the entire dataset. In the example below, the test set RMS error is 9.55 and the dataset RMS error is 0.138. ndCurveMaster detects overfitting because the test set error is 68.775 times the dataset error.

ndCurveMaster Overfitting Detect

The overfitting is clearly demonstrated in the graph below, where blue points represent the entire dataset and yellow points represent the test set:

ndCurveMaster Overfitting Detect Chart

The graph shows that the fit of the entire dataset looks perfect, but the test set points do not fit well.


Q-Q Plot and Normality Tests

In ndCurveMaster, it is possible to test the normality of the residuals of the discovered equations. This helps assess the significance of the predictors in the calculated regression equations. Several normality testing methods are implemented, including the Shapiro-Wilk test.

To perform normality tests, select the Q-Q Plot & Normality option in the "Graph" menu, as shown below:

ndCurveMaster Q-Q Plot
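For intuition, the core of a Q-Q plot can be reproduced with the Python standard library: sort the residuals and pair them with theoretical normal quantiles; for normally distributed residuals the points fall close to a straight line. This is an illustrative sketch, not the program's implementation:

```python
import random
from statistics import NormalDist, mean

random.seed(5)
resid = [random.gauss(0, 1) for _ in range(200)]  # synthetic normal residuals

# Q-Q plot coordinates: sorted residuals vs. theoretical normal quantiles
s = sorted(resid)
n = len(s)
theo = [NormalDist().inv_cdf((i + 0.5) / n) for i in range(n)]  # plotting positions

# For normal residuals the Q-Q points lie near a straight line, so the
# correlation between sample and theoretical quantiles is close to 1.
mx, my = mean(theo), mean(s)
num = sum((a - mx) * (b - my) for a, b in zip(theo, s))
den = (sum((a - mx) ** 2 for a in theo) * sum((b - my) ** 2 for b in s)) ** 0.5
r = num / den
print("Q-Q correlation:", round(r, 3))
```

Heavy-tailed or skewed residuals would bend the Q-Q points away from the line and pull this correlation down, which is the visual cue the Q-Q Plot window provides.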

If the residuals do not follow a normal distribution, this may affect the validity of statistical results and inferences. In such cases, the significance of individual predictors can be assessed using the sensitivity analysis (SA %) in the "Statistics" window. SA % indicates the percentage increase in the RMSE (Root Mean Square Error) of the entire equation after removing a given variable/predictor from the regression model. The higher the percentage, the more significant the predictor, as shown below:

ndCurveMaster Statistics Regression
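The SA % measure described above can be mimicked with an ordinary least squares refit per dropped predictor. In this hypothetical example, x1 carries most of the signal, x2 a little, and x3 none:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 150
x1, x2, x3 = rng.normal(size=(3, n))
y = 2 + 5 * x1 + 0.2 * x2 + 0.5 * rng.normal(size=n)  # x3 is irrelevant

def rmse_of_fit(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(np.sqrt(np.mean((y - X @ beta) ** 2)))

X_full = np.column_stack([np.ones(n), x1, x2, x3])
base = rmse_of_fit(X_full, y)

# SA %: percentage increase in RMSE after removing each predictor in turn
sas = {}
for j, name in [(1, "x1"), (2, "x2"), (3, "x3")]:
    sas[name] = 100 * (rmse_of_fit(np.delete(X_full, j, axis=1), y) - base) / base
print({k: round(v, 1) for k, v in sas.items()})
```

Dropping x1 inflates the RMSE by hundreds of percent, x2 by a few percent, and x3 by almost nothing, so ranking predictors by SA % recovers their true importance even without normality-based significance tests.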

The critical values for the Shapiro-Wilk Test are determined as follows:

The approximation equations for the Shapiro-Wilk test critical values were derived from exact Monte Carlo simulations with 10,000 iterations. These equations were developed using ndCurveMaster and reproduce the simulated critical values exactly, with a zero sum of squared residuals across all data points.

Below are the approximation equations created with ndCurveMaster, which are also implemented in the software, for various significance levels (α):

The complete table, containing all exact critical values for the Shapiro-Wilk Test, calculated using the Monte Carlo method and approximations generated with ndCurveMaster for sample sizes (N) ranging from 51 to 5000, can be downloaded in Excel format from the following link: Shapiro_Wilk_Critical_Values_Table_5000.xlsx.


Discover Equations from Data

Equation discovery involves identifying mathematical relationships that best describe the behavior of a dataset. This process is crucial for understanding complex systems and uncovering the underlying laws governing data. By leveraging advanced techniques like nonlinear regression, researchers can develop predictive models and gain deeper insights.

ndCurveMaster is a powerful tool designed specifically for such tasks. Unlike traditional software like Excel, which may struggle with local optima, ndCurveMaster employs a randomized search algorithm to explore a broader solution space.

In this tutorial, both Excel and ndCurveMaster were used to discover the following equation:

Y = 3 + 3 · x1^2.5 + 4 · x2^3 + (-3.5) · x3^(1/2).

Using only data and curve fitting techniques, ndCurveMaster successfully discovered this equation, while Excel provided a close approximation. The results are presented in the table below.

Software | Discovered Equation | RMSE | Pearson Correlation Coefficient
Excel | Y = 0.89 + 3.518 * x1^2.65 + 1.314 * x2^2.11 - 0.066 * x3^(-4.16) | 4.38 | 0.99999715
ndCurveMaster | Y = 3 + 3 · x1^2.5 + 4 · x2^3 + (-3.5) · x3^(1/2) | 0 | 1
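The exact recovery reported for ndCurveMaster can be checked independently: once the correct basis terms x1^2.5, x2^3, and x3^(1/2) are identified, ordinary least squares on noise-free synthetic data returns the published coefficients with zero RMSE. This is a verification sketch, not the discovery algorithm itself:

```python
import numpy as np

rng = np.random.default_rng(6)
x1, x2, x3 = rng.uniform(0.1, 2.0, size=(3, 50))
y = 3 + 3 * x1**2.5 + 4 * x2**3 - 3.5 * x3**0.5  # the target law, noise-free

# With the right basis terms fixed, the coefficients follow from OLS
# and the residuals vanish (RMSE = 0 up to floating-point precision).
X = np.column_stack([np.ones(50), x1**2.5, x2**3, x3**0.5])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
rmse = float(np.sqrt(np.mean((X @ coef - y) ** 2)))
print("coefficients:", np.round(coef, 6), "RMSE:", rmse)
```

The hard part, which the table illustrates, is finding those basis terms in the first place: a fixed functional form such as Excel's trendline can only approximate, while a randomized search over transformations can land on the exact structure.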

Read the full tutorial here: Curve Fitting in Excel: A Tutorial on Fitting a Complex Nonlinear Regression Model to Your Data
