Optimizing Complex Nonlinear Regression Models using Heuristic and Random Search Methods
Fitting complex multivariable nonlinear regression models (3D, 4D, 5D, 6D, etc.) means selecting the best-fitting base functions from an enormous number of possible combinations. Searching all of these combinations with an exact algorithm is both computationally expensive and time-consuming. ndCurveMaster therefore employs heuristic techniques for curve fitting and data analysis, incorporating algorithms drawn from machine learning, such as random search, to address this challenge. The best complex nonlinear regression models are determined through randomization and iterative searching using the following methods:
The "AutoFit" method randomizes the variables and iterates over the base functions. This algorithm is fast and efficient, but the solution space is constrained by the iteration. The search completes automatically once the correlation coefficient reaches its maximum.
The "Random Search" method fully randomizes both the variables and the base functions. It is more time-consuming than "AutoFit", but the randomization leaves the search unrestricted. The user terminates the search manually by pressing the ESC key, so it can run for as long as desired.
Although these methods facilitate the discovery of better models, the heuristic nature of the algorithm means that repeated runs on the same dataset may follow different search paths and yield different models. The repeated searches implemented in the program therefore help the user converge on a solution very close to the optimal outcome.
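The randomized part of such a search can be sketched in Python. This is a simplified illustration of the idea, not ndCurveMaster's actual algorithm: base functions are drawn at random for each variable, a linear least-squares fit determines the coefficients, and the combination with the highest R² is kept.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 3-variable dataset with a known nonlinear relationship
X = rng.uniform(1, 10, size=(200, 3))
y = 2 + 3 * X[:, 0]**2 + 0.5 * np.sqrt(X[:, 1]) - 1.5 * np.log(X[:, 2])

# Candidate base functions to apply to each predictor
base_funcs = [np.log, np.sqrt, lambda v: v, lambda v: v**2, lambda v: v**3]

def fit_r2(design, y):
    """Least-squares fit with intercept; returns the R^2 of the fit."""
    A = np.column_stack([np.ones(len(y)), design])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    return 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

best_r2, best_combo = -np.inf, None
for _ in range(2000):  # fully randomized search over function combinations
    combo = rng.integers(0, len(base_funcs), size=X.shape[1])
    design = np.column_stack([base_funcs[f](X[:, j]) for j, f in enumerate(combo)])
    r2 = fit_r2(design, y)
    if r2 > best_r2:
        best_r2, best_combo = r2, combo

print(list(best_combo), round(best_r2, 4))
```

With enough random draws, the search recovers the true combination (square, square root, logarithm) with R² ≈ 1; a real implementation would also randomize which variables enter the model.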
Enhanced Multicollinearity Detection in ndCurveMaster
ndCurveMaster provides multicollinearity detection to improve the quality of the models it creates, using:
Variance Inflation Factor (VIF)
Pearson Correlation Matrix
Variance Inflation Factor (VIF) for Multicollinearity Detection
The VIF index is widely used for detecting multicollinearity (more information can be found on Wikipedia). There is no strict VIF threshold that establishes the presence of multicollinearity: values above 10 are often taken as an indication, but in weaker models even values above 2.5 may be cause for concern. ndCurveMaster calculates the VIF for each predictor in a model and displays it in the last column of the regression analysis table, as shown below:
In addition, ndCurveMaster offers a search option for models with a VIF limit value. The user can select the "VIF cannot exceed" checkbox to only display models that do not exceed the selected VIF value. The default VIF limit value is 10, as shown below:
The user can adjust the limit as needed.
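The VIF itself is straightforward to compute: for each predictor, regress it on the remaining predictors and take VIF = 1 / (1 − R²). The NumPy sketch below illustrates the calculation on synthetic data (it is not ndCurveMaster's code):

```python
import numpy as np

def vif(X):
    """Variance Inflation Factor for each column of the design matrix X."""
    n, k = X.shape
    out = []
    for j in range(k):
        others = np.delete(X, j, axis=1)
        A = np.column_stack([np.ones(n), others])
        coef, *_ = np.linalg.lstsq(A, X[:, j], rcond=None)
        resid = X[:, j] - A @ coef
        tss = ((X[:, j] - X[:, j].mean())**2).sum()
        r2 = 1 - (resid @ resid) / tss
        out.append(1 / (1 - r2))
    return np.array(out)

rng = np.random.default_rng(1)
x1 = rng.normal(size=100)
x2 = rng.normal(size=100)                  # independent of x1
x3 = x1 + 0.05 * rng.normal(size=100)      # nearly collinear with x1
v = vif(np.column_stack([x1, x2, x3]))
print(np.round(v, 1))
```

Here the collinear pair x1, x3 produces VIF values far above 10, while the independent predictor x2 stays near 1.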
Utilizing the Pearson Correlation Matrix in ndCurveMaster
The Pearson Correlation Matrix window displays Pearson Correlation coefficients between each pair of variables, as shown below:
Examining the correlations between variables is the simplest way to detect multicollinearity. As a rule of thumb, an absolute correlation coefficient above 0.7 between two variables indicates the presence of multicollinearity.
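This rule of thumb is easy to apply programmatically. The sketch below builds a Pearson correlation matrix with NumPy and flags variable pairs whose absolute correlation exceeds 0.7 (an illustration of the check, with synthetic data):

```python
import numpy as np

rng = np.random.default_rng(2)
x1 = rng.normal(size=50)
x2 = 0.9 * x1 + 0.4 * rng.normal(size=50)   # strongly related to x1
x3 = rng.normal(size=50)                     # independent

corr = np.corrcoef(np.vstack([x1, x2, x3]))
print(np.round(corr, 2))

# Flag pairs whose absolute correlation exceeds the 0.7 rule of thumb
i, j = np.triu_indices(3, k=1)
flagged = [(int(a), int(b)) for a, b in zip(i, j) if abs(corr[a, b]) > 0.7]
print(flagged)
```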
Preventing Overfitting in Multivariable Nonlinear Regression Models
Overfitting occurs when a statistical model has too many parameters relative to the sample size used to build it. It is a common problem in machine learning and is less often a concern for simple regression models. However, ndCurveMaster's advanced algorithms make it possible to build complex multivariable models, and these can also be susceptible to overfitting.
In regression analysis with one independent variable, overfitting can be easily detected by examining the graph:
In nonlinear regression with many variables, however, overfitting cannot be detected this way.
Therefore, ndCurveMaster has implemented an overfitting detection technique using the test set method. The software randomly selects part of the data for the test set:
and uses the remaining data for regression analysis. Overfitting is detected by comparing the root mean square (RMS) errors of the test set and the entire dataset.
Standard statistical analysis of the entire dataset may fail to detect overfitting. ndCurveMaster, however, also compares the RMS errors of the test set and the entire dataset. In the example below, the test set RMS error is 9.55 while the dataset RMS error is 0.138; ndCurveMaster flags overfitting because the test set error is 68.775 times the dataset error.
The overfitting is clearly demonstrated in the graph below, where blue points represent the entire dataset and yellow points represent the test set:
The graph shows that the fit of the entire dataset looks perfect, but the test set points do not fit well.
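The test-set technique itself can be reproduced in a few lines of NumPy. The sketch below (an illustration of the idea, not the program's implementation) holds out a random test set, deliberately overfits the training data with a high-degree polynomial, and compares the two RMS errors:

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.uniform(0, 10, size=25)
y = 2 * x + rng.normal(scale=1.0, size=25)

# Randomly hold out part of the data as a test set
test = rng.choice(25, size=10, replace=False)
train = np.setdiff1d(np.arange(25), test)

# Deliberately overfit: a degree-14 polynomial through 15 training points
poly = np.polynomial.Polynomial.fit(x[train], y[train], deg=14)

rms = lambda e: np.sqrt(np.mean(e**2))
rms_train = rms(y[train] - poly(x[train]))
rms_test = rms(y[test] - poly(x[test]))
print(rms_train, rms_test)
```

The training error is essentially zero (the polynomial interpolates the training points), while the test-set error stays large; a large ratio between the two signals overfitting, exactly as in the example above.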
In ndCurveMaster, it is possible to test the normality of the residuals of the discovered equations, which underpins the validity of significance assessments for the predictors in the calculated regression equations. The following methods are implemented:
Shapiro-Wilk Test for data sets containing 3 to 5000 observations,
Anderson-Darling Test for any number of observations,
Q-Q Plot (Quantile-Quantile Plot), which provides a visual assessment of whether a data set follows a specific distribution (e.g., normal distribution).
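The idea behind a Q-Q plot can be reproduced with NumPy and the Python standard library (an illustration of the concept, not the program's implementation): sort the residuals and plot them against the theoretical normal quantiles; for normally distributed residuals the points lie close to a straight line.

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(5)
residuals = rng.normal(size=200)   # stand-in for regression residuals

# Q-Q plot coordinates: sorted residuals vs. theoretical normal quantiles
n = len(residuals)
sample_q = np.sort(residuals)
probs = (np.arange(1, n + 1) - 0.5) / n                  # plotting positions
theor_q = np.array([NormalDist().inv_cdf(p) for p in probs])

# For normal residuals the correlation between the two quantile
# sets is close to 1, i.e. the Q-Q points fall near a straight line
r = np.corrcoef(theor_q, sample_q)[0, 1]
print(round(r, 4))
```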
To perform normality tests, select the Q-Q Plot & Normality option in the "Graph" menu, as shown below:
If the residuals do not follow a normal distribution, this may affect the validity of statistical results and inferences. In such cases, the significance of individual predictors can be assessed using the sensitivity analysis (SA %) in the "Statistics" window. SA % indicates the percentage increase in the RMSE (Root Mean Square Error) of the entire equation after removing a given variable/predictor from the regression model. The higher the percentage, the more significant the predictor, as shown below:
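Based on that description, SA % can be sketched as follows: refit the model with each predictor removed in turn and report the percentage increase in RMSE relative to the full model. This is an illustrative ordinary-least-squares version, not ndCurveMaster's internal code:

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(100, 3))
# Predictor 0 is strong, predictor 1 moderate, predictor 2 weak
y = 5 * X[:, 0] + 1 * X[:, 1] + 0.1 * X[:, 2] + rng.normal(scale=0.5, size=100)

def rmse(Xd, y):
    """RMSE of a least-squares fit with intercept."""
    A = np.column_stack([np.ones(len(y)), Xd])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return np.sqrt(np.mean((y - A @ coef)**2))

full = rmse(X, y)
# SA %: percentage increase in RMSE after dropping each predictor
sa = [(rmse(np.delete(X, j, axis=1), y) / full - 1) * 100 for j in range(3)]
print([round(v, 1) for v in sa])
```

As expected, the strongest predictor yields the largest SA % and the weakest the smallest.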
The critical values for the Shapiro-Wilk Test are determined as follows:
For sample sizes (N) ranging from 3 to 50, they are read directly from a table of critical values,
For sample sizes (N) from 51 to 5000, they are calculated using approximation equations.
The approximation equations for the Shapiro-Wilk Test critical values were derived from exact Monte Carlo simulations with 10,000 iterations. The equations were developed using ndCurveMaster and reproduce the simulated values exactly, with a sum of squared residuals of zero across all data points.
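The Monte Carlo procedure itself is simple to sketch, assuming SciPy is available: simulate many normal samples of size N, compute the Shapiro-Wilk W statistic for each, and take the lower α-quantile of the simulated distribution as the critical value. The illustration below uses 2,000 replications for speed (the document's tables used 10,000):

```python
import numpy as np
from scipy import stats  # assumed available for stats.shapiro

rng = np.random.default_rng(6)
n, reps, alpha = 100, 2000, 0.05

# Distribution of the Shapiro-Wilk W statistic under the null (normal data)
w = np.array([stats.shapiro(rng.normal(size=n)).statistic for _ in range(reps)])

# The critical value is the lower alpha-quantile:
# reject normality at level alpha whenever the observed W < w_crit
w_crit = np.quantile(w, alpha)
print(round(w_crit, 4))
```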
Below are the approximation equations created with ndCurveMaster, which are also implemented in the software, for various significance levels (α):
The complete table, containing all exact critical values for the Shapiro-Wilk Test, calculated using the Monte Carlo method and approximations generated with ndCurveMaster for sample sizes (N) ranging from 51 to 5000, can be downloaded in Excel format from the following link:
Shapiro_Wilk_Critical_Values_Table_5000.xlsx.
Equation discovery involves identifying mathematical relationships that best describe the behavior of a dataset. This process is crucial for understanding complex systems and uncovering the underlying laws governing data. By leveraging advanced techniques like nonlinear regression, researchers can develop predictive models and gain deeper insights.
ndCurveMaster is a powerful tool designed specifically for such tasks. Unlike traditional software like Excel, which may struggle with local optima, ndCurveMaster employs a randomized search algorithm to explore a broader solution space.
In this tutorial, both Excel and ndCurveMaster were used to discover the following equation:
Y = 3 + 3·x1^2.5 + 4·x2^3 − 3.5·x3^(1/2).
Using only data and curve fitting techniques, ndCurveMaster successfully discovered this equation, while Excel provided a close approximation. The results are presented in the table below.
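The discovered form is easy to verify numerically: once the base functions x1^2.5, x2^3, and x3^(1/2) are identified, the coefficients follow from ordinary linear least squares. The NumPy sketch below demonstrates this on synthetic noise-free data generated from the equation above:

```python
import numpy as np

rng = np.random.default_rng(7)
x1, x2, x3 = rng.uniform(1, 5, size=(3, 50))
Y = 3 + 3 * x1**2.5 + 4 * x2**3 - 3.5 * np.sqrt(x3)

# With the base functions fixed, recover the coefficients by least squares
A = np.column_stack([np.ones(50), x1**2.5, x2**3, np.sqrt(x3)])
coef, *_ = np.linalg.lstsq(A, Y, rcond=None)
print(np.round(coef, 3))
```

On noise-free data the recovered coefficients match the true values 3, 3, 4, −3.5 exactly; the hard part, which the randomized search addresses, is discovering the base functions in the first place.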