| Approximating the cumulative distribution function (CDF) values of a standard normal
distribution with high accuracy remains a challenging task. For this purpose,
non-linear prediction formulas based on artificial neural networks (ANNs) are well suited to
the non-linear nature of the standard normal distribution integral. In this study, a dataset
of near-exact integral values of a standard normal distribution was prepared,
ranging from -5 to 10 in increments of 0.01. The dataset was used to train 16 artificial
neural networks; each training run was repeated 100 times to reach the best performance,
considering hidden layers of 1, 2, 3, 5, 15, 25, 35, and 45 neurons.
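The abstract does not state how the near-exact reference values were computed; assuming they come from the error function, a minimal sketch of such a training grid could look like:

```python
import math

def std_normal_cdf(x: float) -> float:
    """Near-exact standard normal CDF via the error function:
    Phi(x) = (1 + erf(x / sqrt(2))) / 2."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Training grid from -5 to 10 in increments of 0.01 (1501 points).
xs = [-5.0 + 0.01 * i for i in range(1501)]
dataset = [(x, std_normal_cdf(x)) for x in xs]
```

This grid pairs each input x with its reference CDF value, the target the networks are trained to reproduce.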
The test dataset ranged from -10 to 10 in increments of 0.001, excluding the points
of the training dataset. Two types of ANN models were considered: in both, the
transfer functions of the hidden layers were hyperbolic tangent, while those of
the output layers were either hyperbolic tangent or linear (purelin). Three evaluation
metrics, the mean squared error (MSE), the absolute error (AE), and the relative error (RE),
were used to compare the results of the proposed models with those of 7 accurate
approximation formulas from the literature. The predicted points were plotted against their
near-exact values, and the metric values were calculated
and compared with those of the 7 literature formulas. The highest accuracies, 8 to 9
digits, were achieved by the 2 proposed equations based on ANN models using only 15 neurons, with the measurement metrics MSE = 2.15E-17, AE = 1.03E-08,
RE = 1.04E-08 at the point 2.89, and MSE = 4.91E-18, AE = 4.51E-09, RE = 3.23E-06 at the point
-2.99, over the interval -10 to 10, respectively. In conclusion, the 2 ANN-based equations
with 15 neurons were superior in terms of optimization, lower
absolute error, and lower computational cost. However, for simple calculations, the ANN-based equation with 2 neurons, using hyperbolic tangent transfer functions at both its
hidden and output layers, can also be used. |
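As a sketch of how such a one-hidden-layer model and its evaluation fit together (the network weights below are illustrative placeholders rather than the paper's fitted values, an error-function reference stands in for the near-exact values, and treating AE and RE as grid maxima is an assumption):

```python
import math

def reference_cdf(x: float) -> float:
    """Near-exact standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def ann_cdf(x, hidden_w, hidden_b, out_w, out_b, tanh_output=True):
    """One-hidden-layer ANN: tanh hidden units; tanh or linear (purelin) output."""
    hidden = [math.tanh(w * x + b) for w, b in zip(hidden_w, hidden_b)]
    z = sum(w * h for w, h in zip(out_w, hidden)) + out_b
    return math.tanh(z) if tanh_output else z

def evaluate(approx, lo=-10.0, hi=10.0, step=0.001):
    """MSE over the test grid, plus maximum absolute and relative errors."""
    n = round((hi - lo) / step)
    sse = ae = re = 0.0
    for i in range(n + 1):
        x = lo + step * i
        ref = reference_cdf(x)
        err = abs(approx(x) - ref)
        sse += err * err
        ae = max(ae, err)
        if ref > 0.0:  # skip points where the reference underflows to 0
            re = max(re, err / ref)
    return sse / (n + 1), ae, re
```

For example, `evaluate(lambda x: ann_cdf(x, [0.8], [0.0], [0.6], 0.5))` scores a hypothetical 1-neuron tanh-tanh model on the same -10 to 10 grid used for the comparisons above.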