Research Article

Stochastic Solution on Convergence Property of Quadratic Functions

by Adekunle Y. A., Ebiesuwa Seun, Yoro R. E.
Communications on Applied Electronics
Foundation of Computer Science (FCS), NY, USA
Volume 5 - Number 7
Year of Publication: 2016
Authors: Adekunle Y. A., Ebiesuwa Seun, Yoro R. E.
DOI: 10.5120/cae2016652317

Adekunle Y. A., Ebiesuwa Seun, Yoro R. E. Stochastic Solution on Convergence Property of Quadratic Functions. Communications on Applied Electronics. 5, 7 (Jul 2016), 10-17. DOI=10.5120/cae2016652317

@article{10.5120/cae2016652317,
author = {Adekunle Y. A., Ebiesuwa Seun, Yoro R. E.},
title = {Stochastic Solution on Convergence Property of Quadratic Functions},
journal = {Communications on Applied Electronics},
issue_date = {Jul 2016},
volume = {5},
number = {7},
month = {Jul},
year = {2016},
issn = {2394-4714},
pages = {10-17},
numpages = {8},
url = {https://www.caeaccess.org/archives/volume5/number7/631-2016652317/},
doi = {10.5120/cae2016652317},
publisher = {Foundation of Computer Science (FCS), NY, USA},
address = {New York, USA}
}
%0 Journal Article
%A Adekunle Y. A.
%A Ebiesuwa Seun
%A Yoro R. E.
%T Stochastic Solution on Convergence Property of Quadratic Functions
%J Communications on Applied Electronics
%@ 2394-4714
%V 5
%N 7
%P 10-17
%D 2016
%I Foundation of Computer Science (FCS), NY, USA
Abstract

Constraint satisfaction problems (CSPs) are solved via algorithms that search a domain space for goal states. A solution must satisfy all constraints and provide an explicit reasoning structure that conveys data about the problem to the algorithm: it assigns to the output a set of variables that satisfies the set of constraints, pruning off a huge portion of the search space. This study presents solutions to quadratic functions via the Davidon-Fletcher-Powell (DFP) method and a stochastic method of optimization. To this end, hybrid neural networks are trained using DFP as a pre-processor to yield approximate solutions to the quadratic function. A trial solution of the quadratic equation is written as the sum of two parts: (a) the first part satisfies the initial condition for unconstrained optimization, using DFP and the hybrids as separate methods to solve a quadratic function; while (b) the second part uses DFP as a pre-processor with adjustable parameters for the ANN-TLRN hybrid. Results show that the presented method yields a closer form to the analytic solution, and it is easily extended to solve a wide range of problems.
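
For readers who want to see the optimizer at the core of the method, the sketch below is a minimal Davidon-Fletcher-Powell (DFP) quasi-Newton minimizer applied to a quadratic function. It is an illustrative reconstruction in Python, not the authors' implementation; the function names (dfp_minimize, f, grad) and the Armijo backtracking line search (cf. reference 1) are choices of this sketch.

import numpy as np

def dfp_minimize(f, grad, x0, max_iter=50, tol=1e-8):
    # Davidon-Fletcher-Powell quasi-Newton minimization.
    # H approximates the inverse Hessian and starts as the identity.
    x = np.asarray(x0, dtype=float)
    H = np.eye(x.size)
    g = grad(x)
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        p = -H @ g                          # quasi-Newton search direction
        alpha = 1.0                         # Armijo backtracking line search
        while f(x + alpha * p) > f(x) + 1e-4 * alpha * (g @ p) and alpha > 1e-12:
            alpha *= 0.5
        s = alpha * p                       # step taken
        g_new = grad(x + s)
        y = g_new - g                       # change in gradient
        # DFP rank-two update of the inverse-Hessian approximation:
        # H <- H + (s s^T)/(s^T y) - (H y)(H y)^T/(y^T H y)
        Hy = H @ y
        H += np.outer(s, s) / (s @ y) - np.outer(Hy, Hy) / (y @ Hy)
        x, g = x + s, g_new
    return x

# Quadratic test problem f(x) = 0.5 x^T A x - b^T x, minimized where A x = b.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
f = lambda x: 0.5 * x @ A @ x - b @ x
grad = lambda x: A @ x - b
print(dfp_minimize(f, grad, np.zeros(2)))   # close to np.linalg.solve(A, b)

On a quadratic with exact line searches, DFP generates conjugate directions and terminates in at most n steps; the backtracking variant above trades that finite-termination property for simplicity.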

References
  1. Armijo, S., (1966).Minimization function having Lipschitz continuous partial derivatives, Pacific Journal of Mathematics, 16, 1-3.
  2. Barzilar, J and Borwein, J.W.,(1988).Two point step size gradient methods, IMA Journal of Numerical Analysis, 141-148.
  3. Batruni, R.,(1991). Multilayer network with piecewise-linear structure and BP-learning, IEEE Transaction on Neural Networks, 2, 395–403.
  4. Broyden, C.G.,(1967). Quasi newton method and their application to function approximation, Mathematical Computation, 21, 368-381.
  5. Cauchy, A.,(1987). Methode generale pour la resolution de systems d’equations simultanees, Computer Rendition of Science Paris, 25, 46-89.
  6. Caudill M.,(1987). Neural Networks Primer, Part I, AI Expert December, 46-52.
  7. Conway, A. J., Macpherson, K and Brown, J. C., (1998). Delayed time series predictions with neural networks, Journal of Neurocomputing, 18, 81–89.
  8. Dai, Y.H.,(2001). Alternate step gradient method, Report AMSS-2001-041, Academy of Mathematics and Systems Science, 32-56.
  9. Dai, Y.H and Yuan, Y.,(2003). Alternate minimization gradient method, IMA J. of Numerical Analysis, 23, 377
  10. Dai, Y.H and Yuan, Y.,(1999). A nonlinear conjugate gradient method with strong global convergence property, SIAM Journal of Optimization, 10, 177-182.
  11. Dai, Y.H., Yuan, J.Y and Yuan, Y.,(2002). Modified two-point stepsize gradient methods for unconstrained optimization, Computational Optimization and Application, 22, 103-109.
  12. Daniel, J.W.(1967). Conjugate gradient method for linear/nonlinear operator equations, SIAM J. Numeric Analysis, 4, 10 – 26.
  13. Dennis, J.E and Moore, J.J.,(1974). A characterization of superlinear convergence and its application to Quasi Newton methods, Mathematical Computation, 28, 549.
  14. Dennis, J.E and Moore, J.J.,(1977). Quasi Newton method: motivation and theory, SIAM Rev, 19, 46-89.
  15. Denton, J.W and Hung, M.S.,(1996). A comparison of nonlinear optimization methods for supervised learning in multilayer feedforward networks, European J. of Operation Research, 93, 358-368.
  16. Dixon, L.C.W.,(1972). Variable metric algorithms necessary and sufficient conditions for identical behaviour on non-quadratic functions, J. of Optimization Theory and Application, 10, 34-40.
  17. Fletcher, R.,(1987) Practical methods of optimization, John Wiley and Sons, Chichester.
  18. Fletcher, R and Reeves, C.,(1964). Function minimization by conjugate gradients, Computational Journal, 7, 149
  19. Forsythe, G.E.,(1986). On asymptotic directions of s-dimension optimum gradient method, Numerische Mathematik, 11, 57-76.
  20. Friedlander, A., Martinez, J.M., Molina, B and Raydan, M.,(1999). Gradient method with retards and generalization, SIAM J. of Numerical analysis, 36, 275-289.
  21. Ghalambaz, M., Noghrehabadi, A.R., Behrang, M.A., Assareh, E., Ghanbarzadeh, A and Hedayat, N.,(2011). A hybrid gravitational search neural network method to solve well known Wessinger’s equation, World Academy of Science, Engineering and Technology, 49, 803.
  22. Gottlieb, D and Orszag, S.A.,(1977). Numerical analysis of spectral methods: theory and applications, CBMS-NSF Regional Conference Series in Applied Mathematics, 26.
  23. Griewank, A and Toini, P.H.,(1982). Local convergence analysis of partitioned Quasi Newton updates, Numerical Mathematics, 39, 429-448.
  24. Hager, W and Zhang, H.,(2003). A new conjugate gradient method wit guaranteed descent and an efficient line search, SIAM J. of Optimization, 305-333.
  25. Heppner, H and Grenander, U.,(1990). A stochastic non-linear model for coordinated bird flocks”, In Krasner, S (Ed.), The ubiquity of chaos (233–238). Washington: AAAS.
  26. Jang, J.S.,(1993). Adaptive fuzzy inference systems. IEEE Transactions on Systems, Man and Cybernetics, 23, 665–685.
  27. Khan, J., Zahoor, R and Qureshi, I.R,(2009). Swarm intelligence for problem of non-linear ordinary differential equations and its application to well known Wessinger's equation. European J. Sci. Research, 34(4), 514-525.
  28. Lagris, I.E., Likas, A and Fotiadis, D.I.,(1998). Artificial neural networks for solving ordinary and partial differential equation, IEEE Trans. Neural Network, 9(5), 987
  29. Lee, H and Kang, I.S.,(1990). Neural algorithms for solving differential equations, Journal of Computational Physics, 91, 110–131.
  30. Liu, Y and Storey, C.,(1991). Efficient generalized conjugate gradient algorithms, J. of Optimization Theory Application, 69, 129-137.
  31. Malek, A and Beidokhti, R.S.,(2006). Numerical solution for high order differential equations using hybrid neural network - Optimization method, Applied Mathematics and Computation, 183, 260-271.
  32. Mandic, D. and Chambers, J.,(2001). Recurrent Neural Networks for Prediction: Learning Algorithms, Architectures and Stability, John Wiley andSons: New York.
  33. Meade, A.J and Fernandez, A.A.,(1994). The numerical solution of linear ordinary differential equations by feedforward neural networks, Mathematical and Computer Modeling, 19(12), 1–25.
  34. Nash, J and Sutcliffe, J., (1970). River flow forecasting with conceptual models, J. of Hydro. Sci., 10, 282–290.
  35. Ojugo, A.A, (2012a).Artificial neural networks gravitational search model for rainfall runoff simulation and modeling, unpublished PhD, Computer Sci. Department, Ebonyi State University Abakiliki, Nigeria.
  36. Ojugo, A.A., Eboka, A.O., Okonta, E.O., Yoro, R.E and Aghware, F.O.,(2012b). Genetic algorithm rule-based intrusion detection system, Journal of Emerging Trends in Computing and Information Systems, 3(8), 1182 - 1194.
  37. Ojugo, A., and Yoro, R.E., (2013a). Computational intelligence in stochastic solution for Toroidal Queen, Progress in Intel Comp. App., 2(1), 46–56.
  38. Ojugo, A.A., Emudianughe, J., Yoro, R.E., Okonta, E.O and Eboka, A..,(2013b). Hybrid artificial neural network gravitational search algorithm for rainfall runoff modeling in Hydrology, Progress Intel. Comp. App., 2(1), 22
  39. Perez, M and Marwala, T.,(2011). Stochastic optimization approaches for solving Sudoku, IEEE Transaction on Evolutionary Computation, 256–279.
  40. Pham, D and Karaboga, D.,(1999). Training Elman and Jordan networks for system identification using GA, Artificial Intelligence in Engineering, 13, 107–117.
  41. Pham, D., Koc, E., Ghanbarzadeh, A and Otri, S.,(2006). Optimization of weights of multi-layered perceptrons using bee algorithm. Proc. of Intelligent Manuf. Systems. Sakarya University, Dept of Industrial Engineering, 38.
  42. Pham, D.T and Liu, X.,(1995). Artificial Neural Networks for Identification, Prediction and Control, Springer Verlag, London.
  43. Plumb, A.P., Rowe, R.C., York, P and Brown, M.,(2005). Optimization of the predictive ability of artificial neural network models, European J. of Pharmaceutical Sciences, 25, 395-405.
  44. Polak, E and Ribiere, G.,(1969). Note sur la convergence de directions conjugees, Rev. Francaise Informat Researche Operationelle, 3(16), 35-43.
  45. Polyak, B.T.,(1969). The conjugate gradient method in extreme problems, USSR Computation Mathematics: Mathematic Physics, 9, 94-112.
  46. Powell, M.J.,(1971). On the convergence of the variable algorithm D, Winston Mathematics Application, 21.
  47. Powell, M.J.,(1976). Some global convergence properties of a variable metric algorithm for minimization without exact line searches, In Cottle, R.W and Lemke, C.E (eds.), Nonlinear programming, SIAM Proceedings of AMS, 9, 53-72.
  48. Rashedi, E., Nezamabadi-pour, H and Saryazdi, S.,(2009). GSA: A Gravitational Search Algorithm, Information Sciences, 179, 2232–2248.
  49. Rashedi, E., Nezamabadi-pour, H and Saryazdi, S.,(2009). Filter modeling using gravitational search algorithm, Energy policy; doi:10.1016/j.engappai.2010.05.007.
  50. Raydan, M.,(1993). On Barzilai and Borwein choice of stepsize for the gradient method, IMA Journal Numeric Analysis, 13, 321-326.
  51. Reynolds, R.,(1994). An introduction to cultural algorithms, IEEE Transaction on Evolutionary Programming, 131-139.
  52. Ritter, K.,(1979). Local and superlinear convergence of a class of variable method, Computing, 23, 287-297
  53. Stachurski, A.,(1981). Superlinear convergence of a class of variable method, Mathematical Programming, 14, 178-205.
  54. Ursem, R., Krink, T., Jensen, M.and Michalewicz, Z.,(2002). Analysis and modeling of controls in dynamic systems. IEEE Transaction on Evolutionary Computing, 6(4), 378-389
Index Terms

Computer Science
Information Sciences

Keywords

Stochastic, elitist network, function optimization, search space, solution