
Erly Krisnanik
Widya Cholil
Muhammad Adrezo
Catur Nugrahaeni DP
Mumtazimah Binti Mohamad

Abstract

According to the Ministry of Health of the Republic of Indonesia, the national stunting prevalence rate in 2020 was 38.9%. In Central Java, the prevalence rate is 33.9%, of which 17.0% of children are stunted and 16.9% are very short. The purpose of this study is to obtain valid data on the factors causing stunting and to carry out the classification process quickly. The method used is machine learning, comparing three algorithms: SVM, KNN, and Random Forest. The results show that the average accuracy of classifying early-childhood stunting data is above 80% for SVM and KNN and below 80% for Random Forest. The average precision and recall values are 84% and 80% with SVM, 95% and 91% with KNN (K = 1), and 87% and 52% with Random Forest. Comparing the three methods on precision and recall, KNN with K = 1 performs best, approaching 100%.
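To illustrate the kind of computation the comparison relies on, the following is a minimal, stdlib-only sketch of nearest-neighbour classification with K = 1 together with the precision and recall formulas reported in the abstract. The toy feature vectors and labels below are illustrative assumptions, not the study's dataset, and the helper names (`knn_predict`, `precision_recall`) are hypothetical.

```python
import math
from collections import Counter

def knn_predict(train_X, train_y, x, k=1):
    """Predict the label of x by majority vote among its k nearest training points."""
    dists = sorted((math.dist(x, xi), yi) for xi, yi in zip(train_X, train_y))
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

def precision_recall(y_true, y_pred, positive=1):
    """Precision = TP/(TP+FP); recall = TP/(TP+FN), for the positive class."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Toy data: features might be (height-for-age z-score, weight in kg); label 1 = stunted.
train_X = [(-3.0, 9.0), (-2.5, 10.0), (0.5, 13.0), (1.0, 14.0)]
train_y = [1, 1, 0, 0]
test_X = [(-2.8, 9.5), (0.8, 13.5)]
test_y = [1, 0]

preds = [knn_predict(train_X, train_y, x, k=1) for x in test_X]
print(preds)                            # [1, 0]
print(precision_recall(test_y, preds))  # (1.0, 1.0)
```

In the study itself, such per-fold precision and recall values would be averaged across cross-validation folds to yield the reported percentages for each algorithm.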


How to Cite
Krisnanik, E., Cholil, W., Adrezo, M., DP, C. N., & Binti Mohamad, M. (2025). Classification of stunting for early childhood in indramayu using machine learning methods. International Journal of Basic and Applied Science, 14(2), 97–109. https://doi.org/10.35335/ijobas.v14i2.833