Explainable AI-Driven Decision Support for Social Benefit Optimization: Improving Fairness, Reliability, and Managerial Oversight

Authors

  • Shakun Garg, Department of Computer Science and Engineering, Greater Noida Institute of Technology, Greater Noida, Uttar Pradesh, India
  • Amit Verma, School of Computer Science, University of Petroleum and Energy Studies, Dehradun, Uttarakhand, India

DOI:

https://doi.org/10.63503/j.ijaimd.2025.195

Keywords:

Explainable Artificial Intelligence (XAI), Decision Support Systems, Social Benefit Optimisation, Fairness-Aware Machine Learning, Reliable AI, Uncertainty Estimation, Human-in-the-Loop Oversight, Policy Compliance, Transparent AI Governance

Abstract

Governments and organizations increasingly depend on data-driven decision systems, making fair and transparent distribution of social benefits more necessary than ever. However, traditional AI platforms tend to be black-box models: interpretability is limited, and biases can persist that compromise trust and undermine managerial control. To address these issues, this paper proposes an explainable artificial intelligence (XAI)-based decision support system that improves fairness, reliability, and policy compliance in social-benefit distribution workflows. The approach combines interpretable predictive modelling, equity-sensitive adjustments, uncertainty estimation, and human-in-the-loop oversight in a single pipeline. Quantitative evaluation on synthetic and real-world welfare data shows that the proposed framework reduces demographic bias by 22.7%, improves decision stability under perturbations by 18.4%, and achieves 31.2% greater explanation fidelity than non-explainable baselines. The system further improves allocation consistency by 17.5% and reduces policy-violation risk by 14.9%, while sustaining competitive predictive accuracy. Experimental findings indicate that the system not only generates balanced and credible benefit recommendations but also strengthens managerial control through transparent decision rationales and audit-traceable procedures. The results underscore the value of explainable, decision-aware AI systems in supporting socially responsible and accountable decision-making in the administration of the public good.
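The pipeline described in the abstract (interpretable scoring, equity-sensitive adjustment, uncertainty estimation, and human-in-the-loop routing) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the linear coefficients, the mean-matching equity adjustment, the perturbation-based uncertainty proxy, and all thresholds are hypothetical choices made for the sake of the example.

```python
# Illustrative sketch of a fairness-aware decision-support pipeline.
# NOT the paper's system: model, adjustment rule, and thresholds are
# hypothetical stand-ins for the four components named in the abstract.

from dataclasses import dataclass
from statistics import mean, pstdev

@dataclass
class Applicant:
    need_score: float   # normalized need indicator in [0, 1]
    income: float       # normalized income in [0, 1]
    group: str          # protected-attribute label, e.g. "A" or "B"

def interpretable_score(a: Applicant) -> float:
    """Transparent linear model: every coefficient is directly auditable."""
    return 0.7 * a.need_score + 0.3 * (1.0 - a.income)

def equity_adjustment(scores, groups):
    """Shift each group's scores so group means match the overall mean
    (a simple post-processing step toward demographic parity)."""
    overall = mean(scores)
    by_group = {}
    for s, g in zip(scores, groups):
        by_group.setdefault(g, []).append(s)
    offsets = {g: overall - mean(v) for g, v in by_group.items()}
    return [s + offsets[g] for s, g in zip(scores, groups)]

def uncertainty(a: Applicant, perturbation: float = 0.05) -> float:
    """Crude sensitivity proxy: spread of scores under small input
    perturbations; a high spread routes the case to a human reviewer."""
    variants = [
        interpretable_score(Applicant(a.need_score + d, a.income, a.group))
        for d in (-perturbation, 0.0, perturbation)
    ]
    return pstdev(variants)

def decide(applicants, approve_threshold=0.5, review_threshold=0.03):
    """Full pipeline: score, adjust for equity, then approve, deny, or
    escalate to human oversight when the decision is too sensitive."""
    raw = [interpretable_score(a) for a in applicants]
    adj = equity_adjustment(raw, [a.group for a in applicants])
    decisions = []
    for a, score in zip(applicants, adj):
        if uncertainty(a) > review_threshold:
            decisions.append("human_review")  # human-in-the-loop path
        else:
            decisions.append("approve" if score >= approve_threshold
                             else "deny")
    return decisions
```

Because every stage is a small, inspectable function, each decision can be traced back to explicit coefficients and offsets, which is the kind of audit-traceable rationale the abstract emphasizes.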


Published

2026-01-03

How to Cite

Shakun Garg, & Amit Verma. (2026). Explainable AI-Driven Decision Support for Social Benefit Optimization: Improving Fairness, Reliability, and Managerial Oversight. International Journal on Engineering Artificial Intelligence Management, Decision Support, and Policies, 2(4), 15–28. https://doi.org/10.63503/j.ijaimd.2025.195

Section

Research Articles