Data-Centric Governance Models Using Trustworthy AI: Strengthening Transparency, Bias Control, and Policy Compliance in Welfare Management
DOI: https://doi.org/10.63503/j.ijaimd.2025.200
Keywords:
Trustworthy AI, Data-Centric Governance, Welfare Management Systems, Transparency Enhancement, Bias Mitigation, Policy Compliance Automation, Explainable AI, Fairness-Aware Decision Models, Governance Framework, Ethical AI Deployment
Abstract
The growing use of AI-based decision-making in government welfare programs has heightened concerns about opaque models, algorithmic bias, and uneven compliance with policy provisions. Current welfare systems often combine fragmented data streams with black-box models, producing measurable disparities in benefit-eligibility determinations across demographic groups and case-reopening rates in automated decision pipelines of more than 20 percent. To address these obstacles, this article proposes an integrated data-centric governance model built on trustworthy AI principles, combining transparency enhancement, bias mitigation, and automated verification of policy compliance. The framework incorporates structured data governance, fairness-aware modeling, explainable decisions, and rule-driven compliance enforcement to ensure consistent, auditable welfare outcomes. Empirical experiments on welfare-analogous datasets show that the proposed model narrows demographic gaps by 31-38%, raises policy compliance accuracy from 78% to 96%, and improves transparency scores by 42% relative to baseline machine learning systems. The governance layer is also computationally efficient, with a mean runtime overhead of 6.9-9%. These findings indicate that data-centric, trustworthy AI offers a promising path toward fair, reliable, and regulation-consistent welfare decision-making in population-scale AI deployments.
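To make two of the framework components concrete, the minimal Python sketch below illustrates how a governance layer might quantify a demographic approval gap and run a rule-driven policy compliance check over eligibility decisions. It is an illustrative assumption, not the paper's implementation: the Applicant fields, the income-ceiling rule, and the 2500 threshold are hypothetical.

# Illustrative sketch (hypothetical fields and thresholds; not the paper's
# actual implementation): a demographic-parity style disparity measure and a
# rule-driven compliance check over welfare-style eligibility decisions.

from dataclasses import dataclass


@dataclass
class Applicant:
    group: str          # demographic group label (hypothetical attribute)
    income: float       # monthly household income
    household_size: int
    approved: bool      # the model's eligibility decision


def demographic_gap(applicants: list[Applicant]) -> float:
    """Absolute difference between the highest and lowest group approval rates."""
    rates = {}
    for group in {a.group for a in applicants}:
        members = [a for a in applicants if a.group == group]
        rates[group] = sum(a.approved for a in members) / len(members)
    return max(rates.values()) - min(rates.values())


def compliance_violations(applicants: list[Applicant],
                          income_ceiling: float = 2500.0) -> list[Applicant]:
    """Flag approvals that violate a codified policy rule
    (here, a hypothetical per-member income ceiling)."""
    return [a for a in applicants
            if a.approved and a.income / a.household_size > income_ceiling]


if __name__ == "__main__":
    sample = [
        Applicant("A", 1800.0, 3, True),
        Applicant("A", 3000.0, 1, True),   # breaches the income-ceiling rule
        Applicant("B", 1700.0, 2, False),
        Applicant("B", 1600.0, 4, True),
    ]
    print(f"demographic approval gap: {demographic_gap(sample):.2f}")
    print(f"non-compliant approvals: {len(compliance_violations(sample))}")

In a full governance pipeline of the kind the abstract describes, such checks would run alongside explainability tooling and audit logging rather than as standalone scripts; the sketch only shows the shape of the fairness and compliance signals involved.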