American International Journal of Computer Science and Technology
E-ISSN: XXXX - XXXX P-ISSN: XXXX - XXXX

Open Access | Research Article | Volume 1 Issue 1

Human–AI Collaboration in High-Stakes Decision Making: Trust Calibration through Transparency

Author: D. Jenifar
Year of Publication: 2025
DOI: XX:XXXXX:XXXXXXXX
Paper ID: AIJCST-V1I1P103


How to Cite:
D. Jenifar, "Human‐AI Collaboration in High‐Stakes Decision Making: Trust Calibration through Transparency" American International Journal of Computer Science and Technology, Vol. 1, No. 1, pp. 14-22, 2025.

Abstract:
As artificial intelligence (AI) systems are increasingly integrated into high-stakes decision-making domains, the challenge of fostering appropriate human trust in these systems becomes critical. Miscalibrated trust—either overtrust or distrust—can lead to significant consequences, including poor outcomes, ethical violations, or system rejection. This paper investigates how transparency mechanisms in AI systems can support effective trust calibration in human–AI collaboration. We review the current literature on trust in AI, identify key transparency dimensions (e.g., algorithmic explainability, performance metrics, uncertainty reporting), and examine how these factors influence human decision-making behavior. Through case studies in domains such as healthcare diagnostics and autonomous weapons systems, we highlight both the opportunities and limitations of transparency as a tool for trust calibration. Finally, we propose a framework for designing AI systems that foster appropriate levels of trust by aligning transparency with user needs, context specificity, and ethical imperatives.
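
To make the notion of trust calibration concrete: one common way to operationalize over- and under-trust is to compare how often a human defers to an AI recommendation against how often the AI is actually correct, stratified by the model's reported confidence. The Python sketch below is a minimal illustration of that idea only; it is not the paper's method, and the function name, data layout, and toy records are assumptions made for the example.

from collections import defaultdict

def calibration_gaps(records):
    """records: iterable of (ai_confidence, ai_correct, human_accepted) tuples.

    Illustrative sketch: returns, per confidence decile, the gap between the
    human's reliance rate and the AI's accuracy. (Hypothetical helper, not
    taken from the paper.)
    """
    buckets = defaultdict(lambda: {"n": 0, "correct": 0, "accepted": 0})
    for conf, correct, accepted in records:
        b = min(int(conf * 10), 9)  # decile bucket: 0 -> [0.0, 0.1), ..., 9 -> [0.9, 1.0]
        s = buckets[b]
        s["n"] += 1
        s["correct"] += int(correct)
        s["accepted"] += int(accepted)
    gaps = {}
    for b in sorted(buckets):
        s = buckets[b]
        accuracy = s["correct"] / s["n"]   # how often the AI was right
        reliance = s["accepted"] / s["n"]  # how often the human accepted the AI's call
        # Positive gap suggests overtrust; negative gap suggests distrust or disuse.
        gaps[(b / 10, (b + 1) / 10)] = reliance - accuracy
    return gaps

# Toy data (invented for illustration): humans defer even when a
# low-confidence model is wrong, i.e., overtrust in the 0.5-0.6 decile.
toy = [
    (0.55, False, True),
    (0.58, False, True),
    (0.62, True,  True),
    (0.90, True,  True),
    (0.92, True,  False),
    (0.95, True,  True),
]
for (lo, hi), gap in calibration_gaps(toy).items():
    print(f"confidence {lo:.1f}-{hi:.1f}: reliance - accuracy = {gap:+.2f}")

Run on the toy records, the 0.5-0.6 decile shows a gap of +1.00 (full reliance on a model that was never right there), while the 0.9-1.0 decile shows -0.33 (occasional overriding of a reliable model); a well-calibrated team would show gaps near zero in every decile.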

Keywords: Human–AI Collaboration, Trust Calibration, Transparency, Explainability, High-Stakes Decision-Making, Human Factors, Responsible AI, Trustworthy AI, Algorithmic Accountability, Human–Machine Teaming.

