Understanding AI systems is a significant challenge, and explainable AI (XAI) approaches aim to shed light on their decision-making processes. However, current XAI methods fall short of effectively explaining decisions to end users and of addressing concerns about fairness and bias. In this project, I develop approaches to increase the transparency and explainability of AI systems, aiming to build trust by enabling a wider public to understand AI decisions and to participate in discussions about them. I use behavioral science methods to identify effective explanation techniques and to develop a step-by-step AI explainability framework.