The main goal of this project is to develop approaches that increase the explainability and transparency of AI. Understanding AI systems is emerging as a major challenge of the near future. To build trust in these systems, their choices and actions need to be explainable and transparent - not only to the IT developer but also to the end user. This is the subject of the research field of AI explainability. One direction in this field is Explainable AI (XAI), where the mechanisms of the AI system are used to provide additional outputs that help to understand how a decision was reached (see Sokol and Flach (2020)). However, while this approach may shed light on how a decision was reached, it does not help in explaining that decision to the user or in reducing concerns about the fairness of AI arising from algorithmic bias. This is where social science takes over, for example through the proposed idea of a digital marketplace - the TuringBox - to study AI behaviors, as presented by Epstein et al. (2018). Epstein et al. (2018) argue that the outputs of AI systems should be seen as independent behavioral identities and propose "...using scientific techniques like experimentation and causal inference to understand these behaviors, agnostic to the underlying system architecture" (p. 3). In addition, I consider it important not only to provide access to all information, but also to enable a wider public to understand and to participate in the discussion on AI systems. Therefore, explanations need to be offered at different levels of depth for different audience groups; otherwise, the audience might be overwhelmed by information, leading to a decrease in understanding as the systems appear too complex. Only if society is enabled to understand AI can trust be built and the benefits of AI systems be fully leveraged.
In this project, I use behavioral science methods to research how to phrase an explanation with respect to level of detail and wording, and I aim to develop a step-by-step AI explainability framework.