We rely on algorithmic systems for decision making across society. We need to have confidence in them and be able to identify their harmful, unlawful, or socially unacceptable outcomes. The question is: how can algorithms be assessed? Our mission is to develop an Independent Audit of AI Systems, consisting of a transparent and crowd-sourced set of audit rules and standards for all autonomous systems in the areas of Ethics, Bias, Privacy, Trust, and Cybersecurity. We are more than 120 interdisciplinary fellows from around the world working to create this audit in a transparent and crowd-sourced way. The main goal of the project is to develop standardized measures for evaluating metrics of fairness, precision, and accuracy in algorithms for particular tasks (e.g. data quality, inclusion, accuracy, completeness, validity, accessibility).
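To make the idea of standardized fairness and accuracy measures concrete, here is a minimal illustrative sketch (not the project's actual audit standard; all function names and data are hypothetical) of two metrics an audit might report: overall accuracy, and the demographic parity gap, i.e. the difference in positive-prediction rates between groups.

```python
# Illustrative sketch only: two simple metrics an algorithm audit might
# report. Nothing here is taken from the project's actual rule set.

def accuracy(y_true, y_pred):
    """Fraction of predictions that match the true labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def demographic_parity_gap(y_pred, groups):
    """Largest difference in positive-prediction rate between any two groups."""
    counts = {}
    for pred, g in zip(y_pred, groups):
        n_pos, n = counts.get(g, (0, 0))
        counts[g] = (n_pos + (pred == 1), n + 1)
    rates = [n_pos / n for n_pos, n in counts.values()]
    return max(rates) - min(rates)

# Toy audit data: predictions, ground truth, and a protected attribute.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

print(accuracy(y_true, y_pred))                # 0.75
print(demographic_parity_gap(y_pred, groups))  # 0.0 (both groups at 0.5)
```

A real audit standard would define many more such measures (completeness, validity, accessibility of the underlying data), along with agreed thresholds; the point of the sketch is only that each measure can be stated as a precise, reproducible computation.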