An AI has neither free will nor freedom of action, nor can it consider the possible consequences of its actions in the way a human can. Nevertheless, the causal responsibility of an artificial agent for an outcome cannot be denied. In many cases it is therefore unclear to what degree an artificial agent will ultimately be held responsible for a decision. This ambiguity about who is finally responsible for an outcome opens up a kind of “moral wiggle room”, which people might (un)consciously exploit because it makes self-serving decisions easier to justify. I am interested in the conditions and situations under which people take advantage of this ambiguity about the new kind of agent to justify unethical behavior. In this project I conduct laboratory experiments modeling various hybrid decision situations in the moral domain (e.g., negative third-party externalities or information manipulation) to identify setups that lead to unethical behavior when people interact with artificial agents. Furthermore, building on the theory of ex post and ex ante causal responsibility attribution by Engl (2018), I develop a theoretical concept that separates responsibility into causal and moral responsibility and encompasses artificial agents.