AI bias occurs when a computer system or Artificial Intelligence (AI) makes unfair decisions because of how it was built or trained. This often happens when the system learns from data that reflects people's biases. For example, if an AI system is taught to recognize human faces using a set of images that show mostly white people, it may have a harder time recognizing the faces of people of other races. AI bias can also come from design choices, such as rules or goals that favor one group over another.
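To make the training-data example concrete, here is a minimal, hypothetical sketch in Python. It uses synthetic numbers, not real face images: the groups, thresholds, and sample sizes are illustrative assumptions, not drawn from any real system. A simple model trained on data that comes mostly from one group tends to learn that group's pattern and makes more mistakes on the underrepresented group.

```python
# Toy illustration of training-data bias (synthetic data, not real faces).
# Group A supplies 95% of the training examples; group B supplies 5%,
# and the pattern that determines B's labels is slightly different.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, threshold):
    """One feature per example; the true label flips at `threshold` for this group."""
    x = rng.normal(size=(n, 1))
    y = (x[:, 0] > threshold).astype(int)
    return x, y

# Skewed training set: 950 examples from group A, only 50 from group B.
xa, ya = make_group(950, threshold=0.0)   # group A's pattern
xb, yb = make_group(50, threshold=1.0)    # group B's pattern differs

model = LogisticRegression()
model.fit(np.vstack([xa, xb]), np.concatenate([ya, yb]))

# Evaluate on fresh data from each group.
for name, threshold in [("group A", 0.0), ("group B", 1.0)]:
    x_test, y_test = make_group(2000, threshold)
    print(f"{name}: accuracy = {model.score(x_test, y_test):.2f}")
# Typical output: group A scores near 0.98, group B noticeably lower
# (around 0.7), because the model mostly learned group A's pattern.
```

The disparity appears even though the model is not told which group each example belongs to: the imbalance in the data alone is enough to make it serve one group better than the other.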