Existential risk from artificial general intelligence is the idea that building computers able to reason and act on their own could, in the worst case, lead to outcomes that endanger humanity as a whole. The concern is not an evil robot in the science-fiction sense, but a highly capable system pursuing goals that conflict with human interests, or pursuing the right goals in harmful ways. Because such a failure could be severe and hard to reverse, researchers argue that these systems should be designed from the start so that their goals and behavior remain aligned with human values.