Empirical risk minimization is a way of finding a good solution to a problem by measuring how well candidate solutions perform on real examples, or data. Imagine you have a pile of puzzle pieces and you need to put them together to make a picture. Every time you try fitting the pieces together, you can see whether they fit. You keep trying different arrangements until you find the one that makes the picture. That's what empirical risk minimization is like: trying different candidate solutions and keeping the one that makes the fewest mistakes on your examples.
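The idea above can be sketched in a few lines of code. This is a minimal, made-up illustration, not a real library: the data points and the candidate threshold rules are invented, and the "risk" is simply the fraction of examples each rule gets wrong. ERM picks the candidate with the lowest fraction.

```python
# Minimal sketch of empirical risk minimization: try several candidate
# rules and keep the one that makes the fewest mistakes on the examples.
# The data and the candidate rules here are invented for illustration.

examples = [(1, 0), (2, 0), (3, 1), (4, 1)]  # (input, correct label)

# Candidate rules: predict 1 when the input exceeds some threshold t.
candidates = [lambda x, t=t: 1 if x > t else 0 for t in (0, 2, 5)]

def empirical_risk(rule):
    """Fraction of examples the rule gets wrong (the 'empirical risk')."""
    return sum(rule(x) != y for x, y in examples) / len(examples)

# Empirical risk minimization: choose the candidate with the lowest risk.
best = min(candidates, key=empirical_risk)
print(empirical_risk(best))  # prints 0.0: threshold 2 fits every example
```

The same loop works with any family of candidate rules and any way of scoring mistakes; in practice the candidates are model parameters and the search is done by an optimizer rather than by listing them all.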
Now, imagine you have a bunch of shapes and you need to sort them into different groups. You can look at each labeled shape to see which group it belongs to, and then come up with a rule that sorts all the shapes with as few mistakes as possible. This is what people do when they use empirical risk minimization to build machine learning models: they look at lots of examples (data) and choose the rule that makes the fewest errors on them, so it can sort new examples into the right groups.
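The shape-sorting story can also be sketched directly. In this hypothetical example, each shape is described only by its number of sides, and the learned rule assigns each side count to whichever group appears most often in the data, which is exactly the assignment that minimizes sorting errors on the examples.

```python
# Hypothetical sketch: learn a sorting rule from labeled shapes. For each
# number of sides, pick the group seen most often in the examples; the
# majority choice is the one that minimizes errors on the training data.
from collections import Counter

shapes = [
    (3, "triangle"), (3, "triangle"), (4, "square"),
    (4, "square"), (4, "square"), (3, "triangle"),
]  # (number of sides, group label)

# Count how often each group appears for each side count.
counts = {}
for sides, group in shapes:
    counts.setdefault(sides, Counter())[group] += 1

# The rule maps each side count to its most common group.
rule = {sides: c.most_common(1)[0][0] for sides, c in counts.items()}

# The learned rule now sorts shapes it has never seen before.
print(rule[3], rule[4])  # prints: triangle square
```

A real model would generalize beyond side counts it has seen, but the principle is the same: the rule is chosen because it makes the fewest mistakes on the examples at hand.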
Empirical risk minimization helps people build models for problems such as image recognition or natural language processing. By choosing whichever model performs best on real examples, they can create models that are better at understanding and predicting new ones.