When you go to a store and want to buy something, you must look for the item you want to purchase and then take it to the cashier to pay for it, right? Similarly, in computer programming, when we have multiple processes (or people) that want to access the same resource (like a variable or a memory location), we need a way to coordinate their actions so that everyone gets a fair chance to use the resource without causing any problems.
Fetch-and-add is one of the building blocks for this kind of coordination: it lets each process update a shared counter in a single, uninterruptible step, so the processes can take turns without stepping on each other. Here's how it works:
Imagine that you and your friend want to buy the same toy from the store, but only one person can buy it at a time. To avoid any confusion or chaos, the store keeps a special paper with a number on it that tracks who is next in line. When you arrive, you look at the current number on the paper and note it down as your place in line (this is like reading, or "fetching", the current value of the shared counter). Then you add one to the number on the paper and hand it back to the store clerk (this is like updating the value for the next person). When your friend arrives, they see the updated number and know they are second in line. They note their own number, add one for whoever comes next, and wait for their turn to buy the toy.
In computer programming, fetch-and-add works similarly. When a process wants to update a shared counter, it "fetches" (reads) the current value and "adds" a given amount to it, and the crucial part is that both steps happen as a single, indivisible unit: an atomic operation. No other process can sneak in between the read and the write. Using atomic operations on shared resources prevents race conditions, situations where the result depends on the unpredictable timing of two or more processes accessing the same resource, which can lead to lost updates and incorrect results.
In short, fetch-and-add is a simple but powerful way to coordinate access to shared resources, ensuring that even when many processes update the same value at the same time, nothing gets lost and everyone gets their turn.