Static load balancing means that the workload (user requests, data access, and so on) is divided manually, or by a fixed rule, among multiple servers. Once set, this division does not change automatically, even if one server becomes overloaded while another sits mostly idle.
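As a minimal sketch of the idea, imagine the assignment is written down once and never revisited at runtime. The server names and client regions below are made up for illustration:

```python
# Static load balancing: the mapping from client group to server is
# fixed by hand and never changes while the system runs.
STATIC_ASSIGNMENT = {
    "europe": "server-a",
    "america": "server-b",
    "asia": "server-c",
}

def route(client_region):
    # The same region always goes to the same server, even if that
    # server happens to be overloaded right now.
    return STATIC_ASSIGNMENT[client_region]

print(route("asia"))  # server-c, every time
```

Nothing in this code ever reacts to server load; changing the split means editing the table and redeploying.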
In load balancing, a deterministic method means that the system always sends a request to the same server based on a fixed rule. For example, if your computer's IP address is used to decide where your request goes, you will always be sent to the same server every time. This makes the process predictable and stable. The problem is that if that server attracts too many heavy tasks or busy users, it can get overloaded while other servers sit nearly idle.
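A common way to implement such a fixed rule is to hash the client's IP address and use the result to pick a server. This is only a sketch; the server names are placeholders:

```python
import hashlib

SERVERS = ["server-a", "server-b", "server-c"]  # hypothetical pool

def pick_server(client_ip):
    # Hash the IP address so the same client always lands on the
    # same server, as long as the pool does not change.
    digest = hashlib.md5(client_ip.encode()).hexdigest()
    return SERVERS[int(digest, 16) % len(SERVERS)]

# Deterministic: the same IP always maps to the same server.
print(pick_server("203.0.113.7") == pick_server("203.0.113.7"))  # True
```

Note the weakness the text describes: if all the heavy users happen to hash to one server, the rule will keep sending them there regardless of how busy it is.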
On the other hand, a probabilistic method works more like rolling a die. It doesn't always send the request to the same server; instead, it chooses one based on chance. For example, Server A might be picked 50% of the time, Server B 30%, and Server C 20%. This helps spread the load more evenly between servers. It's more flexible and can handle changes better, but it might send the same user to different servers at different times, which can sometimes cause issues with tracking or saving user data.
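The 50/30/20 split above can be sketched with a weighted random draw. The names and weights are just the example values from the text:

```python
import random

SERVERS = ["server-a", "server-b", "server-c"]
WEIGHTS = [0.5, 0.3, 0.2]  # the 50% / 30% / 20% split from the example

def pick_server():
    # random.choices draws one server according to the weights,
    # so over many requests the load roughly matches the split.
    return random.choices(SERVERS, weights=WEIGHTS, k=1)[0]
```

Over thousands of requests the counts approach the target proportions, but any individual user may hit a different server on each request, which is exactly the session-tracking caveat mentioned above.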
In dynamic load balancing, the system checks how busy each server is in real-time and sends new requests to the one that’s least busy or most available. Unlike static load balancing (which uses fixed rules), dynamic methods can adjust automatically based on current conditions. This helps avoid overloading any single server and keeps the system running smoothly, even when traffic changes a lot.
There are two main types of dynamic load balancing: centralized and distributed.
In centralized load balancing, there is one main controller (or server) that watches over all the others. It collects information about how busy each server is and then makes the decision about where to send each new request. This method is easier to manage and can make smarter decisions since it sees everything in one place. But if the central controller goes down, the whole system can be affected.
In distributed load balancing, there is no single controller. Instead, all servers work together and share information with each other. Each server can help decide where to send requests based on what it knows about the others. This makes the system more reliable — even if one server fails, the others can keep working. However, it can be more complex since each server needs to communicate and stay updated.
In distributed load balancing, there are two types: cooperative and non-cooperative.
- Cooperative: Servers work together and share information. They help each other handle the load, leading to better decisions, but this needs more communication.
- Non-cooperative: Each server works on its own without sharing data. It's simpler and faster but can cause uneven load, because servers don't know what the others are doing.
Problems in Load Balancing (Simple Points):
- Uneven Work Distribution – Some servers get too much work while others do very little.
- Single Point of Failure – If the main controller stops working, the whole system can go down.
- Slow Response – If a user is sent to a distant or busy server, it takes longer to get a reply.
- Too Much Communication Between Servers – In some systems, servers exchange so many status messages that the overhead slows things down.
- Hard to Scale – As more servers or users are added, the system becomes harder to manage.
- Lost Sessions – If a user is sent to a different server each time, they might lose their session or data.
- Security Problems – Load balancers themselves can be attacked if they are not properly protected.