This guide introduces system administrators to how the ClearOS 'system load' metric works and how it can be used to understand performance issues.
System load is a measure of how much computational work the system is performing; on Linux it reflects the number of processes that are running on, or waiting for, the processor. You can see this metric by running the top command.
The value you are looking for is at the top right of top's display, under 'load average.' Of the three numbers, the first is the 1-minute average, the second is the 5-minute average, and the third is the 15-minute average.
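If you only need the numbers and not the full process display, the same three averages can be read directly from the kernel. A minimal sketch, assuming a Linux system (such as ClearOS) where /proc/loadavg is available:

```shell
# /proc/loadavg begins with the 1-, 5-, and 15-minute load averages.
# (The remaining fields are scheduler counts and the last PID.)
read one five fifteen rest < /proc/loadavg
echo "1 min:  $one"
echo "5 min:  $five"
echo "15 min: $fifteen"
```

This is handy in scripts and cron jobs, where running an interactive tool like top is not practical.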
A common analogy is a bridge with traffic passing over it. At 0.50, the bridge is carrying half the traffic it can handle. There is plenty of space, and all the cars flow across the bridge freely. Some processes may be slow and others fast, but that is fine: the traffic is flowing.
At 1.00, the bridge has EXACTLY as many cars passing over it as it can carry at full speed. In practice, 1.00 is a bad number: this is the point at which modern processors bump their performance up by drawing more electricity, and if you have fans that respond to increased processor heat, you will start to hear them kick in. Additionally, 1.00 leaves no headroom for periodic spikes, which can trigger a backlog.
At any number higher than 1.00, you have a problem: processes are being slowed. There are multiple ways of addressing it. Sometimes simple configuration tuning is all that is required; other times you may need upgraded hardware, or additional hardware to 'share' the load. It all depends on the services you are running.
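A reasonable first step before tuning or buying hardware is to see which processes are consuming the CPU. One quick way, assuming a Linux system with the standard procps ps command:

```shell
# Show the five processes using the most CPU right now.
# Columns: CPU percentage, process ID, and command name.
ps -eo pcpu,pid,comm --sort=-pcpu | head -n 6
```

If one or two services dominate the list, they are the natural starting point for configuration tuning.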
Be sure to read the NEXT section as well. It is important.
With multiple processors, which are common in most systems, the math changes: you get 1.00 of capacity for every processor you have. You may see a system load of 7.00 while your system is running fine. Why? On a system with 8 processors, a load of 7.00 is below the threshold. It is the equivalent of traffic that would saturate a 7-lane bridge flowing over an 8-lane bridge instead. In other words, it's just fine.
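Rather than doing this division in your head, you can normalize the load average by the processor count. A small sketch, assuming nproc and /proc/loadavg are available (both standard on Linux):

```shell
# Divide the 1-minute load average by the number of CPUs.
# A per-CPU result above 1.00 means the system as a whole is saturated.
cpus=$(nproc)
load=$(cut -d ' ' -f1 /proc/loadavg)
awk -v l="$load" -v c="$cpus" \
    'BEGIN { printf "load %.2f over %d CPUs = %.2f per CPU\n", l, c, l / c }'
```

The per-CPU figure is the one to compare against the 1.00 'bridge capacity' threshold described above.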
That being said, multiple processors can complicate troubleshooting, because some services and applications are not multithreaded but rather monolithic in nature. A monolithic application only ever runs on one processor at a time. This can present a problem: you may have an 8-core system with a system load of only 1.26 that is still getting bogged down. OpenVPN is an example of a monolithic service; to scale it, you actually have to run multiple copies of the application on different ports. Applications like dansguardian run multiple independent workers, so their work can be spread across processors. In our 8-lane bridge analogy, imagine a sign reading 'Trucks must use right-most lane ONLY.' With enough trucks, that one lane jams even while the other lanes flow freely.
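You can often spot this single-core bottleneck directly. A sketch, assuming the procps ps command, which can report the processor (PSR column) each process last ran on:

```shell
# Show which CPU core each of the busiest processes last ran on.
# A monolithic service pinned near 100% CPU on one PSR value, while
# the overall load average looks modest, is the pattern to watch for.
ps -eo pid,psr,pcpu,comm --sort=-pcpu | head -n 6
```

If one process sits at roughly 100% CPU on a single core, it is likely the monolithic culprit, even though the system-wide load average suggests spare capacity.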