I believe the "tickless" approach should work too. You can set the kernel's tick rate to, say, 1000 HZ and still have a "tickless" kernel: the kernel will scale from 1 to 1000 HZ depending on how often it actually makes sense to interrupt the running program.
Here's something that could be used to fine-tune the system. It might be possible to set CONFIG_HZ to something ridiculously high (5000 HZ?) and then fine-tune this timeout_granularity to achieve the optimal effective HZ rate. Obviously, if the server ever requires more than 2000 interrupts per second, the system could scale up to 5000 HZ. I'm not sure about the possible overhead, though.
Ingo (kernel developer) wrote:
When running the kernel then there's a 'timeout granularity'
runtime tunable parameter as well, under:
/proc/sys/kernel/timeout_granularity
it defaults to 1, meaning that CONFIG_HZ is the granularity of timers.
For example, if CONFIG_HZ is 1000 and timeout_granularity is set to 10,
then low-res timers will be expired every 10 jiffies (every 10 msecs),
thus the effective granularity of low-res timers is 100 HZ. Thus this
feature implements nonintrusive dynamic HZ in essence, without touching
the HZ macro itself.
So, if I understand this correctly, you can enable the tickless kernel, set CONFIG_HZ to 3000 and then do "echo 100 >/proc/sys/kernel/timeout_granularity" to get an effective 30 Hz kernel (3000 / 100). Plug in other numbers and you get whatever effective Hz you want while staying "tickless".
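If you want to check what granularity you actually end up with after fiddling with these knobs, a quick test is to request a 1 ms sleep in a loop and measure how long the sleeps really take: on a 100 Hz kernel you should see roughly 10 ms, on a 1000 Hz kernel roughly 1-2 ms. Here's a minimal sketch in C (the 1 ms request and 100 iterations are arbitrary choices of mine). Note that this only reflects HZ / timeout_granularity on kernels where sleeps are still driven by the jiffy tick; with high-resolution timers (CONFIG_HIGH_RES_TIMERS) the sleep path bypasses the tick entirely.

/* granularity_test.c - measure the effective sleep granularity the
 * kernel actually gives you.
 * Build: gcc -O2 granularity_test.c -o granularity_test -lrt */
#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec req = { 0, 1000000 };   /* request a 1 ms sleep */
    struct timespec t0, t1;
    double total_ms = 0.0;
    int i, iterations = 100;

    for (i = 0; i < iterations; i++) {
        clock_gettime(CLOCK_MONOTONIC, &t0);
        nanosleep(&req, NULL);
        clock_gettime(CLOCK_MONOTONIC, &t1);
        total_ms += (t1.tv_sec - t0.tv_sec) * 1000.0 +
                    (t1.tv_nsec - t0.tv_nsec) / 1e6;
    }
    printf("average sleep for a 1 ms request: %.3f ms\n",
           total_ms / iterations);
    return 0;
}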
Another thing I've been pondering: why does everybody want a 1000 HZ kernel? 1000 HZ can't be a hard requirement for 1000 FPS.
Here's an article I found: http://www.smk.co.za/2007/07/21/a-tickless-kernel/
Quote: The higher the Hz, the more often the timer interrupts and the scheduler gets to run. The more often the scheduler runs, the more frequently applications multi task between each other. So, with a setting of 1000 Hz, applications would receive less time to run per second but more applications would be running in that second - and you get a more responsive desktop.
Quote: You don’t need to interrupt an application if it’s the only one running in the system. That will just: 1) be a waste of time because you’ll come straight back to it; 2) slow down the application and system; 3) waste CPU power. You can let it run its course and if there’s nothing to do, you can keep the CPU in a completely idle state and only wake up when needed.
One of my theories has been that if the kernel wakes up 1000 times per second, it knows more precisely when network data arrived - and thus knows more precisely when game events happened. My other, opposing theory is that the network interface tells the kernel the exact time the packet arrived anyway, so waking up that often is useless - just a waste of CPU power. Which one is it?
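One way to test the second theory: the kernel can timestamp each incoming packet in its own receive path, via the SO_TIMESTAMP socket option, independently of when the application finally wakes up to read it. Here's a rough sketch of a UDP receiver that prints these kernel arrival timestamps (port 27015 is just an example, and most error handling is omitted). If those timestamps stay accurate on a low-HZ kernel, the arrival time of game events is known regardless of how often the server process gets woken up.

/* arrival_time.c - print the kernel's receive timestamp for each
 * incoming UDP packet, taken in the kernel's receive path rather than
 * when the application calls recvmsg(). */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/uio.h>
#include <sys/time.h>
#include <netinet/in.h>

int main(void)
{
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    int on = 1;

    /* ask the kernel to attach a receive timestamp to every packet */
    setsockopt(sock, SOL_SOCKET, SO_TIMESTAMP, &on, sizeof(on));

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(27015);               /* example port */
    bind(sock, (struct sockaddr *)&addr, sizeof(addr));

    char buf[1500];
    char ctrl[CMSG_SPACE(sizeof(struct timeval))];
    struct iovec iov = { buf, sizeof(buf) };

    for (;;) {
        struct msghdr msg;
        memset(&msg, 0, sizeof(msg));
        msg.msg_iov = &iov;
        msg.msg_iovlen = 1;
        msg.msg_control = ctrl;
        msg.msg_controllen = sizeof(ctrl);

        if (recvmsg(sock, &msg, 0) < 0)
            break;

        /* the timestamp arrives as ancillary (control) data */
        struct cmsghdr *cmsg;
        for (cmsg = CMSG_FIRSTHDR(&msg); cmsg; cmsg = CMSG_NXTHDR(&msg, cmsg)) {
            if (cmsg->cmsg_level == SOL_SOCKET &&
                cmsg->cmsg_type == SCM_TIMESTAMP) {
                struct timeval tv;
                memcpy(&tv, CMSG_DATA(cmsg), sizeof(tv));
                printf("packet arrived at %ld.%06ld\n",
                       (long)tv.tv_sec, (long)tv.tv_usec);
            }
        }
    }
    close(sock);
    return 0;
}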
Related to the article (and especially to the quotes), it's stupid to interrupt a CPU-intensive program thousands of times per second just to notice that it wants to keep on running. My reading is that Nitoxys (the author of the original 1000 FPS tutorial on the Steampowered.com forums) thought it'd be good to make sure that when any other program on the server gets CPU time, its timeslice is very short. Thus, even if the game server is interrupted 1000 times per second (1 ms interval) for nothing, that's better than occasionally handing a 10 ms timeslice to, say, a web server.
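If short timeslices for the background processes are the real goal, a high HZ may not even be necessary. Under the O(1) scheduler of that era, the timeslice length scales with the nice value (roughly 100 ms at nice 0 down to around 5 ms at nice 19, if I remember the numbers right), so simply renicing the web server should shrink the chunks of CPU it can hold at a time. A trivial sketch, equivalent to running "renice 19 -p <pid>":

/* deprioritize.c - push a process to the lowest normal priority; under
 * the O(1) scheduler this also gives it the shortest timeslice, so it
 * cannot hold the CPU away from the game server for long.
 * Usage: ./deprioritize <pid> */
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/time.h>
#include <sys/resource.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <pid>\n", argv[0]);
        return 1;
    }
    pid_t pid = (pid_t)atoi(argv[1]);

    /* nice 19 is the lowest priority a normal process can have */
    if (setpriority(PRIO_PROCESS, pid, 19) != 0) {
        perror("setpriority");
        return 1;
    }
    return 0;
}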
I've got a server with a 100 HZ kernel. I run the game server, a web server, heavy MySQL load (over 400 queries per second) and several less CPU-intensive programs on the same machine. My game server's FPS is around 500 (fps_max 600). If I set "fps_max 0", the FPS occasionally jumps to ~980. I'm quite sure the kernel's Hz isn't what limits the FPS. So why is it important to have it high?