Strange FPS/CPU behaviour
#1
I've just got hold of a new dedi; specs are as follows:

C2Q 2.4 GHz
4GB RAM
CentOS 5, kernel recompiled for 1000 fps (low-latency preemption etc.)

srcds has been running under taskset -c 0. There's been one other server on the same core as the one I've been testing with, but it's been empty the whole time.
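
For reference, this is roughly the launch line and how to double-check the pinning afterwards (the game and binary names are illustrative, not my exact command line; assumes a single instance so pidof returns one PID):

Code:
taskset -c 0 ./srcds_run -game cstrike -console +maxplayers 16 +map de_dust2 &
# verify which core(s) the running process is bound to; the child usually shows up as srcds_linux
taskset -cp $(pidof srcds_linux)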

I'm not sure if this is a problem or normal behaviour. Here's what's happening:
  • srcds instance starts, nobody connected. It usually runs at 990-994 fps; stats reports 24-26% CPU (of one core?), although top gives a more realistic figure of 2-3%.
  • 10 players join; fps jumps between 990 and ~500, stats says 99.9% CPU, top says about 25% of one core.
  • Players leave; fps now hovers around 950-990 (still idle, but lower than before?) with drops down to ~600, and stats puts CPU usage at 30-50%.
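
For anyone comparing these figures themselves, this is how I've been watching per-core load next to the process numbers (standard procps/sysstat tools; mpstat needs the sysstat package):

Code:
top -d 1          # then press '1' to split the Cpu(s) summary into one line per core
mpstat -P ALL 1   # per-core utilisation, sampled every second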


This may help to provide a better picture:

[Image: dedikicsscomsrcdsfps811wd4.png]


What I'd like to know is:

1. Is this normal?
2. Can it be corrected?

Cheers, disco.
#2
SourceTV?

Try without taskset.
#3
Running without taskset gave pretty much identical results, and SourceTV wasn't enabled during the tests I did.

I take it this isn't normal behaviour then?
#4
Not really. Which howto did you follow for 1000 fps?
#5
I roughly followed the one on the Steam forums.

I think you're right that it's down to how the kernel's been compiled, though. I'm going to try a few more builds tonight (2000 Hz, with/without HPET, possibly tickless too), although I haven't heard many good things about tickless kernels.
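
For anyone following along, these are the .config knobs I'll be toggling between builds. Option names are from 2.6-era menuconfig; note that the stock Kconfig only offers timer frequencies up to 1000 Hz, so 2000 Hz means patching the HZ choices first:

Code:
# Processor type and features ---> Timer frequency
CONFIG_HZ_1000=y
CONFIG_HZ=1000             # 2000 needs a patch to the Kconfig HZ menu
# Tickless system (dynticks)
CONFIG_NO_HZ=y
# High-resolution timers / HPET
CONFIG_HIGH_RES_TIMERS=y
CONFIG_HPET_TIMER=y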
#6
I believe the "tickless" system should work too. You can set the kernel's tick rate to, say, 1000 Hz and still have a "tickless" kernel: it will scale from 1 to 1000 Hz depending on how often it actually needs to interrupt the program.

Here's something that could be used to fine-tune the system. It might be possible to set CONFIG_HZ to something ridiculously high (5000 Hz?) and then fine-tune this timeout_granularity to reach the optimal effective rate. That way, if the server ever needs more than 2000 interrupts per second, the system could scale up to 5000 Hz. I'm not sure about the possible overhead, though.

Ingo (kernel developer) Wrote:When running the kernel then there's a 'timeout granularity'
runtime tunable parameter as well, under:

/proc/sys/kernel/timeout_granularity

it defaults to 1, meaning that CONFIG_HZ is the granularity of timers.

For example, if CONFIG_HZ is 1000 and timeout_granularity is set to 10,
then low-res timers will be expired every 10 jiffies (every 10 msecs),
thus the effective granularity of low-res timers is 100 HZ. Thus this
feature implements nonintrusive dynamic HZ in essence, without touching
the HZ macro itself.

So, if I understand this correctly, you could enable the tickless kernel, set CONFIG_HZ to 3000 and then do "echo 3 >/proc/sys/kernel/timeout_granularity" to get an effective 1000 Hz (3000 / 3). Plug in other numbers and you get whatever effective Hz you like while staying "tickless".
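
As a minimal sketch, assuming your kernel actually carries Ingo's patch (this sysctl does not exist in stock kernels):

Code:
# only present on kernels with the timeout-granularity patch applied
cat /proc/sys/kernel/timeout_granularity          # defaults to 1: low-res timers expire every jiffy
echo 3 > /proc/sys/kernel/timeout_granularity     # with CONFIG_HZ=3000: 3000 / 3 = 1000 Hz effective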


Another thing I've been pondering is why everybody wants a 1000 Hz kernel. 1000 Hz can't be a requirement for 1000 FPS.

Here's an article I found: http://www.smk.co.za/2007/07/21/a-tickless-kernel/

Quote:The higher the Hz, the more often the timer interrupts and the scheduler gets to run. The more often the scheduler runs, the more frequently applications multi task between each other. So, with a setting of 1000 Hz, applications would receive less time to run per second but more applications would be running in that second - and you get a more responsive desktop.

Quote:You don’t need to interrupt an application if it’s the only one running in the system. That will just: 1) be a waste of time because you’ll come straight back to it. 2) slow down the application and system. 3) waste of cpu power. You can let it run its course and if there’s nothing to do, you can keep the CPU in a completely idle state and only wake up when needed.

One of my theories has been that if the kernel wakes up 1000 times per second to check interrupts, it knows more precisely when network data arrived, and thus more precisely when game events happened. My opposing theory is that the network interface records the exact time of the event anyway, so polling that often is useless and just wastes CPU power. Which one is it?

Related to the article (and especially to the quotes): it's stupid to interrupt a CPU-intensive program thousands of times per second just to notice that it wants to keep on running. My reading is that Nitoxys (author of the original 1000 FPS tutorial on the Steampowered.com forums) figured it's worth making sure that whenever any other program on the server gets CPU time, its timeslice is very short. Even if the game server is interrupted 1000 times per second (1 ms intervals) for nothing, that's better than handing a 10 ms timeslice to, say, a web server every once in a while.

I've got a 100 Hz kernel on my server. I run the game server, a web server, heavy MySQL load (over 400 qps) and several less CPU-intensive programs on the same machine. My game server's FPS is around 500 (fps_max 600). If I set fps_max 0, the FPS occasionally jumps to ~980. I'm quite sure the kernel Hz isn't what limits the FPS, so why is it important to have it high?
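
(For completeness, fps_max can also be forced on the command line; the -game value is just an example:)

Code:
./srcds_run -game cstrike +fps_max 600   # capped: FPS sits around 500 on my box
./srcds_run -game cstrike +fps_max 0     # uncapped: FPS occasionally peaks near ~980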
#7
Thanks, that's interesting reading and gives me a few thoughts :)

I'm just going to build a ton of kernels now with different options and record the results, to see if I can narrow it down to one variable, or whether some combination of two gives decent rates.

Also, something I forgot about: does nice'ing srcds processes (not renicing; I know that causes issues) make any noticeable difference?
#8
disco Wrote:Also, something I forgot about: does nice'ing srcds processes (not renicing; I know that causes issues) make any noticeable difference?

Most likely not. Think about what you're running: it's always just those few game servers taking 100% of the CPU time, and you can't nice them to go faster.

I use nice to lower the priority of MySQL, Apache and some other programs that are running on the server.
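
Something like this, with the paths being whatever your distro uses:

Code:
nice -n 10 /usr/bin/mysqld_safe &   # start MySQL at a lower priority
renice 10 -p $(pidof httpd)         # or demote an already-running Apache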

Good that you're trying out different setups. I think you can rule out the real-time kernel, which is overkill. Set the preemption option to the highest non-RT choice ("Preemptible Kernel (Low-Latency Desktop)") and go with that.
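
In menuconfig terms (2.6-era option names; the full -rt patchset adds extra choices beyond these):

Code:
# Processor type and features ---> Preemption Model
# CONFIG_PREEMPT_NONE is not set
# CONFIG_PREEMPT_VOLUNTARY is not set
CONFIG_PREEMPT=y    # "Preemptible Kernel (Low-Latency Desktop)"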

It would be great if someone with solid knowledge of the kernel tick / tickless / Hz / timer / jiffy / preemption / real-time / hires-timers / HPET stuff would post something. I'm doubtful that it'll happen, though :P
#9
I have the same problem, anyone care to help out? (willing to pay)

