FPS Explanation
#1
Here is something I wrote on another forum to help people understand how FPS works server-side. I thought it would be helpful to post it here as well...

Quote:It's been a while since I last ran an strace, but here are the steps the engine takes each frame (1000 times per second on a 1000FPS server), as best I can remember (a rough sketch follows the list):
1) Query the kernel for the time
2) Calculate movements, shots, etc. relative to the time captured at the beginning of the frame.
3) Send and receive packets to update clients
4) Query the kernel for the time again
5) Sleep for 1000usec (microseconds). <-- This gives the CPU a break; otherwise we would max out the CPU at 100%
6) Go to step 1
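
Here is a rough sketch in C of what that loop might look like. The engine is closed source, so the function names and the simulated work are my assumptions, not actual engine code:
Code:
#include <stddef.h>
#include <sys/time.h>
#include <unistd.h>

/* hypothetical stand-ins for the engine's real per-frame work */
static void simulate(double dt) { (void)dt; /* movements, shots, ... */ }
static void network_update(void) { /* send/receive client packets */ }

/* steps 1 and 4: query the kernel for the time */
static double now_seconds(void)
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return tv.tv_sec + tv.tv_usec / 1e6;
}

int main(void)
{
    double last = now_seconds();
    for (;;) {                    /* step 6: go to step 1 */
        double t = now_seconds(); /* step 1 */
        simulate(t - last);       /* step 2: calculate relative to frame start */
        network_update();         /* step 3 */
        last = now_seconds();     /* step 4 */
        usleep(1000);             /* step 5: 1000usec sleep caps us near 1000FPS */
    }
}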

Now, because the sleep, the time queries, and the packet sends and receives don't always take the same amount of time (each call to sleep, gettimeofday, send, and receive incurs slightly different latency), the frames per second fluctuate. Let me give an example.

Perfect scenario (no latency whatsoever):
Each frame will last exactly 1 millisecond (1000 microseconds) because of the call to sleep for 1000 microseconds, and it will produce exactly 1000FPS because it does this 1000 times a second.

Reality:
Most of the time, on a good machine, there will be about 6 milliseconds of total latency each second. This produces around 994FPS.
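
To see where 994 comes from: ~6ms of overhead per second is ~6usec per frame on top of the 1000usec sleep. A quick back-of-the-envelope check (the 6usec figure is just an assumption for illustration):
Code:
#include <stdio.h>

int main(void)
{
    double sleep_us    = 1000.0; /* the fixed sleep each frame */
    double overhead_us = 6.0;    /* assumed cost of time queries, packet I/O,
                                    and sleep jitter, per frame */
    printf("%.0f FPS\n", 1e6 / (sleep_us + overhead_us)); /* prints 994 */
}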

Now, to the point of this post. You are indeed correct when you say that clients only get updates 100 times per second (ticks), but this doesn't really have any correlation to server-side frames per second. Server-side frames per second reflect the server's ability to calculate and send out data. The more frequently the server does this, the more accurately it can estimate or predict, because it is doing calculations over a shorter period of time. Think of it as sampling a runner in a race every minute (and then calculating his finish time) as opposed to sampling him every second (and then calculating his finish time). The more frequently you sample worldly things, the more precise you can be when making predictions. This also means that you can predict better whether a client needs an update or not (see the numeric sketch after the list below).

To point out the obvious:
100FPS samples a tenth as much as 1000FPS
250FPS samples a quarter as much as 1000FPS
333FPS samples a third as much as 1000FPS
500FPS samples half as much as 1000FPS
etc...
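
To put some (made-up) numbers on the runner analogy: assuming the runner moves at 10 m/s, the position uncertainty between samples grows with the sampling interval:
Code:
#include <stdio.h>

int main(void)
{
    const double speed = 10.0; /* runner's speed in m/s (assumed) */
    const double fps[] = { 100.0, 250.0, 333.0, 500.0, 1000.0 };
    for (int i = 0; i < 5; i++) {
        double interval = 1.0 / fps[i]; /* seconds between samples */
        printf("%4.0f samples/s -> up to %5.1f cm uncertainty between samples\n",
               fps[i], speed * interval * 100.0);
    }
}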

I hope I haven't confused anyone... and most of all, I hope this helps you all understand.

Kind regards,
DiSTANT

One thing to note... the difference between FPS and ticks is this: FPS is how frequently the server calculates things and checks to see if clients need updates; ticks are how often the server actually updates the clients. So, on a 1000FPS, 100-tick server, you can expect the server to calculate 10 frames for each tick.
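
In numbers (using the assumed 1000FPS / 100-tick figures above), the server hits a tick on every 10th frame:
Code:
#include <stdio.h>

int main(void)
{
    int fps = 1000, tickrate = 100;
    int frames_per_tick = fps / tickrate; /* 10 frames per tick */
    for (int frame = 1; frame <= 20; frame++)
        printf("frame %2d%s\n", frame,
               frame % frames_per_tick == 0 ? " <- tick: update clients" : "");
}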
#2
Not quite accurate, at least not for srcds. srcds calculates all movements etc. only on each tick. What it does on each frame is look for incoming packets from clients, receive them, and (that is the important part) record the time at which they were received. This is the actual reason why 1000FPS is better than 100FPS: at 100FPS, packets wait for an unknown time between 0 and 10ms until they are received by the engine. That additional time cannot be compensated for (only on average), so it introduces an additional (unnecessary) error into lag compensation.
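
To put numbers on that: assuming a packet arrives at a random moment and waits until the next frame to be timestamped, the error looks like this (a sketch of the reasoning, not engine code):
Code:
#include <stdio.h>

int main(void)
{
    const double fps[] = { 100.0, 333.0, 1000.0 };
    for (int i = 0; i < 3; i++) {
        double frame_ms = 1000.0 / fps[i]; /* time between frames */
        /* a packet waits somewhere between 0 and one full frame interval */
        printf("%4.0f FPS: worst-case wait %.1f ms, average %.2f ms\n",
               fps[i], frame_ms, frame_ms / 2.0);
    }
}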

The rest isn't quite accurate either: the "perfect scenario" forgets that the engine has to do actual calculations each frame (and especially each tick). So even if the sleep always lasts exactly 1000usecs, you will never get exactly 1000fps. The difference is the time the engine needs for its calculations.

And the gained precision has nothing to do with a shorter time to cover, because that time has nothing to do with FPS: it is as long as you set sv_maxunlag (by default 1 second!).

A Valve developer has written a very good article about lag compensation. It's about the HL1 engine, but the principles should be the same:
http://developer.valvesoftware.com/wiki/Latency_Compensating_Methods_in_Client/Server_In-game_Protocol_Design_and_Optimization
#3
(02-18-2010, 06:48 PM)BehaartesEtwas Wrote:  Not quite accurate, at least not for srcds. srcds calculates all movements etc. only on each tick. What it does on each frame is look for incoming packets from clients, receive them, and (that is the important part) record the time at which they were received. This is the actual reason why 1000FPS is better than 100FPS: at 100FPS, packets wait for an unknown time between 0 and 10ms until they are received by the engine. That additional time cannot be compensated for (only on average), so it introduces an additional (unnecessary) error into lag compensation.

The rest isn't quite accurate either: the "perfect scenario" forgets that the engine has to do actual calculations each frame (and especially each tick). So even if the sleep always lasts exactly 1000usecs, you will never get exactly 1000fps. The difference is the time the engine needs for its calculations.

And the gained precision has nothing to do with a shorter time to cover, because that time has nothing to do with FPS: it is as long as you set sv_maxunlag (by default 1 second!).

A Valve developer has written a very good article about lag compensation. It's about the HL1 engine, but the principles should be the same:
http://developer.valvesoftware.com/wiki/Latency_Compensating_Methods_in_Client/Server_In-game_Protocol_Design_and_Optimization

But are client ticks synchronized with each other? If not, then I am exactly right in saying that each frame may process ticks for certain clients. Also, I said a perfect scenario, meaning one that is not achievable, much like a physics problem where they remove the friction.
#4
I am not 100% sure, but what would be the difference between the tickrate (specified on the command line) and sv_maxupdaterate?

Quote:Also, I said a perfect scenario, meaning one that is not achievable, much like a physics problem where they remove the friction.
I understood it that way. But even in a perfect, unachievable scenario, the engine behaves in such a way that it will not achieve 1000 fps.
#5
(02-19-2010, 07:15 PM)BehaartesEtwas Wrote:  I am not 100% sure, but what would be the difference between the tickrate (specified on the command line) and sv_maxupdaterate?

Quote:Also, I said a perfect scenario, meaning one that is not achievable, much like a physics problem where they remove the friction.
I understood it that way. But even in a perfect, unachievable scenario, the engine behaves in such a way that it will not achieve 1000 fps.

I believe on a 100-tick server you can still set sv_maxupdaterate 66 and limit the update rate to 66. Regardless, I still think the server has to check on every frame whether any clients need updates (I could be wrong, but this is how I have always thought of it). If I look at strace on an active server, I see incoming and outgoing packets on nearly every frame, which leads me to believe this.

I was only using the "Perfect Scenario" as a way to explain to everyone why we don't get true 1000FPS. I do agree with you that there is no way to actually achieve these results due to processing time :-)

Thanks for your comments hrg, I think these just further help people understand my post above. I don't want it to be confusing at all, for anyone!
#6
Quote:If I look at strace on an active server, I see incoming and outgoing packets on nearly every frame, which leads me to believe this.
OK, then it will be as you said. It's a question of design, so unless Valve tells us, we need to rely on such observations :-)
But I ask myself now what the -tickrate command line option is about. Is there any difference between a server running at tickrate 100 with sv_max{update|cmd}rate 66 and one running at tickrate 66?

Quote:I was only using the "Perfect Scenario" as a way to explain to everyone why we don't get true 1000FPS.
I know. That's why I answered that way. The real reason for not reaching 1000 fps is that the engine has to do some work. That effect dominates any latencies by far.

Quote:Thanks for your comments hrg, I think these just further help people understand my post above. I don't want it to be confusing at all, for anyone!
Thank you for your post. So now even I learned something new :-) This kind of discussion is what might someday lead to some breakthrough...
#7
(02-20-2010, 07:08 PM)BehaartesEtwas Wrote:  
Quote:If I look at strace on an active server, I see incoming and outgoing packets on nearly every frame, which leads me to believe this.
OK, then it will be as you said. It's a question of design, so unless Valve tells us, we need to rely on such observations :-)
But I ask myself now what the -tickrate command line option is about. Is there any difference between a server running at tickrate 100 with sv_max{update|cmd}rate 66 and one running at tickrate 66?

Quote:I was only using the "Perfect Scenario" as a way to explain to everyone why we don't get true 1000FPS.
I know. That's why I answered that way. The real reason for not reaching 1000 fps is that the engine has to do some work. That effect dominates any latencies by far.

Quote:Thanks for your comments hrg, I think these just further help people understand my post above. I don't want it to be confusing at all, for anyone!
Thank you for your post. So now even I learned something new :-) This kind of discussion is what might someday lead to some breakthrough...

You guys can post all the information you want; the bottom line is that the engine is closed source, so nobody knows what happens between the rising edge of a clock cycle and the falling edge of it.

One day I hope this FPS shit goes away, so that all this time and these wasted resources go towards something else, like stopping cheats from working.
#8
peace brother :-D
#9
He is somewhat right... really, without some periodic ticks/frames (whatever you want to call them) it won't work, but the network receiving part could be better without frames...
#10
I like this explanation, maybe this thread should be a sticky :P
#11
BehaartesEtwas vs. Monk in a fight to the death ^^
#12
(02-25-2010, 03:22 PM)Peter_Pan123 Wrote:  BehaartesEtwas vs. Monk in a fight to the death ^^

argh, this is not a fight ^^
#13
Definitely not a fight :-) This post is really only meant to explain server-side FPS vs tickrate. There seem to be a lot of misconceptions out there.
#14
Thanks for the explanation. I was always wondering what exactly ticks and FPS had to do with each other.
#15
TBH, I can't feel the difference in-game whether the server is running at 100 FPS or 1000.

Personally, I think it's a waste of time configuring for 1000.