Tickrate, FPS, DST?
#1
So I've been wondering: what do tickrate and FPS mean for my game servers? Currently I have them running with -tickrate 100 and I haven't set anything for FPS. I know tickrate is... well, probably important, but FPS? Servers don't have monitors. Anyway, can anyone tell me what FPS actually "does" for a server? Aren't these FPS values in the thousands a little excessive? I mean, will people really notice the difference between 2.00000001 and 2.00000002 apples? I dunno.....
#2
There are actually several posts that debate the issues around server fps and plenty of explanations in these forums for those who don't know what fps does for the server. Take a moment to browse the forums and find out.
*Windows lack of output*
You: Hey, I want to run this program!
Windows: Ok.. It crashed... Now what? Give up?
You:...wtf...
*linux output helpful?*
You: ./My_program
Linux:...Failed!...oo kitties!
You:...wtf...
#3
Someone out there must know. Personally I can't tell the difference between standard and high-FPS servers; tickrate is noticeable, but not FPS.
#4
Once your server is getting over 100 fps, you're not going to see/feel/sense any changes in-game, even if it goes all the way up to 1000.

The only reason they exist, in my opinion, is the HL1 games, where it actually mattered for registration, so GSPs trick people into buying 'boosted' 500/1000 fps Source servers for extra money. What a rip.
#5
BrutalGoerge Wrote:Once your server is getting over 100 fps, you're not going to see/feel/sense any changes in-game, even if it goes all the way up to 1000.

The only reason they exist, in my opinion, is the HL1 games, where it actually mattered for registration, so GSPs trick people into buying 'boosted' 500/1000 fps Source servers for extra money. What a rip.

Not entirely true. srcds (like hlds) has to calculate some interpolations to correct for the latencies ("ping") in the network; those are vital for "registration". Those interpolations only work if the ping is constant, at least on a time scale of a few seconds. If your server jumps between, e.g., 100 and 1000 fps, the ping varies by 9 ms, and that makes the interpolation incorrect. So *constant* fps really is important.

*High* fps is a little less important, as it "only" reduces the pings. But of course, reducing the ping by 9 ms for every player on the server has a positive effect...

But what really matters are the frame times, not the fps values (which are their inverse). So the difference between 100 and 500 fps is 8 ms, but between 500 and 1000 fps it is only 1 ms! Whether you run your servers at 500 or 1000 fps will not really matter; even servers jumping between 500 and 1000 fps might be quite nice. But servers jumping between 100 and 500 fps will probably be terrible...
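
To make the arithmetic concrete, here is a tiny C snippet of my own (nothing from srcds itself, just the inverse relation between fps and frame time):

#include <stdio.h>

int main(void)
{
    const double rates[] = { 100.0, 500.0, 1000.0 };
    for (int i = 0; i < 3; i++)
        printf("%6.0f fps -> %5.2f ms per frame\n", rates[i], 1000.0 / rates[i]);
    /* prints 10.00 ms, 2.00 ms and 1.00 ms: going from 100 to 500 fps
       saves 8 ms per frame, while 500 to 1000 fps only saves 1 ms more */
    return 0;
}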
http://www.fpsmeter.org
http://wiki.fragaholics.de/index.php/EN:Linux_Optimization_Guide (Linux Kernel HOWTO!)
Do not ask technical questions via PM!
#6
And I thought it was the ticks that mattered...

Quote:The server simulates the game in discrete time steps called ticks.
During each tick, the server processes incoming user commands, runs a physical simulation step, checks the game rules, and updates all object states. After simulating a tick, the server decides if any client needs a world update and takes a snapshot of the current world state if necessary. A higher tickrate increases the simulation precision, but also requires more CPU power and available bandwidth on both server and client.

http://developer.valvesoftware.com/wiki/Source_Multiplayer_Networking
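
In rough C, I picture the tick the wiki describes like this (purely my own toy sketch of that paragraph; every function here is a made-up stub, not engine code):

#include <stdbool.h>
#include <stdio.h>

static void process_user_commands(void) { puts("process incoming usercmds"); }
static void run_physics_step(void)      { puts("run one physics step"); }
static void check_game_rules(void)      { puts("check game rules"); }
static void update_object_states(void)  { puts("update all object states"); }
static bool client_needs_update(void)   { return true; }
static void take_world_snapshot(void)   { puts("take snapshot for clients"); }

/* one tick, following the wiki paragraph step by step */
static void run_tick(void)
{
    process_user_commands();
    run_physics_step();
    check_game_rules();
    update_object_states();
    if (client_needs_update())
        take_world_snapshot();
}

int main(void) { run_tick(); return 0; }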

BehaartesEtwas: so what do all the fps above 100 actually do???

Personally I'd rather have a tick 200 server with 200 fps than tick 100 with 1000 fps...
Imagine 200 updates/sec... (cl_updaterate 200/cl_cmdrate 200 etc) =)
"the box said 'requires windows xp or better'. so i installed linux"
Linux Ubuntu 9.04 Server 2.6.30-vanilla #1 SMP PREEMPT x86_64
#7
What you say is correct. But it is not only when and how often the server calculates the world that matters; it also matters when (and how often) the information is sent to, and updates are received from, the clients. This is only possible once every frame. The server is single-threaded, so while it sleeps between two frames, it cannot receive or send any data. If you have 100 fps, the server sleeps 10 ms between frames. This means the clients basically have to wait up to 10 ms until they receive an answer. So there are two problems with this: 10 ms is a long time if you want low pings, and second, it is *up to* 10 ms, which means your ping varies by 10 ms from tick to tick... this is bad!
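
Roughly what I have in mind, as a toy single-threaded loop (all names here are invented; this is only my mental model of the engine, not its real code):

#include <stdio.h>
#include <unistd.h>

static void receive_from_clients(void) { /* read the sockets here  */ }
static void run_tick_if_due(void)      { /* world simulation here  */ }
static void send_to_clients(void)      { /* write the sockets here */ }

int main(void)
{
    const int server_fps = 100;             /* try 1000 to shrink the gap     */
    for (int frame = 0; frame < 5; frame++) {
        receive_from_clients();             /* only possible once per frame   */
        run_tick_if_due();
        send_to_clients();                  /* only possible once per frame   */
        usleep(1000000 / server_fps);       /* 10 ms of "deafness" at 100 fps */
    }
    printf("at %d fps a reply can be delayed by up to %d ms\n",
           server_fps, 1000 / server_fps);
    return 0;
}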

Quote:Personally I'd rather have a tick 200 server with 200 fps than tick 100 with 1000 fps...
Imagine 200 updates/sec... (cl_updaterate 200/cl_cmdrate 200 etc) =)
In theory you are right. But in practice you need to take into account that the real data comes from the clients in the first place. Their actual cmdrate ("out" in the net_graph) is limited by their fps, and there usually won't be many clients running at fps much higher than 100, so the additional world updates from the server would only be based on the same information as the previous ones. So I suppose the improvement from 100 to 200 server fps will be larger than from tick 100 to 200 (while keeping fps at, say, 200).

Of course, I have not seen the source code of the net engine, so everything I am saying is based on observations from the "outside". But I "probed" some aspects of the engine by playing around with some system calls, so it is not entirely speculation ;-)
http://www.fpsmeter.org
http://wiki.fragaholics.de/index.php/EN:Linux_Optimization_Guide (Linux Kernel HOWTO!)
Do not ask technical questions via PM!
#8
Ok, a really good explanation there... but I'm not quite convinced.
As I see it, during a tick the server receives > calculates > transmits the data, rather than splitting these things across several frames...

And of course, I have not seen the source code of the net engine either ;-)
"the box said 'requires windows xp or better'. so i installed linux"
Linux Ubuntu 9.04 Server 2.6.30-vanilla #1 SMP PREEMPT x86_64
#9
janne Wrote:As I see it, during a tick the server receives > calculates > transmits the data, rather than splitting these things across several frames...

No, then the tickrate would be pointless. I think (a guess this time ;-)) the (majority of the) calculations are done once per tick (e.g. every 10 ms @ 100 tick -> every 10 frames @ 1000 fps), i.e. the 'snapshot' of the world is calculated only then. But all network traffic is handled every frame.
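
Something like this is how I picture the frame/tick split (again only a guess of mine, with invented names): at 1000 fps and tick 100 the simulation would run on every 10th frame, while the sockets get serviced on every frame.

#include <stdio.h>
#include <unistd.h>

static void service_network(void) { /* recv usercmds, send snapshots */ }
static void simulate_tick(void)   { puts("world snapshot calculated"); }

int main(void)
{
    const int fps = 1000, tickrate = 100;
    const int frames_per_tick = fps / tickrate;   /* 10 frames per tick here  */

    for (int frame = 0; frame < 30; frame++) {
        service_network();                        /* every frame (every 1 ms) */
        if (frame % frames_per_tick == 0)
            simulate_tick();                      /* only every 10th frame    */
        usleep(1000000 / fps);
    }
    return 0;
}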
http://www.fpsmeter.org
http://wiki.fragaholics.de/index.php/EN:Linux_Optimization_Guide (Linux Kernel HOWTO!)
Do not ask technical questions via PM!
#10
Even though network traffic can be received between ticks, the received data still has to wait until a tick has occurred before the server knows what to return, right?!
"the box said 'requires windows xp or better'. so i installed linux"
Linux Ubuntu 9.04 Server 2.6.30-vanilla #1 SMP PREEMPT x86_64
#11
Well, that answered my question. It wasn't as straightforward as I hoped, but I deciphered it all... and I'm pleased with the results. I thought I had to recompile my kernel, but apparently I don't have to, so I'm quite happy now =D
#12
janne Wrote:Even though network traffic can be received between ticks, the received data still has to wait until a tick has occurred before the server knows what to return, right?!
Right.

The traffic is received by the kernel, I guess, and stored in a buffer until srcds processes it.
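
A minimal stand-alone UDP example of what I mean (my own code, nothing to do with srcds): the kernel keeps arriving datagrams in the socket's receive buffer until the process gets around to reading them.

#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <unistd.h>

int main(void)
{
    int s = socket(AF_INET, SOCK_DGRAM, 0);

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof addr);
    addr.sin_family = AF_INET;
    addr.sin_port = htons(27015);                 /* the usual srcds port */
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    bind(s, (struct sockaddr *)&addr, sizeof addr);

    sleep(1);   /* "between two frames": datagrams arriving now simply sit
                   in the kernel's receive buffer and wait                 */

    char buf[1500];
    ssize_t n;
    while ((n = recv(s, buf, sizeof buf, MSG_DONTWAIT)) > 0)
        printf("drained a %zd byte datagram that had been buffered\n", n);

    close(s);
    return 0;
}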
http://www.fpsmeter.org
http://wiki.fragaholics.de/index.php/EN:Linux_Optimization_Guide (Linux Kernel HOWTO!)
Do not ask technical questions via PM!
#13
I believe you are mistaken, BehaartesEtwas.

A Valve programmer (Mike Dussault) is quoted as stating that a Source server will sleep on every frame above the tickrate. Therefore, a 500 fps server should be capable of maintaining the highest level of performance attainable through server-side FPS modifications alone.
#14
Yet you're not going to see one iota of difference in-game when you have 100 server fps vs 1000.

I've played with this a lot on 32-player servers, changing the fps from around 100 to near 1000.
#15
I've seen those differences. Not with 32 players on a public server (there, other problems dominate; 32-slot servers never run really smoothly), but in wars. In the past my server had fps problems on one day and ran with stable 500 fps (IIRC I was running at 500 fps) on the other. When playing on the server I could tell whether the fps were stable or not *without* looking at the fps! In those days I monitored the fps with a predecessor of the fps-meter all day, so I could check after the war whether the fps had been stable. And btw, I am only talking about variations between, e.g., 500 and 200 fps or so, not drops below 100 fps (the tickrate).

@Forsaken: Can you give a link to that quote? Maybe it is something that can be interpreted in different ways...
http://www.fpsmeter.org
http://wiki.fragaholics.de/index.php/EN:Linux_Optimization_Guide (Linux Kernel HOWTO!)
Do not ask technical questions via PM!

