Tickrate, FPS, DST?
#16
Your first-hand experience is hardly empirical. The following are the email excerpts.
Quote: Originally Posted by Stones Email
Will having a Server FPS higher than the tick rate actually improve anything?
Quote: Originally Posted by Dussaults Response
Nope.. The game will sleep in between ticks if it's running faster
than the tick rate.
Reply
#17
Your quotes are not in context... we would need the whole conversation...

If it were correct, the ping shown in net_graph (which is the round-trip time of the game packets, right?) would be 10 ms higher (on tick 100) than the normal round-trip time measured with ping, right? I think this is not the case...

In fact, I am thinking of running some empirical tests (my old tests were in fact empirical, but I cannot prove this, as I have not documented them properly). But that would require a team that is willing to play on potentially sub-optimal servers for some time. I will talk to my clan's team ;-)
http://www.fpsmeter.org
http://wiki.fragaholics.de/index.php/EN:Linux_Optimization_Guide (Linux Kernel HOWTO!)
Do not ask technical questions via PM!
Reply
#18
That is, as far as I know, the entire conversation.

Quote:If it were correct, the ping shown in net_graph (which is the round-trip time of the game packets, right?) would be 10 ms higher (on tick 100) than the normal round-trip time measured with ping, right?
I believe this is almost correct. The RTT would only be about 5 ms higher, since the probability of your ping arriving at any given point within a tick interval is uniform. Pinging a particular 100-tick game server of mine from a shell gives:
min: 59 ms
ave: 59 ms
max: 60 ms

Inside of CSS, net_graph shows
min: 64 ms
ave: 65 ms
max: 66 ms

By typing ping in the in-game console I get
ave: 80 ms

The results of the in-game ping command are somewhat odd, but I don't know how it is implemented.
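As a rough illustration of that uniform-arrival argument (only a sketch I put together, not anything from srcds; the 10 ms tick interval and the 59 ms base RTT are simply the numbers from above):

Code:
import random

TICK_INTERVAL_MS = 10.0   # 100 ticks per second -> 10 ms per tick
BASE_RTT_MS = 59.0        # the shell ping measured above (assumption taken from this post)

samples = []
for _ in range(100000):
    offset = random.uniform(0.0, TICK_INTERVAL_MS)  # where within a tick the packet arrives
    wait = TICK_INTERVAL_MS - offset                # time until the next tick handles it
    samples.append(BASE_RTT_MS + wait)

print("expected in-game RTT: %.1f ms" % (sum(samples) / len(samples)))
# prints roughly 64 ms: the ICMP RTT plus about half a tick (~5 ms),
# in the same ballpark as the net_graph values above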
Reply
#19
you shouldn't compare net_graph pings with others...

btw: read the last section of this:
http://supportwiki.steampowered.com/wiki/Optimizing_a_Dedicated_Server
http://www.fpsmeter.org
http://wiki.fragaholics.de/index.php/EN:Linux_Optimization_Guide (Linux Kernel HOWTO!)
Do not ask technical questions via PM!
Reply
#20
BehaartesEtwas Wrote:you shouldn't compare net_graph pings with others...

btw: read the last section of this:
http://supportwiki.steampowered.com/wiki/Optimizing_a_Dedicated_Server
Read the "guide", but as it's not written by any Source engine developers I wouldn't take it as hard fact.
"the box said 'requires windows xp or better'. so i installed linux"
Linux Ubuntu 9.04 Server 2.6.30-vanilla #1 SMP PREEMPT x86_64
Reply
#21
BehaartesEtwas Wrote:you shouldn't compare net_graph pings with others...

btw: read the last section of this:
http://supportwiki.steampowered.com/wiki/Optimizing_a_Dedicated_Server

Why shouldn't I compare net_graph pings with others? It's exactly what you were suggesting in your previous post, so I compared them and the results make sense. I don't see the problem.


I believe that "Guide" is going off of some assumptions made in the community previously. I'm sticking to the word of a Valve developer until I have reason to believe otherwise.
Reply
#22
Forsaken Wrote:Why shouldn't I compare net_graph pings with others? It's exactly what you were suggesting in your previous post, so I compared them and the results make sense. I don't see the problem.
Nope, I was suggesting to compare e.g. net_graph pings with different server fps... The net_graph pings don't show any difference between 100 and 1000 fps, but in the rcon status I see a 4-5 ms higher ping when I run fps_max 100, and it falls again after fps_max 0. Reproducible. Exactly what I expected :-)

Quote:I believe that "Guide" is going off of some assumptions made in the community previously. I'm sticking to the word of a Valve developer until I have reason to believe otherwise.
If you look in the history, the first 4 edits of the page were done by "Daniel" (a Valve employee according to his profile) and an IP assigned to Valve (see whois). In those edits most of the content of that page was created, including the last section.

Enough proof? At least for me.

But I will always agree if you say that there is not much difference between e.g. 500 and 1000 fps. We are talking about differences of a few milliseconds (if not less); they can only be noticed due to "amplification" effects: the "interpolation" - which in mathematical terms must actually be an extrapolation - can only be precise with a constant ping. With 100 fps the ping will vary by 10 ms from tick to tick (so it rises by 5 ms on average). This will probably lead to effects like the ones I have seen when my server wasn't running with stable fps.
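Here are the same back-of-the-envelope numbers spelled out (only an illustration, assuming the added delay per packet is uniform between zero and one frame interval):

Code:
for fps in (100, 333, 500, 1000):
    frame_ms = 1000.0 / fps
    print("%4d fps: frame %.1f ms, mean added delay %.2f ms, jitter up to %.1f ms"
          % (fps, frame_ms, frame_ms / 2.0, frame_ms))
# 100 fps  -> ~5 ms mean added delay, up to 10 ms of jitter
# 1000 fps -> ~0.5 ms mean, ~1 ms of jitter
# so the step from 500 to 1000 fps only buys fractions of a millisecond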
http://www.fpsmeter.org
http://wiki.fragaholics.de/index.php/EN:Linux_Optimization_Guide (Linux Kernel HOWTO!)
Do not ask technical questions via PM!
Reply
#23
Quote:Nope, I was suggesting to compare e.g. net_graph pings with different server fps...
You suggested nothing of the sort. Your exact quote is:

Quote:If it were correct, the ping shown in net_graph (which is the round-trip time of the game packets, right?) would be 10 ms higher (on tick 100) than the normal round-trip time measured with ping, right? I think this is not the case...
Nowhere did you mention anything about comparing pings with different server fps. This is a comparison of an ICMP ping to the ping reported by net_graph. On a 100-tick server it makes sense that net_graph reports a ping 5 ms (not 10) higher than an ICMP ping. It is also logical that server-side fps changes between 100 and 1000 should not affect the net_graph ping, since every frame above the tickrate is spent sleeping and the server therefore effectively polls at 100 Hz.
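To spell out what "spent sleeping" would mean (this is only my reading of the developer's email, sketched as pseudo-code, not actual srcds source):

Code:
import time

TICKRATE = 100
TICK_INTERVAL = 1.0 / TICKRATE

def run_sleeping_server(process_tick, ticks=1000):
    # If the server really slept from one tick to the next, any fps setting
    # above the tickrate would be irrelevant: nothing at all happens between ticks.
    next_tick = time.monotonic()
    for _ in range(ticks):
        process_tick()                        # receive packets, simulate, send updates
        next_tick += TICK_INTERVAL
        delay = next_tick - time.monotonic()
        if delay > 0:
            time.sleep(delay)                 # idle until the next tick: packets arriving
                                              # now sit in the socket buffer, polled at 100 Hz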

Quote:If you look in the history, the first 4 edits of the page were done by "Daniel" (a Valve employee according to his profile) and an IP assigned to Valve (see whois). In those edits most of the content of that page was created, including the last section.
The last section does not "sound" like it was written by a developer. For now I'm going to trust the email from the Valve developer and see if that matches the behavior exhibited by srcds.
Reply
#24
Forsaken Wrote:You suggested nothing of the sort. Your exact quote is:
It doesn't matter what you think I suggested. What I meant was: the RTT of the game packets should be higher when running with lower fps. You cannot measure the RTT of game packets with the ping command. And apparently net_graph doesn't show that RTT either.

If your theory were valid, we should never observe any change when setting fps_max 100 on a server that normally runs at 1000 fps. I saw a difference in the ping shown by the status command. The difference matches the expected value of 5 ms (you were right on this point: it's not 10 ms, it's the average of all values between 0 and 10 ms ;-)). That means your theory is proven wrong.

It doesn't necessarily mean my theory is correct. But I concluded it from a number of observations, and it matches what Valve wrote (I don't care whether it sounds like it was written by a developer; it might well sound like a developer trying to be understandable by non-experts). Btw, I hadn't read it before coming to my conclusions.


Quote:The last section does not "sound" like it was written by a developer. For now I'm going to trust the email from the Valve developer and see if that matches the behavior exhibited by srcds.
Unless you post the whole conversation here or give us a link to read it, I cannot comment on it.

And btw: I know how software development works in large projects. It is possible that not every developer knows how this specific part of the networking engine works. So a mail from an arbitrary Valve developer (even supposing he actually worked on the srcds server - that's only a small fraction of the developers, I guess) is of the same value as the statement in the wiki.

Together with the ping observations, we can conclude that the developer's email is wrong and the statement in the wiki is right. Unless you have a third theory.
http://www.fpsmeter.org
http://wiki.fragaholics.de/index.php/EN:Linux_Optimization_Guide (Linux Kernel HOWTO!)
Do not ask technical questions via PM!
Reply
#25
Nevertheless, no one I've blind tested 100 vs 1000 fps on can tell the difference. Not even on ping.
So that "The game sleeps in between ticks" sounds very logical to me.
It also explains why there isn't any noticeable CPU increase between low/high fps, right?
"the box said 'requires windows xp or better'. so i installed linux"
Linux Ubuntu 9.04 Server 2.6.30-vanilla #1 SMP PREEMPT x86_64
Reply
#26
janne Wrote:Nevertheless, no one I've blind tested 100 vs 1000 fps on can tell the difference.
As I wrote before, I did some time ago. It's not documented well enough to make a scientific publication ;-) but I know what I have seen. It was back in the days when I tried running an hlds server in parallel on the same root. Every time the hlds server was used, srcds had a lot of variation in its fps, but it always stayed above 100 fps. If hlds was unused, the fps were as steady as I was used to. While playing wars on the srcds server, I could tell you with a precision of a few minutes when a war on the hlds server had started, due to the much worse hit recognition. In those days I had the FPS-meter permanently observing my servers, so I could see afterwards what had happened to the fps.

I'm sorry but I have unfortunately lost the FPS-meter data from those days. It was only a simple shell script, not as fancy as nowadays ;-)

Quote:Not even on ping.
You don't need a blind test if you have a 100% correlation. I was on an empty server; the ping did not vary (+- 0.5 ms at most). With fps_max 0 I had a ping of either 30 or 31 ms in the status, with (rcon) fps_max 100 I had a ping of 34 or 35 ms. It switched the moment I entered the fps_max command. I did not see a single value in between. And I ran a lot of status commands ;-)

You're free to do the test yourself, if you don't believe me.
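The whole test is just this (run it via rcon or the server console on an otherwise empty server; fps_max and status are standard srcds commands):

Code:
fps_max 0      // uncap the server fps
status         // note the ping column for your client
fps_max 100    // cap the server to ~100 fps
status         // the ping should now read roughly 4-5 ms higher
fps_max 0      // uncap again - the ping drops back immediately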

Quote:So that "The game sleeps in between ticks" sounds very logical to me.
It doesn't explain the change in ping, so it cannot be true.

Quote:It also explains why there isn't any noticeable CPU increase between low/high fps, right?
Wrong! srcds only calculates the world once per tick. What is done once per frame is only receiving and sending game packets. As each client can only send+receive one packet per tick, the total amount of work is constant, apart from a small overhead done every frame (e.g. calling gettimeofday to measure the fps -> that is definitely done!).
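Roughly, I picture the main loop like this (my own sketch of that frame/tick split, not actual srcds code):

Code:
import time

TICKRATE = 100
TICK_INTERVAL = 1.0 / TICKRATE

def run_frame_loop(simulate_tick, pump_network, fps_max=1000, seconds=1.0):
    # simulate_tick() is the expensive world update (runs 100x per second),
    # pump_network() is the cheap per-frame packet receive/send.
    frame_interval = 1.0 / fps_max
    next_tick = time.monotonic()
    end = next_tick + seconds
    while time.monotonic() < end:
        frame_start = time.monotonic()
        pump_network()                  # done every frame
        if frame_start >= next_tick:
            simulate_tick()             # done only once per tick
            next_tick += TICK_INTERVAL
        sleep_for = frame_interval - (time.monotonic() - frame_start)
        if sleep_for > 0:
            time.sleep(sleep_for)       # higher fps_max -> packets wait less for the
                                        # next pump_network(), but the simulation load
                                        # stays the same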
http://www.fpsmeter.org
http://wiki.fragaholics.de/index.php/EN:Linux_Optimization_Guide (Linux Kernel HOWTO!)
Do not ask technical questions via PM!
Reply

