Server FPS not going higher than 300
#16
(07-01-2011, 11:55 PM)dualcore1289 Wrote:  As stated, if the fps is stable, then gameplay is great. 1000fps is really overkill, steady or not. A stable 66-67fps server will perform better than a 1000fps game server; it's just how the engine is designed, just like 66 tick vs 100 tick. With the orangebox engine, 66 tick at 66-67fps is perfect. 100 tick and 1000fps servers are just a thing for companies to charge out the @$$ for game servers. People claim they are "better" because their minds believe it. Test a stable 66 tick, 66-67fps server against a 100 tick/1000fps server, and it will perform just as well, if not better.

I did my research. And which companies still charge for 100tick? Could you point me to that?

I play with people that play this game very, very competitively. Some EX CGS, some currently main+ PPT, some ex CPL.

The 66 tick vs 100 tick debate on orangebox is true, and the same goes for the FPS debate now. Either way, 66FPS surpasses 500FPS by quite a bit, but it doesn't really surpass 1000FPS. The game simply processes entities much faster at 1000FPS, and you can verify this yourself via the "var" reading in net_graph 6.

(07-04-2011, 11:13 PM)BehaartesEtwas Wrote:  Nevertheless, since the orangebox engine update there is only one reason to run with fps higher than the tickrate: if you cannot exactly match the tickrate with the fps and instead run with fps very slightly above the tickrate, you will get the aliasing effect. That problem is solved by running with sufficiently high fps. Neither the exact value nor the stability plays any role then; the fps only have to stay above a certain value all the time.

"sufficient high fps" are those where the aliasing effect gets small enough to be safely caught by the client interpolation. that is the case, if the maximum ping variation (not the absolute height!) plus the time between two frames (i.e. inverse server fps) are shorter then the time between two ticks (assuming a cl_interp_ratio of 2). everyone can do the math by himself. ping variations are usually only a couple of milliseconds.

Of course I cannot really prove this, but it matches my experience. I strongly suspect that anyone claiming otherwise is subject to the placebo effect. That is nothing to be ashamed of, because it happens quite easily...
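A rough way to sanity-check the inequality in the quote above with concrete numbers. The 4 ms of ping jitter and the 300fps candidate rate below are assumptions for illustration, not measurements:

Code:
#include <stdio.h>

int main(void) {
    double tickrate    = 200.0 / 3.0;  /* 66.666... ticks per second      */
    double ping_jitter = 0.004;        /* assumed 4 ms of ping variation  */
    double server_fps  = 300.0;        /* candidate server frame rate     */

    double tick_interval  = 1.0 / tickrate;    /* 15 ms between ticks     */
    double frame_interval = 1.0 / server_fps;  /* ~3.3 ms between frames  */

    /* The rule from the quote: jitter plus frame interval must fit inside
     * one tick interval (cl_interp_ratio 2 provides the head room). */
    if (ping_jitter + frame_interval < tick_interval)
        printf("%.0f fps is sufficient for %.1f ms of jitter\n",
               server_fps, ping_jitter * 1000.0);
    else
        printf("%.0f fps is NOT sufficient for %.1f ms of jitter\n",
               server_fps, ping_jitter * 1000.0);
    return 0;
}

With those numbers the condition holds with room to spare (4 ms + 3.3 ms < 15 ms), which is why the exact fps value above that threshold stops mattering.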

I would have to agree with you on this, but it is definitely not a placebo.


At the expense of CPU, you can simply get that extra edge for your servers. If it weren't even the slightest bit true, then why are providers, Linux or Windows, still running over 66FPS? Well, marketing is one reason. But it does make a difference. Anything higher than 1000-2000FPS is just a waste; as you can tell by the lerp indicator turning yellow, the engine doesn't like anything higher than 1000FPS. But comparing 66 to 1000FPS does make a difference. The engine latency is the same as at 66fps, so you can say it is synchronized with the tick once again, but the var also moves a lot faster, so you can assume the engine is processing more data at once. You do need a beefy CPU to do this, though.

Even back in the legacy HL2 engine days, a much faster clocked CPU could help fps stability in that regard: X3460s back in the day, and now the E3-1270s.

I can believe what I want, you can believe what you want. I know what's best for me and I'm sticking with it. :)
#17
(07-05-2011, 07:38 PM)cookies911 Wrote:  I did my research. And which companies still charge for 100tick? Could you point me to that?

I play with people that play this game very, very competitively. Some EX CGS, some currently main+ PPT, some ex CPL.

The 66 tick vs 100 tick debate on orangebox is true, and the same goes for the FPS debate now. Either way, 66FPS surpasses 500FPS by quite a bit, but it doesn't really surpass 1000FPS. The game simply processes entities much faster at 1000FPS, and you can verify this yourself via the "var" reading in net_graph 6.

VALVe is getting rid of FPS in the future. The game does not process anything faster at higher frames. If that were true, physics would get out of sync (see the q3 article on pmove_fixed and sv_fps for a similar explanation). You can attach gdb/dtrace to a process and look at what it's doing on the stack per frame; you can script it in perl. gettimeofday has bounds checking, so the simulation speed isn't faster in the engine. What higher FPS does is chew nanosleep more and more. Nothing more. In VALVe games, each frame is delayed using NET_Sleep(), which is called by _SV_Frame(); it blocks on network activity or a timeout period before returning.
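The general pattern being described is a frame loop that "sleeps" by blocking on its socket with a timeout derived from the fps cap. The sketch below only illustrates that pattern; the names (run_one_frame, stdin standing in for the server's UDP socket) and numbers are assumptions, not engine source:

Code:
#include <stdio.h>
#include <sys/select.h>
#include <sys/time.h>

/* Stand-in for one pass of server logic (ticks, packet processing, ...). */
static void run_one_frame(int n) { printf("frame %d\n", n); }

int main(void) {
    double fps_max    = 66.67;          /* assumed frame cap                  */
    double frame_time = 1.0 / fps_max;  /* ~15 ms per frame                   */
    int sock = 0;                       /* stdin stands in for the UDP socket */

    for (int frame = 0; frame < 5; frame++) {
        fd_set readfds;
        FD_ZERO(&readfds);
        FD_SET(sock, &readfds);

        struct timeval tv;
        tv.tv_sec  = 0;
        tv.tv_usec = (long)(frame_time * 1e6);

        /* Block until data arrives or the timeout expires: this is the
         * "sleep" between frames. A higher cap only shortens the timeout,
         * i.e. the loop wakes up more often; it does not speed anything up. */
        select(sock + 1, &readfds, NULL, NULL, &tv);

        run_one_frame(frame);
    }
    return 0;
}

A higher fps cap only means more wakeups per second; the tick logic itself still runs at the tickrate.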


Quote: I would have to agree with you on this, but it is definitely not a placebo.


At the expense of CPU, you can simply get that extra edge for your servers. If it weren't even the slightest bit true, then why are providers, Linux or Windows, still running over 66FPS? Well, marketing is one reason. But it does make a difference. Anything higher than 1000-2000FPS is just a waste; as you can tell by the lerp indicator turning yellow, the engine doesn't like anything higher than 1000FPS. But comparing 66 to 1000FPS does make a difference. The engine latency is the same as at 66fps, so you can say it is synchronized with the tick once again, but the var also moves a lot faster, so you can assume the engine is processing more data at once. You do need a beefy CPU to do this, though.

Even back in the legacy HL2 engine days, a much faster clocked CPU could help fps stability in that regard: X3460s back in the day, and now the E3-1270s.

Because people are clueless, that's why you have people selling idiotic servers with high FPS. Then if you ask someone what happens when FPS >= tick, they make up thousands of excuses as to why, most of it gibberish. FPS numbers are generated by syscalls. I could write something that reports 10,000,000 FPS on an old K6 Athlon. The users who buy high FPS servers are just as stupid as the people selling them.
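To illustrate the "FPS is just syscalls" point: the toy loop below does no game work at all, yet it reports millions of "frames" per second while pinning a core, because nothing ever sleeps. A minimal sketch, not anything from the engine:

Code:
#include <stdio.h>
#include <sys/time.h>

int main(void) {
    struct timeval start, now;
    gettimeofday(&start, NULL);

    long frames = 0;
    double elapsed = 0.0;

    /* One "frame" = one trip through the loop and one time query. */
    do {
        gettimeofday(&now, NULL);
        frames++;
        elapsed = (now.tv_sec - start.tv_sec)
                + (now.tv_usec - start.tv_usec) / 1e6;
    } while (elapsed < 1.0);

    printf("empty loop 'fps': %ld\n", frames);
    return 0;
}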


http://leaf.dragonflybsd.org/~gary

“The two most common elements in the universe are hydrogen and stupidity.”
#18
(07-05-2011, 07:38 PM)cookies911 Wrote:  I would have to agree with you on this, but it is definitely not a placebo.
Maybe not in the case of your server; probably it does not really hold its 66.67 fps steady. Keep in mind that 66.7 fps is already too much... it will lead to a gap in the tickrate every 30 seconds (if I am doing the math right -> the smaller the difference to 66 2/3 fps, the longer the interval between gaps). Also, most people do not realize that the difference between 66.67fps and 62.5fps is already 1ms - the same difference as between 500 and 1000fps (w.r.t. the frame time, not the gameplay!).
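A quick back-of-the-envelope check of that "every 30 seconds" figure, using the numbers above (the arithmetic is an illustration; the claim is from the post):

Code:
#include <stdio.h>

int main(void) {
    double tick_interval   = 1.0 / (200.0 / 3.0);  /* 66 2/3 ticks/s -> 15 ms     */
    double frame_interval  = 1.0 / 66.7;           /* slightly shorter, ~14.99 ms */
    double drift_per_frame = tick_interval - frame_interval;   /* ~7.5 us */

    /* Frames until the accumulated drift spans one whole tick interval,
     * i.e. until a tick boundary falls into the "wrong" frame (the gap). */
    double frames_to_gap  = tick_interval / drift_per_frame;
    double seconds_to_gap = frames_to_gap * frame_interval;

    printf("drift per frame: %.2f us\n", drift_per_frame * 1e6);
    printf("gap roughly every %.0f frames (~%.0f s)\n",
           frames_to_gap, seconds_to_gap);
    return 0;
}

The drift is only about 7.5 microseconds per frame, but it adds up to a full tick interval after roughly 2000 frames, i.e. about every 30 seconds.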

So... unless you have a server showing 66.67fps in an fpsmeter measurement with no visible variations *and* a bunch of people who do not know at which fps it is running and who nevertheless report that the same server was better when running at 1000fps (again without knowing the fps), you cannot conclude that 1000fps is better. It's either placebo or the server not being capable of holding the fps steady - and at 66.67fps that is much more important!

@monk: probably I understand something different by "scientific journal". Reverse engineering some piece of closed-source software to guess how the author implemented something is NOT science.
http://www.fpsmeter.org
http://wiki.fragaholics.de/index.php/EN:Linux_Optimization_Guide (Linux Kernel HOWTO!)
Do not ask technical questions via PM!
#19
(07-06-2011, 05:30 PM)BehaartesEtwas Wrote:  
(07-05-2011, 07:38 PM)cookies911 Wrote:  I would have to agree with you on this, but it is definitely not a placebo.
Maybe not in the case of your server; probably it does not really hold its 66.67 fps steady. Keep in mind that 66.7 fps is already too much... it will lead to a gap in the tickrate every 30 seconds (if I am doing the math right -> the smaller the difference to 66 2/3 fps, the longer the interval between gaps). Also, most people do not realize that the difference between 66.67fps and 62.5fps is already 1ms - the same difference as between 500 and 1000fps (w.r.t. the frame time, not the gameplay!).

So... unless you have a server showing 66.67fps in an fpsmeter measurement with no visible variations *and* a bunch of people who do not know at which fps it is running and who nevertheless report that the same server was better when running at 1000fps (again without knowing the fps), you cannot conclude that 1000fps is better. It's either placebo or the server not being capable of holding the fps steady - and at 66.67fps that is much more important!

@monk: probably I understand something different by "scientific journal". Reverse engineering some piece of closed-source software to guess how the author implemented something is NOT science.

My server holds 1000FPS stable, and I'm pretty sure it can hold 66.67 stable as well.

Once I get it populated I'll test with fpsmeter.
#20
Exactly. There is a reason why Valve is getting rid of the fps feature, just like they got rid of the tickrate feature. And as stated, I am not debating the fact that 1000fps renders things faster than 66fps, but whether YOU, ME, or anyone as a person can actually notice a difference in the fps, as the human eye can only see something like 60fps, or whatever the number is.
Ryan White
Owner & CEO
GigabiteServers.com
#21
(07-07-2011, 01:26 AM)dualcore1289 Wrote:  Exactly. There is a reason why Valve is getting rid of the fps feature, just like they got rid of the tickrate feature. And as stated, I am not debating the fact that 1000fps renders things faster than 66fps, but whether YOU, ME, or anyone as a person can actually notice a difference in the fps, as the human eye can only see something like 60fps, or whatever the number is.

.........

There's a difference between I/O fps and typical graphical FPS, as we all know. I can tell a big difference in smoothness on the client side when I cap my fps to 150 or 200. Then I use fps_max 600, with which I get about 599 consistently, and it just feels much smoother. This is without SLI, single GPU, so micro-stuttering is not a part of this. The human eye can only realistically track about 60FPS, but there is no set amount of FPS it can "see".

Don't hold me to the comment about the eye. But between 150fps and 500fps client side there is a difference, and between 66FPS and 1000FPS server side there is a difference, only very slight. Worth it in general? No, but it does make a difference.
#22
I'm still doubting your theory there. You notice a difference because you know it's more fps. It's mind over matter: you know it's 1000fps, so you have an "urge" to think it's better; your mind plays a huge role in it. I did this SAME test back in the day when I was running my GSP. I would test people on 1000fps servers vs 500fps servers. Everyone was stating "ZOMG, 1000fps is leet, it's the greatest, and I can tell a difference, blah blah blah". I gave them one of my 500fps boosted servers without telling them what the fps was, and they swore it was 1000fps because of how smooth it ran and the quality of it.

Your mind plays a huge role in this "oh, it's better and I can tell a difference". Then again, a lot plays a role: proper rates, stable fps, server load, quality hardware, etc.
Ryan White
Owner & CEO
GigabiteServers.com
#23
(07-06-2011, 05:30 PM)BehaartesEtwas Wrote:  @monk: probably I understand something different by "scientific journal". Reverse engineering some piece of closed-source software to guess how the author implemented something is NOT science.

Stop trying to talk to me. I do not like you.

http://leaf.dragonflybsd.org/~gary

“The two most common elements in the universe are hydrogen and stupidity.”
#24
(07-08-2011, 10:11 PM)dualcore1289 Wrote:  I'm still doubting your theory there. You notice a difference because you know it's more fps. It's mind over matter: you know it's 1000fps, so you have an "urge" to think it's better; your mind plays a huge role in it. I did this SAME test back in the day when I was running my GSP. I would test people on 1000fps servers vs 500fps servers. Everyone was stating "ZOMG, 1000fps is leet, it's the greatest, and I can tell a difference, blah blah blah". I gave them one of my 500fps boosted servers without telling them what the fps was, and they swore it was 1000fps because of how smooth it ran and the quality of it.

Your mind plays a huge role in this "oh, it's better and I can tell a difference". Then again, a lot plays a role: proper rates, stable fps, server load, quality hardware, etc.

Yes, I know a lot matters. When the new update rolled out, people complained about how garbage the server was and all that, so I had to change the fps to the well-acclaimed fps_max 66.67 (I also tested 66 and 67 at various times). I left 1000FPS in the server host name as well. People still complained. When I then patched the .dlls and had them back at 1000FPS, the complaints went away, and I was happy fragging like I used to again :)

FPS changes the way the engine runs, tbh. Obviously rates, CPU, and bandwidth all come into play, but it will make a difference, either good or bad. It all depends.

But as you and I said, it's still a theory, though I think there's more than meets the eye here.
#25
(07-09-2011, 06:40 PM)cookies911 Wrote:  
(07-08-2011, 10:11 PM)dualcore1289 Wrote:  I'm still doubting your theory there. You notice a difference because you know it's more fps. It's mind over matter: you know it's 1000fps, so you have an "urge" to think it's better; your mind plays a huge role in it. I did this SAME test back in the day when I was running my GSP. I would test people on 1000fps servers vs 500fps servers. Everyone was stating "ZOMG, 1000fps is leet, it's the greatest, and I can tell a difference, blah blah blah". I gave them one of my 500fps boosted servers without telling them what the fps was, and they swore it was 1000fps because of how smooth it ran and the quality of it.

Your mind plays a huge role in this "oh, it's better and I can tell a difference". Then again, a lot plays a role: proper rates, stable fps, server load, quality hardware, etc.

Yes, I know a lot matters. When the new update rolled out, people complained about how garbage the server was and all that, so I had to change the fps to the well-acclaimed fps_max 66.67 (I also tested 66 and 67 at various times). I left 1000FPS in the server host name as well. People still complained. When I then patched the .dlls and had them back at 1000FPS, the complaints went away, and I was happy fragging like I used to again :)

FPS changes the way the engine runs, tbh. Obviously rates, CPU, and bandwidth all come into play, but it will make a difference, either good or bad. It all depends.

But as you and I said, it's still a theory, though I think there's more than meets the eye here.

I have noticed that when my server is full with 32 people, the fps tends to be unstable after LONG periods of uptime (a week or more) if it's set to 1000 (fps_max 0), and the variance in net_graph goes haywire. If I hack dedicated.dll and engine.dll to remove the 1000 cap and run 10000 fps, then it doesn't matter how long the server runs: it's always above 1000 and the variance is under 0.2 ms (literally). The CPU difference between 1000 fps and 66 fps is negligible (+/- 1-2%), contrary to popular belief. But above 1000 fps the CPU usage is just stupid, because the engine doesn't sleep at all.
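Rough numbers behind that CPU observation: game ticks run at 66.67/s regardless of fps_max, and the extra non-tick frames do almost no work, so the cap mostly changes how often the loop wakes up. The per-tick and per-frame costs below are assumptions for illustration, not measurements:

Code:
#include <stdio.h>

int main(void) {
    double tick_rate  = 200.0 / 3.0;   /* 66.67 ticks per second          */
    double tick_cost  = 0.002;         /* assumed 2 ms of work per tick   */
    double frame_cost = 0.00002;       /* assumed 20 us per empty frame   */
    double caps[]     = { 66.67, 1000.0 };

    for (int i = 0; i < 2; i++) {
        double extra_frames = caps[i] - tick_rate;   /* non-tick frames per second */
        double cpu = tick_rate * tick_cost + extra_frames * frame_cost;
        printf("fps_max %7.2f -> roughly %.1f%% of one core\n",
               caps[i], cpu * 100.0);
    }
    /* With the cap patched out there is no sleep left at all, so the loop
     * simply spins and pins one core regardless of how little work it does. */
    printf("uncapped -> ~100%% of one core\n");
    return 0;
}

With those assumptions the jump from 66 to 1000 fps costs only a couple of percent, while removing the cap costs a whole core, which matches the behaviour described above.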