HL1 & HL2 Booster Library
#16
(10-26-2009, 11:42 PM)Terrorkarotte Wrote:  OK, let's say I ignore the errors and look at the max FPS I get. I cannot see any change; it is stuck at ~950 FPS.
I am using HPET and a tickless system.
I also tried deleting the constructor part.

Why are you using HPET? Are you on an AMD server?

Try using TSC and tell me the results.
#17
I have a Core i7 920 with 8 GB RAM.
I set the clocksource to TSC:
Code:
echo tsc >  /sys/devices/system/clocksource/clocksource0/current_clocksource
Still the same FPS result.
Do I need an RT kernel for this lib? I am using the standard Debian amd64 kernel at the moment, which has high-resolution timers + tickless. cpufreq is in performance mode.

I start with:
Code:
LD_PRELOAD=/usr/share/hl-booster/boost.so ./srcds_run -tickrate 100 -game dod -ip hereisanumber -port herealso +maxplayers 5 +map dod_gan +fps_max 0
#18
Can I get 1000 FPS with this lib?
#19
I wasn't able to yet :/
But I do not know if my 64-bit system is causing this.
#20
Everyone... please take a moment to understand the purpose of this library. Its only purpose is to shave microseconds of latency off a few calls over time. It does not, and will not (without modification), give you 1000 FPS.

It also has built-in process priority handling, as well as a method to lock memory so it cannot be paged out. The best way to see what's going on is to run strace on the engine process and see how the lib is affecting the system.
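
For illustration only (this is my own sketch, not the actual boost.so source; the function name and the priority value are assumptions), a preload constructor doing both of those things could look roughly like this:

Code:
/* sketch of an LD_PRELOAD constructor: raise priority and pin memory */
#include <sys/resource.h>
#include <sys/mman.h>
#include <stdio.h>

__attribute__((constructor))
static void boost_init(void)
{
    /* give the engine a higher scheduling priority (-15 is only an example,
     * and negative values need root) */
    if (setpriority(PRIO_PROCESS, 0, -15) != 0)
        perror("setpriority");

    /* lock current and future pages so the engine is never paged out */
    if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0)
        perror("mlockall");
}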

If anyone needs help, I'll do my best to answer questions in #SourceKernel.

DiSTANT
#21
(10-27-2009, 01:37 AM)Terrorkarotte Wrote:  I have a Core i7 920 with 8 GB RAM.
...
I am using the standard Debian amd64 kernel at the moment, which has high-resolution timers + tickless. cpufreq is in performance mode.

Your specification matches Hetzner's EQ4. Let me know if you get this working. I've got the same system ;-)

DiSTANT Wrote: The best way to see what's going on is to run strace on the engine process and see how the lib is affecting the system.

Isn't that a bit much just for performance? I don't want to run a debugger to see whether the server is good or not. Does this lib actually help with anything, or is it just technically cool?
#22
It just subtracts a few µs off each nanosleep call. It also calls a deprecated function, ftime, which is coarse and not really used at all.
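
As a rough sketch of that idea (my own reconstruction, not the library's exact code; the 9 µs value is the one mentioned later in this thread), an interposed nanosleep that sleeps slightly less than requested could look like this:

Code:
/* sketch: intercept nanosleep and shorten each request by a few microseconds */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <time.h>

#define SHAVE_NS (9 * 1000L)   /* 9 us shaved off every sleep */

typedef int (*nanosleep_fn)(const struct timespec *, struct timespec *);

int nanosleep(const struct timespec *req, struct timespec *rem)
{
    static nanosleep_fn real_nanosleep;
    if (!real_nanosleep)
        real_nanosleep = (nanosleep_fn)dlsym(RTLD_NEXT, "nanosleep");

    struct timespec shorter = *req;
    if (shorter.tv_nsec >= SHAVE_NS)
        shorter.tv_nsec -= SHAVE_NS;    /* sleep slightly less than asked */

    return real_nanosleep(&shorter, rem);
}

Built with something like gcc -shared -fPIC -o boost.so boost.c -ldl and preloaded the same way as in the start commands above.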
http://leaf.dragonflybsd.org/~gary

“The two most common elements in the universe are hydrogen and stupidity.”
#23
Compiled fine and works great as far as I can tell. I am in the process of reworking my own lib, and this certainly clears up some of the questions I had about the different system calls and conversions that will be necessary for what I have planned.

Once again, another job well done -applause-
#24
Well, you should be careful with replacing gettimeofday. At first glance you might be correct that gettimeofday is the wrong function for measuring FPS (cf. http://groups.google.com/group/comp.os.linux.development.apps/browse_frm/thread/dc29071f2417f75f/c46264dba0863463?lnk=st&q=wall+time&rnum=1&pli=1). But keep the following in mind:

- If you really want to improve the measurement of the FPS, use CLOCK_MONOTONIC instead of CLOCK_REALTIME. I guess CLOCK_REALTIME will be pretty much the same as gettimeofday (i.e. the best guess of the current wall time)... see the small sketch after this list.

- If changing gettimeofday results in different FPS, this is most certainly not caused by an actual improvement of the FPS, but by a change in how the FPS are measured. That basically means there is no way to compare FPS stability etc. between a server running this lib and a server running without it.

- To my knowledge the FPS are also used to calculate all speeds in the world. You can see this if you set host_framerate to e.g. 500 while actually running at 1000 FPS: the game will then run twice as fast as usual (host_framerate does not change the actual FPS, it only makes the engine believe it runs at the given FPS). This means Valve can have real reasons to use gettimeofday instead of clock_gettime. If you want the game running at the correct speed, you don't need a monotonic clock but a precise clock.
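
To make the monotonic-vs-realtime distinction concrete, here is a minimal frame-timing sketch (my own example, not engine or library code): CLOCK_MONOTONIC only ever moves forward, while CLOCK_REALTIME/gettimeofday can jump when NTP or an admin adjusts the wall clock.

Code:
/* sketch: measuring one frame interval with a monotonic clock */
#include <stdio.h>
#include <time.h>

static double seconds(const struct timespec *t)
{
    return t->tv_sec + t->tv_nsec / 1e9;
}

int main(void)
{
    struct timespec start, end;

    clock_gettime(CLOCK_MONOTONIC, &start);   /* immune to wall-clock jumps */
    /* ... run one server frame here ... */
    clock_gettime(CLOCK_MONOTONIC, &end);

    printf("frame took %.6f s\n", seconds(&end) - seconds(&start));
    return 0;
}

(On older glibc this needs -lrt at link time.)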

In fact, replacing usleep with nanosleep will almost certainly change nothing either. At least that's what I experienced with similar experiments a little more than one year ago (that's why I never published this; I only talked with DiSTANT and a few others about it... so I'm wondering why he is coming up with this now, but it doesn't matter, this is no secret ;-)).

monk, where do you see the subtraction of a few microseconds from the nanosleep/usleep calls?
http://www.fpsmeter.org
http://wiki.fragaholics.de/index.php/EN:Linux_Optimization_Guide (Linux Kernel HOWTO!)
Do not ask technical questions via PM!
#25
(10-27-2009, 10:05 PM)BehaartesEtwas Wrote:  well, you should be careful with replacing gettimeofday...
...
in fact, replacing usleep with nanosleep will almost certainly also change nothing... so I'm wondering why he is now coming up with this, but doesn't matter. this is no secret ;-)

This is no secret. A lot of people have been asking for the source code of the lib I released a year ago. Instead of scrounging to find it, I just rewrote what I remembered it had in it. If anything, it will at least give the community a place to start when it comes to coding their own libs. My newer libs do use CLOCK_MONOTONIC.
#26
(10-27-2009, 10:05 PM)BehaartesEtwas Wrote:  ...
monk, where do you see the subtraction of a few microseconds from the nanosleep/usleep calls?

usleep calls nanosleep. The code calls nanosleep with a subtraction of 9 µs. gettimeofday is a guess of the wall-clock time from the last hardware tick; Linux will add a few jiffies to make sure it is counting upwards. clock_gettime is better than gettimeofday, even though they have more or less the same API: one uses a struct timespec and the other a struct timeval (nanosecond vs. microsecond resolution).
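
To illustrate that timespec/timeval difference, here is a small standalone sketch (my own example with a hypothetical helper name, not the library's actual interposer) that reads CLOCK_REALTIME and converts the result into the gettimeofday-style struct:

Code:
/* sketch: read CLOCK_REALTIME and convert timespec (ns) to timeval (us) */
#include <sys/time.h>
#include <time.h>

/* hypothetical helper, shown standalone rather than as a real interposer */
int boosted_gettimeofday(struct timeval *tv)
{
    struct timespec ts;

    if (clock_gettime(CLOCK_REALTIME, &ts) != 0)
        return -1;

    tv->tv_sec  = ts.tv_sec;          /* whole seconds carry over unchanged */
    tv->tv_usec = ts.tv_nsec / 1000;  /* timespec is nanoseconds, timeval is microseconds */
    return 0;
}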
http://leaf.dragonflybsd.org/~gary

“The two most common elements in the universe are hydrogen and stupidity.”
#27
(10-28-2009, 07:15 AM)Monk Wrote:  usleep calls nanosleep. the code calls nanosleep with a subtraction of 9uS. gettimeofday is a guess of the wallclock time from the last hardware tick...
...

BTW, ftime is only used when clock_gettime fails. Think of it as just a fallback estimate... and yes, I know it's deprecated.
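
A rough sketch of that fallback idea (my own reconstruction with a hypothetical helper name, not the actual library code): try clock_gettime first, and only drop down to the coarse, deprecated ftime if it fails.

Code:
/* sketch: prefer clock_gettime, fall back to the deprecated ftime */
#include <sys/timeb.h>
#include <time.h>

/* hypothetical helper: fills ts with the current time, best source first */
int get_time_with_fallback(struct timespec *ts)
{
    if (clock_gettime(CLOCK_REALTIME, ts) == 0)
        return 0;                         /* preferred: nanosecond resolution */

    struct timeb tb;
    ftime(&tb);                           /* deprecated, millisecond resolution */
    ts->tv_sec  = tb.time;
    ts->tv_nsec = (long)tb.millitm * 1000000L;
    return 0;
}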
#28
Please make sure that you have fps_max 0 set.
#29
Can I start the srcds server with screen and LD_PRELOAD=/usr/share/hl-booster/boost.so?

Code:
#!/bin/sh
echo "Starting Cs:Source Server"
sleep 1
screen -A -m -d -S css LD_PRELOAD=/usr/share/hl-booster/boost.so ./srcds_run -game cstrike +ip <server ip> +port 2701 -secure +map de_dust2 -tickrate 66 +fps_max 0 +maxplayers 26



The server is not starting with this command.
#30
I don't think so ^^

