I have a problem with threads in Python on FreeBSD and I don't know what to do.
I'm using the threadring language benchmark from http://shootout.alioth.debian.org/gp4/benchmark.php?test=threadring&lang=python&id=1 and sysbench.
I'm testing on:
1: 2x Xeon 5130 (2 GHz), FreeBSD 7.0 amd64 (4 virtual CPUs)
2: Intel Core 2 Duo, FreeBSD 6.2 i386 (with libthr instead of libpthread) (2 virtual CPUs)
I tested on 4 work machines: 2 with the dual Xeon setup and 2 with the Intel Core 2 Duo.
The threadring test shows machine (2) is faster than machine (1).
Results:
time python2.4 threadring.pyo 10000000
1:
real 1m26.747s
user 0m55.350s
sys 1m4.766s
2:
real 0m44.100s
user 0m43.931s
sys 0m0.019s
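(For context: threadring is essentially a pure thread-switching test. A fixed number of threads are linked in a ring and pass a token around N times, so nearly all the time goes into context switches and lock hand-offs rather than computation. Below is a minimal sketch of the pattern, not the exact shootout source; the ring size of 503 and the Event-based hand-off are just illustrative choices.)

import sys
import threading

NUM_THREADS = 503      # ring size used by the shootout benchmark
PASSES = 10000000      # the argument given on the command line above

events = [threading.Event() for i in range(NUM_THREADS)]
counter = [PASSES]     # mutable so the worker threads can decrement it
winner = [0]
done = threading.Event()

def worker(i):
    my_event = events[i]
    next_event = events[(i + 1) % NUM_THREADS]
    while True:
        my_event.wait()            # block until the token arrives
        my_event.clear()
        if done.isSet():           # benchmark already finished; drain the ring
            next_event.set()
            return
        if counter[0] == 0:        # final hand-off: this thread reports its id
            winner[0] = i + 1
            done.set()
            next_event.set()
            return
        counter[0] -= 1
        next_event.set()           # pass the token to the next thread in the ring

for i in range(NUM_THREADS):
    t = threading.Thread(target=worker, args=(i,))
    t.setDaemon(True)              # let the process exit without joining all threads
    t.start()

events[0].set()                    # give the token to the first thread
done.wait()
sys.stdout.write("%d\n" % winner[0])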
sysbench shows machine (1) is faster than machine (2) in the threads, cpu, and mutex tests.
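(For reference, sysbench's threads, mutex, and cpu tests are normally invoked with something like the following sysbench 0.4 commands; the exact options may differ between versions:
sysbench --test=threads --num-threads=512 run
sysbench --test=mutex --num-threads=512 run
sysbench --test=cpu run
)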
1:
Threads --num-threads=512:
Test execution summary:
total time: 49.5957s
total number of events: 10000
total time taken by event execution: 24911.7416
per-request statistics:
min: 0.0370s
avg: 2.4912s
max: 20.5852s
approx. 95 percentile: 7.5445s
Threads fairness:
events (avg/stddev): 19.5312/4.53
execution time (avg/stddev): 48.6557/0.58
Mutex --num-threads=512:
Test execution summary:
total time: 6.2510s
total number of events: 512
total time taken by event execution: 2114.5974
per-request statistics:
min: 2.0751s
avg: 4.1301s
max: 5.4984s
approx. 95 percentile: 4.9587s
Threads fairness:
events (avg/stddev): 1.0000/0.00
execution time (avg/stddev): 4.1301/0.56
CPU:
Test execution summary:
total time: 23.5695s
total number of events: 10000
total time taken by event execution: 23.5548
per-request statistics:
min: 0.0023s
avg: 0.0024s
max: 0.0024s
approx. 95 percentile: 0.0024s
Threads fairness:
events (avg/stddev): 10000.0000/0.00
execution time (avg/stddev): 23.5548/0.00
2:
Threads --num-threads=512:
Test execution summary:
total time: 99.0739s
total number of events: 10000
total time taken by event execution: 50129.0512
per-request statistics:
min: 2.2370s
avg: 5.0129s
max: 5.1814s
approx. 95 percentile: 5.1647s
Threads fairness:
events (avg/stddev): 19.5312/0.50
execution time (avg/stddev): 97.9083/1.11
Mutex --num-threads=512:
Test execution summary:
total time: 74.4908s
total number of events: 512
total time taken by event execution: 36023.1950
per-request statistics:
min: 56.1208s
avg: 70.3578s
max: 74.4045s
approx. 95 percentile: 73.9859s
Threads fairness:
events (avg/stddev): 1.0000/0.00
execution time (avg/stddev): 70.3578/3.76
CPU:
Test execution summary:
total time: 32.5867s
total number of events: 10000
total time taken by event execution: 32.5663
per-request statistics:
min: 0.0032s
avg: 0.0033s
max: 0.0069s
approx. 95 percentile: 0.0035s
Threads fairness:
events (avg/stddev): 10000.0000/0.00
execution time (avg/stddev): 32.5663/0.00
How can this be explained? How can it be improved? What do you think? Could Python's GIL be the reason for this performance difference?
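On the GIL point: CPython only lets one thread execute Python bytecode at a time, so a CPU-bound job split across threads is usually no faster than running it serially, and on a multi-core machine it can even get slower because the threads fight over the GIL across CPUs. A rough self-contained illustration (the loop count here is chosen arbitrarily):

import sys
import time
import threading

def burn(n):
    # pure-Python busy loop: holds the GIL for its whole run
    i = 0
    while i < n:
        i += 1

N = 10000000

# same work twice, serially
t0 = time.time()
burn(N)
burn(N)
sys.stdout.write("serial:   %.2fs\n" % (time.time() - t0))

# same work in two threads; with the GIL this is usually no faster,
# and on a multi-core box it is often slower due to GIL contention
t0 = time.time()
a = threading.Thread(target=burn, args=(N,))
b = threading.Thread(target=burn, args=(N,))
a.start(); b.start()
a.join(); b.join()
sys.stdout.write("threaded: %.2fs\n" % (time.time() - t0))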