
digitalmars.D.learn - Fairness of thread scheduling?

reply Jason House <jason.james.house gmail.com> writes:
I have a loop that creates an arbitrary number of slave threads for 
doing work.  The intent is to kick off about as many as a computer has 
cores.  Obviously, having extra threads should also work, even if with 
slightly less performance.
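Roughly, the structure is: spawn N workers, and each one loops over the 
work and periodically prints its rate.  A stripped-down sketch of the 
same pattern in plain C/pthreads (not my actual D code; do_work() is 
just a placeholder for the real work) looks like:

#include <pthread.h>
#include <stdio.h>
#include <time.h>

#define NUM_WORKERS  20
#define REPORT_EVERY 500000L   /* roughly "half a million actions" */

/* Placeholder for one unit of real work. */
static void do_work(void) { }

static void *worker(void *arg)
{
    int id = (int)(long)arg;
    long ops = 0;
    time_t start = time(NULL);

    for (;;) {
        do_work();
        if (++ops % REPORT_EVERY == 0) {
            double secs = difftime(time(NULL), start);
            if (secs > 0)
                printf("Thread %d: ops/sec = %.0fk\n",
                       id, ops / secs / 1000.0);
        }
    }
    return NULL;
}

int main(void)
{
    pthread_t threads[NUM_WORKERS];
    int i;

    for (i = 0; i < NUM_WORKERS; i++)
        pthread_create(&threads[i], NULL, worker, (void *)(long)i);
    for (i = 0; i < NUM_WORKERS; i++)
        pthread_join(threads[i], NULL);   /* never returns; I kill it by hand */
    return 0;
}

(Something like gcc -pthread should build that on Linux.)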

I just kicked off an example with 20 worker threads (and 3 monitoring 
threads) on a single core computer.  Each does about half a million 
actions and then prints out a status update with the thread number (In 
this case 0-19).  What I see surprises me.

Thread 0: ops/sec = 26k
Thread 1: ops/sec = 19k
Thread 2: ops/sec = 13k
Thread 0: ops/sec = 27k
Thread 1: ops/sec = 19k
Thread 0: ops/sec = 25k
Thread 2: ops/sec = 13k
Thread 0: ops/sec = 22k
Thread 1: ops/sec = 17k
Thread 0: ops/sec = 22k
Thread 2: ops/sec = 14k
Thread 1: ops/sec = 17k
Thread 0: ops/sec = 22k
<killed execution>

Two big things stand out.

1. Thread 0 (22-27k ops/sec) works faster than thread 1 (17-19k 
ops/sec), which in turn works faster than thread 2 (13-14k ops/sec).
2. Threads 3-19 never give any output.

Any ideas what could be going wrong?  I've tried adding a try block 
surrounding the run method of the worker threads to output any silent 
failures (somehow).  Since I see nothing failing and I see the decreased 
performance per thread, I'm assuming thread starvation is occurring.  Is 
this expected behavior?

(I'm using dmd 1.010 under linux)
Jun 04 2007
parent reply Sean Kelly <sean f4.ca> writes:
Jason House wrote:
 
 Any ideas what could be going wrong?  I've tried adding a try block 
 surrounding the run method of the worker threads to output any silent 
 failures (somehow).  Since I see nothing failing and I see the decreased 
 performance per thread, I'm assuming thread starvation is occurring.  Is 
 this expected behavior?
I think the default scheduling option on Posix is implementation 
defined.  The pre-defined options are FIFO and round-robin however, and 
I'd guess your system is using FIFO?
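If you want to check what you're actually getting, a thread can ask for 
its own policy with pthread_getschedparam.  Something along these lines 
(plain C, untested, just to show the call) should print it:

#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <string.h>

/* Print the scheduling policy and priority of the calling thread. */
static void print_sched(const char *who)
{
    int policy;
    struct sched_param param;
    int rc = pthread_getschedparam(pthread_self(), &policy, &param);

    if (rc != 0) {
        fprintf(stderr, "pthread_getschedparam: %s\n", strerror(rc));
        return;
    }
    printf("%s: policy = %s, priority = %d\n", who,
           policy == SCHED_FIFO  ? "SCHED_FIFO" :
           policy == SCHED_RR    ? "SCHED_RR" :
           policy == SCHED_OTHER ? "SCHED_OTHER (time-sharing)" :
                                   "unknown",
           param.sched_priority);
}

int main(void)
{
    print_sched("main thread");
    return 0;
}

Calling print_sched() from inside one of your worker threads would show 
what they run under as well.

Sean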
Jun 05 2007
parent reply Jason House <jason.james.house gmail.com> writes:
Sean Kelly wrote:
 Jason House wrote:
 Any ideas what could be going wrong?  I've tried adding a try block 
 surrounding the run method of the worker threads to output any silent 
 failures (somehow).  Since I see nothing failing and I see the 
 decreased performance per thread, I'm assuming thread starvation is 
 occurring.  Is this expected behavior?
 I think the default scheduling option on Posix is implementation
 defined.  The pre-defined options are FIFO and round-robin however,
 and I'd guess your system is using FIFO?

 Sean

I don't even know how to find that information out.  In FIFO 
scheduling, would an older thread get scheduled ahead of a newer thread 
when both come up for execution at the same time?  How would I look up 
whether it's FIFO or round-robin?

I tried it on Windows today and thought I saw similar results.  There 
were definitely significant swings in the ops/sec reported in the 
output, and starvation was occurring.  In a 4-thread test, only threads 
1, 2, and 3 gave output (0 didn't).  I didn't examine the data as 
closely as in the first test on Mandriva, but I thought a similar 
effect (different threads running at different speeds) was occurring.

I could have sworn that, given equal usage needs, kernels (regardless 
of OS) would be fair among threads of equal priority.
Jun 05 2007
parent Sean Kelly <sean f4.ca> writes:
Jason House wrote:
 
 I could have sworn that given equal usage needs, kernels (regardless of 
 OS) would be fair among the threads if they were equal priority.
That's basically true, but synchronization points affect things.  
Kernel calls and such all risk a premature context switch if the 
executing thread cannot proceed immediately.  IO tends to involve a 
mutex at some level, for example, not to mention IO buffer limitations.  
The best way to be sure your threads are consuming every cycle they're 
allotted is to stay entirely in user code.

As for determining the default scheduling scheme--I checked the 
OpenGroup POSIX spec, hoping to find something obvious.  I remember 
seeing mention of it before, but couldn't find it again in the time I 
had to look.  Similar information is available for Windows somewhere in 
MSDN, but finding it would take some time as well.
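To illustrate the "stay in user code" point: a version of the test that 
keeps the hot loop free of IO and locks, and only reports after the 
threads are joined, should show much more even numbers.  A rough sketch 
(plain C, untested):

#include <pthread.h>
#include <stdio.h>
#include <time.h>

#define NUM_WORKERS    4
#define OPS_PER_THREAD 50000000L

static double rate[NUM_WORKERS];   /* written by each worker, read after join */

static double now(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

static void *worker(void *arg)
{
    int id = (int)(long)arg;
    double start = now();
    volatile long sink = 0;
    long i;

    /* Hot loop: no IO, no locks, no kernel calls. */
    for (i = 0; i < OPS_PER_THREAD; i++)
        sink += i;                  /* stand-in for the real work */

    rate[id] = OPS_PER_THREAD / (now() - start);
    return NULL;
}

int main(void)
{
    pthread_t t[NUM_WORKERS];
    int i;

    for (i = 0; i < NUM_WORKERS; i++)
        pthread_create(&t[i], NULL, worker, (void *)(long)i);
    for (i = 0; i < NUM_WORKERS; i++)
        pthread_join(t[i], NULL);

    /* All reporting happens after the workers have finished. */
    for (i = 0; i < NUM_WORKERS; i++)
        printf("Thread %d: ops/sec = %.0fk\n", i, rate[i] / 1000.0);
    return 0;
}

If the skew disappears there but comes back as soon as the per-thread 
printf moves into the loop, the IO lock is a likely culprit.

Sean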
Jun 05 2007