
digitalmars.D.announce - Coming IO features in Tango

Lars Ivar Igesund <larsivar igesund.net> writes:
Dear D community

To make the Tango development process more transparent, we will start
announcing new and coming features outside of the release cycle itself.
These may be important changes to Tango, notable feature additions, or
particularly exciting compatible libraries. To avoid false pretences, they
will cover features that are already near-finished or well on the way.

By popular demand, the Tango IO subsystem now exposes a 'stream'-oriented
API, which will be available in the upcoming release 0.99. Tango streams are
described by InputStream and OutputStream, which are hosted by the existing
Conduit mechanism. Both input and output support the notion of 'filter'
chains: distinct chains of attached streams to manipulate content as it
flows in one direction or the other. In order to avoid the pitfalls of a
purely Decorator-pattern design, these stream chains are fully encapsulated
within the hosting Conduit -- this allows the specific attributes of a
Conduit (such as file seek, or various socket attributes) to be exposed at
all times, instead of trying to force-fit those options into the stream
itself. Thus, streams retain an uncomplicated API with little more than
read, write, copy and flush operations.
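As a rough illustration of the filter idea, consider the following sketch;
the InputStream interface shown here is a simplified stand-in rather than the
exact signatures shipping in 0.99:

// Simplified stand-in for the stream interface -- not the real Tango API.
interface InputStream
{
    // reads into dst, returns the number of bytes read (negative on EOF)
    int read(void[] dst);
}

// A filter wraps another InputStream and manipulates content as it flows.
class UpperCaseFilter : InputStream
{
    private InputStream host;

    this(InputStream host) { this.host = host; }

    int read(void[] dst)
    {
        int len = host.read(dst);
        if (len > 0)
        {
            char[] text = cast(char[]) dst[0 .. len];
            for (size_t i = 0; i < text.length; ++i)
                if (text[i] >= 'a' && text[i] <= 'z')
                    text[i] = cast(char)(text[i] - 32);  // uppercase in place
        }
        return len;
    }
}

// Hypothetical usage: the chain hangs off the conduit, so conduit-specific
// operations (seek, socket options) stay on the conduit itself:
//   auto input = new UpperCaseFilter(conduit.input);
//   input.read(buffer);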

Tango has been adjusted in various ways to take advantage of the new
streams, and we'll see further use of that model in later releases.

Further, we're building an asynchronous I/O library based on Tango's IO
abstractions, with notifications sent on completion of I/O events. The plan
for the first stage of development is to have an API capable of delivering
I/O, timer and (possibly) Unix signal events to applications through
delegates. It will be able to efficiently handle large numbers (i.e.
thousands) of active file descriptors/handles (sockets, pipes, etc.) on all
the platforms that Tango supports. Initially the library will work both on
Windows (using I/O completion ports) and on Linux (using epoll); we will
then provide Mac OS X and FreeBSD implementations (based on kqueue), and
support for other platforms if there is enough interest from the community.
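To give a rough picture of what delegate-based completion might look like
(all of the names below -- EventLoop, AsyncSocket, readAsync -- are
placeholders for illustration, not the planned interface):

// Hypothetical sketch of completion-style I/O delivered through delegates.
class AsyncSocket { /* would wrap a non-blocking socket handle */ }

class EventLoop
{
    // Register a read: when it completes (IOCP/epoll/kqueue underneath),
    // the supplied delegate is invoked with the bytes that arrived.
    void readAsync(AsyncSocket sock, void[] buffer,
                   void delegate(void[] data) done)
    {
        // platform-specific registration would happen here
    }

    // Dispatch completions (and timer/signal events) until stopped.
    void run() { }
}

void example(EventLoop loop, AsyncSocket sock)
{
    auto buffer = new ubyte[4096];
    loop.readAsync(sock, buffer, delegate(void[] data)
    {
        // runs on completion; thousands of sockets can be in flight at once
    });
}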

During the second stage of development we will build a framework on top of
the asynchronous I/O library that will be able to multiplex I/O jobs using
Tango Fibers (i.e. lightweight or userspace threads). Each fiber waiting
for I/O events will be suspended until the event is received, helping to
avoid consuming excessive resources. The load from each fiber will be
distributed among a pool of threads.
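Sketched very roughly, user code would read sequentially while the framework
parks and resumes fibers underneath. Fiber, call(), yield() and getThis()
below are assumed from tango.core.Thread; the waiting table and dispatch loop
are placeholders for illustration:

import tango.core.Thread;

Fiber[int] waiting;          // fibers parked on a descriptor (illustrative)

void waitForRead(int fd)
{
    waiting[fd] = Fiber.getThis();
    Fiber.yield();           // suspend; no OS thread is consumed while parked
}

void handleConnection(int fd)
{
    for (int i = 0; i < 3; ++i)
    {
        waitForRead(fd);     // resumed by the dispatcher when data arrives
        // ... read and process the data here ...
    }
}

void dispatchLoop()
{
    // In the real framework this would pull ready descriptors from
    // epoll/IOCP/kqueue and resume the parked fiber, e.g.:
    //   if (auto p = fd in waiting) { auto f = *p; waiting.remove(fd); f.call(); }
}

void main()
{
    void run() { handleConnection(42); }
    auto conn = new Fiber(&run);
    conn.call();             // runs until the first waitForRead() yields
    dispatchLoop();
}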

The idea behind both libraries is to be able to efficiently implement
network protocols that are either synchronous (HTTP, SMTP, etc.) or
asynchronous (XMPP, etc.) in nature, in both client and server applications.

Contact:
http://www.dsource.org/projects/tango/wiki/Contact 

Signed, 

The Tango Team 

http://www.dsource.org/projects/tango/wiki/Contributors 

----

Tango is a library providing a cohesive runtime and core library for the D
programming language. A feature list can be found at
http://www.dsource.org/projects/tango/wiki/Features
Jun 22 2007
DavidL <Davidl 126.com> writes:
Transactional I/O is another valuable point; Vista has an API for it. For
non-deterministic I/O behaviour, rollback functions would have to be
provided by the users, and with rollback functions available for every
operation you effectively get transactional I/O.

-- Using Opera's revolutionary e-mail client: http://www.opera.com/mail/
Jun 22 2007
Daniel919 <Daniel919 web.de> writes:
 To make the Tango development process more transparent, we will start
 announcing new and coming features outside of the release cycle itself.
 This may be important changes to Tango, notable feature additions or
 particularly exciting compatible libraries. They will be about features
 already near-finished or well on the way, to avoid false pretences.
Sounds like a good idea to keep the community informed about Tango's progress.
 Further on, we're building an asynchronous I/O library based on Tango's IO
 abstractions with notifications sent on completion of I/O events. The plan
 for the first stage of development is to have an API capable of delivering
 I/O, timer and (possibly) Unix signal events to applications through
 delegates. It will be able to efficiently handle large numbers ( i.e.
 thousands) of active file descriptors/handles (sockets, pipes, etc.) on all
 the platforms that Tango supports. Initially the library will work both on
 Windows (using I/O completion ports) and on Linux (using epoll); we will
 then provide a Mac OS X and FreeBSD implementation (based on kqueue), and
 other platforms if there is enough interest from the community.
The funny thing is, I just created a class that handles onConnect, onRead,
onTimeout, onClose, onError, ... via delegates, using the selector package.
It even handles SSL (openssl wrapped with bcd).

My solution is relatively simple. It's like:

if (selectionKey.isReadable()) { resetTimeout; onReadable(...); }

Each object has a timeout value stored. Before the select call I look up
when the next timeout should occur, and then I do something like:
select(nextTimeout - Clock.now).

I'm really interested to see how the new Tango implementation and my simple
solution will differ.
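Roughly, the timeout bookkeeping looks like this (now() and selectReady()
are stand-ins for the real clock and selector calls, and all the names are
only for illustration):

// Rough sketch of the "next timeout before select()" bookkeeping.
long now();                      // current time in ms (stand-in declaration)
int[] selectReady(long waitMs);  // descriptors ready within waitMs (stand-in)

class Session
{
    int  fd;
    long deadline;               // absolute time at which this fd times out
    void delegate() onReadable;
    void delegate() onTimeout;
}

void loop(Session[int] sessions, long idleMs)
{
    while (sessions.length)
    {
        // find the earliest deadline among all sessions
        long next = long.max;
        foreach (s; sessions)
            if (s.deadline < next)
                next = s.deadline;

        // wait at most until that deadline
        foreach (fd; selectReady(next - now()))
        {
            auto s = sessions[fd];
            s.deadline = now() + idleMs;   // activity resets the timeout
            s.onReadable();
        }

        // fire whatever has expired
        // (closed or timed-out sessions would be removed; omitted here)
        foreach (s; sessions)
            if (now() >= s.deadline)
                s.onTimeout();
    }
}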
 During the second stage of development we will build a framework on top of
 the asynchronous I/O library that will be able to multiplex I/O jobs using
 Tango Fibers (i.e. lightweight or userspace threads). Each fiber waiting
 for I/O events will be suspended until the event is received, helping to
 avoid consuming excessive resources. The load from each fiber will be
 distributed among a pool of threads.
This is something I didn't implement in my solution so far.
 ... pool of threads.
So the load can be divided and will use all cores of a multi-core machine?
 The idea behind both libraries is to be able to efficiently implement
 network protocols that are either synchronous (HTTP, SMTP, etc.) and
 asynchronous (XMPP, etc.) in nature in both client and server applications.
As I already said on IRC, SSL support would be nice for this idea.

Best regards,
Daniel
Jun 23 2007
Juan Jose Comellas <jcomellas gmail.com> writes:
The problem with your approach is that it won't scale very well on Windows.
The Selector implementation on Windows uses the select() API, which is very
inefficient. The new I/O interface will use I/O completion ports, which
scale much better but are inherently asynchronous and not appropriate for a
selector-like interface.


Jul 10 2007
eao197 <eao197 intervale.ru> writes:
Looks like a significant part of ACE (ACE_Proactor) is going into Tango.
Good luck!

--
Regards,
Yauheni Akhotnikau
Jun 23 2007
Robert Fraser <fraserofthenight gmail.com> writes:
Awesome! As a SEDA lover, I give you my 100% stamp of approval! Thanks for all
the hard work you guys are putting in!

Out of curiosity, will Tango be 2.0-ready soon?

Jun 24 2007
Sean Kelly <sean f4.ca> writes:
Robert Fraser wrote:
 
 Out of curiosity, will Tango be 2.0-ready soon?
I don't plan to even begin making Tango 2.0-ready until the 2.0 design is
pretty conclusively settled. Doing so will require a great deal of work, and
I don't want to do it more than once.

Sean
Jun 24 2007
Ingo Oeser <ioe-news rameria.de> writes:
Lars Ivar Igesund wrote:

 During the second stage of development we will build a framework on top of
 the asynchronous I/O library that will be able to multiplex I/O jobs using
 Tango Fibers (i.e. lightweight or userspace threads). Each fiber waiting
 for I/O events will be suspended until the event is received, helping to
 avoid consuming excessive resources. The load from each fiber will be
 distributed among a pool of threads.
Will it be possible to globally disable that, or to compile a version
without that feature, without the application noticing? The reason is that
the Tango scheduler for these will NEVER be efficient enough to do it right,
since it simply doesn't know enough about global system state to make
scheduling decisions at all. Are you sure you load each core properly? Use
HT properly? Schedule correctly for NUMA? Do you solve priority inversion
properly? Can you schedule correctly as part of a distributed (operating)
system? What about real time?

You'll also have much fun with unixoid systems and signals if you do that.

What about something like jobs and job queues? Or even better: support
OpenMP! GCC 4.2.x has everything there already, and I'm sure DMD will get
this one day :-)

Fibers, green threads and user-space threading of that kind have already
proved to be a can of worms, and are only necessary if the native thread
implementation of an OS sucks. For these thread-challenged platforms, just
provide a user-space threading solution deeply embedded and hidden in Tango.

Oh, and one thing: never ever start threads in a library! The garbage
collector stuff is necessary and OK, but anything more is going to lead to
hard-to-debug deadlocks and priority inversions. Had fun with such an ATI
library already :-/ Just say NO to threading stuff in your damn good
library. It will make it unusable for me.

Thank you very much in advance!

Best Regards

Ingo Oeser
Jul 24 2007
Sean Kelly <sean f4.ca> writes:
Ingo Oeser wrote:
 Lars Ivar Igesund wrote:
 
 During the second stage of development we will build a framework on top of
 the asynchronous I/O library that will be able to multiplex I/O jobs using
 Tango Fibers (i.e. lightweight or userspace threads). Each fiber waiting
 for I/O events will be suspended until the event is received, helping to
 avoid consuming excessive resources. The load from each fiber will be
 distributed among a pool of threads.
Will it be possible to globally disable that or compile a version without that feature and have the application not noticing that? Reason is the that the Tango scheduler for these will NEVER be efficient enough to do that right, since it just doesn't know enough about the system global state to make scheduling decisions at all. Are you sure, you load each Core properly? Use HT properly? Schedule correctly for NUMA? Do you solve priority inversion properly? Can you schedule correctly being part of distributed (operating) system? What about real time?
I think the proposed IO system will largely be a generalized abstraction for multiplexing mechanisms provided by the OS: IOCP on Win32, epoll on Linux, etc.
 You'll also have much fun with unixoide systems and signals, if you do that.
 
 What about something like jobs and job queues? 
I've been thinking about adding these anyway, though it might be kind of interesting to mix job processing with the IO dispatch mechanism.
 Or even better: Support OpenMP! GCC 4.2.x has everything there already, 
 and I'm sure DMD will get this one day :-)
 
 Fibers, Greenthreads and user space threading stuff like that proved to be
 a can of worms already and is only necessary, if the native thread
 implementation of an OS sucks. 
Fibers/Coroutines are useful in that they vastly simplify the creation of
state machines in many cases. They certainly aren't suitable for every
task, but I think they have enough general utility that they should be
available to the user.

Green threads may be another story, however, because such designs typically
imply some sort of scheduling mechanism. I'm simply not convinced that
there's any way to implement them effectively in a language like D. And
besides, some OSes take care of this for you--Solaris, for example.
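A tokenizer written as a coroutine, for example, keeps its position in
ordinary local variables instead of an explicit state enum -- something
along these lines, using Fiber from tango.core.Thread (the scaffolding
around it is purely illustrative):

import tango.core.Thread;

char[][] tokens;                 // output collected by the fiber (illustrative)

void tokenize(char[] text)
{
    size_t start = 0;
    for (size_t i = 0; i <= text.length; ++i)
    {
        if (i == text.length || text[i] == ' ')
        {
            if (i > start)
            {
                tokens ~= text[start .. i];
                Fiber.yield();   // hand one token back, remember our place
            }
            start = i + 1;
        }
    }
}

void main()
{
    void run() { tokenize("one two three"); }
    auto gen = new Fiber(&run);

    // each call() resumes the loop exactly where the last yield() left off
    gen.call();   // tokens now holds "one"
    gen.call();   // ... and "two"
    gen.call();   // ... and "three"
}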
 For these thread challenged platforms, just provide a user space threading 
 solutions deeply embedded and hidden in Tango.
 
 Oh and one thing: Never ever start threads in a library! The garbage 
 collector stuff is necessary and ok, but anything more is going to lead to
 hard to debug deadlocks and priority inversions. Had fun with such an ATI 
 library already :-/
Thread pools are a common tool for multiplexed IO. So much so, in fact,
that it's impossible to use IOCP without one. But I agree that libraries
shouldn't take control away from the user.

In fact, my personal approach with Tango is that it should be easy to use
in the average case, but provide an elegant means to get "to the metal" for
discerning programmers. I think we have succeeded thus far, and hope that
this can extend to the new IO package as well.

Sean
Jul 24 2007
Paul Findlay <r.lph50+d gmail.com> writes:
Ingo Oeser wrote:
 Fibers, Greenthreads and user space threading stuff like that proved to
 be a can of worms already and is only necessary, if the native thread
 implementation of an OS sucks.
Sean Kelly wrote:
 Fibers/Coroutines are useful in that they vastly simplify the creation
 of state machines in many cases.  They certainly aren't suitable for
 every task, but I think they have enough general utility that they
 should be available to the user.
Are fibers able to take advantage of the machine's cache any better than
swapping threads (with their dramatically larger stack space) in and out?

I sort of imagine a whole bunch of fibers allocated in a contiguous block
of memory are going to be much better off than a whole bunch of threads
allocated in a similar manner and doing a similar job. Unfortunately I have
nil experience with fibers and therefore no way to validate this :)

 - Paul
Jul 25 2007
Sean Kelly <sean f4.ca> writes:
Paul Findlay wrote:
 Ingo Oeser wrote:
 Fibers, Greenthreads and user space threading stuff like that proved to
 be a can of worms already and is only necessary, if the native thread
 implementation of an OS sucks.
Sean Kelly wrote:
 Fibers/Coroutines are useful in that they vastly simplify the creation
 of state machines in many cases.  They certainly aren't suitable for
 every task, but I think they have enough general utility that they
 should be available to the user.
Are fibers able to take advantage of the machine's cache any better than swapping threads (with their dramatically larger stack space) in and out?
Probably not. But context switching fibers is much faster than context switching threads, which is a selling point in some cases. Another being that a non-running fiber can be passed between threads just like a delegate.
 I
 sort of imagine a whole bunch of fibres allocated in a contiguous block of
 memory are going to be much better off than a whole bunch of threads
 allocated in a similar manner and doing a similar job. Unfortunately I have
 nil experience with fibers and therefore no way to validate this :)
Fibers allocate memory using mmap or VirtualAlloc, so assuming a number of
fibers are all allocated at the same time, their memory may well be
contiguous. But there isn't any explicit pooling of memory for fibers or
anything like that.

Sean
Jul 25 2007
Ingo Oeser <ioe-news rameria.de> writes:
Sean Kelly wrote:

 Paul Findlay wrote:
 Sean Kelly wrote:
 Fibers/Coroutines are useful in that they vastly simplify the creation
 of state machines in many cases.  They certainly aren't suitable for
 every task, but I think they have enough general utility that they
 should be available to the user.
So the state of that state machine always becomes at least 4K? OK, that's
less state than a thread, but if you write a state machine, you usually
need less. If you need more, you can use a thread.

There are usually heavy states and small states. Heavy states involve
several synchronisation primitives and are usually handled by threads or
processes. Small states are just a bunch of handles, are in the range of a
few hundred bytes, and use the state machines of the OS kernel (e.g. file
position and two file handles -> FTP server control session). Forcing a
page here is just waste. The last larger per-connection structure I needed
was 460 bytes. That was the complete state of a chat user, and it's already
more than 10% of a page.
 Are fibers able to take advantage of the machine's cache any better than
 swapping threads (with their dramatically larger stack space) in and out?
Probably not. But context switching fibers is much faster than context switching threads, which is a selling point in some cases.
So better improve the threading support :-) Did you measure your claim? Did you dirty the caches (aka "use the state of the state machine"), before context switching?
 Another being that a non-running fiber can be passed between threads 
 just like a delegate.
That's the first valid argument for me pro Fibers. But doesn't that make scope and dataflow analysis harder for the compiler? With delegates you just have to prove, that the object members are not accessed. With Fibers, you have to prove the same thing for the whole stack the fiber uses.
 Fibers allocate memory using mmap or VirtualAlloc, so assuming a number
 of fibers are all allocated at the same time then their memory may well
 be contiguous.  But there isn't any explicit pooling of memory for
 fibers or anything like that.
Without paging, per-node placement is difficult (you would need per-node
pools).

I'm already happy that Tango doesn't try to schedule that stuff and doesn't
try to create it transparently. I can live with dead code in the library.
Good that this "If you don't use it, we won't either." mantra is
implemented in Tango.

The only dirty hack left is inpl()/outpl() and friends. They are simply not
available on many architectures and are becoming unimportant on the PC due
to MMIO. A modern language like D should handle them as a special address
space, which is a nice feature for a compiler and required for some
microcontrollers and some DSPs. In C this is supported via CPP hacks :-/

Best Regards

Ingo Oeser
Jul 27 2007
Sean Kelly <sean f4.ca> writes:
Ingo Oeser wrote:
 Sean Kelly wrote:
 
 Paul Findlay wrote:
 Sean Kelly wrote:
 Fibers/Coroutines are useful in that they vastly simplify the creation
 of state machines in many cases.  They certainly aren't suitable for
 every task, but I think they have enough general utility that they
 should be available to the user.
So the state of that state machine becomes at least 4K always? Ok, that's less state than a thread. But if you write a state machine, you need usually less. If you need more, you can use a thread.
Good point. The state machines I use tend to require very little memory, though I'd be willing to trade that for 4k in some instances if it simplified the programming.
 There usually are heavy states and small states. Heavy states involve
 several synchronisation primitives and are usually done by threads or 
 processes. Small states are just a bunch of handles and are in the range of
 some hundred bytes and use the state machines of the OS kernel (e.g. file
 position, and 2 file handles -> FTP-Server-Control-Session).
Or even less. I've used state machines for formatted IO, and the requirements there are often minuscule.
 Are fibers able to take advantage of the machine's cache any better than
 swapping threads (with their dramatically larger stack space) in and out?
Probably not. But context switching fibers is much faster than context switching threads, which is a selling point in some cases.
So better improve the threading support :-)
Well, it's more an OS limitation than anything.
 Did you measure your claim? Did you dirty the caches 
 (aka "use the state of the state machine"), before context switching?
Nope. But Mikola Lysenko, the author of the StackThreads package on which
Tango fibers are based, performed some tests when developing coroutines,
and his StackThreads were substantially faster for multiplexing tasks.
Though perhaps some of this difference was because StackThreads don't need
mutexes to share data.
 Another being that a non-running fiber can be passed between threads 
 just like a delegate.
That's the first valid argument for me pro Fibers. But doesn't that make scope and dataflow analysis harder for the compiler? With delegates you just have to prove, that the object members are not accessed. With Fibers, you have to prove the same thing for the whole stack the fiber uses.
Yes. The compiler isn't aware of fibers, so dataflow analysis isn't any better than it would be with threads.
 Good that this "If you don't use it, we won't either." mantra is implemented 
 in Tango.
That's the goal :-)
 Only dirty hack left is inpl()/outpl() and friends. They are simply not
 available on many architectures and are getting unimportant on PC due to
 MMIO. A modern language like D should handle them as special address space, 
 which is a nice feature for a compiler and required for some 
 micro controllers and some DSPs. In C this is supported via CPP-hacks :-/
I'll have to read up on these instructions. Hadn't heard of them before.

Sean
Jul 28 2007
Ingo Oeser <ioe-news rameria.de> writes:
Hi Sean,

first of all: I've set the Followup-To to digitalmars.D, since we are not
announcing anything here anymore :-)
I've also changed the subject a little.

Sean Kelly wrote:
 Are fibers able to take advantage of the machine's cache any better
 than swapping threads (with their dramatically larger stack space) in
 and out?
Probably not. But context switching fibers is much faster than context switching threads, which is a selling point in some cases.
So better improve the threading support :-)
Well, it's more an OS limitation than anything.
So you give the library user a tool to work around these issues in his
platform-agnostic code? Hope he will use a version(Limited_OS) then :-)
 Did you measure your claim? Did you dirty the caches
 (aka "use the state of the state machine"), before context switching?
Nope. But Mikola Lysenko, the author of StackThreads on which Tango fibers are based performed some tests when developing coroutines and his StackThreads were substantially faster for multiplexing tasks.
Didn't find his measurements either (using Google, Mikola Lysenko StackThreads). Do you have a pointer somewhere? Did he bother to measure that on Linux, which is known for low context switching latency?
 Though perhaps some of this difference was because StackThreads 
 don't need mutexes to share data.
The same thing can be done with per-thread data. An obvious implementation
would be a property accessible only by the owning thread. If the property
is an AA, we have POSIX thread-local storage there.

The first solution to (performance) problems with shared data is to not
share the data at all, but to keep it local or replicate it periodically.
That scales quite well. The second solution is more fine-grained locking.
The third solution is optimising your locking primitives. Order usually
matters in applying these solutions, since the effort rises from first to
third.

Oh, and looking at the implementation, I currently see MORE global state in
Tango due to Fibers. I mean the hacks for "Context".
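A small sketch of the first solution -- keep the data thread-local and merge
it once at the end (Thread is assumed from tango.core.Thread; the
word-counting scenario is only an example):

import tango.core.Thread;

int[char[]] totals;              // shared result, touched only under the lock
Object totalsLock;

void bump(ref int[char[]] counts, char[] key)
{
    auto p = key in counts;
    if (p) (*p)++;
    else counts[key] = 1;
}

void countWords(char[][] slice)
{
    int[char[]] local;           // thread-private: no locking in the hot path
    foreach (word; slice)
        bump(local, word);

    synchronized (totalsLock)    // one lock per worker, not one per item
    {
        foreach (key, count; local)
        {
            auto p = key in totals;
            if (p) *p += count;
            else totals[key] = count;
        }
    }
}

void main()
{
    totalsLock = new Object;

    char[][] words;
    words ~= "tango";
    words ~= "io";
    words ~= "tango";

    void run() { countWords(words); }
    auto t = new Thread(&run);
    t.start();
    t.join();
}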
 Only dirty hack left is inpl()/outpl() and friends. They are simply not
 available on many architectures and are getting unimportant on PC due to
 MMIO. A modern language like D should handle them as special address
 space, which is a nice feature for a compiler and required for some
 micro controllers and some DSPs. In C this is supported via CPP-hacks :-/
I'll have to read up on these instructions. Hadn't heard of them before.
inp() and out() implement the x86 port I/O facilities. These are one-liners
in assembler, and the implementation differs depending on whether you are
in ring 0 (direct operation) or ring 3 (OS call). They might be useful for
writing device drivers in D. But if you write device drivers, they ARE
ALREADY OS-specific, and each OS has much better-defined abstractions of
these routines. So I would just drop them from Tango. Maybe deprecate them
first.

Best Regards

Ingo Oeser
Jul 30 2007
Ingo Oeser <ioe-news rameria.de> writes:
Hi Sean,

Sean Kelly wrote:
 I think the proposed IO system will largely be a generalised abstraction
 for multiplexing mechanisms provided by the OS: IOCP on Win32, epoll on
 Linux, etc.
Ok, so that context stuff will be dead/eliminated code on Linux, if I don't use it. I can live with that :-)
 You'll also have much fun with unixoide systems and signals, if you do
 that.
 
 What about something like jobs and job queues?
I've been thinking about adding these anyway, though it might be kind of interesting to mix job processing with the IO dispatch mechanism.
They are orthogonal. Let the user decide what he puts into a job. Any
program sequence that is required to be ordered is a job. Your thread pools
should then just pull these jobs from some queue. Submitting jobs to thread
pools is an important system library service. Job dispatch is a superset of
I/O dispatch.
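Something along these lines is all I mean by a job queue: a synchronized
list of delegates that pool threads pull from (Thread and Thread.sleep are
assumed from tango.core.Thread; a real pool would block on a condition
variable instead of the sleep-poll shown here):

import tango.core.Thread;

class JobQueue
{
    private void delegate()[] jobs;

    void push(void delegate() job)
    {
        synchronized (this) jobs ~= job;
    }

    // returns null when no job is queued
    void delegate() pop()
    {
        synchronized (this)
        {
            if (jobs.length == 0)
                return null;
            auto job = jobs[0];
            jobs = jobs[1 .. $];
            return job;
        }
    }
}

class Worker
{
    private JobQueue queue;
    this(JobQueue q) { queue = q; }

    void work()
    {
        while (true)
        {
            auto job = queue.pop();
            if (job !is null)
                job();               // each job is one ordered sequence
            else
                Thread.sleep(0.01);  // idle; a real pool would block here
        }
    }
}

void startPool(JobQueue queue, int workers)
{
    for (int i = 0; i < workers; ++i)
    {
        auto w = new Worker(queue);
        (new Thread(&w.work)).start();
    }
}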
 Or even better: Support Open MP! GCC 4.2.x has everything there already,
 and I'm sure DMD will get this one day :-)
What about this OpenMP stuff? It might be useful in container classes and
for loops, as Intel's TBB shows. For D and its foreach() loops, it should
be even easier to prove possible vectorisation and parallelism.
 Green may be another story however, because such designs typically imply
 some sort of scheduling mechanism and such.  I'm simply not convinced
 that there's any way to implement them effectively in a language like D.
   And besides, some OSes take care of this for you--Solaris, for example.
Good to know that you did a wise decision here :-)
 Thread pools are a common tool for multiplexed IO.  So much so, in fact,
 that it's impossible to use IOCP without one.  
Oh, I didn't know of that deficiency.
 But I agree that libraries shouldn't take control away from the user. 
 In fact, my personal approach with Tango is that it should be easy to 
 use in the average case, but provide an elegant means to get "to the
 metal" for discerning programmers.  
I'm more concerned about features which hide O(N^2) algorithms without
stating it, or which have effects like priority inversion and deadlocks due
to behind-your-back decisions ("hey, starting a realtime thread is a nice
hack here") that are considered "good for you". People with really big iron
(1024 CPUs and more, with matching RAM and storage) will become very angry
with you :-)

At least your interfaces should not try to annoy them. Implementation
doesn't matter, because it can be changed without any effect.
 I think we have succeeded thus far, and hope that this can extend to the
 new IO package as well. 
Nearly. inpl() and outpl() are a no-go for a multi-platform library.
Requiring threads and the like is also OK as per-platform functionality,
but a no-go as a mandatory library interface.

PS: Should we follow up to digitalmars.D?

Best Regards

Ingo Oeser
Jul 27 2007