
digitalmars.D - Promises in D

reply Vladimir Panteleev <thecybershadow.lists gmail.com> writes:
Hi,

Here is an implementation of promises in D:

https://github.com/cybershadow/ae/blob/next/utils/promise.d

Docs: https://ae.dpldocs.info/ae.utils.promise.Promise.html

It attempts to implement the Promises/A+ standard as closely as 
possible.

Some thoughts about promises in D:

JavaScript's path towards asynchronous programming was callbacks 
-> promises -> async/await. Promises in JavaScript are one of the 
basic building blocks (and the next step down in terms of 
lowering) of async/await: `async` functions transparently return 
a promise, and `await` accepts a promise and "synchronously" 
waits for it to resolve, converting failures into thrown 
exceptions.

D doesn't have async/await (and adding it would probably require 
a *significant* amount of work in the compiler), but D does have 
fibers. An interesting observation is that the same principles of 
async/await and promise interaction also apply to fibers: a task 
running in a fiber can be represented as a promise, and, in the 
fiber world, `await` is just a simple function which yields the 
fiber and wakes it up when the promise resolves (also converting 
failures into thrown exceptions).
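The fiber-world `await` described above can be sketched in a few lines. This is a minimal single-threaded illustration, not the ae API; `Promise`, `subscribe`, and the rest are hypothetical names made up for this sketch.

```d
import core.thread : Fiber;

// A hypothetical minimal single-threaded promise (not the ae API).
class Promise(T)
{
    bool settled;
    T value;
    Throwable error;
    private void delegate()[] continuations;

    void fulfill(T v) { value = v; settle(); }
    void reject(Throwable e) { error = e; settle(); }

    void subscribe(void delegate() cont)
    {
        if (settled) cont(); else continuations ~= cont;
    }

    private void settle()
    {
        settled = true;
        foreach (c; continuations) c();
        continuations = null;
    }
}

// `await` in the fiber world: yield the current fiber, wake it up when
// the promise resolves, and convert failures into thrown exceptions.
T await(T)(Promise!T p)
{
    auto self = Fiber.getThis();
    assert(self !is null, "await must be called from within a fiber");
    if (!p.settled)
    {
        p.subscribe({ self.call(); }); // resume the fiber on resolution
        Fiber.yield();                 // suspend until then
    }
    if (p.error !is null) throw p.error;
    return p.value;
}

void main()
{
    auto p = new Promise!int;
    int result;
    auto task = new Fiber({ result = await(p); });
    task.call();    // runs until `await` suspends the fiber
    p.fulfill(42);  // wakes the fiber; `await` returns the value
    assert(result == 42);
}
```

The point is that `await` is indeed "just a simple function" here; all the machinery is in the fiber scheduler and the promise's continuation list.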

Fibers do have overhead: they require a stack allocation per 
task and pay the cost of context switching, so they may not be 
the best solution all of the time. So, although an 
asynchronous networking / event loop library could just build 
everything on fibers (as older versions of Vibe.d have), it would 
seem that you could instead use promises as the lower-overhead 
"glue", and make fibers opt-in.
Apr 06
parent reply Vladimir Panteleev <thecybershadow.lists gmail.com> writes:
On Wednesday, 7 April 2021 at 06:41:36 UTC, Vladimir Panteleev 
wrote:
 Hi,

 [...]
Sorry, I sent this too soon.

Going even lower, there are callbacks/delegates. In JavaScript, there are few reasons to use callbacks today, because they are much less composable and don't provide much other benefit. However, there is one area in D where callbacks trump both fibers and promises: a callback's caller controls the lifetime of the passed values, whereas promises require that their held value either is owned by the promise, or has infinite lifetime (i.e. is immutable and on the heap).

So, if you were to write a low-level high-performance networking library, you would probably want to have:

void delegate(const(ubyte)[] bytes) onDataReceived;

where `bytes` is owned by the caller and is only valid until `onDataReceived` returns. You can't do the same with promises, because promises may be `then`'d at any point in the future. You can't really do this with fibers either, because `await` returns its value as an lvalue, and the network library doesn't know how long the user program needs the buffer for. (In the case of fibers, you could instead do `ubyte[4096] buf; auto bytes = socket.receive(buf[]);`, but that's less flexible and may involve an additional copy.)

I see that eventcore uses delegates, which is probably the right choice considering the above.

In any case, here is the code that inspired me to write the promises module:

https://github.com/CyberShadow/ae/blob/ab6cb48e338047c5a30da7af7eeba122181ba1cd/demo/x11/demo.d#L76-L86

(Some good old callback hell.)

Here is the version with promises:

https://github.com/CyberShadow/ae/blob/3400e45bc14d93de381136a03b3db2dac7f56785/demo/x11/demo.d#L82-L88

Much better, but would be even nicer with fibers. :)
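To make the lifetime point concrete, here is a sketch of the delegate style described above. `FakeSocket` and `simulateReceive` are made-up names for illustration, not a real library: the buffer lives on the caller's stack, and the slice handed to the callback is valid only for the duration of the call.

```d
// Illustrative only; FakeSocket and its members are invented for this sketch.
struct FakeSocket
{
    void delegate(const(ubyte)[] bytes) onDataReceived;

    // Simulates the event loop handing us received data. The buffer is
    // owned by this function; the slice passed to the callback is valid
    // only until the callback returns.
    void simulateReceive()
    {
        ubyte[4096] buf;
        static immutable ubyte[5] data = [1, 2, 3, 4, 5];
        buf[0 .. data.length] = data[];
        onDataReceived(buf[0 .. data.length]);
        // `buf` dies here; retaining the slice would be a bug.
    }
}

void main()
{
    FakeSocket sock;
    size_t total;
    sock.onDataReceived = (const(ubyte)[] bytes)
    {
        total += bytes.length; // consume now; use bytes.dup to keep a copy
    };
    sock.simulateReceive();
    assert(total == 5);
}
```

No allocation happens on the receive path; the trade-off is that the callee must copy (`bytes.dup`) if it needs the data after returning.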
Apr 06
next sibling parent reply Calvin P <cloudlessapp gmail.com> writes:
On Wednesday, 7 April 2021 at 06:51:12 UTC, Vladimir Panteleev 
wrote:
 Going even lower, there are callbacks/delegates. In JavaScript, 
 there are few reasons to use callbacks today, because they are 
 much less composable and don't provide much other benefit. 
 However, there is one reason in D where callbacks trump both 
 fibers and promises: a callback's caller controls the lifetime 
 of the passed values, whereas promises require that their held 
 value either is owned by the promise, or has infinite lifetime 
 (i.e. is immutable and on the heap). So, if you were to write a 
 low-level high-performance networking library, you would 
 probably want to have:

 [...]
I use refCount to pass resources with the promises provided by QuickJS. It also works well with async I/O code based on D.
Apr 07
parent reply Vladimir Panteleev <thecybershadow.lists gmail.com> writes:
On Wednesday, 7 April 2021 at 16:14:15 UTC, Calvin P wrote:
 I use refCount to pass resources with the promises provided by 
 QuickJS. It also works well with async I/O code based on D.
Reference counting doesn't change the situation. With delegates, the resource provider can pass a temporary view into an internal buffer owned by the provider. With reference counting, you still need to allocate memory dynamically, and possibly do an additional copy from the internal buffer to the reference-counted one.
Apr 07
next sibling parent reply Calvin P <cloudlessapp gmail.com> writes:
On Wednesday, 7 April 2021 at 16:43:59 UTC, Vladimir Panteleev 
wrote:
 Reference counting doesn't change the situation. With 
 delegates, the resource provider can pass a temporary view into 
 an internal buffer owned by the provider. With reference 
 counting, you still need to allocate memory dynamically, and 
 possibly do an additional copy from the internal buffer to the 
 reference-counted one.
I don't think so. A buffer instance and its derived slice instances can share one refcount pointer. If the promise no longer needs to own the buffer, it just releases the refcount; when the caller is finished with the buffer, it likewise releases the refcount. If the buffer needs to be handled by multiple callbacks across threads, it is passed by const; if a callback needs to modify it, it should copy the slice.

A const temporary view (in my case a const Slice) or a non-const temporary view (a Unique Slice) can work with an atomic refcount in thread-safe code. A Unique Slice borrows ownership from the Buffer and returns it to the Buffer when released. If the Buffer's refcount is zero when the Unique Slice is released, the resource is returned to an object pool for reuse.

I don't see a case where your promise solution can avoid copying the slice but refcounting needs a copy.
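A rough sketch of the scheme described above, with invented names (`RcBuffer`/`RcSlice` are not from any real library, and the object pool is reduced to a comment): the buffer and the slices derived from it share one reference count, and the storage can be recycled once the count reaches zero.

```d
// Illustrative sketch of a buffer and slices sharing one refcount.
struct RcSlice
{
    const(ubyte)[] view;
    private size_t* refs;

    void release() { --*refs; } // at zero, the storage may be pooled/reused
}

struct RcBuffer
{
    ubyte[] storage;
    private size_t* refs;

    static RcBuffer allocate(size_t n)
    {
        auto refs = new size_t;
        *refs = 1;
        return RcBuffer(new ubyte[n], refs);
    }

    // A derived slice borrows ownership: it bumps the shared count.
    RcSlice slice(size_t lo, size_t hi)
    {
        ++*refs;
        return RcSlice(storage[lo .. hi], refs);
    }

    size_t refCount() const { return *refs; }

    void release() { --*refs; }
}

void main()
{
    auto buf = RcBuffer.allocate(4096);
    auto s = buf.slice(0, 16);  // buffer and slice share one count
    assert(buf.refCount == 2);
    s.release();                // the slice returns its borrow
    buf.release();              // count reaches zero: storage is reusable
    assert(buf.refCount == 0);
}
```

Note that this still requires the backing storage to be dynamically allocated (or pooled), which is the cost Vladimir's reply below points out.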
Apr 07
parent Vladimir Panteleev <thecybershadow.lists gmail.com> writes:
On Thursday, 8 April 2021 at 04:53:47 UTC, Calvin P wrote:
 I don't see a case where your promise solution can avoid copying 
 the slice but refcounting needs a copy.
The paragraph you quoted discussed delegates, not promises. Promises always need a copy, as I stated in my post.
 If the Buffer's refcount is zero when the Unique Slice is 
 released, the resource is returned to an object pool for reuse.
You don't need an object pool at all with delegates. You can just pass a slice of an array on the stack.
Apr 07
prev sibling parent reply Calvin P <cloudlessapp gmail.com> writes:
On Wednesday, 7 April 2021 at 16:43:59 UTC, Vladimir Panteleev 
wrote:
 You don't need an object pool at all with delegates. You can 
 just pass a slice of an array on the stack.
Yes, I could just free it.
 The paragraph you quoted discussed delegates, not promises. 
 Promises do need a copy always, as I stated in my post.
I think it depends on the promise's lifetime versus the caller's lifetime. In my code, the buffer is copied when the caller's lifetime ends (e.g. on scope exit or in the __dtor), or when it needs a bigger capacity.

If the promise outlives the caller, I cannot pass the caller's delegate as reject/resolve, because the caller could already be destroyed; in that case the callback receives a copied slice as the result (the copy takes place when the caller destroys the buffer). If you can pass the caller's delegate to the promise (i.e. the caller is still alive), then you can also pass the slice to the promise. If the caller is destroyed before resolve/reject, then either your code crashes or you need to copy your temporary view.

A Buffer being destroyed doesn't mean the memory is released; it can be moved into another instance, or held by a Slice.
Apr 07
parent Calvin P <cloudlessapp gmail.com> writes:
On Thursday, 8 April 2021 at 05:44:15 UTC, Calvin P wrote:
 [...]
The point is: if the caller wants to provide memory to the promise, it should pass a buffer or slice instead of an array. Then the caller can control the lifetime of the buffer/slice, and the buffer is always copied when the buffer/slice's lifetime ends. If this happens in a single-threaded event loop, the buffer/slice's destruction code will make sure the resource remains available to the promise object by making a copy (or not copying, if the caller-owned stack array is still alive when the promise is resolved/rejected).
Apr 07
prev sibling parent reply Sebastiaan Koppe <mail skoppe.eu> writes:
On Wednesday, 7 April 2021 at 06:51:12 UTC, Vladimir Panteleev 
wrote:
 On Wednesday, 7 April 2021 at 06:41:36 UTC, Vladimir Panteleev 
 wrote:
 Hi,
Having been inspired by the Senders/Receivers C++ proposal http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2020/p0443r14.html I started an implementation here: https://github.com/symmetryinvestments/concurrency [*]

I initially dismissed the proposal completely, and it took me at least a few months before I realized the beauty of it.

Now, I see them as fundamental building blocks in asynchronous code. They are cancelable, they avoid unnecessary allocations and synchronizations, and above all, they adhere to the principles of structured concurrency.

This is a good talk from Eric Niebler about them: https://www.youtube.com/watch?v=h-ExnuD6jms

[*] It is still early days, but it implements a fair number of useful asynchronous algorithms.
Apr 07
next sibling parent reply Andre Pany <andre s-e-a-p.de> writes:
On Wednesday, 7 April 2021 at 21:20:11 UTC, Sebastiaan Koppe 
wrote:
 On Wednesday, 7 April 2021 at 06:51:12 UTC, Vladimir Panteleev 
 wrote:
 On Wednesday, 7 April 2021 at 06:41:36 UTC, Vladimir Panteleev 
 wrote:
 Hi,
 [...]
The library looks majorly useful. I just noticed it has the license "proprietary", which makes usage just a little bit more complex in a business environment. Is there any reason for not using a common license?

Kind regards
Andre
Apr 07
next sibling parent reply Max Haughton <maxhaton gmail.com> writes:
On Wednesday, 7 April 2021 at 21:37:10 UTC, Andre Pany wrote:
 On Wednesday, 7 April 2021 at 21:20:11 UTC, Sebastiaan Koppe 
 wrote:
 [...]
 The library looks majorly useful. I just noticed it has the 
 license "proprietary", which makes usage just a little bit more 
 complex in a business environment. Is there any reason for not 
 using a common license?
https://github.com/symmetryinvestments/concurrency/blob/master/LICENSE ?
Apr 07
parent Andre Pany <andre s-e-a-p.de> writes:
On Thursday, 8 April 2021 at 03:40:50 UTC, Max Haughton wrote:
 On Wednesday, 7 April 2021 at 21:37:10 UTC, Andre Pany wrote:
 On Wednesday, 7 April 2021 at 21:20:11 UTC, Sebastiaan Koppe 
 wrote:
 [...]
 [...]
 https://github.com/symmetryinvestments/concurrency/blob/master/LICENSE ?
Yes, I noticed this file. Within dub.sdl it is declared as proprietary. Therefore, from a business perspective, you have to read the license file very carefully. This is a little more effort compared to a common license like Apache, MIT, BSL...

Kind regards
Andre
Apr 07
prev sibling parent Sebastiaan Koppe <mail skoppe.eu> writes:
On Wednesday, 7 April 2021 at 21:37:10 UTC, Andre Pany wrote:
 The library looks majorly useful.
Thanks; just the StopToken alone has already saved my life.
 I just noticed it has the license "proprietary" which makes 
 usage just a little bit more complex in a business environment. 
 Is there any reason for not using a common license?
I think it was an oversight, let me fix it.
Apr 07
prev sibling parent reply Vladimir Panteleev <thecybershadow.lists gmail.com> writes:
On Wednesday, 7 April 2021 at 21:20:11 UTC, Sebastiaan Koppe 
wrote:
 Having been inspired by the Senders/Receivers C++ proposal 
 http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2020/p0443r14.html I
started an implementation here
https://github.com/symmetryinvestments/concurrency [*]

 I initially dismissed the proposal completely, and it took me 
 at least a few months before I realized the beauty of it.

 Now, I see them as fundamental building blocks in asynchronous 
 code. They are cancelable, they avoid unnecessary allocations 
 and synchronizations, and above all, they adhere to the 
 principles of structured concurrency.

 This is a good talk from eric niebler about them:

 https://www.youtube.com/watch?v=h-ExnuD6jms

 [*] it are still early days but it implements a fair bit of 
 useful asynchronous algorithms.
Thanks!

Looking into this a bit, I understand that this doesn't quite attempt to solve the same problems. The document and the talk begin with how this aims to be a tool for doing parallel/asynchronous computations. Promises and async/await are mainly concerned with scheduling execution of code on the CPU asynchronously while avoiding waiting for blocking operations (network or I/O). Everything still runs on one thread.

I wasn't able to quickly divine whether this approach allows avoiding the value copy as delegates do. If it does, would you mind explaining how (such as in the case that a promise/equivalent is resolved immediately, before a result handler is attached)?
Apr 07
parent reply Sebastiaan Koppe <mail skoppe.eu> writes:
On Thursday, 8 April 2021 at 05:42:28 UTC, Vladimir Panteleev 
wrote:
 Thanks!

 Looking into this a bit, I understand that this doesn't quite 
 attempt to solve the same problems.
Indeed, it is much broader.
 The document and the talk begins about how this is aimed to be 
 a tool at doing parallel/asynchronous computations. Promises 
 and async/await are mainly concerned about scheduling execution 
 of code on the CPU asynchronously while avoiding waiting for 
 blocking operations (network or I/O). Everything still runs on 
 one thread.
Senders/Receivers don't impose a specific execution model; you can use them with coroutines, threads, fibers, etc. In the implementation I focused on threads, because that is what we needed, but there isn't anything preventing you from building a fiber scheduler on top of this.
 I wasn't able to quickly divine if this approach allows 
 avoiding the value copy as delegates do. If they do, would you 
 mind explaining how (such as in the case that a 
 promise/equivalent is resolved immediately, before a result 
 handler is attached)?
There is a section in the talk about promises and futures: https://youtu.be/h-ExnuD6jms?t=686

In short, they are eager. This means that they start running as soon as possible, which means the setValue of the promise and the attaching of the continuation can happen concurrently. Therefore space has to be allocated for the return value, as well as some sort of synchronization for the continuation handler.

Senders/Receivers, on the other hand, are lazy. They don't start until after the receiver has been attached. Because of that, they need no allocation for the value, don't need to type-erase the continuation, and there is no concurrent access to the continuation handler.

If you limit your program to a single thread you can avoid the concurrent access to the continuation handler, but that still leaves the value the promise produces: you still need to allocate that on the heap and refcount it.

Because Senders/Receivers are lazy, there is less shared state and the ownership is simpler. On top of that, they can often use the stack of whoever awaits them.
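The laziness can be illustrated with a toy "just" sender. The names below follow the proposal's protocol (`connect`/`start`/`setValue`) but are invented for this sketch, not the symmetryinvestments/concurrency API: nothing runs until `start()`, so the value flows straight through the stack into the receiver with no heap slot or synchronization in between.

```d
// A toy sender that lazily produces one value.
struct JustSender(T)
{
    T value;

    // The operational state: created by connect(), run by start().
    static struct Op(Receiver)
    {
        T value;
        Receiver receiver;

        void start()
        {
            // The value lives in this stack frame; setValue hands it
            // to the receiver directly, with no allocation in between.
            receiver.setValue(value);
        }
    }

    auto connect(Receiver)(Receiver r)
    {
        return Op!Receiver(value, r);
    }
}

auto just(T)(T value) { return JustSender!T(value); }

// A receiver that records the value it was given.
struct StoreReceiver
{
    int* dest;
    void setValue(int v) { *dest = v; }
    void setError(Throwable e) {}
    void setDone() {}
}

void main()
{
    int result;
    auto op = just(42).connect(StoreReceiver(&result));
    assert(result == 0); // lazy: nothing has run yet
    op.start();          // only now is the value produced
    assert(result == 42);
}
```

Contrast this with an eager promise, which must allocate a slot to hold the value in case the continuation arrives later.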
Apr 08
parent reply Vladimir Panteleev <thecybershadow.lists gmail.com> writes:
On Thursday, 8 April 2021 at 07:39:44 UTC, Sebastiaan Koppe wrote:
 In short, they are eager. This means that they start running as 
 soon as possible. That means the setValue of the promise and 
 the attaching of the continuation can happen concurrently. 
 Therefor space has to be allocated for the return value, as 
 well as some sort of synchronization for the continuation 
 handler.

 Senders/Receivers on the other hand are lazy. They don't start 
 until after the receiver has been attached. Because of that it 
 needs no allocation for the value, doesn't need to type-erase 
 the continuation, and there is no concurrent access on the 
 continuation handler.

 If you limit your program to a single thread you can avoid the 
 concurrent access on the continuation handler, but that still 
 leaves the value the promise produces, you still need to 
 allocate that on the heap and ref count it.

 Because Senders/Receivers are lazy, there is less shared state 
 and the ownership is simpler. On top of that they can often use 
 the stack of whoever awaits them.
I see, thanks! So, if I understand correctly, to put it in layman's terms: senders/receivers is just a structured way to chain together callables, plus propagating errors (as with promises), plus cancellation. I understand that `setValue` just calls the next continuation with its argument (as opposed to storing the value somewhere, as its name might imply), which means that the value may reside on the stack of the sender's start function and remain valid only until `setValue` exits.

The API is also somewhat similar, and I understand the main distinction is that starting execution is explicit (so, at the end of your `.then` chain, there must be a `.start()` call or something like that).
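This understanding can be made concrete with a toy chain (again with invented names, not the library's API): each `then` wraps the upstream sender, `setValue` calls straight through, and nothing happens until the explicit `.start()` at the end.

```d
// A toy lazy sender producing one value.
struct JustSender(T)
{
    T value;

    static struct Op(Receiver)
    {
        T value;
        Receiver receiver;
        void start() { receiver.setValue(value); }
    }

    auto connect(Receiver)(Receiver r) { return Op!Receiver(value, r); }
}

auto just(T)(T value) { return JustSender!T(value); }

// `then` wraps a sender and a function into a new sender.
struct ThenSender(Sender, alias fun)
{
    Sender inner;

    static struct WrapReceiver(Receiver)
    {
        Receiver next;
        // setValue calls straight through: the argument may live on the
        // upstream start()'s stack and is valid only until we return.
        void setValue(V)(V v) { next.setValue(fun(v)); }
        void setError(Throwable e) { next.setError(e); }
        void setDone() { next.setDone(); }
    }

    auto connect(Receiver)(Receiver r)
    {
        return inner.connect(WrapReceiver!Receiver(r));
    }
}

auto then(alias fun, Sender)(Sender s) { return ThenSender!(Sender, fun)(s); }

struct StoreReceiver
{
    int* dest;
    void setValue(int v) { *dest = v; }
    void setError(Throwable e) {}
    void setDone() {}
}

void main()
{
    int result;
    auto chain = just(21).then!(x => x * 2);
    auto op = chain.connect(StoreReceiver(&result));
    op.start(); // the explicit start at the end of the chain
    assert(result == 42);
}
```

Note there is no value storage anywhere in the chain; each step's `setValue` is an ordinary (inlinable) call down the stack.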
 Senders/Receivers doesn't impose a specific execution model, 
 you can use it on coroutines, threads, fibers, etc. In the 
 implementation I focused on threads cause that is what we 
 needed, but there isn't anything preventing from building an 
 fiber scheduler on top of this.
I see how you could write a fiber-based executor/scheduler, but I don't see how you could use these as a base for a synchronous fiber API like async/await. With delegates (and senders/receivers), there is a known finite lifetime for the value being propagated. With async/await, the value is obtained as the return value of `await`, which does not really provide a way to notify the value's source when it is no longer needed.
Apr 08
parent reply Sebastiaan Koppe <mail skoppe.eu> writes:
On Thursday, 8 April 2021 at 09:31:53 UTC, Vladimir Panteleev 
wrote:
 I see, thanks! So, if I understand correctly - to put it in 
 layman terms, senders/receivers is just a structured way to 
 chain together callables, plus propagating errors (as with 
 promises), plus cancellation. I understand that `setValue` just 
 calls the next continuation with its argument (as opposed to 
 storing the value somewhere as its name might imply), which 
 means that the value may reside on the stack of the sender's 
 start function, and remain valid only until `setValue` exits.
 The API is also somewhat similar, and I understand the main 
 distinction is that starting execution is explicit (so, at the 
 end of your `.then` chain, there must be a `.start()` call 
 or something like that).
Yes, but be aware that the caller of `.start()` has the obligation to keep the operational state alive until *after* one of the three receiver's functions is called.

Often, instead of calling `.start`, you would call `.sync_wait`, or just return the sender itself (and have the parent take care of it).
 I see how you could write a fiber-based executor/scheduler, 
 but I don't see how you could use these as a base for a 
 synchronous fiber API like async/await. With delegates (and 
 senders/receivers), there is a known finite lifetime of the 
 value being propagated. With async/await, the value is obtained 
 as the return value of `await`, which does not really provide a 
 way to notify the value's source of when it is no longer needed.
Hmm, I see. But isn't that a limitation of async/await itself?

I suppose the solution would be to build refcounts on top of the value, such that the promise holds a reference to the value (slot), as well as any un-called continuations. That would tie the lifetime of the value to that of the promise and all its continuations.

Ultimately this is all caused by the promise's design, specifically the fact that you can `.then` the same promise twice and get the same value. Senders/Receivers don't have this: you get the value/error/done exactly once. Calling start again is not allowed.
Apr 08
parent reply Vladimir Panteleev <thecybershadow.lists gmail.com> writes:
On Thursday, 8 April 2021 at 11:13:55 UTC, Sebastiaan Koppe wrote:
 On Thursday, 8 April 2021 at 09:31:53 UTC, Vladimir Panteleev 
 wrote:
 I see, thanks! So, if I understand correctly - to put it in 
 layman terms, senders/receivers is just a structured way to 
 chain together callables, plus propagating errors (as with 
 promises), plus cancellation. I understand that `setValue` 
 just calls the next continuation with its argument (as opposed 
 to storing the value somewhere as its name might imply), which 
 means that the value may reside on the stack of the sender's 
 start function, and remain valid only until `setValue` exits.
 The API is also somewhat similar, and I understand the main 
 distinction is that starting execution is explicit (so, at 
 the end of your `.then` chain, there must be a `.start()` call 
 or something like that).
Yes, but be aware that the caller of `.start()` has the obligation to keep the operational state alive until *after* one of the three receiver's functions is called.
Sorry, what does operational state mean here? Does that refer to the root sender object (which is saved on the stack and referenced by the objects implementing the intermediate steps/operations)? Or something else (locals referenced by the lambdas performing the asynchronous operations, though I guess in that case DMD would create a closure)?

Also, does this mean that this approach is not feasible for `@safe` D?
 Often, instead of calling `.start` you would call `.sync_wait`, 
 or just return the sender itself (and have the parent take care 
 of it).
I'm finding it a bit difficult to imagine how that would look like on a larger scale. Would it be possible to write e.g. an entire web app where all functions accept and return senders, with only the top-level function calling `.start`? Or is there perhaps a small demo app making use of this as a demonstration? :)
 I see how you could write a fiber-based executor/scheduler, 
 but I don't see how you could use these as a base for a 
 synchronous fiber API like async/await. With delegates (and 
 senders/receivers), there is a known finite lifetime of the 
 value being propagated. With async/await, the value is 
 obtained as the return value of `await`, which does not really 
 provide a way to notify the value's source of when it is no 
 longer needed.
Hmm, I see. But isn't that a limitation of async/await itself? I suppose the solution would be to build refcounts on top of the value, such that the promise holds a reference to the value (slot), as well as any un-called continuations. That would tie the lifetime of the value to that of the promise and all its continuations.
Logically, at any point in time, a promise either has un-called continuations, OR holds a value. As soon as it is fulfilled, it schedules all registered continuations to be called as soon as possible. (In reality there is a small window of time as the scheduler runs these continuations before they consume the value.)

We *could* avoid having to do reference counting or such with promises if we were to:

1. Move the value into the promise, thus making the promise the value's owner
2. Call continuations actually immediately (not "soon", as JavaScript promises do)
3. Define that continuation functions may only use the value until they return.

With these modifications, it is sufficient to make the promise itself reference-counted (or, well, non-copyable). When it is no longer referenced / goes out of scope, all consumers of the value will have been called, and no more can be registered.

However, these modifications unfortunately do make such promises unusable for async/await. There, the continuation is the fragment of the `async` function from one `await` until the next `await` (or the return). We can't really make any assumptions about the lifetime of the value in this case. (I think the same applies to fibers, too?)

The "call soon" requirement is interesting because it does help avoid an entire class of bugs, where something N levels deep pulls the rug from under something N-10 levels deep, so I guess it's a trade-off between performance and potential correctness.
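The three modifications above can be sketched as a hypothetical type (not the ae Promise; `StrictPromise` and its members are invented for this illustration): the value is moved in, continuations run immediately on fulfillment, and they may use the value only until they return.

```d
import std.algorithm.mutation : move;

// Hypothetical: a non-copyable promise that owns its value.
struct StrictPromise(T)
{
    private bool settled;
    private T value;
    private void delegate(ref T)[] continuations;

    @disable this(this); // non-copyable: the promise alone owns the value

    void fulfill(T v)
    {
        value = move(v);   // 1. move the value in; the promise owns it
        settled = true;
        foreach (c; continuations)
            c(value);      // 2. called immediately, not "soon"
        continuations = null;
    }

    // 3. the continuation may only use the value until it returns
    void then(void delegate(ref T) cont)
    {
        if (settled) cont(value);
        else continuations ~= cont;
    }
}

void main()
{
    StrictPromise!int p;
    int seen;
    p.then((ref int v) { seen = v; });     // registered before fulfillment
    p.fulfill(7);                          // runs the continuation at once
    assert(seen == 7);
    p.then((ref int v) { seen = v * 2; }); // late subscriber: runs at once too
    assert(seen == 14);
}
```

When such a promise goes out of scope, every registered consumer has already run, so no reference counting of the value itself is needed.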
 Ultimately this is all caused by the promise's design. 
 Specifically the fact that you can `.then` the same promise 
 twice and get the same value. Senders/Receivers don't have 
 this. You get the value/error/done exactly once. Calling start 
 again is not allowed.
Yeah, I see. They don't hold a copy of the value at all; they are just a protocol for passing it along to the next processing step.
Apr 08
parent Sebastiaan Koppe <mail skoppe.eu> writes:
On Thursday, 8 April 2021 at 11:55:37 UTC, Vladimir Panteleev 
wrote:
 On Thursday, 8 April 2021 at 11:13:55 UTC, Sebastiaan Koppe 
 wrote:
 Yes, but be aware that the caller of `.start()` has the 
 obligation to keep the operational state alive until *after* 
 one of the three receiver's functions is called.
Sorry, what does operational state mean here? Does that refer to the root sender object (which is saved on the stack and referenced by the objects implementing the intermediate steps/operations)? Or something else (locals referred by the lambdas performing the asynchronous operations, though I guess in that case DMD would create a closure)?
Operational state is a term from the proposal; it is what is returned when you call `.connect(receiver)` on a Sender. It contains all the state needed to start the Sender, often including the receiver itself.

It is this state that requires an allocation when you are using Futures. With senders/receivers it lives on the callee's stack, and with that comes the responsibility to keep it alive.

In practice it is a non-issue though. You are unlikely to call `.start()` yourself; instead you often push the responsibility all the way up to `void main`, where you do a `sync_wait` to ensure all is done. There are cases where you want to make the operational state live on the heap though (because it gets too big), and there are ways to do that.
 Also, does this mean that this approach is not feasible for 
 `@safe` D?
I certainly tried, but there are likely some safety violations left. Undoubtedly some of those could be resolved by a more safety-capable engineer than me; I sometimes feel it is more complicated to write `@safe` code correctly than lock-free algorithms (which are notoriously hard), and sometimes it is not possible to express the wanted semantics.

Even so, even if there is some large unsafe hole in this library, I'd rather have it than not. There is a lot of upside in being able to write asynchronous algorithms separately from the async tasks themselves. Just as the STL separated the algorithms from the containers, senders/receivers separate the algorithms from the async tasks. That is so valuable to me that I'll gladly take a little possible unsafety, although I certainly welcome any improvements on that front!
 Often, instead of calling `.start` you would call 
 `.sync_wait`, or just return the sender itself (and have the 
 parent take care of it).
I'm finding it a bit difficult to imagine how that would look like on a larger scale. Would it be possible to write e.g. an entire web app where all functions accept and return senders, with only the top-level function calling `.start`?
Yes, except the top-level function would call `.sync_wait`; the main reason is that it awaits completion. The key part is expressing the web server as a Sender, and then running it to completion.

A web server is a bit special in that it spawns additional sub-tasks as part of its execution. You can use a `Nursery` for that, which is a Sender itself but allows adding additional senders during its execution. Then you just model each request as a Sender and add it to the Nursery. They can be short-lived or long-lived tasks.

When it is time for shutdown, the StopToken is triggered, and that will stop the listening thread as well as any running sub-tasks (like open requests or websockets, etc.).
 Or is there perhaps a small demo app making use of this as a 
 demonstration? :)
Nothing public at the moment, sorry, but I plan to open-source our webserver in time.
 Hmm, I see. But isn't that the limitation of async/await 
 itself? I suppose the solution would be to build refcounts on 
 top of the value, such that the promise holds a reference to 
 the value (slot), as well as any un-called continuations. 
 Which would tie the lifetime of the value to that of the 
 promise and all its continuations.
Logically, at any point in time, a promise either has un-called continuations, OR holds a value. As soon as it is fulfilled, it schedules all registered continuations to be called as soon as possible. (In reality there is a small window of time as the scheduler runs these continuations before they consume the value.)
I think it is possible to attach a continuation after the promise has already completed:

```
promise = getFoo();
getBar().then(x => promise.then(y => print(x * y)));
```

The one thing I miss most from promises, though, is cancellation. With senders/receivers you get that (almost) for free, and it is not at all difficult to properly shut down (parts of) your application (including sending shutdown notifications to any clients).
Apr 10