
digitalmars.D.learn - DConf talk : Exceptions will disappear in the future?

reply ludo456 <fakeaddress gmail.com> writes:
Listening to the first video talk of DConf 2020, titled 
"Destroy All Memory Corruption" 
(https://www.youtube.com/watch?v=XQHAIglE9CU), Walter talks about 
not using exceptions any more in the future. He says something 
like "this is where languages are going" [towards not using 
exceptions any more].

Can someone point me to an article or more explanations about 
that?
Jan 04 2021
next sibling parent Ali Çehreli <acehreli yahoo.com> writes:
On 1/4/21 7:39 AM, ludo456 wrote:

 Can someone point me to an article or more explanations about that?
Joe Duffy has a very complete document contrasting various error management strategies in the context of Midori:

http://joeduffyblog.com/2016/02/07/the-error-model/

Herb Sutter has a detailed document for a future direction for C++:

http://open-std.org/JTC1/SC22/WG21/docs/papers/2018/p0709r0.pdf

I would guess if D supports a different error model in the future, it would be like the C++ proposal.

Ali
Jan 04 2021
prev sibling next sibling parent oddp <oddp posteo.de> writes:
On 04.01.21 16:39, ludo456 via Digitalmars-d-learn wrote:
 Can someone point me to an article or more explanations about that?
already came up, see:
https://forum.dlang.org/thread/jnrvugxqjzenykzttdie forum.dlang.org
https://forum.dlang.org/thread/lhyagawrjzzmrtbokazt forum.dlang.org
Jan 04 2021
prev sibling next sibling parent reply sighoya <sighoya gmail.com> writes:
Personally, I don't much appreciate error handling models that 
pollute the return type of each function, simply because of the 
conclusion that every function you define has to handle errors, 
as errors can happen everywhere, even in pure functions.

You don't believe me? What about out-of-memory errors, which can 
occur on any stack/heap allocation? I don't know how this is 
handled in D, but in Java you get exceptions for this as well.

The other point is the direction which is chosen in Go and Rust 
to make error handling as deterministic as possible by 
enumerating all possible error types.
Afaict, this isn't a good idea as this increases the fragile code 
problem by over specifying behavior. Any change requires a 
cascade of updates if this is possible at all.
What do you do in Rust then? Simply panic?
Though, that doesn't mean it is bad in every case.

Browsing through different language design forums, it always comes 
down to the same point: the default error handling model shipped 
with the language in question is considered dumb, and people call 
for extensions from other languages, even ones which include 
exception handling, to improve things.

I see this in Rust and Go and even in Swift forums that people 
are annoyed how it currently works.

No error handling model has ever been THE hit, and none ever will 
be; therefore I would recommend leaving things as they are and 
developing alternatives rather than replacing existing ones.
Jan 05 2021
next sibling parent reply Ola Fosheim Grøstad <ola.fosheim.grostad gmail.com> writes:
On Tuesday, 5 January 2021 at 18:23:25 UTC, sighoya wrote:
 No error handling model has ever been THE hit, and none ever will 
 be; therefore I would recommend leaving things as they are and 
 developing alternatives rather than replacing existing ones.
Or implement C++ exceptions, so that D can catch C++ exceptions transparently (ldc catching clang++ exceptions and gdc catching g++ exceptions).
Jan 05 2021
parent Max Haughton <maxhaton gmail.com> writes:
On Tuesday, 5 January 2021 at 19:42:40 UTC, Ola Fosheim Grøstad 
wrote:
 On Tuesday, 5 January 2021 at 18:23:25 UTC, sighoya wrote:
 No error handling model has ever been THE hit, and none ever will 
 be; therefore I would recommend leaving things as they are and 
 developing alternatives rather than replacing existing ones.
Or implement C++ exceptions, so that D can catch C++ exceptions transparently (ldc catching clang++ exceptions and gdc catching g++ exceptions).
Walter already got quite a lot of the way there on that. There are some PRs on dmd about it, but it's not in a state worth documenting yet, if it's still there (the tests are still there, so I assume it still works).
Jan 05 2021
prev sibling parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Tue, Jan 05, 2021 at 06:23:25PM +0000, sighoya via Digitalmars-d-learn wrote:
 Personally, I don't much appreciate error handling models that
 pollute the return type of each function, simply because of the
 conclusion that every function you define has to handle errors,
 as errors can happen everywhere, even in pure functions.
Yesterday, I read Herb Sutter's proposal on zero-overhead deterministic exceptions (for C++):

http://open-std.org/JTC1/SC22/WG21/docs/papers/2018/p0709r0.pdf

tl;dr:

1) The ABI is expanded so that every (throwing) function's return type is a tagged union of the user-declared return type and a universal error type.

   a) The tag is implementation-defined, and can be as simple as a CPU flag or register.

   b) The universal error type is a value type that fits in 1 or 2 CPU registers (Herb defines it as the size of two pointers), so it can be returned in the usual register(s) used for return values.

2) The `throw` keyword becomes syntactic sugar for returning an instance of the universal error type. The `return` keyword becomes syntactic sugar for returning an instance of the declared return value (as before -- so the only difference is clearing the tag of the returned union).

3) Upon returning from a function call, if the tag indicates an error:

   a) If there's a catch block, it receives the returned instance of the universal error type and acts on it.

   b) Otherwise, it returns the received instance of the universal error type -- via the usual function return value mechanism, so no libunwind or any of that complex machinery.

4) The universal error type contains two fields: a type field and a context field.

   a) The type field is an ID unique to every thrown exception -- uniqueness can be guaranteed by making this a pointer to some static global object that the compiler implicitly inserts per throw statement, so it will be unique even across shared libraries. The catch block can use this field to determine what the error was, or it can just call some standard function to turn this into a string message, print it and abort.

   b) The context field contains exception-specific data that gives more information about the nature of the specific instance of the error that occurred, e.g., an integer value, or a pointer to a string description or block of additional information about the error (set by the thrower), or even a pointer to a dynamically-allocated exception object if the user wishes to use traditional polymorphic exceptions.

   c) The universal error type is constrained to have trivial move semantics, i.e., propagating it up the call stack is as simple as blitting the bytes over. (Any object(s) it points to need not be thus constrained, though.)

The value semantics of the universal error type ensures that there is no overhead in propagating it up the call stack. The universality of the universal error type allows it to represent errors of any kind without needing runtime polymorphism, thus eliminating the overhead the current exception implementation incurs. The context field, however, still allows runtime polymorphism to be supported, should the user wish to.

The addition of the universal error type to the return value is automated by the compiler, and the user need not worry about it. The usual try/catch syntax can be built on top of it.

Of course, this was proposed for C++, so a D implementation will probably be somewhat different. But the underlying thrust is: exceptions become value types by default, thus eliminating most of the overhead associated with the current exception implementation. (Throwing dynamically-allocated objects can of course still be supported for users who still wish to do that.) Stack unwinding is replaced by normal function return mechanisms, which is much more optimizer-friendly. This also lets us support exceptions in nogc code.

[...]
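To make the shape of this concrete, here is a minimal hand-lowered sketch in D of roughly what (1)-(3) could look like. All the names (ValueError, Expected, safeDiv, divByZeroTag) are made up for illustration; this is not Sutter's actual ABI, nor anything the compiler emits today:

    // The two-word universal error value (cf. point 4 above).
    struct ValueError
    {
        size_t type;    // unique ID of the error kind, e.g. address of a static tag
        size_t context; // error-specific payload: an int, a pointer, ...
    }

    // Conceptually, a throwing function's return type becomes a tagged
    // union of the declared result and the error value.
    struct Expected(T)
    {
        bool failed;                          // the tag ("CPU flag")
        union { T value; ValueError error; }
    }

    immutable int divByZeroTag = 0;           // its address serves as the unique ID

    Expected!int safeDiv(int a, int b)
    {
        Expected!int r;
        if (b == 0)
        {
            r.failed = true;                  // `throw` becomes...
            r.error  = ValueError(cast(size_t) &divByZeroTag, 0); // ...a value return
            return r;
        }
        r.value = a / b;                      // `return` works as before
        return r;
    }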
 The other point is the direction which is chosen in Go and Rust to
 make error handling as deterministic as possible by enumerating all
 possible error types.
 Afaict, this isn't a good idea as this increases the fragile code
 problem by over specifying behavior. Any change requires a cascade of
 updates if this is possible at all.
There is no need for a cascade of updates if you do it right. As I hinted at above, this enumeration does not have to be a literal enumeration from 0 to N; the only thing required is that it is unique *within the context of a running program*. A pointer to a static global suffices to serve such a role: it is guaranteed to be unique in the program's address space, and it fits in a size_t. The actual value may differ across different executions, but that's not a problem: any references to the ID from user code are resolved by the runtime dynamic linker -- as it already does for pointers to global objects. This also takes care of any shared libraries or dynamically loaded .so's or DLLs. [...]
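A tiny sketch of that idea (names invented): one static tag per error kind (or per throw site), with its address serving as the ID:

    // Hypothetical tags; imagine the first lives in library A, the second
    // in library B. Their *addresses* are the IDs: unique within the
    // running process, resolved by the dynamic linker, stable across
    // shared library boundaries.
    immutable int fileNotFoundTag = 0;
    immutable int outOfMemTag     = 0;

    size_t idOf(ref immutable int tag) { return cast(size_t) &tag; }

    unittest
    {
        // Unlike hard-coded enum values 1, 2, ... these can never collide:
        assert(idOf(fileNotFoundTag) != idOf(outOfMemTag));
    }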
 No error handling model has ever been THE hit, and none ever will 
 be; therefore I would recommend leaving things as they are and 
 developing alternatives rather than replacing existing ones.
I've said this before: the complaints about the current exception handling mechanism are really an issue of how it's implemented, rather than the concept of exceptions itself. If we implement Sutter's proposal, or something similar suitably adapted to D, it would eliminate the runtime overhead, solve the nogc exceptions issue, and still support traditional polymorphic exception objects that some people still want. T -- Philosophy: how to make a career out of daydreaming.
Jan 05 2021
next sibling parent reply Ola Fosheim Grøstad <ola.fosheim.grostad gmail.com> writes:
On Tuesday, 5 January 2021 at 21:46:46 UTC, H. S. Teoh wrote:
 implemented, rather than the concept of exceptions itself.  If 
 we implement Sutter's proposal, or something similar suitably 
 adapted to D, it would eliminate the runtime overhead, solve 
 the  nogc exceptions issue, and still support traditional 
 polymorphic exception objects that some people still want.
I am not against it per se, but one caveat is that it would not be compatible with C++. Also, I think this is better determined using whole program optimization; the chosen integer bit pattern used for propagating errors has performance implications. The most frequently thrown/tested value should be the one tested most on performance critical paths. Well, I guess you could manually assign integer values where it is important and autogenerate the others.
Jan 05 2021
parent Ola Fosheim Grøstad <ola.fosheim.grostad gmail.com> writes:
On Tuesday, 5 January 2021 at 22:01:08 UTC, Ola Fosheim Grøstad 
wrote:
 Also, I think this is better determined using whole program 
 optimization, the chosen integer bit pattern used for 
 propagating errors has performance implications. The most 
 frequently thrown/tested value should be the one tested most on 
 performance critical paths.
I messed that sentence up in editing :/... The most frequently thrown/tested values on performance critical paths should be represented with a bitpattern that is most easily tested. (you can test for more than one value using a single bitand, etc).
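For illustration (the codes and mask below are made up): if the errors that matter on a hot path are assigned bit patterns sharing a bit, a single bitand classifies the whole group:

    enum : uint
    {
        ok            = 0,
        timeout       = 0b0001,   // the "retryable" group shares bit 0
        connReset     = 0b0011,
        badRequest    = 0b0100,   // the "fatal" group does not
        retryableMask = 0b0001,
    }

    bool shouldRetry(uint code) { return (code & retryableMask) != 0; }

    unittest
    {
        assert(shouldRetry(timeout) && shouldRetry(connReset));
        assert(!shouldRetry(badRequest) && !shouldRetry(ok));
    }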
Jan 05 2021
prev sibling next sibling parent reply sighoya <sighoya gmail.com> writes:
On Tuesday, 5 January 2021 at 21:46:46 UTC, H. S. Teoh wrote:
 4) The universal error type contains two fields: a type field 
 and a context field.

     a) The type field is an ID unique to every thrown exception 
 --
     uniqueness can be guaranteed by making this a pointer to 
 some static
     global object that the compiler implicitly inserts per throw
     statement, so it will be unique even across shared 
 libraries. The
     catch block can use this field to determine what the error 
 was, or
     it can just call some standard function to turn this into a 
 string
     message, print it and abort.
Why must it be unique? Doesn't it suffice to return the typeid here?
     b) The context field contains exception-specific data that 
 gives
     more information about the nature of the specific instance 
 of the
     error that occurred, e.g., an integer value, or a pointer 
 to a
     string description or block of additional information about 
 the
     error (set by the thrower), or even a pointer to a
     dynamically-allocated exception object if the user wishes 
 to use
     traditional polymorphic exceptions.
Okay, but in 99% you need dynamically allocated objects because the context is most of the time simply unknown. But yes, in specific cases a simple error code suffice, but even then it would be better to be aware that an error code is returned instead of a runtime object. It sucks to me to box over the context pointer/value to find out if it is an error code or not when I only want an error code.
     c) The universal error type is constrained to have trivial 
 move
     semantics, i.e., propagating it up the call stack is as 
 simple as
     blitting the bytes over. (Any object(s) it points to need 
 not be
     thus constrained, though.)

 The value semantics of the universal error type ensures that 
 there is no overhead in propagating it up the call stack.  The 
 universality of the universal error type allows it to represent 
 errors of any kind without needing runtime polymorphism, thus 
 eliminating the overhead the current exception implementation 
 incurs.
So it seems the universal error type just tells me if there is or isn't error and checking for it is just a bitflip?
 The context field, however, still allows runtime polymorphism 
 to be supported, should the user wish to.
Which in most of the cases will be required.
 The addition of the universal error type to return value is 
 automated by the compiler, and the user need not worry about 
 it.  The usual try/catch syntax can be built on top of it.

 Of course, this was proposed for C++, so a D implementation 
 will probably be somewhat different.  But the underlying thrust 
 is: exceptions become value types by default, thus eliminating 
 most of the overhead associated with the current exception 
 implementation.
I didn't know exactly how this is implemented in D, but class objects are passed as simple pointer and pointers are likewise value types. Using value types itself doesn't guarantee anything about performance, because the context field of an exception can be anything you need some kind of boxing involving runtime polymorphism anyway.
  Stack unwinding is replaced by normal function return 
 mechanisms, which is much more optimizer-friendly.
I heard that all the time, but why is that true?
 This also lets us support exceptions in  nogc code.
Okay, this would be optionally great. However, if we insert the context pointer into a List we may get a problem of cyclicity.
 There is no need for a cascade of updates if you do it right. 
 As I hinted at above, this enumeration does not have to be a 
 literal enumeration from 0 to N; the only thing required is 
 that it is unique *within the context of a running program*.  A 
 pointer to a static global suffices to serve such a role: it is 
 guaranteed to be unique in the program's address space, and it 
 fits in a size_t.  The actual value may differ across different 
 executions, but that's not a problem: any references to the ID 
 from user code is resolved by the runtime dynamic linker -- as 
 it already does for pointers to global objects.  This also 
 takes care of any shared libraries or dynamically loaded .so's 
 or DLLs.
What does unique mean here, and why is it important? Type ids aren't unique, yet they distinguish exceptions, and I don't know why we need this requirement. The point in Rust or Java was to limit the plurality of error types a function call can receive, but this is exactly the point where idiomatic and productive development differ. Assumptions change, and there you are.
 I've said this before, that the complaints about the current 
 exception handling mechanism is really an issue of how it's 
 implemented, rather than the concept of exceptions itself.
Okay, I think this is definitely debatable.
  If we implement Sutter's proposal, or something similar 
 suitably adapted to D, it would eliminate the runtime overhead, 
 solve the  nogc exceptions issue, and still support traditional 
 polymorphic exception objects that some people still want.
If we don't care about the exception type nor about the kind of message of an exception, do we have any runtime overhead, excluding unwinding? I refer here to the kind of exception as an entity. Does a class object really require more runtime polymorphism than a tagged union?

The other point is how to unify the same frontend (try/catch) with different backends (nonlocal jumps + unwinding vs. value type errors implicitly in return types). You can use Sutter's proposal in your whole project, but what about libraries expecting the other kind of error handling backend? Do we provide an implicit conversion from one backend to another, either by turning an error object into an exception or vice versa?
Jan 06 2021
parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Wed, Jan 06, 2021 at 05:36:07PM +0000, sighoya via Digitalmars-d-learn wrote:
 On Tuesday, 5 January 2021 at 21:46:46 UTC, H. S. Teoh wrote:
 4) The universal error type contains two fields: a type field and a
 context field.
 
     a) The type field is an ID unique to every thrown exception --
     uniqueness can be guaranteed by making this a pointer to some
     static global object that the compiler implicitly inserts per
     throw statement, so it will be unique even across shared
     libraries. The catch block can use this field to determine what
     the error was, or it can just call some standard function to
     turn this into a string message, print it and abort.
Why must it be unique? Doesn't it suffice to return the typeid here?
It must be unique because different functions may return different sets of error codes. If these sets overlap, then once the error propagates up the call stack it becomes ambiguous which error it is.

Contrived example:

	enum FuncAError { fileNotFound = 1, ioError = 2 }
	enum FuncBError { outOfMem = 1, networkError = 2 }

	int funcA() { throw FuncAError.fileNotFound; }
	int funcB() { throw FuncBError.outOfMem; }

	void main() {
		try {
			funcA();
			funcB();
		} catch (Error e) {
			// cannot distinguish between FuncAError and
			// FuncBError
		}
	}

Using the typeid is no good because: (1) typeid in D is a gigantic historic hack containing cruft that even Walter doesn't fully understand; (2) when all you want is to return an integer return code, using typeid is overkill.
     b) The context field contains exception-specific data that gives
     more information about the nature of the specific instance of
     the error that occurred, e.g., an integer value, or a pointer to
     a string description or block of additional information about
     the error (set by the thrower), or even a pointer to a
     dynamically-allocated exception object if the user wishes to use
     traditional polymorphic exceptions.
Okay, but in 99% you need dynamically allocated objects because the context is most of the time simply unknown.
If the context is sufficiently represented in a pointer-sized integer, there is no need for allocation at all. E.g., if you're returning an integer error code. If you're in nogc code, you can point to a statically-allocated block that the throwing code updates with relevant information about the error, e.g., a struct that contains further details about the error. If you're using traditional polymorphic exceptions, you already have to allocate anyway, so this does not add any overhead.
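To sketch the three options (the helper names, the type IDs 1/2/3 and the IoDetail block are all made up for illustration):

    struct Error { size_t type; size_t context; }

    // 1. Plain error code: no allocation at all.
    Error makeErrnoError(int err) { return Error(1, cast(size_t) err); }

    // 2. nogc path: point at a statically allocated detail block that the
    //    thrower fills in (thread-local here, so concurrent throws don't clash).
    struct IoDetail { int osCode; char[64] path; }
    IoDetail ioDetail;
    Error makeIoError(int osCode) @nogc nothrow
    {
        ioDetail.osCode = osCode;
        return Error(2, cast(size_t) &ioDetail);
    }

    // 3. Traditional polymorphic exception: context carries the object.
    Error makeClassError()
    {
        return Error(3, cast(size_t) cast(void*) new Exception("something broke"));
    }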
 But yes, in specific cases a simple error code suffice, but even then
 it would be better to be aware that an error code is returned instead
 of a runtime object. It sucks to me to box over the context
 pointer/value to find out if it is an error code or not when I only
 want an error code.
You don't need to box anything. The unique type ID already tells you what type the context is, whether it's integer or pointer and what the type of the latter is.
     c) The universal error type is constrained to have trivial move
     semantics, i.e., propagating it up the call stack is as simple
     as blitting the bytes over. (Any object(s) it points to need not
     be thus constrained, though.)
 
 The value semantics of the universal error type ensures that there
 is no overhead in propagating it up the call stack.  The
 universality of the universal error type allows it to represent
 errors of any kind without needing runtime polymorphism, thus
 eliminating the overhead the current exception implementation
 incurs.
So it seems the universal error type just tells me if there is or isn't error and checking for it is just a bitflip?
No, it's a struct that represents the error. Basically:

	struct Error {
		size_t type;
		size_t context;
	}

When you `throw` something, this is what is returned from the function. To propagate it, you just return it, using the usual function return mechanisms. It's "zero-cost" because the cost is exactly the same as normal returns from a function.
 The context field, however, still allows runtime polymorphism to be
 supported, should the user wish to.
Which in most of the cases will be required.
Only if you want to use traditional dynamically-allocated exceptions. If you only need error codes, no polymorphism is needed. [...]
 Of course, this was proposed for C++, so a D implementation will
 probably be somewhat different.  But the underlying thrust is:
 exceptions become value types by default, thus eliminating most of
 the overhead associated with the current exception implementation.
I didn't know exactly how this is implemented in D, but class objects are passed as simple pointer and pointers are likewise value types. Using value types itself doesn't guarantee anything about performance, because the context field of an exception can be anything you need some kind of boxing involving runtime polymorphism anyway.
You don't need boxing for POD types. Just store the value directly in Error.context.
  Stack unwinding is replaced by normal function return mechanisms,
  which is much more optimizer-friendly.
I heard that all the time, but why is that true?
The traditional implementation of stack unwinding bypasses normal function return mechanisms. It's basically a glorified longjmp() to the catch block, augmented with the automatic destruction of any objects that might need destruction on the way up the call stack.

Turns out, the latter is not quite so simple in practice. In order to properly destroy objects on the way up to the catch block, you need to store information about what to destroy somewhere. You also need to know where the catch blocks are so that you know where to land. Once you land, you need to know how to match the exception type to what the catch block expects, etc.. To implement this, every function needs to set up standard stack frames so that libunwind knows how to unwind the stack. It also requires exception tables, an LSDA (language-specific data area) for each function, personality functions, etc.. A whole bunch of heavy machinery just to get things to work properly.

By contrast, by returning a POD type like the example Error above, none of the above is necessary. All that's required is:

1) A small ABI addition for an error indicator per function call (to a throwing function). This can either be a single CPU register, or probably better, a 1-bit CPU flag that's either set or cleared by the called function.

2) The addition of a branch in the caller to check this error indicator: if there's no error, continue as usual; if there's an error, propagate it (return it) or branch to the catch block.

The catch block then checks the Error.type field to discriminate between errors if it needs to -- if not, just bail out with a standard error message. If it's catching a specific exception, which will be a unique Error.type value, then it already knows at compile-time how to interpret Error.context, so it can take whatever corresponding action is necessary.

None of the heavy machinery would be needed.
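In hand-written form, the caller-side branch in (2) amounts to something like the following (reusing the struct Error above plus a made-up Result wrapper; this is an illustration, not what the compiler would literally emit):

    struct Error { size_t type; size_t context; }
    struct Result(T) { bool failed; union { T value; Error error; } }

    Result!int mayFail()                  // some "throwing" callee
    {
        Result!int r;
        r.failed = true;
        r.error  = Error(42, 0);
        return r;
    }

    // What `int x = mayFail(); ...` in a function without a catch block
    // conceptually lowers to:
    Result!int caller()
    {
        auto r = mayFail();
        if (r.failed)                     // the 1-bit check after the call
        {
            Result!int bad;               // no catch here: propagate the error
            bad.failed = true;            // by a plain return -- no unwinder,
            bad.error  = r.error;         // no tables, no personality function
            return bad;
        }
        Result!int ok;                    // normal path continues as usual
        ok.value = r.value * 2;
        return ok;
    }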
 This also lets us support exceptions in  nogc code.
Okay, this would be optionally great. However, if we insert the context pointer into a List we may get a problem of cyclicity.
Why would you want to insert it into a list? The context field is a type-erased pointer-sized value. It may not even be a pointer. [...]
 If we implement Sutter's proposal, or something similar suitably
 adapted to D, it would eliminate the runtime overhead, solve the
  nogc exceptions issue, and still support traditional polymorphic
 exception objects that some people still want.
If we don't care about the exception type nor about the kind of message of an exception, do we have any runtime overhead, excluding unwinding? I refer here to the kind of exception as an entity. Does a class object really require more runtime polymorphism than a tagged union?
It's not about class vs. non-class (though Error being a struct rather than a class is important for nogc support). It's about how exception throwing is handled. The current stack unwinding implementation is too heavyweight for what it does; we want it replaced with something simpler and more pay-as-you-go.
 The other point is how to unify the same frontend (try/catch) with 
 different backends (nonlocal jumps + unwinding vs. value type errors 
 implicitly in return types).
That's the whole point of Sutter's proposal: they are all unified with the universal Error struct. There is only one "backend": normal function return values, augmented as a tagged union to distinguish between normal return and error return. We are throwing out nonlocal jumps in favor of normal function return mechanisms. We are throwing out libunwind and all the heavy machinery it entails. This is about *replacing* the entire exception handling mechanism, not adding another alternative (which would make things even more complicated and heavyweight for no good reason).
 You can use Sutter's proposal in your whole project, but what about
 libraries expecting the other kind of error handling backend?
We will not support a different "backend". Having more than one exception-handling mechanism just over-complicates things with no real benefit.
 Do we provide an implicit conversion from one backend to another,
 either by turning an error object into an exception or vice versa?
No. Except perhaps for C++ interop, in which case we can confine the heavy machinery to the C++/D boundary. Internally, all D code will use the Sutter mechanism. T -- There are four kinds of lies: lies, damn lies, and statistics.
Jan 06 2021
next sibling parent Ola Fosheim Grøstad <ola.fosheim.grostad gmail.com> writes:
On Wednesday, 6 January 2021 at 21:27:59 UTC, H. S. Teoh wrote:
 It must be unique because different functions may return 
 different sets of error codes. If these sets overlap, then once 
 the error propagates up the call stack it becomes ambiguous 
 which error it is.
I don't think this is the case. If you analyse the full program then you know the functions that interact. All you need to do is dataflow analysis. I also don't think there should be a specific error code; I think that should be left implementation defined. The program should just specify a set of errors. Then it is up to the compiler whether that, for a given call, can be represented using some free bits in another return value, as a null pointer, or whatever. If speed is what is sought, well, then design for it. :-)
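A small sketch of the "free bits" idea (purely illustrative; it assumes the success value is a pointer to data with at least 2-byte alignment, so the lowest bit is free):

    struct Node { int payload; }

    // Success: return the pointer as-is (low bit is 0 due to alignment).
    void* encodeOk(Node* n) { return cast(void*) n; }

    // Error: smuggle a small error code into the same word, tagged with bit 0.
    void* encodeError(size_t smallCode) { return cast(void*) ((smallCode << 1) | 1); }

    bool   isError(void* r)     { return (cast(size_t) r & 1) != 0; }
    Node*  decodeOk(void* r)    { return cast(Node*) r; }
    size_t decodeError(void* r) { return cast(size_t) r >> 1; }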
Jan 06 2021
prev sibling next sibling parent reply sighoya <sighoya gmail.com> writes:
On Wednesday, 6 January 2021 at 21:27:59 UTC, H. S. Teoh wrote:
 It must be unique because different functions may return 
 different sets of error codes. If these sets overlap, then once 
 the error propagates up the call stack it becomes ambiguous 
 which error it is.

 Contrived example:

 	enum FuncAError { fileNotFound = 1, ioError = 2 }
 	enum FuncBError { outOfMem = 1, networkError = 2 }

 	int funcA() { throw FuncAError.fileNotFound; }
 	int funcB() { throw FuncBError.outOfMem; }

 	void main() {
 		try {
 			funcA();
 			funcB();
 		} catch (Error e) {
 			// cannot distinguish between FuncAError and
 			// FuncBError
 		}
 	}
Thanks, this reminds me of Swift error types, which are enum cases. So the type is a pointer to the enum, or something which describes the enum uniquely, and the context is the enum value -- or does the context describe where to find the enum value in the statically allocated object?
 Using the typeid is no good because: (1) typeid in D is a
Sorry, I misspelled it; I meant the internal id that the compiler turns a type into, not the RTTI structure of a type at runtime.
 If you're in  nogc code, you can point to a 
 statically-allocated block that the throwing code updates with 
 relevant information about the error, e.g., a struct that 
 contains further details about the error
But the amount of information for an error can't be statically known. So we can't pre-allocate it via a statically allocated block; we need some kind of runtime polymorphism here to know all the fields involved.
 You don't need to box anything.  The unique type ID already 
 tells you what type the context is, whether it's integer or 
 pointer and what the type of the latter is.
The question is how a type id as an integer value can do that. Is there any mask to retrieve this kind of information from the type id field, e.g. the first three bits saying something about the context data type, or do we use some kind of log2n hashing of the typeid to retrieve that kind of information?
 When you `throw` something, this is what is returned from the 
 function. To propagate it, you just return it, using the usual 
 function return mechanisms.  It's "zero-cost" because the 
 cost is exactly the same as normal returns from a function.
Except that a bit check after each call is required, which is negligible for some function calls, but it adds up rapidly with the amount of modularization. Further, the space for the return value in the caller needs to be widened in some cases.
 Only if you want to use traditional dynamically-allocated 
 exceptions. If you only need error codes, no polymorphism is 
 needed.
Checking the bit flag is runtime polymorphism, checking the type field against the catches is runtime polymorphism, checking what the typeid tells about the context type is runtime polymorphism. Checking the type of information behind the context pointer in case of non error codes is runtime polymorphism. The only difference is it is coded somewhat more low level and is a bit more compact than a class object. What if we use structs for exceptions where the first field is the type and the second field the string message pointer/or error code?
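For concreteness, such a struct would look something like this (only a sketch, names invented) -- note it has essentially the same two-word shape as the proposed universal error type:

    struct MyError
    {
        size_t kind;           // identifies the error: enum value, typeid, tag address, ...
        union
        {
            immutable(char)* message; // pointer to a static description string
            size_t code;              // or just a plain error code
            Exception obj;            // or a full class object, if one is really wanted
        }
    }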
 The traditional implementation of stack unwinding bypasses 
 normal function return mechanisms.  It's basically a glorified 
 longjmp() to the catch block, augmented with the automatic 
 destruction of any objects that might need destruction on the 
 way up the call stack.
It depends. There are two ways I know of: either jumping, or decrementing the stack pointer and reading out the information in the exception tables.
 Turns out, the latter is not quite so simple in practice.  In 
 order to properly destroy objects on the way up to the catch 
 block, you need to store information about what to destroy 
 somewhere.
I can't imagine why this is different in your case, this is generally the problem of exception handling independent of the underlying mechanism. Once the pointer of the first landing pad is known, the control flow continues as known before until the next error is thrown.
 You also need to know where the catch blocks are so that you 
 know where to land. Once you land, you need to know how to 
 match the exception type to what the catch block expects, etc.. 
 To implement this, every function needs to setup standard stack 
 frames so that libunwind knows how to unwind the stack.
Touché, that's better in case of error returns.
 It also requires exception tables, an LSDA (language-specific 
 data area) for each function, personality functions, etc..  A 
 whole bunch of heavy machinery just to get things to work 
 properly.
 Why would you want to insert it into a list?  The context field 
 is a type-erased pointer-sized value. It may not even be a 
 pointer.
Good point, I don't know if anyone tries to gather errors in an intermediate list which is passed to certain handlers. Sometimes exceptions are used as control flow elements though that isn't good practice.
 It's not about class vs. non-class (though Error being a struct 
 rather than a class is important for  nogc support). It's about 
 how exception throwing is handled.  The current stack unwinding 
 implementation is too heavyweight for what it does; we want it 
 replaced with something simpler and more pay-as-you-go.
I agree that fast exceptions are worthwhile for certain areas as an opt-in, but I don't want them to replace non-fast exceptions because of the runtime impact on normal running code.
 That's the whole point of Sutter's proposal: they are all 
 unified with the universal Error struct.  There is only one 
 "backend": normal function return values, augmented as a tagged 
 union to distinguish between normal return and error return.  
 We are throwing out nonlocal jumps in favor of normal function 
 return mechanisms.  We are throwing out libunwind and all the 
 heavy machinery it entails.

 This is about *replacing* the entire exception handling 
 mechanism, not adding another alternative (which would make 
 things even more complicated and heavyweight for no good 
 reason).
Oh no, please not. Interestingly, we don't use longjmp in default exception handling, but that would be a good alternative to Herb Sutter's proposal, because exceptions are then likewise faster, but likewise have an impact on normal running code whenever a new landing pad has to be registered. But interestingly, that occurs much more rarely than checking the return value after each function call.
Jan 06 2021
next sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2021-01-07 01:01, sighoya wrote:

 Thanks, this reminds me of Swift error types, which are enum cases.
Swift can throw anything that implements the Error protocol. Classes, structs and enums can implement protocols.
 Oh no, please not. Interestingly, we don't use longjmp in default 
 exception handling, but that would be a good alternative to Herb 
 Sutter's proposal,
Some platforms implement C++ exception using longjmp, for example, iOS.
 because exceptions are then likewise faster, but likewise have an 
 impact on normal running code whenever a new landing pad has to be 
 registered.
 But interestingly, that occurs much more rarely than checking the 
 return value after each function call.
It's claimed that exceptions are not zero cost, even when an exception is not thrown. Because the compiler cannot optimize functions that may throw as well as those that cannot throw. -- /Jacob Carlborg
Jan 07 2021
parent reply sighoya <sighoya gmail.com> writes:
On Thursday, 7 January 2021 at 10:36:39 UTC, Jacob Carlborg wrote:

 Swift can throw anything that implements the Error protocol. 
 Classes, structs and enums can implement protocols.
True, Swift can throw anything that implements the Error protocol. It seems the Error protocol itself doesn't define any constraints on how an error has to look. I'm contemplating whether this is a good idea; maybe, I don't know yet.
 Some platforms implement C++ exception using longjmp, for 
 example, iOS.
Interesting. I've heard some OSes don't support exception tables, therefore an alternative implementation has to be chosen.
 It's claimed that exceptions are not zero cost, even when an 
 exception is not thrown. Because the compiler cannot optimize 
 functions that may throw as well as those that cannot throw.
Did you refer to the case a pure function is inlined into the caller and the machinery of stack pointer decrementation doesn't work anymore? You may be right about that. However, I think it can be transformed safely in case the source code is still available. In case of dyn libs, we may, can develop a machinery to gather exception table information at compile time and to manipulate them in order to inline them safely, but I don't know about the case in D though.
Jan 07 2021
parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Thu, Jan 07, 2021 at 11:15:26AM +0000, sighoya via Digitalmars-d-learn wrote:
 On Thursday, 7 January 2021 at 10:36:39 UTC, Jacob Carlborg wrote:
[...]
 It's claimed that exceptions are not zero cost, even when an
 exception is not thrown. Because the compiler cannot optimize
 functions that may throw as well as those that cannot throw.
Did you refer to the case a pure function is inlined into the caller and the machinery of stack pointer decrementation doesn't work anymore?
This has nothing to do with inlining. Inlining is done at compile-time, and the inlined function becomes part of the caller. There is no stack pointer decrementing involved anymore because there's no longer a function call in the emitted code.

The optimizer works by transforming the code so that redundant operations are eliminated, and/or expensive operations are replaced with cheaper ones. It does this by relying on certain assumptions about the code that let it replace/rearrange the code in a way that preserves its semantics. One very important assumption is control flow: if you have operations A, B, C in your function and the optimizer can assume that control will always reach all 3 operations, then it can reorder the operations (e.g., to improve instruction cache coherence) without changing the meaning of the code.

The problem with unwinding is that the optimizer can no longer assume, for instance, that every function call will return control to the caller (if the called function throws, control flow will bypass the current function). So if B is a function call, then the optimizer can no longer assume C is always reached, so it cannot reorder the operations. Maybe there's a better sequence of instructions that does A and C together, but now the optimizer cannot use it because that would change the semantics of the code.

If the exception were propagated via normal return mechanisms, then the optimizer still has a way to optimize it: it can do A and C first, then if B fails it can insert code to undo C, which may still be faster than doing A and C separately.

This is why performance-conscious people prefer nothrow where possible: it lets the optimizer make more assumptions, and thereby, opens the possibility for better optimizations.

[...]
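A grossly simplified D illustration of the point (the functions are placeholders; a real optimizer reasons at a much lower level):

    int total;

    void a() nothrow { total += 1; }
    int  b() nothrow { return 2; }            // control provably returns to the caller
    void c() nothrow { total += 3; }

    int  bMayThrow() { throw new Exception("boom"); }

    void caller1() nothrow
    {
        a();
        auto x = b();   // b is nothrow: the compiler may schedule a() and c()
        c();            // freely around this call, since both always execute
    }

    void caller2()
    {
        a();
        auto x = bMayThrow(); // may bypass the rest of this function via unwinding,
        c();                  // so the compiler cannot assume c() runs whenever a() did
    }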
 In case of dyn libs, we may, can develop a machinery to gather
 exception table information at compile time and to manipulate them in
 order to inline them safely, but I don't know about the case in D
 though.
This makes no sense. Inlining is done at compile-time; if you are loading the code as a dynamic library, by definition you're not inlining anymore. T -- He who does not appreciate the beauty of language is not worthy to bemoan its flaws.
Jan 07 2021
parent reply sighoya <sighoya gmail.com> writes:
On Thursday, 7 January 2021 at 14:34:50 UTC, H. S. Teoh wrote:
 This has nothing to do with inlining.  Inlining is done at 
 compile-time, and the inlined function becomes part of the 
 caller.
True
There is no stack pointer decrementing involved anymore
Also true.
 because there's no longer a function call in the emitted code.
And this is the problem: how do we refer to the original line of the inlined function where the exception was thrown? We need either some machinery for that to be backpropagated, or we don't inline at all in that case.
 One very important assumption is control flow: if you have 
 operations A, B, C in your function and the optimizer can 
 assume that control will always reach all 3 operations, then it 
 can reorder the operations (e.g., to improve instruction cache 
 coherence) without changing the meaning of the code.
Wonderful, we have an example! Provided all three operations don't depend on each other. Or maybe the compiler executes them in parallel. Are we referring to lazy evaluation or asynchronous code execution here?
 If the exception were propagated via normal return mechanisms, 
 then the optimizer still has a way to optimize it: it can do A 
 and C first, then if B fails it can insert code to undo C, 
 which may still be faster than doing A and C separately.
Puh, that sounds a bit like reordering nondeterministic effectful operations, which definitely aren't rollbackable in general, only in simple cases. But in general, why not generate a try/catch mechanism at compile time, catching the exception in case B throws and storing it temporarily in an exception variable? After A has executed successfully, just rethrow the exception of B. All this could be generated at compile time: no runtime cost, but it involves some kind of code duplication.
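Hand-written, the transformation I have in mind would look roughly like this (only a sketch, and only valid when the reordered operations are observably independent -- which is exactly what the optimizer would have to prove):

    void A() { /* ... */ }
    int  B() { throw new Exception("B failed"); }
    void C() { /* work independent of B */ }

    // Source order is A(); B(); C();  B's exception is caught, parked, and
    // rethrown only after C has run, so the exception still escapes,
    // just slightly later, while A and C can be scheduled together.
    void transformed()
    {
        A();
        Exception deferred = null;
        int x;
        try { x = B(); }
        catch (Exception e) { deferred = e; }
        C();
        if (deferred !is null)
            throw deferred;
        // ... continue using x on the success path
    }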
 This is why performance-conscious people prefer nothrow where 
 possible: it lets the optimizer make more assumptions, and 
 thereby, opens the possibility for better optimizations.
But the assumption is wrong: every function can fail, e.g. due to out of memory, and aborting the whole program in this case just to get better optimizations isn't the fine English way.
 This makes no sense. Inlining is done at compile-time; if you 
 are loading the code as a dynamic library, by definition you're 
 not inlining anymore.
As I said, I don't know how this is handled in D, but in theory you can even inline an already compiled function though you need meta information to do that. My idea was just to fetch the line number from the metadata of the throw statement in the callee in order to localize the error correctly in the original source code.
Jan 07 2021
parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Thu, Jan 07, 2021 at 05:47:37PM +0000, sighoya via Digitalmars-d-learn wrote:
 On Thursday, 7 January 2021 at 14:34:50 UTC, H. S. Teoh wrote:
 This has nothing to do with inlining.  Inlining is done at
 compile-time, and the inlined function becomes part of the caller.
True
 There is no stack pointer decrementing involved anymore
Also true.
 because there's no longer a function call in the emitted code.
And this is the problem: how do we refer to the original line of the inlined function where the exception was thrown?
The compiler knows exactly which line it is at the point where the exception is created, and can insert it there.
 We need either some machinery for that to be backpropagated, or we
 don't inline at all in that case.
There is no need for any machinery. The information is already statically available at compile-time.
 One very important assumption is control flow: if you have
 operations A, B, C in your function and the optimizer can assume
 that control will always reach all 3 operations, then it can reorder
 the operations (e.g., to improve instruction cache coherence)
 without changing the meaning of the code.
Wonderful, we have an example! Provided all three operations don't depend on each other. Or maybe the compiler executes them in parallel. Are we referring to lazy evaluation or asynchronous code execution here?
This is just an over-simplified example to illustrate the point. Real code obviously isn't this simple, and neither are real optimizers.
 If the exception were propagated via normal return mechanisms, then
 the optimizer still has a way to optimize it: it can do A and C
 first, then if B fails it can insert code to undo C, which may still
 be faster than doing A and C separately.
Puh, that sounds a bit like reordering nondeterministic effectful operations, which definitely aren't rollbackable in general, only in simple cases.
Again, this was an over-simplified contrived example just to illustrate the point. Real code and real optimizers are obviously much more complex than this. The main point here is that being able to assume things about control flow in a function gives the optimizer more tools to produce better code. This is neither the time nor place to get into the nitty-gritty details of how exactly optimization works. If you're unfamiliar with the subject, I recommend reading a textbook on compiler construction.
 But in general, why not generate a try/catch mechanism at compile time,
 catching the exception in case B throws and storing it temporarily in an
 exception variable?
Because every introduced catch block in the libunwind implementation introduces additional overhead. [...]
 This is why performance-conscious people prefer nothrow where
 possible: it lets the optimizer make more assumptions, and thereby,
 opens the possibility for better optimizations.
But the assumption is wrong: every function can fail, e.g. due to out of memory, and aborting the whole program in this case just to get better optimizations isn't the fine English way.
Wrong. Out of memory only occurs at specific points in the code (i.e., when you call a memory allocation primitive).
 This makes no sense. Inlining is done at compile-time; if you are
 loading the code as a dynamic library, by definition you're not
 inlining anymore.
As I said, I don't know how this is handled in D, but in theory you can even inline an already compiled function though you need meta information to do that.
This tells me that you do not understand how compiled languages work. Again, I recommend reading a textbook on compiler construction. It will help you understand this issues better. (And it will also indirectly help you write better code, once you understand what exactly the compiler does with it, and what the machine actually does.)
 My idea was just to fetch the line number from the metadata of the
 throw statement in the callee in order to localize the error correctly
 in the original source code.
All of this information is already available at compile-time. The compiler can easily emit code to write this information into some error-handling area that can be looked up by the catch block.

Also, you are confusing debugging information with the mechanism of try/catch. Any such information is a part of the payload of an exception; this is not the concern of the mechanism of how try/catch is implemented. T -- Perhaps the most widespread illusion is that if we were in power we would behave very differently from those who now hold it---when, in truth, in order to get power we would have to become very much like them. -- Unknown
Jan 07 2021
parent reply sighoya <sighoya gmail.com> writes:
On Thursday, 7 January 2021 at 18:12:18 UTC, H. S. Teoh wrote:
 If you're unfamiliar with the subject, I recommend reading a 
 textbook on compiler construction.
I already read one.
 Because every introduced catch block in the libunwind 
 implementation introduces additional overhead.
But only when an exception is thrown, right?
 Wrong. Out of memory only occurs at specific points in the code 
 (i.e., when you call a memory allocation primitive).
What about pushing a new stack frame on top/bottom of the stack? This is very implicit. I don't talk about a theoretical Turing machine with unbounded memory, rather about a linear bounded automaton with finite memory. What happens if stack memory isn't available anymore?
 As I said, I don't know how this is handled in D, but in 
 theory you can even inline an already compiled function though 
 you need meta information to do that.
This tells me that you do not understand how compiled languages work.
Traditionally, inlining means the insertion of code from the callee into the caller, yes. Imagine now, that the source code of the callee isn't available because it is already compiled and wrapped in a dynlib/static lib before (and now you link to that dynlib/static lib), then you can't inline the source code, but you can inline the binary code of the callee. For this to be "optimize-safe" regarding exceptions you need to store some meta information, e.g. the line number of all direct thrown exceptions in it, during the compilation of the callee in the dynlib/static lib for any caller outside the dynlib/static lib. Theoretically, you can even pass functions as binary code blocks to the callee; this is mostly unperformant, but it is at least possible. Though, I assume that most compilers don't do any sort of this, but that doesn't mean it isn't possible.
 Again, I recommend reading a textbook on compiler construction. 
 It will help you understand this issues better. (And it will 
 also indirectly help you write better code, once you understand 
 what exactly the compiler does with it, and what the machine 
 actually does.)
It also depends on the compiler under consideration and how it relates to the design discussed in textbooks.
 All of this information is already available at compile-time. 
 The compiler can easily emit code to write this information 
 into some error-handling area that can be looked up by the 
 catch block.
Yes, but the line number is changing when inlining the code, and we don't want the new line number to be output by the runtime if an exception was thrown, because it points to a line number only visible to the optimizer, not to the user.
 Also, you are confusing debugging information with the 
 mechanism of try/catch.
So you only want to output line numbers in stack trace during debugging and not in production code?
Jan 07 2021
parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Thu, Jan 07, 2021 at 07:00:15PM +0000, sighoya via Digitalmars-d-learn wrote:
 On Thursday, 7 January 2021 at 18:12:18 UTC, H. S. Teoh wrote:
[...]
 Wrong. Out of memory only occurs at specific points in the code
 (i.e., when you call a memory allocation primitive).
What about pushing a new stack frame on top/bottom of the stack? This is very implicit. I don't talk about a theoretical Turing machine with unbounded memory, rather about a linear bounded automaton with finite memory. What happens if stack memory isn't available anymore?
In all non-trivial OSes that I'm aware of, running out of stack space causes the OS to forcefully terminate the program. No non-toy compiler I know of checks the remaining stack space when making a function call; that would be an unreasonable amount of overhead. No performance-conscious programmer would accept that. [...]
 This tells me that you do not understand how compiled languages work.
Traditionally, inlining means the insertion of code from the callee into the caller, yes. Imagine now, that the source code of the callee isn't available because it is already compiled and wrapped in a dynlib/static lib before (and now you link to that dynlib/static lib), then you can't inline the source code, but you can inline the binary code of the callee.
This is not inlining, it's linking.
 For this to be "optimize-safe" regarding exceptions you need to store
 some meta information, e.g. the line number of all direct thrown
 exceptions in it, during the compilation of the callee in the
 dynlib/static lib for any caller outside the dynlib/static lib.
I don't understand what's the point you're trying to make here. What has this got to do with how exceptions are thrown? Any code, exception or not, exports such information to the linker for debugging purposes. It does not directly relate to how exceptions are implemented. [...]
 All of this information is already available at compile-time. The
 compiler can be easily emit code to write this information into some
 error-handling area that can be looked up by the catch block.
Yes, but the line number is changing when inlining the code,
[...] ???! How does inlining (or linking) change line numbers?! Whether or not something is inlined has nothing to do with what line number it was written in. The compiler does not edit your source code and move lines around, if that's what you're trying to say. That would be absurd.
 Also, you are confusing debugging information with the mechanism of
 try/catch.
So you only want to output line numbers in stack trace during debugging and not in production code?
"Debugging information" can be included in production code. Nothing stops you from doing that. And this has nothing to do with how try/catch is implemented. T -- Живёшь только однажды.
Jan 07 2021
parent sighoya <sighoya gmail.com> writes:
On Thursday, 7 January 2021 at 19:35:00 UTC, H. S. Teoh wrote:

 Whether or not something is inlined has nothing to do with what 
 line number it was written in.
Okay, I've tried it out, and it seems it isn't a problem in the binary case, as the code was first compiled and then inlined, so the line number is correct in that case. For source code inlining, however, the direction is simply the opposite. Therefore, compilation of inlined code has to respect the original line number pointing to the throw statement in the inlined function. I think D can handle this; I hope so.
"Debugging information" can be included in production code.
Yes, but exception line numbers aren't debug info; rather, they are passed implicitly as arguments to the exception class constructor.
Jan 07 2021
prev sibling parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Thu, Jan 07, 2021 at 12:01:23AM +0000, sighoya via Digitalmars-d-learn wrote:
 On Wednesday, 6 January 2021 at 21:27:59 UTC, H. S. Teoh wrote:
[...]
 You don't need to box anything.  The unique type ID already tells
 you what type the context is, whether it's integer or pointer and
 what the type of the latter is.
The question is how a type id as an integer value can do that. Is there any mask to retrieve this kind of information from the type id field, e.g. the first three bits saying something about the context data type, or do we use some kind of log2n hashing of the typeid to retrieve that kind of information?
Your catch block either knows exactly what type value(s) it's looking for, or it's just a generic catch for all errors. In the former case, you already know at compile-time how to interpret the context information, and can cast it directly to the correct type. (This can, of course, be implicitly inserted by the compiler.) In the latter case, you don't actually care what the interpretation is, so it doesn't matter. The most you might want to do in this case is to generate some string error message; this could be implemented in various ways. If the type field is a pointer to a static global, it could be a pointer to a function that takes the context argument and returns a string, for example. Of course, it can also be a pointer to a static global struct containing more information, if needed.
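To sketch both cases (all names here are invented): a specific catch knows at compile time how to read the context, while a generic catch-all only goes through a describe function reachable via the type field:

    struct Error { size_t type; size_t context; }

    // Per-error static descriptor: its address doubles as the type ID and
    // it carries a function that renders the context as a message.
    struct ErrorDesc { string function(size_t ctx) describe; }

    string describeParseError(size_t column)
    {
        import std.conv : text;
        return text("parse error at column ", column);
    }

    immutable ErrorDesc parseErrorDesc = ErrorDesc(&describeParseError);

    Error makeParseError(size_t column)
    {
        return Error(cast(size_t) &parseErrorDesc, column);
    }

    void handle(Error e)
    {
        if (e.type == cast(size_t) &parseErrorDesc)
        {
            auto column = e.context;   // specific catch: meaning known statically
            // ... recover using `column`
        }
        else
        {
            // generic catch-all: render a message and bail out
            auto msg = (cast(immutable(ErrorDesc)*) e.type).describe(e.context);
            // ... log `msg`, abort, or propagate
        }
    }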
 When you `throw` something, this is what is returned from the
 function.  To propagate it, you just return it, using the usual
 function return mechanisms.  It's "zero-cost" because it the cost is
 exactly the same as normal returns from a function.
Except that a bit check after each call is required, which is negligible for some function calls, but it adds up rapidly with the amount of modularization.
But you already have to do that if you're checking error codes after the function call. The traditional implementation of exceptions doesn't incur this particular overhead, but introduces (many!) others. Optimizers are constrained, for example, when a particular function call may throw (under the traditional unwinding implementation): it cannot assume control flow will always return to the caller. Handling the exception by returning the error using normal function return mechanisms allows the optimizer to assume control always returns to the caller, which enables certain optimizations not possible otherwise.
 Further, the space for the return value in the caller needs to be
 widened in some cases.
Perhaps. But this should not be a big problem if the error type is at most 2 pointers big. Most common architectures like x86 have plenty of registers that can be used for this purpose.
 Only if you want to use traditional dynamically-allocated
 exceptions. If you only need error codes, no polymorphism is needed.
Checking the bit flag is runtime polymorphism, checking the type field against the catches is runtime polymorphism, checking what the typeid tells about the context type is runtime polymorphism. Checking the type of information behind the context pointer in case of non error codes is runtime polymorphism.
The catch block either knows exactly what error types it's catching, or it's a generic catch-all. In the former case, it already knows at compile-time what type the context field is. So no runtime polymorphism there. Unless the error type indicates a traditional exception class hierarchy, in which case the context field can just be a pointer to the exception object and you can use the traditional RTTI mechanisms to get at the information. In the latter case, you don't care what the context field is anyway, or only want to perform some standard operation like convert to string, as described earlier. I suppose that's runtime polymorphism, but it's optional.
 The only difference is it is coded somewhat more low level and is a
 bit more compact than a class object.
 What if we use structs for exceptions where the first field is the
 type and the second field the string message pointer/or error code?
That's exactly what struct Error is. [...]
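For readers following along, a minimal sketch of the two-word struct the thread keeps referring to (field names taken from this discussion; the actual layout in Sutter's proposal, or in any eventual D implementation, may well differ):

    struct Error
    {
        size_t type;     // discriminates the error kind: a plain error code, or a
                         // pointer to a static descriptor / class-based exception tag
        size_t context;  // type-erased payload: an integer, or a pointer whose
                         // meaning is fixed by `type`
    }

    static assert(Error.sizeof == 2 * size_t.sizeof);   // "only 2 pointers long"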
 Turns out, the latter is not quite so simple in practice.  In order
 to properly destroy objects on the way up to the catch block, you
 need to store information about what to destroy somewhere.
I can't see why this would be different in your case; it is a general problem of exception handling, independent of the underlying mechanism. Once the pointer to the first landing pad is known, control flow continues as before until the next error is thrown.
The difference is that for unwinding you need to duplicate / reflect this information outside the function body, and you're constrained in how you use the runtime stack (it must follow some standard stack frame format so that the unwinder knows how to unwind it). If exceptions are handled by normal function return mechanisms, the optimizer is more free to change the way it uses the stack -- you can omit stack frames for functions that don't need it, for instance. And you don't need to duplicate dtor knowledge outside of the function body: the function just exits via the usual return mechanism that already handles the destruction of local variables. You don't even need to know where the catch blocks are: this is already encoded into the catching function via the error bit check after the function call. The exception table can be completely elided. [...]
 Why would you want to insert it into a list?  The context field is a
 type-erased pointer-sized value. It may not even be a pointer.
Good point; I don't know whether anyone gathers errors in an intermediate list that is then passed to certain handlers. Sometimes exceptions are used as control-flow elements, though that isn't good practice.
Exceptions should never be used as control flow. That's definitely a code smell. :-D But anyway, if you ever want to store errors in a list, just store the entire Error struct. It's only 2 pointers long, and includes all the information necessary to interpret it.
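A trivial sketch of that point (the values are made up): collecting errors is just copying the two-word struct around, with no allocation per error beyond the array's own growth.

    struct Error { size_t type; size_t context; }

    Error[] collected;                        // gathered errors, stored by value

    void note(Error e) { collected ~= e; }    // appends the whole 2-word struct

    // e.g.: note(Error(1, 42));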
 It's not about class vs. non-class (though Error being a struct
 rather than a class is important for @nogc support). It's about how
 exception throwing is handled.  The current stack unwinding
 implementation is too heavyweight for what it does; we want it
 replaced with something simpler and more pay-as-you-go.
I agree that fast exceptions are worthwhile as an opt-in for certain areas, but I don't want them to replace non-fast exceptions because of the runtime impact on normal running code.
It will *improve* normal running code. Please note that the proposed mechanism does NOT exclude traditional class-based exceptions. All you need is to reserve a specific Error.type value to mean "class-based exception", and store the class reference in Error.context:

    enum classBasedException = ... /* some magic value */;

    // This:
    throw new Exception(...);

    // Gets translated to this:
    Error e;
    e.type = classBasedException;
    e.context = cast(size_t) new Exception(...);
    return e;

    // ... then in the catch block, this:
    catch(MyExceptionSubclass e)
    {
        handleError(e);
    }

    // gets translated to this:
    catch(Error e)
    {
        if (e.type == classBasedException)
        {
            auto ex = cast(Exception) e.context;
            auto mex = cast(MyExceptionSubclass) ex; // query RTTI
            if (mex !is null)
            {
                handleError(mex);
                goto next;
            }
        }
        ... // propagate to next catch block or return e
    }
    next: // continue normal control flow

Nothing breaks in traditional class-based exception code. You get the free benefit of no external tables for libunwind, as well as better optimizer friendliness. And you get a really cheap code path if you opt to use error codes instead of class objects. *And* it works for @nogc. [...]
 This is about *replacing* the entire exception handling mechanism,
 not adding another alternative (which would make things even more
 complicated and heavyweight for no good reason).
Oh no, please not. Interestingly, we don't use longjmp in default exception handling, but it would be a good alternative to Herb Sutter’s proposal: exceptions become similarly fast, but they likewise have an impact on normal running code whenever a new landing pad has to be registered. Interestingly, though, that happens far less often than checking the return value after every function call.
[...] I don't understand why you would need to register a new landing pad. There is no need to register anything; catch blocks become just part of the function body and are automatically handled as part of the function call mechanism.

The reason we generally don't use longjmp is that it doesn't unwind the stack properly (it does not destruct local variables that need destruction). You *could* make it work, e.g. each function pushes dtor code onto a global list of dtors, and the setjmp handler just runs all the dtors in that list. But that just brings us back to the same performance problems that libunwind has, only implemented differently. (Every function has to push/pop dtors onto the global list, for instance. That's a LOT of overhead, and is very cache-unfriendly. Even libunwind does better than this.)


T

-- 
VI = Visual Irritation
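To make that last point concrete, here is a hedged sketch (all names invented) of the per-function bookkeeping such a longjmp scheme would impose even on the non-error path:

    // Global cleanup list a hypothetical setjmp handler would drain before jumping.
    void function()[] cleanupStack;

    void releaseBuffer() { /* free whatever worker() acquired */ }

    void worker()
    {
        cleanupStack ~= &releaseBuffer;                         // push on entry: time + cache cost
        scope (exit) cleanupStack = cleanupStack[0 .. $ - 1];   // pop on every normal exit

        // ... actual work; an error path elsewhere would run cleanupStack in
        //     reverse order and then longjmp back to its setjmp point ...
    }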
Jan 07 2021
prev sibling parent Jacob Carlborg <doob me.com> writes:
On 2021-01-06 22:27, H. S. Teoh wrote:

 That's the whole point of Sutter's proposal: they are all unified with
 the universal Error struct.  There is only one "backend": normal
 function return values, augmented as a tagged union to distinguish
 between normal return and error return.  We are throwing out nonlocal
 jumps in favor of normal function return mechanisms.  We are throwing
 out libunwind and all the heavy machinery it entails.
This is not what Sutter is proposing. He's proposing to add a new "backend", so you end up with three different types of functions (when it comes to error handling):

* Functions annotated with `throws`. This is the new "backend":

    void foo() throws;

* Functions annotated with `noexcept`. This indicates a function will not throw an exception (of the existing style):

    void foo() noexcept;

* Functions without annotation. This indicates a function that may or may not throw an exception (of the existing style):

    void foo();

From the proposal, paragraph 4.1.7:

"Compatibility: Dynamic exceptions and conditional noexcept still work. You can call a function that throws a dynamic exception from one that throws a static exception (and vice versa); each is translated to the other automatically by default or you can do it explicitly if you prefer."

But perhaps you're proposing something different for D?

-- 
/Jacob Carlborg
Jan 07 2021
prev sibling parent sighoya <sighoya gmail.com> writes:
Citing Herb Sutter:
As noted in §1.1, preconditions, postconditions, and assertions 
are for identifying program bugs, they are never recoverable 
errors; violating them is always corruption, undefined behavior. 
Therefore they should never be reported via error reporting 
channels (regardless of whether exceptions, error codes, or 
another style is used). Instead, once we have contracts 
(expected in C++20), users should be taught to prefer expressing 
these as contracts, and we should consider using those also in 
the standard library.
Oh man, did you ever hear of non-determinism? Why not just use compile-time contracts and path-dependent typing to solve those problems as well? Because perfectionism is our enemy in productive development. And terminating the whole program doesn't help either; it is exactly for this purpose that we have error types or contexts, so we know to what degree we are required to terminate, and this should hold even for contracts.
Jan 06 2021
prev sibling next sibling parent reply Marvin <thegrapevine email.com> writes:
On Monday, 4 January 2021 at 15:39:50 UTC, ludo456 wrote:
 Listening to the first visioconf of the Dconf 2020, titled 
 Destroy All Memory Corruption, 
 (https://www.youtube.com/watch?v=XQHAIglE9CU) Walter talks 
 about not using exceptions any more in the future. He says 
 something like "this is where languages are going" [towards no 
 using exceptions any more].

 Can someone point me to an article or more explanations about 
 that?
If exceptions disappear from Dlang in the future, I will download the last version that supports exceptions and never update.
Jan 05 2021
parent Tony <tonytdominguez aol.com> writes:
On Tuesday, 5 January 2021 at 18:42:42 UTC, Marvin wrote:
 On Monday, 4 January 2021 at 15:39:50 UTC, ludo456 wrote:
 Listening to the first visioconf of the Dconf 2020, titled 
 Destroy All Memory Corruption, 
 (https://www.youtube.com/watch?v=XQHAIglE9CU) Walter talks 
 about not using exceptions any more in the future. He says 
 something like "this is where languages are going" [towards no 
 using exceptions any more].

 Can someone point me to an article or more explanations about 
 that?
If exceptions disappear from Dlang in the future, I will download the last version that supports exceptions and never update.
I have a similar feeling. Exceptions were a great addition to programming languages in my opinion.
Jan 08 2021
prev sibling next sibling parent Marcone <marcone email.com> writes:
Bye bye nothrow functions in Dlang.
Jan 05 2021
prev sibling parent Dukc <ajieskola gmail.com> writes:
On Monday, 4 January 2021 at 15:39:50 UTC, ludo456 wrote:
 Listening to the first visioconf of the Dconf 2020, titled 
 Destroy All Memory Corruption, 
 (https://www.youtube.com/watch?v=XQHAIglE9CU) Walter talks 
 about not using exceptions any more in the future. He says 
 something like "this is where languages are going" [towards no 
 using exceptions any more].
I don't think exceptions are going anywhere. It might be that new libraries tend to avoid them (to work with nothrow and @live), but there is no reason to banish them from the whole language - that would only result in huge breakage for limited benefit. And I suspect Walter didn't mean all code - just the relatively low-level stuff that might want to use `@live`. Even if he did, the community will force him to reconsider.
Jan 06 2021