
digitalmars.D - toHash => pure, nothrow, const, @safe

reply Walter Bright <newshound2@digitalmars.com> writes:
Consider the toHash() function for struct key types:

http://dlang.org/hash-map.html

And of course the others:

const hash_t toHash();
const bool opEquals(ref const KeyType s);
const int opCmp(ref const KeyType s);

They need to be, as well as const, pure nothrow @safe.

The problem is:
1. a lot of code must be retrofitted
2. it's just plain annoying to annotate them

It's the same problem as for Object.toHash(). That was addressed by making those attributes inheritable, but that won't work for struct ones.

So I propose instead a bit of a hack. toHash, opEquals, and opCmp as struct members be automatically annotated with pure, nothrow, and @safe (if not already marked as @trusted).
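For concreteness, a sketch of what the proposal would mean for a hypothetical key struct (the struct name and member bodies are illustrative, not from the post; under the proposal the attributes after const would be implied rather than written):

```d
struct KeyType
{
    int payload;

    // Today the programmer writes only: const hash_t toHash();
    // Under the proposal the compiler would treat the members as if
    // annotated like this:
    hash_t toHash() const pure nothrow @safe
    {
        return payload;
    }

    bool opEquals(ref const KeyType s) const pure nothrow @safe
    {
        return payload == s.payload;
    }

    int opCmp(ref const KeyType s) const pure nothrow @safe
    {
        return payload < s.payload ? -1 : payload > s.payload ? 1 : 0;
    }
}
```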
Mar 11 2012
next sibling parent reply Alex Rønne Petersen <xtzgzorex@gmail.com> writes:
On 12-03-2012 00:54, Walter Bright wrote:
 Consider the toHash() function for struct key types:

 http://dlang.org/hash-map.html

 And of course the others:

 const hash_t toHash();
 const bool opEquals(ref const KeyType s);
 const int opCmp(ref const KeyType s);

 They need to be, as well as const, pure nothrow @safe.

 The problem is:
 1. a lot of code must be retrofitted
 2. it's just plain annoying to annotate them

 It's the same problem as for Object.toHash(). That was addressed by
 making those attributes inheritable, but that won't work for struct ones.

 So I propose instead a bit of a hack. toHash, opEquals, and opCmp as
 struct members be automatically annotated with pure, nothrow, and @safe
 (if not already marked as @trusted).
It may be a hack, but you know, those have special semantics/meanings in the first place, so is it really that bad? Consider also that contract blocks are now implicitly const, etc.

--
- Alex
Mar 11 2012
next sibling parent reply "Kapps" <opantm2+spam gmail.com> writes:
On Sunday, 11 March 2012 at 23:55:34 UTC, Alex Rønne Petersen 
wrote:
 It may be a hack, but you know, those have special 
 semantics/meanings in the first place, so is it really that 
 bad? Consider also that contract blocks are now implicitly 
 const, etc.
Agreed. Those are already special, so I don't think it hurts to make this change. But I may be missing some implications.
Mar 11 2012
parent bearophile <bearophileHUGS@lycos.com> writes:
Kapps:

 Agreed. Those are already special, so I don't think it hurts to 
 make this change. But I may be missing some implications.
At risk of sounding like a troll, I hope from now on Walter will not use this kind of strategy to solve all the MANY breaking changes D/DMD will need to face :-) Bye, bearophile
Mar 11 2012
prev sibling parent Don Clugston <dac@nospam.com> writes:
On 12/03/12 00:55, Alex Rønne Petersen wrote:
 On 12-03-2012 00:54, Walter Bright wrote:
 Consider the toHash() function for struct key types:

 http://dlang.org/hash-map.html

 And of course the others:

 const hash_t toHash();
 const bool opEquals(ref const KeyType s);
 const int opCmp(ref const KeyType s);

 They need to be, as well as const, pure nothrow @safe.

 The problem is:
 1. a lot of code must be retrofitted
 2. it's just plain annoying to annotate them
Maybe we need @nice or something, to mean pure nothrow @safe.
 It's the same problem as for Object.toHash(). That was addressed by
 making those attributes inheritable, but that won't work for struct ones.

 So I propose instead a bit of a hack. toHash, opEquals, and opCmp as
 struct members be automatically annotated with pure, nothrow, and
That was sounding reasonable, but...
 @safe (if not already marked as @trusted).
...this part is a bit scary. It sounds as though the semantics are a bit fuzzy. There is no way to mark a function as 'impure' or 'does_throw'. But you can annotate with @system.
 It may be a hack, but you know, those have special semantics/meanings in
 the first place, so is it really that bad?
Agreed, they are in some sense virtual functions. But how would you declare those functions? With "pure nothrow @safe", or with "pure nothrow @trusted"?
 Consider also that contract
 blocks are now implicitly const, etc.
But the clutter problem isn't restricted to those specific functions. One issue with pure, nothrow is that they have no inverse, so you cannot simply write pure: nothrow: at the top of the file and use 'pure nothrow' by default. The underlying problem is that, when spelt out in full, those annotations uglify the code.
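The asymmetry Don describes can be sketched directly; @safe has an escape hatch (@system), but pure and nothrow have none (the 'impure'/'throws' names below are hypothetical, no such attributes exist):

```d
@safe:          // fine: a later function can opt out with @system
pure: nothrow:  // problem: nothing below this point can opt out

int square(int x) { return x * x; }  // genuinely pure nothrow, ok

@system void probe()  // @system cancels @safe for this function...
{
    // ...but there is no 'impure' or 'throws' attribute to cancel the
    // pure/nothrow labels applied at the top of the file, so this
    // function still may not allocate impurely or throw.
}
```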
Mar 12 2012
prev sibling next sibling parent bearophile <bearophileHUGS@lycos.com> writes:
Walter:

 So I propose instead a bit of a hack. toHash, opEquals, and opCmp as struct
 members be automatically annotated with pure, nothrow, and @safe (if not
 already marked as @trusted).
Recently I have suggested to deprecate and later remove the need of opCmp for the built-in AAs. Regarding this hack proposal of yours, I don't fully understand its consequences yet. What are the negative sides of this idea? Bye, bearophile
Mar 11 2012
prev sibling next sibling parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Sun, Mar 11, 2012 at 04:54:09PM -0700, Walter Bright wrote:
 Consider the toHash() function for struct key types:
 
 http://dlang.org/hash-map.html
 
 And of course the others:
 
 const hash_t toHash();
 const bool opEquals(ref const KeyType s);
 const int opCmp(ref const KeyType s);
 
 They need to be, as well as const, pure nothrow @safe.
 
 The problem is:
 1. a lot of code must be retrofitted
 2. it's just plain annoying to annotate them
Ah, I see the "just add a new attribute" thing is coming back to bite you. ;-)
 It's the same problem as for Object.toHash(). That was addressed by
 making those attributes inheritable, but that won't work for struct
 ones.
 
 So I propose instead a bit of a hack. toHash, opEquals, and opCmp as
 struct members be automatically annotated with pure, nothrow, and
 @safe (if not already marked as @trusted).
I'm wary of the idea of automatically-imposed attributes on a "special" set of functions... seems a bit arbitrary, and arbitrary things don't tend to stand the test of time.

OTOH I can see the value of this. Forcing all toHash's to be pure nothrow @safe makes it much easier to, for example, implement AA's purely in object_.d (which I'm trying to do :-P). You don't have to worry about somebody defining a toHash that does strange things. Same thing with opEquals, etc.. It also lets you freely annotate stuff that calls these functions as pure, nothrow, @safe, etc., without having to dig through every function in druntime and phobos to mark all of them.

Here's an alternative (and perhaps totally insane) idea: what if, instead of needing to mark functions as pure, nothrow, etc., etc., we ASSUME all functions are pure, nothrow, and @safe unless they're explicitly declared otherwise? IOW, let all D code be pure, nothrow, and @safe by default, and if you want non-pure, or throwing code, or unsafe code, then you annotate the function as impure, throwing, or @system. It goes along with D's general philosophy of safe-by-default, unsafe-if-you-want-to.

Or, as a compromise, perhaps the compiler can auto-infer most of the attributes without any further effort from the user.

T

--
If the comments and the code disagree, it's likely that *both* are wrong. -- Christopher
Mar 11 2012
parent reply Alex Rønne Petersen <xtzgzorex@gmail.com> writes:
On 12-03-2012 06:43, H. S. Teoh wrote:
 On Sun, Mar 11, 2012 at 04:54:09PM -0700, Walter Bright wrote:
 Consider the toHash() function for struct key types:

 http://dlang.org/hash-map.html

 And of course the others:

 const hash_t toHash();
 const bool opEquals(ref const KeyType s);
 const int opCmp(ref const KeyType s);

 They need to be, as well as const, pure nothrow @safe.

 The problem is:
 1. a lot of code must be retrofitted
 2. it's just plain annoying to annotate them
Ah, I see the "just add a new attribute" thing is coming back to bite you. ;-)
 It's the same problem as for Object.toHash(). That was addressed by
 making those attributes inheritable, but that won't work for struct
 ones.

 So I propose instead a bit of a hack. toHash, opEquals, and opCmp as
 struct members be automatically annotated with pure, nothrow, and
 @safe (if not already marked as @trusted).
I'm wary of the idea of automatically-imposed attributes on a "special" set of functions... seems a bit arbitrary, and arbitrary things don't tend to stand the test of time.

OTOH I can see the value of this. Forcing all toHash's to be pure nothrow @safe makes it much easier to, for example, implement AA's purely in object_.d (which I'm trying to do :-P). You don't have to worry about somebody defining a toHash that does strange things. Same thing with opEquals, etc.. It also lets you freely annotate stuff that calls these functions as pure, nothrow, @safe, etc., without having to dig through every function in druntime and phobos to mark all of them.

Here's an alternative (and perhaps totally insane) idea: what if, instead of needing to mark functions as pure, nothrow, etc., etc., we ASSUME all functions are pure, nothrow, and @safe unless they're explicitly declared otherwise? IOW, let all D code be pure, nothrow, and @safe by default, and if you want non-pure, or throwing code, or unsafe code, then you annotate the function as impure, throwing, or @system. It goes along with D's general philosophy of safe-by-default, unsafe-if-you-want-to.
No. Too late in the design process. I have 20k+ lines of code that rely on the opposite behavior.
 Or, as a compromise, perhaps the compiler can auto-infer most of the
 attributes without any further effort from the user.
No, that has API design issues. You can silently break a guarantee you made previously.
 T
-- - Alex
Mar 11 2012
next sibling parent reply Alex Rønne Petersen <xtzgzorex@gmail.com> writes:
On 12-03-2012 07:04, Alex Rønne Petersen wrote:
 On 12-03-2012 06:43, H. S. Teoh wrote:
 On Sun, Mar 11, 2012 at 04:54:09PM -0700, Walter Bright wrote:
 Consider the toHash() function for struct key types:

 http://dlang.org/hash-map.html

 And of course the others:

 const hash_t toHash();
 const bool opEquals(ref const KeyType s);
 const int opCmp(ref const KeyType s);

 They need to be, as well as const, pure nothrow @safe.

 The problem is:
 1. a lot of code must be retrofitted
 2. it's just plain annoying to annotate them
Ah, I see the "just add a new attribute" thing is coming back to bite you. ;-)
 It's the same problem as for Object.toHash(). That was addressed by
 making those attributes inheritable, but that won't work for struct
 ones.

 So I propose instead a bit of a hack. toHash, opEquals, and opCmp as
 struct members be automatically annotated with pure, nothrow, and
 @safe (if not already marked as @trusted).
I'm wary of the idea of automatically-imposed attributes on a "special" set of functions... seems a bit arbitrary, and arbitrary things don't tend to stand the test of time.

OTOH I can see the value of this. Forcing all toHash's to be pure nothrow @safe makes it much easier to, for example, implement AA's purely in object_.d (which I'm trying to do :-P). You don't have to worry about somebody defining a toHash that does strange things. Same thing with opEquals, etc.. It also lets you freely annotate stuff that calls these functions as pure, nothrow, @safe, etc., without having to dig through every function in druntime and phobos to mark all of them.

Here's an alternative (and perhaps totally insane) idea: what if, instead of needing to mark functions as pure, nothrow, etc., etc., we ASSUME all functions are pure, nothrow, and @safe unless they're explicitly declared otherwise? IOW, let all D code be pure, nothrow, and @safe by default, and if you want non-pure, or throwing code, or unsafe code, then you annotate the function as impure, throwing, or @system. It goes along with D's general philosophy of safe-by-default, unsafe-if-you-want-to.
No. Too late in the design process. I have 20k+ lines of code that rely on the opposite behavior.
I should point out that I *do* think the idea is good (i.e. if you want the "bad" things, that's what you have to declare), but it's just too late now. Also, there might be issues with const and the likes - should the system assume const or immutable or inout or...?
 Or, as a compromise, perhaps the compiler can auto-infer most of the
 attributes without any further effort from the user.
No, that has API design issues. You can silently break a guarantee you made previously.
 T
-- - Alex
Mar 11 2012
parent Marco Leise <Marco.Leise@gmx.de> writes:
On Mon, 12 Mar 2012 07:06:33 +0100,
Alex Rønne Petersen <xtzgzorex@gmail.com> wrote:

 I should point out that I *do* think the idea is good (i.e. if you want
 the "bad" things, that's what you have to declare), but it's just too
 late now. Also, there might be issues with const and the likes - should
 the system assume const or immutable or inout or...?
" safe pure nothrow" as default could have worked better than manually sett= ing it, I agree. safe can be set at module level, so it is less of an issu= e to make it the default in your code. The problem with those attributes is= not that pure is used more often than impure or nothrow more often than th= rows, but that they need to be set transitive in function calls. And even t= hough the attributes do no harm to the user of the function (unlike immutab= le) they can easily be forgotten or left away, because it is tedious to typ= e them. --=20 Marco
Mar 12 2012
prev sibling parent reply "Martin Nowak" <dawg dawgfoto.de> writes:
On Mon, 12 Mar 2012 07:04:52 +0100, Alex Rønne Petersen
<xtzgzorex@gmail.com> wrote:

 Or, as a compromise, perhaps the compiler can auto-infer most of the
 attributes without any further effort from the user.
 No, that has API design issues. You can silently break a guarantee you
 made previously.
What's wrong with auto-inference? Inferred attributes are only strengthening guarantees.
Mar 12 2012
next sibling parent reply James Miller <james@aatch.net> writes:
On 12 March 2012 21:08, Martin Nowak <dawg@dawgfoto.de> wrote:
 On Mon, 12 Mar 2012 07:04:52 +0100, Alex Rønne Petersen
 <xtzgzorex@gmail.com> wrote:

 Or, as a compromise, perhaps the compiler can auto-infer most of the
 attributes without any further effort from the user.
 No, that has API design issues. You can silently break a guarantee you
 made previously.
What's wrong with auto-inference? Inferred attributes are only strengthening
 guarantees.
One problem I can think of is that relying on the auto-inference can create fragile code. You make a change in one place without concentrating and suddenly a completely different part of your code breaks, because it's expecting pure or @safe code and you have done something to prevent the inference. I don't know how much of a problem that could be, but it's one I can think of.

--
James Miller
Mar 12 2012
parent reply "Martin Nowak" <dawg dawgfoto.de> writes:
 One problem I can think of is relying on the auto-inference can create
 fragile code. You make a change in one place without concentrating and
 suddenly a completely different part of your code breaks, because it's
 expecting pure, or @safe code and you have done something to prevent
 the inference. I don't know how much of a problem that could be, but
 its one I can think of.

 --
 James Miller
That sounds intentional.

Say you have a struct with a getHash method.

struct Key
{
    hash_t getHash() /* inferred pure */
    {
        return 0;
    }
}

Say you have a Set that requires a pure getHash.

void insert(Key key) pure
{
    immutable hash = key.getHash();
}

Now if you change the implementation of Key.getHash then maybe it can no longer be inserted into that Set. If OTOH your set.insert were inferred pure itself, then the impureness would escalate to the set.insert(key) caller.

It's about the same logic that would make nothrow more useful. You can omit it most of the times but always have the possibility to enforce it, e.g. at a much higher level.
Mar 12 2012
parent James Miller <james@aatch.net> writes:
 That sounds intentional.

 Say you have a struct with a getHash method.

 struct Key
 {
     hash_t getHash() /* inferred pure */
     {
         return 0;
     }
 }

 Say you have a Set that requires a pure getHash.

 void insert(Key key) pure
 {
     immutable hash = key.getHash();
 }

 Now if you change the implementation of Key.getHash
 then maybe it can no longer be inserted into that Set.
 If OTOH your set.insert were inferred pure itself, then
 the impureness would escalate to the set.insert(key) caller.

 It's about the same logic that would make nothrow more useful.
 You can omit it most of the times but always have the
 possibility to enforce it, e.g. at a much higher level.
My point was more about distant code breaking. It's more to do with unexpected behavior than code correctness in this case. As I said, I could be worrying about nothing though.

--
James Miller
Mar 12 2012
prev sibling parent reply Walter Bright <newshound2@digitalmars.com> writes:
On 3/12/2012 1:08 AM, Martin Nowak wrote:
 What's wrong with auto-inference? Inferred attributes are only strengthening
 guarantees.
Auto-inference is currently done for lambdas and template functions - why? - because the function's implementation is guaranteed to be visible to the compiler. For other functions, not so, and so the attributes must be part of the function signature.
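Walter's distinction can be sketched in a few lines: a template's body is always visible at the instantiation point, so its attributes can be inferred per instantiation, while an ordinary function may be called through a declaration alone (e.g. from a .di interface file) and must therefore carry the attributes in its signature (function names below are illustrative):

```d
// Template: attributes inferred from the body at each instantiation.
T twice(T)(T x) { return x + x; }  // inferred pure nothrow @safe for int

// Non-template: may be compiled separately, so the attributes must be
// spelled out in the signature for callers to rely on them.
int twiceInt(int x) pure nothrow @safe { return x + x; }

void caller() pure nothrow @safe
{
    auto a = twice(21);    // ok: this instantiation is inferred compatible
    auto b = twiceInt(21); // ok: the declared attributes guarantee it
}
```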
Mar 12 2012
next sibling parent reply Alex Rønne Petersen <xtzgzorex@gmail.com> writes:
On 12-03-2012 10:40, Walter Bright wrote:
 On 3/12/2012 1:08 AM, Martin Nowak wrote:
 What's wrong with auto-inference. Inferred attributes are only
 strengthening
 guarantees.
Auto-inference is currently done for lambdas and template functions - why? - because the function's implementation is guaranteed to be visible to the compiler. For other functions, not so, and so the attributes must be part of the function signature.
Isn't auto-inference for templates a Bad Thing (TM) since it may give API guarantees that you can end up silently breaking? -- - Alex
Mar 12 2012
parent reply deadalnix <deadalnix@gmail.com> writes:
On 12/03/2012 13:51, Alex Rønne Petersen wrote:
 On 12-03-2012 10:40, Walter Bright wrote:
 On 3/12/2012 1:08 AM, Martin Nowak wrote:
 What's wrong with auto-inference. Inferred attributes are only
 strengthening
 guarantees.
Auto-inference is currently done for lambdas and template functions - why? - because the function's implementation is guaranteed to be visible to the compiler. For other functions, not so, and so the attributes must be part of the function signature.
Isn't auto-inference for templates a Bad Thing (TM) since it may give API guarantees that you can end up silently breaking?
As long as you can explicitly specify that too, and that you get a compile time error when you fail to provide what is explicitly stated, this isn't a problem.
Mar 12 2012
parent reply Alex Rønne Petersen <xtzgzorex@gmail.com> writes:
On 12-03-2012 14:16, deadalnix wrote:
 On 12/03/2012 13:51, Alex Rønne Petersen wrote:
 On 12-03-2012 10:40, Walter Bright wrote:
 On 3/12/2012 1:08 AM, Martin Nowak wrote:
 What's wrong with auto-inference. Inferred attributes are only
 strengthening
 guarantees.
Auto-inference is currently done for lambdas and template functions - why? - because the function's implementation is guaranteed to be visible to the compiler. For other functions, not so, and so the attributes must be part of the function signature.
Isn't auto-inference for templates a Bad Thing (TM) since it may give API guarantees that you can end up silently breaking?
As long as you can explicitly specify that too, and that you get a compile time error when you fail to provide what is explicitly stated, this isn't a problem.
But people might be relying on your API that just so happens to be pure, but then suddenly isn't! -- - Alex
Mar 12 2012
parent reply "Jonathan M Davis" <jmdavisProg gmx.com> writes:
On Monday, March 12, 2012 14:23:28 Alex Rønne Petersen wrote:
 On 12-03-2012 14:16, deadalnix wrote:
 On 12/03/2012 13:51, Alex Rønne Petersen wrote:
 On 12-03-2012 10:40, Walter Bright wrote:
 On 3/12/2012 1:08 AM, Martin Nowak wrote:
 What's wrong with auto-inference. Inferred attributes are only
 strengthening
 guarantees.
Auto-inference is currently done for lambdas and template functions - why? - because the function's implementation is guaranteed to be visible to the compiler. For other functions, not so, and so the attributes must be part of the function signature.
Isn't auto-inference for templates a Bad Thing (TM) since it may give API guarantees that you can end up silently breaking?
As long as you can explicitly specify that too, and that you get a compile time error when you fail to provide what is explicitly stated, this isn't a problem.
But people might be relying on your API that just so happens to be pure, but then suddenly isn't!
True, but without it, pure, @safe, and nothrow are essentially useless with templates, because far too many templates depend on their arguments for whether they can be pure, @safe, and/or nothrow or not. It's attribute inference for templates that made it possible to use stuff like std.range and std.algorithm in pure functions. Without that, it couldn't be done (at least not without some nasty casting). Attribute inference is necessary for templates.

Now, that _does_ introduce the possibility of a template being able to be pure and then not being able to be pure thanks to a change that's made to it or something that it uses, and that makes it impossible for any code using it to be pure. CTFE has the same problem. It's fairly easy to have a function which is CTFEable cease to be CTFEable thanks to a change to it, and no one notices. We've had issues with this in the past.

In both cases, I believe that the best solution that we have is to unit test stuff to show that it _can_ be pure, @safe, nothrow, and/or CTFEable if the arguments support it, and then those tests can guarantee that it stays that way in spite of any code changes, since they'll fail if the changes break that.

- Jonathan M Davis
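The kind of guard test Jonathan describes can be written as an ordinary unittest: instantiate the template with a known-good argument inside a function that demands the attributes, and evaluate it at compile time to pin down CTFEability. A sketch (hashOf_ is a hypothetical template under test, not a Phobos symbol):

```d
// Hypothetical template whose attributes are inferred from its body.
hash_t hashOf_(T)(T value) { return cast(hash_t) value; }

unittest
{
    // Fails to compile if hashOf_!int stops being pure nothrow @safe.
    static void guard() pure nothrow @safe
    {
        cast(void) hashOf_(42);
    }
    guard();

    // Fails to compile if hashOf_!int stops being CTFEable.
    static assert(hashOf_(42) == 42);
}
```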
Mar 12 2012
parent reply Alex Rønne Petersen <xtzgzorex@gmail.com> writes:
On 12-03-2012 18:38, Jonathan M Davis wrote:
 On Monday, March 12, 2012 14:23:28 Alex Rønne Petersen wrote:
 On 12-03-2012 14:16, deadalnix wrote:
 On 12/03/2012 13:51, Alex Rønne Petersen wrote:
 On 12-03-2012 10:40, Walter Bright wrote:
 On 3/12/2012 1:08 AM, Martin Nowak wrote:
 What's wrong with auto-inference. Inferred attributes are only
 strengthening
 guarantees.
Auto-inference is currently done for lambdas and template functions - why? - because the function's implementation is guaranteed to be visible to the compiler. For other functions, not so, and so the attributes must be part of the function signature.
Isn't auto-inference for templates a Bad Thing (TM) since it may give API guarantees that you can end up silently breaking?
As long as you can explicitly specify that too, and that you get a compile time error when you fail to provide what is explicitly stated, this isn't a problem.
But people might be relying on your API that just so happens to be pure, but then suddenly isn't!
True, but without it, pure, @safe, and nothrow are essentially useless with templates, because far too many templates depend on their arguments for whether they can be pure, @safe, and/or nothrow or not. It's attribute inference for templates that made it possible to use stuff like std.range and std.algorithm in pure functions. Without that, it couldn't be done (at least not without some nasty casting). Attribute inference is necessary for templates.

Now, that _does_ introduce the possibility of a template being able to be pure and then not being able to be pure thanks to a change that's made to it or something that it uses, and that makes it impossible for any code using it to be pure. CTFE has the same problem. It's fairly easy to have a function which is CTFEable cease to be CTFEable thanks to a change to it, and no one notices. We've had issues with this in the past.
That could be solved with a @ctfe attribute or something, no? Like, if the function has @ctfe, go through all possible CTFE paths (excluding !__ctfe paths of course) and make sure they are CTFEable.
 In both cases, I believe that the best solution that we have is to unit test
 stuff to show that it _can_ be pure, @safe, nothrow, and/or CTFEable if the
 arguments support it, and then those tests can guarantee that it stays that
 way in spite of any code changes, since they'll fail if the changes break
 that.

 - Jonathan M Davis
-- - Alex
Mar 12 2012
next sibling parent reply "Jonathan M Davis" <jmdavisProg gmx.com> writes:
On Monday, March 12, 2012 18:44:06 Alex Rønne Petersen wrote:
 Now, that _does_ introduce the possibility of a template being to be pure
 and then not being able to be pure thanks to a change that's made to it
 or something that it uses, and that makes impossible for any code using
 it to be pure. CTFE has the same problem. It's fairly easy to have a
 function which is CTFEable cease to be CTFEable thanks to a change to it,
 and no one notices. We've had issues with this in the past.
That could be solved with a @ctfe attribute or something, no? Like, if the function has @ctfe, go through all possible CTFE paths (excluding !__ctfe paths of course) and make sure they are CTFEable.
1. That goes completely against how CTFE was designed, in that part of the idea was that you _wouldn't_ have to annotate it.

2. I don't really know how feasible that would be. At minimum, the fact that CTFE works with classes now would probably render it completely infeasible for classes, since they're polymorphic, and the compiler can't possibly know all of the possible types that could be passed to the function. Templates would screw it over too for the exact same reasons that they can have issues with pure, @safe, and nothrow. It may or may not be feasible without classes or templates being involved.

So, no, I don't think that @ctfe would really work. And while I agree that the situation isn't exactly ideal, I don't really see a way around it. Unit tests _do_ catch it for you though. The only thing that they can't catch is whether the template is going to be pure, nothrow, @safe, and/or CTFEable with _your_ arguments to it, but as long as it's pure, nothrow, @safe, and/or CTFEable with _a_ set of arguments, it will generally be the fault of the arguments when such a function fails to be pure, nothrow, @safe, and/or CTFEable as expected. If the unit tests don't hit all of the possible static if-else blocks and all of the possible code paths for CTFE, it could still be a problem, but that just means that the unit tests aren't thorough enough, and more thorough unit tests will fix the problem, as tedious as it may be to do that.

- Jonathan M Davis
Mar 12 2012
parent Alex Rønne Petersen <xtzgzorex@gmail.com> writes:
On 12-03-2012 18:55, Jonathan M Davis wrote:
 On Monday, March 12, 2012 18:44:06 Alex Rønne Petersen wrote:
 Now, that _does_ introduce the possibility of a template being to be pure
 and then not being able to be pure thanks to a change that's made to it
 or something that it uses, and that makes impossible for any code using
 it to be pure. CTFE has the same problem. It's fairly easy to have a
 function which is CTFEable cease to be CTFEable thanks to a change to it,
 and no one notices. We've had issues with this in the past.
That could be solved with a ctfe attribute or something, no? Like, if the function has ctfe, go through all possible CTFE paths (excluding !__ctfe paths of course) and make sure they are CTFEable.
1. That goes completely against how CTFE was designed in that part of the idea was that you _wouldn't_ have to annotate it.
Though, rarely, functions written with runtime execution in mind actually Just Work in CTFE. You usually have to change code or special-case things for it to work. In my experience, anyway...
 2. I don't really know how feasible that would be. At minimum, the fact that
 CTFE works with classes now would probably render it completely infeasible for
 classes, since they're polymorphic, and the compiler can't possibly know all
 of the possible types that could be passed to the function. Templates would
 screw it over too for the exact same reasons that they can have issues with
 pure, @safe, and nothrow. It may or may not be feasible without classes or
 templates being involved.
I hadn't thought of classes at all. In practice, it's impossible then.
 So, no, I don't think that @ctfe would really work. And while I agree that the
 situation isn't exactly ideal, I don't really see a way around it. Unit tests
 _do_ catch it for you though. The only thing that they can't catch is whether
 the template is going to be pure, nothrow, @safe, and/or CTFEable with _your_
 arguments to it, but as long as it's pure, nothrow, @safe, and/or CTFEable
 with _a_ set of arguments, it will generally be the fault of the arguments
 when such a function fails to be pure, nothrow, @safe, and/or CTFEable as
 expected. If the unit tests don't hit all of the possible static if-else
 blocks and all of the possible code paths for CTFE, it could still be a
 problem, but that just means that the unit tests aren't thorough enough, and
 more thorough unit tests will fix the problem, as tedious as it may be to do
 that.

 - Jonathan M Davis
-- - Alex
Mar 12 2012
prev sibling next sibling parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Mon, Mar 12, 2012 at 01:55:33PM -0400, Jonathan M Davis wrote:
[...]
 So, no, I don't think that @ctfe would really work. And while I agree
 that the situation isn't exactly ideal, I don't really see a way
 around it. Unit tests _do_ catch it for you though. The only thing
 that they can't catch is whether the template is going to be pure,
 nothrow, @safe, and/or CTFEable with _your_ arguments to it, but as
 long as it's pure, nothrow, @safe, and/or CTFEable with _a_ set of
 arguments, it will generally be the fault of the arguments when such a
 function fails to be pure, nothrow, @safe, and/or CTFEable as
 expected. If the unit tests don't hit all of the possible static
 if-else blocks and all of the possible code paths for CTFE, it could
 still be a problem, but that just means that the unit tests aren't
 thorough enough, and more thorough unit tests will fix the problem, as
 tedious as it may be to do that.
[...]

Tangential note: writing unit tests may be tedious, but D's inline unittest syntax has alleviated a large part of that tedium. So much so that I find myself writing as much code in unittests as real code. Which is a good thing, because in the past I'd always been too lazy to write any unittests at all.

T

--
Ruby is essentially Perl minus Wall.
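A minimal example of the inline style being praised here; the unittest block sits next to the function it exercises and runs when the program is built with dmd's -unittest switch (the gcd function is illustrative):

```d
int gcd(int a, int b) pure nothrow @safe
{
    // Euclid's algorithm: repeatedly replace (a, b) with (b, a mod b).
    while (b != 0)
    {
        immutable t = b;
        b = a % b;
        a = t;
    }
    return a;
}

unittest
{
    assert(gcd(12, 18) == 6);
    assert(gcd(7, 13) == 1);
    assert(gcd(0, 5) == 5);
}
```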
Mar 12 2012
next sibling parent reply Alex Rønne Petersen <xtzgzorex@gmail.com> writes:
On 12-03-2012 19:04, H. S. Teoh wrote:
 On Mon, Mar 12, 2012 at 01:55:33PM -0400, Jonathan M Davis wrote:
 [...]
 So, no, I don't think that @ctfe would really work. And while I agree
 that the situation isn't exactly ideal, I don't really see a way
 around it. Unit tests _do_ catch it for you though. The only thing
 that they can't catch is whether the template is going to be pure,
 nothrow, @safe, and/or CTFEable with _your_ arguments to it, but as
 long as it's pure, nothrow, @safe, and/or CTFEable with _a_ set of
 arguments, it will generally be the fault of the arguments when such a
 function fails to be pure, nothrow, @safe, and/or CTFEable as
 expected. If the unit tests don't hit all of the possible static
 if-else blocks and all of the possible code paths for CTFE, it could
 still be a problem, but that just means that the unit tests aren't
 thorough enough, and more thorough unit tests will fix the problem, as
 tedious as it may be to do that.
[...] Tangential note: writing unit tests may be tedious, but D's inline unittest syntax has alleviated a large part of that tedium. So much so that I find myself writing as much code in unittests as real code. Which is a good thing, because in the past I'd always been too lazy to write any unittests at all. T
I stopped writing inline unit tests in larger code bases. If I do that, I have to maintain a separate build configuration just for test execution, which is not practical. Furthermore, I want to test my code in debug and release mode, which... goes against having a test configuration. So, I've ended up moving all unit tests to a separate executable that links in all my libraries and runs their tests in debug/release mode. Works much better. I don't feel that unittest in D was really thought through properly for large projects targeting actual end users... -- - Alex
Mar 12 2012
next sibling parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Mon, Mar 12, 2012 at 07:41:39PM +0100, Alex Rønne Petersen wrote:
 On 12-03-2012 19:04, H. S. Teoh wrote:
[...]
Tangential note: writing unit tests may be tedious, but D's inline
unittest syntax has alleviated a large part of that tedium. So much
so that I find myself writing as much code in unittests as real code.
Which is a good thing, because in the past I'd always been too lazy
to write any unittests at all.
[...]
 I stopped writing inline unit tests in larger code bases. If I do
 that, I have to maintain a separate build configuration just for test
 execution, which is not practical. Furthermore, I want to test my code
 in debug and release mode, which... goes against having a test
 configuration.
[...] Hmm. Sounds like what you want is not really unittests, but global program startup self-checks. In my mind, unittests are for running specific checks against specific functions and classes/structs inside a module. I frequently write lots of unittests that instantiate all sorts of templates never used by the real program, contrived data objects, etc., that may potentially have long running times, or create files in the working directory or other stuff like that. IOW, stuff that is not suitable to be used for release builds at all. It's really more of a way of forcing the program to refuse to start during development when a code change breaks the system, so that the developer notices the breakage immediately. Definitely not for the end-user.

If I wanted release-build self-consistency checking, then yeah, I'd use a different framework than unittests.

As for build configuration, I gave up on make a decade ago for something saner, which can handle complicated build options properly. But that belongs to another topic.

T -- Error: Keyboard not attached. Press F1 to continue. -- Yoon Ha Lee, CONLANG
Mar 12 2012
parent =?ISO-8859-1?Q?Alex_R=F8nne_Petersen?= <xtzgzorex gmail.com> writes:
On 12-03-2012 20:08, H. S. Teoh wrote:
 On Mon, Mar 12, 2012 at 07:41:39PM +0100, Alex Rønne Petersen wrote:
 On 12-03-2012 19:04, H. S. Teoh wrote:
[...]
 Tangential note: writing unit tests may be tedious, but D's inline
 unittest syntax has alleviated a large part of that tedium. So much
 so that I find myself writing as much code in unittests as real code.
 Which is a good thing, because in the past I'd always been too lazy
 to write any unittests at all.
[...]
 I stopped writing inline unit tests in larger code bases. If I do
 that, I have to maintain a separate build configuration just for test
 execution, which is not practical. Furthermore, I want to test my code
 in debug and release mode, which... goes against having a test
 configuration.
[...] Hmm. Sounds like what you want is not really unittests, but global program startup self-checks. In my mind, unittests is for running specific checks against specific functions, classes/structs inside a
That's what I do. I simply moved my unittest blocks to a separate executable.
 module. I frequently write lots of unittests that instantiates all sorts
 of templates never used by the real program, contrived data objects,
 etc., that may potentially have long running times, or creates files in
 the working directory or other stuff like that.  IOW, stuff that are not
You never know if some code that seems to work fine in debug mode breaks in release mode (until your user runs into a bug). This is why I want full coverage in all configurations.
 suitable to be used for release builds at all. It's really more of a way
 of forcing the program to refuse to start during development when a code
 change breaks the system, so that the developer notices the breakage
 immediately. Definitely not for the end-user.
Right. That's why my tests are in a separate executable from the actual program.
 If I wanted release-build self-consistency checking, then yeah, I'd use
 a different framework than unittests.
IMHO unittest works fine for both debug and release, just not inline.
 As for build configuration, I've given up on make a decade ago for
 something saner, which can handle complicated build options properly.
 But that belongs to another topic.
I used to use Make for this project, then switched to Waf. It's an amazing build tool.
 T
-- - Alex
Mar 12 2012
prev sibling parent Jacob Carlborg <doob me.com> writes:
On 2012-03-12 19:41, Alex Rønne Petersen wrote:
 I stopped writing inline unit tests in larger code bases. If I do that,
 I have to maintain a separate build configuration just for test
 execution, which is not practical. Furthermore, I want to test my code
 in debug and release mode, which... goes against having a test
 configuration.
I don't inline my unit tests either.
 So, I've ended up moving all unit tests to a separate executable that
 links in all my libraries and runs their tests in debug/release mode.
 Works much better.

 I don't feel that unittest in D was really thought through properly for
 large projects targeting actual end users...
I agree. I've also started to do more high-level testing of some of my command line tools using Cucumber and Aruba. But these tests are written in Ruby because of Cucumber and Aruba. http://cukes.info/ https://github.com/cucumber/aruba -- /Jacob Carlborg
Mar 12 2012
prev sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 3/12/2012 11:04 AM, H. S. Teoh wrote:
 Tangential note: writing unit tests may be tedious, but D's inline
 unittest syntax has alleviated a large part of that tedium. So much so
 that I find myself writing as much code in unittests as real code.
 Which is a good thing, because in the past I'd always been too lazy to
 write any unittests at all.
That's exactly how it was intended! It seems like such a small feature, really just a syntactic convenience, but what a difference it makes.
Mar 12 2012
prev sibling parent reply "Martin Nowak" <dawg dawgfoto.de> writes:
 That could be solved with a @ctfe attribute or something, no? Like, if
 the function has @ctfe, go through all possible CTFE paths (excluding
 !__ctfe paths of course) and make sure they are CTFEable.
Everything that's pure should be CTFEable, which doesn't imply that you can turn every CTFEable function into a pure one.
Mar 12 2012
parent reply "Jonathan M Davis" <jmdavisProg gmx.com> writes:
On Monday, March 12, 2012 21:36:21 Martin Nowak wrote:
 That could be solved with a @ctfe attribute or something, no? Like, if
 the function has @ctfe, go through all possible CTFE paths (excluding
 !__ctfe paths of course) and make sure they are CTFEable.
Everything that's pure should be CTFEable which doesn't imply that you can turn every CTFEable function into a pure one.
I don't think that that's quite true. pure doesn't imply @safe, so you could do pointer arithmetic and stuff and the like - which I'm pretty sure CTFE won't allow. And, of course, if you mark a C function as pure or subvert pure through casts, then pure _definitely_ doesn't imply CTFEability. - Jonathan M Davis
Mar 12 2012
parent Timon Gehr <timon.gehr gmx.ch> writes:
On 03/12/2012 09:46 PM, Jonathan M Davis wrote:
 On Monday, March 12, 2012 21:36:21 Martin Nowak wrote:
 That could be solved with a @ctfe attribute or something, no? Like, if
 the function has @ctfe, go through all possible CTFE paths (excluding
 !__ctfe paths of course) and make sure they are CTFEable.
Everything that's pure should be CTFEable which doesn't imply that you can turn every CTFEable function into a pure one.
I don't think that that's quite true. pure doesn't imply @safe, so you could do pointer arithmetic and stuff and the like - which I'm pretty sure CTFE won't allow. And, of course, if you mark a C function as pure or subvert pure through casts, then pure _definitely_ doesn't imply CTFEability. - Jonathan M Davis
CTFE allows quite some pointer arithmetic, but makes sure it is actually safe.
Mar 12 2012
prev sibling next sibling parent "Martin Nowak" <dawg dawgfoto.de> writes:
On Mon, 12 Mar 2012 10:40:16 +0100, Walter Bright  
<newshound2 digitalmars.com> wrote:

 On 3/12/2012 1:08 AM, Martin Nowak wrote:
 What's wrong with auto-inference. Inferred attributes are only  
 strengthening
 guarantees.
Auto-inference is currently done for lambdas and template functions - why? - because the function's implementation is guaranteed to be visible to the compiler. For other functions, not so, and so the attributes must be part of the function signature.
A "@safe pure nothrow const" might be used as "@system". That means someone using a declaration may have a different view than someone providing the implementation. Those interface boundaries are also a good place for by-hand annotations to provide explicit API guarantees and enforce a correct implementation. Though another issue with inference is that it would require a depth-first order for the semantic passes. I also hope we still don't mangle inferred attributes.
Mar 12 2012
prev sibling parent reply "Peter Alexander" <peter.alexander.au gmail.com> writes:
On Monday, 12 March 2012 at 09:40:15 UTC, Walter Bright wrote:
 On 3/12/2012 1:08 AM, Martin Nowak wrote:
 What's wrong with auto-inference. Inferred attributes are only 
 strengthening
 guarantees.
Auto-inference is currently done for lambdas and template functions - why? - because the function's implementation is guaranteed to be visible to the compiler. For other functions, not so, and so the attributes must be part of the function signature.
Dumb question: Why not auto-infer when the function body is available, and put the inferred attributes into the automatically generated .di file? Apologies if I've missed something completely obvious.
Mar 13 2012
next sibling parent deadalnix <deadalnix gmail.com> writes:
Le 13/03/2012 12:02, Peter Alexander a écrit :
 On Monday, 12 March 2012 at 09:40:15 UTC, Walter Bright wrote:
 On 3/12/2012 1:08 AM, Martin Nowak wrote:
 What's wrong with auto-inference. Inferred attributes are only
 strengthening
 guarantees.
Auto-inference is currently done for lambdas and template functions - why? - because the function's implementation is guaranteed to be visible to the compiler. For other functions, not so, and so the attributes must be part of the function signature.
Dumb question: Why not auto-infer when the function body is available, and put the inferred attributes into the automatically generated .di file? Apologies if I've missed something completely obvious.
That is exactly what I was thinking about.
Mar 13 2012
prev sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 3/13/12 6:02 AM, Peter Alexander wrote:
 On Monday, 12 March 2012 at 09:40:15 UTC, Walter Bright wrote:
 On 3/12/2012 1:08 AM, Martin Nowak wrote:
 What's wrong with auto-inference. Inferred attributes are only
 strengthening
 guarantees.
Auto-inference is currently done for lambdas and template functions - why? - because the function's implementation is guaranteed to be visible to the compiler. For other functions, not so, and so the attributes must be part of the function signature.
Dumb question: Why not auto-infer when the function body is available, and put the inferred attributes into the automatically generated .di file? Apologies if I've missed something completely obvious.
Because in the general case functions call one another so there's no way to figure which to look at first. Andrei
Mar 13 2012
next sibling parent reply deadalnix <deadalnix gmail.com> writes:
Le 13/03/2012 15:46, Andrei Alexandrescu a écrit :
 On 3/13/12 6:02 AM, Peter Alexander wrote:
 On Monday, 12 March 2012 at 09:40:15 UTC, Walter Bright wrote:
 On 3/12/2012 1:08 AM, Martin Nowak wrote:
 What's wrong with auto-inference. Inferred attributes are only
 strengthening
 guarantees.
Auto-inference is currently done for lambdas and template functions - why? - because the function's implementation is guaranteed to be visible to the compiler. For other functions, not so, and so the attributes must be part of the function signature.
Dumb question: Why not auto-infer when the function body is available, and put the inferred attributes into the automatically generated .di file? Apologies if I've missed something completely obvious.
Because in the general case functions call one another so there's no way to figure which to look at first. Andrei
This problem is pretty close to garbage collection. Let's use pure as the example, but it works with the other qualifiers too.

Functions are marked pure, impure, or "pure provided all functions called are pure" (possibly pure). Then you go through all possibly pure functions, and if one calls an impure function, you mark it impure. When a pass over the loop marks no function as impure, you can mark all remaining possibly pure functions as pure.
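The fixed-point marking described above can be sketched in a few lines; nothing below is from the thread, and the call graph and function names are made up purely for illustration:

```python
# Fixed-point purity inference over a call graph: every function starts
# as "possibly pure" unless it is known impure, and impurity propagates
# along call edges until a full pass marks nothing new.

def infer_purity(calls, known_impure):
    """calls: dict mapping each function name to the names it calls."""
    impure = set(known_impure)
    changed = True
    while changed:
        changed = False
        for fn, callees in calls.items():
            if fn not in impure and impure.intersection(callees):
                impure.add(fn)  # calls an impure function -> impure
                changed = True
    # whatever was never marked impure is pure
    return {fn for fn in calls if fn not in impure}

# Hypothetical graph: d() does I/O, c() calls d(), while a() and b()
# are mutually recursive and call nothing impure.
graph = {"a": ["b"], "b": ["a"], "c": ["d"], "d": []}
print(sorted(infer_purity(graph, known_impure={"d"})))  # ['a', 'b']
```

Note that the mutually recursive pair a()/b() comes out pure without any special-casing, which is the point of iterating to a fixed point rather than resolving functions one at a time.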
Mar 13 2012
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 3/13/12 10:47 AM, deadalnix wrote:
 This problem is pretty close to garbage collection. Let's use pure as
 example, but it work with other qualifier too.

 function are marked pure, impure, or pure given all function called are
 pure (possibly pure). Then you go throw all possibly pure function and
 if it call an impure function, they mark it impure. When you don't mark
 any function as impure on a loop, you can mark all remaining possibly
 pure functions as pure.
Certain analyses can be done using the so-called worklist approach. The analysis can be pessimistic (initially marking all functions as not carrying the property analyzed and gradually proving that some do carry it) or optimistic (the other way around). The algorithm ends when the worklist is empty. This approach is well studied and probably ought to get more coverage in compiler books. I learned about it in a graduate compiler class.

However, the discussion was about availability of the body. A worklist-based approach would need all functions that call one another regardless of module. That makes the analysis interprocedural, i.e. difficult on large codebases.

Andrei
Mar 13 2012
next sibling parent deadalnix <deadalnix gmail.com> writes:
Le 13/03/2012 17:06, Andrei Alexandrescu a écrit :
 On 3/13/12 10:47 AM, deadalnix wrote:
 This problem is pretty close to garbage collection. Let's use pure as
 example, but it work with other qualifier too.

 function are marked pure, impure, or pure given all function called are
 pure (possibly pure). Then you go throw all possibly pure function and
 if it call an impure function, they mark it impure. When you don't mark
 any function as impure on a loop, you can mark all remaining possibly
 pure functions as pure.
Certain analyses can be done using the so-called worklist approach. The analysis can be pessimistic (initially marking all functions as not carrying the property analyzed and gradually proving some do carry it) or optimistic (the other way around). The algorithm ends when the worklist is empty. This approach is well-studied and probably ought more coverage in compiler books. I learned about it in a graduate compiler class. However, the discussion was about availability of the body. A worklist-based approach would need all functions that call one another regardless of module. That makes the analysis interprocedural, i.e. difficult on large codebases. Andrei
I expect that the functions we are talking about here won't call into most of the codebase. That would be scary.
Mar 13 2012
prev sibling parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Tue, Mar 13, 2012 at 11:06:00AM -0500, Andrei Alexandrescu wrote:
 On 3/13/12 10:47 AM, deadalnix wrote:
This problem is pretty close to garbage collection. Let's use pure as
example, but it work with other qualifier too.

function are marked pure, impure, or pure given all function called
are pure (possibly pure). Then you go throw all possibly pure
function and if it call an impure function, they mark it impure. When
you don't mark any function as impure on a loop, you can mark all
remaining possibly pure functions as pure.
Certain analyses can be done using the so-called worklist approach. The analysis can be pessimistic (initially marking all functions as not carrying the property analyzed and gradually proving some do carry it) or optimistic (the other way around). The algorithm ends when the worklist is empty. This approach is well-studied and probably ought more coverage in compiler books. I learned about it in a graduate compiler class.
[...] I have an idea. Instead of making potentially risky changes to the compiler, or changes with unknown long-term consequences, what about an external tool (or a new compiler option) that performs this analysis and saves it into a file, say in JSON format or something?

So we run the analysis on druntime, and it tells us exactly which functions can be marked pure, const, whatever. Then we can (1) look through the list to see if functions that *should* be pure aren't, then investigate why and (possibly) fix the problem; (2) annotate all functions in druntime just by going through the list, without needing to manually fix one function, find out it breaks 5 other functions, fix those functions, find another 25 broken, etc..

We can also run this on phobos, clean up whatever functions aren't marked pure, and then go through the list and annotate everything in one shot.

Now that I think of it, it seems quite silly that we should be agonizing over the amount of manual work needed to annotate druntime and phobos, when the compiler already has all the necessary information to automate most of the tedious work.

T -- It is not the employer who pays the wages. Employers only handle the money. It is the customer who pays the wages. -- Henry Ford
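To make the idea concrete, the tool's output could be as simple as a JSON map from fully qualified function names to the attributes the analysis found inferable. Everything below is invented for illustration: no such dmd switch exists, and the module and function names are hypothetical.

```python
import json

# Hypothetical output of the proposed analysis tool: for each function,
# the attributes that were found to be inferable from its body.
report = {
    "mymod.computeHash": ["pure", "nothrow", "@safe"],
    "mymod.readConfig":  ["nothrow"],
}

# Serialize so another tool (or a human) can walk the list and
# annotate the source in one pass.
print(json.dumps(report, indent=2, sort_keys=True))
```

A follow-up script could then diff this report against the annotations actually present in the source, flagging functions that should be pure but aren't.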
Mar 13 2012
prev sibling parent reply kennytm <kennytm gmail.com> writes:
Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> wrote:
 On 3/13/12 6:02 AM, Peter Alexander wrote:
 On Monday, 12 March 2012 at 09:40:15 UTC, Walter Bright wrote:
 On 3/12/2012 1:08 AM, Martin Nowak wrote:
 What's wrong with auto-inference. Inferred attributes are only
 strengthening
 guarantees.
Auto-inference is currently done for lambdas and template functions - why? - because the function's implementation is guaranteed to be visible to the compiler. For other functions, not so, and so the attributes must be part of the function signature.
Dumb question: Why not auto-infer when the function body is available, and put the inferred attributes into the automatically generated .di file? Apologies if I've missed something completely obvious.
Because in the general case functions call one another so there's no way to figure which to look at first. Andrei
That's no different from template functions calling each other, right?

int a()(int x) { return x==0?1:b(x-1); }
int b()(int x) { return x==0?1:a(x-1); }
Mar 13 2012
next sibling parent Timon Gehr <timon.gehr gmx.ch> writes:
On 03/13/2012 11:39 PM, kennytm wrote:
 Andrei Alexandrescu<SeeWebsiteForEmail erdani.org>  wrote:
 On 3/13/12 6:02 AM, Peter Alexander wrote:
 On Monday, 12 March 2012 at 09:40:15 UTC, Walter Bright wrote:
 On 3/12/2012 1:08 AM, Martin Nowak wrote:
 What's wrong with auto-inference. Inferred attributes are only
 strengthening
 guarantees.
Auto-inference is currently done for lambdas and template functions - why? - because the function's implementation is guaranteed to be visible to the compiler. For other functions, not so, and so the attributes must be part of the function signature.
Dumb question: Why not auto-infer when the function body is available, and put the inferred attributes into the automatically generated .di file? Apologies if I've missed something completely obvious.
Because in the general case functions call one another so there's no way to figure which to look at first. Andrei
That's no different from template functions calling each other, right?

int a()(int x) { return x==0?1:b(x-1); }
int b()(int x) { return x==0?1:a(x-1); }
http://d.puremagic.com/issues/show_bug.cgi?id=7205

The non-trivial issue is what to do with compile-time reflection in the function body. I think during reflection, the function should appear non-annotated to itself and to all functions with inferred attributes that it calls transitively through other functions with inferred attributes, regardless of whether or not they are later inferred. (Currently inference fails spectacularly for anything inside a typeof expression anyway, therefore it is not yet that much of an issue.)

pragma(msg, typeof({writeln("hello world");})); // "void function() pure @safe"
Mar 13 2012
prev sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 3/13/12 5:39 PM, kennytm wrote:
 Andrei Alexandrescu<SeeWebsiteForEmail erdani.org>  wrote:
 Because in the general case functions call one another so there's no way
 to figure which to look at first.

 Andrei
That's no different from template functions calling each other, right?

int a()(int x) { return x==0?1:b(x-1); }
int b()(int x) { return x==0?1:a(x-1); }
There is. Templates are guaranteed to have the body available. Walter uses a recursive on-demand approach instead of a worklist approach for inferring attributes (worklists have an issue I forgot about). Andrei
Mar 13 2012
prev sibling next sibling parent reply "so" <so so.so> writes:
On Sunday, 11 March 2012 at 23:54:10 UTC, Walter Bright wrote:
 Consider the toHash() function for struct key types:

 http://dlang.org/hash-map.html

 And of course the others:

 const hash_t toHash();
 const bool opEquals(ref const KeyType s);
 const int opCmp(ref const KeyType s);

 They need to be, as well as const, pure nothrow @safe.

 The problem is:
 1. a lot of code must be retrofitted
 2. it's just plain annoying to annotate them

 It's the same problem as for Object.toHash(). That was 
 addressed by making those attributes inheritable, but that 
 won't work for struct ones.

 So I propose instead a bit of a hack. toHash, opEquals, and 
 opCmp as struct members be automatically annotated with pure, 
 nothrow, and @safe (if not already marked as @trusted).
A pattern is emerging. Why not analyze it a bit and somehow try to find common ground? Then we can generalize it to a single annotation.
Mar 12 2012
parent "so" <so so.so> writes:
On Monday, 12 March 2012 at 07:18:06 UTC, so wrote:

 A pattern is emerging. Why not analyze it a bit and somehow try 
 to find a common ground? Then we can generalize it to a single 
 annotation.
mask(wat) const|pure|nothrow|safe

wat hash_t toHash()
wat bool opEquals(ref const KeyType s)
wat int opCmp(ref const KeyType s)
Mar 12 2012
prev sibling next sibling parent reply "Martin Nowak" <dawg dawgfoto.de> writes:
 So I propose instead a bit of a hack. toHash, opEquals, and opCmp as  
 struct members be automatically annotated with pure, nothrow, and @safe
 (if not already marked as @trusted).
How about complete inference instead of a hack?
Mar 12 2012
parent reply "Jonathan M Davis" <jmdavisProg gmx.com> writes:
On Monday, March 12, 2012 09:14:17 Martin Nowak wrote:
 So I propose instead a bit of a hack. toHash, opEquals, and opCmp as
 struct members be automatically annotated with pure, nothrow, and @safe
 (if not already marked as @trusted).
How about complete inference instead of a hack?
Because that requires having all of the source code. The fact that we have .di files prevents that. You'd have to be able to guarantee that you can always see the whole source (including the source of anything that the functions call) in order for attribute inference to work.

The only reason that we can do it with templates is because we _do_ always have their source, and the fact that non-templated functions must have the attributes in their signatures makes it so that the templated functions don't need their source in order to determine their own attributes.

The fact that we can't guarantee that all of the source is available when compiling a particular module seriously hampers any attempts at general attribute inference.

- Jonathan M Davis
Mar 12 2012
parent reply "Martin Nowak" <dawg dawgfoto.de> writes:
 Because that requires having all of the source code. The fact that we  
 have .di
 files prevents that.
It doesn't require all source code. It just means that without source code nothing can be inferred, and the attributes fall back to what has been annotated by hand. It could be used to annotate functions at the API level and have the compiler check that transitively. It should behave like implicit conversion to "pure nothrow ..." if the compiler hasn't found them inapplicable.

On the downside it has some implications for the compilation model, because functions would need to be analyzed transitively. But then again we already do this for CTFE.
Mar 12 2012
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 3/12/2012 1:56 PM, Martin Nowak wrote:
 It doesn't require all source code.
 It just means that without source code nothing can be inferred and the
 attributes fall back to what has been annotated by hand.
Hello endless bug reports of the form: "It compiles when I send the arguments to dmd this way but not that way. dmd is broken. D sux."
Mar 12 2012
next sibling parent Jacob Carlborg <doob me.com> writes:
On 2012-03-13 01:40, Walter Bright wrote:
 On 3/12/2012 1:56 PM, Martin Nowak wrote:
 It doesn't require all source code.
 It just means that without source code nothing can be inferred and the
 attributes fall back to what has been annotated by hand.
Hello endless bug reports of the form: "It compiles when I send the arguments to dmd this way but not that way. dmd is broken. D sux."
We already have that, sometimes :( -- /Jacob Carlborg
Mar 13 2012
prev sibling parent reply "Martin Nowak" <dawg dawgfoto.de> writes:
On Tue, 13 Mar 2012 01:40:08 +0100, Walter Bright  
<newshound2 digitalmars.com> wrote:

 On 3/12/2012 1:56 PM, Martin Nowak wrote:
 It doesn't require all source code.
 It just means that without source code nothing can be inferred and the
 attributes fall back to what has been annotated by hand.
Hello endless bug reports of the form: "It compiles when I send the arguments to dmd this way but not that way. dmd is broken. D sux."
Yeah, you're right. It would easily create confusing behavior.
Mar 13 2012
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 3/13/2012 4:50 AM, Martin Nowak wrote:
 Yeah, you're right. It would easily create confusing behavior.
In general, for modules a and b, all of these should work:

dmd a b

dmd b a

dmd -c a
dmd -c b
Mar 13 2012
parent "Martin Nowak" <dawg dawgfoto.de> writes:
 In general, for modules a and b, all of these should work:

 dmd a b

 dmd b a

 dmd -c a
 dmd -c b
For '-c' CTFE will already run semantic3 on the other module's functions. But it would be very inefficient to do that for attributes.
Mar 14 2012
prev sibling next sibling parent reply deadalnix <deadalnix gmail.com> writes:
Le 12/03/2012 00:54, Walter Bright a écrit :
 Consider the toHash() function for struct key types:

 http://dlang.org/hash-map.html

 And of course the others:

 const hash_t toHash();
 const bool opEquals(ref const KeyType s);
 const int opCmp(ref const KeyType s);

 They need to be, as well as const, pure nothrow @safe.

 The problem is:
 1. a lot of code must be retrofitted
 2. it's just plain annoying to annotate them

 It's the same problem as for Object.toHash(). That was addressed by
 making those attributes inheritable, but that won't work for struct ones.

 So I propose instead a bit of a hack. toHash, opEquals, and opCmp as
 struct members be automatically annotated with pure, nothrow, and @safe
 (if not already marked as @trusted).
I don't really see the point. For Objects, we inherit from Object, which can define these. For structs, we have inference, so most of the time the attributes will be correct.

const pure nothrow @safe are something we want, but is it something we want to enforce?
Mar 12 2012
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 3/12/2012 4:11 AM, deadalnix wrote:
 For struct, we have inference,
? No we don't.
 so most of the time attributes will correct.
 const pure nothrow @safe are something we want, but is it something we want to
 enforce ?
Yes, because they are referred to by TypeInfo, and that's fairly useless if it isn't const etc.
Mar 12 2012
parent reply deadalnix <deadalnix gmail.com> writes:
Le 13/03/2012 01:50, Walter Bright a écrit :
 On 3/12/2012 4:11 AM, deadalnix wrote:
 For struct, we have inference,
? No we don't.
OK, my mistake. So why not dig in that direction?
 so most of the time attributes will correct.
 const pure nothrow @safe are something we
 want to
 enforce ?
Yes, because they are referred to by TypeInfo, and that's fairly useless if it isn't const etc.
I always thought that TypeInfo is a poor substitute for metaprogramming and compile-time reflection.
Mar 13 2012
parent reply =?ISO-8859-1?Q?Alex_R=F8nne_Petersen?= <xtzgzorex gmail.com> writes:
On 13-03-2012 16:56, deadalnix wrote:
 Le 13/03/2012 01:50, Walter Bright a écrit :
 On 3/12/2012 4:11 AM, deadalnix wrote:
 For struct, we have inference,
? No we don't.
Ok my mistake. So why not dig in that direction ?
 so most of the time attributes will correct.
 const pure nothrow @safe are something we
 want to
 enforce ?
Yes, because they are referred to by TypeInfo, and that's fairly useless if it isn't const etc.
I always thought that TypeInfo is a poor substitute for metaprogramming and compile-time reflection.
Yes, and in some cases, it doesn't even work right; i.e. you can declare certain opCmp and opEquals signatures that work fine for ==, >, <, etc but don't get emitted to the TypeInfo metadata, and vice versa. It's a mess. -- - Alex
Mar 13 2012
parent "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Tue, 13 Mar 2012 12:03:22 -0400, Alex Rønne Petersen
<xtzgzorex gmail.com> wrote:

 Yes, and in some cases, it doesn't even work right; i.e. you can declare
 certain opCmp and opEquals signatures that work fine for ==, >, <, etc.
 but don't get emitted to the TypeInfo metadata, and vice versa. It's a
 mess.
See my post in this thread. It fixes this problem. -Steve
Mar 14 2012
prev sibling next sibling parent "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Sun, 11 Mar 2012 19:54:09 -0400, Walter Bright  
<newshound2 digitalmars.com> wrote:

 Consider the toHash() function for struct key types:

 http://dlang.org/hash-map.html

 And of course the others:

 const hash_t toHash();
 const bool opEquals(ref const KeyType s);
 const int opCmp(ref const KeyType s);

 They need to be, as well as const, pure nothrow @safe.

 The problem is:
 1. a lot of code must be retrofitted
 2. it's just plain annoying to annotate them

 It's the same problem as for Object.toHash(). That was addressed by  
 making those attributes inheritable, but that won't work for struct ones.

 So I propose instead a bit of a hack. toHash, opEquals, and opCmp as  
 struct members be automatically annotated with pure, nothrow, and @safe
 (if not already marked as @trusted).
What about a new attribute, @type (or a better name?), that means "this function is part of the TypeInfo interface, and has an equivalent xFuncname in TypeInfo_Struct"? It would then implicitly inherit all the attributes of that xFuncname (not necessarily defined by the compiler).

This gives us several benefits:

1. It triggers the compiler to complain if we don't correctly define the function (as specified in TypeInfo_Struct). In other words, it allows the developer to specify "I want this function to go into TypeInfo".

2. It potentially allows additional interface hooks without compiler modification. For example, you could add xfoo in TypeInfo_Struct, and then every struct that defines @type foo() would get a hook there.

3. As you wanted, it eliminates having to duplicate all the attributes.

The one large drawback is that you need to annotate all existing functions. We could potentially assume that @type is specified on the functions that currently enjoy automatic inclusion in the TypeInfo_Struct instance. I'd recommend eliminating this hack at some point, though.

-Steve
Mar 12 2012
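As a concrete illustration of the annotation burden under discussion, here is a minimal sketch (the `Key` struct and its field are invented for the example) of what a fully attributed AA key type looks like today, with const pure nothrow @safe spelled out by hand on every special member:

```d
// Sketch: an AA key struct with all special members annotated by hand.
// Under Walter's proposal, pure nothrow @safe would become implicit.
struct Key
{
    int id;

    size_t toHash() const pure nothrow @safe
    {
        return id;
    }

    bool opEquals(ref const Key s) const pure nothrow @safe
    {
        return id == s.id;
    }

    int opCmp(ref const Key s) const pure nothrow @safe
    {
        // bool arithmetic: yields -1, 0, or 1 without overflow
        return (id > s.id) - (id < s.id);
    }
}

void main()
{
    int[Key] aa;            // Key used as an associative array key
    aa[Key(7)] = 42;
    assert(Key(7) in aa);
    assert(aa[Key(7)] == 42);
}
```

Under the proposal, only the bodies would remain; the three attributes on each member would be applied automatically.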
prev sibling next sibling parent "bearophile" <bearophileHUGS lycos.com> writes:
Walter:

 toHash, opEquals, and opCmp as struct members be automatically 
 annotated with pure, nothrow, and @safe (if not already marked
 as @trusted).
I have read the other answers in this thread, and I don't like some of them. In this case I think this programmer convenience doesn't justify adding one more special case to D purity. So for me it's a -1. Bye, bearophile
Mar 12 2012
prev sibling next sibling parent reply "Jonathan M Davis" <jmdavisProg gmx.com> writes:
On Monday, March 12, 2012 11:04:54 H. S. Teoh wrote:
 Tangential note: writing unit tests may be tedious, but D's inline
 unittest syntax has alleviated a large part of that tedium. So much so
 that I find myself writing as much code in unittests as real code.
 Which is a good thing, because in the past I'd always been too lazy to
 write any unittests at all.
D doesn't make writing unit tests easy, since there's an intrinsic amount of effort required to write them, just like there is with any code, but it takes away all of the extraneous effort in having to set up a unit test framework and the like. And by removing pretty much anything from the effort which is not actually required, it makes writing unit tests about as easy as it can be. I believe that Walter likes to say that it takes away your excuse _not_ to write them because of how easy it is to write unit tests in D. - Jonathan M Davis
Mar 12 2012
parent Walter Bright <newshound2 digitalmars.com> writes:
On 3/12/2012 11:10 AM, Jonathan M Davis wrote:
 I believe that Walter likes to say that it takes away your excuse _not_ to
 write them because of how easy it is to write unit tests in D.
It can be remarkable how much more use something gets if you just make it a bit more convenient.
Mar 12 2012
prev sibling next sibling parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Mon, Mar 12, 2012 at 02:10:23PM -0400, Jonathan M Davis wrote:
 On Monday, March 12, 2012 11:04:54 H. S. Teoh wrote:
 Tangential note: writing unit tests may be tedious, but D's inline
 unittest syntax has alleviated a large part of that tedium. So much so
 that I find myself writing as much code in unittests as real code.
 Which is a good thing, because in the past I'd always been too lazy to
 write any unittests at all.
D doesn't make writing unit tests easy, since there's an intrinsic amount of effort required to write them, just like there is with any code, but it takes away all of the extraneous effort in having to set up a unit test framework and the like. And by removing pretty much anything from the effort which is not actually required, it makes writing unit testing about as easy as it can be.
I would argue that D *does* make unit tests easier to write, in that you can write them in straight D code inline (as opposed to some testing frameworks that require external stuff like Expect, Python, intermixed with native code), so you don't need to put what you're writing on hold while you go off and write unittests. You can just insert a unittest block after the function/class/etc immediately while the code is still fresh in your mind. I often find myself writing unittests simultaneously with real code, since while writing the code I see a possible boundary condition to test for, and immediately put that in a unittest to ensure I don't forget about it later. This improves the quality of both the code and the unittests.
 I believe that Walter likes to say that it takes away your excuse
 _not_ to write them because of how easy it is to write unit tests in
 D.
[...] Yep. They're so easy to write in D that I'd be embarrassed to *not* write them. T -- Famous last words: I *think* this will work...
Mar 12 2012
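The workflow described above, writing a unittest block immediately after the function while a boundary condition is fresh in mind, can be sketched like this (`countVowels` is an invented example):

```d
// Sketch of D's inline unittest blocks: the tests sit directly after
// the function they exercise, in plain D, with no external framework.
size_t countVowels(string s) pure nothrow @safe
{
    size_t n = 0;
    foreach (c; s)
    {
        switch (c)
        {
            case 'a', 'e', 'i', 'o', 'u':
                ++n;
                break;
            default:
                break;
        }
    }
    return n;
}

unittest
{
    assert(countVowels("hello") == 2);
    assert(countVowels("") == 0);   // boundary case noted while coding
    assert(countVowels("xyz") == 0);
}
```

Compiled with `-unittest`, the blocks run at program startup; without the flag they compile away entirely.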
prev sibling next sibling parent "Jonathan M Davis" <jmdavisProg gmx.com> writes:
On Monday, March 12, 2012 11:25:41 H. S. Teoh wrote:
 On Mon, Mar 12, 2012 at 02:10:23PM -0400, Jonathan M Davis wrote:
 On Monday, March 12, 2012 11:04:54 H. S. Teoh wrote:
 Tangential note: writing unit tests may be tedious, but D's inline
 unittest syntax has alleviated a large part of that tedium. So much so
 that I find myself writing as much code in unittests as real code.
 Which is a good thing, because in the past I'd always been too lazy to
 write any unittests at all.
D doesn't make writing unit tests easy, since there's an intrinsic amount of effort required to write them, just like there is with any code, but it takes away all of the extraneous effort in having to set up a unit test framework and the like. And by removing pretty much anything from the effort which is not actually required, it makes writing unit testing about as easy as it can be.
I would argue that D *does* make unit tests easier to write, in that you can write them in straight D code inline (as opposed to some testing frameworks that require external stuff like Expect, Python, intermixed with native code), so you don't need to put what you're writing on hold while you go off and write unittests. You can just insert a unittest block after the function/class/etc immediately while the code is still fresh in your mind. I often find myself writing unittests simultaneously with real code, since while writing the code I see a possible boundary condition to test for, and immediately put that in a unittest to ensure I don't forget about it later. This improves the quality of both the code and the unittests.
I didn't say that D doesn't make writing unit tests easier. I just said that it doesn't make them _easy_. They're as much work as writing any code is. But by making them easier, D makes them about as easy to write as they can be. Regardless, built-in unit testing is a fantastic feature. - Jonathan M Davis
Mar 12 2012
prev sibling parent reply Stewart Gordon <smjg_1998 yahoo.com> writes:
On 11/03/2012 23:54, Walter Bright wrote:
 Consider the toHash() function for struct key types:

 http://dlang.org/hash-map.html

 And of course the others:

 const hash_t toHash();
 const bool opEquals(ref const KeyType s);
 const int opCmp(ref const KeyType s);
<snip> And what about toString? Stewart.
Mar 12 2012
next sibling parent "Jonathan M Davis" <jmdavisProg gmx.com> writes:
On Tuesday, March 13, 2012 01:15:59 Stewart Gordon wrote:
 On 11/03/2012 23:54, Walter Bright wrote:
 Consider the toHash() function for struct key types:
 
 http://dlang.org/hash-map.html
 
 And of course the others:
 
 const hash_t toHash();
 const bool opEquals(ref const KeyType s);
 const int opCmp(ref const KeyType s);
<snip> And what about toString?
That really should be too, but work is probably going to have to be done to make sure that format and std.conv.to can be pure, since they're pretty much required in most toString functions. I believe that changes to toUTF8, toUTF16, and toUTF32 were made recently which are at least a major step in that direction for std.conv.to (since it uses them) and may outright make it so that it can be pure now (I'm not sure if anything else is preventing them from being pure). But I have no idea what the current state of format is with regards to purity, and if the changes to toUTFx weren't enough to make std.conv.to pure for strings, then more will need to be done there as well. - Jonathan M Davis
Mar 12 2012
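To illustrate why toString's purity hinges on format, here is a minimal sketch of a typical struct toString (`Frac` is an invented type): it forwards to format(), so it can only be annotated pure/nothrow once format itself is.

```d
import std.string : format;

// Sketch: a typical toString that delegates formatting to format().
// It cannot currently be marked pure or nothrow, because format() isn't.
struct Frac
{
    int num, den;

    string toString() const
    {
        return format("%d/%d", num, den);
    }
}

void main()
{
    assert(Frac(1, 2).toString() == "1/2");
}
```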
prev sibling next sibling parent reply bearophile <bearophileHUGS lycos.com> writes:
Stewart Gordon:

 And what about toString?
Often in toString I use format(), text(), or to!string(), which currently aren't pure nor nothrow. But in this thread I have seen no answers regarding deprecating the need for opCmp() for hashability. Bye, bearophile
Mar 12 2012
parent =?ISO-8859-1?Q?Alex_R=F8nne_Petersen?= <xtzgzorex gmail.com> writes:
On 13-03-2012 02:28, bearophile wrote:
 Stewart Gordon:

 And what about toString?
Often in toString I use format() or text(), or to!string(), that currently aren't pure nor nothrow. But in this thread I have seen no answers regarding deprecating the need of opCmp() for hashability. Bye, bearophile
I fully support that. -- - Alex
Mar 12 2012
prev sibling next sibling parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Mon, Mar 12, 2012 at 09:27:41PM -0400, Jonathan M Davis wrote:
 On Tuesday, March 13, 2012 01:15:59 Stewart Gordon wrote:
 On 11/03/2012 23:54, Walter Bright wrote:
 Consider the toHash() function for struct key types:
 
 http://dlang.org/hash-map.html
 
 And of course the others:
 
 const hash_t toHash();
 const bool opEquals(ref const KeyType s);
 const int opCmp(ref const KeyType s);
<snip> And what about toString?
That really should be too, but work is probably going to have to be done to make sure that format and std.conv.to can be pure, since they're pretty much required in most toString functions.
This is not going to be a choice, because some overrides of toHash call toString.
 I believe that changes to toUTF8, toUTF16, and toUTF32 were made
 recently which are at least a major step in that direction for
 std.conv.to (since it uses them) and may outright make it so that it
 can be pure now (I'm not sure if anything else is preventing them from
 being pure). But I have no idea what the current state of format is
 with regards to purity, and if the changes to toUTFx weren't enough to
 make std.conv.to pure for strings, then more will need to be done
 there as well.
[...] Looks like we need a massive one-shot overhaul of almost all of druntime and a potentially large part of phobos in order to get this pure/@safe/nothrow thing off the ground. There are just too many interdependencies everywhere; there's practically no way to do it incrementally. I tried the incremental approach several times and have given up, 'cos every small change inevitably propagates to functions all over druntime and then some. And I'm not talking about doing just toHash, or just toString either. These functions all have complex interdependencies with each other, so it's either fix them ALL, or not at all. T -- Obviously, some things aren't very obvious.
Mar 12 2012
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 3/12/2012 6:40 PM, H. S. Teoh wrote:
 And I'm not talking about doing just toHash, or just toString either.
 Any of these functions have complex interdependencies with each other,
 so it's either fix them ALL, or not at all.
Yup. It also seems very hard to figure out a transitional path to it.
Mar 12 2012
next sibling parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Mon, Mar 12, 2012 at 07:06:51PM -0700, Walter Bright wrote:
 On 3/12/2012 6:40 PM, H. S. Teoh wrote:
And I'm not talking about doing just toHash, or just toString either.
Any of these functions have complex interdependencies with each
other, so it's either fix them ALL, or not at all.
Yup. It also seems very hard to figure out a transitional path to it.
Perhaps we just need to bite the bullet and do it all in one shot and check it into master, then deal with the aftermath later. :-) Otherwise it will simply never get off the ground. T -- Two wrongs don't make a right; but three rights do make a left...
Mar 12 2012
parent =?ISO-8859-1?Q?Alex_R=F8nne_Petersen?= <xtzgzorex gmail.com> writes:
On 13-03-2012 03:15, H. S. Teoh wrote:
 On Mon, Mar 12, 2012 at 07:06:51PM -0700, Walter Bright wrote:
 On 3/12/2012 6:40 PM, H. S. Teoh wrote:
 And I'm not talking about doing just toHash, or just toString either.
 Any of these functions have complex interdependencies with each
 other, so it's either fix them ALL, or not at all.
Yup. It also seems very hard to figure out a transitional path to it.
Perhaps we just need to bite the bullet and do it all in one shot and check it into master, then deal with the aftermath later. :-) Otherwise it will simply never get off the ground. T
I have to say this seems like the most sensible approach right now. -- - Alex
Mar 12 2012
prev sibling next sibling parent Manu <turkeyman gmail.com> writes:
On 13 March 2012 04:15, H. S. Teoh <hsteoh quickfur.ath.cx> wrote:

 On Mon, Mar 12, 2012 at 07:06:51PM -0700, Walter Bright wrote:
 On 3/12/2012 6:40 PM, H. S. Teoh wrote:
And I'm not talking about doing just toHash, or just toString either.
Any of these functions have complex interdependencies with each
other, so it's either fix them ALL, or not at all.
Yup. It also seems very hard to figure out a transitional path to it.
Perhaps we just need to bite the bullet and do it all in one shot and check it into master, then deal with the aftermath later. :-) Otherwise it will simply never get off the ground.
MMMmmm, now we're talking! I've always preferred this approach :P
Mar 14 2012
prev sibling parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Mon, 12 Mar 2012 22:06:51 -0400, Walter Bright  
<newshound2 digitalmars.com> wrote:

 On 3/12/2012 6:40 PM, H. S. Teoh wrote:
 And I'm not talking about doing just toHash, or just toString either.
 Any of these functions have complex interdependencies with each other,
 so it's either fix them ALL, or not at all.
Yup. It also seems very hard to figure out a transitional path to it.
It seems most people have ignored my post in this thread, so I'll say it again: what about an annotation (I suggested @type; it could be anything, but I'll use that as my strawman) that says to the compiler "this is part of the TypeInfo_Struct interface"?

In essence, when @type is encountered, the compiler looks at TypeInfo_Struct (in object.di) for the equivalent xfuncname, then uses the attributes of that function pointer (and also the parameter types/count) to compile the given method.

It does two things:

1. It provides an indicator a user can use when he *wants* to include that function as part of the typeinfo. Right now, you have to guess, and pray to the compiler gods that your function signature is deemed worthy.

2. It takes all sorts of burden off the compiler to know which functions are "special", and to make assumptions about them. We can implement it now *without* making those function pointers pure/@safe/nothrow/whatever, and people can then experiment with changing it without having to muck with the compiler.

As a bonus, it also allows people to experiment with adding more interface methods to structs without having to muck with the compiler.

The only drawback is what to do with existing code that *doesn't* have @type on its functions that go into TypeInfo_Struct. There are ways to handle this. My suggestion is to simply treat the current methods as special and assume @type is on those methods. But I would suggest removing that "hack" in the future, with some way to easily tell you where you need to put @type annotations.

-Steve
Mar 14 2012
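A hypothetical sketch of how the proposed annotation might read (@type does not exist in D; none of this is valid code today, and the spelling of the attribute is only Steven's strawman):

```
// Hypothetical syntax only: @type declares "this method belongs to the
// TypeInfo_Struct interface", so the compiler matches it against the
// corresponding xtoHash member of TypeInfo_Struct in object.di and
// inherits that function pointer's attributes and signature checks.
struct S
{
    int id;

    @type size_t toHash()   // attributes inferred from TypeInfo_Struct.xtoHash
    {
        return id;
    }
}
```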
next sibling parent Dmitry Olshansky <dmitry.olsh gmail.com> writes:
On 14.03.2012 16:11, Steven Schveighoffer wrote:
 On Mon, 12 Mar 2012 22:06:51 -0400, Walter Bright
 <newshound2 digitalmars.com> wrote:

 On 3/12/2012 6:40 PM, H. S. Teoh wrote:
 And I'm not talking about doing just toHash, or just toString either.
 Any of these functions have complex interdependencies with each other,
 so it's either fix them ALL, or not at all.
Yup. It also seems very hard to figure out a transitional path to it.
It seems most people have ignored my post in this thread, so I'll say it again: what about an annotation (I suggested @type; it could be anything, but I'll use that as my strawman) that says to the compiler "this is part of the TypeInfo_Struct interface"? In essence, when @type is encountered, the compiler looks at TypeInfo_Struct (in object.di) for the equivalent xfuncname, then uses the attributes of that function pointer (and also the parameter types/count) to compile the given method. It does two things: 1. It provides an indicator a user can use when he *wants* to include that function as part of the typeinfo. Right now, you have to guess, and pray to the compiler gods that your function signature is deemed worthy. 2. It takes all sorts of burden off the compiler to know which functions are "special", and to make assumptions about them. We can implement it now *without* making those function pointers pure/@safe/nothrow/whatever, and people can then experiment with changing it without having to muck with the compiler. As a bonus, it also allows people to experiment with adding more interface methods to structs without having to muck with the compiler. The only drawback is what to do with existing code that *doesn't* have @type on its functions that go into TypeInfo_Struct. There are ways to handle this. My suggestion is to simply treat the current methods as special and assume @type is on those methods. But I would suggest removing that "hack" in the future, with some way to easily tell you where you need to put @type annotations.
For one, I'm sold on it. And the proposed magic hack can work right now; then it'll just get deprecated in favor of explicit @type. -- Dmitry Olshansky
Mar 14 2012
prev sibling parent reply "Martin Nowak" <dawg dawgfoto.de> writes:
 In essence, when @type is encountered the compiler looks at
 TypeInfo_Struct (in object.di) for the equivalent xfuncname.  Then uses  
 the attributes of that function pointer (and also the parameter  
 types/count) to compile the given method.
Why would you want to add explicit annotation for implicit TypeInfo_Struct methods? I think @type is a very interesting idea if combined with a string->method lookup in TypeInfo_Struct, but this wouldn't allow for static type checking. If you wanted static type checking, then @type could probably refer to interfaces.
Mar 14 2012
parent "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Wed, 14 Mar 2012 09:27:08 -0400, Martin Nowak <dawg dawgfoto.de> wrote:

 In essence, when @type is encountered the compiler looks at
 TypeInfo_Struct (in object.di) for the equivalent xfuncname.  Then uses  
 the attributes of that function pointer (and also the parameter  
 types/count) to compile the given method.
Why would you want to add explicit annotation for implicit TypeInfo_Struct methods?
Because right now, it's a guessing game as to whether you wanted an operation to be part of the typeinfo's interface. And many times, the compiler guesses wrong. I've seen countless posts on d.learn asking "why won't AAs call my opEquals or toHash function?" With explicit annotation, you have instructed the compiler "I expect this to be in TypeInfo," so it can take the appropriate action if it doesn't match.
 I think @type is a very interesting idea if combined with a
 string->method lookup in
 TypeInfo_Struct, but this wouldn't allow for static type checking.
Yes it would. It has access to TypeInfo_Struct in object.di, so it can figure out what the signature should be. -Steve
Mar 14 2012
prev sibling next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 3/12/2012 6:15 PM, Stewart Gordon wrote:
 And what about toString?
Good question. What do you suggest?
Mar 12 2012
next sibling parent reply bearophile <bearophileHUGS lycos.com> writes:
Walter:

 Good question. What do you suggest?
I suggest following a slow but reliable path, working bottom-up: first turn to!string()/text()/format() into pure+nothrow functions, and then later require toString to be pure+nothrow and to carry such annotations. Bye, bearophile
Mar 12 2012
parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Mon, Mar 12, 2012 at 10:58:18PM -0400, bearophile wrote:
 Walter:
 
 Good question. What do you suggest?
I suggest to follow a slow but reliable path, working bottom-up: turn to!string()/text()/format() into pure+nothrow functions, and then later require toString to be pure+nothrow and to have such annotations.
[...] The problem is that there may not *be* a bottom to start from. These functions are all interlinked to each other in various places (spread across a myriad of different overrides of them). I've tried to find one function that I can annotate without needing hundreds of other changes, but alas, they all depend on each other at some level, and every time I end up annotating almost every other function in druntime and the change just gets bigger and bigger. T -- Trying to define yourself is like trying to bite your own teeth. -- Alan Watts
Mar 12 2012
prev sibling parent reply Don Clugston <dac nospam.com> writes:
On 13/03/12 03:05, Walter Bright wrote:
 On 3/12/2012 6:15 PM, Stewart Gordon wrote:
 And what about toString?
Good question. What do you suggest?
Why can't we just kill that abomination?
Mar 13 2012
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 3/13/2012 4:15 AM, Don Clugston wrote:
 On 13/03/12 03:05, Walter Bright wrote:
 On 3/12/2012 6:15 PM, Stewart Gordon wrote:
 And what about toString?
Good question. What do you suggest?
Why can't we just kill that abomination?
Break a lot of existing code?
Mar 13 2012
next sibling parent bearophile <bearophileHUGS lycos.com> writes:
Walter:

 Break a lot of existing code?
Invent a good deprecation strategy for toString? The idea of modifying toString isn't new; we have discussed it more than once, and I agree with the general strategy Don appreciates. As far as I know you didn't do much on it, mostly because there were other more important things to do, like fixing important bugs, not because people thought the situation was good enough. Maybe now is a good moment to slow down bug fixing and look for things more important than bugs (where "important" means "can't be done much later"). Bye, bearophile
Mar 13 2012
prev sibling next sibling parent Dmitry Olshansky <dmitry.olsh gmail.com> writes:
On 14.03.2012 3:23, Walter Bright wrote:
 On 3/13/2012 4:15 AM, Don Clugston wrote:
 On 13/03/12 03:05, Walter Bright wrote:
 On 3/12/2012 6:15 PM, Stewart Gordon wrote:
 And what about toString?
Good question. What do you suggest?
Why can't we just kill that abomination?
Break a lot of existing code?
And gain efficiency. BTW, transition paths were suggested; we just need to dig up the DIP9 discussions. -- Dmitry Olshansky
Mar 14 2012
prev sibling parent "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Tue, 13 Mar 2012 19:23:25 -0400, Walter Bright  
<newshound2 digitalmars.com> wrote:

 On 3/13/2012 4:15 AM, Don Clugston wrote:
 On 13/03/12 03:05, Walter Bright wrote:
 On 3/12/2012 6:15 PM, Stewart Gordon wrote:
 And what about toString?
Good question. What do you suggest?
Why can't we just kill that abomination?
Break a lot of existing code?
I'm unaware of much code that uses TypeInfo.xtoString to print anything. write[f][ln] doesn't, and I don't think std.conv.to does either. In other words, killing the "specialness" of toString doesn't mean killing toString methods in all structs. What this does is allow us to not worry about what you annotate your toString methods with; it just becomes a regular method. -Steve
Mar 14 2012
prev sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 3/12/12 8:15 PM, Stewart Gordon wrote:
 On 11/03/2012 23:54, Walter Bright wrote:
 Consider the toHash() function for struct key types:

 http://dlang.org/hash-map.html

 And of course the others:

 const hash_t toHash();
 const bool opEquals(ref const KeyType s);
 const int opCmp(ref const KeyType s);
<snip> And what about toString?
I think the three others have a special regime because pointers to them must be saved for the sake of associative arrays. toString is used only generically. Andrei
Mar 12 2012
parent "Martin Nowak" <dawg dawgfoto.de> writes:
On Tue, 13 Mar 2012 04:40:01 +0100, Andrei Alexandrescu  
<SeeWebsiteForEmail erdani.org> wrote:

 I think the three others have a special regime because pointers to them  
 must be saved for the sake of associative arrays. toString is used only  
 generically,
  Andrei
Adding a special case for AAs is not a good idea, but these operators are indeed special and should have defined behavior. Requiring pureness for comparison, for example, is good for all kinds of generic algorithms.
Mar 13 2012