
digitalmars.D - Possible way to achieve lazy loading with const objects

reply Jonathan M Davis <jmdavisProg gmx.com> writes:
Okay. I'm not saying that we should necessarily implement this. I'm just 
looking to air out an idea here and see if there are any technical reasons why 
it can't be done or is unreasonable.

Some programmers have expressed annoyance and/or disappointment that there is 
no logical const of any kind in D. They generally seem to be trying to do one 
of two things - caching return values in member functions or lazily loading 
the values of member variables. I really don't know how we could possibly do 
caching with const, but I _do_ have an idea of how we could implement lazy 
loading. Here's what it looks like syntactically:

struct S
{
    lazy T var = func();
}

The lazy indicates that var is going to be lazily loaded, and func returns the 
value that var will be initialized with. However, instead of being a normal 
variable of type T, this is what happens to var:

1. Instead of a member variable of type T, S gets a bool (e.g. __varLoaded) 
and a variable of type T (e.g. __var).

2. __varLoaded is default-initialized to false, and __var is void (so, 
garbage).

3. Every reference to var is replaced with a call to a getter property 
function (e.g. __varProp). There is no setter property.

4. __varProp looks something like this:

T __varProp()
{
    if(!__varLoaded)
    {
        __var = func();
        __varLoaded = true;
    }

    return __var;
}

5. __varProp may or may not be inlined (though it would be nice if it were).

6. If the S being constructed is shared or immutable and __varProp is not 
called in the constructor, then __varProp is called immediately after the 
constructor (or at the end of the constructor if that works better for the 
compiler).

7. An opCast is added to S for shared S and immutable S which calls __varProp 
- or if such an opCast already exists, the call to __varProp is added at the 
end of it.


The result of all of this is that the value of var is constant, but it isn't 
calculated until it's asked for. It doesn't break const at all, since the 
compiler can guarantee that altering the value of __varLoaded and __var is 
safe. And since the value is eagerly loaded in the case of immutable and 
shared, immutable and shared don't cause any problems.
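
To make the mechanics concrete, here is roughly what the lowering would look 
like if you wrote it out by hand today (just an illustrative sketch with 
placeholder names, not the proposal itself - and note that without compiler 
support the getter cannot be const, which is precisely the gap this idea is 
meant to fill):

struct S
{
    private bool _varLoaded = false; // default-initialized to false
    private int _var = void;         // deliberately left as garbage

    // The generated getter: the first use triggers the one-time load.
    @property int var()
    {
        if (!_varLoaded)
        {
            _var = func(); // func() is whatever initializer the user wrote
            _varLoaded = true;
        }
        return _var;
    }
}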

So, the question is: Does this work? And if not, why? And if it _does_ work, 
is it a good idea? And if not, why?

Again, I'm not necessarily suggesting that we implement this right now, but it 
at least _seems_ like a viable solution for introducing lazy loading in const 
objects, and I'd like to know whether there's a good possibility that it will 
actually work to implement something like this in the compiler. If it _is_ 
feasible, and we want to actually do it, since it's backwards compatible (as 
far as I can tell anyway), we can implement it at some point in the future 
when D has stabilized more, but I thought that the idea was at least worth 
discussing.

I'm not at all convinced that the added complexity to the language and to the 
compiler is worth the gain, but there are a number of programmers who want 
some sort of lazy-loading ability for member variables in const objects, and 
this seems to provide that.

- Jonathan M Davis
Sep 23 2011
next sibling parent Jacob Carlborg <doob me.com> writes:
On 2011-09-24 06:11, Jonathan M Davis wrote:
 [...]
I like it. -- /Jacob Carlborg
Sep 24 2011
prev sibling next sibling parent reply Peter Alexander <peter.alexander.au gmail.com> writes:
On 24/09/11 5:11 AM, Jonathan M Davis wrote:
 [...]
Lazy loading and caching are the same thing.

struct Foo
{
    T m_lazyObj = null;
    bool m_isLoaded = false;

    T getObj()
    {
        if (!m_isLoaded)
        {
            m_lazyObj = func();
            m_isLoaded = true;
        }
        return m_lazyObj;
    }
}

Change m_lazyObj to m_cachedObj and m_isLoaded to m_isCached and you have 
caching.

A problem with your suggestion is that it can only handle one load. It's very 
often the case that you need to mark the object as 'dirty' and set the flag 
back to false so that it gets reloaded.

I'm happy to not have logical const in D provided that the Object interface 
(and other similar interfaces) don't require that opEquals is const or any 
nonsense like that. const means physical const, and opEquals should not 
require physical const.

IMO const/immutable should *only* be used when you need to pass things 
between threads, i.e. when you *really do* need physical const. If people 
start using const like you would in C++ then every interface just becomes 
unnecessarily restrictive.
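
To spell out the 'dirty' variant (a sketch with hypothetical names): the only 
addition over the one-shot lazy version is a way to invalidate the flag, and 
that extra mutation is exactly what rules const out.

struct Cache
{
    private int _cached;
    private bool _isCached = false;

    private int compute() { return 42; } // stand-in for the expensive call

    int get()
    {
        if (!_isCached)
        {
            _cached = compute();
            _isCached = true;
        }
        return _cached;
    }

    // Invalidate, so the next get() recomputes. This cannot exist under const.
    void markDirty() { _isCached = false; }
}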
Sep 24 2011
next sibling parent reply Jonathan M Davis <jmdavisProg gmx.com> writes:
On Saturday, September 24, 2011 12:19:33 Peter Alexander wrote:
 Lazy loading and caching are the same thing.
No. Caching is more general. Lazy loading is explicitly one load and has different characteristics, whereas caching can have multiple loads.
 [...]
 A problem with your suggestion is that it can only handle one load. It's
 very often the case that you need to mark the object as 'dirty' and set
 the flag back to false so that it gets reloaded.
The problem with this is that with a single load, you can guarantee constness, 
but once you have multiple, you can't. Unless you can find a mechanism which 
_guarantees_ logical constness while changing the variable multiple times, 
then you can't guarantee constness, and it doesn't work. Also, whatever the 
mechanism is, it _has_ to work in the face of immutable, which may or may not 
be harder with a more general caching mechanism.

This particular solution isn't even _trying_ to solve the caching problem. I 
seriously question that that's at all solvable without breaking const. The 
whole reason that this could work is because it does the load only once, which 
therefore guarantees that the value never changes and therefore never breaks 
const. This is only attempting to solve the lazy-loading issue, not the 
general caching issue.

If you can come up with a way to do general caching without breaking const, 
then that's great. But all I've been able to come up with is a potential means 
of doing lazy loading.

But really, my real question here is whether this scheme is really feasible or 
whether I've missed something. If it's feasible, then _maybe_ it's possible to 
build on it somehow to come up with a general caching mechanism (though I 
doubt it), but if it's _not_, then obviously this sort of approach isn't going 
to be able to do a single lazy load, let alone be expanded somehow to make 
general caching possible.

- Jonathan M Davis
Sep 24 2011
parent reply Peter Alexander <peter.alexander.au gmail.com> writes:
On 24/09/11 12:47 PM, Jonathan M Davis wrote:
 On Saturday, September 24, 2011 12:19:33 Peter Alexander wrote:
 Lazy loading and caching are the same thing.
No. Caching is more general. Lazy loading is explicitly one load and has different characteristics, whereas is caching can have multiple loads.
Yes, you are right. Sorry.
 [...]
I think your approach to lazy loading works. However, I'm not sure I see the point of just allowing one kind of enforced logical const. Logical const can't be enforced in general, and I think in this case a partial attempt is as good as no attempt.
Sep 24 2011
parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 09/24/2011 07:21 PM, Peter Alexander wrote:
 [...]
I think your approach to lazy loading works. However, I'm not sure I see the point of just allowing one kind of enforced logical const. Logical const can't be enforced in general, and I think in this case a partial attempt is as good as no attempt.
I disagree. Lazy values are useful for many purposes.
Sep 24 2011
parent Peter Alexander <peter.alexander.au gmail.com> writes:
On 24/09/11 6:53 PM, Timon Gehr wrote:
 [...]
I think your approach to lazy loading works. However, I'm not sure I see the point of just allowing one kind of enforced logical const. Logical const can't be enforced in general, and I think in this case a partial attempt is as good as no attempt.
I disagree. Lazy values are useful for many purposes.
Sounds like we agree to me :-)
Sep 25 2011
prev sibling next sibling parent reply "Simen Kjaeraas" <simen.kjaras gmail.com> writes:
On Sat, 24 Sep 2011 13:19:33 +0200, Peter Alexander  
<peter.alexander.au gmail.com> wrote:

 Lazy loading and caching are the same thing.

 struct Foo
 {
      T m_lazyObj = null;
      bool m_isLoaded = false;

      T getObj()
      {
          if (!m_isLoaded)
          {
              m_lazyObj = func();
              m_isLoaded = true;
          }
          return m_lazyObj;
      }
 }

 Change m_lazyObj to m_cachedObj and m_isLoaded to m_isCached and you  
 have caching.
This is too simple. If the system also rewrites all members to properties 
that set m_isLoaded = true, it might work. Example:

struct S {
    int n;
    lazy int otherN = () { return n + 2; }; // compile time error here if you refer to lazily initialized members.
}

=>

struct S {
    int __n;
    int __otherN;
    bool __otherNloaded = false;

    @property int n() {
        return __n;
    }

    @property int n(int value) {
        if (__n != value) {
            __n = value;
            __otherNloaded = false;
        }
        return __n;
    }

    @property int otherN() {
        if (!__otherNloaded) {
            __otherN = () { return n + 2; }();
            __otherNloaded = true;
        }
        return __otherN;
    }
}

Now, the overhead might be troublesome, but this seems to me to work.

-- Simen
Sep 25 2011
next sibling parent reply "Simen Kjaeraas" <simen.kjaras gmail.com> writes:
On Mon, 26 Sep 2011 02:02:06 +0200, Simen Kjaeraas  
<simen.kjaras gmail.com> wrote:

 [...]
Oh, and of course more stuff is needed if you try to make this work across threads.

-- Simen
Sep 25 2011
parent reply Jonathan M Davis <jmdavisProg gmx.com> writes:
On Monday, September 26, 2011 02:03:23 Simen Kjaeraas wrote:
 On Mon, 26 Sep 2011 02:02:06 +0200, Simen Kjaeraas
 
 <simen.kjaras gmail.com> wrote:
 [...]
Oh, and of course more stuff is needed if you try to make this work across threads.
The threading issue is exactly why I suggested that objects which are constructed as shared automatically have the property called at the end of the constructor (to force initialization) and that an opCast to shared should be added which does the same (or if there's already one, then the call to the property function is added at the end of it). That should completely eliminate the threading issues. - Jonathan M Davis
Sep 25 2011
parent reply "Simen Kjaeraas" <simen.kjaras gmail.com> writes:
On Mon, 26 Sep 2011 02:19:29 +0200, Jonathan M Davis <jmdavisProg gmx.com>  
wrote:

 Oh, and of course more stuff is needed if you try to make this work across
 threads.
The threading issue is exactly why I suggested that objects which are constructed as shared automatically have the property called at the end of the constructor (to force initialization) and that an opCast to shared should be added which does the same (or if there's already one, then the call to the property function is added at the end of it). You should completely eliminate the threading issues that way.
True. I just chose to ignore that out of stupidity. :p -- Simen
Sep 26 2011
parent "Simen Kjaeraas" <simen.kjaras gmail.com> writes:
On Mon, 26 Sep 2011 16:44:29 +0200, Simen Kjaeraas  
<simen.kjaras gmail.com> wrote:

 On Mon, 26 Sep 2011 02:19:29 +0200, Jonathan M Davis  
 <jmdavisProg gmx.com> wrote:

 Oh, and of course more stuff is needed if you try to make this work across
 threads.
The threading issue is exactly why I suggested that objects which are constructed as shared automatically have the property called at the end of the constructor (to force initialization) and that an opCast to shared should be added which does the same (or if there's already one, then the call to the property function is added at the end of it). You should completely eliminate the threading issues that way.
True. I just chose to ignore that out of stupidity. :p
Meant my own, in case it wasn't clear. -- Simen
Sep 26 2011
prev sibling parent reply Jonathan M Davis <jmdavisProg gmx.com> writes:
On Monday, September 26, 2011 02:02:06 Simen Kjaeraas wrote:
 On Sat, 24 Sep 2011 13:19:33 +0200, Peter Alexander
 
 <peter.alexander.au gmail.com> wrote:
 [...]
 This is too simple. If the system also rewrites all members to properties
 that set m_isLoaded = true, it might work. Example:

 struct S {
     int n;
     lazy int otherN = () { return n + 2; }; // compile time error here if you refer to lazily initialized members.
 }
Why would that be a compile time error? otherN won't be initialized until after n is. I suppose that it would be a problem if you tried to use otherN in the constructor, but if that's the case, why lazily load it? We could simply disallow the use of lazy member variables in constructors - though then you'd have to worry about whether a lazy member variable were used in any functions which the constructor called, so you'd probably have to disallow calling other member functions inside of the constructors of objects with lazy member variables. So, it does start to get a bit restrictive.
 =>

 struct S {
     int __n;
     int __otherN;
     bool __otherNloaded = false;

     @property int n() {
         return __n;
     }

     @property int n(int value) {
         if (__n != value) {
             __n = value;
             __otherNloaded = false;
         }
         return __n;
     }

     @property int otherN() {
         if (!__otherNloaded) {
             __otherN = () { return n + 2; }();
             __otherNloaded = true;
         }
         return __otherN;
     }
 }

 Now, the overhead might be troublesome, but this seems to me to work.
That doesn't scale, and the compiler would have to know that it needs to do that to n because the function initializing otherN used n. I just don't think that that's tenable. So, it may be that further restrictions would have to be put on the initializing function for this to work, and if we do that, the idea could quickly become too limited to be very useful. Obviously, some more details need to be ironed out.

- Jonathan M Davis
Sep 25 2011
parent "Simen Kjaeraas" <simen.kjaras gmail.com> writes:
On Mon, 26 Sep 2011 02:18:30 +0200, Jonathan M Davis <jmdavisProg gmx.com>  
wrote:

 [...]
Why would that be a compile time error? otherN won't be initialized until after n is. I suppose that it would be a problem if you tried to use otherN in the constructor, but if that's the case, why lazily load it? We could simply disallow the use of lazy member variables in constructors - though then you'd have to worry about whether a lazy member variable were used in any functions which the constructor called, so you'd probably have to disallow calling other member functions inside of the constructors of objects with lazy member variables. So, it does start to get a bit restrictive.
I might have been a bit unclear here. This is allowed:

struct S {
    int n;
    lazy int otherN = (){ return n + 2; };
}

This is not:

struct S2 {
    int n;
    lazy int otherN = (){ return n + 2; };
    lazy int otherOtherN = (){ return otherN + 2; }; // Error!
}

(I forgot to specify this.) The initializing function needs to be pure and 
depend only on non-logical const members (otherwise it may break the const 
guarantee, by e.g. being dependent upon its own previous value). Conceivably 
it could depend on other logconst members, but only if these dependencies form 
no loops (which would logically be infinite).
 Now, the overhead might be troublesome, but this seems to me to work.
 That doesn't scale, and the compiler would have to know that it needs to do that to n because the function initializing otherN used n.
That's why I said "rewrites all members". What really makes this untenable is 
nested structs. Those would of course also have to be rewritten so that 
changing one of their members would dirty the outer struct. It should come as 
no surprise that this is unpossible.

An obvious optimization is, as you say, to only rewrite the members that 
actually do influence the value of the logconst member.

-- Simen
Sep 26 2011
prev sibling parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Sat, 24 Sep 2011 07:19:33 -0400, Peter Alexander  
<peter.alexander.au gmail.com> wrote:

 I'm happy to not have logical const in D provided that the Object  
 interface (and other similar interfaces) don't require that opEquals is  
 const or any nonsense like that. const means physical const, and  
 opEquals should not require physical const.

 IMO const/immutable should *only* be used when you need to pass things  
 between threads i.e. when you *really do* need physical const. If people  
 start using const like you would in C++ then every interface just  
 becomes unnecessarily restrictive.
FYI, this is a bug, not a feature. http://d.puremagic.com/issues/show_bug.cgi?id=1824 It *will* be fixed eventually. The fact that opEquals is not const is a huge problem. -Steve
Sep 26 2011
next sibling parent reply travert phare.normalesup.org (Christophe) writes:
"Steven Schveighoffer" , dans le message (digitalmars.D:145415), a
 écrit :
 On Sat, 24 Sep 2011 07:19:33 -0400, Peter Alexander  
 <peter.alexander.au gmail.com> wrote:
 
 I'm happy to not have logical const in D provided that the Object  
 interface (and other similar interfaces) don't require that opEquals is  
 const or any nonsense like that. const means physical const, and  
 opEquals should not require physical const.

 IMO const/immutable should *only* be used when you need to pass things  
 between threads i.e. when you *really do* need physical const. If people  
 start using const like you would in C++ then every interface just  
 becomes unnecessarily restrictive.
FYI, this is a bug, not a feature. http://d.puremagic.com/issues/show_bug.cgi?id=1824 It *will* be fixed eventually. The fact that opEquals is not const is a huge problem. -Steve
Why would it be such a huge problem, as long as there are both non-const and const overloads?
Sep 28 2011
parent "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Wed, 28 Sep 2011 06:22:14 -0400, Christophe

<travert phare.normalesup.org> wrote:

 "Steven Schveighoffer" , dans le message (digitalmars.D:145415), a
  écrit :
 On Sat, 24 Sep 2011 07:19:33 -0400, Peter Alexander
 <peter.alexander.au gmail.com> wrote:

 I'm happy to not have logical const in D provided that the Object
 interface (and other similar interfaces) don't require that opEquals
 is const or any nonsense like that. const means physical const, and
 opEquals should not require physical const.

 IMO const/immutable should *only* be used when you need to pass
 things between threads i.e. when you *really do* need physical const.
 If people start using const like you would in C++ then every
 interface just becomes unnecessarily restrictive.
FYI, this is a bug, not a feature. http://d.puremagic.com/issues/show_bug.cgi?id=1824 It *will* be fixed eventually. The fact that opEquals is not const is a huge problem. -Steve
Why would it be such a huge problem, as long as there are both non-const and const overloads?
Having multiple overloads is not good either. Then you have to overload both to have pretty much the same code. Also note that the way the compiler compares objects is not conducive to multiple overloads. const covers all three constancies (mutable, const, immutable), why is that one overload not enough? One aspect which will be interesting to tackle (if desired at all) is comparing shared objects. We *would* need another overload for that. -Steve
Sep 28 2011
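Editorial illustration (not in the original thread): the duplication Steven objects to is the classic const/non-const overload pair. A C++ sketch with a hypothetical type, where the non-const overload exists only to forward - boilerplate every type would repeat unless the language forwards by default:

```cpp
struct Widget {
    int id;
    // the const overload carries the real logic
    bool equals(const Widget& other) const { return id == other.id; }
    // the non-const overload only forwards to the const one; this is the
    // repetition that a forward-by-default rule would eliminate
    bool equals(const Widget& other) {
        return static_cast<const Widget&>(*this).equals(other);
    }
};
```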
prev sibling parent reply Peter Alexander <peter.alexander.au gmail.com> writes:
On 26/09/11 12:52 PM, Steven Schveighoffer wrote:
 On Sat, 24 Sep 2011 07:19:33 -0400, Peter Alexander
 <peter.alexander.au gmail.com> wrote:

 I'm happy to not have logical const in D provided that the Object
 interface (and other similar interfaces) don't require that opEquals
 is const or any nonsense like that. const means physical const, and
 opEquals should not require physical const.

 IMO const/immutable should *only* be used when you need to pass things
 between threads i.e. when you *really do* need physical const. If
 people start using const like you would in C++ then every interface
 just becomes unnecessarily restrictive.
FYI, this is a bug, not a feature. http://d.puremagic.com/issues/show_bug.cgi?id=1824 It *will* be fixed eventually. The fact that opEquals is not const is a huge problem. -Steve
I was arguing that opEquals (and co.) should *not* be const. IMO it would be a huge problem if they were. Andrei says that it will (in a way) be both, so I'm happy with that.
Sep 28 2011
parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Wed, 28 Sep 2011 19:21:33 -0400, Peter Alexander  
<peter.alexander.au gmail.com> wrote:

 On 26/09/11 12:52 PM, Steven Schveighoffer wrote:
 On Sat, 24 Sep 2011 07:19:33 -0400, Peter Alexander
 <peter.alexander.au gmail.com> wrote:

 I'm happy to not have logical const in D provided that the Object
 interface (and other similar interfaces) don't require that opEquals
 is const or any nonsense like that. const means physical const, and
 opEquals should not require physical const.

 IMO const/immutable should *only* be used when you need to pass things
 between threads i.e. when you *really do* need physical const. If
 people start using const like you would in C++ then every interface
 just becomes unnecessarily restrictive.
FYI, this is a bug, not a feature. http://d.puremagic.com/issues/show_bug.cgi?id=1824 It *will* be fixed eventually. The fact that opEquals is not const is a huge problem. -Steve
I was arguing that opEquals (and co.) should *not* be const. IMO it would be a huge problem if they were.
why? For what purpose do you need to change an object during comparison?
 Andrei says that it will (in a way) be both, so I'm happy with that.
I haven't seen that statement. -Steve
Sep 29 2011
next sibling parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 09/29/2011 01:33 PM, Steven Schveighoffer wrote:
 On Wed, 28 Sep 2011 19:21:33 -0400, Peter Alexander
 <peter.alexander.au gmail.com> wrote:

 On 26/09/11 12:52 PM, Steven Schveighoffer wrote:
 On Sat, 24 Sep 2011 07:19:33 -0400, Peter Alexander
 <peter.alexander.au gmail.com> wrote:

 I'm happy to not have logical const in D provided that the Object
 interface (and other similar interfaces) don't require that opEquals
 is const or any nonsense like that. const means physical const, and
 opEquals should not require physical const.

 IMO const/immutable should *only* be used when you need to pass things
 between threads i.e. when you *really do* need physical const. If
 people start using const like you would in C++ then every interface
 just becomes unnecessarily restrictive.
FYI, this is a bug, not a feature. http://d.puremagic.com/issues/show_bug.cgi?id=1824 It *will* be fixed eventually. The fact that opEquals is not const is a huge problem. -Steve
I was arguing that opEquals (and co.) should *not* be const. IMO it would be a huge problem if they were.
why? For what purpose do you need to change an object during comparison?
caching computations / calling logically const member functions.
Sep 29 2011
parent "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Thu, 29 Sep 2011 07:52:21 -0400, Timon Gehr <timon.gehr gmx.ch> wrote:

 On 09/29/2011 01:33 PM, Steven Schveighoffer wrote:
 On Wed, 28 Sep 2011 19:21:33 -0400, Peter Alexander
 <peter.alexander.au gmail.com> wrote:

 On 26/09/11 12:52 PM, Steven Schveighoffer wrote:
 On Sat, 24 Sep 2011 07:19:33 -0400, Peter Alexander
 <peter.alexander.au gmail.com> wrote:

 I'm happy to not have logical const in D provided that the Object
 interface (and other similar interfaces) don't require that opEquals
 is const or any nonsense like that. const means physical const, and
 opEquals should not require physical const.

 IMO const/immutable should *only* be used when you need to pass  
 things
 between threads i.e. when you *really do* need physical const. If
 people start using const like you would in C++ then every interface
 just becomes unnecessarily restrictive.
FYI, this is a bug, not a feature. http://d.puremagic.com/issues/show_bug.cgi?id=1824 It *will* be fixed eventually. The fact that opEquals is not const is a huge problem. -Steve
I was arguing that opEquals (and co.) should *not* be const. IMO it would be a huge problem if they were.
why? For what purpose do you need to change an object during comparison?
caching computations / calling logically const member functions.
Caching computations is an optimization; it's not necessary. And if logical const were supported, those functions would be const, so opEquals could be const too. But I'm still not convinced that equality comparison is so complex that we need either of these. Do you have any real-world examples? -Steve
Sep 29 2011
prev sibling next sibling parent reply travert phare.normalesup.org (Christophe) writes:
"Steven Schveighoffer" , dans le message (digitalmars.D:145729), a
 I was arguing that opEquals (and co.) should *not* be const. IMO it  
 would be a huge problem if they were.
why? For what purpose do you need to change an object during comparison?
You may not need to change the object, but you may need to call a non-const method. It has been argued, against the opponents of transitive const, that they are not obliged to use const. Then opEquals should not oblige them to use const. const is so viral in D that people not willing to use const will have to change lines and lines of code to get opEquals working. It is always possible for a non-const version of opEquals to forward to the const version, so people willing to use a const version do not have to define a non-const version. -- Christophe
Sep 29 2011
parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Thu, 29 Sep 2011 09:32:06 -0400, Christophe  
<travert phare.normalesup.org> wrote:

 "Steven Schveighoffer" , dans le message (digitalmars.D:145729), a
 I was arguing that opEquals (and co.) should *not* be const. IMO it
 would be a huge problem if they were.
why? For what purpose do you need to change an object during comparison?
You may not need to change the object, but you may need to call a non-const method. It has been argued against the opponent to transitive const that they are not obliged to use const. Then opEqual should not oblige them to use const. const is so viral in D that people not willing to use const will have to change lines and lines of code to get opEqual working.
The argument that you are not obliged to use const is very hollow, especially when it comes to Object. The reality is, whatever Object uses, you must use too. There are no choices. Either Object is const-aware and all your derivatives must be, or Object does not use const, and all your derivatives must not. inout should make const much more palatable when it gets implemented. But I think the right move is to make Object const-aware. For structs, I think you should be able to use whatever you want; there is no base interface to implement.
 It is always possible for a non-const version of opEqual to forward to
 the const version, so people willing to use a const version do not have
 to define a non-const version.
Again, you still need to define both, this is not a good situation. -Steve
Sep 29 2011
parent reply travert phare.normalesup.org (Christophe) writes:
"Steven Schveighoffer" , dans le message (digitalmars.D:145738), a
 It is always possible for a non-const version of opEqual to forward to
 the const version, so people willing to use a const version do not have
 to define a non-const version.
Again, you still need to define both, this is not a good situation.
No, I didn't express myself correctly. The non-const version should forward to the const version *by default*.
Sep 29 2011
parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Thu, 29 Sep 2011 11:09:13 -0400, Christophe  
<travert phare.normalesup.org> wrote:

 "Steven Schveighoffer" , dans le message (digitalmars.D:145738), a
 It is always possible for a non-const version of opEqual to forward to
 the const version, so people willing to use a const version do not have
 to define a non-const version.
Again, you still need to define both, this is not a good situation.
No, I didn't express myself correctly. The non-const version should forward to the const version *by default*.
Fine, but if you want to define a non-const version that *doesn't* call the const version, you have to define both. So even if you *don't* want to deal with const, you still do. I should have been clearer, sorry. Note that the compiler currently calls a global method which accepts two non-const Objects. In order for it to support both const and mutable versions, it would have to have 4 different functions. I really don't think all this complexity is worth the benefit. Just learn to use const properly, or don't use the operator system to do comparisons. Object.opEquals should be const. -Steve
Sep 29 2011
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 9/29/11 8:38 AM, Steven Schveighoffer wrote:
 On Thu, 29 Sep 2011 11:09:13 -0400, Christophe
 <travert phare.normalesup.org> wrote:

 "Steven Schveighoffer" , dans le message (digitalmars.D:145738), a
 It is always possible for a non-const version of opEqual to forward to
 the const version, so people willing to use a const version do not have
 to define a non-const version.
Again, you still need to define both, this is not a good situation.
No, I didn't express myself correctly. The non-const version should forward to the const version *by default*.
Fine, but if you want to define a non-const version that *doesn't* call the const version, you have to define both. So even if you *don't* want to deal with const, you still do. I should have been clearer, sorry. Note that the compiler currently calls a global method which accepts two non-const Objects. In order for it to support both const and mutable versions, it would have to have 4 different functions. I really don't think all this complexity is worth the benefit. Just learn to use const properly, or don't use the operator system to do comparisons. Object.opEquals should be const.
If we make this change we're liable to break all code that defines opEquals for classes. Two versions should be enough: const/const and mutable/mutable, which by default forwards to const/const. Old code will run unchanged at a slight efficiency cost due to forwarding. Since it didn't previously work for const anyway, no harm done. New code gets to only override the const/const version. Where is this wrong? Andrei
Sep 29 2011
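Editorial illustration (not in the original thread): Andrei's two-version scheme, sketched in C++ with virtual dispatch and hypothetical names (D's Object and free-function dispatch work differently). The mutable/mutable version forwards to const/const by default, so new code only overrides the const version:

```cpp
struct Object {
    virtual ~Object() {}
    // new code overrides this one; default checks identity
    virtual bool opEquals(const Object& o) const { return this == &o; }
    // default mutable/mutable version just forwards to const/const
    virtual bool opEquals(Object& o) {
        return static_cast<const Object&>(*this).opEquals(o);
    }
};

struct Point : Object {
    int x;
    explicit Point(int x_) : x(x_) {}
    // only the const/const version is overridden
    bool opEquals(const Object& o) const override {
        const Point* p = dynamic_cast<const Point*>(&o);
        return p && p->x == x;
    }
};
```

Calling opEquals through mutable base references hits the forwarding default, which virtually dispatches to Point's const override, so derived comparison logic runs either way.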
next sibling parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Thu, 29 Sep 2011 11:45:18 -0400, Andrei Alexandrescu  
<SeeWebsiteForEmail erdani.org> wrote:

 On 9/29/11 8:38 AM, Steven Schveighoffer wrote:
 On Thu, 29 Sep 2011 11:09:13 -0400, Christophe
 <travert phare.normalesup.org> wrote:

 "Steven Schveighoffer" , dans le message (digitalmars.D:145738), a
 It is always possible for a non-const version of opEqual to forward  
 to
 the const version, so people willing to use a const version do not  
 have
 to define a non-const version.
Again, you still need to define both, this is not a good situation.
No, I didn't express myself correctly. The non-const version should forward to the const version *by default*.
Fine, but if you want to define a non-const version that *doesn't* call the const version, you have to define both. So even if you *don't* want to deal with const, you still do. I should have been clearer, sorry. Note that the compiler currently calls a global method which accepts two non-const Objects. In order for it to support both const and mutable versions, it would have to have 4 different functions. I really don't think all this complexity is worth the benefit. Just learn to use const properly, or don't use the operator system to do comparisons. Object.opEquals should be const.
If we make this change we're liable to break all code that defines opEquals for classes. Two versions should be enough: const/const and mutable/mutable, which by default forwards to const/const. Old code will run unchanged at a slight efficiency cost due to forwarding. Since it didn't previously work for const anyway, no harm done. New code gets to only override the const/const version. Where is this wrong?
class MyExistingClass
{
    string name;
    this(string n) { name = n; }

    bool opEquals(Object other)
    {
        if(auto x = cast(MyExistingClass)other)
        {
            return x.name == name;
        }
        return false;
    }
}

void main()
{
    auto mec = new MyExistingClass("foo".idup);
    auto mec2 = new MyExistingClass("foo".idup);
    const mec_const = mec;
    const mec2_const = mec2;
    assert(mec == mec2);
    assert(mec_const == mec2_const); //???
}

So what does the second assert do? With the current compiler, it should fail to compile (not sure if it does, I know there was a bug where it used to pass). The current Object has a default equality comparison that checks for identity. Is that what the default const version would do? In that case, the first assert passes, whereas the second assert fails. If we make the default const opEquals throw, then this breaks existing code for classes which *don't* implement opEquals. Any way you slice it, code is going to break. I'd rather it break in the compile phase than silently misbehave later. -Steve
Sep 29 2011
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 9/29/11 10:15 AM, Steven Schveighoffer wrote:
 On Thu, 29 Sep 2011 11:45:18 -0400, Andrei Alexandrescu
 <SeeWebsiteForEmail erdani.org> wrote:

 On 9/29/11 8:38 AM, Steven Schveighoffer wrote:
 On Thu, 29 Sep 2011 11:09:13 -0400, Christophe
 <travert phare.normalesup.org> wrote:

 "Steven Schveighoffer" , dans le message (digitalmars.D:145738), a
 It is always possible for a non-const version of opEqual to
 forward to
 the const version, so people willing to use a const version do not
 have
 to define a non-const version.
Again, you still need to define both, this is not a good situation.
No, I didn't express myself correctly. The non-const version should forward to the const version *by default*.
Fine, but if you want to define a non-const version that *doesn't* call the const version, you have to define both. So even if you *don't* want to deal with const, you still do. I should have been clearer, sorry. Note that the compiler currently calls a global method which accepts two non-const Objects. In order for it to support both const and mutable versions, it would have to have 4 different functions. I really don't think all this complexity is worth the benefit. Just learn to use const properly, or don't use the operator system to do comparisons. Object.opEquals should be const.
If we make this change we're liable to break all code that defines opEquals for classes. Two versions should be enough: const/const and mutable/mutable, which by default forwards to const/const. Old code will run unchanged at a slight efficiency cost due to forwarding. Since it didn't previously work for const anyway, no harm done. New code gets to only override the const/const version. Where is this wrong?
class MyExistingClass
{
    string name;
    this(string n) { name = n; }

    bool opEquals(Object other)
    {
        if(auto x = cast(MyExistingClass)other)
        {
            return x.name == name;
        }
        return false;
    }
}

void main()
{
    auto mec = new MyExistingClass("foo".idup);
    auto mec2 = new MyExistingClass("foo".idup);
    const mec_const = mec;
    const mec2_const = mec2;
    assert(mec == mec2);
    assert(mec_const == mec2_const); //???
}

So what does the second assert do?
Should fail during runtime. Your example does not currently compile so you can't talk about breaking code that currently doesn't work. Andrei
Sep 29 2011
parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Thu, 29 Sep 2011 13:20:21 -0400, Andrei Alexandrescu  
<SeeWebsiteForEmail erdani.org> wrote:

 On 9/29/11 10:15 AM, Steven Schveighoffer wrote:
 On Thu, 29 Sep 2011 11:45:18 -0400, Andrei Alexandrescu
 <SeeWebsiteForEmail erdani.org> wrote:

 On 9/29/11 8:38 AM, Steven Schveighoffer wrote:
 On Thu, 29 Sep 2011 11:09:13 -0400, Christophe
 <travert phare.normalesup.org> wrote:

 "Steven Schveighoffer" , dans le message (digitalmars.D:145738), a
 It is always possible for a non-const version of opEqual to
 forward to
 the const version, so people willing to use a const version do not
 have
 to define a non-const version.
Again, you still need to define both, this is not a good situation.
No, I didn't express myself correctly. The non-const version should forward to the const version *by default*.
Fine, but if you want to define a non-const version that *doesn't* call the const version, you have to define both. So even if you *don't* want to deal with const, you still do. I should have been clearer, sorry. Note that the compiler currently calls a global method which accepts two non-const Objects. In order for it to support both const and mutable versions, it would have to have 4 different functions. I really don't think all this complexity is worth the benefit. Just learn to use const properly, or don't use the operator system to do comparisons. Object.opEquals should be const.
If we make this change we're liable to break all code that defines opEquals for classes. Two versions should be enough: const/const and mutable/mutable, which by default forwards to const/const. Old code will run unchanged at a slight efficiency cost due to forwarding. Since it didn't previously work for const anyway, no harm done. New code gets to only override the const/const version. Where is this wrong?
class MyExistingClass
{
    string name;
    this(string n) { name = n; }

    bool opEquals(Object other)
    {
        if(auto x = cast(MyExistingClass)other)
        {
            return x.name == name;
        }
        return false;
    }
}

void main()
{
    auto mec = new MyExistingClass("foo".idup);
    auto mec2 = new MyExistingClass("foo".idup);
    const mec_const = mec;
    const mec2_const = mec2;
    assert(mec == mec2);
    assert(mec_const == mec2_const); //???
}

So what does the second assert do?
Should fail during runtime. Your example does not currently compile so you can't talk about breaking code that currently doesn't work.
Everything compiles except for the second assert. But just because nobody wrote the assert before the change does not mean they will refrain from writing it after the change. The runtime failure turns code that was not quite fully defined (i.e. you cannot compare two const objects) into code that is badly defined (you can compare them, but you get an exception), especially when a const version is *easy* to create. I mean, all you are doing is comparing two immutable strings; that should be doable using const. Now I have to nag the author of MyExistingClass to change his opEquals to const. Compare that with changing opEquals to const instead of just *adding* a const version of opEquals. With a straight change, the code doesn't compile and you put const on opEquals wherever it complains. Then it's still fully defined and runs the same as before. How is that a bad thing? -Steve
Sep 29 2011
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 9/29/11 10:39 AM, Steven Schveighoffer wrote:
 The runtime failure makes code that was not quite fully defined (i.e.
 cannot compare two const objects) to badly defined (you can compare
 them, but you get an exception), especially when a const version is
 *easy* to create. I mean, all you are doing is comparing two immutable
 strings, that should be doable using const. Now I have to nag the author
 of MyExistingClass to change his opEquals to const.
I agree with this characterization.
 Compare changing opEquals to const instead of just *adding* a const
 version of opEquals. With a straight change, the code doesn't compile
 and you put const on opEquals wherever it complains. Now it's still
 fully defined and runs the same as before. How is that a bad thing?
It's a bad thing because it breaks existing code. Andrei
Sep 29 2011
parent "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Thu, 29 Sep 2011 13:41:04 -0400, Andrei Alexandrescu  
<SeeWebsiteForEmail erdani.org> wrote:

 On 9/29/11 10:39 AM, Steven Schveighoffer wrote:
 Compare changing opEquals to const instead of just *adding* a const
 version of opEquals. With a straight change, the code doesn't compile
 and you put const on opEquals wherever it complains. Now it's still
 fully defined and runs the same as before. How is that a bad thing?
It's a bad thing because it breaks existing code.
But not *silently*. Breaking existing code is a tradeoff. I contend that the tradeoff is worth it, because:

1. 99% of the cases where opEquals is not const can just be switched to const (i.e. the code runs fine as const, but must not be labeled const because of the current language defect). For these cases, the break is trivial to fix.

2. Any place where opEquals actually *does* modify the object is either abusing opEquals (i.e. actually changing state) and should be rewritten, or enabling an optimization (i.e. caching).

So the change *promotes* doing the right thing (switching opEquals to const) in cases where it's a trivial change. The optimization is the only legitimate counter-case. However, we should make that version the exception, since it is not common (and is not technically necessary). Most opEquals implementations are like what I have written -- compare each member. So how do we solve the optimization? I think this is better solved by solving the logical const problem in general rather than infecting every object which does not care about it with the burden of defining multiple opEquals overloads. Note that I think the compiler should not care at all about opEquals' definition for structs. -Steve
Sep 29 2011
prev sibling parent Jonathan M Davis <jmdavisProg gmx.com> writes:
On Thursday, September 29, 2011 08:45:18 Andrei Alexandrescu wrote:
 If we make this change we're liable to break all code that defines
 opEquals for classes.
 
 Two versions should be enough: const/const and mutable/mutable, which by
 default forwards to const/const. Old code will run unchanged at a slight
 efficiency cost due to forwarding. Since it didn't previously work for
 const anyway, no harm done.
 
 New code gets to only override the const/const version.
So, are we talking about having non-const equals as a temporary thing or as a permanent thing? From the discussion on github, I got the impression that we were talking about it being permanent. From the discussion here, it sounds more like we're talking about having both a const and non-const equals as a deprecation path followed by having only a const opEquals. So, which of the two are we really proposing here? - Jonathan M Davis
Sep 29 2011
prev sibling parent reply Peter Alexander <peter.alexander.au gmail.com> writes:
On 29/09/11 12:33 PM, Steven Schveighoffer wrote:
 On Wed, 28 Sep 2011 19:21:33 -0400, Peter Alexander
 <peter.alexander.au gmail.com> wrote:

 On 26/09/11 12:52 PM, Steven Schveighoffer wrote:
 On Sat, 24 Sep 2011 07:19:33 -0400, Peter Alexander
 <peter.alexander.au gmail.com> wrote:

 I'm happy to not have logical const in D provided that the Object
 interface (and other similar interfaces) don't require that opEquals
 is const or any nonsense like that. const means physical const, and
 opEquals should not require physical const.

 IMO const/immutable should *only* be used when you need to pass things
 between threads i.e. when you *really do* need physical const. If
 people start using const like you would in C++ then every interface
 just becomes unnecessarily restrictive.
FYI, this is a bug, not a feature. http://d.puremagic.com/issues/show_bug.cgi?id=1824 It *will* be fixed eventually. The fact that opEquals is not const is a huge problem. -Steve
I was arguing that opEquals (and co.) should *not* be const. IMO it would be a huge problem if they were.
why? For what purpose do you need to change an object during comparison?
The comparison may involve comparing a sub-object that is lazily created. It could also involve computing a cached perfect hash for faster comparison, requiring memoization.
 Andrei says that it will (in a way) be both, so I'm happy with that.
I haven't seen that statement.
I can't find it, but he said that there will be two versions: a const version and a non-const version. By default, the non-const version will forward to the const version, so you only have to implement one.
Sep 29 2011
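Editorial illustration (not in the original thread): the memoized-hash pattern Peter describes, as a C++ sketch with a hypothetical type. C++'s `mutable` is precisely the physical-const escape hatch under discussion - the cache mutates inside a const member function, which strict transitive const in D would forbid:

```cpp
#include <cstddef>
#include <functional>
#include <optional>
#include <string>

struct Doc {
    std::string text;
    mutable std::optional<std::size_t> hash_cache;  // memoized on first use

    std::size_t hash() const {
        if (!hash_cache)  // logically-const mutation, legal via `mutable`
            hash_cache = std::hash<std::string>{}(text);
        return *hash_cache;
    }
    bool equals(const Doc& o) const {
        if (hash() != o.hash()) return false;  // cheap reject via cached hash
        return text == o.text;                 // full comparison on hash match
    }
};
```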
next sibling parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Thu, 29 Sep 2011 13:48:02 -0400, Peter Alexander  
<peter.alexander.au gmail.com> wrote:

 On 29/09/11 12:33 PM, Steven Schveighoffer wrote:
 On Wed, 28 Sep 2011 19:21:33 -0400, Peter Alexander
 <peter.alexander.au gmail.com> wrote:

 On 26/09/11 12:52 PM, Steven Schveighoffer wrote:
 On Sat, 24 Sep 2011 07:19:33 -0400, Peter Alexander
 <peter.alexander.au gmail.com> wrote:

 I'm happy to not have logical const in D provided that the Object
 interface (and other similar interfaces) don't require that opEquals
 is const or any nonsense like that. const means physical const, and
 opEquals should not require physical const.

 IMO const/immutable should *only* be used when you need to pass  
 things
 between threads i.e. when you *really do* need physical const. If
 people start using const like you would in C++ then every interface
 just becomes unnecessarily restrictive.
FYI, this is a bug, not a feature. http://d.puremagic.com/issues/show_bug.cgi?id=1824 It *will* be fixed eventually. The fact that opEquals is not const is a huge problem. -Steve
I was arguing that opEquals (and co.) should *not* be const. IMO it would be a huge problem if they were.
why? For what purpose do you need to change an object during comparison?
The comparison may involve comparing a sub-object that is lazily created. It could also involve computing a cached perfect hash for faster comparison, requiring memoization.
Neither of these are required for opEquals to work. They are optimizations. -Steve
Sep 29 2011
next sibling parent reply "Jonathan M Davis" <jmdavisProg gmx.com> writes:
On Thursday, September 29, 2011 10:50 Steven Schveighoffer wrote:
 On Thu, 29 Sep 2011 13:48:02 -0400, Peter Alexander
 The comparison may involve comparing a sub-object that is lazily
 created. It could also involve computing a cached perfect hash for
 faster comparison, requiring memoization.
Neither of these are required for opEquals to work. They are optimizations.
True, but by making opEquals const and only const, you're disallowing such optimizations. It becomes impossible for someone to do any kind of lazy loading at all. So, in principle, I definitely think that opEquals should be const, and practically speaking it _needs_ to be const for const to work properly, but if we make it _only_ const, then we're disallowing some performance-related stuff that some people care a lot about. Also, if we force opEquals to be const, then those classes that just _can't_ be const and work properly (e.g. those that _require_ logical const to even work - like if they have a mutex) won't be usable at all. So, making it both seems to be the only real option. Though come to think of it, since you _still_ can't have lazy loading when a const opEquals even _exists_, you're either still going to have to forgo lazy loading, make the const opEquals throw, or get nasty bugs when the non-overridden base class version of opEquals gets called. So, this could be even worse than we were thinking... I'm not quite sure what the correct solution is then. By having both, we allow for both paradigms but make the use of const opEquals potentially buggy in the cases where someone insists on doing lazy loading. But if we insist on only having the const opEquals, then some folks are going to get an unavoidable performance hit in their programs. If caching is the only issue, then it's a non-issue, because the const versions just become less efficient in the cases where the value is dirty, but if the state of the object actually needs to be changed for opEquals to work (e.g. lazy loading or internal mutexes), then we're going to have cases where the const opEquals can't possibly work correctly. This is ugly. - Jonathan M Davis
Sep 29 2011
parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Thu, 29 Sep 2011 14:45:13 -0400, Jonathan M Davis <jmdavisProg gmx.com>  
wrote:

 On Thursday, September 29, 2011 10:50 Steven Schveighoffer wrote:
 On Thu, 29 Sep 2011 13:48:02 -0400, Peter Alexander
 The comparison may involve comparing a sub-object that is lazily
 created. It could also involve computing a cached perfect hash for
 faster comparison, requiring memoization.
Neither of these are required for opEquals to work. They are optimizations.
True, but by making opEquals const and only const, you're disallowing such optimizations. It becomes impossible for someone to do any kind of lazy loading at all. So, in principle, I definitely think that opEquals should be const, and practically speaking it _needs_ to be const for const to work properly, but if we make it _only_ const, then we're disallowing some performance-related stuff that some people care a lot about.
Not at all. You are not forced to use obj == obj to do comparisons.
 Also, if we force opEquals to
 be const, then those classes that just _can't_ be const and work properly
 (e.g. those that _require_ logical const to even work - like if they  
 have a
 mutex) won't be usable at all. So, making it both seems to be the only  
 real
 option.
Mutexes are usable within const functions. They are *always* logically const. It's one of the reasons I think a subset of logical const works well, even if we don't have full-blown support. I still feel there is a logical const solution out there which gets enough of the way there to be useful, but makes circumventing immutable impossible or highly unlikely.
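The mutex case Steve describes maps directly onto what C++ (the language this thread keeps contrasting D against) spells `mutable`: the lock guards access but is never part of the object's observable state. A minimal sketch; the `Config` class and its members are invented for illustration:

```cpp
#include <mutex>
#include <string>

// Hypothetical class whose mutex is "logically const": it is locked and
// unlocked through const references, but the observable state never changes.
class Config {
    mutable std::mutex m_;   // mutable: lockable even inside const methods
    std::string value_;
public:
    explicit Config(std::string v) : value_(std::move(v)) {}

    // A const member function may still lock the mutable mutex.
    std::string value() const {
        std::lock_guard<std::mutex> lock(m_);
        return value_;
    }
};
```

D's transitive const has no `mutable` escape hatch, which is why the thread treats this as a language-level question rather than a class-design one.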
 Though come to think of it, since you _still_ can't have lazy loading  
 when a
 const opEquals even _exists_, you're either still going to have to  
 forgo lazy
 loading, make the const opEquals throw, or get nasty bugs when the non-
 overridden base class version of opEquals gets thrown. So, this could be  
 even
 worse than we were thinking...

 I'm not quite sure what the correct solution is then. By having both, we  
 allow
 for both paradigms but make the use of const opEquals potentially buggy  
 in the
 cases where someone insists on doing lazy loading. But if we insist on  
 only
 having the const opEquals, then some folks are going to get an  
 unavoidable
 performance hit in their programs. If caching is the only issue, then  
 it's a
 non-issue, because the const versions just become less efficient in the  
 cases
 where the value is dirty, but if the state of the function actually  
 needs to
 be changed for opEquals to work (e.g. lazy loading or internal mutexes),  
 then
 we're going to have cases where the const opEquals can't possibly work
 correctly.
I think the caching/lazy loading *specifically for opEquals* has been blown way out of proportion. I don't think I've ever written one which requires caching to be efficient. -Steve
Sep 29 2011
parent Jonathan M Davis <jmdavisProg gmx.com> writes:
On Thursday, September 29, 2011 14:55:37 Steven Schveighoffer wrote:
 I think the caching/lazy loading *specifically for opEquals* has been
 blown way out of proportion.  I don't think I've ever written one which
 requires caching to be efficient.
Caching is a non-issue. It's just less efficient if you can't update the cache on an opEquals call. Semantically, you're fine.

The problem is lazy loading. If you don't actually set the variable until its getter property is called, and the getter property has never been called prior to opEquals being called, then opEquals _can't_ properly do its calculation, because the value hasn't been loaded yet. Now, assuming that the value is lazily loaded in a pure manner, then you could have a const overload of the property which just does the calculation without actually changing the state of the object, but if the value can't be loaded purely, then that doesn't work.

Personally, I don't care much about lazy loading. I _never_ use it. However, several folks have been saying that it's important to them and their D programs, and as far as I can see, forcing opEquals to be const makes that impossible for them. But maybe the solution is that they're just going to have to throw from opEquals if not everything has been loaded yet and either make sure that they load all values before calling it or use something other than == for equality comparison (though honestly, it reflects really poorly on D if people are forced to reimplement their own equals function because of const).

I don't know what the best solution is, but it's clear that this is a case where const is causing problems. opEquals _must_ be const for const to work correctly, but making it const eliminates - or at least seriously hampers - some performance-critical solutions.

- Jonathan M Davis
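The "const overload that does the calculation without caching" idea above can be sketched in C++ (names like `Widget` and `computeKey` are invented, and the computation stands in for any pure load): the mutable overload caches, the const overload recomputes, and opEquals-style comparison must settle for the const path.

```cpp
// Hypothetical type whose expensive key can be computed purely from seed_.
class Widget {
    int seed_;
    bool loaded_ = false;
    int key_ = 0;
    static int computeKey(int seed) { return seed * 31 + 7; }  // "pure" load
public:
    explicit Widget(int seed) : seed_(seed) {}

    // Mutable overload: loads once, then serves the cached value.
    int key() {
        if (!loaded_) { key_ = computeKey(seed_); loaded_ = true; }
        return key_;
    }

    // Const overload: cannot cache, so it recomputes whenever the value
    // has not been loaded yet -- correct, but it pays the cost every call.
    int key() const {
        return loaded_ ? key_ : computeKey(seed_);
    }

    bool operator==(const Widget& rhs) const {
        return key() == rhs.key();   // resolves to the const overload
    }
};
```

If the load is impure (I/O, allocation that must be kept), the const overload has nothing valid to return, which is exactly the failure mode described above.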
Sep 29 2011
prev sibling parent reply Peter Alexander <peter.alexander.au gmail.com> writes:
On 29/09/11 6:50 PM, Steven Schveighoffer wrote:
 On Thu, 29 Sep 2011 13:48:02 -0400, Peter Alexander
 <peter.alexander.au gmail.com> wrote:

 On 29/09/11 12:33 PM, Steven Schveighoffer wrote:
 On Wed, 28 Sep 2011 19:21:33 -0400, Peter Alexander
 <peter.alexander.au gmail.com> wrote:

 On 26/09/11 12:52 PM, Steven Schveighoffer wrote:
 On Sat, 24 Sep 2011 07:19:33 -0400, Peter Alexander
 <peter.alexander.au gmail.com> wrote:

 I'm happy to not have logical const in D provided that the Object
 interface (and other similar interfaces) don't require that opEquals
 is const or any nonsense like that. const means physical const, and
 opEquals should not require physical const.

 IMO const/immutable should *only* be used when you need to pass
 things
 between threads i.e. when you *really do* need physical const. If
 people start using const like you would in C++ then every interface
 just becomes unnecessarily restrictive.
FYI, this is a bug, not a feature. http://d.puremagic.com/issues/show_bug.cgi?id=1824 It *will* be fixed eventually. The fact that opEquals is not const is a huge problem. -Steve
I was arguing that opEquals (and co.) should *not* be const. IMO it would be a huge problem if they were.
why? For what purpose do you need to change an object during comparison?
The comparison may involve comparing a sub-object that is lazily created. It could also involve computing a cached perfect hash for faster comparison, requiring memoization.
Neither of these are required for opEquals to work. They are optimizations.
So what you're saying is that, in D, I'm not allowed to optimize my opEquals and that I should be fine with that?
Sep 29 2011
parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Thu, 29 Sep 2011 14:59:25 -0400, Peter Alexander  
<peter.alexander.au gmail.com> wrote:

 On 29/09/11 6:50 PM, Steven Schveighoffer wrote:
 On Thu, 29 Sep 2011 13:48:02 -0400, Peter Alexander
 <peter.alexander.au gmail.com> wrote:

 On 29/09/11 12:33 PM, Steven Schveighoffer wrote:
 On Wed, 28 Sep 2011 19:21:33 -0400, Peter Alexander
 <peter.alexander.au gmail.com> wrote:

 On 26/09/11 12:52 PM, Steven Schveighoffer wrote:
 On Sat, 24 Sep 2011 07:19:33 -0400, Peter Alexander
 <peter.alexander.au gmail.com> wrote:

 I'm happy to not have logical const in D provided that the Object
 interface (and other similar interfaces) don't require that  
 opEquals
 is const or any nonsense like that. const means physical const, and
 opEquals should not require physical const.

 IMO const/immutable should *only* be used when you need to pass
 things
 between threads i.e. when you *really do* need physical const. If
 people start using const like you would in C++ then every interface
 just becomes unnecessarily restrictive.
FYI, this is a bug, not a feature. http://d.puremagic.com/issues/show_bug.cgi?id=1824 It *will* be fixed eventually. The fact that opEquals is not const is a huge problem. -Steve
I was arguing that opEquals (and co.) should *not* be const. IMO it would be a huge problem if they were.
why? For what purpose do you need to change an object during comparison?
The comparison may involve comparing a sub-object that is lazily created. It could also involve computing a cached perfect hash for faster comparison, requiring memoization.
Neither of these are required for opEquals to work. They are optimizations.
So what you're saying is that, in D, I'm not allowed to optimize my opEquals and that I should be fine with that?
Or use a method other than opEquals. Or overload opEquals (this really should be possible) for your specific needs.

Again, I ask: what is a real-world example of something that needs lazy caching (or where lazy caching significantly helps performance) for comparison? You have already stated that you appreciate that it's not const, so you must have *something* that needs it.

So far, I don't think it's a very common requirement. It certainly doesn't seem so important that the entire body of D code in existence should have to deal with a mutable opEquals. The fact that it's mutable now is really a legacy D1 issue.

-Steve
Sep 29 2011
parent reply Peter Alexander <peter.alexander.au gmail.com> writes:
On 29/09/11 8:01 PM, Steven Schveighoffer wrote:
 Again, I ask, what is a real-world example of something that needs lazy
 caching (or where lazy caching significantly helps performance) for
 comparison. You have already stated that you appreciate it's not const,
 so you must have *something* that needs it.
1. A renderer lazily caching world transform matrices.
2. Collision detection queries often cache recent intermediate results due to spatial coherency between queries.
3. A simple asset loader may lazily load assets on demand (rather than having to eagerly load things up front).
4. String hashes are cached all over the place for fast resource look-ups.

I'm honestly quite amazed that I have to justify my belief that caching is useful and commonly used. Caches are everywhere: your CPU has multiple levels of caches, your hard disk has caches, your browser caches, domain lookups are cached; and they're all lazy, too. Lazy caching is a cornerstone of optimisation.
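Item 4 is the one that touches equality directly: a memoized hash used as a cheap pre-filter in comparisons. A C++ sketch of that pattern (the `ResourceId` class is invented; `mutable` is C++'s escape hatch, which is precisely what D's const lacks):

```cpp
#include <cstddef>
#include <functional>
#include <string>

// Hypothetical resource name whose hash is memoized for fast comparison.
class ResourceId {
    std::string name_;
    mutable std::size_t hash_ = 0;
    mutable bool hashed_ = false;
public:
    explicit ResourceId(std::string n) : name_(std::move(n)) {}

    std::size_t hash() const {
        if (!hashed_) {                  // lazy: first call pays the cost
            hash_ = std::hash<std::string>{}(name_);
            hashed_ = true;
        }
        return hash_;
    }

    bool operator==(const ResourceId& rhs) const {
        // Cheap reject via cached hashes before the full string compare.
        return hash() == rhs.hash() && name_ == rhs.name_;
    }
};
```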
 So far, I don't think it's a very common requirement. It certainly
 doesn't seem like it's so important that the entire body of D code in
 existence should have to deal with mutable opEquals. The fact that it's
 mutable now is really a legacy D1 issue.
Inline assembler isn't a common requirement either, but that's no argument to ignore it. I suppose something like __restrict isn't very important to you either. It's certainly used a lot less than lazy caching. However, it's worth pointing out that __restrict was introduced into compilers through popular demand by people that needed it. These things are real and should not be ignored.
Sep 29 2011
parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Thu, 29 Sep 2011 20:14:01 -0400, Peter Alexander  
<peter.alexander.au gmail.com> wrote:

 On 29/09/11 8:01 PM, Steven Schveighoffer wrote:
 Again, I ask, what is a real-world example of something that needs lazy
 caching (or where lazy caching significantly helps performance) for
 comparison. You have already stated that you appreciate it's not const,
 so you must have *something* that needs it.
1. A renderer lazily caching world transform matrices.
How does this relate to opEquals?
 2. Collision detection queries often cache recent intermediate results  
 due to spatial coherency between queries.
Surely you are not doing collision detection with opEquals?
 3. A simple asset loader may lazily load assets on demand (rather than  
 having to eagerly load things up front).
This is a good one. The thought of loading a resource just to throw it away because const won't let you store it is not really acceptable. I can see this being related to opEquals more than the others. I wonder how often this would affect opEquals in practice. I have to think about this, and how it could possibly be solvable. I suspect we may need to do something that supports logical const specifically for this problem (i.e. not generalized logical const). BTW, see my other thread "logical const without casts" for a possible solution.
 4. String hashes are cached all over the place for fast resource look  
 ups.
strings have no place to store a cache. Plus resource lookup is not opEquals.
 I'm honestly quite amazed that I have to justify my belief that caching  
 is useful, and commonly used. Caches are everywhere: your CPU has  
 multiple levels of caches, you hard disk has caches, your browser  
 caches, domain lookups are cached; and they're all lazy, too. Lazy  
 caching is a cornerstone of optimisation.
Oh, I am not questioning the value or usefulness of caching in general. What I'm wondering is how often it's needed for comparisons. What we are discussing is why opEquals should not be const, not why all functions should not be const.
 So far, I don't think it's a very common requirement. It certainly
 doesn't seem like it's so important that the entire body of D code in
 existence should have to deal with mutable opEquals. The fact that it's
 mutable now is really a legacy D1 issue.
Inline assembler isn't a common requirement either, but that's no argument to ignore it.
Inline assembler is a basic requirement, even if it's not used often. Many low-level pieces depend on it. Besides, having inline assembler does *not* affect code that does not use inline assembler. This is not the same thing as making opEquals cater to non-const implementations.
 I suppose something like __restrict isn't very important to you either.  
 It's certainly used a lot less than lazy caching. However, it's worth  
 pointing out that __restrict was introduced into compilers through  
 popular demand by people that needed it. These things are real and  
 should not be ignored.
I'm unaware of __restrict, so I can't really comment on it. But I'll respond to the general argument that something that isn't used often is still useful:

Yes, things can be included that are used infrequently, as long as the inclusion of such support does not adversely affect code that doesn't need it. The problem we have here is:

1. opEquals is not const, so you cannot compare const objects. This is downright unacceptable.
2. A compromise solution where opEquals is both const and non-const in Object sacrifices performance and simplicity for the benefit of having the *possibility* of obj == obj working for lazy-caching opEquals.
3. Is it worth making the majority of opEquals implementations (i.e. those that are const-only) lower performing in order to allow for the possibility? How much do we gain by downgrading const opEquals?
4. Is it worth making existing code stop compiling in order to switch to const opEquals exclusively?

My opinion is:

1. We need const opEquals. There is no debate on this, I think.
2. The compromise solution is not worth the gain. I'd rather have most of my objects compare as quickly as possible.
3. It should be possible to create a mutable opEquals and have it hook with obj == obj. This is different than the compromise solution, which puts both const and non-const opEquals in Object. This means we need to reengineer how the compiler does opEquals for objects.
4. Yes, it is worth breaking existing compilation to switch to const opEquals in Object.

-Steve
Sep 30 2011
parent reply travert phare.normalesup.org (Christophe) writes:
"Steven Schveighoffer" , dans le message (digitalmars.D:145812), a écrit :
 What we are discussing is why opEquals should not be const, not why all  
 functions should not be const.
Considering the viral behavior of const in D, we are discussing why every function callable from opEquals should be declared const, even if the programmer does not want to use const at all (but obviously wants to use opEquals).
 My opinion is:
 
 1. we need const opEquals.  There is no debate on this I think.
 2. The compromise solution is not worth the gain.  I'd rather have most of  
 my objects compare as quickly as possible.
 3. It should be possible to create a mutable opEquals and have it hook  
 with obj == obj.  This is different than the compromise solution, which  
 puts both const and non-const opEquals in Object.  This means we need to  
 reengineer how the compiler does opEquals for objects.
 4. Yes, it is worth breaking existing compilation to switch to const  
 opEquals in Object.
I think I agree with you, except perhaps on point 4. Can you elaborate on point 3? I am not sure I understand. Do you mean hooking obj == obj2 to a mutable opEquals only when obj and obj2 are mutable _and_ define a mutable opEquals, instead of calling the mutable opEquals directly and having it forward to the const opEquals at the cost of one extra virtual call?

-- Christophe
Sep 30 2011
parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Fri, 30 Sep 2011 09:39:42 -0400, Christophe
<travert phare.normalesup.org> wrote:

 "Steven Schveighoffer", in message (digitalmars.D:145812), wrote:
 What we are discussing is why opEquals should not be const, not why all
 functions should not be const.
Considering the viral behavior of const in D, we are discussing why every function callable from opEquals should be declared const, even if the programmer does not want to use const at all (but obviously wants to use opEquals).
I think we are past the point where you can use D2 without using const. inout should make things better. Don't forget that not only is const viral, but not-const is viral too.
 My opinion is:

 1. We need const opEquals. There is no debate on this, I think.
 2. The compromise solution is not worth the gain. I'd rather have most of
 my objects compare as quickly as possible.
 3. It should be possible to create a mutable opEquals and have it hook
 with obj == obj. This is different than the compromise solution, which
 puts both const and non-const opEquals in Object. This means we need to
 reengineer how the compiler does opEquals for objects.
 4. Yes, it is worth breaking existing compilation to switch to const
 opEquals in Object.
I think I agree with you, except perhaps on point 4. Can you elaborate on point 3? I am not sure I understand. Do you mean hooking obj == obj2 to a mutable opEquals only when obj and obj2 are mutable _and_ define a mutable opEquals, instead of calling the mutable opEquals directly and having it forward to the const opEquals at the cost of one extra virtual call?
A while back, opEquals worked like this:

obj1 == obj2  =>  obj1.opEquals(obj2)

However, since a recent compiler release (in tune with TDPL):

obj1 == obj2  =>  object.opEquals(obj1, obj2)

where object.opEquals is a function in object.d (not a member function). Here is the entire function:

equals_t opEquals(Object lhs, Object rhs)
{
    if (lhs is rhs) return true;
    if (lhs is null || rhs is null) return false;
    if (typeid(lhs) == typeid(rhs)) return lhs.opEquals(rhs);
    return lhs.opEquals(rhs) && rhs.opEquals(lhs);
}

The changes from the previous version are:

1. If obj1 is null, no segfault.
2. If obj1 and obj2 are of different types, both objects' opEquals results are taken into account.

However, we lose the ability to define a second opEquals for a const/immutable/shared object (also, interfaces can no longer be compared, but that's a separate bug).

So in order to have an opEquals that works for both const and mutable objects, we need to figure out how to make this work. opEquals inside object.di depends on its arguments being Object, so if we go that route, Object must define both const and non-const opEquals. The default non-const opEquals in Object will simply call the const one. But this means a double virtual call when doing opEquals for const objects.

What I think we need is to make the free function opEquals a template, which only instantiates for objects, and then the lhs.opEquals(rhs) and rhs.opEquals(lhs) calls will take full advantage of any overloaded opEquals - for example, if you wanted to overload for non-const objects. But I think the default in Object should be const.

-Steve
Sep 30 2011
parent reply travert phare.normalesup.org (Christophe) writes:
 What I think we need is to make the free function opEquals a template,  
 which only instantiates for objects, and then the lhs.opEquals(rhs) and  
 rhs.opEquals(lhs) will take full advantage of any overloaded opEquals.
 
 For example, if you wanted to overload for non-const objects.  But I think  
 the default in Object should be const.
Thanks for the explanation. That seems to be a nice solution to me. (some people might complain of template bloat...)
Sep 30 2011
parent "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Fri, 30 Sep 2011 10:53:50 -0400, Christophe  
<travert phare.normalesup.org> wrote:

 What I think we need is to make the free function opEquals a template,
 which only instantiates for objects, and then the lhs.opEquals(rhs) and
 rhs.opEquals(lhs) will take full advantage of any overloaded opEquals.

 For example, if you wanted to overload for non-const objects.  But I  
 think
 the default in Object should be const.
Thanks for the explanation. That seems to be a nice solution to me. (some people might complain of template bloat...)
I actually just tested this and it works (no compiler changes necessary). I'll start a new thread to discuss. -Steve
Sep 30 2011
prev sibling parent "Jonathan M Davis" <jmdavisProg gmx.com> writes:
On Thursday, September 29, 2011 10:48 Peter Alexander wrote:
 On 29/09/11 12:33 PM, Steven Schveighoffer wrote:
 On Wed, 28 Sep 2011 19:21:33 -0400, Peter Alexander
 Andrei says that it will (in a way) be both, so I'm happy with that.
I haven't seen that statement.
I can't find it, but he said that there will be two versions: a const version and a non-const version. By default, the non-const version will forward to the const version, so you only have to implement one.
https://github.com/D-Programming-Language/phobos/pull/262 - Jonathan M Davis
Sep 29 2011
prev sibling next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 9/23/2011 9:11 PM, Jonathan M Davis wrote:
 Okay. I'm not saying that we should necessarily implement this. I'm just
 looking to air out an idea here and see if there are any technical reasons why
 it can't be done or is unreasonable.
Andrei and I talked about this some time back. Where it ran aground was Andrei wanted a way to mark the object as 'dirty' so it would get reloaded. We couldn't find a way that didn't look like a mess. It also has problems if you try and add 'const' to it, because it is under the hood not const. Nor is it immutable or thread safe.
Sep 25 2011
next sibling parent reply Jonathan M Davis <jmdavisProg gmx.com> writes:
On Sunday, September 25, 2011 17:46:00 Walter Bright wrote:
 On 9/23/2011 9:11 PM, Jonathan M Davis wrote:
 Okay. I'm not saying that we should necessarily implement this. I'm just
 looking to air out an idea here and see if there are any technical
 reasons why it can't be done or is unreasonable.
Andrei and I talked about this some time back. Where it ran aground was Andrei wanted a way to mark the object as 'dirty' so it would get reloaded. We couldn't find a way that didn't look like a mess. It also has problems if you try and add 'const' to it, because it is under the hood not const. Nor is it immutable or thread safe.
Well, that's why I tried to find a solution specifically for lazy loading as opposed to a more general caching mechanism. Using const with a more general caching mechanism seems like it would be _really_ hard to do if not outright impossible, but it at least seems like it may be possible to do it with a single, lazy load, since the value never changes once it's been set. And the idea with immutability and shared was that they would force eager loading so that they wouldn't be a problem. The whole thing becomes vastly more complicated if you try and have a more general caching mechanism.

On the whole, it looks to me like my idea could work, but there may be complications with regards to what you can allow in the initializer function for the lazy member variable. Sorting those out could render the idea more or less useless, and even if it works perfectly exactly as I suggested, I don't know that what it adds merits the extra complexity that it requires.

It would be very nice if we could expand const to be able to allow for some level of controlled, logical constness, but the ultimate problem is finding a way to control it. If it could be controlled, then there are probably ways to make it work with immutable (such as eliminating the caching in the case of immutable), but it's a very difficult problem, and I'm not at all convinced that it's ultimately solvable. I was just hoping to find a partial solution.

- Jonathan M Davis
Sep 25 2011
parent Walter Bright <newshound2 digitalmars.com> writes:
On 9/25/2011 6:03 PM, Jonathan M Davis wrote:
 I don't know that what it adds merits the extra complexity that it requires.
I think that's the real issue. D has a lot of stuff in it, we ought to be very demanding of significant new features.
Sep 25 2011
prev sibling parent Timon Gehr <timon.gehr gmx.ch> writes:
On 09/26/2011 02:46 AM, Walter Bright wrote:
 On 9/23/2011 9:11 PM, Jonathan M Davis wrote:
 Okay. I'm not saying that we should necessarily implement this. I'm just
 looking to air out an idea here and see if there are any technical
 reasons why
 it can't be done or is unreasonable.
Andrei and I talked about this some time back. Where it ran aground was Andrei wanted a way to mark the object as 'dirty' so it would get reloaded. We couldn't find a way that didn't look like a mess.
lazyField=void ? :o)
 It also has problems if you try and add 'const' to it, because it is
 under the hood not const. Nor is it immutable or thread safe.
Under the hood, a const object can be either mutable or immutable. Calling a const member function does not at all preclude the object being changed in D. Immutable objects would have their lazy fields loaded eagerly.

Thread safety: every object in D is thread-safe because it is unshared by default.
Sep 26 2011
prev sibling next sibling parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Sat, 24 Sep 2011 00:11:52 -0400, Jonathan M Davis <jmdavisProg gmx.com>  
wrote:

 Okay. I'm not saying that we should necessarily implement this. I'm just
 looking to air out an idea here and see if there are any technical  
 reasons why
 it can't be done or is unreasonable.

 Some programmers have expressed annoyance and/or disappointment that  
 there is
 no logical const of any kind in D. They generally seem to be trying to  
 do one
 of two things - caching return values in member functions or lazily  
 loading
 the values of member variables.
The major reason for having logical const is to a) store state on the object that is not considered part of the object, or b) store references to objects that are not part of the object state. For example, storing a reference to a mutex in an object, or a reference to an owner object. It's the difference between a "has a" relationship and a "points to" relationship. Your lazy loading idea does not help at all for these.
 struct S
 {
     lazy T var = func();
 }

 The lazy indicates that var is going to be lazily loaded, and func  
 returns the
 value that var will be initialized with. However, instead of being a  
 normal
 variable of type T, this is what happens to var:

 1. Instead of a member variable of type T, S gets a bool (e.g.  
 __varLoaded)
 and a variable of type T (e.g. __var).

 2. __varLoaded is default-initialized to false, and __var is void (so,
 garbage).

 3. Every reference to var is replaced with a call to a getter property
 function (e.g. __varProp). There is no setter property.

 4. __varProp looks something like this:

 T __varProp()
 {
     if(!__varLoaded)
     {
         __var = func();
         __varLoaded = true;
     }

     return __var;
 }

 5.  __varProp may or may not be inlined (but it would be nice if it  
 would be).

 6.  If the S being constructed is shared or immutable and __varProp is  
 not
 called in the constructor, then __varProp is called immediately after the
 constructor (or at the end of the constructor if that works better for  
 the
 compiler).
Why? What if the calculation is very expensive, and you never access var? Besides, we can already proactively initialize data in an immutable constructor; what is the benefit here?
 So, the question is: Does this work? And if not, why? And if it _does_  
 work,
 is it a good idea? And if not, why?
It doesn't solve the problem. -Steve
Sep 26 2011
next sibling parent reply "Regan Heath" <regan netmail.co.nz> writes:
On Mon, 26 Sep 2011 13:01:29 +0100, Steven Schveighoffer  
<schveiguy yahoo.com> wrote:
 On Sat, 24 Sep 2011 00:11:52 -0400, Jonathan M Davis
 6.  If the S being constructed is shared or immutable and __varProp is  
 not
 called in the constructor, then __varProp is called immediately after  
 the
 constructor (or at the end of the constructor if that works better for  
 the
 compiler).
Why? What if the calculation is very expensive, and you never access var? Besides, we can already pro-actively initialize data in an immutable constructor, what is the benefit here?
I think this is to avoid threading issues, like double-checked locking problems, etc. -- Using Opera's revolutionary email client: http://www.opera.com/mail/
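The threading issue Regan alludes to is what any truly lazy, shared load runs into: the naive "check flag, lock, check again" dance is famously easy to get wrong. A C++ sketch of the safe version (the `LazyBox` class and its loader are invented) locks unconditionally instead:

```cpp
#include <mutex>

// Hypothetical lazily-loaded value that must also be safe to read from a
// const (conceptually shared) reference.
class LazyBox {
    mutable std::mutex m_;
    mutable bool loaded_ = false;
    mutable int value_ = 0;
    static int expensiveLoad() { return 42; }   // placeholder computation
public:
    // Lock on every access rather than hand-roll double-checked locking,
    // whose flag-then-lock-then-recheck variants are notoriously fragile.
    int get() const {
        std::lock_guard<std::mutex> lock(m_);
        if (!loaded_) { value_ = expensiveLoad(); loaded_ = true; }
        return value_;
    }
};
```

std::call_once/std::once_flag would express the same guarantee; either way, eager loading at construction (as point 6 of the proposal does for shared and immutable) sidesteps the synchronization entirely.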
Sep 26 2011
parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Mon, 26 Sep 2011 09:44:31 -0400, Regan Heath <regan netmail.co.nz>  
wrote:

 On Mon, 26 Sep 2011 13:01:29 +0100, Steven Schveighoffer  
 <schveiguy yahoo.com> wrote:
 On Sat, 24 Sep 2011 00:11:52 -0400, Jonathan M Davis
 6.  If the S being constructed is shared or immutable and __varProp is  
 not
 called in the constructor, then __varProp is called immediately after  
 the
 constructor (or at the end of the constructor if that works better for  
 the
 compiler).
Why? What if the calculation is very expensive, and you never access var? Besides, we can already pro-actively initialize data in an immutable constructor, what is the benefit here?
I think this is to avoid threading issues, like double checked locking problems etc.
My point is, can't I do this now?

struct S
{
    int var;
    immutable this() { var = func(); }
    const int func() { ... }
}

vs.

struct S
{
    lazy int var = func();
    const int func() { ... }
}

If you aren't going to *actually* lazily initialize a variable, what is the point of all this? I can non-lazily initialize a variable without any new language features.

-Steve
Sep 26 2011
parent Jonathan M Davis <jmdavisProg gmx.com> writes:
On Monday, September 26, 2011 10:35:29 Steven Schveighoffer wrote:
 On Mon, 26 Sep 2011 09:44:31 -0400, Regan Heath <regan netmail.co.nz>
 
 wrote:
 On Mon, 26 Sep 2011 13:01:29 +0100, Steven Schveighoffer
 
 <schveiguy yahoo.com> wrote:
 On Sat, 24 Sep 2011 00:11:52 -0400, Jonathan M Davis
 
 6.  If the S being constructed is shared or immutable and __varProp
 is
 not
 called in the constructor, then __varProp is called immediately
 after
 the
 constructor (or at the end of the constructor if that works better
 for
 the
 compiler).
Why? What if the calculation is very expensive, and you never access var? Besides, we can already pro-actively initialize data in an immutable constructor, what is the benefit here?
I think this is to avoid threading issues, like double checked locking problems etc.
My point is, can't I do this now?

struct S
{
    int var;
    immutable this() { var = func(); }
    const int func() { ... }
}

vs.

struct S
{
    lazy int var = func();
    const int func() { ... }
}

If you aren't going to *actually* lazily initialize a variable, what is the point of all this? I can non-lazily initialize a variable without any new language features.
The point was to allow for lazy initialization when _not_ using immutable or shared but to still have the type work when immutable or shared. If lazy loading were implemented as I suggested but made no attempt at dealing with immutable, then it would have to be _illegal_ to use such a type with immutable, because lazily loading the member variable would violate immutability. Allowing it would potentially result in trying to alter read-only memory. By forcing eager loading with immutable and shared, you avoid the immutability and threading problems but still allow lazy loading with const.

The whole point of this proposal is to allow for lazy loading and const to mix. If all you want is lazy loading, then you can do it right now, but you can't do it with const. But since lazy loading and immutable don't mix at all and _can't_ (since immutable objects could be put in read-only memory), you either have to make immutable illegal with such objects or make them eagerly load for immutable.

- Jonathan M Davis
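The eager-for-immutable, lazy-for-mutable split described above can be sketched in C++ (the `Lazy` class is invented, and a constructor flag stands in for D's immutable/shared construction, which C++ cannot express directly):

```cpp
// Sketch of the proposal's rule: load lazily while mutation is allowed,
// load eagerly when the object must be frozen after construction.
class Lazy {
    bool loaded_ = false;
    int value_ = 0;
    static int load() { return 7; }   // hypothetical loader (func() above)
public:
    explicit Lazy(bool frozen) {
        if (frozen) {                 // "immutable/shared" path: load up
            value_ = load();          // front, so no later mutation is
            loaded_ = true;           // ever needed
        }
    }
    int get() {                       // "mutable" path: load on first use
        if (!loaded_) { value_ = load(); loaded_ = true; }
        return value_;
    }
};
```

In the frozen case `get()` never writes, which is what makes placing the object in read-only memory safe.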
Sep 26 2011
prev sibling parent reply Jonathan M Davis <jmdavisProg gmx.com> writes:
On Monday, September 26, 2011 08:01:29 Steven Schveighoffer wrote:
 On Sat, 24 Sep 2011 00:11:52 -0400, Jonathan M Davis <jmdavisProg gmx.com> wrote:
 Okay. I'm not saying that we should necessarily implement this. I'm just
 looking to air out an idea here and see if there are any technical reasons
 why it can't be done or is unreasonable.

 Some programmers have expressed annoyance and/or disappointment that there
 is no logical const of any kind in D. They generally seem to be trying to
 do one of two things - caching return values in member functions or lazily
 loading the values of member variables.
The major reason for having logical const is to a) store state on the object that is not considered part of the object, or b) store references to objects that are not part of the object state. For example, storing a reference to a mutex in an object, or a reference to an owner object. It's the difference between a "has a" relationship and a "points to" relationship. Your lazy loading idea does not help at all for these.
I believe that the two main complaints about the lack of logical const which have been coming up in the newsgroup have been the inability to cache the return values of member functions and the inability to lazily load the values of member variables. This is simply an attempt to solve the lazy loading portion of that problem.
 struct S
 {
     lazy T var = func();
 }

 The lazy indicates that var is going to be lazily loaded, and func returns
 the value that var will be initialized with. However, instead of being a
 normal variable of type T, this is what happens to var:

 1. Instead of a member variable of type T, S gets a bool (e.g. __varLoaded)
 and a variable of type T (e.g. __var).

 2. __varLoaded is default-initialized to false, and __var is void (so,
 garbage).

 3. Every reference to var is replaced with a call to a getter property
 function (e.g. __varProp). There is no setter property.

 4. __varProp looks something like this:

 T __varProp()
 {
     if(!__varLoaded)
     {
         __var = func();
         __varLoaded = true;
     }

     return __var;
 }

 5. __varProp may or may not be inlined (but it would be nice if it would
 be).

 6. If the S being constructed is shared or immutable and __varProp is not
 called in the constructor, then __varProp is called immediately after the
 constructor (or at the end of the constructor if that works better for the
 compiler).
Why? What if the calculation is very expensive, and you never access var? Besides, we can already pro-actively initialize data in an immutable constructor, what is the benefit here?
The point is that if you aren't using immutable or shared, then you can afford to lazy load it, so you can wait to initialize the variable until it's used. But you _can't_ afford to do that in the case of immutable, because the data must be immutable and can't be changed later to do the lazy loading, and you can't afford to do that in the case of shared, because then you have thread-safety issues. So you have to pay the cost upfront in those cases.
 So, the question is: Does this work? And if not, why? And if it _does_
 work, is it a good idea? And if not, why?
It doesn't solve the problem.
Well, it was never intended to solve the _whole_ problem. It's an attempt at solving a piece of the problem. As far as I can see, enforcing logical constness with const is completely intractable, but lazy loading at least seems like it could be done, which would _partially_ solve the problem. Of course, the fact that it only partially solves the problem makes it far less valuable (and arguably not worth it given the extra complexity), but it does make it possible to solve _part_ of the problem instead of _none_ of the problem. - Jonathan M Davis
Sep 26 2011
next sibling parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Mon, 26 Sep 2011 12:12:30 -0400, Jonathan M Davis <jmdavisProg gmx.com>  
wrote:

 On Monday, September 26, 2011 08:01:29 Steven Schveighoffer wrote:
 On Sat, 24 Sep 2011 00:11:52 -0400, Jonathan M Davis <jmdavisProg gmx.com> wrote:
 Okay. I'm not saying that we should necessarily implement this. I'm just
 looking to air out an idea here and see if there are any technical reasons
 why it can't be done or is unreasonable.

 Some programmers have expressed annoyance and/or disappointment that there
 is no logical const of any kind in D. They generally seem to be trying to
 do one of two things - caching return values in member functions or lazily
 loading the values of member variables.
The major reason for having logical const is to a) store state on the object that is not considered part of the object, or b) store references to objects that are not part of the object state. For example, storing a reference to a mutex in an object, or a reference to an owner object. It's the difference between a "has a" relationship and a "points to" relationship. Your lazy loading idea does not help at all for these.
I believe that the two main complaints about the lack of logical const which have been coming up in the newsgroup have been the inability to cache the return values of member functions and the inability to lazily load the values of member variables. This is simply an attempt to solve the lazy loading portion of that problem.
Of course. My point was that your two use cases I think are far less used than the refer-to-other-object-but-don't-own-it cases. Maybe I'm wrong, it's been a long time since I used C++, and mutable in general. But I remember using it when I wanted to have a reference to something that was not part of the object.
 6. If the S being constructed is shared or immutable and __varProp is not
 called in the constructor, then __varProp is called immediately after the
 constructor (or at the end of the constructor if that works better for the
 compiler).
Why? What if the calculation is very expensive, and you never access var? Besides, we can already pro-actively initialize data in an immutable constructor, what is the benefit here?
The point is that if you aren't using immutable or shared, then you can afford to lazy load it, so you can wait to initialize the variable until it's used, but you _can't_ afford to do that in the case of immutable, because the data must be immutable and can't be changed later to do the lazy loading, and you can't afford to do that in the case of shared, because then you have thread-safety issues, so you have to pay the cost upfront in those cases.
It is only important that the value is constant *when it's read*, not at the beginning of the object's existence. If you hook the only way to read it with a lazy initializer, then the two cases are indistinguishable, or using lazy initialization doesn't make sense.

Let's think of the case where *two* threads are lazily initializing an immutable struct. None of the members can be any different, because they are immutable, so if the value depends solely on the internal struct data, then both initializers will set the *same value*. Two competing threads writing the same value do not result in corruption. If the initializer depends on some *external state*, if that external state is also immutable, same result.

If the initializer depends on some external state that is *not* immutable, then why mark it for lazy initialization? What is the point of initializing data that depends on something that's changing over time? I can't see the point of doing that. This is, of course, only if you can't restart the initialization (i.e. clear the 'set' flag).

Note that if you want your proposed behavior, you can achieve it by defining a constructor that eagerly initializes the variable by simply reading it.

I still don't think this proposal (even one that always lazily initializes) gives enough benefit to be included. Why would you want a constant lazily-initialized value in a non-immutable struct? If this were to mean anything, there would have to be a way to clear the 'set' flag in a mutable struct.

-Steve
Sep 26 2011
parent reply "Jonathan M Davis" <jmdavisProg gmx.com> writes:
On Monday, September 26, 2011 10:26 Steven Schveighoffer wrote:
 On Mon, 26 Sep 2011 12:12:30 -0400, Jonathan M Davis <jmdavisProg gmx.com>
 
 wrote:
 On Monday, September 26, 2011 08:01:29 Steven Schveighoffer wrote:
 On Sat, 24 Sep 2011 00:11:52 -0400, Jonathan M Davis
 <jmdavisProg gmx.com>
 
 wrote:
 Okay. I'm not saying that we should necessarily implement this. I'm
just
 looking to air out an idea here and see if there are any technical
 reasons why
 it can't be done or is unreasonable.
 
 Some programmers have expressed annoyance and/or disappointment that
 there is
 no logical const of any kind in D. They generally seem to be trying to
 do one
 of two things - caching return values in member functions or lazily
 loading
 the values of member variables.
The major reason for having logical const is to a) store state on the object that is not considered part of the object, or b) store references to objects that are not part of the object state. For example, storing a reference to a mutex in an object, or a reference to an owner object. It's the difference between a "has a" relationship and a "points to" relationship. Your lazy loading idea does not help at all for these.
I believe that the two main complaints about the lack of logical const which have been coming up in the newsgroup have been the inability to cache the return values of member functions and the inability to lazily load the values of member variables. This is simply an attempt to solve the lazy loading portion of that problem.
Of course. My point was that your two use cases I think are far less used than the refer-to-other-object-but-don't-own-it cases. Maybe I'm wrong, it's been a long time since I used C++, and mutable in general. But I remember using it when I wanted to have a reference to something that was not part of the object.
 6. If the S being constructed is shared or immutable and __varProp is not
 called in the constructor, then __varProp is called immediately after the
 constructor (or at the end of the constructor if that works better for the
 compiler).
Why? What if the calculation is very expensive, and you never access var? Besides, we can already pro-actively initialize data in an immutable constructor, what is the benefit here?
The point is that if you aren't using immutable or shared, then you can afford to lazy load it, so you can wait to initialize the variable until it's used, but you _can't_ afford to do that in the case of immutable, because the data must be immutable and can't be changed later to do the lazy loading, and you can't afford to do that in the case of shared, because then you have thread-safety issues, so you have to pay the cost upfront in those cases.
It is only important that the value is constant *when it's read*, not at the beginning of the object existence. If you hook the only way to read it with a lazy initializer, then the two cases are indistinguishable, or using lazy initialization doesn't make sense. Let's think of the case where *two* threads are lazily initializing an immutable struct. None of the members can be any different, because they are immutable, so if the value depends solely on the internal struct data, then both initializers will set the *Same value*. Two competing threads writing the same value do not result in corruption.
The problem with immutable is that it could (at least in theory) go in read-only memory, so lazy initialization doesn't work for it _at all_.
 If the initializer depends on some *external state*, if that external
 state is also immutable, same result.
 
 If the initializer depends on some external state that is *not* immutable,
 then why mark it lazy initialization? What is the point of initializing
 data that depends on something that's changing over time? I can't see the
 point of doing that. This is of course, only if you can't restart the
 initialization (i.e. clear the 'set' flag).
 
 Note that if you want your proposed behavior you can achieve it by
 defining a constructor that eagerly initializes the variable by simply
 reading it.
 
 I still don't think this proposal (even one that always lazily
 initializes) gives enough benefit to be included. Why would you want a
 constant lazily-initialized value in a non-immutable struct? If this were
 to mean anything, there would have to be a way to clear the 'set' flag in
 a mutable struct.
People have been complaining about the lack of logical const. The two use cases that they seem to have been looking for are the ability to cache the results of member functions and to lazily load member variables. They want to be able to do those things with const and can't (in the case of Peter Alexander, he seems to have come to the conclusion that it's bad enough that he doesn't use const for anything not related to threading, though I do find that stance a bit odd, since it's _immutable_ that's needed for threading, not const). I was merely trying to present a solution to lazy loading that worked with const. It would therefore partially solve the issues that people have been complaining about.

Personally, I don't think that the feature merits the extra complexity that it incurs. I was just proposing a possible solution. And it seems that some folks (Peter in particular) don't think that it goes far enough to be of any real value anyway (he wants full-on caching, not lazy loading). Personally, I don't even remember the last time that I used lazy loading or caching in a type, so the feature was never intended for me anyway, and finding out why folks would find it useful would really require their input. But there _are_ people who have wanted lazy loading to work with const objects; they just also want general caching to work as well, which isn't at all feasible as far as I can tell, whereas lazy loading seems like it could be.

- Jonathan M Davis
Sep 26 2011
next sibling parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Mon, 26 Sep 2011 13:57:04 -0400, Jonathan M Davis <jmdavisProg gmx.com>  
wrote:

 On Monday, September 26, 2011 10:26 Steven Schveighoffer wrote:
 On Mon, 26 Sep 2011 12:12:30 -0400, Jonathan M Davis  
 <jmdavisProg gmx.com>

 wrote:
 The point is that if you aren't using immutable or shared, then you can
 afford to lazy load it, so you can wait to initialize the variable until
 it's used, but you _can't_ afford to do that in the case of immutable,
 because the data must be immutable and can't be changed later to do the
 lazy loading, and you can't afford to do that in the case of shared,
 because then you have thread-safety issues, so you have to pay the cost
 upfront in those cases.
It is only important that the value is constant *when it's read*, not at the beginning of the object existence. If you hook the only way to read it with a lazy initializer, then the two cases are indistinguishable, or using lazy initialization doesn't make sense. Let's think of the case where *two* threads are lazily initializing an immutable struct. None of the members can be any different, because they are immutable, so if the value depends solely on the internal struct data, then both initializers will set the *Same value*. Two competing threads writing the same value do not result in corruption.
The problem with immutable is that it could (at least in theory) go in read-only memory, so lazy initialization doesn't work for it _at all_.
That is not a good reason :) Of course, any lazy initializers would have to be called during compilation time if the variable is put into ROM! Any lazy initializer that doesn't run during CTFE would result in a compiler error.
 If the initializer depends on some *external state*, if that external
 state is also immutable, same result.

 If the initializer depends on some external state that is *not*  
 immutable,
 then why mark it lazy initialization? What is the point of initializing
 data that depends on something that's changing over time? I can't see  
 the
 point of doing that. This is of course, only if you can't restart the
 initialization (i.e. clear the 'set' flag).

 Note that if you want your proposed behavior you can achieve it by
 defining a constructor that eagerly initializes the variable by simply
 reading it.

 I still don't think this proposal (even one that always lazily
 initializes) gives enough benefit to be included. Why would you want a
 constant lazily-initialized value in a non-immutable struct? If this  
 were
 to mean anything, there would have to be a way to clear the 'set' flag  
 in
 a mutable struct.
People have been complaining about the lack of logical const. The two use cases that they seem to have been looking for are the ability to cache the results of member functions and to lazily load member variables. They want to be able to do those things with const and can't (in the case of Peter Alexander, he seems to have come to the conclusion that it's bad enough that he doesn't use const for anything not related to threading, though I do find that stance a bit odd, since it's _immutable_ that's needed for threading, not const). I was merely trying to present a solution to lazy loading that worked with const. It would therefore partially solve the issues that people have been complaining about.
Forgive me for objecting, but a lazy-initialization scheme that eagerly initializes isn't even a valid solution. I don't mean to be blunt, but I just can't put it any other way. I see that you intended it to work lazily for non-const non-immutable items, but what would be the point then? I can implement lazy initialization on mutable types today.
 Personally, I don't think that the feature merits the extra complexity
 that it incurs. I was just proposing a possible solution. And it seems
 that some folks (Peter in particular) don't think that it goes far enough
 to be of any real value anyway (he wants full-on caching, not lazy
 loading). Personally, I don't even remember the last time that I used
 lazy loading or caching in a type, so the feature was never intended for
 me anyway, and finding out why folks would find it useful would really
 require their input. But there _are_ people who have wanted lazy loading
 to work with const objects; they just also want general caching to work
 as well, which isn't at all feasible as far as I can tell, whereas lazy
 loading seems like it could be.
I think a better avenue would be to implement some sort of strong-pure memoization system. Then all you have to do is make an immutable pure member, and the compiler will take care of the rest for you. I think this only works for classes, however, since there is no place to put hidden memoization members. -Steve
Sep 26 2011
next sibling parent "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Mon, 26 Sep 2011 15:02:24 -0400, Steven Schveighoffer  
<schveiguy yahoo.com> wrote:


 I think a better avenue would be to implement some sort of strong-pure  
 memoization system.  Then all you have to do is make an immutable pure  
 member, and the compiler will take care of the rest for you.

 I think this only works for classes, however, since there is no place to  
 put hidden memoization members...
...in structs. -Steve
Sep 26 2011
prev sibling next sibling parent reply "Jonathan M Davis" <jmdavisProg gmx.com> writes:
On Monday, September 26, 2011 12:02 Steven Schveighoffer wrote:
 On Mon, 26 Sep 2011 13:57:04 -0400, Jonathan M Davis <jmdavisProg gmx.com>
 
 wrote:
 On Monday, September 26, 2011 10:26 Steven Schveighoffer wrote:
 On Mon, 26 Sep 2011 12:12:30 -0400, Jonathan M Davis
 <jmdavisProg gmx.com>
 
 wrote:
 The point is that if you aren't using immutable or shared, then you can
 afford to lazy load it, so you can wait to initialize the variable until
 it's used, but you _can't_ afford to do that in the case of immutable,
 because the data must be immutable and can't be changed later to do the
 lazy loading, and you can't afford to do that in the case of shared,
 because then you have thread-safety issues, so you have to pay the cost
 upfront in those cases.
It is only important that the value is constant *when it's read*, not at the beginning of the object existence. If you hook the only way to read it with a lazy initializer, then the two cases are indistinguishable, or using lazy initialization doesn't make sense. Let's think of the case where *two* threads are lazily initializing an immutable struct. None of the members can be any different, because they are immutable, so if the value depends solely on the internal struct data, then both initializers will set the *Same value*. Two competing threads writing the same value do not result in corruption.
The problem with immutable is that it could (at least in theory) go in read-only memory, so lazy initialization doesn't work for it _at all_.
That is not a good reason :) Of course, any lazy initializers would have to be called during compilation time if the variable is put into ROM! Any lazy initializer that doesn't run during CTFE would result in a compiler error.
 If the initializer depends on some *external state*, if that external
 state is also immutable, same result.
 
 If the initializer depends on some external state that is *not*
 immutable,
 then why mark it lazy initialization? What is the point of initializing
 data that depends on something that's changing over time? I can't see
 the
 point of doing that. This is of course, only if you can't restart the
 initialization (i.e. clear the 'set' flag).
 
 Note that if you want your proposed behavior you can achieve it by
 defining a constructor that eagerly initializes the variable by simply
 reading it.
 
 I still don't think this proposal (even one that always lazily
 initializes) gives enough benefit to be included. Why would you want a
 constant lazily-initialized value in a non-immutable struct? If this
 were
 to mean anything, there would have to be a way to clear the 'set' flag
 in
 a mutable struct.
People have been complaining about the lack of logical const. The two use cases that they seem to have been looking for are the ability cache the results of member functions and to lazily load member variables. They want to be able to do those things with const and can't (in the case of Peter Alexander, he seems to have come to the conclusion that it's bad enough that he doesn't use const for anything not related to threaings, though I do find that stance a bit odd, since it's _immutable_ that's needed for threading, not const). I was merely trying to present a solution to lazy loading that worked with const. It would therefore partially solve the issues that people have been complaining about.
Forgive me for objecting, but a lazy-initialization scheme that eagerly initializes isn't even a valid solution. I don't mean to be blunt, but I just can't put it any other way. I see that you intended it to work lazily for non-const non-immutable items, but what would be the point then? I can implement lazy initialization on mutable types today.
No, I meant it to work for _const_ items. The entire point was to enable lazy initialization in objects which are mutable but passed to a function as const. As it stands, a const function can't do any kind of lazy initialization, so if you want to have lazy initialization, you can't use const. If _all_ you care about is lazy initialization, yes, you can do it just fine right now. The _entire_ point was to get it to work with const. But since lazy initialization will _never_ work with immutable, it does eager initialization in that case, and yes, that reduces the value of the solution, but that's life with immutable. But you still gain the benefit of having the lazy initialization with const objects, which was the entire point of the proposal.

- Jonathan M Davis
Sep 26 2011
parent reply travert phare.normalesup.org (Christophe) writes:
"Jonathan M Davis" , dans le message (digitalmars.D:145479), a écrit :
 But since lazy initializion will _never_ work with immutable
Never say never. One could build a clean and thread-safe way to lazily initialize fields. Like someone said, even without mutex, if the function to compute the variable is pure and based on immutable data, the worst case would be to run the intializing function twice. And the langage could include mutexes to do prevent that to happen. The fact that immutable data could, one day, be put in ROM doesn't mean that it has to. The close issue is like another-one said the issue of memoization of pure methods. -- Christophe
Sep 28 2011
parent reply Jonathan M Davis <jmdavisProg gmx.com> writes:
On Wednesday, September 28, 2011 11:31:31 Christophe wrote:
 "Jonathan M Davis" wrote in message (digitalmars.D:145479):
  But since lazy initialization will _never_ work with immutable

 Never say never. One could build a clean and thread-safe way to lazily
 initialize fields. Like someone said, even without a mutex, if the
 function to compute the variable is pure and based on immutable data,
 the worst case would be to run the initializing function twice. And the
 language could include mutexes to prevent that from happening. The fact
 that immutable data could, one day, be put in ROM doesn't mean that it
 has to. The closely related issue is, like another one said, the issue
 of memoization of pure methods.
The very fact that an immutable variable _could_ be put into ROM negates the possibility of the semantics being such that you don't have to fully initialize an immutable variable up front.

- Jonathan M Davis
Sep 28 2011
parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Wed, 28 Sep 2011 16:03:51 -0400, Jonathan M Davis <jmdavisProg gmx.com> wrote:

 On Wednesday, September 28, 2011 11:31:31 Christophe wrote:
 "Jonathan M Davis" wrote in message (digitalmars.D:145479):
  But since lazy initialization will _never_ work with immutable

 Never say never. One could build a clean and thread-safe way to lazily
 initialize fields. Like someone said, even without a mutex, if the
 function to compute the variable is pure and based on immutable data,
 the worst case would be to run the initializing function twice. And the
 language could include mutexes to prevent that from happening. The fact
 that immutable data could, one day, be put in ROM doesn't mean that it
 has to. The closely related issue is, like another one said, the issue
 of memoization of pure methods.

 The very fact that an immutable variable _could_ be put into ROM negates
 the possibility of the semantics being such that you don't have to fully
 initialize an immutable variable up front.

No it doesn't. If it's in ROM, initialize eagerly (which should cost nothing, done at compile-time). If it's not, initialize lazily.

-Steve
Sep 28 2011
parent reply "Jonathan M Davis" <jmdavisProg gmx.com> writes:
On Wednesday, September 28, 2011 13:56 Steven Schveighoffer wrote:
 On Wed, 28 Sep 2011 16:03:51 -0400, Jonathan M Davis <jmdavisProg gmx.com>
 
 wrote:
 On Wednesday, September 28, 2011 11:31:31 Christophe wrote:
 "Jonathan M Davis" , dans le message (digitalmars.D:145479), a écrit :
 But since lazy initializion will _never_ work with immutable
Never say never. One could build a clean and thread-safe way to lazily initialize fields. Like someone said, even without a mutex, if the function to compute the variable is pure and based on immutable data, the worst case would be to run the initializing function twice. And the language could include mutexes to prevent that from happening. The fact that immutable data could, one day, be put in ROM doesn't mean that it has to. The closely related issue is, like another one said, the issue of memoization of pure methods.
The very fact that an immutable variable _could_ be put into ROM negates the possibility of the semantics being such that you don't have to fully initialize an immutable variable up front.
No it doesn't. If it's in ROM, initialize eagerly (which should cost nothing, done at compile-time). If it's not, initialize lazily.
That's assuming that objects can only ever be put into ROM at compile time. Is such an assumption valid? And not just currently valid, but permanently valid?

Though I suppose that even if that assumption isn't valid, the compiler should still know when it's putting something in ROM and force eager loading in that case. However, that does introduce the whole locking issue again, since immutable variables are implicitly shared.

- Jonathan M Davis
Sep 28 2011
parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Wed, 28 Sep 2011 17:09:59 -0400, Jonathan M Davis <jmdavisProg gmx.com> wrote:

 On Wednesday, September 28, 2011 13:56 Steven Schveighoffer wrote:
 On Wed, 28 Sep 2011 16:03:51 -0400, Jonathan M Davis <jmdavisProg gmx.com> wrote:
 On Wednesday, September 28, 2011 11:31:31 Christophe wrote:
 "Jonathan M Davis" wrote in message (digitalmars.D:145479):
  But since lazy initialization will _never_ work with immutable

 Never say never. One could build a clean and thread-safe way to lazily
 initialize fields. Like someone said, even without a mutex, if the
 function to compute the variable is pure and based on immutable data,
 the worst case would be to run the initializing function twice. And the
 language could include mutexes to prevent that from happening. The fact
 that immutable data could, one day, be put in ROM doesn't mean that it
 has to. The closely related issue is, like another one said, the issue
 of memoization of pure methods.

 The very fact that an immutable variable _could_ be put into ROM negates
 the possibility of the semantics being such that you don't have to fully
 initialize an immutable variable up front.

 No it doesn't. If it's in ROM, initialize eagerly (which should cost
 nothing, done at compile-time). If it's not, initialize lazily.

 That's assuming that objects can only ever be put into ROM at compile
 time. Is such an assumption valid? And not just currently valid, but
 permanently valid?

By definition ROM is read only. How can it be created during runtime if it's read-only? And if it's some hardware-based ROM, what makes it read/write during the constructor?

 Though I suppose that even if that assumption isn't valid, the compiler
 should still know when it's putting something in ROM and force eager
 loading in that case.

 However, that does introduce the whole locking issue again, since
 immutable variables are implicitly shared.

Locking issues are not present when all the data is immutable. It's why immutable is implicitly shared.

-Steve
Sep 29 2011
parent reply "Jonathan M Davis" <jmdavisProg gmx.com> writes:
On Thursday, September 29, 2011 04:36 Steven Schveighoffer wrote:
 On Wed, 28 Sep 2011 17:09:59 -0400, Jonathan M Davis <jmdavisProg gmx.com>
 
 wrote:
 On Wednesday, September 28, 2011 13:56 Steven Schveighoffer wrote:
 On Wed, 28 Sep 2011 16:03:51 -0400, Jonathan M Davis
 <jmdavisProg gmx.com>
 
 wrote:
 On Wednesday, September 28, 2011 11:31:31 Christophe wrote:
 "Jonathan M Davis" wrote in message (digitalmars.D:145479):
  But since lazy initialization will _never_ work with immutable

 Never say never. One could build a clean and thread-safe way to lazily
 initialize fields. Like someone said, even without a mutex, if the
 function to compute the variable is pure and based on immutable data,
 the worst case would be to run the initializing function twice. And the
 language could include mutexes to prevent that from happening. The fact
 that immutable data could, one day, be put in ROM doesn't mean that it
 has to. The closely related issue is, like another one said, the issue
 of memoization of pure methods.
 The very fact that an immutable variable _could_ be put into ROM negates
 the possibility of the semantics being such that you don't have to fully
 initialize an immutable variable up front.
No it doesn't. If it's in ROM, initialize eagerly (which should cost nothing, done at compile-time). If it's not, initialize lazily.
That's assuming that objects can only ever be put into ROM at compile time. Is such an assumption valid? And not just currently valid, but permanently valid?
By definition ROM is read only. How can it be created during runtime if it's read-only? And if it's some hardware-based ROM, what makes it read/write during the constructor?
I don't know. I'm not an expert on ROM. The main place that I've heard it being discussed as being actually used in D is the place in the program where the string literals go on Linux. But from the descriptions and explanations of immutable, it's always sounded to me like the compiler could choose to put something in ROM at runtime (which I guess would be more like WORM than ROM). So, I may be completely misunderstanding something about what could even theoretically be done by the compiler and ROM.
 Though I suppose that even if that assumption isn't valid, the compiler
 should
 still know when it's putting something in ROM and force eager loading in
 that case.
 
 However, that does introduce the whole locking issue again, since
 immutable
 variables are implicitly shared.
Locking issues are not present when all the data is immutable. It's why immutable is implicitly shared.
But locks _are_ an issue if you're doing lazy loading with immutable as you've suggested. The only way to do lazy loading that I'm aware of is to have a flag that says whether the data has been loaded or not. That flag is going to have to be changed from false to true when the loading is done, and without a lock, you could end up with two threads loading the value at the same time. So, you end up with locking. - Jonathan M Davis
Sep 29 2011
parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Thu, 29 Sep 2011 14:45:24 -0400, Jonathan M Davis <jmdavisProg gmx.com>

wrote:

 On Thursday, September 29, 2011 04:36 Steven Schveighoffer wrote:
 On Wed, 28 Sep 2011 17:09:59 -0400, Jonathan M Davis <jmdavisProg gmx.com>

 wrote:
 On Wednesday, September 28, 2011 13:56 Steven Schveighoffer wrote:
 On Wed, 28 Sep 2011 16:03:51 -0400, Jonathan M Davis
 <jmdavisProg gmx.com>

 wrote:
 On Wednesday, September 28, 2011 11:31:31 Christophe wrote:
 "Jonathan M Davis", in message (digitalmars.D:145479), wrote
 But since lazy initialization will _never_ work with immutable
Never say never. One could build a clean and thread-safe way to lazily
 initialize fields. Like someone said, even without mutex, if the
 function to compute the variable is pure and based on immutable data,
 the worst case would be to run the initializing function twice. And the
 language could include mutexes to prevent that from happening. The fact
 that immutable data could, one day, be put in ROM doesn't mean that it
 has to. The close issue is, like another one said, the issue of
 memoization of pure methods.
The very fact that an immutable variable _could_ be put into ROM negates
 the possibility of the semantics being such that you don't have to fully
 initialize an immutable variable up front.
No it doesn't. If it's in ROM, initialize eagerly (which should cost
 nothing, done at compile-time). If it's not, initialize lazily.
That's assuming that objects can only ever be put into ROM at compile
 time. Is such an assumption valid? And not just currently valid, but
 permanently valid?
By definition ROM is read only. How can it be created during runtime if
 it's read-only? And if it's some hardware-based ROM, what makes it
 read/write during the constructor?
I don't know. I'm not an expert on ROM. The main place that I've heard it
 being discussed as being actually used in D is the place in the program
 where the string literals go on Linux. But from the descriptions and
 explanations of immutable, it's always sounded to me like the compiler
 could choose to put something in ROM at runtime (which I guess would be
 more like WORM than ROM). So, I may be completely misunderstanding
 something about what could even theoretically be done by the compiler and
 ROM.
I can't really see a reason to do this. It certainly does not sound like something we need to cater to.
 Though I suppose that even if that assumption isn't valid, the compiler
 should still know when it's putting something in ROM and force eager
 loading in that case.

 However, that does introduce the whole locking issue again, since
 immutable
 variables are implicitly shared.
Locking issues are not present when all the data is immutable. It's why
 immutable is implicitly shared.
But locks _are_ an issue if you're doing lazy loading with immutable as
 you've suggested. The only way to do lazy loading that I'm aware of is to
 have a flag that says whether the data has been loaded or not. That flag
 is going to have to be changed from false to true when the loading is
 done, and without a lock, you could end up with two threads loading the
 value at the same time. So, you end up with locking.
If all the data the calculated value depends on is immutable, then the two threads loading the value at the same time will be loading the same value. If you're writing a 42 to an int from 2 threads, there is no deadlock or race issue. Writing a 42 over a 42 does not cause any problems. -Steve
Sep 29 2011
parent reply Jonathan M Davis <jmdavisProg gmx.com> writes:
On Thursday, September 29, 2011 15:05:56 Steven Schveighoffer wrote:
 If all the data the calculated value depends on is immutable, then the two
 threads loading the value at the same time will be loading the same
 value.  If you're writing a 42 to an int from 2 threads, there is no
 deadlock or race issue.  Writing a 42 over a 42 does not cause any
 problems.
An excellent point, but that's assuming that the data being used is all immutable, and that particular stipulation was not given previously. But if that stipulation is there, then you're right. Otherwise, the locking is still needed. - Jonathan M Davis
Sep 29 2011
parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Thu, 29 Sep 2011 15:23:18 -0400, Jonathan M Davis <jmdavisProg gmx.com>  
wrote:

 On Thursday, September 29, 2011 15:05:56 Steven Schveighoffer wrote:
 If all the data the calculated value depends on is immutable, then the  
 two
 threads loading the value at the same time will be loading the same
 value.  If you're writing a 42 to an int from 2 threads, there is no
 deadlock or race issue.  Writing a 42 over a 42 does not cause any
 problems.
An excellent point, but that's assuming that the data being used is all immutable, and that particular stipulation was not given previously. But if that stipulation is there, then you're right. Otherwise, the locking is still needed.
Well, the object itself is immutable. All that is needed is to ensure any static data used is also immutable. Wait, we have that -- pure functions :) So what if lazy initialization is allowed for immutable as long as the function being assigned from is pure? -Steve
Sep 29 2011
parent Jonathan M Davis <jmdavisProg gmx.com> writes:
On Thursday, September 29, 2011 16:01:05 Steven Schveighoffer wrote:
 On Thu, 29 Sep 2011 15:23:18 -0400, Jonathan M Davis <jmdavisProg gmx.com>
 
 wrote:
 On Thursday, September 29, 2011 15:05:56 Steven Schveighoffer wrote:
 If all the data the calculated value depends on is immutable, then the
 two
 threads loading the value at the same time will be loading the same
 value.  If you're writing a 42 to an int from 2 threads, there is no
 deadlock or race issue.  Writing a 42 over a 42 does not cause any
 problems.
An excellent point, but that's assuming that the data being used is all immutable, and that particular stipulation was not given previously. But if that stipulation is there, then you're right. Otherwise, the locking is still needed.
Well, the object itself is immutable. All that is needed is to ensure any static data used is also immutable. Wait, we have that -- pure functions :) So what if lazy initialization is allowed for immutable as long as the function being assigned from is pure?
As I said, there was no such stipulation in the original proposal or discussion, and without that stipulation, the assumption that two runs of the same initializer function will result in the same value does not necessarily hold, and a mutex will be required. But if we require that stipulation and therefore insist that the initializer be pure, then yes, we can avoid the mutex. Now, that's that much more limiting and makes the whole proposal that much less useful and therefore that much less worth the added complexity, but it could be that such a stipulation would be required for it to work. Regardless, I think that it's pretty clear that a mechanism for lazy loading such as I have proposed does not provide enough benefit to be worth the added complexity. And since it doesn't seem to appease the people who wanted something like it at all (since what they really want is full-on logical const), I don't see much point to it anyway. - Jonathan M Davis
Sep 29 2011
prev sibling parent reply Peter Alexander <peter.alexander.au gmail.com> writes:
On 26/09/11 8:02 PM, Steven Schveighoffer wrote:
 I think a better avenue would be to implement some sort of strong-pure
 memoization system. Then all you have to do is make an immutable pure
 member, and the compiler will take care of the rest for you.
How can the compiler possibly figure out the best way to cache things for you? Or have I misunderstood?
Sep 28 2011
parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Wed, 28 Sep 2011 20:11:51 -0400, Peter Alexander  
<peter.alexander.au gmail.com> wrote:

 On 26/09/11 8:02 PM, Steven Schveighoffer wrote:
 I think a better avenue would be to implement some sort of strong-pure
 memoization system. Then all you have to do is make an immutable pure
 member, and the compiler will take care of the rest for you.
How can the compiler possibly figure out the best way to cache things for you? Or have I misunderstood?
It would likely be some sort of tag. Like:

memoize pure int reallyTimeConsumingMethod() immutable

-Steve
Sep 29 2011
parent reply Peter Alexander <peter.alexander.au gmail.com> writes:
On 29/09/11 12:37 PM, Steven Schveighoffer wrote:
 On Wed, 28 Sep 2011 20:11:51 -0400, Peter Alexander
 <peter.alexander.au gmail.com> wrote:

 On 26/09/11 8:02 PM, Steven Schveighoffer wrote:
 I think a better avenue would be to implement some sort of strong-pure
 memoization system. Then all you have to do is make an immutable pure
 member, and the compiler will take care of the rest for you.
How can the compiler possibly figure out the best way to cache things for you? Or have I misunderstood?
It would likely be some sort of tag. Like:

memoize pure int reallyTimeConsumingMethod() immutable
That's the syntax, but what code would the compiler generate to do the memoization? A hash table of inputs to outputs? That seems really inefficient.
Sep 29 2011
parent "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Thu, 29 Sep 2011 13:50:54 -0400, Peter Alexander  
<peter.alexander.au gmail.com> wrote:

 On 29/09/11 12:37 PM, Steven Schveighoffer wrote:
 On Wed, 28 Sep 2011 20:11:51 -0400, Peter Alexander
 <peter.alexander.au gmail.com> wrote:

 On 26/09/11 8:02 PM, Steven Schveighoffer wrote:
 I think a better avenue would be to implement some sort of strong-pure
 memoization system. Then all you have to do is make an immutable pure
 member, and the compiler will take care of the rest for you.
How can the compiler possibly figure out the best way to cache things for you? Or have I misunderstood?
It would likely be some sort of tag. Like:

memoize pure int reallyTimeConsumingMethod() immutable
That's the syntax, but what code would the compiler generate to do the memoization? A hash table of inputs to outputs? That seems really inefficient.
It depends on the situation. The compiler/runtime is free to put it wherever it wants. If it's a class member function, I'd strongly suggest allocating extra space in the instance to store it there. -Steve
Sep 29 2011
prev sibling parent reply Peter Alexander <peter.alexander.au gmail.com> writes:
On 26/09/11 6:57 PM, Jonathan M Davis wrote:
 On Monday, September 26, 2011 10:26 Steven Schveighoffer wrote:
 I still don't think this proposal (even one that always lazily
 initializes) gives enough benefit to be included. Why would you want a
 constant lazily-initialized value in a non-immutable struct? If this were
 to mean anything, there would have to be a way to clear the 'set' flag in
 a mutable struct.
People have been complaining about the lack of logical const. The two use cases that they seem to have been looking for are the ability to cache the results of member functions and to lazily load member variables. They want to be able to do those things with const and can't (in the case of Peter Alexander, he seems to have come to the conclusion that it's bad enough that he doesn't use const for anything not related to threading, though I do find that stance a bit odd, since it's _immutable_ that's needed for threading, not const). I was merely trying to present a solution to lazy loading that worked with const. It would therefore partially solve the issues that people have been complaining about.
Yes, immutable is needed for concurrency, but const has the same restrictions: if something is const then you can't do lazy loading, internal caching, or anything else that you might want to do with logical const.

Given that:
(a) I may want/need to use logical const,
(b) I want my code to be const-correct, and
(c) const is incredibly viral in D

That gives me no choice but to not use const in any situation where I'm not 100% certain that logical const won't be needed in the future. Otherwise I may end up trapping myself, forced to resort to undefined behaviour, or restructure the use of const in a large amount of my code. I've had to do this even in large C++ code bases more than once, and it's not fun. With D it would be a lot worse, and that's something I'd like to avoid.
 Personally, I don't think that the feature merits the extra complexity that it
 incurs. I was just proposing a possible solution. And it seems that some folks
 (Peter in particular) don't think that it goes far enough to be of any real
 value anyway (he wants full-on caching, not lazy loading). Personally, I don't
 even remember the last time that I used lazy loading or caching in a type, so
 the feature was never intended for me anyway, and finding out why folks would
 find it useful would really require their input. But there _are_ people who
 have wanted lazy loading to work with const objects; they just also want
 generally caching to work as well, which isn't at all feasible as far as I can
 tell, whereas lazy loading seems like it could be.
You're right, it isn't feasible. Enforcing logical const is intractable.
Sep 28 2011
parent Jonathan M Davis <jmdavisProg gmx.com> writes:
On Thursday, September 29, 2011 01:08:31 Peter Alexander wrote:
 On 26/09/11 6:57 PM, Jonathan M Davis wrote:
 On Monday, September 26, 2011 10:26 Steven Schveighoffer wrote:
 I still don't think this proposal (even one that always lazily
 initializes) gives enough benefit to be included. Why would you want a
 constant lazily-initialized value in a non-immutable struct? If this
 were to mean anything, there would have to be a way to clear the
 'set' flag in a mutable struct.
People have been complaining about the lack of logical const. The two use cases that they seem to have been looking for are the ability to cache the results of member functions and to lazily load member variables. They want to be able to do those things with const and can't (in the case of Peter Alexander, he seems to have come to the conclusion that it's bad enough that he doesn't use const for anything not related to threading, though I do find that stance a bit odd, since it's _immutable_ that's needed for threading, not const). I was merely trying to present a solution to lazy loading that worked with const. It would therefore partially solve the issues that people have been complaining about.
Yes, immutable is needed for concurrency, but const has the same restrictions: if something is const then you can't do lazy loading, internal caching, or anything else that you might want to do with logical const.

Given that:
(a) I may want/need to use logical const,
(b) I want my code to be const-correct, and
(c) const is incredibly viral in D

That gives me no choice but to not use const in any situation where I'm not 100% certain that logical const won't be needed in the future. Otherwise I may end up trapping myself, forced to resort to undefined behaviour, or restructure the use of const in a large amount of my code. I've had to do this even in large C++ code bases more than once, and it's not fun. With D it would be a lot worse, and that's something I'd like to avoid.
One suggestion, which you may or may not have thought of and may or may not be acceptable, would be to declare both const and non-const versions of member functions where the non-const version does caching, and the const version uses the cached value if there is one and does the calculation if there isn't (or if it's dirty). It's not perfect, since you could still end up with extra cost in situations where you end up having to call the function multiple times when the variable is const or even just where the first call after the variable has been changed (and the cached value made dirty) is when the variable is const. But it would allow you to use const in many more cases and still have caching. So, depending on the situation, it could very well solve your problem in at least the general case. - Jonathan M Davis
Sep 28 2011
prev sibling parent reply "Marco Leise" <Marco.Leise gmx.de> writes:
On 26.09.2011, 18:12, Jonathan M Davis <jmdavisProg gmx.com> wrote:

 On Monday, September 26, 2011 08:01:29 Steven Schveighoffer wrote:
 For example, storing a reference to a mutex in an object, or a reference
 to an owner object.  It's the difference between a "has a" relationship
 and a "points to" relationship.

 Your lazy loading idea does not help at all for these.
I believe that the two main complaints about the lack of logical const which have been coming up in the newsgroup have been the inability to cache the return values of member functions and the inability to lazily load the values of member variables.
I brought the issue with "points to" relationships up in another thread here - also about transitive const. I believe a proper object-oriented design should allow for pointers to other objects that have nothing in common with the host object. car.getVendor() for example should always return the mutable instance of the shop where that - possibly const car - was bought. Let's call it the "parent reference" issue for short :) +1 to Steven's comment.
Sep 29 2011
parent reply Peter Alexander <peter.alexander.au gmail.com> writes:
On 29/09/11 8:13 AM, Marco Leise wrote:
 On 26.09.2011, 18:12, Jonathan M Davis <jmdavisProg gmx.com> wrote:

 On Monday, September 26, 2011 08:01:29 Steven Schveighoffer wrote:
 For example, storing a reference to a mutex in an object, or a reference
 to an owner object. It's the difference between a "has a" relationship
 and a "points to" relationship.

 Your lazy loading idea does not help at all for these.
I believe that the two main complaints about the lack of logical const which have been coming up in the newsgroup have been the inability to cache the return values of member functions and the inability to lazily load the values of member variables.
I brought the issue with "points to" relationships up in another thread here - also about transitive const. I believe a proper object-oriented design should allow for pointers to other objects that have nothing in common with the host object. car.getVendor() for example should always return the mutable instance of the shop where that - possibly const car - was bought. Let's call it the "parent reference" issue for short :) +1 to Steven's comment.
Why would a car be able to mutate its vendor?

Also, the problem with mutable parent references is that it completely defeats the purpose of const. Consider this C++ code:

struct Tree
{
    Tree* m_parent;
    std::vector<Tree> m_children;

    void doSomethingConst() const;
};

The tree owns its children, and has a mutable reference back to its parent. Inside doSomethingConst, I shouldn't be able to modify my children because I own them; however, you are arguing that I should be able to get a mutable reference to my parent. But if I can get a mutable reference to my parent, then I can get mutable references to its children, of which (this) is one. So, through my parent I can get a mutable reference to myself, circumventing the const, making it pointless.

Strict transitive const is essential. Without it, it becomes far too easy to wriggle your way out of const, which breaks the guarantees of immutable.
Sep 29 2011
next sibling parent Gor Gyolchanyan <gor.f.gyolchanyan gmail.com> writes:
Immutable is a guarantee, that no mutable reference can be obtained on
the object from any reference (I don't take into account casting it
away, because that's not guaranteed to make the object actually
mutable).
Const is a guarantee that no mutable reference can be obtained from
this particular reference.
The key difference is that const implies the general mutability of
the object, which is just not accessible through that reference.
Const actually is a logical const. Immutable, on the other hand, is a
physical const.
I'm not sure if const-defined objects are potentially stored in ROM
(in contrast with immutable-defined ones), but if they are, then they
shouldn't be, since ROM is purely immutable's domain.
Const should in fact be transitive, because if you can't change
something, you can't change its parts either.
Another question is that casting away const should be 100% defined
behavior and result in a 100% valid mutable reference, in contrast to
immutable.
Some classes, for example, may be designed to mutate due to some
caching, reference counting or similar mandatory-mutable stuff.
In which case the class should be able to obtain a mutable reference to
itself. Immutable classes, however, should not be able to, because
synchronization depends on physical immutability of the data.
In light of that, one should be able to define specializations of
classes based on their mutability level (kind of class overloading):

class MyCachingClass
{
/// No const or immutable shenanigans.
/// The class itself is mutable.
}

const class MyCachingClass
{
/// Some minor internal tricks with const.
/// The class itself is const.
}

immutable class MyCachingClass
{
/// No caching, no reference counting.
/// The class itself is completely immutable to anyone.
}

void main()
{
    auto mcc1 = new MyCachingClass;
    auto mcc2 = new const(MyCachingClass);
    auto mcc3 = new immutable(MyCachingClass);
}

The whole point of having const and immutable simultaneously is
because const is purely logical and immutable is physical.
And const's logic can be changed when necessary, while immutable's cannot.

Cheers,
Gor.

On Thu, Sep 29, 2011 at 12:12 PM, Peter Alexander
<peter.alexander.au gmail.com> wrote:
 On 29/09/11 8:13 AM, Marco Leise wrote:
 On 26.09.2011, 18:12, Jonathan M Davis <jmdavisProg gmx.com> wrote:
 On Monday, September 26, 2011 08:01:29 Steven Schveighoffer wrote:
 For example, storing a reference to a mutex in an object, or a reference
 to an owner object. It's the difference between a "has a" relationship
 and a "points to" relationship.

 Your lazy loading idea does not help at all for these.
I believe that the two main complaints about the lack of logical const which have been coming up in the newsgroup have been the inability to cache the
 return values of member functions and the inability to lazily load the
 values of member variables.
I brought the issue with "points to" relationships up in another thread here - also about transitive const. I believe a proper object-oriented design should allow for pointers to other objects that have nothing in common with the host object. car.getVendor() for example should always return the mutable instance of the shop where that - possibly const car - was bought. Let's call it the "parent reference" issue for short :) +1 to Steven's comment.
Why would a car be able to mutate its vendor?

Also, the problem with mutable parent references is that it completely
defeats the purpose of const. Consider this C++ code:

struct Tree
{
    Tree* m_parent;
    std::vector<Tree> m_children;

    void doSomethingConst() const;
};

The tree owns its children, and has a mutable reference back to its parent.
 Inside doSomethingConst, I shouldn't be able to modify my children because I
 own them; however, you are arguing that I should be able to get a mutable
 reference to my parent. But if I can get a mutable reference to my
 parent, then I can get mutable references to its children, of which (this)
 is one. So, through my parent I can get a mutable reference to myself,
 circumventing the const, making it pointless.

 Strict transitive const is essential. Without it, it becomes far too easy
 to wriggle your way out of const, which breaks the guarantees of immutable.

Sep 29 2011
prev sibling next sibling parent "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Thu, 29 Sep 2011 04:12:01 -0400, Peter Alexander  
<peter.alexander.au gmail.com> wrote:

 On 29/09/11 8:13 AM, Marco Leise wrote:
 Am 26.09.2011, 18:12 Uhr, schrieb Jonathan M Davis  
 <jmdavisProg gmx.com>:

 On Monday, September 26, 2011 08:01:29 Steven Schveighoffer wrote:
 For example, storing a reference to a mutex in an object, or a  
 reference
 to an owner object. It's the difference between a "has a" relationship
 and a "points to" relationship.

 Your lazy loading idea does not help at all for these.
I believe that the two main complaints about the lack of logical const which have been coming up in the newsgroup have been the inability to cache the return values of member functions and the inability to lazily load the values of member variables.
I brought the issue with "points to" relationships up in another thread here - also about transitive const. I believe a proper object-oriented design should allow for pointers to other objects that have nothing in common with the host object. car.getVendor() for example should always return the mutable instance of the shop where that - possibly const car - was bought. Let's call it the "parent reference" issue for short :) +1 to Steven's comment.
Why would a car be able mutate it's vendor?
Because a vendor is not part of a car. It's a link.

In logical const land, the point of the piece of logical const data is that it's not part of the object's state. It's a piece of data *related* to the object, but not *part* of the object.

My favorite example is a Widget and a window to draw the widget in. The primitives for drawing in the window obviously cannot be const, because they alter the window. However, it makes complete sense that when drawing an object, you don't want its state to change. So how to define the widget?

Window w;
const draw();

This doesn't work, because the window is a part of the widget state, and therefore const. So you can't draw anything in the window inside the draw function. There are two possible solutions. First is, just make it not const. But then you lose any guarantees that the compiler gives you about not having the object state change. Second option is to pass the window into the widget:

const draw(Window w);

But then, it becomes cumbersome to drag around a window reference everywhere you have a Widget reference.

What logical const does is allocate space that is *associated* with the object, but not *owned* by the object. It actually doesn't matter if it lives in the object's block or not.
 Also, the problem with mutable parent references is that it completely  
 defeats the purpose of const. Consider this C++ code:

 struct Tree
 {
      Tree* m_parent;
      std::vector<Tree> m_children;

      void doSomethingConst() const;
 };

 The tree owns it's children, and has a mutable reference back to its  
 parent.

 Inside doSomethingConst, I shouldn't be able to modify my children  
 because I own them, however, you are arguing that I should be able to  
 get a mutable reference to my parent. However, if I can get a mutable  
 reference to my parent, then I can get mutable references to its  
 children, of which (this) is one. So, through my parent I can get a  
 mutable reference to myself, circumventing the const, making it  
 pointless.
This is a tricky situation. I think logical const only makes sense if there are no cycles. That is, in my widget example, the window doesn't know about the widget, and does not have a pointer back to it. I think in your example, the parent should be part of the tree state, since it contains itself. Therefore, logical const should not be used. Such a thing is impossible to prove by the compiler, so logical const does break the compiler guarantees. There is no way to allow logical const in the case of the widget, and not allow it in the case of the Tree struct.
 Strict transitive const is essential. Without it, it becomes far too
 easy to wriggle your way out of const, which breaks the guarantees of
 immutable.
This is not true. An immutable object is never mutable, and could never be assigned to a mutable reference, even if that reference was inside a logically const object. Logical const only comes into play when you are talking about mutable pieces that are temporarily cast to const. -Steve
Sep 29 2011
prev sibling parent "Marco Leise" <Marco.Leise gmx.de> writes:
Am 29.09.2011, 10:12 Uhr, schrieb Peter Alexander  
<peter.alexander.au gmail.com>:

 Why would a car be able mutate it's vendor?
A car is a passive object, that's obvious. Of course it doesn't mutate the vendor, and if it did, *mutate* doesn't mean the vendor can be robbed by the car or something. The vendor class still has its encapsulation and invariants. My point here is that it is just the straightforward way to get that information about a car. The vendor itself doesn't need to have a reference to the car any more. It comes in handy if I look at the car of my neighbor and want to buy from the same vendor whose address is printed on the license plate. The fastest way to implement that in a computer is a simple mutable pointer. Mutable, because when I buy from the vendor it has to change its internal state.

The next best solution I can imagine is an external meta-data table for cases like this. It would be a hash map that holds a CarMetaData struct for each car pointer, with whatever is not part of the state of a car in the close sense.
Sep 29 2011
prev sibling parent reply Kagamin <spam here.lot> writes:
Jonathan M Davis Wrote:

 Some programmers have expressed annoyance and/or disappointment that there is 
 no logical const of any kind in D.
isn't it trivial?

void setConst(T1,T2)(ref T1 dst, in T2 src)
{
    *cast()&dst = src;
}

const int v;
setConst(v, 5);
assert(v == 5);
Sep 28 2011
parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Wed, 28 Sep 2011 10:52:24 -0400, Kagamin <spam here.lot> wrote:

 Jonathan M Davis Wrote:

 Some programmers have expressed annoyance and/or disappointment that  
 there is
 no logical const of any kind in D.
 isn't it trivial?

 void setConst(T1,T2)(ref T1 dst, in T2 src)
 {
     *cast()&dst = src;
 }

 const int v;
 setConst(v, 5);
 assert(v == 5);
Trivial, and also undefined behavior ;) -Steve
Sep 28 2011
parent reply Kagamin <spam here.lot> writes:
Steven Schveighoffer Wrote:

 Trivial, and also undefined behavior ;)
Well, it gets the job done. If you don't like it, don't use logical const.
Sep 28 2011
parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Wed, 28 Sep 2011 11:55:01 -0400, Kagamin <spam here.lot> wrote:

 Steven Schveighoffer Wrote:

 Trivial, and also undefined behavior ;)
 Well, it gets the job done. If you don't like it, don't use logical const.
It's not a matter of whether I like it or not. Undefined behavior means anything can happen: the application could crash, memory could become corrupted.

That said, I feel like labelling this as "undefined behavior" is extreme in this area. I think that with the right implementation (and the right preconditions), it should be possible to safely cast away const and modify the data, and it's worth trying to do. All I was saying is that the language is not going to help you out on that. -Steve
Sep 28 2011
next sibling parent "Martin Nowak" <dawg dawgfoto.de> writes:
On Wed, 28 Sep 2011 18:00:39 +0200, Steven Schveighoffer  
<schveiguy yahoo.com> wrote:

 On Wed, 28 Sep 2011 11:55:01 -0400, Kagamin <spam here.lot> wrote:

 Steven Schveighoffer Wrote:

 Trivial, and also undefined behavior ;)
 Well, it gets the job done. If you don't like it, don't use logical const.
 It's not a matter of whether I like it or not. Undefined behavior means anything can happen: the application could crash, memory could become corrupted. That said, I feel like labelling this as "undefined behavior" is extreme in this area. I think that with the right implementation (and the right preconditions), it should be possible to safely cast away const and modify the data, and it's worth trying to do. All I was saying is that the language is not going to help you out on that. -Steve
Const is the union of mutable/immutable; both can be implicitly converted to const. This means one can't distinguish ROM data from mutable data based on having const. Because immutable data is convertible to const, const is a promise not to change the data (while immutable is the promise that the data won't change). Actually, if one thinks of const declarations as immutable data being implicitly converted to const, then there is only mutable or immutable data, with const being the polymorphic container.

Now an issue arises when the programmer chooses to only use the mutable subset to avoid accidental changes. The logical const idiom is used to do a reasonable change of data. This conflicts with the language providing const to hold mutable and immutable data. What is needed here is a construct that can hold only mutable data.

A simple lvalue wrapper solves most of the problem. It provides the necessary safety and adds an explicit, safe hole to do changes.

struct Mutable(T) if(!is(T == const) && !is(T == immutable))
{
     // enforce construction with mutable value at runtime
     this(T t) { if (__ctfe) assert(0); _t = t; }

     // default access same as own qualifier
     @property ref T get() { return _t; }
     @property ref const(T) get() const { return _t; }
     @property ref immutable(T) get() immutable { return _t; }
     alias get this;

     // privileged access through explicit/self-documenting method
     @property ref T rvalue() const { return *cast(T*)&_t; }

private:
     // mutable value
     T _t;
}

What would be nice from the language side is allowing to write '@disable this() immutable'.

martin
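For illustration, here is how such a wrapper composes with a logically const method (Widget, calls, and compute are hypothetical names; the Mutable shown is a condensed copy of Martin's struct, omitting the immutable overload):

```d
struct Mutable(T) if (!is(T == const) && !is(T == immutable))
{
    this(T t) { if (__ctfe) assert(0); _t = t; }

    @property ref T get() { return _t; }
    @property ref const(T) get() const { return _t; }
    alias get this;

    // privileged access: only safe because _t is known to be mutable
    @property ref T rvalue() const { return *cast(T*)&_t; }

private:
    T _t;
}

struct Widget
{
    Mutable!int calls;   // bookkeeping, not part of the logical state

    int compute() const  // logically const: observable state untouched
    {
        calls.rvalue += 1;   // explicit, self-documenting mutation
        return 42;
    }
}

void main()
{
    auto w = Widget(Mutable!int(0));
    w.compute();
    w.compute();
    assert(w.calls == 2);   // read back through the alias-this getter
}
```

The mutation site is spelled out as `rvalue`, so every hole punched through const is visible and greppable, unlike a blanket cast.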
Sep 28 2011
prev sibling next sibling parent "Martin Nowak" <dawg dawgfoto.de> writes:
On Wed, 28 Sep 2011 21:58:20 +0200, Martin Nowak <dawg dawgfoto.de> wrote:

 On Wed, 28 Sep 2011 18:00:39 +0200, Steven Schveighoffer  
 <schveiguy yahoo.com> wrote:

 On Wed, 28 Sep 2011 11:55:01 -0400, Kagamin <spam here.lot> wrote:

 Steven Schveighoffer Wrote:

 Trivial, and also undefined behavior ;)
 Well, it gets the job done. If you don't like it, don't use logical const.
 It's not a matter of whether I like it or not. Undefined behavior means anything can happen: the application could crash, memory could become corrupted. That said, I feel like labelling this as "undefined behavior" is extreme in this area. I think that with the right implementation (and the right preconditions), it should be possible to safely cast away const and modify the data, and it's worth trying to do. All I was saying is that the language is not going to help you out on that. -Steve
 Const is the union of mutable/immutable; both can be implicitly converted to const. This means one can't distinguish ROM data from mutable data based on having const. Because immutable data is convertible to const, const is a promise not to change the data (while immutable is the promise that the data won't change). Actually, if one thinks of const declarations as immutable data being implicitly converted to const, then there is only mutable or immutable data, with const being the polymorphic container.

 Now an issue arises when the programmer chooses to only use the mutable subset to avoid accidental changes.
...to wrap mutable data with const to avoid accidental changes.
 The logical const idiom is used to do a reasonable change of data.
 This conflicts with the language providing const to hold mutable and  
 immutable data.

 What is needed here is a construct that can hold only mutable data.

 A simple lvalue wrapper solves most of the problem.
 It provides the necessary safety and adds an explicit, safe hole
 to do changes.

 struct Mutable(T) if(!is(T == const) && !is(T == immutable))
 {
      // enforce construction with mutable value at runtime
      this(T t) { if (__ctfe) assert(0); _t = t; }

      // default access same as own qualifier
       @property ref T get() { return _t; }
       @property ref const(T) get() const { return _t; }
       @property ref immutable(T) get() immutable { return _t; }
      alias get this;

      // privileged access through explicit/self-documenting method
       @property ref T rvalue() const { return *cast(T*)&_t; }

 private:
      // mutable value
      T _t;
 }

 What would be nice from the language side is allowing to write '@disable this() immutable'.

 martin
Sep 28 2011
prev sibling parent Kagamin <spam here.lot> writes:
Steven Schveighoffer Wrote:

 I think it's worth trying to do.  All I was saying is, the language is not  
 going to help you out on that.
As you can see, language support is not required to implement logical const anyway.
Sep 29 2011