digitalmars.D - D's treatment of values versus side-effect free nullary functions

reply Justin Johansson <no spam.com> writes:
Perhaps this is a philosophical question, like what is time, though it
is also a novice programmer question.  The question, exactly stated,
is:

"What is the difference between a (constant/immutable) value and the
result of calling a side-effect free function that returns the same?"

May I please petition respondents not to answer in terms of PL syntax.
That is, this question is not about optional parentheses which some
PLs use to make values and nullary function calls look syntactically
alike.

Going beyond syntax and into semantics, does the D language have
any special "opinion" or idiom relating to constant/immutable
values versus nullary function calls?

From my observation, the Scala PL community has had similar questions
posed in the past (about the difference between vals and nullary
functions) and their answer has largely given in to the constraints
imposed upon Scala's implementation and semantics by what is
implementable on the Java Virtual Machine (JVM).

Thanks for answers,
Justin Johansson
Jul 20 2010
next sibling parent reply Jonathan M Davis <jmdavisprog gmail.com> writes:
On Tuesday, July 20, 2010 06:36:57 Justin Johansson wrote:
 Perhaps this is a philosophical question, like what is time, though it
 is also a novice programmer question.  The question, exactly stated,
 is:
 
 "What is the difference between a (constant/immutable) value and the
 result of calling a side-effect free function that returns the same?"
 
 May I please petition respondents not to answer in terms of PL syntax.
 That is, this question is not about optional parentheses which some
 PLs use to make values and nullary function calls look syntactically
 alike.
 
 Going beyond syntax and into semantics, does the D language have
 any special "opinion" or idiom relating to constant/immutable
 values versus nullary function calls?
 
 From my observation, the Scala PL community has had similar questions
 posed in the past (about the difference between vals and nullary
 functions) and their answer has largely given in to the constraints
 imposed upon Scala's implementation and semantics by what is
 implementable on the Java Virtual Machine (JVM).
 
 Thanks for answers,
 Justin Johansson
1. A const variable is a chunk of memory which can be read as a particular
type but not written to in that context. Optimizations may remove it in some
cases.

2. An immutable variable is a chunk of memory which can be read as a
particular type and cannot be written to in any context. Optimizations may
remove it in some cases (and likely more cases than with const).

3. A manifest constant using an enum is a value of a particular type which is
reused throughout the code without being an addressable chunk of memory
(though it may still actually have an address depending on what the compiler
does - you just can't address it yourself). The value never changes and
cannot be changed.

4. A pure function which takes no parameters and simply returns a value of a
particular type is not a chunk of memory. It's a procedure at a certain
location in memory, which is therefore addressable for the purposes of
calling it (or passing it around to be called). When it is run, it produces a
value, and it will always produce the same value regardless of when it is
run. Optimizations may remove the call in some cases and simply re-use the
result. Also, the function could be used in CTFE to produce the value at
compile time and just use that value instead of calling the function at
runtime. Depending on how the function is used and how much optimizing the
compiler does, the function may never get called at runtime.

But it is ultimately a function rather than a variable, which means that it
is used as a function, including what that means for addressing it. So, there
_will_ be cases where it cannot be optimized out (like if you're passing
around a pointer to it and calling it through the pointer). It is _not_ a
variable. It may get optimized to the point that it's as efficient as a
variable, but then again, it might not be. And there are cases where it acts
very much like the function that it is.

- Jonathan M Davis
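
To make the four cases concrete, here is a minimal D sketch (the declaration
names are illustrative only):

import std.stdio;

// 1. const variable: storage this context may read but not write.
const int a = 10;

// 2. immutable variable: storage no context may ever write.
immutable int b = 20;

// 3. manifest constant: a compile-time value with no storage of its own.
enum int c = 30;

// 4. pure nullary function: code that always yields the same value.
int d() pure { return 40; }

void main()
{
    writeln(a, " ", b, " ", c, " ", d());
    // &a and &b compile; &c is an error (no addressable storage);
    // &d yields a function pointer, not a pointer to a value.
}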
Jul 20 2010
next sibling parent Justin Johansson <no spam.com> writes:
Jonathan M Davis wrote:
 On Tuesday, July 20, 2010 06:36:57 Justin Johansson wrote:
 Perhaps this is a philosophical question, like what is time, though it
 is also a novice programmer question.  The question, exactly stated,
 is:

 "What is the difference between a (constant/immutable) value and the
 result of calling a side-effect free function that returns the same?"

 May I please petition respondents not to answer in terms of PL syntax.
 That is, this question is not about optional parentheses which some
 PLs use to make values and nullary function calls look syntactically
 alike.

 Going beyond syntax and into semantics, does the D language have
 any special "opinion" or idiom relating to constant/immutable
 values versus nullary function calls?

 From my observation, the Scala PL community has had similar questions
 posed in the past (about the difference between vals and nullary
 functions) and their answer has largely given in to the constraints
 imposed upon Scala's implementation and semantics by what is
 implementable on the Java Virtual Machine (JVM).

 Thanks for answers,
 Justin Johansson
 1. A const variable is a chunk of memory which can be read as a particular
 type but not written to in that context. Optimizations may remove it in some
 cases.

 2. An immutable variable is a chunk of memory which can be read as a
 particular type and cannot be written to in any context. Optimizations may
 remove it in some cases (and likely more cases than with const).

 3. A manifest constant using an enum is a value of a particular type which is
 reused throughout the code without being an addressable chunk of memory
 (though it may still actually have an address depending on what the compiler
 does - you just can't address it yourself). The value never changes and
 cannot be changed.

 4. A pure function which takes no parameters and simply returns a value of a
 particular type is not a chunk of memory. It's a procedure at a certain
 location in memory, which is therefore addressable for the purposes of
 calling it (or passing it around to be called). When it is run, it produces a
 value, and it will always produce the same value regardless of when it is
 run. Optimizations may remove the call in some cases and simply re-use the
 result. Also, the function could be used in CTFE to produce the value at
 compile time and just use that value instead of calling the function at
 runtime. Depending on how the function is used and how much optimizing the
 compiler does, the function may never get called at runtime.

 But it is ultimately a function rather than a variable, which means that it
 is used as a function, including what that means for addressing it. So, there
 _will_ be cases where it cannot be optimized out (like if you're passing
 around a pointer to it and calling it through the pointer). It is _not_ a
 variable. It may get optimized to the point that it's as efficient as a
 variable, but then again, it might not be. And there are cases where it acts
 very much like the function that it is.

 - Jonathan M Davis
Thanks for your time in constructing this well-written, well-considered and detailed answer :-) Justin
Jul 20 2010
prev sibling parent reply Justin Johansson <no spam.com> writes:
Jonathan M Davis wrote:
 Also, the function could be used in CTFE to produce the value at
 compile time and just use that value instead of calling the function at
 runtime. Depending on how the function is used and how much optimizing
 the compiler does, the function may never get called at runtime.
Thanks again. This bit of insight into CTFE was particularly useful to me. :-)
Jul 20 2010
parent reply Jonathan M Davis <jmdavisprog gmail.com> writes:
On Tuesday, July 20, 2010 14:28:16 Justin Johansson wrote:
 Jonathan M Davis wrote:
 Also, the
 function could be used in CTFE to produce the value at compile time and
 just use that value instead of calling the function at runtime.
 Depending on how the function is used and how much optimizing the
 compiler does, the function may never get called at runtime.
Thanks again. This bit of insight into CTFE was particularly useful to me. :-)
Glad to be of help. I would point out though that CTFE is in its infancy at
this point. I believe that the goal is to have the entirety of SafeD work
with CTFE, and likely to use it in optimizations quite a bit. However, what
can be used in CTFE is currently rather limited. You also generally have to
force the issue a bit by using it in places where the value _must_ be known
at compile time (such as enum values, member variable initializations,
etc.). I'm not sure that CTFE gets done at this point if it doesn't need to
be. It's certainly a goal though for it to be much more expansive.

And pure functions are a prime candidate for CTFE since they don't rely on
global state at all (pure functions with no parameters even more so, since
there's never a case where you don't know one or more of their parameters at
compile time). So, how much CTFE does for you depends on the current state
of the compiler (and likely whether you're compiling with optimizations
turned on), but it definitely has the capacity to run functions - especially
pure ones - at compile time such that they may never get called at runtime.

- Jonathan M Davis
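
A small illustration of "forcing the issue" (square is a made-up example,
but the mechanism is standard D):

int square(int x) pure { return x * x; }

void main()
{
    // An enum initializer must be known at compile time, so this
    // line forces square to be evaluated by CTFE:
    enum e = square(21);

    // An ordinary local initialization carries no such requirement;
    // whether it gets folded anyway is up to the optimizer:
    int r = square(21);
}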
Jul 20 2010
parent reply bearophile <bearophileHUGS lycos.com> writes:
Jonathan M Davis:
 However, what can be used in CTFE is currently rather limited.
Currently its main limitation is the amount of memory it burns. I think Don will eventually remove this problem too, even if this requires a half rewrite of the whole thing :-)
 You also generally have to force the issue a bit by using it in places
 where the value _must_ be known at compile-time (such as enum values,
 member variable initializations, etc.). I'm not sure that CTFE gets
 done at this point if it doesn't need to be.
Currently CTFE is performed only where explicitly asked for. DMD doesn't currently perform partial compilation (except the kind of manual compilation done with constant template arguments).
 So, how much CTFE does for you depends on the current state of the
 compiler (and likely whether you're compiling with optimizations turned
 on), but it definitely has the capacity to run functions - especially
 pure ones - at compile time such that they may never get called at
 runtime.
As far as I know, currently optimization level doesn't influence CTFE. Bye, bearophile
Jul 20 2010
parent reply Jonathan M Davis <jmdavisprog gmail.com> writes:
On Tuesday, July 20, 2010 15:49:56 bearophile wrote:
 
 As far as I know, currently optimization level doesn't influence CTFE.
Well, like I said, it depends on what exactly the compiler does, and as far
as I know, it only does CTFE when it has to for initialization. But
theoretically, it could be used for optimizations just like inlining
(particularly with pure functions), and that being the case, it probably
will at some point. Not knowing exactly what the compiler does right now, I
don't _know_ that it doesn't use CTFE for optimizations, but I would expect
that you're right and it doesn't currently. Hopefully that will eventually
happen though.

- Jonathan M Davis
Jul 20 2010
parent reply bearophile <bearophileHUGS lycos.com> writes:
Jonathan M Davis:
 But theoretically, it 
 could be used for optimizations just like inlining (particularly with pure 
 functions), and that being the case, it probably will at some point.
Pure functions go well with CTFE because they don't use global state, but
you can run code at compile time only if all the arguments are known at
compile time :-) If some of their arguments are known at compile time and
some of them are not, and you want to optimize better, then you need partial
compilation, which is a bit hard to implement and probably makes the
compiler slow and complex.

So, for a practical form of manual partial compilation, some time ago I
proposed something similar to GCC's __builtin_constant_p
(http://gcc.gnu.org/onlinedocs/gcc-4.1.2/gcc/Other-Builtins.html#index-g_t_005f_005fbuiltin_005fconstant_005fp-2392
), that is, syntax sugar that you can use inside functions to turn them into
templates (I don't know if you can also have a single entry point to keep
the possibility of taking their address). You can use a static if with such
a __traits(isCTArgument, foo) to tell if a function argument such as foo is
known at compile time, and do something with it if it is. This is like
defining those arguments as CT template arguments, which the optimizer can
later optimize better (and perform the partial compilation).

Bye,
bearophile
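
Note that __traits(isCTArgument, ...) is only a proposal, not existing D.
The manual encoding it would lower to - moving the compile-time-known
argument into a template value parameter - can be sketched with today's
language (the names here are made up):

// The compile-time-known argument becomes a template value parameter,
// so the optimizer sees a true constant inside the body.
int scaleBy(int factor)(int x)
{
    return x * factor;  // factor is a compile-time constant here
}

// The ordinary version, for comparison: factor is a run-time value.
int scaleByRT(int factor, int x)
{
    return x * factor;
}

void main()
{
    int y = scaleBy!3(10);     // "partially compiled" against factor == 3
    int z = scaleByRT(3, 10);  // nothing known until run time
}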
Jul 20 2010
parent reply Jonathan M Davis <jmdavisprog gmail.com> writes:
On Tuesday, July 20, 2010 17:24:33 bearophile wrote:
 Jonathan M Davis:
 But theoretically, it
 could be used for optimizations just like inlining (particularly with
 pure functions), and that being the case, it probably will at some
 point.
Pure functions go well with CTFE because they don't use global state, but
you can run code at compile time only if all the arguments are known at
compile time :-) If some of their arguments are known at compile time and
some of them are not, and you want to optimize better, then you need partial
compilation, which is a bit hard to implement and probably makes the
compiler slow and complex.

So, for a practical form of manual partial compilation, some time ago I
proposed something similar to GCC's __builtin_constant_p
(http://gcc.gnu.org/onlinedocs/gcc-4.1.2/gcc/Other-Builtins.html#index-g_t_005f_005fbuiltin_005fconstant_005fp-2392
), that is, syntax sugar that you can use inside functions to turn them into
templates (I don't know if you can also have a single entry point to keep
the possibility of taking their address). You can use a static if with such
a __traits(isCTArgument, foo) to tell if a function argument such as foo is
known at compile time, and do something with it if it is. This is like
defining those arguments as CT template arguments, which the optimizer can
later optimize better (and perform the partial compilation).

Bye,
bearophile
Partial compilation would indeed be a big step forward, but it would
certainly be a lot of work. Just automatically running CTFE on functions
which can be evaluated entirely at compile time would lead to some nice
optimizations, and that would certainly need to be achieved before we could
look at having partial compilation. Still, partial compilation would
certainly be cool if we ever reached that point.

- Jonathan M Davis
Jul 20 2010
parent reply Don <nospam nospam.com> writes:
Jonathan M Davis wrote:
 On Tuesday, July 20, 2010 17:24:33 bearophile wrote:
 Jonathan M Davis:
 But theoretically, it
 could be used for optimizations just like inlining (particularly with
 pure functions), and that being the case, it probably will at some
 point.
Pure functions go well with CTFE because they don't use global state, but
you can run code at compile time only if all the arguments are known at
compile time :-) If some of their arguments are known at compile time and
some of them are not, and you want to optimize better, then you need partial
compilation, which is a bit hard to implement and probably makes the
compiler slow and complex.

So, for a practical form of manual partial compilation, some time ago I
proposed something similar to GCC's __builtin_constant_p
(http://gcc.gnu.org/onlinedocs/gcc-4.1.2/gcc/Other-Builtins.html#index-g_t_005f_005fbuiltin_005fconstant_005fp-2392
), that is, syntax sugar that you can use inside functions to turn them into
templates (I don't know if you can also have a single entry point to keep
the possibility of taking their address). You can use a static if with such
a __traits(isCTArgument, foo) to tell if a function argument such as foo is
known at compile time, and do something with it if it is. This is like
defining those arguments as CT template arguments, which the optimizer can
later optimize better (and perform the partial compilation).

Bye,
bearophile
Partial compilation would indeed be a big step forward, but it would
certainly be a lot of work. Just automatically running CTFE on functions
which can be evaluated entirely at compile time would lead to some nice
optimizations, and that would certainly need to be achieved before we could
look at having partial compilation. Still, partial compilation would
certainly be cool if we ever reached that point.

- Jonathan M Davis
While running the semantic on each function body, the compiler could fairly
easily check to see if the function is CTFEable. (The main complication is
that it has to guess about how many iterations are performed in loops.)
Then, when a CTFEable function is called with compile-time constants as
arguments, it could run CTFE on it, even if it is not mandatory.

Incidentally, having the function marked as 'pure' doesn't really help at
all for determining if it is CTFEable.
Jul 21 2010
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
Don wrote:
 While running the semantic on each function body, the compiler could 
 fairly easily check to see if the function is CTFEable. (The main 
 complication is that it has to guess about how many iterations are 
 performed in loops). Then, when a CTFEable function is called with 
 compile-time constants as arguments, it could run CTFE on it, even if it 
 is not mandatory.
I think this is the halting problem, and is insoluble. This is why the
language does CTFE in well-defined circumstances, and the CTFE must succeed
or it is a compile-time error.

I'm not seeing CTFE as a credible optimization tool, either, as none of my
programs would benefit at all from it. For example, what's a text editor
going to precompute?
Jul 21 2010
next sibling parent reply Don <nospam nospam.com> writes:
Walter Bright wrote:
 Don wrote:
 While running the semantic on each function body, the compiler could 
 fairly easily check to see if the function is CTFEable. (The main 
 complication is that it has to guess about how many iterations are 
 performed in loops). Then, when a CTFEable function is called with 
 compile-time constants as arguments, it could run CTFE on it, even if 
 it is not mandatory.
I think this is the halting problem, and is insoluble.
In general, yes. But some quite useful instances are trivial.
 
 This is why the language does CTFE in well-defined circumstances and the 
 CTFE must succeed else a compilation time error.
 
 I'm not seeing CTFE as a credible optimization tool, either, as none of 
 my programs would benefit at all from it. For example, what's a text 
 editor going to precompute?
All it would be, would be moving parts of the optimization step from the
back-end into the front-end. In theory, it wouldn't gain you anything. In
practice, there may be some optimisations which are easier to do in the
front-end than the back-end. An existing example of this is:

if (false) { xxx; }

where the entire xxx clause is dropped immediately, and not sent to the
backend at all. I don't think this has any practical consequences; it's
merely an extra bit of freedom for compiler writers to implement
optimisation.
Jul 21 2010
parent reply Fawzi Mohamed <fawzi gmx.ch> writes:
On 21-lug-10, at 11:37, Don wrote:

 Walter Bright wrote:
 Don wrote:
 While running the semantic on each function body, the compiler  
 could fairly easily check to see if the function is CTFEable. (The  
 main complication is that it has to guess about how many  
 iterations are performed in loops). Then, when a CTFEable function  
 is called with compile-time constants as arguments, it could run  
 CTFE on it, even if it is not mandatory.
I think this is the halting problem, and is insoluble.
In general, yes. But some quite useful instances are trivial.
 This is why the language does CTFE in well-defined circumstances  
 and the CTFE must succeed else a compilation time error.
 I'm not seeing CTFE as a credible optimization tool, either, as  
 none of my programs would benefit at all from it. For example,  
 what's a text editor going to precompute?
All it would be, would be moving parts of the optimization step from the
back-end into the front-end. In theory, it wouldn't gain you anything. In
practice, there may be some optimisations which are easier to do in the
front-end than the back-end. An existing example of this is:

if (false) { xxx; }

where the entire xxx clause is dropped immediately, and not sent to the
backend at all. I don't think this has any practical consequences; it's
merely an extra bit of freedom for compiler writers to implement
optimisation.
Yes, but normally I dislike too much *unpredictable* magic. Yes, you can try
to evaluate some functions at compile time, but which ones? You try for,
say, 1s, and if the calculation does not complete you postpone it to
runtime? This then becomes Haskell-like, in the sense that some small
changes to the source give large changes to the runtime, in a way that is
not clearly seen from the source code.

I am with Walter on this: one thing should be either compile time or
runtime, and a programmer should be able to tell which is which.
Jul 21 2010
parent Don <nospam nospam.com> writes:
Fawzi Mohamed wrote:
 
 On 21-lug-10, at 11:37, Don wrote:
 
 Walter Bright wrote:
 Don wrote:
 While running the semantic on each function body, the compiler could 
 fairly easily check to see if the function is CTFEable. (The main 
 complication is that it has to guess about how many iterations are 
 performed in loops). Then, when a CTFEable function is called with 
 compile-time constants as arguments, it could run CTFE on it, even 
 if it is not mandatory.
I think this is the halting problem, and is insoluble.
In general, yes. But some quite useful instances are trivial.
 This is why the language does CTFE in well-defined circumstances and 
 the CTFE must succeed else a compilation time error.
 I'm not seeing CTFE as a credible optimization tool, either, as none 
 of my programs would benefit at all from it. For example, what's a 
 text editor going to precompute?
All it would be, would be moving parts of the optimization step from the
back-end into the front-end. In theory, it wouldn't gain you anything. In
practice, there may be some optimisations which are easier to do in the
front-end than the back-end. An existing example of this is:

if (false) { xxx; }

where the entire xxx clause is dropped immediately, and not sent to the
backend at all. I don't think this has any practical consequences; it's
merely an extra bit of freedom for compiler writers to implement
optimisation.
Yes, but normally I dislike too much *unpredictable* magic. Yes, you can try
to evaluate some functions at compile time, but which ones? You try for,
say, 1s, and if the calculation does not complete you postpone it to
runtime? This then becomes Haskell-like, in the sense that some small
changes to the source give large changes to the runtime, in a way that is
not clearly seen from the source code.
There is absolutely no difference between this and the optimiser.
 
 I am with Walter on this, one thing should be either compile time or 
 runtime, and a programmer should be able to tell which is which.
 
Jul 21 2010
prev sibling next sibling parent bearophile <bearophileHUGS lycos.com> writes:
Walter Bright:
 I'm not seeing CTFE as a credible optimization tool, either, as none of my
 programs would benefit at all from it. For example, what's a text editor going
 to precompute?
The manual partial compilation I have explained is useful for numerical
code. The example where I have seen improvements was a convolution with a
small mask. When the mask was known at compile time the compiler was able to
speed up the code almost twice. Partial compilation can be useful, but it's
hard to implement, so I have suggested a practical way to implement it
manually, with some syntax sugar.

Bye,
bearophile
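
A minimal sketch of that kind of code, assuming a 1-D convolution whose mask
travels as a template argument (so every tap is a compile-time constant and
the inner loop can be unrolled and folded):

double[] convolve(double[] mask)(const(double)[] x)
{
    auto n = x.length >= mask.length ? x.length - mask.length + 1 : 0;
    auto result = new double[](n);
    foreach (i; 0 .. n)
    {
        double acc = 0.0;
        foreach (j, m; mask)   // each m is a compile-time constant
            acc += m * x[i + j];
        result[i] = acc;
    }
    return result;
}

void main()
{
    auto data = [1.0, 2.0, 3.0, 4.0, 5.0];
    // The mask is a template argument, known at compile time:
    auto smoothed = convolve!([0.25, 0.5, 0.25])(data);
}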
Jul 21 2010
prev sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Walter Bright wrote:
 Don wrote:
 While running the semantic on each function body, the compiler could 
 fairly easily check to see if the function is CTFEable. (The main 
 complication is that it has to guess about how many iterations are 
 performed in loops). Then, when a CTFEable function is called with 
 compile-time constants as arguments, it could run CTFE on it, even if 
 it is not mandatory.
I think this is the halting problem, and is insoluble.
On what basis? Andrei
Jul 21 2010
parent reply Walter Bright <newshound2 digitalmars.com> writes:
Andrei Alexandrescu wrote:
 Walter Bright wrote:
 Don wrote:
 While running the semantic on each function body, the compiler could 
 fairly easily check to see if the function is CTFEable. (The main 
 complication is that it has to guess about how many iterations are 
 performed in loops). Then, when a CTFEable function is called with 
 compile-time constants as arguments, it could run CTFE on it, even if 
 it is not mandatory.
I think this is the halting problem, and is insoluble.
On what basis?
Trying to determine by static analysis if a function would evaluate within a "reasonable" amount of time, or even if it would finish at all. So let's suppose the compiler just attempts CTFE on those functions anyway, with a timeout if they take too long. Suddenly we've just thrown all the compiler performance out the window.
Jul 23 2010
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Walter Bright wrote:
 Andrei Alexandrescu wrote:
 Walter Bright wrote:
 Don wrote:
 While running the semantic on each function body, the compiler could 
 fairly easily check to see if the function is CTFEable. (The main 
 complication is that it has to guess about how many iterations are 
 performed in loops). Then, when a CTFEable function is called with 
 compile-time constants as arguments, it could run CTFE on it, even 
 if it is not mandatory.
I think this is the halting problem, and is insoluble.
On what basis?
Trying to determine by static analysis if a function would evaluate within a "reasonable" amount of time, or even if it would finish at all. So let's suppose the compiler just attempts CTFE on those functions anyway, with a timeout if they take too long. Suddenly we've just thrown all the compiler performance out the window.
Sorry for alerting the halting problem police. I feel "halting problem" is a
kind of cop-out that is used instead of real analysis. Whenever a problem
sounds difficult, "halting problem!" absolves us from looking into it.

As far as I can tell, CTFE-ability can be deterministically computed through
a static analysis that goes like this:

1. Associate a 3-state Boolean tracking CTFE-ability (yes/no/dunno) with
each function in the program.

2. Start analysis with all functions in the program in the "dunno" state.

3. Add all "dunno" functions to a worklist.

4. Pop one function off the worklist. If it uses only CTFE-able expressions
and statements and calls only CTFE-able functions, mark it with "yes". If it
calls at least one non-CTFE-able function, mark it with "no".

5. Repeat from 4 until the worklist is empty.

6. Repeat from 3 until you make no progress (two successive epochs cannot
reduce the number of "dunno" functions).

7. Mark all remaining "dunno" functions as "yes" (they all call each other).

At the end of this process you know all CTFE-able functions in the program.
(A proof would be necessary to assess whether the fixed point is unique and
independent of the order in which you look at functions, etc.)

To assess CTFE-ability of one given function for minimum work, this should
suffice:

1. Just like above, associate a 3-state Boolean tracking CTFE-ability
(yes/no/dunno) with each function in the program, initially "dunno".

2. Put the function of interest in a worklist.

3. Pick one function from the worklist and tentatively mark it as "yes".

4. Look at all functions it calls and put all "dunno" functions in the
worklist.

5. Continue from 3 until the worklist is empty.

Andrei
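
One possible rendering of the first algorithm, assuming a toy Func record
with the per-body information precomputed; this sketches the fixed-point
iteration only, not how DMD actually represents functions:

enum Ctfe { yes, no, dunno }

class Func
{
    bool cleanBody;   // body uses only CTFE-able expressions/statements
    Func[] callees;
    Ctfe state = Ctfe.dunno;

    this(bool cleanBody, Func[] callees = null)
    {
        this.cleanBody = cleanBody;
        this.callees = callees;
    }
}

void classify(Func[] all)
{
    bool progress = true;
    while (progress)              // epochs: repeat until a fixed point
    {
        progress = false;
        foreach (f; all)
        {
            if (f.state != Ctfe.dunno)
                continue;
            if (!f.cleanBody)
            {
                f.state = Ctfe.no;
                progress = true;
                continue;
            }
            bool anyNo = false, allYes = true;
            foreach (c; f.callees)
            {
                if (c.state == Ctfe.no)  anyNo = true;
                if (c.state != Ctfe.yes) allYes = false;
            }
            if (anyNo)       { f.state = Ctfe.no;  progress = true; }
            else if (allYes) { f.state = Ctfe.yes; progress = true; }
        }
    }
    // The residue only calls other "dunno" functions (mutual recursion),
    // so mark all of it "yes".
    foreach (f; all)
        if (f.state == Ctfe.dunno)
            f.state = Ctfe.yes;
}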
Jul 23 2010
parent reply Walter Bright <newshound2 digitalmars.com> writes:
Andrei Alexandrescu wrote:
 Walter Bright wrote:
 Andrei Alexandrescu wrote:
 Walter Bright wrote:
 Don wrote:
 While running the semantic on each function body, the compiler 
 could fairly easily check to see if the function is CTFEable. (The 
 main complication is that it has to guess about how many iterations 
 are performed in loops). Then, when a CTFEable function is called 
 with compile-time constants as arguments, it could run CTFE on it, 
 even if it is not mandatory.
I think this is the halting problem, and is insoluble.
On what basis?
Trying to determine by static analysis if a function would evaluate within a "reasonable" amount of time, or even if it would finish at all. So let's suppose the compiler just attempts CTFE on those functions anyway, with a timeout if they take too long. Suddenly we've just thrown all the compiler performance out the window.
Sorry for alerting the halting problem police. I feel "halting problem" is a kind of cop-out that is used instead of real analysis. Whenever a problem sounds difficult, "halting problem!" absolves us from looking into it.
I thought that this was exactly the halting problem.
 As far as I can tell, CTFE-ability can be deterministically computed 
 through a static analysis that goes like this:
 
 1. Associate a 3-state Boolean tracking CTFE-ability (yes/no/dunno) with 
 each function in the program.
 
 2. Start analysis with all functions in the program in the "dunno" state.
 
 3. Add all "dunno" functions to a worklist.
 
 4. Pop one function off the worklist. If it uses only CTFE-able 
 expressions and statements and calls only CTFE-able functions, mark it 
 with "yes". If it calls at least one non-CTFE-able function mark it with 
 "no".
 
 5. Repeat from 4 until the worklist is empty.
 
 6. Repeat from 3 until you make no progress (two successive epochs 
 cannot reduce the number of "dunno" functions).
 
 7. Mark all remaining "dunno" functions as "yes" (they all call each 
 other).
 
 At the end of this process you know all CTFE-able functions in the 
 program. (A proof would be necessary to assess whether the fixed point 
 is unique and independent of the order in which you look at functions etc.)
 
 To assess CTFEability of one given function for minimum work, this 
 should suffice:
 
 1. Just like above, associate a 3-state Boolean tracking CTFE-ability 
 (yes/no/dunno) with each function in the program, initially "dunno".
 
 2. Put the function of interest in a worklist
 
 3. Pick one function from the worklist and tentatively mark it as "yes".
 
 4. Look at all functions it calls and put all "dunno" functions in the 
 worklist.
 
 5. Continue from 3 until the worklist is empty.
Let's set aside for a moment the problem that CTFE can take an arbitrarily
long time to run and that this cannot be predicted by any sort of static
analysis (isn't this what the halting problem is?). Consider the following
function:

int foo(int x, int y)
{
    if (x == 3)
        return y + 1;
    printf("hello\n");
}

Is that function CTFE'able? By your algorithm, no. But, consider the
following call:

const z = foo(3, 7);

Yes, it works at compile time. The particular path taken through the
function must be CTFE'able, not the entire function. And the path cannot be
statically determined without actually attempting to execute it.
Jul 23 2010
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 07/23/2010 03:18 PM, Walter Bright wrote:
 Let's set aside for a moment the problem that CTFE can take an
 arbitrarily long time to run and that this cannot be predicted by any
 sort of static analysis (isn't this what the halting problem is?).
The halting problem describes a program along with its inputs. The art is to find analyses that simplify the setup a little and produce conservative results while also guaranteeing they will end. My analysis ignores the data.
 Consider the following function:

 int foo(int x, int y)
 {
 if (x == 3)
 return y + 1;
 printf("hello\n");
 }

 Is that function CTFE'able? By your algorithm, no. But, consider the
 following call:

 const z = foo(3, 7);

 Yes, it works at compile time. The particular path taken through the
 function must be CTFE'able, not the entire function. And the path cannot
 be statically determined without actually attempting to execute it.
The analysis I discussed is a flow- and path-independent analysis. It always
terminates and produces conservative results (i.e. it associates one Boolean
with each function, not with each tuple of a function and its inputs). This
is how the current compiler works - if you try your example, it will refuse
to evaluate foo during compilation.

I agree that if you want to associate CTFEability with actual inputs you run
into the halting problem. So the trick is to evaluate CTFE in isolation from
inputs.

Andrei
Jul 23 2010
parent reply Walter Bright <newshound2 digitalmars.com> writes:
Andrei Alexandrescu wrote:
 The analysis I discussed is a flow- and path-independent analysis. It 
 always terminates and produces conservative results (i.e. it associates 
 one Boolean with each function, not on each tuple of a function and 
 inputs. This is how the current compiler works - if you tried your 
 example, it will refuse to evaluate foo during compilation.
That's because there's an unrelated bug in it :-( Here's the corrected
example:

import std.c.stdio;

int foo(int x, int y)
{
    if (x == 3)
        return y + 1;
    printf("hello\n");
    return 0;
}

const z = foo(3, 7);

The compiler does not work as you suggest it does.
Jul 24 2010
parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Walter Bright wrote:
 Andrei Alexandrescu wrote:
 The analysis I discussed is a flow- and path-independent analysis. It 
 always terminates and produces conservative results (i.e. it 
 associates one Boolean with each function, not on each tuple of a 
 function and inputs. This is how the current compiler works - if you 
 tried your example, it will refuse to evaluate foo during compilation.
That's because there's an unrelated bug in it :-( Here's the corrected
example:

import std.c.stdio;

int foo(int x, int y)
{
    if (x == 3)
        return y + 1;
    printf("hello\n");
    return 0;
}

const z = foo(3, 7);

The compiler does not work as you suggest it does.
That's actually good, meaning that the compiler can and will, when pressed, forge forward with CTFE even if unsure. The analysis I suggested would allow the compiler to automatically CTFE certain functions in confidence that they are, in fact, CTFE-able. Andrei
Jul 24 2010
prev sibling parent reply "Jim Balter" <Jim Balter.name> writes:
"Walter Bright" <newshound2 digitalmars.com> wrote in message 
news:i2cteh$ufq$1 digitalmars.com...
 Andrei Alexandrescu wrote:
 Walter Bright wrote:
 Andrei Alexandrescu wrote:
 Walter Bright wrote:
 Don wrote:
 While running the semantic on each function body, the compiler could 
 fairly easily check to see if the function is CTFEable. (The main 
 complication is that it has to guess about how many iterations are 
 performed in loops). Then, when a CTFEable function is called with 
 compile-time constants as arguments, it could run CTFE on it, even if 
 it is not mandatory.
I think this is the halting problem, and is insoluble.
On what basis?
Trying to determine by static analysis if a function would evaluate within a "reasonable" amount of time, or even if it would finish at all. So let's suppose the compiler just attempts CTFE on those functions anyway, with a timeout if they take too long. Suddenly we've just thrown all the compiler performance out the window.
Sorry for alerting the halting problem police. I feel "halting problem" is a kind of cop-out that is used instead of real analysis. Whenever a problem sounds difficult, "halting problem!" absolves us from looking into it.
I thought that this was exactly the halting problem.
A common misconception, but not so. The halting problem is to find an
algorithm that will, for every possible function and data as input,
correctly determine whether the function will terminate. Turing proved that
such an algorithm is impossible, but that's irrelevant to any practical
problem. Not only isn't it necessary to guarantee that the functions you
decide not to attempt CTFE on really don't terminate, but no function in any
real program looks anything like the arcane function that Turing proved will
baffle your algorithm.

-- jqb
 As far as I can tell, CTFE-ability can be deterministically computed 
 through a static analysis that goes like this:

 1. Associate a 3-state Boolean tracking CTFE-ability (yes/no/dunno) with 
 each function in the program.

 2. Start analysis with all functions in the program in the "dunno" state.

 3. Add all "dunno" functions to a worklist.

 4. Pop one function off the worklist. If it uses only CTFE-able 
 expressions and statements and calls only CTFE-able functions, mark it 
 with "yes". If it calls at least one non-CTFE-able function mark it with 
 "no".

 5. Repeat from 4 until the worklist is empty.

 6. Repeat from 3 until you make no progress (two successive epochs cannot 
 reduce the number of "dunno" functions).

 7. Mark all remaining "dunno" functions as "yes" (they all call each 
 other).

 At the end of this process you know all CTFE-able functions in the 
 program. (A proof would be necessary to assess whether the fixed point is 
 unique and independent of the order in which you look at functions etc.)

 To assess CTFEability of one given function for minimum work, this should 
 suffice:

 1. Just like above, associate a 3-state Boolean tracking CTFE-ability 
 (yes/no/dunno) with each function in the program, initially "dunno".

 2. Put the function of interest in a worklist

 3. Pick one function from the worklist and tentatively mark it as "yes".

 4. Look at all functions it calls and put all "dunno" functions in the 
 worklist.

 5. Continue from 3 until the worklist is empty.
Let's set aside for a moment the problem that CTFE can take an arbitrarily
long time to run and that this cannot be predicted by any sort of static
analysis (isn't this what the halting problem is?). Consider the following
function:

int foo(int x, int y)
{
    if (x == 3)
        return y + 1;
    printf("hello\n");
}

Is that function CTFE'able? By your algorithm, no. But, consider the
following call:

const z = foo(3, 7);

Yes, it works at compile time. The particular path taken through the
function must be CTFE'able, not the entire function. And the path cannot be
statically determined without actually attempting to execute it.
Jul 24 2010
parent reply Peter Alexander <peter.alexander.au gmail.com> writes:
On 24/07/10 12:43 PM, Jim Balter wrote:
 I thought that this was exactly the halting problem.
A common misconception, but not so. The halting problem is to find an
algorithm that will, for every possible function and data as input,
correctly determine whether the function will terminate. Turing proved that
such an algorithm is impossible, but that's irrelevant to any practical
problem. Not only isn't it necessary to guarantee that the functions you
decide not to attempt CTFE on really don't terminate, but no function in any
real program looks anything like the arcane function that Turing proved will
baffle your algorithm.
Walter is right. Yes, Turing did prove it using an unrealistic example, but
the only reason he did so was to keep the proof simple. There are plenty of
practical examples of functions for which it is difficult (or impossible) to
determine whether or not they terminate. In particular, any recursion or
unconstrained loop makes it very difficult to prove termination, and there
are plenty of practical examples of those, e.g. quicksort.
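
For a concrete, non-contrived case: termination of the loop below for every
n >= 1 is the Collatz conjecture, an open problem, yet the code is entirely
ordinary:

// How many steps does n take to reach 1? Nobody has proved this
// loop terminates for all n >= 1 (and it clearly diverges for n == 0).
ulong collatzSteps(ulong n)
{
    ulong steps = 0;
    while (n != 1)
    {
        n = (n % 2 == 0) ? n / 2 : 3 * n + 1;
        ++steps;
    }
    return steps;
}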
Jul 24 2010
parent reply "Jim Balter" <Jim Balter.name> writes:
"Peter Alexander" <peter.alexander.au gmail.com> wrote in message 
news:i2er6d$1jkv$1 digitalmars.com...
 On 24/07/10 12:43 PM, Jim Balter wrote:
 I thought that this was exactly the halting problem.
A common misconception, but not so. The halting problem is to find an
algorithm that will, for every possible function and data as input,
correctly determine whether the function will terminate. Turing proved that
such an algorithm is impossible, but that's irrelevant to any practical
problem. Not only isn't it necessary to guarantee that the functions you
decide not to attempt CTFE on really don't terminate, but no function in any
real program looks anything like the arcane function that Turing proved will
baffle your algorithm.
Walter is right.
No, he's not, and I explained why, and you have failed to rebut anything I wrote.
 Yes, Turing did prove it using an unrealistic example, but the only reason 
 he did so was to keep the proof simple.
You have absolutely no idea what you're talking about; Turing's proof is a
reductio ad absurdum based on a diagonalization argument -- the constructed
undeterminable function is dependent on the algorithm purported to be able
to decide it. See http://en.wikipedia.org/wiki/Halting_problem
 There are plenty of practical examples of functions which are difficult 
 (or impossible) to determine whether or not they terminate.
The halting problem proof isn't about "difficult", only "impossible", and
you have failed to provide a function and a proof that it is *impossible* to
determine whether it terminates.
 In particular, any recursion or unconstrained loops make it very difficult 
 to prove termination, and there's plenty of practical examples of those 
 e.g. quicksort.
Again, difficulty is not an issue, *impossibility* is -- *that* is what the halting problem is, and if you don't understand that then you don't understand the issue *at all*.
Jul 24 2010
parent reply "Jim Balter" <Jim Balter.name> writes:
"Jim Balter" <Jim Balter.name> wrote in message 
news:i2fkp7$2if$1 digitalmars.com...
 "Peter Alexander" <peter.alexander.au gmail.com> wrote in message 
 news:i2er6d$1jkv$1 digitalmars.com...
 On 24/07/10 12:43 PM, Jim Balter wrote:
 I thought that this was exactly the halting problem.
A common misconception, but not so. The halting problem is to find an
algorithm that will, for every possible function and data as input,
correctly determine whether the function will terminate. Turing proved that
such an algorithm is impossible, but that's irrelevant to any practical
problem. Not only isn't it necessary to guarantee that the functions you
decide not to attempt CTFE on really don't terminate, but no function in any
real program looks anything like the arcane function that Turing proved will
baffle your algorithm.
Walter is right.
No, he's not, and I explained why, and you have failed to rebut anything I wrote.
 Yes, Turing did prove it using an unrealistic example, but the only 
 reason he did so was to keep the proof simple.
You have absolutely no idea what you're talking about; Turing's proof is a
reductio ad absurdum based on a diagonalization argument -- the constructed
undeterminable function is dependent on the algorithm purported to be able
to decide it. See http://en.wikipedia.org/wiki/Halting_problem
 There are plenty of practical examples of functions which are difficult 
 (or impossible) to determine whether or not they terminate.
The halting problem proof isn't about "difficult", only "impossible", and
you have failed to provide a function and a proof that it is *impossible* to
determine whether it terminates.
 In particular, any recursion or unconstrained loops make it very 
 difficult to prove termination, and there's plenty of practical examples 
 of those e.g. quicksort.
Again, difficulty is not an issue, *impossibility* is -- *that* is what the halting problem is, and if you don't understand that then you don't understand the issue *at all*.
P.S. The point about difficulty goes to why this is not a matter of the
halting problem. Even if the halting problem were decidable, that would not
help us in the slightest, because we are trying to solve a *practical*
problem. Even if you could prove, for every given function, whether it would
halt on all inputs, that wouldn't tell you which ones to perform CTFE on --
we want CTFE to terminate before the programmer dies of old age. The halting
problem is strictly a matter of theory; it's ironic to see someone who has
designed a programming language based on *pragmatic* rather than theoretical
considerations invoke it.
Jul 24 2010
parent reply Rainer Deyke <rainerd eldwood.com> writes:
On 7/24/2010 15:34, Jim Balter wrote:
 The point about difficulty goes to why this is not a matter of the
 halting problem. Even if the halting problem were decidable, that would
 not help us in the slightest because we are trying to solve a
 *practical* problem. Even if you could prove, for every given function,
 whether it would halt on all inputs, that wouldn't tell you which ones
 to perform CTFE on -- we want CTFE to terminate before the programmer
 dies of old age. The halting problem is strictly a matter of theory;
 it's ironic to see someone who has designed a programming language based
 on *pragmatic* rather than theoretical considerations to invoke it.
That's exactly backwards. It's better to catch errors at compile time than at run time. A program that fails to terminate and fails to perform I/O is a faulty program. (A function that performs I/O is obviously not a candidate for CTFE.) I'd rather catch the faulty program by having the compiler lock up at compile time than by having the compiled program lock up after deployment. Testing whether the program terminates at compile time by attempting to execute the program at compile time is a feature, not a bug. -- Rainer Deyke - rainerd eldwood.com
Jul 24 2010
next sibling parent reply bearophile <bearophileHUGS lycos.com> writes:
Rainer Deyke:
 (A function that performs I/O is
 obviously not a candidate for CTFE.)
I don't fully agree, see this enhancement request of mine: http://d.puremagic.com/issues/show_bug.cgi?id=3952 Bye, bearophile
Jul 24 2010
parent "Jim Balter" <Jim Balter.name> writes:
"bearophile" <bearophileHUGS lycos.com> wrote in message 
news:i2g42t$10ce$1 digitalmars.com...
 Rainer Deyke:
 (A function that performs I/O is
 obviously not a candidate for CTFE.)
I don't fully agree, see this enhancement request of mine: http://d.puremagic.com/issues/show_bug.cgi?id=3952 Bye, bearophile
"performs I/O" refers to the semantics of the running program; the I/O is done at the time the function is invoked. I/O done by pragma(msg, ...) or ctputs is performed by the compiler, not by the function. Obviously, as Rainer said, no CTFE function can perform I/O since that I/O would never get performed by the running program. Also, I think computing compile-time messages is an abuse of CTFE -- it certainly isn't its intended purpose.
Jul 25 2010
prev sibling parent reply "Jim Balter" <Jim Balter.name> writes:
"Rainer Deyke" <rainerd eldwood.com> wrote in message 
news:i2g3oo$von$1 digitalmars.com...
 On 7/24/2010 15:34, Jim Balter wrote:
 The point about difficulty goes to why this is not a matter of the
 halting problem. Even if the halting problem were decidable, that would
 not help us in the slightest because we are trying to solve a
 *practical* problem. Even if you could prove, for every given function,
 whether it would halt on all inputs, that wouldn't tell you which ones
 to perform CTFE on -- we want CTFE to terminate before the programmer
 dies of old age. The halting problem is strictly a matter of theory;
 it's ironic to see someone who has designed a programming language based
 on *pragmatic* rather than theoretical considerations to invoke it.
That's exactly backwards. It's better to catch errors at compile time than at run time. A program that fails to terminate and fails to perform I/O is a faulty program. (A function that performs I/O is obviously not a candidate for CTFE.) I'd rather catch the faulty program by having the compiler lock up at compile time than by having the compiled program lock up after deployment. Testing whether the program terminates at compile time by attempting to execute the program at compile time is a feature, not a bug.
You have a good point, and that point would imply that whether a function
would terminate, or how long it would take in general, isn't relevant to the
decision of whether CTFE should be done. But there are (at least) two
problems:

1) You can't be certain that the code will be run at run time at all -- in
generic code you could easily have function invocations with constant values
that would fail in various ways, but the function is never run with those
values because of prior tests. But CTFE wouldn't be nearly as useful if it
is performed only for code that you can be certain will run. If you can't be
certain, then you need a conservative approach, and you must not report
errors that might never occur at run time; if you can be certain, then you
could forge ahead at compile time no matter how long the computation would
take. But:

2) You do not have the debug facilities at compile time that you have at run
time. If the program stalls at run time, you can attach a debugger to it and
find out what it's doing. But if CTFE is running at compile time and the
compiler stalls, you don't know why ... unless the compiler has a mechanism
such that you can interrupt it and it can report an execution trace of the
CTFE code. That still is not enough -- you really need full debug
capabilities to trace the code, all available at compile time. That's just
too much.
 -- 
 Rainer Deyke - rainerd eldwood.com 
Jul 25 2010
next sibling parent Don <nospam nospam.com> writes:
Jim Balter wrote:
 
 "Rainer Deyke" <rainerd eldwood.com> wrote in message 
 news:i2g3oo$von$1 digitalmars.com...
 On 7/24/2010 15:34, Jim Balter wrote:
 The point about difficulty goes to why this is not a matter of the
 halting problem. Even if the halting problem were decidable, that would
 not help us in the slightest because we are trying to solve a
 *practical* problem. Even if you could prove, for every given function,
 whether it would halt on all inputs, that wouldn't tell you which ones
 to perform CTFE on -- we want CTFE to terminate before the programmer
 dies of old age. The halting problem is strictly a matter of theory;
 it's ironic to see someone who has designed a programming language based
 on *pragmatic* rather than theoretical considerations to invoke it.
That's exactly backwards. It's better to catch errors at compile time than at run time. A program that fails to terminate and fails to perform I/O is a faulty program. (A function that performs I/O is obviously not a candidate for CTFE.) I'd rather catch the faulty program by having the compiler lock up at compile time than by having the compiled program lock up after deployment. Testing whether the program terminates at compile time by attempting to execute the program at compile time is a feature, not a bug.
You have a good point, and that point would imply that whether a function
would terminate, or how long it would take in general, isn't relevant to the
decision of whether CTFE should be done. But there are (at least) two
problems:

1) You can't be certain that the code will be run at run time at all -- in
generic code you could easily have function invocations with constant values
that would fail in various ways, but the function is never run with those
values because of prior tests. But CTFE wouldn't be nearly as useful if it
is performed only for code that you can be certain will run. If you can't be
certain, then you need a conservative approach, and you must not report
errors that might never occur at run time; if you can be certain, then you
could forge ahead at compile time no matter how long the computation would
take. But:

2) You do not have the debug facilities at compile time that you have at run
time. If the program stalls at run time, you can attach a debugger to it and
find out what it's doing. But if CTFE is running at compile time and the
compiler stalls, you don't know why ... unless the compiler has a mechanism
such that you can interrupt it and it can report an execution trace of the
CTFE code. That still is not enough -- you really need full debug
capabilities to trace the code, all available at compile time. That's just
too much.
The D plugin for Eclipse included a compile-time debugger (!)
Jul 25 2010
prev sibling parent reply Rainer Deyke <rainerd eldwood.com> writes:
On 7/25/2010 13:41, Jim Balter wrote:
 But there are
 (at least) two problems: 1) you can't be certain that the code will be
 run at run time at all -- in generic code you could easily have function
 invocations with constant values that would fail in various ways but the
 function is never run with those values because of prior tests.
That's a good point - but at the same time, I'm having trouble thinking of
any sensible example where this would be an issue. It would require all of
the following factors:
 - The function itself is a candidate for CTFE.
 - The arguments to the function can be evaluated at compile time, but are
not simple constants (i.e. they depend on template parameters).
 - The code that calls the function depends on run-time parameters.
 - The function is never called with parameters that prevent it from
terminating, but the compiler is unable to determine that this is the case.

Something like this would work, I guess, but it's horribly contrived:

int terminate_if_nonzero(int i) {
  return (i != 0) ? i : terminate_if_nonzero(i);
}

void f(int i)(bool call_it) {
  if (call_it) terminate_if_nonzero(i);
}

bool get_input_and_return_false() {
  // This function is not a CTFE candidate because it performs I/O.
  read_input();
  return false;
}

void main() {
  f!(0)(get_input_and_return_false());
}

-- 
Rainer Deyke - rainerd eldwood.com
Jul 26 2010
parent reply "Jim Balter" <Jim Balter.name> writes:
"Rainer Deyke" <rainerd eldwood.com> wrote in message 
news:i2jeja$1lha$1 digitalmars.com...
 On 7/25/2010 13:41, Jim Balter wrote:
 But there are
 (at least) two problems: 1) you can't be certain that the code will be
 run at run time at all -- in generic code you could easily have function
 invocations with constant values that would fail in various ways but the
 function is never run with those values because of prior tests.
That's a good point - but at the same time, I'm having trouble thinking of any sensible example where this would be an issue.
Consider some code that calls a pure function that uses a low-overhead
exponential algorithm when the parameter is small, and otherwise calls a
pure function that uses a high-overhead linear algorithm. The calling code
happens not to be CTFEable, and thus the test, even though it only depends
on a constant, is not computed at compile time. The compiler sees two calls,
one to each of the two functions, with the same parameter passed to each,
but only one of the two will actually be called at run time.

Trying to evaluate the low-overhead exponential algorithm with large
parameters at compile time would be a loss without a timeout to terminate
the attempt. It might be best if the compiler only attempts CTFE if the code
explicitly requests it.
 It would require
 all of the following factors:
  - The function itself is a candidate for CTFE.
  - The arguments to the function can be evaluated at compile time, but
 are not simple constants (i.e. they depend on template parameters).
  - The code that calls the function depends on run-time parameters.
  - The function is never called with parameters that prevent it from
 terminating, but the compiler is unable to determine that this is the 
 case.

 Something like this would work, I guess, but it's horribly contrived:

 int terminate_if_nonzero(int i) {
  return (i != 0) ? i : terminate_if_nonzero(i);
 }

 void f(int i)(bool call_it) {
  if (call_it) terminate_if_nonzero(i);
 }

 bool get_input_and_return_false() {
  // This function is not a CTFE candidate because it performs I/O.
  read_input();
  return false;
 }

 void main() {
  f!(0)(get_input_and_return_false());
 }


 -- 
 Rainer Deyke - rainerd eldwood.com 
Jul 26 2010
parent reply Rainer Deyke <rainerd eldwood.com> writes:
On 7/26/2010 18:30, Jim Balter wrote:
 Consider some code that calls a pure function that uses a low-overhead
 exponential algorithm when the parameter is small, and otherwise calls a
 pure function that uses a high-overhead linear algorithm. The calling
 code happens not to be CTFEable and thus the test, even though it only
 depends on a constant, is not computed at compile time. The compiler
 sees two calls, one to each of the two functions, with the same
 parameter passed to each, but only one of the two will actually be
 called at run time. Trying to evaluate the low-overhead exponential
 algorithm with large parameters at compile time would be a lose without
 a timeout to terminate the attempt. It might be best if the compiler
 only attempts CTFE if the code explicitly requests it.
It seems to me that you're describing something like this:

  if (x < some_limit) {
    return some_function(x);
  } else {
    return some_other_function(x);
  }

This does not pose a problem, assuming 'some_limit' is a compile-time
constant. If 'x' is known at compile time, the test can be performed at
compile time. If 'x' is not known at compile time, neither of the
function invocations can be evaluated at compile time.

The problem does exist in this code:

  if (complex_predicate(x)) {
    return some_function(x);
  } else {
    return some_other_function(x);
  }

...but only if 'complex_predicate' is not a candidate for CTFE while the
other functions are. (This can happen if 'complex_predicate' performs
any type of output, including debug logging, so the scenario is not
entirely unlikely.)

Actually, I'm not entirely sure that CTFE is even necessary for
producing optimal code. The optimizations enabled by CTFE seem like a
subset of those enabled by aggressive inlining combined with other
common optimizations.

-- 
Rainer Deyke - rainerd eldwood.com
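[To make the second case concrete, a hypothetical predicate along these
lines would be excluded from CTFE merely because of its logging, under
the rules being discussed; the names are invented:]

import std.stdio : stderr;

bool complex_predicate(int x)
{
    debug stderr.writeln("complex_predicate(", x, ")");  // the offending output
    return (x * x + 3 * x + 1) % 7 == 0;                 // the actual test
}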
Jul 26 2010
parent Don <nospam nospam.com> writes:
Rainer Deyke wrote:
 On 7/26/2010 18:30, Jim Balter wrote:
 Consider some code that calls a pure function that uses a low-overhead
 exponential algorithm when the parameter is small, and otherwise calls a
 pure function that uses a high-overhead linear algorithm. The calling
 code happens not to be CTFEable and thus the test, even though it only
 depends on a constant, is not computed at compile time. The compiler
 sees two calls, one to each of the two functions, with the same
 parameter passed to each, but only one of the two will actually be
 called at run time. Trying to evaluate the low-overhead exponential
 algorithm with large parameters at compile time would be a lose without
 a timeout to terminate the attempt. It might be best if the compiler
 only attempts CTFE if the code explicitly requests it.
 It seems to me that you're describing something like this:

   if (x < some_limit) {
     return some_function(x);
   } else {
     return some_other_function(x);
   }

 This does not pose a problem, assuming 'some_limit' is a compile-time
 constant. If 'x' is known at compile time, the test can be performed at
 compile time. If 'x' is not known at compile time, neither of the
 function invocations can be evaluated at compile time.

 The problem does exist in this code:

   if (complex_predicate(x)) {
     return some_function(x);
   } else {
     return some_other_function(x);
   }

 ...but only if 'complex_predicate' is not a candidate for CTFE while
 the other functions are. (This can happen if 'complex_predicate'
 performs any type of output, including debug logging, so the scenario
 is not entirely unlikely.)

 Actually, I'm not entirely sure that CTFE is even necessary for
 producing optimal code. The optimizations enabled by CTFE seem like a
 subset of those enabled by aggressive inlining combined with other
 common optimizations.
I think that's probably true. If the inliner is in the front-end (as is currently the case in DMD, but not DMC), then CTFE can produce better code. If functions can be inlined *after* optimisation, CTFE may be a more efficient way to perform certain optimisations, but otherwise it's probably not much of a win. From the programmer's point of view, it makes no difference whether it is done or not.
Jul 29 2010
prev sibling parent reply Don <nospam nospam.com> writes:
Walter Bright wrote:
 Andrei Alexandrescu wrote:
 Walter Bright wrote:
 Don wrote:
 While running the semantic on each function body, the compiler could 
 fairly easily check to see if the function is CTFEable. (The main 
 complication is that it has to guess about how many iterations are 
 performed in loops). Then, when a CTFEable function is called with 
 compile-time constants as arguments, it could run CTFE on it, even 
 if it is not mandatory.
I think this is the halting problem, and is insoluble.
On what basis?
Trying to determine by static analysis if a function would evaluate within a "reasonable" amount of time, or even if it would finish at all. So let's suppose the compiler just attempts CTFE on those functions anyway, with a timeout if they take too long. Suddenly we've just thrown all the compiler performance out the window.
But it doesn't need to perfectly solve the complete halting problem. It
can be of benefit if it just identifies some functions which can be
trivially shown to halt, and ignores any which it is unsure about. For
example, a function which contains no loops and no recursive calls is
always CTFEable.

Admittedly, that probably doesn't include many functions which wouldn't
be inlined anyway. But still, while checking for which functions are
inlinable, the compiler could check for 'trivially CTFEable' at the same
time. If a CTFEable call is made to an inlinable function, it's probably
quicker to CTFE it rather than inline + optimize. Assuming of course
that we have a fast implementation of CTFE.

I completely agree that there will always be opportunities for CTFE
which will be missed unless you allow the compile time to become
arbitrarily long.
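[For illustration, a hypothetical member of that trivially-halting
class - straight-line code, no loops, no recursion:]

int blend(int a, int b)
{
    return (3 * a + 5 * b) / 8;   // nothing here can diverge
}

enum k = blend(10, 20);           // safe for the compiler to evaluate eagerly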
Jul 23 2010
next sibling parent reply div0 <div0 users.sourceforge.net> writes:
On 23/07/2010 21:54, Don wrote:
 I completely agree that there will always be opportunities for CTFE
 which will be missed unless you allow the compile time to become
 arbitrarily long.
Which is why ultimately it should be up to the programmer to decide.
There should be a way to force the compiler to try CTFE unless it can
absolutely prove the function is not CTFEable.

-- 
My enormous talent is exceeded only by my outrageous laziness.
http://www.ssTk.co.uk
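[As an editorial aside: D does already give the programmer one such
lever, since any context that requires a compile-time constant forces
CTFE. A minimal sketch, with a made-up function:]

int square(int x) { return x * x; }

void main()
{
    enum forced = square(7);   // enum initializer: CTFE is mandatory here
    auto normal = square(7);   // ordinary run-time call
}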
Jul 23 2010
parent reply "Jim Balter" <Jim Balter.name> writes:
"div0" <div0 users.sourceforge.net> wrote in message 
news:i2d2ek$179m$1 digitalmars.com...
 On 23/07/2010 21:54, Don wrote:
 I completely agree that there will always be opportunities for CTFE
 which will be missed unless you allow the compile time to become
 arbitrarily long.
Which is why ultimately it should be up to the programmer to decide. There should be a way to force the compiler to try CTFE, unless it can absolutely prove the function is not CTFE.
That's what command-line optimization options are for.
 -- 
 My enormous talent is exceeded only by my outrageous laziness.
 http://www.ssTk.co.uk 
Jul 24 2010
parent reply div0 <div0 users.sourceforge.net> writes:
On 24/07/2010 12:46, Jim Balter wrote:
 "div0" <div0 users.sourceforge.net> wrote in message
 news:i2d2ek$179m$1 digitalmars.com...
 On 23/07/2010 21:54, Don wrote:
 I completely agree that there will always be opportunities for CTFE
 which will be missed unless you allow the compile time to become
 arbitrarily long.
Which is why ultimately it should be up to the programmer to decide. There should be a way to force the compiler to try CTFE, unless it can absolutely prove the function is not CTFE.
That's what command line optimzation options are for.
No, they aren't. Command-line options apply to the entire module(s), not
to specific invocations of a function.

-- 
My enormous talent is exceeded only by my outrageous laziness.
http://www.ssTk.co.uk
Jul 24 2010
parent bearophile <bearophileHUGS lycos.com> writes:
div0:
 No they aren't.
 command line options apply to the entire module(s) not to specific 
 invocations of a function.
In both GNU C and CLisp you can add annotations to change the
optimization level on a function-by-function basis:
http://gcc.gnu.org/onlinedocs/gcc/Function-Attributes.html#index-g_t_0040code_007boptimize_007d-function-attribute-2419

Bye,
bearophile
Jul 24 2010
prev sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 07/23/2010 03:54 PM, Don wrote:
 Walter Bright wrote:
 Andrei Alexandrescu wrote:
 Walter Bright wrote:
 Don wrote:
 While running the semantic on each function body, the compiler
 could fairly easily check to see if the function is CTFEable. (The
 main complication is that it has to guess about how many iterations
 are performed in loops). Then, when a CTFEable function is called
 with compile-time constants as arguments, it could run CTFE on it,
 even if it is not mandatory.
I think this is the halting problem, and is insoluble.
On what basis?
Trying to determine by static analysis if a function would evaluate within a "reasonable" amount of time, or even if it would finish at all. So let's suppose the compiler just attempts CTFE on those functions anyway, with a timeout if they take too long. Suddenly we've just thrown all the compiler performance out the window.
But it doesn't need to perfectly solve the complete halting problem. It
can be of benefit if it just identifies some functions which can be
trivially shown to halt, and ignores any which it is unsure about. For
example, a function which contains no loops and no recursive calls is
always CTFEable.

Admittedly, that probably doesn't include many functions which wouldn't
be inlined anyway. But still, while checking for which functions are
inlinable, the compiler could check for 'trivially CTFEable' at the same
time. If a CTFEable call is made to an inlinable function, it's probably
quicker to CTFE it rather than inline + optimize. Assuming of course
that we have a fast implementation of CTFE.

I completely agree that there will always be opportunities for CTFE
which will be missed unless you allow the compile time to become
arbitrarily long.
By the way, remember you asked at some point what removeAny in std.container is good for? CTFEability analysis could use such a primitive... Andrei
Jul 23 2010
prev sibling next sibling parent reply Jonathan M Davis <jmdavisprog gmail.com> writes:
On Wednesday 21 July 2010 00:09:05 Don wrote:
 While running the semantic on each function body, the compiler could
 fairly easily check to see if the function is CTFEable. (The main
 complication is that it has to guess about how many iterations are
 performed in loops). Then, when a CTFEable function is called with
 compile-time constants as arguments, it could run CTFE on it, even if it
 is not mandatory.
 
 Incidentally, having the function being marked as 'pure' doesn't really
 help at all for determining if it is CTFEable.
Hmm. As I understood it, it did. But I guess that if it did, the
compiler could technically determine whether a function was actually
pure anyway by looking at its body. All the pure attribute would do is
save it the trouble. So, the pure attribute wouldn't do much in that
sense except save the compiler some computation.

I can certainly see why being pure wouldn't be enough to be CTFEable
(since pure doesn't guarantee anything about whether its parameters are
known at compile time), but I would have thought that the lack of global
state would be required - at least the lack of global state which isn't
known at compile time - for it to be CTFEable, in which case the
pureness would help.

In any case, I'd obviously have to study it up a bit more to understand
exactly what's required for CTFEability. I have some idea, but obviously
not enough.

- Jonathan M Davis
Jul 21 2010
next sibling parent Don <nospam nospam.com> writes:
Jonathan M Davis wrote:
 On Wednesday 21 July 2010 00:09:05 Don wrote:
 While running the semantic on each function body, the compiler could
 fairly easily check to see if the function is CTFEable. (The main
 complication is that it has to guess about how many iterations are
 performed in loops). Then, when a CTFEable function is called with
 compile-time constants as arguments, it could run CTFE on it, even if it
 is not mandatory.

 Incidentally, having the function being marked as 'pure' doesn't really
 help at all for determining if it is CTFEable.
Hmm. As I understood it, it did. But I guess that if it did, the compiler could technically determine whether a function was actually pure anyway by looking at its body. All the pure attribute would do is save it the trouble. So, the pure attribute wouldn't do much in that sense except save the compiler some computation. I can certainly see why being pure wouldn't be enough to be CTFEable (since pure doesn't guarantee anything about whether it's parameters are know at compile time), but I would have thought that the lack of global state would be required - at least the lack of global state which isn't known at compile time - for it to be CTFEable, in which case the pureness would help.
The check for no globals is common to both CTFE and pure, but it's
pretty trivial. There are functions which are pure but not CTFEable -
e.g., CTFE requires source code to be available, pure does not. More
interestingly, there are many functions which are CTFEable but not pure:
it's OK for a CTFE function to modify values which are reachable through
its parameters. So although CTFEable and pure have significant overlap,
neither one is a subset of the other.
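[A small sketch of that last point, with hypothetical names: the helper
below mutates data reachable through its parameter, so it cannot be
strongly pure, yet CTFE can still evaluate a computation built on it:]

void fill(int[] a, int v)
{
    foreach (ref e; a)
        e = v;                 // mutation through a parameter: not pure
}

int[] make(size_t n, int v)
{
    auto r = new int[n];       // CTFE can allocate arrays
    fill(r, v);                // and call the impure helper
    return r;
}

enum table = make(4, 3);       // the whole computation still runs in CTFE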
 
 In any case, I'd obviously have to study it up a bit more to understand
exactly 
 what's required for CTFEability. I have some idea, but obviously not enough.
 
 - Jonathan M Davis
Jul 21 2010
prev sibling parent reply "Jim Balter" <Jim Balter.name> writes:
"Jonathan M Davis" <jmdavisprog gmail.com> wrote in message 
news:mailman.435.1279700666.24349.digitalmars-d puremagic.com...
 On Wednesday 21 July 2010 00:09:05 Don wrote:
 While running the semantic on each function body, the compiler could
 fairly easily check to see if the function is CTFEable. (The main
 complication is that it has to guess about how many iterations are
 performed in loops). Then, when a CTFEable function is called with
 compile-time constants as arguments, it could run CTFE on it, even if it
 is not mandatory.

 Incidentally, having the function being marked as 'pure' doesn't really
 help at all for determining if it is CTFEable.
Hmm. As I understood it, it did.
You understood right.
 But I guess that if it did, the compiler could
 technically determine whether a function was actually pure anyway by 
 looking at
 its body. All the pure attribute would do is save it the trouble. So, the 
 pure
 attribute wouldn't do much in that sense except save the compiler some
 computation.
Pure functions must halt, so the attribute lets you tell the compiler
something that it could only figure out with difficulty, if at all.
 I can certainly see why being pure wouldn't be enough to be CTFEable 
 (since pure
 doesn't guarantee anything about whether it's parameters are know at 
 compile
 time), but I would have thought that the lack of global state would be 
 required
 - at least the lack of global state which isn't known at compile time - 
 for it
 to be CTFEable, in which case the pureness would help.

 In any case, I'd obviously have to study it up a bit more to understand 
 exactly
 what's required for CTFEability. I have some idea, but obviously not 
 enough.

 - Jonathan M Davis 
Jul 24 2010
parent reply Don <nospam nospam.com> writes:
Jim Balter wrote:
 
 "Jonathan M Davis" <jmdavisprog gmail.com> wrote in message 
 news:mailman.435.1279700666.24349.digitalmars-d puremagic.com...
 On Wednesday 21 July 2010 00:09:05 Don wrote:
 While running the semantic on each function body, the compiler could
 fairly easily check to see if the function is CTFEable. (The main
 complication is that it has to guess about how many iterations are
 performed in loops). Then, when a CTFEable function is called with
 compile-time constants as arguments, it could run CTFE on it, even if it
 is not mandatory.

 Incidentally, having the function being marked as 'pure' doesn't really
 help at all for determining if it is CTFEable.
Hmm. As I understood it, it did.
You understood right.
 But I guess that if it did, the compiler could
 technically determine whether a function was actually pure anyway by 
 looking at
 its body. All the pure attribute would do is save it the trouble. So, 
 the pure
 attribute wouldn't do much in that sense except save the compiler some
 computation.
pure functions must halt, so the attribute let's you tell the compiler something that it could only figure out with difficulty, if at all.
Being marked as 'pure' is no guarantee that the function will terminate.
Jul 24 2010
next sibling parent "Jim Balter" <Jim Balter.name> writes:
"Don" <nospam nospam.com> wrote in message 
news:i2ffm7$2qf3$1 digitalmars.com...
 Jim Balter wrote:
 "Jonathan M Davis" <jmdavisprog gmail.com> wrote in message 
 news:mailman.435.1279700666.24349.digitalmars-d puremagic.com...
 On Wednesday 21 July 2010 00:09:05 Don wrote:
 While running the semantic on each function body, the compiler could
 fairly easily check to see if the function is CTFEable. (The main
 complication is that it has to guess about how many iterations are
 performed in loops). Then, when a CTFEable function is called with
 compile-time constants as arguments, it could run CTFE on it, even if 
 it
 is not mandatory.

 Incidentally, having the function being marked as 'pure' doesn't really
 help at all for determining if it is CTFEable.
Hmm. As I understood it, it did.
You understood right.
 But I guess that if it did, the compiler could
 technically determine whether a function was actually pure anyway by 
 looking at
 its body. All the pure attribute would do is save it the trouble. So, 
 the pure
 attribute wouldn't do much in that sense except save the compiler some
 computation.
pure functions must halt, so the attribute let's you tell the compiler something that it could only figure out with difficulty, if at all.
Being marked as 'pure' is no guarantee that the function will terminate.
A pure function has no side effects; one that doesn't terminate produces
no return value either (throwing an exception is a form of termination).
Thus a pure function that doesn't terminate is a bug. It would be
perfectly reasonable for a compiler to assume that a pure function does
terminate but to put a time limit on how long it spends evaluating it --
which it should do anyway, since even provably terminating functions can
take longer than the time to the heat death of the universe to complete.
As I noted before, the halting problem isn't relevant in practice,
because even if it were solvable, there are no time constraints on the
solution.
Jul 24 2010
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
Don wrote:
 Being marked as 'pure' is no guarantee that the function will terminate.
Right, and the compiler makes no attempt to check this, either.
Jul 24 2010
parent reply "Jim Balter" <Jim Balter.name> writes:
"Walter Bright" <newshound2 digitalmars.com> wrote in message 
news:i2flvi$4io$1 digitalmars.com...
 Don wrote:
 Being marked as 'pure' is no guarantee that the function will terminate.
Right, and the compiler makes no attempt to check this, either.
Of course the compiler makes no attempt to check -- that *would* be the
halting problem, but there's no reason for a compiler to attempt such a
check even if it could be done. Pure functions invoked on constant
values are excellent candidates for CTFE because they have no side
effects and are computable at compile time -- if they terminate. But
pure functions that don't terminate are almost entirely pointless; the
only non-pointless example I can think of is a kernel's idle process on
machines without an interruptible halt instruction.

Even if there were useful non-terminating pure functions, intentionally
putting the keyword "pure" on a function that doesn't terminate is
doable but dumb. Of course, unintentionally doing so is a mistake. But
dumbth and mistakes do happen, so the compiler can guard against taking
forever -- or too long, period (Ackermann's function is a famous example
of a pure function that takes astronomically long to compute even on
small input values -- good luck writing a compiler that can detect such
functions) -- by setting a timer. If the timer goes off before the
computation completes, the compiler should abandon CTFE and print a
warning message (unless suppressed by a pragma), since the function is
most likely buggy. (This satisfies Rainer Deyke's point that errors,
including run-on functions, should be detected at compile time rather
than waiting for run time.)
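[For reference, Ackermann's function in D -- a sketch only; no
problematic call of it ever completes, so overflow is ignored here:]

pure ulong ackermann(ulong m, ulong n)
{
    if (m == 0) return n + 1;
    if (n == 0) return ackermann(m - 1, 1);
    return ackermann(m - 1, ackermann(m, n - 1));
}

// ackermann(4, 2) equals 2^65536 - 3, a number with 19,729 decimal
// digits: no timer policy could wait for a CTFE attempt on it to finish.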
Jul 25 2010
parent reply Don <nospam nospam.com> writes:
Jim Balter wrote:
 "Walter Bright" <newshound2 digitalmars.com> wrote in message 
 news:i2flvi$4io$1 digitalmars.com...
 Don wrote:
 Being marked as 'pure' is no guarantee that the function will terminate.
Right, and the compiler makes no attempt to check this, either.
Of course the compiler makes no attempt to check -- that *would* be the halting problem, but there's no reason for a compiler to attempt such a check even if it could be done. Pure functions invoked on constant values are excellent candidates for CTFE because they have no side effects and are computable at compile time -- if they terminate. But pure functions that don't terminate are almost entirely pointless;
It's a bug.

 the only non-pointless example I can think of
 is a kernel's idle process on machines without an interruptible halt 
 instruction. Even if there were useful non-terminating pure functions, 
 intentionally putting the keyword "pure" on a function that doesn't 
 terminate is doable but dumb. Of course, unintentionally doing so is a 
 mistake. But dumbth and mistakes do happen, so the compiler can guard 
 against taking forever -- or too long, period (Ackermann's Function is a 
 famous example of a pure function that takes astronomically long to 
 compute even on small input values -- good luck on writing a compiler 
 that can detect such functions) -- by setting a timer. If the timer goes 
 off before the computation completes, the compiler should abandon CTFE 
 and print a warning message (unless suppressed by a pragma), since the 
 function is most likely buggy (this satisfies Rainer Deyke's point that 
 errors, including run-on functions, should be detected at compile time 
 rather than waiting for runtime).
I don't think you can say that it's "most likely buggy". A function to calculate pi, for example, is a pure function that may take days to run.
Jul 25 2010
parent "Jim Balter" <Jim Balter.name> writes:
"Don" <nospam nospam.com> wrote in message 
news:i2ice7$2i7r$1 digitalmars.com...
 Jim Balter wrote:
 "Walter Bright" <newshound2 digitalmars.com> wrote in message 
 news:i2flvi$4io$1 digitalmars.com...
 Don wrote:
 Being marked as 'pure' is no guarantee that the function will 
 terminate.
Right, and the compiler makes no attempt to check this, either.
Of course the compiler makes no attempt to check -- that *would* be the halting problem, but there's no reason for a compiler to attempt such a check even if it could be done. Pure functions invoked on constant values are excellent candidates for CTFE because they have no side effects and are computable at compile time -- if they terminate. But pure functions that don't terminate are almost entirely pointless;
It's a bug.
Yes, that was my point (I gave below the sole exception I'm aware of). That's why the fact that the pure keyword doesn't *guarantee* that the function terminates is irrelevant. In fact, the language could specify that a function marked "pure" must terminate and say that it's an error (which may or may not be detected by the compiler) if it does not.
 the only non-pointless example I can think of
 is a kernel''s idle process on machines without an interruptable halt 
 instruction. Even if there were useful non-terminating pure functions, 
 intentionally putting the keyword "pure" on a function that doesn't 
 terminate is doable but dumb. Of course, unintentionally doing so is a 
 mistake. But dumbth and mistakes do happen, so the compiler can guard 
 against taking forever -- or too long, period (Ackermann's Function is a 
 famous example of a pure function that takes astronomically long to 
 compute even on small input values -- good luck on writing a compiler 
 that can detect such functions) -- by setting a timer. If the timer goes 
 off before the computation completes, the compiler should abandon CTFE 
 and print a warning message (unless suppressed by a pragma), since the 
 function is most likely buggy (this satisfies Rainer Deyke's point that 
 errors, including run-on functions, should be detected at compile time 
 rather than waiting for runtime).
I don't think you can say that it's "most likely buggy". A function to calculate pi, for example, is a pure function that may take days to run.
I just mentioned Ackermann's function above, so obviously I am aware
that pure functions *can* take a very long time -- and that's a case
where an exact result can be computed in a finite amount of time, unlike
a complete decimal expansion of pi. However, in the real world, it's
quite unlikely that a pi calculator would be a pure function operating
entirely on constant values (required for CTFE) that takes days to run
-- in reality, the number of iterations or significant digits is
variable, intermediate results are printed or stored (possibly
memoized), short-lived pure functions are invoked by non-pure
frameworks, multiple threads coordinate the work, etc.

I DO think I can say that long-running pure functions are *most likely*
buggy, which is not contradicted by anecdotal counterexamples. But such
a Bayesian debate is not worth our time having; as I mentioned
elsewhere, there could be a noctfe pragma for functions the programmer
does not want the compiler to attempt to compute, and one could also
leave it up to the programmer to specify whether they want to be warned
about attempts to compute pure functions that take longer than some
threshold -- which would satisfy Rainer Deyke's desire to learn about
(possible) bugs at compile time rather than waiting for run time.
Jul 26 2010
prev sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Don wrote:
 Jonathan M Davis wrote:
 On Tuesday, July 20, 2010 17:24:33 bearophile wrote:
 Jonathan M Davis:
 But theoretically, it
 could be used for optimizations just like inlining (particularly with
 pure functions), and that being the case, it probably will at some
 point.
Pure functions go well with CTFE because they don't use global state,
but you can run code at compile time only if all the arguments are known
at compile time :-)

If some of their arguments are known at compile time and some of them
are not, and you want to optimize better, then you need partial
compilation, which is a bit hard to implement and probably makes the
compiler slow and complex.

So, for a practical form of manual partial compilation, some time ago I
proposed something similar to GCC __builtin_constant_p
(http://gcc.gnu.org/onlinedocs/gcc-4.1.2/gcc/Other-Builtins.html#index-g_t_005f_005fbuiltin_005fconstant_005fp-2392),
that is, syntax sugar that you can use inside functions to turn them
into templates (I don't know if you can also keep a single entry point
to preserve the possibility of taking their address). You can use a
static if with something like __traits(isCTArgument, foo) to tell
whether each function argument, such as foo, is known at compile time,
and do something with it if it is. This is like defining those arguments
as CT template arguments, which the optimizer can later optimize better
(and so perform the partial compilation).

Bye,
bearophile
Partial compilation would indeed be a big step forward, but it would
certainly be a lot of work. Just being able to run functions with CTFE
whenever they could be fully evaluated at compile time would lead to
some nice optimizations. Certainly, that would need to be achieved
before we could look at having partial compilation. Still, partial
compilation would certainly be cool if we ever reached that point.

- Jonathan M Davis
While running the semantic on each function body, the compiler could fairly easily check to see if the function is CTFEable. (The main complication is that it has to guess about how many iterations are performed in loops).
I don't think that's easy. The compiler would need to do a fixed-point
iteration on the transitive closure of the functions called by the
analyzed function.

Andrei
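[A toy rendition of that iteration, with entirely hypothetical
structures -- nothing like this exists in DMD: seed every function as
optimistically CTFEable, mark known blockers false beforehand, and
propagate over the call graph until nothing changes:]

import std.algorithm : all;

struct Func
{
    string name;
    Func*[] callees;        // direct calls; cycles (recursion) are fine
    bool ctfeable = true;   // optimistic seed; known blockers start false
}

void propagate(Func*[] funcs)
{
    bool changed = true;
    while (changed)         // iterate to a fixed point
    {
        changed = false;
        foreach (f; funcs)
        {
            immutable ok = f.ctfeable && f.callees.all!(c => c.ctfeable);
            if (ok != f.ctfeable)
            {
                f.ctfeable = ok;    // a callee dragged f down
                changed = true;
            }
        }
    }
}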
Jul 21 2010
prev sibling parent BCS <none anon.com> writes:
Hello Justin,

 "What is the difference between a (constant/immutable) value and the
 result of calling a side-effect free function that returns the same?"
A const value can't be changed from the current context, but could be
changed via another alias, or inside a non-side-effect-free function, or
via one of those in another thread. An immutable value can't change, end
of story.

The value a side-effect free function returns can change as soon as
anything changes: a local var (via nested functions), a global var, or
anything addressable through them.

The function most closely resembles the const var.

-- 
... <IXOYE><
Jul 21 2010
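[To make that distinction concrete, a minimal D sketch with made-up
names: the const view and the nullary function both track the mutable
global, while the immutable value cannot move:]

import std.stdio : writeln;

int g = 0;                  // mutable global state

int getG() { return g; }    // side-effect free, yet its result tracks g

void main()
{
    immutable int iv = 10;  // immutable: can never change
    const(int)* cv = &g;    // const view of mutable data
    g = 5;                  // changed via another alias
    writeln(*cv);           // 5: the const view saw the change
    writeln(getG());        // 5: the function's "value" moved too
    writeln(iv);            // 10: immutable, end of story
}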
Jul 21 2010