
digitalmars.D - Thoughts about "Compile-time types" talk

reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
Skimmed over Luís Marques' slides on "compile-time types", and felt
compelled to say the following in response:  I think we're still
approaching things from the wrong level of abstraction.  This whole
divide between compile-time and runtime is IMO an artificial one, and
one that we must transcend in order to break new ground.

I haven't fully thought through this yet, but the basic concept is that
there should be *no need* to distinguish between compile-time and
runtime in the general case, barring a small number of situations where
a decision has to be made.  Ideally, a function should have just a
single parameter list -- which Luís has already hit upon -- but I'd go
even further and propose that there should be NO distinction made
between runtime and compile-time parameters, at least at the declaration
level.  It should be the *caller* that decides whether an argument is
compile-time or runtime. And that decision in the caller does not have
to be made within the caller itself, but could be dictated by *its*
caller, and so on. As far as *most* code is concerned, you're just
passing arguments from one function to another -- whether it's CT or RT
is immaterial. Only at the very top level do you need to make a decision
whether something is compile-time or runtime.

The result should be that you can make a single top-level change and a
whole cascade of function arguments down the call chain become
instantiated at compile-time, or resp. runtime.  Most of the code in
between *does not have to change* at all; the compiler *infers* whether
something is available at CT or has to be deferred to RT, based on the
caller's arguments.  Some things, of course, like alias parameters and
what-not will force CT (though in theory one could implement RT
counterparts for them), but that should be *transparent* to the
surrounding code.

Just like in OO, changing a class's implementation ought not to require
changing every piece of code that uses the class (encapsulation and
Liskov substitution principle), so the distinction between CT and RT
should be transparent to *most* code.  Only code that actually cares
about the difference should need to choose between one or the other.
The rest of the code should remain agnostic, and thus remain symmetric
under any decision to switch between RT/CT.

For example, we could have first class types in the language: a function
can take a type argument and do stuff to it, and depending on whether
this type is known at CT, the compiler could resolve it at compile-time
or implement it at runtime using the equivalent of typeid() -- and the
function *doesn't have to care which*.  You could sort a list of types,
use range algorithms on them, etc., and if you call it at compile-time,
it will produce a compile-time result; if you call it at runtime, it
will produce a runtime result.  Which one it will be doesn't have to be
a decision that's arbitrarily decided within the function itself; it can
be delegated to higher-level code that has the wider context with which
to make the best decision.
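To illustrate the compile-time half of this with today's tools, here is a minimal
sketch using std.meta (the runtime half would need a typeid()-based equivalent,
which is exactly the choice the compiler could make for us):

    import std.meta : AliasSeq, staticSort;

    // Order types by size: a compile-time "sort a list of types"
    enum smallerThan(T, U) = T.sizeof < U.sizeof;

    alias Sorted = staticSort!(smallerThan, AliasSeq!(double, char, int));
    static assert(is(Sorted[0] == char) && is(Sorted[1] == int) && is(Sorted[2] == double));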

Another example: you could have general matrix multiplication code with
matrix size as a parameter, and at a higher level decide you only need,
say, 4D matrices, so the size can be made a CT argument for one project,
allowing the optimizer to generate the best code for the 4D-specific
case, but an RT argument for another project that wants to operate on
general n*n matrices, without needing to implement matrix multiplication
twice. The latter would have the size as an RT parameter, so you trade
off optimization for flexibility -- *at the decision of higher-level
code*, rather than arbitrarily tossing a coin or guessing what your
callers might want.
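To make this concrete, a rough sketch of how one writes it in today's D with a
single body (the names and the "N == 0 means runtime size" convention are made
up for illustration; reading "4D" as 4x4):

    // Sketch only: N > 0 fixes the size at compile time; N == 0 defers it to runtime.
    double[] matmul(size_t N = 0)(const double[] a, const double[] b, size_t rtN = 0)
    {
        static if (N > 0)
            enum n = N;        // known at compile time; optimizer can unroll/vectorize
        else
            immutable n = rtN; // only known at runtime

        auto c = new double[n * n];
        foreach (i; 0 .. n)
            foreach (j; 0 .. n)
            {
                double sum = 0;
                foreach (k; 0 .. n)
                    sum += a[i * n + k] * b[k * n + j];
                c[i * n + j] = sum;
            }
        return c;
    }

    // One project: auto c = matmul!4(a, b);   // 4x4 fixed at compile time
    // Another:     auto c = matmul(a, b, n);  // size decided at runtime

The point is that the caller-side choice between matmul!4 and matmul(..., n)
would no longer require the declaration to split the size into a template
parameter and a runtime parameter in the first place.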


T

-- 
MASM = Mana Ada Sistem, Man!
May 09 2019
next sibling parent JS <JS.Music.Works gmail.com> writes:
On Friday, 10 May 2019 at 00:33:04 UTC, H. S. Teoh wrote:
 Skimmed over Luís Marques' slides on "compile-time types", and 
 felt compelled to say the following in response:  I think we're 
 still approaching things from the wrong level of abstraction.  
 This whole divide between compile-time and runtime is IMO an 
 artificial one, and one that we must transcend in order to 
 break new ground.

 I haven't fully thought through this yet, but the basic concept 
 is that there should be *no need* to distinguish between 
 compile-time and runtime in the general case, barring a small 
 number of situations where a decision has to be made.  Ideally, 
 a function should have just a single parameter list -- which 
 Luís has already hit upon -- but I'd go even further and 
 propose that there should be NO distinction made between 
 runtime and compile-time parameters, at least at the 
 declaration level.  It should be the *caller* that decides 
 whether an argument is compile-time or runtime. And that 
 decision in the caller does not have to be made within the 
 caller itself, but could be dictated by *its* caller, and so 
 on. As far as *most* code is concerned, you're just passing 
 arguments from one function to another -- whether it's CT or RT 
 is immaterial. Only at the very top level do you need to make a 
 decision whether something is compile-time or runtime.

 The result should be that you can make a single top-level 
 change and a whole cascade of function arguments down the call 
 chain become instantiated at compile-time, or resp. runtime.  
 Most of the code in between *does not have to change* at all; 
 the compiler *infers* whether something is available at CT or 
 has to be deferred to RT, based on the caller's arguments.  
 Some things, of course, like alias parameters and what-not will 
 force CT (though in theory one could implement RT counterparts 
 for them), but that should be *transparent* to the surrounding 
 code.

 Just like in OO, changing a class's implementation ought not to 
 require changing every piece of code that uses the class 
 (encapsulation and Liskov substitution principle), so the 
 distinction between CT and RT should be transparent to *most* 
 code.  Only code that actually cares about the difference 
 should need to choose between one or the other. The rest of the 
 code should remain agnostic, and thus remain symmetric under 
 any decision to switch between RT/CT.

 For example, we could have first class types in the language: a 
 function can take a type argument and do stuff to it, and 
 depending on whether this type is known at CT, the compiler 
 could resolve it at compile-time or implement it at runtime 
 using the equivalent of typeid() -- and the function *doesn't 
 have to care which*.  You could sort a list of types, use range 
 algorithms on them, etc., and if you call it at compile-time, 
 it will produce a compile-time result; if you call it at 
 runtime, it will produce a runtime result.  Which one it will 
 be doesn't have to be a decision that's arbitrarily decided 
 within the function itself; it can be delegated to higher-level 
 code that has the wider context with which to make the best 
 decision.

 Another example, you could have general matrix multiplication 
 code with matrix size as a parameter, and at higher-level 
 decide you only need, say, 4D matrices, so the size can be made 
 a CT argument for one project, allowing the optimizer to 
 generate the best code for the 4D-specific case, but an RT 
 argument for another project that wants to operate on general 
 n*n matrices, without needing to implement matrix 
 multiplication twice. The latter would have the size as an RT 
 parameter, so you trade off optimization for flexibility -- *at 
 the decision of higher-level code*, rather than arbitrarily 
 tossing a coin or guessing what your callers might want.


 T
I have actually come up with this idea several months ago. You are basically spot on. The old paradigm of programming needs to change.

My idea is a bit more elaborate though: the type system is hierarchical and recursive. Essentially all programming is based on levels, which is essentially a parameterized type system on the level. One works at various levels and uses the same syntax at all levels. The compiler then simply tries to "collapse" all the levels to the lowest known level. The lowest level is runtime, which uses unknown inputs from the user and so cannot be collapsed any more.

I've been trying to think about how to represent this well in programming but haven't worked on it much. What I'm saying may not make much sense, but think about it like D's CT and RT... they are just two levels. But we can have higher levels that compile code into CT code for a higher level of abstraction. All higher levels of abstraction must be known so the code can be reduced to the lowest level, which is RT. RT code would be anything that is not known at compile time, and hence the program must be run to determine the information. Essentially RT is just the final compilation step. Such code though is unified at all levels: same semantics, same syntax, same type system, same everything. Essentially every statement is a statement that exists at some lowest level, and that level depends on what the statement "sees" and what "sees" it.

My initial idea was to have levels specified by the user as what level of abstraction they are working in. The compiler then reduces the highest levels of abstraction, filling out the inputs for the level right below it... which then should allow it to be reduced, and everything cascades down to the final level of RT, which uses things that cannot be reduced. The compiler realizes it can only be reduced by "executing" the program. The program itself sort of is the compiler then, filling out the rest of the information to finally "compile" the entire program.

I've been trying to come up with a universal syntax but haven't really thought about it much as it's a long term project. I'm trying to get away from C's syntax, but my initial conceptual idea was to attach levels to statements or blocks. One then just works in the level they want and builds code for lower levels. E.g.,

    int x[4] = 34;

is an int at level 4. It is known to all lower levels of code, which will, if the value didn't change at any higher level, see it as 34. E.g.,

    int x[3] = x[4]*3; // compiler can reduce this to a constant in level 3
                       // after level 4 has been reduced

The main idea, which we are both hitting on here, is that information that is known is known, and information that isn't requires "running" the program. But really these are, in some sense, the same. The runtime information is known, but only by the user... as if the user "completes" the program. E.g.,

    readln()

The reason the statement above cannot reduce is because it is not known at "compile time"... but that is relatively arbitrary if one thinks of runtime as just the "completion of compile time": the entire process, including the user input, is "compilation", and it's all just reducing to a known state (which for user input is delayed).

The idea with the levels above is that all the code syntax is the same at every level. In D, RT and CT are not unified. CTFE bridges it, but static if and if are not the same syntax; they distinguish between the two levels. To get all of it to work may be complicated.

I'm only in the initial investigations of it and I don't plan on putting a lot of time into it. I'm just recording my ideas as I come up with them.

In any case, all I have worked on is creating the TYPE. The TYPE is simply any type... and everything is a type. It's so general as to include anything (functions, statements, other types, etc). It's what unifies everything. The compiler then just attempts to reduce any types to "known" values. At the end of "compilation", anything that is not known is either an error or runtime (the type cannot be reduced without user input). Typing of course is categorical, and one then has to create abstract subtyping, but it unifies the type system, sorta like Object does with OOP.

I believe though that not just this level of abstraction is required, but also a new IDE that supports the level of abstraction along with a higher level of text/graphical entry.
May 09 2019
prev sibling next sibling parent reply Martin Tschierschke <mt smartdolphin.de> writes:
On Friday, 10 May 2019 at 00:33:04 UTC, H. S. Teoh wrote:
[...]

 I haven't fully thought through this yet, but the basic concept 
 is that there should be *no need* to distinguish between 
 compile-time and runtime in the general case, barring a small 
 number of situations where a decision has to be made.
[...]

I was thinking and working in the same direction: when using regEx you can use the runtime (RT) or the compile time (CT) version, but why not let the compiler make the decisions?

The solution was from someone else in the forum, but it resulted in the following code:

    auto reg(alias var)()
    {
        static if (__traits(compiles, {enum ctfeFmt = var;}))
        {
            // "Promotion" to compile time value
            enum ctfeReg = var;
            pragma(msg, "ctRegex used");
            return ctRegex!ctfeReg;
        }
        else
        {
            return regex(var);
            pragma(msg, "regex used");
        }
    }

The trick is the alias var, which can then be checked with __traits(compiles, {enum ctfeFmt = var;}): is it an immutable value present at compile time or not? Now you may use

    auto myregex = reg!(Regex_expression);

at every place in your code, and depending on whether it compiles to the ctfeReg or not, the version is selected.

So there is a way to promote a ctVariable to runtime; if this were possible with a runtime value, too, you would get rid of the '!' syntax.

Regards mt.
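For example, a minimal usage sketch, assuming the reg template above is in scope and std.regex is imported (the command-line argument stands in for a value only known at runtime):

    import std.regex;
    import std.stdio : writeln;

    void main(string[] args)
    {
        enum ctPattern = `[0-9]+`;
        auto r1 = reg!ctPattern;       // value readable at compile time -> ctRegex branch

        string rtPattern = args.length > 1 ? args[1] : `[a-z]+`;
        auto r2 = reg!rtPattern;       // alias binds to a runtime local -> regex() branch

        writeln(!matchFirst("123abc", r1).empty);   // true
        writeln(!matchFirst("123abc", r2).empty);   // true with the default pattern
    }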
May 13 2019
next sibling parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Mon, May 13, 2019 at 08:35:39AM +0000, Martin Tschierschke via Digitalmars-d
wrote:
 On Friday, 10 May 2019 at 00:33:04 UTC, H. S. Teoh wrote:
 [...]
 I haven't fully thought through this yet, but the basic concept is
 that there should be *no need* to distinguish between compile-time
 and runtime in the general case, barring a small number of
 situations where a decision has to be made.
[...] I was thinking and working in the same direction, when using regEx you can use the runtime (RT) or the compile time (CT) version, but why not let the compiler make the decisions?
It would be great if the compiler could make all of the decisions, but I think at some point, some level of control would be nice or even necessary.

What I envision is this:

- Most parameters are not specifically designated compile-time or runtime; they are just general parameters. So your function fun(x,y,z) has 3 parameters that could be runtime, or compile-time, or a mix of either. Which, exactly, is not decided at the declaration of the function.

- At some higher level along the call chain, perhaps in main() but likely in some subordinate but still high-level function, a decision will be made to, for example, call fun() with a literal argument, a compile-time known value, and a runtime argument obtained from user input. This then causes a percolation of CT/RT designations down the call chain: if x is bound to a literal, the compiler can pass it as a compile-time argument. Then inside fun(), it may pass x to another function gun(p,q,r). That in turn fixes one or more parameters of gun() as compile-time or runtime, and so on. Similarly, if y as a parameter of fun() is bound to a runtime value, and y is also passed to gun() inside fun()'s body, then that "forces" the corresponding parameter to be bound to a runtime value.

You could think of it as all functions being templates by default, and they get instantiated when called with a specific combination of runtime/compile-time arguments. With the added bonus that you don't have to decide which parameters are template parameters and which are runtime parameters; the compiler infers that for you based on what kind of arguments were passed to it.

Of course, sometimes you want to force a certain parameter to be either runtime or compile-time, e.g., to control template bloat. So perhaps some kind of designation like ct or rt on a parameter:

    // Tentative syntax
    auto fun(ct int x, rt int y) { ... }

This would force x to always be known at compile-time, whereas y can accept either (you can think of it as ct "implicitly converts to" rt, but not the other way round). If a parameter is not designated either way, then the compiler is free to choose how it will be implemented.


T

-- 
If it tastes good, it's probably bad for you.
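For contrast, roughly what today's D forces when both options are wanted (a sketch with made-up names; the proposal above would collapse the two declarations into one):

    // x fixed at compile time: a template value parameter
    auto fun(int x)(int y) { return x * y; }

    // x only known at runtime: a plain parameter -- same body, written twice
    auto fun(int x, int y) { return x * y; }

    // Callers:
    //   auto a = fun!4(rt);    // x known at compile time
    //   auto b = fun(4, rt);   // x passed at runtime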
May 14 2019
parent reply NaN <divide by.zero> writes:
On Tuesday, 14 May 2019 at 17:44:17 UTC, H. S. Teoh wrote:
 On Mon, May 13, 2019 at 08:35:39AM +0000, Martin Tschierschke 
 via Digitalmars-d wrote:
 On Friday, 10 May 2019 at 00:33:04 UTC, H. S. Teoh wrote: [...]
 I haven't fully thought through this yet, but the basic 
 concept is that there should be *no need* to distinguish 
 between compile-time and runtime in the general case, 
 barring a small number of situations where a decision has to 
 be made.
[...] I was thinking and working in the same direction, when using regEx you can use the runtime (RT) or the compile time (CT) version, but why not let the compiler make the decisions?
It would be great if the compiler could make all of the decisions, but I think at some point, some level of control would be nice or even necessary.
If you envisage a regular if statement being able to be both CT/RT depending on whether the value inside its brackets is known at CT or not, what do you do about whether it introduces a new scope or not? Static if does not, but regular if does; if you want them unified, something has to give.
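Concretely, in today's D (a minimal sketch):

    void example()
    {
        enum x = 3;

        static if (x == 3)
        {
            int y = 1;
        }
        y = 2;        // OK: static if does not introduce a new scope

        if (x == 3)
        {
            int z = 1;
        }
        // z = 2;     // would not compile: z is scoped to the if body
    }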
May 15 2019
next sibling parent reply Alex <AJ gmail.com> writes:
On Wednesday, 15 May 2019 at 18:31:57 UTC, NaN wrote:
 On Tuesday, 14 May 2019 at 17:44:17 UTC, H. S. Teoh wrote:
 On Mon, May 13, 2019 at 08:35:39AM +0000, Martin Tschierschke 
 via Digitalmars-d wrote:
 On Friday, 10 May 2019 at 00:33:04 UTC, H. S. Teoh wrote: 
 [...]
 I haven't fully thought through this yet, but the basic 
 concept is that there should be *no need* to distinguish 
 between compile-time and runtime in the general case, 
 barring a small number of situations where a decision has 
 to be made.
[...] I was thinking and working in the same direction, when using regEx you can use the runtime (RT) or the compile time (CT) version, but why not let the compiler make the decisions?
It would be great if the compiler could make all of the decisions, but I think at some point, some level of control would be nice or even necessary.
If you envisage a regular if statement being able to be both CT/RT depending on whether the value inside its brackets is known at CT or not what do you do about whether it introduces a new scope or not? Static if does not, but regular if does, if you want them unified something has to give.
That is not true. Any type is either known at "CT" or not. If it is not, then it can't be simplified. If it is known, then it can. A compiler just simplifies everything it can at CT and then runs the program to do the rest of the simplification.

The reason why it confuses you is that you are thinking in the wrong paradigm. To unify requires a language that is built under the unified concept. D's language was not designed with this unified concept, but it obviously does most of the work because it does do CT compilation. Most compilers do. Any optimization/constant folding is CT compilation, because the compiler knows that something can be computed.

So ideally, internally, a compiler would simply determine which statements are computable at compile time and simplify them, possibly simplifying an entire program... what is not known then is left to runtime to figure out. D already does all this, but the language clearly was not designed with the intention to unify them.

For example,

    int x = 3;
    if (x == 3) fart;

Here the compiler should be able to reason that x is 3 and to optimize it all to

    fart;

This requires flow analysis, but if everything is 100% analyzed correctly the compiler could in theory determine if any statement is reducible and cascade everything if necessary until all that's left are things that are not reducible. It's pretty simple in theory, but probably very difficult to modify a pre-existing compiler that was designed without it to use it.

Any special cases then are handled by special keywords or syntaxes... which ideally would not have to exist at all.

Imagine if all statements in a program were known at compile time, even things like readln (as if it could see into the future)... Then a compiler would compile everything down to a single return. It could evaluate everything, every mouse click, every file IO or user choice, etc... A program is simply a compiler that is compiling as it is being run, the users adding in the missing bits.
May 15 2019
next sibling parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Wed, May 15, 2019 at 06:57:02PM +0000, Alex via Digitalmars-d wrote:
[...]
 D already does all this but the language clearly was not designed with
 the intention to unify them.
 
 For example,
 
 int x = 3;
 if (x == 3) fart;
 
 Here the compiler should be able to reason that x = 3 and to 
 it all to
 
 fart;
 
 
 This requires flow analysis but if everything is 100% analyzed
 correctly the compiler could in theory determine if any statement is
 reducible and cascade everything if necessary until all that's left
 are things that are not reducible.
 
 It's pretty simple in theory but probably very difficult to modify a
 pre-existing compiler that was designed without it to use it.
Good news: this is not just theory, and you don't have to modify any compiler to achieve this. Just compile the above code with LDC (ldc2 -O3) and look at the disassembly. :-)

In fact, the LDC optimizer is capable of far more than you might think. I've seen it optimize an entire benchmark, complete with functions, user-defined types, etc., into a single return instruction, because it determined that the program's output does not depend on any of it.

Writing the output into a file (to cause a visible effect that prevents the optimizer from eliding the entire program) doesn't always help either, because the optimizer would then run the entire program at compile-time and emit the equivalent of:

    enum answer = 123;
    outputfile.writeln(answer);

The only way to get an accurate benchmark is to make it do non-trivial work -- non-trivial meaning every operation the program does contributes to its output, and said operations cannot be (easily) simplified into a precomputed result. The easiest way to do this is for the program to read some input that can only be known at runtime, then perform the calculation on this input.

(Of course, there's a certain complexity limit to LDC's aggressive optimizer; I don't think it'd optimize away an NP-complete problem with fixed input at compile-time, for example. But it's much easier to read an integer at runtime than to implement a SAT solver just so LDC won't optimize it all away at compile-time. :-D Well actually, I'm sure the LDC optimizer will give up long before you give it an NP-complete problem, but in theory it *could* just run the entire program at compile-time if all inputs are already known.)

The DMD optimizer, by comparison, is a sore loser in this game. That's why these days I don't even bother considering it where performance is concerned.

But the other point to all this is that while LDC *can* do all of this at compile-time, meaning the LLVM backend can do all of this for other languages like C and C++ too, what it *cannot* do without language support is use the result of such a computation to influence program structure. That's where D's CTFE + AST manipulation becomes such a powerful tool. And that's where further unification of RT/CT concepts in the language will give us an even more powerfully-expressive language.


T

-- 
Computers aren't intelligent; they only think they are.
May 15 2019
parent Alex <AJ gmail.com> writes:
On Wednesday, 15 May 2019 at 19:28:21 UTC, H. S. Teoh wrote:
 On Wed, May 15, 2019 at 06:57:02PM +0000, Alex via 
 Digitalmars-d wrote: [...]
 D already does all this but the language clearly was not 
 designed with the intention to unify them.
 
 For example,
 
 int x = 3;
 if (x == 3) fart;
 
 Here the compiler should be able to reason that x = 3 and to 
 optimize it all to
 
 fart;
 
 
 This requires flow analysis but if everything is 100% analyzed 
 correctly the compiler could in theory determine if any 
 statement is reducible and cascade everything if necessary 
 until all that's left are things that are not reducible.
 
 It's pretty simple in theory but probably very difficult to 
 modify a pre-existing compiler that was designed without it to 
 use it.
Good news: this is not just theory, and you don't have to modify any compiler to achieve this. Just compile the above code with LDC (ldc2 -O3) and look at the disassembly. :-)
Yeah, I was just using it as an example.... most modern compilers can do some mixture and in fact do.
 In fact, the LDC optimizer is capable of far more than you 
 might think. I've seen it optimize an entire benchmark, 
 complete with functions, user-defined types, etc., into a 
 single return instruction because it determined that the 
 program's output does not depend on any of it.
Yes, this is because the evolution of compilers is moving towards what we are talking about. We are not the first to think about things this way; in fact, all things evolve in such ways. The "first compiler" (the abacus?) was meta programming at the time.
 Writing the output into a file (cause a visible effect to 
 prevent the optimizer from eliding the entire program) doesn't 
 always help either, because the optimizer would then run the 
 entire program at compile-time then emit the equivalent of:

 	enum answer = 123;
 	outputfile.writeln(answer);

 The only way to get an accurate benchmark is to make it do 
 non-trivial work -- non-trivial meaning every operation the 
 program does contributes to its output, and said operations 
 cannot be (easily) simplified into a precomputed result.  The 
 easiest way to do this is for the program to read some input 
 that can only be known at runtime, then perform the calculation 
 on this input.  (Of course, there's a certain complexity limit 
 to LDC's aggressive optimizer; I don't think it'd optimize away 
 an NP-complete problem with fixed input at compile-time, for 
 example.  But it's much easier to read an integer at runtime 
 than to implement a SAT solver just so LDC won't optimize it 
 all away at compile-time. :-D  Well actually, I'm sure the LDC 
 optimizer will give up long before you give it an NP-complete 
 problem, but in theory it *could* just run the entire program 
 at compile-time if all inputs are already known.)
A program, in essence, is a very complicated mathematical equation. A compiler "simplifies" the equation so that it is easier to work with (faster, less space, etc).

What's more mind blowing is that this is actually true... that is, the universe seems to be one giant mathematical processing machine. Men 200 years ago working on the foundations of computing had no idea about this stuff and that there would be these deep relationships between math, computers, and life itself. I think humanity is just scratching the surface though.

In any case, a program is just an equation, a compiler a simplifier. A compiler attempts to compile everything down to a final result; certain inputs are not known at compile time, so they are determined at "run time".

Imagine this: imagine you have some complex program, say a PC video game. What is the purpose of this program? Is it to run it and experience it? NAY! It is ultimately a final result! If the compiler could, hypothetically, compile it down to a final value, that would be ideal. What is the final value though? Well, it is the experience of the game in the human mind. Imagine you could experience it without having to waste hours and hours... that would be the ideal compiler. Obviously here I'm taking a very general definition of compiler... but again, this is where things are headed.

The universe has time and space... the only way to make more time is to reduce the costs and increase the space. Eventually humans will have uC in their brains where they can experience things much quicker, interface with the "compiler" much quicker, etc... [probably thousands of years off, if humanity makes it].
 The DMD optimizer, by comparison, is a sore loser in this game. 
 That's why these days I don't even bother considering it where 
 performance is concerned.


 But the other point to all this, is that while LDC *can* do all 
 of this at compile-time, meaning the LLVM backend can do all of 
 this, for other languages like C and C++ too, what it *cannot* 
 do without language support is to use the result of such a 
 computation to influence program structure.  That's where D's 
 CTFE + AST manipulation becomes such a powerful tool.  And 
 that's where further unification of RT/CT concepts in the 
 language will give us an even more powerfully-expressive 
 language.
It may be able to do such things as you describe. I'm not claiming anything is impossible, quite the contrary. I'm mainly talking about what is and what could be. LDC may do quite a bit more work in this area.

Ultimately syntax is an abstraction, and it is a hurdle. Ideally we would have one symbol for one code, a unique hashing for all programs (Gödel theory). Then it would be very easy to write a program! ;) Of course looking up the right symbol would take forever. In fact, one could describe programming as precisely looking up this code, and it is quite complex and easy to get the wrong code (e.g., all programs are correct, we just choose the wrong one).

My main point with D's language is that it has separate CT and RT constructs. enum is CT. This complicates things. If D was designed with the concept of LDC and was able to simply optimize all "RT" code that could be optimized, then the distinction isn't needed... although the separation does make it easier to reason about... everyone knows an enum is CT.

My way of thinking is this: all programming should be thought of as "CT". That is, all data is known, it just may be specifically known only in the future. A compiler cannot reduce the future state since it is "unknown" at the present. So it delays the compilation until the future (when you click the button or press a key or insert the USB device). This is of course just thinking of CT and RT slightly differently. It is more functional. But what it does is shift the emphasis in the right direction.

Why? Because if one writes code as if it were all RT, then one tends to prevent optimizations from occurring (as DMD does). If one thinks of CT, one usually has the implicit idea that things are going to be reduced (as LDC does). It's more of a mindset, but it has repercussions. E.g., if one always uses static if as the default, then one is thinking in CT. If one always defaults to if, then one is thinking of RT. The difference is that the compiler can always optimize the static if, while it may or may not (LDC or DMD) optimize the standard if. Which, as you have pointed out, usually makes no true difference: either the if can or cannot be optimized depending on the state.

LDC is simply a more powerful compiler that "reasons" about the program. That is, it understands what it is doing more than DMD. DMD does things blind. Makes a lot of assumptions.

Until there is a massive paradigm shift in programming (e.g., from punch cards to assembly), the only way to optimize code is going to be to design languages and compilers that are optimal. That is the progression of compilers, as we see with LDC vs DMD. Programming is getting more complex, not less. But it is also becoming more optimal... that is the progression of all things... even compilers evolve (in the real sense of evolution).

I think my overall point is that D's language design itself has made the artificial distinction between CT and RT and now we have to live with it... The distinction was made up front (in the language) rather than waiting to the last minute (in the compiler). Of course, this problem started way back with the "first" programming language. Some point in the future someone will be saying the same types of things about LDC and some other "advanced" concept.

The more I program in D, the more I find meta programming a burden. Not because it is not powerful, but because I have to think with two hats on at the same time. It's not difficult until I stop programming in D for a while and have to find the other hat and get good at balancing both of them on my head again. D's meta programming is powerful, but it is not natural. Ideally we would have a language that is both powerful and natural. I think Haskell might be like this, but it is unnatural in other ways. Of course, at the end of the day, it is what it is...
May 15 2019
prev sibling parent reply NaN <divide by.zero> writes:
On Wednesday, 15 May 2019 at 18:57:02 UTC, Alex wrote:
 On Wednesday, 15 May 2019 at 18:31:57 UTC, NaN wrote:
 On Tuesday, 14 May 2019 at 17:44:17 UTC, H. S. Teoh wrote:
 On Mon, May 13, 2019 at 08:35:39AM +0000, Martin Tschierschke 
 via Digitalmars-d wrote:
 If you envisage a regular if statement being able to be both 
 CT/RT depending on whether the value inside its brackets is 
 known at CT or not what do you do about whether it introduces 
 a new scope or not? Static if does not, but regular if does, 
 if you want them unified something has to give.
That is not true. Any type is either known at "CT" or not. If it is not then it can't be simplified. If it is known then it can. A compiler just simplifies everything it can at CT and then runs the program to do the rest of the simplification.
You're conflating how it is implemented with the semantics of the actual language. I understand how static if works. What I'm saying is that if you want to just have "if" and for the compiler to infer whether it's CT or RT, then you have the same construct with different semantics depending on the context.
May 16 2019
parent reply Alex <AJ gmail.com> writes:
On Thursday, 16 May 2019 at 08:12:49 UTC, NaN wrote:
 On Wednesday, 15 May 2019 at 18:57:02 UTC, Alex wrote:
 On Wednesday, 15 May 2019 at 18:31:57 UTC, NaN wrote:
 On Tuesday, 14 May 2019 at 17:44:17 UTC, H. S. Teoh wrote:
 On Mon, May 13, 2019 at 08:35:39AM +0000, Martin 
 Tschierschke via Digitalmars-d wrote:
 If you envisage a regular if statement being able to be both 
 CT/RT depending on whether the value inside its brackets is 
 known at CT or not what do you do about whether it introduces 
 a new scope or not? Static if does not, but regular if does, 
 if you want them unified something has to give.
That is not true. Any type is either known at "CT" or not. If it is not then it can't be simplified. If it is known then it can. A compiler just simplifies everything it can at CT and then runs the program to do the rest of the simplification.
You're conflating how it is implemented with the semantics of the actual language. I understand how static if works, what im saying is if you want to just have "if" and for the compiler to infer whether it's CT or RT, then you have the same construct with different semantics depending on the context.
No, you don't get it. We are talking about a hypothetical compiler that doesn't have to have different contexts. In D, the contexts are arbitrarily separated in the language...

What we are ultimately talking about here is that CT and RT are not two different concepts but one concept, with some very minute distinction for RT. As a programmer you shouldn't care if something is CT or RT, and hence you shouldn't even know there is a difference.

What you are saying is that you have one if and two contexts. What I'm saying is that you have one if and one context. That is, CT programming and runtime programming are NOT treated as being two different universes with some overlap, but the same universe with a slight boundary.

For example, in D we have

    enum x = 4;

and

    int y = 4;

That is explicitly two different programming contexts, one CT and the other RT. But that is the fallacy. They are EXACTLY identical programmatically. Only when y is modified at some point later in the code do things potentially change. enum says we will never change x at CT... but why can't the compiler figure that out automatically rather than forcing us to play by its rules and have a separate context?

CTFE sort of blends the two and it is a step in the direction of unifying CT and RT. Of course, it will never remove enum from the language...

The point is that D, and most programming languages, create a very strong distinction between CT and RT when the truth is that there is virtually no distinction. This happens because programming languages started out as almost entirely RT and CT was added on top of them. This created the separation, not because there actually is a theoretical one. In the context of category theory, RT is simply the single unknown category on which any code that depends must be deferred from computation until it is known (which occurs when we run the program and "compilation" can finish).

The problem is that 99.9% of programmers think almost entirely in terms of RT, even when they do CT programming. For them a compiler just takes code and spits out machine code... but they do not see how it is all connected. A CPU is also a compiler and part of the compilation process: it takes certain bit patterns and compiles them down to others. Seeing things in this larger process shows one the bigger picture and how there are many artificial boundaries, and some are no longer needed.
May 16 2019
next sibling parent reply Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= <ola.fosheim.grostad gmail.com> writes:
On Thursday, 16 May 2019 at 09:35:16 UTC, Alex wrote:
 For example, in D we have

 enum x = 4;
 and
 int y = 4;

 That is explicitly two different programming contexts. One CT 
 and the other RT. But that is the fallacy. They are EXACTLY 
 identical programmatically.

 Only when y is modified at some point later in the code does 
 things potentially change. enum says we will never change x at 
 CT... but why can't the compiler figure that out automatically 
 rather than forcing us to play by it's rules and have a 
 separate context?
I agree with your basic goals. But there are a few issues:

1. To avoid the special-casing you need to add type-variables as proper variables in the language. Then you can have functions as meta-level type-constructors.

2. You need to provide a construct for ensuring that a value is resolved at compile time.

3. How do you ensure that the library code you write is structured in a way that doesn't seem to arbitrarily break?

4. Corollary of 3: how do you ensure that library code doesn't generate slow code paths spent on resolving typing information at runtime?

Anyway, I don't disagree with your goals, but you would have a completely different language.
May 16 2019
parent reply Alex <AJ gmail.com> writes:
On Thursday, 16 May 2019 at 13:05:52 UTC, Ola Fosheim Grøstad 
wrote:
 On Thursday, 16 May 2019 at 09:35:16 UTC, Alex wrote:
 For example, in D we have

 enum x = 4;
 and
 int y = 4;

 That is explicitly two different programming contexts. One CT 
 and the other RT. But that is the fallacy. They are EXACTLY 
 identical programmatically.

 Only when y is modified at some point later in the code does 
 things potentially change. enum says we will never change x at 
 CT... but why can't the compiler figure that out automatically 
 rather than forcing us to play by it's rules and have a 
 separate context?
 I agree with your basic goals. But there are a few issues:

 1. To avoid the special-casing you need to add type-variables as proper variables in the language. Then you can have functions as meta-level type-constructors.

 2. You need to provide a construct for ensuring that a value is resolved at compile time.

 3. How do you ensure that the library code you write is structured in a way that doesn't seem to arbitrarily break?

 4. Corollary of 3: how do you ensure that library code doesn't generate slow code paths spent on resolving typing information at runtime?

 Anyway, I don't disagree with your goals, but you would have a completely different language.

Yeah, I didn't mean that D itself would be this. My point was that this is where languages are ultimately headed. Knowing where we are going helps us get there. D itself might not be able to do this, but the goal would be to get closer to the ideal. How that is achieved properly for D is not in my domain of expertise.
Yeah, I didn't mean that that D it self would be this. My point was that this is where languages are ultimately headed. Knowing where we are going helps us get there. D itself might not be able to do this but the goal would be to get closer to the ideal. How that is achieved properly for D is not in my domain of expertise.
May 16 2019
parent reply Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= <ola.fosheim.grostad gmail.com> writes:
On Friday, 17 May 2019 at 00:25:36 UTC, Alex wrote:
 My point was that this is where languages are ultimately 
 headed.  Knowing where we are going helps us get there.
It would be interesting to use a language like that, for sure.
May 17 2019
parent reply Alex <AJ gmail.com> writes:
On Friday, 17 May 2019 at 09:15:14 UTC, Ola Fosheim Grøstad wrote:
 On Friday, 17 May 2019 at 00:25:36 UTC, Alex wrote:
 My point was that this is where languages are ultimately 
 headed.  Knowing where we are going helps us get there.
It would be interesting to use a language like that, for sure.
You'll have to wait a few millennia ;/ And it will require people to create it.

After all, a thousand years ago the Greeks were contemplating such things in their own limited way, wishing there was a better way to do something... Socrates would be amazed at what we have, just as you will be amazed at what the future brings (if humans don't destroy themselves in the process). But there is a clear evolutionary goal of computation, a direction to where things are moving that is beyond humans' control. 100 years ago the computer scientists had no clue about the complexities of programming, yet everything they did was a stepping stone in the natural logical evolution of computation. Mathematics was the first computer; physics then was created, which helped speed things up tremendously; who knows what is next. With quantum computing, if it is in the right direction, a new programming paradigm and languages will need to be laid down to make it practical.

Imagine when you first learned programming and how primitively you thought compared to now. That is a microcosm of what is happening on the large scale. Compilers are still in the primitive stage on the higher level, just as your programming knowledge is primitive on a higher level (and very advanced on a lower level).

The thing is, abstraction is the key to being able to deal with complexity. That is the only way humans can do what they do. Compilers that make it hard to do abstraction become very difficult to use for complexity. The only languages I see that handle complexity on any meaningful level are functional programming languages. Procedural languages actually seem to create an upper limit where it becomes exponentially harder to do anything past a certain amount of complexity. The problem with functional programming languages is they are difficult to use for simple stuff, and since all programs start out as simple, it becomes very hard to find a happy medium.

It's as if one needs a D+Haskell combo that works seamlessly together, where Haskell can build the abstraction and D can handle the nitty gritty details, but where there would be a perfect blend between the procedural and the functional, and one can work at any level at any time without getting stuck (hence one chooses the right amount for the particular task, which might be coding a graphics function or designing an OOP-like hierarchy). Most languages are hammers: you are stuck with using them to solve all the problems, and if they don't solve a particular problem well then you are screwed, you just have to hammer away until you get somewhere. Unfortunately D+Haskell would be an entirely new language.

One way to see this is the wave/particle duality. Humans are notorious for choosing one view or the other in things... reality is that there is no distinction... there is just one thing. Same goes for programming. Procedural and functional are just two different extremes, and making the distinction is actually limiting in the long run. The goal with such things is choosing the right view for the right problem at the right time, and then one can generate the solution very easily. Difficulty is basically using the wrong tool for the job.

[And you'll notice that most D users use D in a specific job that it excels at and then believe that it is a great language because it does their job well... they just chose the right tool (relatively speaking) for their job. They generally fail to realize there are many tools and many jobs. The same goes for most Haskell users and most Python users, etc.]
May 17 2019
parent reply Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= <ola.fosheim.grostad gmail.com> writes:
On Friday, 17 May 2019 at 10:51:48 UTC, Alex wrote:
 You'll have to wait a few millennia ;/
Not necessarily. A dynamic language like Python is quite close in some respects, except you usually don't want to be bogged down with types when doing practical programming in Python.

Anyway, if you want to design such a language the best way to go would probably be to first build an interpreted language that has the desired semantics. Then try to envision how that could be turned into a compiled language and do several versions of alternative designs.
 Socrates would be amazing at what we have just as you will be 
 amazing at what the future brings(if humans don't destroy 
 themselves in the process).
Yes, there is that caveat.
 With quantum computing, if it is in the right direction, a new 
 programming paradigm and languages will need to be laid to make 
 it practical.
I don't think quantum computing is a prerequisite.

What I believe will happen is that "machine learning" will fuel the making of new hardware with less "programmer control" and much more distributed architectures for computation. So, when that new architecture becomes a commodity, then we'll see a demand for new languages too. But the market is too small in the foreseeable future, so at least for the next decades we will just get more of the same good old IBM PC like hardware design (evolved, sure, but there are some severe limitations in step-by-step backwards compatibility).

I guess there will be a market shift if/when robots become household items. Lawnmowers is a beginning, I guess.
 That is a microcosm of what is happening on the large scale.
I don't think it is happening yet though. The biggest change is in the commercial usage of machine learning. I think contemporary applied machine learning is still at a very basic level, but the ability to increase the scale has made it much more useful.
 Compilers are still in the primitive stage on the higher level 
 just as your programming knowledge is primitive on a higher 
 level(and very advanced on a lower level).
Yes, individual languages are very primitive. However, if you look at the systems built on top of them, then there is some level of sophistication.
 difficult to use for complexity. The only languages I see that 
 handle complexity on any meaningful level are functional 
 programming languages.
Logic programming languages, but they are difficult to utilize outside narrow domains. Although I believe metaprogramming for the type system would be better done with a logic PL. How to make it accessible is a real issue, though. Another class would be languages with builtin proof systems.
 Procedural languages actually seem to create an upper limit 
 where it becomes exponentially harder to do anything past a 
 certain amount of complexity.
You can do functional programming in imperative languages too, it is just that you tend not to do it. Anyway, there are mixed languages.
 The problem with functional programming languages is they are 
 difficult to use for simple stuff and since all programs start 
 out as simple, it becomes very hard to find a happy medium.
Well, I don't know. I think the main issue is that all programming languages lack the ability to create a visual expression that makes the code easy to reason about. Basically, the code looks visually too uniform and similar, and we have to struggle to read meaning into the code. So as a result we need to have the model for large sections of the program in our head. Which is hard.

So basically a language should be designed together with an accompanying editor with some visual modelling capabilities, but we don't know how to do that well… We just know how to do it in a "better than nothing" fashion.
 Unfortunately D+Haskell would be an entirely new language.
I think you would find it hard to bring those two together anyway. The aims were quite different in the design. IIRC Haskell was designed to be a usable vehicle for research so that FP research teams could have some synergies from working on the same language model. D was designed in a more immediate fashion, first as a clean up of C++, then as a series of extensions based on perceived user demand.
 thing. Same goes for programming. Procedural and functional are 
 just two different extremes and making the distinction is 
 actually limiting in the long run.
Well, I'm not sure they are extremes. You are more constrained with FP, but that also brings you coherency and less state to consider when reasoning about the program. Interestingly, a logic programming language could be viewed as a generalization of a functional programming language.

But I'll have to admit that there are languages that sort of make for a completely different approach to practical programming, like Erlang or Idris. But then you have research languages that try to be more regular interpretative, while still having it as a goal to provide a prover, like Whiley and some languages built by people at Microsoft that are related to Z3. These are only suitable for toy programs at this point, though.
 The goal with such things is choosing the right view for the 
 right problem at the right time and then one can generate the 
 solution very easily.
Yes. However, it takes time to learn a completely different tool, but there is no easy path when moving from C++ to Haskell.
May 17 2019
next sibling parent Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= <ola.fosheim.grostad gmail.com> writes:
On Friday, 17 May 2019 at 13:52:26 UTC, Ola Fosheim Grøstad wrote:
 But then you have research languages that try to be more 
 regular interpretative, while still having it as a goal to
I meant imperative/procedural, not interpretative…
May 17 2019
prev sibling parent reply Alex <AJ gmail.com> writes:
On Friday, 17 May 2019 at 13:52:26 UTC, Ola Fosheim Grøstad wrote:
 On Friday, 17 May 2019 at 10:51:48 UTC, Alex wrote:
 You'll have to wait a few millennia ;/
Not necessarily. A dynamic language like Python is quite close in some respects, except you usually don't want to be bogged down with types when doing practical programming in Python. Anyway, if you want to design such a language the best way to go would probably be to first build an interpreted language that has the desired semantics. Then try to envision how that could be turned into a compiled language and do several versions of alternative designs.
 Socrates would be amazing at what we have just as you will be 
 amazing at what the future brings(if humans don't destroy 
 themselves in the process).
Yes, there is that caveat.
 With quantum computing, if it is in the right direction, a new 
 programming paradigm and languages will need to be laid to 
 make it practical.
I don't think quantum computing is a prerequisite. What I believe will happen is that "machine learning" will fuel the making of new hardware with less "programmer control" and much more distributed architectures for computation. So, when that new architecture becomes a commodity then we'll se a demand for new languages too. But the market is too smal in the foreseeable future, so at least for the next decades we will just get more of the same good old IBM PC like hardware design (evolved, sure, but there are some severe limitations in step-by-step backwards compatibility). I guess there will be a market shift if/when robots become household items. Lawnmowers is a beginning, I guess.
 That is a microcosm of what is happening on the large scale.
I don't think it is happening yet though. The biggest change is in the commercial usage of machine learning. But I think contemporary applied machine learning is still at a very basic level, but the ability to increase the scale has made it much more useful.
 Compilers are still in the primitive stage on the higher level 
 just as your programming knowledge is primitive on a higher 
 level(and very advanced on a lower level).
Yes, individual languages are very primitive. However, if you look at the systems built on top of them, then there is some level of sophistication.
 difficult to use for complexity. The only languages I see that 
 handle complexity on any meaningful level are functional 
 programming languages.
Logic programming languages, but they are difficult to utilize outside narrow domains. Although I believe metaprogramming for the type system would be better done with a logic PL. How to make it accessible is a real issue, though. Another class would be languages with builtin proof systems.
 Procedural languages actually seem to create an upper limit 
 where it becomes exponentially harder to do anything past a 
 certain amount of complexity.
You can do functional programming in imperative languages too, it is just that you tend not to do it. Anyway, there are mixed languages.
 The problem with functional programming languages is they are 
 difficult to use for simple stuff and since all programs start 
 out as simple, it becomes very hard to find a happy medium.
Well, I don't know. I think the main issue is that all programming languages lacks the ability to create a visual expression that makes the code easy to reason about. Basically, the code looks visually too uniform and similar and we have to struggle to read meaning into the code. So as a result we need to have the model for large sections of the program in our head. Which is hard. So basically a language should be designed together with an accompanying editor with some visual modelling capabilities, but we don't know how to do that well… We just know how to do it in a "better than nothing" fashion.
 Unfortunately D+Haskell would be an entirely new language.
I think you would find it hard to bring those two together anyway. The aims were quite different in the design. IIRC Haskell was designed to be a usable vehicle for research so that FP research teams could have some synergies from working on the same language model. D was designed in a more immediate fashion, first as a clean up of C++, then as a series of extensions based on perceived user demand.
 thing. Same goes for programming. Procedural and functional 
 are just two different extremes and making the distinction is 
 actually limiting in the long run.
Well, I'm not sure they are extremes, you are more constrained with a FP, but that also brings you coherency and less state to consider when reasoning about the program. Interestingly a logic programming language could be viewed as a generalization of a functional programming language. But I'll have to admit that there are languages that sort of makes for a completely different approach to practical programming, like Erlang or Idris. But then you have research languages that try to be more regular interpretative, while still having it as a goal to provide a prover, like Whiley and some languages built by people at Microsoft that is related to Z3. These are only suitable for toy programs at this point, though.
 The goal with such things is choosing the right view for the 
 right problem at the right time and then one can generate the 
 solution very easily.
Yes. However, it takes time to learn a completely different versa, but there is no easy path when moving from C++ to Haskell.
All I will say about this is that all the different programming languages are just different expressions of the same thing. No matter how different they seem, they all attempt to accomplish the same. In mathematics, it has been found that all the different branches are identical and just look different because the "inventors" approached them from different angles with different intents and experiences. Everything you describe is simply mathematical logic implemented using different syntactical and semantical constructs that all reduce to the same underlying boolean logic. We already have general theorem solving languages, and any compiler is a theorem solver because all programs are theorems.

The problem is not so much the logic side but the ability to deal with complexity. We can, even in machine code, write extremely complex programs... but, as you mention, the human brain really can't deal with the complexity. Visual methods and design patterns must be used to allow humans to abstract complexity.

Functional programs do this well because they are directly based in abstraction (category theory). Procedural does not. You basically get functions and maybe OOP on top of that, and then you have no way to manage all that stuff properly using the language and tools. As the program grows in complexity, so does the code, because there are no higher levels of abstraction to deal with it.

It all boils down to abstraction, and that is the only way humans can deal with complexity. A programming language needs to be designed with that fact as the basis to be the most effective. All the other details are irrelevant if that isn't covered. This is why no one programs in assembly... not because it's a bad language necessarily, but because it doesn't allow for abstraction. [I realize there are a lot of people that still program in assembly, but only because they have to or their problems are not complex.]

I don't use Haskell enough to know if it has similar limitations, but my feeling is that because it is directly based in category theory it has the abstraction problem solved... it just has a lot of other issues that make it not a great language for practical usage.

Ultimately only the future will tell its secrets. I'm just trying to extrapolate from my experiences and where I see the direction going. No one here will actually be alive to find out if I'm right or wrong, so ultimately I can say what I want ;) [But there is a clear evolution of programming languages, mathematics, and computation that does provide history and hence a direction for the future.]
May 17 2019
parent reply Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= <ola.fosheim.grostad gmail.com> writes:
On Friday, 17 May 2019 at 14:33:59 UTC, Alex wrote:
 All I will say about this is that all the different programming 
 languages are just different expressions of the same. No matter 
 how different they seem, they all attempt to accomplish the 
 same. In mathematics, it has been found that all the different 
 branches are identical and just look different because the 
 "inventors" approaches it from different angles with different 
 intents and experiences.
Not exactly sure what you are talking about here. If you are thinking about Turing machines then that is sort of a misnomer, as that only deals with the possibility of writing batch programs that compute the same result. It doesn't say whether it is feasible for a real-world programmer to actually do it. In terms of applied programming, languages are very different once we move outside of imperative languages.
 Everything you describe is simply mathematical logic 
 implemented using different syntactical and semantical 
 constructs that all reduce to the same underlying boolean logic.
Not really. If you limit the input and output to fixed sizes then all programs can be implemented as boolean expressions. However, that is not what we are talking about here. We are talking about modelling and type systems.
 We already have general theorem solving languages and any 
 compiler is a theorem solver because all programs are theorems.
Usually not. Almost all compiled real-world programs defer resolution to runtime, thus they have a solver that is too weak. Compilers do as much as they can, then they emit runtime checks (or programs are simply left incorrect and crash occasionally at runtime).
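A tiny D sketch of that division of labour (the function here is made up): where the compiler cannot prove the property, a runtime check is emitted instead.

int element(int[] a, size_t i)
{
    // The compiler cannot prove i < a.length for arbitrary callers, so it
    // emits a runtime bounds check here (unless bounds checking is disabled).
    return a[i];
}

void main()
{
    int[] a = [10, 20, 30];
    assert(element(a, 1) == 20);  // passes the check
    // element(a, 5);             // would compile fine, but fail at runtime
}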
 Functional programs do this well because they are directly 
 based in abstraction(category theory).
Actually, I think it has more to do with limiting the size of the available state. Since you seldom write FP programs with high speed in mind, you can also relax more and pick more readable (but less efficient) data structures. However, there are many types of algorithms that are far easier to implement in imperative languages. I'd say most real-world performance-oriented algorithms fall into that category.
 As the program goes in complexity so does the code because 
 there is no higher levels of abstraction to deal with it.
I don't think this is correct. Abstraction is a property of modelling, not really a property of the language. The language may provide more or less useful modelling mechanisms, but the real source of abstraction failures is entirely human for any sensible real-world language.
 covered. This is why no one programs in assembly... not because 
 it's a bad language necessarily but because it doesn't allow 
 for abstraction.
That's not quite true either. You can do abstraction just fine in assembly with a readable assembly language like Motorola 68K and a good macro assembler. It is more work and easier to make mistakes, but whether you manage to do abstraction well is mostly a property of the programmer (if the basic tooling is suitable).
 it just has a lot of other issues that makes it not a great 
 language for practical usage.
It was not designed for writing large programs; it is more of a PL-exploration platform than a software engineering solution. AFAIK, Haskell lacks abstraction mechanisms for programming in the large.
 No one here will actually be alive to find out if I'm right or 
 wrong so ultimately I can say what I want ;)
It is a safe bet to say that you are both right and wrong (and so are we all, at the end of the day).
May 17 2019
next sibling parent Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= <ola.fosheim.grostad gmail.com> writes:
On Friday, 17 May 2019 at 15:02:51 UTC, Ola Fosheim Grøstad wrote:
 That's not quite true either. You can do abstraction just fine 
 in assembly with a readable assembly language like Motorola 68K 
 and a good macro assembler. It is more work, easier to make 
 mistakes, but whether you manage to do abstraction well is 
 mostly a property of the programmer (if the basic tooling is 
 suitable).
That said, a real problem with machine language is that restructuring the code becomes very expensive. So you need a waterfall development model. It is not very suitable for iterative development models. Today, compilers often generate better code, so it is mostly pointless to use machine language unless you need to save space or want to utilize special properties of the hardware. (sorry about the typos in the previous answer)
May 17 2019
prev sibling parent reply Alex <AJ gmail.com> writes:
On Friday, 17 May 2019 at 15:02:51 UTC, Ola Fosheim Grøstad wrote:
 On Friday, 17 May 2019 at 14:33:59 UTC, Alex wrote:
 All I will say about this is that all the different 
 programming languages are just different expressions of the 
 same. No matter how different they seem, they all attempt to 
 accomplish the same. In mathematics, it has been found that 
 all the different branches are identical and just look 
 different because the "inventors" approached it from different 
 angles with different intents and experiences.
Not exactly sure what you are talking about here. If you are thinking about Turing machines then that is sort of a misnomer, as that only deals with the possibility of writing batch programs that compute the same result. It doesn't say whether it is feasible for a real-world programmer to actually do it. In terms of applied programming, languages are very different once we move outside of imperative languages.
 Everything you describe is simply mathematical logic 
 implemented using different syntactical and semantical 
 constructs that all reduce to the same underlying boolean 
 logic.
Not really. If you limit the input and output to fixed sizes then all programs can be implemented as boolean expressions. However, that is not what we are talking about here. We are talking about modelling and type systems.
 We already have general theorem solving languages and any 
 compiler is a theorem solver because all programs are theorems.
Usually not. Almost all compiled real-world programs defer resolution to runtime, thus they have a solver that is too weak. Compilers do as much as they can, then they emit runtime checks (or programs are simply left incorrect and crash occasionally at runtime).
 Functional programs do this well because they are directly 
 based in abstraction(category theory).
Actually, I think it has more to do with limiting the size of the available state. Since you seldom write FP programs with high speed in mind, you can also relax more and pick more readable (but less efficient) data structures. However, there are many types of algorithms that are far easier to implement in imperative languages. I'd say most real-world performance-oriented algorithms fall into that category.
 As the program goes in complexity so does the code because 
 there is no higher levels of abstraction to deal with it.
I don't think this is correct. Abstraction is a property of modelling, not really a property of the language. The language may provide more or less useful modelling mechanisms, but the real source of abstraction failures is entirely human for any sensible real-world language.
 covered. This is why no one programs in assembly... not 
 because it's a bad language necessarily but because it doesn't 
 allow for abstraction.
That's not quite true either. You can do abstraction just fine in assembly with a readable assembly language like Motorola 68K and a good macro assembler. It is more work and easier to make mistakes, but whether you manage to do abstraction well is mostly a property of the programmer (if the basic tooling is suitable).
 it just has a lot of other issues that makes it not a great 
 language for practical usage.
It was not designed for writing large programs; it is more of a PL-exploration platform than a software engineering solution. AFAIK, Haskell lacks abstraction mechanisms for programming in the large.
 No one here will actually be alive to find out if I'm right or 
 wrong so ultimately I can say what I want ;)
It is a safe bet to say that you are both right and wrong (and so are we all, at the end of the day).
We must have different concepts of abstraction. Category theory is the theory of abstraction. Abstraction is generalization. What abstraction does is allow for symbolic representation of complex processes. d/dx, int, sum, interface, functor, category, set, point, etc. are all abstractions of concepts that are complex. By having a "name" (a symbol) for these complex processes we can substitute the symbol for the process, and that makes it much easier to use and remember. All words are abstractions. All symbols are abstractions, and just about everything we humans do is to abstract away complexity. Definitions are abstractions that use other words (abstractions) to create a new abstraction. An OOP hierarchy is an abstraction, as is a struct. They are used to bind bits (which are abstractions) together into a certain structure.

You cannot abstract in masm because masm does not have language capabilities to abstract. There is no class. There is a struct which does some abstraction, but it is limited because you can't generalize it by inclusion/use. Tasm is the same, as it does not have these capabilities. Instruction mnemonics are abstractions. mov ax, 34 is an abstraction. The assembler converts the symbols into bits, and those bits then toggle transistor states, which are also abstractions, since transistors are made up of different configurations of silicon and doped with different oxides to create a switching effect, which is abstracting logical operations. Logic itself is the most concrete form of anything, as everything is logic. Even a baby playing is an abstraction of logic, because neurons that fire in the baby's brain are just abstractions of logic and very much related to transistors (neurons are switches and interconnect to form matrices just like transistors do... it's not a coincidence, because transistors were abstracted from neurology).

The point is that we humans see the world in abstraction. There are no lines in the world. A cup of water IS an abstraction. The boundaries that we see do not exist. It is true there is a change, but the change we see is an abstraction of that change. In computer programming, we use abstractions which are expressions. These expressions, whatever their representation, eventually get compiled into an abstraction that computers understand (which is machine code) and ultimately down into transistor configurations.

But I've only gone in one direction. All that stuff was built up from the most concrete to the most abstract in reality by humans. Humans NEVER build from the abstract to the concrete; it doesn't even make sense to do so (we can specialize abstractions, but all abstractions start out as less abstract things). Now, you are very well aware that computer languages have evolved to add more abstraction, and those abstractions allow for more powerful programs. C->C++ is probably the most well-known advancement in this area. C did not have OOP. You can write any program that is written in C++ in C or assembly or hex. It is not a matter of representation but a matter of abstraction. OOP allowed one to write far more complex programs SOLELY because it allowed one to abstract the bits and pieces, and the abstraction allows the human mind to understand it. See, a typical 1MB program would be impossible to memorize as the sequence of bits in its executable... but not terribly hard to memorize in terms of its abstractions. The details are not required to be memorized.
For example, suppose you had to *reimplement* a program from scratch but you could memorize the HL code or the machine code... which one would you choose? If you had to do it perfectly you'd want to use machine. If you only needed a good approximation you'd go with the abstraction.

Now, my point with all this is that this thing I'm calling abstraction is in everything. Mathematics is all about abstraction. We start with the integers and get the rationals, the rationals were abstracted to the reals, the reals to the complex, and then one gets more such as p-adics and quaternions, and then vectors come out of those, and tensors out of vectors... A building is built in a certain way. One does not or cannot start with the windows... the windows require something to attach to [it's true you can build the windows off site, but you can never start with the windows first, place them in 3D space and then build the building around them].

OOP is simply the ability to abstract. Inheritance is abstraction. When one inherits from a class they are generalizing or abstracting that class. It's very powerful when you had no ability to do this before (C). What I am talking about is 1. the progression of things towards more abstraction, and 2. how that is playing out in programming languages. Ultimately it has nothing to do with specific languages except that they provide examples of where we are at in the process of all this.

Abstraction is power. If you could not abstract you could not think. Even animals abstract, just not so well. Thinking is basically abstracting. As far as programming goes, the better we can create abstracting abstractions, the more powerful the programming language and the more complicated the programs we can create. Visualization is an abstraction, as it allows one to represent something in more general terms.

If you understand all that then what I'm saying is that category theory IS the theory of abstraction (there is no other theory that deals with it rigorously and correctly, except other theories that are equivalent). Category theory is the theory of abstraction (by design) and therefore is very important in programming, as it is in math and many other things in life (if not all). So when I speak of abstraction I am talking about a very specific process. It is a process of relationship. It is also called generalization. None of these dictionary definitions are what I mean:

1. the quality of dealing with ideas rather than events ("topics will vary in degrees of abstraction"); something which exists only as an idea ("the question can no longer be treated as an academic abstraction"). Synonyms: concept, idea, notion, thought, generality, generalization, theory, theorem, formula, hypothesis, speculation, conjecture, supposition, presumption. Antonyms: fact, material consideration.
2. freedom from representational qualities in art ("geometric abstraction has been a mainstay in her work"); an abstract work of art.
3. a state of preoccupation ("she sensed his momentary abstraction"). Synonyms: absentmindedness, distraction, preoccupation, daydreaming, dreaminess, inattentiveness, inattention, woolgathering, absence, heedlessness, obliviousness, thoughtfulness, pensiveness, musing, brooding, absorption, engrossment, raptness. Antonyms: attention.
4. the process of considering something independently of its associations, attributes, or concrete accompaniments ("duty is no longer determined in abstraction from the consequences").
5. the process of removing something, especially water from a river or other source.

https://en.wikipedia.org/wiki/Abstraction_(computer_science) is far closer. But my abstraction is even more abstract than the above, because I include the abstractions of mathematics, since they are essentially the same. E.g., an equation such as int(x^2 + cos(x), x=0..43) is actually a program, and this is why we can write a program in D to compute the equation... because they are actually both programs that use different languages. Math is a language, and all languages are programming languages (even English).

You have to realize that I'm looking at the forest from outer space... it covers the whole planet. It's one organism; it's not a bunch of arbitrarily marked off boundaries created by man due to his ignorance. [In case it's not clear, I'm taking a very inclusive view of programming in general... all programming, and trying to abstract it into a single concept that applies to it all. What this concept is, is more universal and hence more powerful. It has to throw away details, but it has to keep the underlying structure that binds all programming together. That underlying structure is what is important; it is what makes a programming language a programming language. Without it, it would be something else.] Of course, if one abstracts too much one arrives at a singular point! Which turns out to be quite powerful! ;)
May 17 2019
parent reply Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= <ola.fosheim.grostad gmail.com> writes:
On Friday, 17 May 2019 at 16:35:16 UTC, Alex wrote:
 We must have different concepts of abstraction.
Well, if you are talking about abstraction mechanisms that languages provide then I kinda get you. But in general abstraction starts with modelling. One does this best with pencil and paper, no need for a fixed theoretical framework. What one can benefit from is experience with various modelling techniques, or just experience. The most important aspect of this, in the beginning is to build intuition, which is modelling, albeit a fuzzy and "emotional" model.
 What abstraction does is allow for symbolic representation of 
 complex processes.
Simplified representation, usually. So, it is a view or an interpretation of reality, but not actually a true representation of reality.
 An OOP hierarchy is an abstraction as is a struct.
Not quite. You start with object-oriented modelling, then you pick a set of tools that lets you implement the model, and possibly evolve it over time. That is the real purpose of OOP: to enable the implementation of a model and to make it maintainable. Keep in mind that Nygaard and Dahl created Simula for doing simulations. It provided a set of mechanisms to enable that, such as coroutines, block prefixing, a class main program with an inner injection point (IIRC), and class inheritance with virtual functions. So, the abstraction (of reality) is the model. The OOP mechanisms are mechanisms for implementing the model.
 You cannot abstract in masm because masm does not have language 
 capabilities to abstract.
I don't know about masm, but there is no problem implementing an OO model in machine language or C.
 everything is logic. Even a baby playing is an abstraction of 
 logic because neurons that fire in the baby's brain are just
No, you are modelling a baby using logic, that model is the abstraction, the baby is reality.
 The point is that we humans see the world in abstraction. There 
 are no lines in the world. A cup of water IS an abstraction.
Yes, we create models of reality in our heads and classify parts of reality that we have segmented into objects. Our visual system has that in part hardcoded. So yes, everything that isn't the signal of a sensing cell is an interpretation that is turned into a model.
 But I've only gone in one direction. All that stuff was built 
 up from the most concrete to the most abstract in reality by 
 humans. Humans NEVER build from the abstract to the concrete, 
 it doesn't even make sense to do so(we can specialize 
 abstractions but all abstractions start out as less 
 abstract things).
That's not quite true. Say, an artist can start by drawing a circle, then put two more circles within that circle and then gradually refine it into a face. You can do the same with programming. Also, in genetic programming there isn't necessarily programming level abstraction in play, except for the fitness function (measuring degree of success). It could be viewed as a randomized search within a technical architecture (that implements a low level model, that model is an abstraction of what happens in the hardware). But it might not be fruitful to view the mechanisms as abstractions, since they are not approached as such. Thus abstraction is something that is in our heads, in our model, something entirely subjective. Although some might argue that programmers have an inter-subjective model of the programming field. But it is not like all programmers view or use the same constructs for the same purposes. So a programming construct has the potential to be used to implement a model (an abstraction), but the meaning of how the construct was used depends on the individual programmer.
 OOP allowed one to write far more complex programs SOLELY 
 because it allowed one to abstract the bits and pieces and the 
 abstraction allows the human mind to understand it.
Mostly by making programs more maintainable and easier to evolve. Of course, the availability of OO mechanisms in languages created a market for OO modelling techniques. So yes, more people started modelling differently because of the availability. However, the modelling should happen before you program...
 abstraction is in everything. Mathematics is all about 
 abstraction.
Well, that is one interpretation. Another one is that it just is a space of possible symbolic models that one (tries to) prove to be internally consistent. Then you can try to do abstractions over the interpretation of the symbolic models and come up with concepts like rings, groups, etc. The school math is one particular version, but there are infinitely many others available... perhaps not useful or interesting, but they are available, you just have to define them. No real abstraction in that. It could be viewed as uninterpreted symbolic machinery. It doesn't have to represent anything in particular. That is of course unusual, but that doesn't mean that you cannot do it.
 We start with the integers and get the rationals, the rationals 
 were abstracted to the reals, the reals to the complex, and 
 then one gets more such as padics, quaternions, and then 
 vectors come out of and then tensors out of vectors...
Yes, but those are interpretations or rather, the model that you map over to the symbolic machinery. So you have a mapping from your model to something more concrete.
 OOP is simply the ability to abstract. Inheritance is 
 abstraction. When one inherits from a class they are 
 generalizing or abstracting that class. It's very powerful when 
 you had no ability to do this before(C).
No. OO-modelling is a tool for abstraction. Inheritance is a relationship that is in the model. When you inherit you specialize the more abstract superclass. The superclass is more abstract because it lacks more information about the phenomenon (reality) being modelled than the subclass does. So the subclass contains more details and more closely reflects the properties of reality than the superclass. However, the language constructs are just tools that provide machinery for implementing the model, just like the symbolic machinery of logic is a tool for implementing integers and operations over integers in math. You can have many different symbolic implementations of integers. Right? Unfortunately, in math people might say that a specific symbolic representation is a model of the integers, whereas in computer terminology "integer" would be the model and the representation would be the implementation.
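A minimal D illustration of that direction of specialization (the class names are made up for the example): the superclass commits to less, the subclass adds the detail.

abstract class Shape
{
    abstract double area();   // the abstract view: says less, applies to more
}

class Circle : Shape
{
    double radius;
    this(double radius) { this.radius = radius; }

    // the specialization adds the information the superclass lacks
    override double area() { return 3.14159265358979 * radius * radius; }
}

void main()
{
    Shape s = new Circle(2.0);   // client code written against the abstraction
    assert(s.area() > 12.5);
}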
 Ultimately it has nothing to do with specific languages except 
 that they provide examples of where we are at in the process of 
 all this.
Well, the whole is greater than the sum of the individual parts, so what really does make a difference is how everything works together. That includes the IDE and different tools for interactive code analysis.
 If you understand all that then what I'm saying is that 
 category theory IS the theory of abstraction
There is no doubt that it is A theory of abstraction...
 Of course, if one abstracts too much one arrives at a singular 
 point! Which turns out to be quite powerful! ;)
The problem with abstraction is that one deliberately (or mistakenly) ignores information, that is true.
May 17 2019
parent reply Alex <AJ gmail.com> writes:
On Friday, 17 May 2019 at 18:09:16 UTC, Ola Fosheim Grøstad wrote:
 On Friday, 17 May 2019 at 16:35:16 UTC, Alex wrote:
 We must have different concepts of abstraction.
Well, if you are talking about abstraction mechanisms that languages provide then I kinda get you. But in general abstraction starts with modelling. One does this best with pencil and paper, no need for a fixed theoretical framework. What one can benefit from is experience with various modelling techniques, or just experience. The most important aspect of this, in the beginning is to build intuition, which is modelling, albeit a fuzzy and "emotional" model.
 What abstraction does is allow for symbolic representation of 
 complex processes.
Simplified representation, usually. So, it is a view or an interpretation of reality, but not actually a true representation of reality.
 An OOP hierarchy is an abstraction as is a struct.
Not quite. You start with object-oriented modelling, then you pick a set of tools that lets you implement the model, and possibly evolve it over time. That is the real purpose of OOP: to enable the implementation of a model and to make it maintainable. Keep in mind that Nygaard and Dahl created Simula for doing simulations. It provided a set of mechanisms to enable that, such as coroutines, block prefixing, a class main program with an inner injection point (IIRC), and class inheritance with virtual functions. So, the abstraction (of reality) is the model. The OOP mechanisms are mechanisms for implementing the model.
 You cannot abstract in masm because masm does not have 
 language capabilities to abstract.
I don't know about masm, but there is no problem implementing an OO model in machine language or C.
 everything is logic. Even a baby playing is an abstraction of 
 logic because neurons that fire in the baby's brain are just
No, you are modelling a baby using logic, that model is the abstraction, the baby is reality.
 The point is that we humans see the world in abstraction. 
 There are no lines in the world. A cup of water IS an 
 abstraction.
Yes, we create models of reality in our heads and classify parts of reality that we have segmented into objects. Our visual system has that in part hardcoded. So yes, everything that isn't the signal of a sensing cell is an interpretation that is turned into a model.
 But I've only gone in one direction. All that stuff was built 
 up from the most concrete to the most abstract in reality by 
 humans. Humans NEVER build from the abstract to the concrete, 
 it doesn't even make sense to do so(we can specialize 
 abstractions but all abstractions start out as less 
 abstract things).
That's not quite true. Say, an artist can start by drawing a circle, then put two more circles within that circle and then gradually refine it into a face. You can do the same with programming. Also, in genetic programming there isn't necessarily programming level abstraction in play, except for the fitness function (measuring degree of success). It could be viewed as a randomized search within a technical architecture (that implements a low level model, that model is an abstraction of what happens in the hardware). But it might not be fruitful to view the mechanisms as abstractions, since they are not approached as such. Thus abstraction is something that is in our heads, in our model, something entirely subjective. Although some might argue that programmers have an inter-subjective model of the programming field. But it is not like all programmers view or use the same constructs for the same purposes. So a programming construct has the potential to be used to implement a model (an abstraction), but the meaning of how the construct was used depends on the individual programmer.
 OOP allowed one to write far more complex programs SOLELY 
 because it allowed one to abstract the bits and pieces and the 
 abstraction allows the human mind to understand it.
Mostly by making programs more maintainable and easier to evolve. Of course, the availability of OO mechanisms in languages created a market for OO modelling techniques. So yes, more people started modelling differently because of the availability. However, the modelling should happen before you program...
 abstraction is in everything. Mathematics is all about 
 abstraction.
Well, that is one interpretation. Another one is that it just is a space of possible symbolic models that one (tries to) prove to be internally consistent. Then you can try to do abstractions over the interpretation of the symbolic models and come up with concepts like rings, groups, etc. The school math is one particular version, but there are infinitely many others available... perhaps not useful or interesting, but they are available, you just have to define them. No real abstraction in that. It could be viewed as uninterpreted symbolic machinery. It doesn't have to represent anything in particular. That is of course unusual, but that doesn't mean that you cannot do it.
 We start with the integers and get the rationals, the 
 rationals were abstracted to the reals, the reals to the 
 complex, and then one gets more such as padics, quaternions, 
 and then vectors come out of and then tensors out of vectors...
Yes, but those are interpretations or rather, the model that you map over to the symbolic machinery. So you have a mapping from your model to something more concrete.
 OOP is simply the ability to abstract. Inheritance is 
 abstraction. When one inherits from a class they are 
 generalizing or abstracting that class. It's very powerful 
 when you had no ability to do this before(C).
No. OO-modelling is a tool for abstraction. Inheritance is a relationship that is in the model. When you inherit you specialize the more abstract superclass. The superclass is more abstract because it lacks more information about the phenomenon (reality) being modelled than the subclass does. So the subclass contains more details and more closely reflects the properties of reality than the superclass. However, the language constructs are just tools that provide machinery for implementing the model, just like the symbolic machinery of logic is a tool for implementing integers and operations over integers in math. You can have many different symbolic implementations of integers. Right? Unfortunately, in math people might say that a specific symbolic representation is a model of the integers, whereas in computer terminology "integer" would be the model and the representation would be the implementation.
 Ultimately it has nothing to do with specific languages except 
 that they provide examples of where we are at in the process 
 of all this.
Well, the whole is greater than the sum of the individual parts, so what really does make a difference is how everything works together. That includes the IDE and different tools for interactive code analysis.
 If you understand all that then what I'm saying is that 
 category theory IS the theory of abstraction
There is no doubt that it is A theory of abstraction...
 Of course, if one abstracts too much one arrives at a singular 
 point! Which turns out to be quite powerful! ;)
The problem with abstraction is that one deliberately (or mistakenly) ignores information, that is true.
You keep saying that something is not an abstraction and then use that abstraction. E.g., OOP is a tool used to abstract... Do you realize that a tool itself is an abstraction? Until we can agree on a precise definition of abstraction we will just go in circles.

If you agree that your brain takes sensory data and presents it to you as abstractions... e.g., some cells fire on your fingertips and eventually, through the magic of life, your brain tells you that you just chopped off the tip of your finger, and that is a process of abstraction (your brain builds up a model about reality and what happened given the sensory data and presents it to you as the reality (but it is still a model))... then, in fact, everything you know is an abstraction... because all your information came to you through sensory data which was abstracted. You don't think about pumping your heart, your body does it for you... but your perception of what is going on is really an abstraction.

The problem we have is you seem to think abstractions are not real. That is the typical way to view them... But abstractions are not like a unicorn... abstractions are very real. Without them we couldn't even have this conversation, literally. I'm presenting you with my extended definition of what abstraction is; you can keep using yours or choose to use mine and see something new. My definition includes yours, where do you think I got mine from? (and that itself was an abstracting process) Stop thinking of abstraction as what you learned from Stroustrup and think of it in modern terms. Concepts grow and evolve for a reason.

In mathematics, one of the most basic principles is the concept of a point/element... something that is so abstracted as to include everything and nothing at the same time. That singular concept IS what has allowed mathematics to do very real things. Abstractions are not imaginary. A baby is very much an abstraction. What a baby really is, is far more complex than anything our brains understand. Most people have no clue what a baby is but some simple abstraction of something that whines, shits, and giggles.
May 17 2019
next sibling parent reply dayllenger <dayllenger protonmail.com> writes:
On Friday, 17 May 2019 at 19:43:15 UTC, Alex wrote:
 If you agree that your brain takes sensory data and presents it 
 to you as abstractions.
Who is "you" then, if you say like the brain is separated from that "you"?
 then, in fact, everything you know is an abstraction...
So, abstraction becomes knowledge, in your definition?
 because all your information came through you by sensory data 
 which was abstracted.
Not quite true (protip: every act of cognition assumes that you already know about space or time).
 The problem we have is you seem to think abstractions are not 
 real.
Depends on what is real.
 Without them we couldn't even have this conversation, literally.
Maybe, but it does not imply that they are real: "Conversation is real and powered by abstractions." "Magpies are black-and-white and powered by food."
 Abstractions are not imaginary.
If so, and real is something that I theoretically can touch, can I touch a number?
May 17 2019
parent Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= <ola.fosheim.grostad gmail.com> writes:
On Saturday, 18 May 2019 at 01:23:38 UTC, dayllenger wrote:
 Depends on what is real.
That's right, in our context "reality" is the problem domain we model and the context the program is going to be used in. So, if we create a Star Trek themed RPG, then we model the imaginary reality of the Star Trek universe. Or, if we create a program to support an engineering discipline, then "reality" is the practices of that engineering discipline (which could include parts of linear algebra, for instance).
May 18 2019
prev sibling parent reply Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= <ola.fosheim.grostad gmail.com> writes:
On Friday, 17 May 2019 at 19:43:15 UTC, Alex wrote:
 You keep saying that something is not an abstraction then use 
 that abstraction. E.g., oop is a tool used to abstract...
Not sure what you mean. OO abstractions are in your head. OOP language features are just mechanisms that make it easier to express those abstractions in code.
 Do you realize that a tool itself is an abstraction?
No. Why would it be? The tool itself "just is". You may interpret it, that interpretation embodies abstractions.
 Until we can agree on a precise definition of abstraction we 
 will just go in circles.
Seems like it.
 it to you as the reality(but it is still a model))... then, in 
 fact, everything you know is an abstraction... because all your 
 information came through you by sensory data which was 
 abstracted.
Yes, the brain is an abstraction machine. Current generation compilers are not. There are some attempts in that direction, like refactoring engines. E.g. deforesting, where you expand a program until it contains no functions, then you restructure the program, find similar branches and restructure those branches back into functions. But in general, compilers and common tooling don't do much abstraction. Some optimization passes work with abstractions, to gain performance, but that is not happening on a semantic level.
 In mathematics, one of the most basic principles is the concept 
 of a point/element... something that is so abstracted as to 
 include everything and nothing at the same time. That singular 
 concept IS what has allowed mathematics to do very real things.
Not sure what you mean by saying that a point includes everything and nothing... Anyway, what the most basic concept is, is a matter of viewpoint. Some would say that a set is the most crucial concept. In general, I don't think such arguments lead anywhere. It is a matter of interpretation based on what you are interested in expressing.
 Abstractions are not imaginary. A baby is very much an 
 abstraction. What a baby really is far more complex than 
 anything our brains understand. Most people have no clue what a 
 baby is but some simple abstraction of something that whines, 
 shits, and giggles.
The baby is physical matter. Our conceptualization of a baby is our individual abstractions of what we have perceived. My understanding of what a baby is changed drastically after I became a father. Anyway, how does this relate to programming languages? Programming languages are just symbolic machinery. Those languages make it possible to encode models. Those models embody abstractions. Same as in math. You can encode ("model") integers like this:

"0" = Zero.
"1" = Successor(Zero).
"2" = Successor(Successor(Zero)).

The right-hand side is what we express in the programming language to be executed on a machine. The left-hand side is how we map it to the conceptual model (or abstraction) that we have in our head.
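A tiny D sketch of that encoding, with all names invented purely for illustration; a small template maps the symbolic machinery back to ordinary integers:

// Peano-style encoding of the naturals as types; Zero and Successor are
// just symbolic machinery, the "integer" interpretation lives in our heads.
struct Zero {}
struct Successor(N) {}

alias One = Successor!Zero;
alias Two = Successor!One;

// Recover the conventional integer value from the symbolic encoding.
template toInt(N)
{
    static if (is(N == Zero))
        enum toInt = 0;
    else static if (is(N == Successor!M, M))
        enum toInt = 1 + toInt!M;
}

static assert(toInt!Two == 2);   // "2" = Successor(Successor(Zero))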
May 18 2019
parent Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= <ola.fosheim.grostad gmail.com> writes:
FWIW, the notion of "abstract datatypes" is tangential to the 
above discussion.
May 18 2019
prev sibling parent reply NaN <divide by.zero> writes:
On Thursday, 16 May 2019 at 09:35:16 UTC, Alex wrote:
 On Thursday, 16 May 2019 at 08:12:49 UTC, NaN wrote:

 No, you don't get it.
Unsurprisingly I feel that neither do you; at least you don't get what I'm saying, or rather you think I'm saying something that I'm not.
 We are talking about a hypothetical compiler that doesn't have 
 to have different contexts. In D, the contexts are arbitrarily 
 separated in the language...
I don't see how it can not have different contexts: it runs at compile time, produces a binary, and that runs at runtime. You can hide that fact, make it so code looks and works the same in each context as much as possible. But you can't not have that be what actually happens.
 What we are ultimately talking about here is that CT and RT is 
 not two different concepts but one concept with some very 
 minute distinction for RT.
Yes, exactly my point, there will be differences. Andrei did a talk on why C++'s static if was next to useless because it introduces a new scope. That's all I was pointing out: regular if and static if behave differently in D. And with good reason.
 As a programmer you shouldn't care if something is CT or RT and 
 hence you shouldn't even know there is a difference.
Like most things that "just work", you don't care until you do.
 What you are saying is that you have one if and two contexts. 
 What I'm saying is that you have one if and one context. That 
 the CT programming and runtime programming are NOT treated as 
 being two different universes with some overlap but the same 
 universe with a slight boundary.
100% it'd be great if RT/CT were "same rules apply", but I was just pointing out one of the "boundaries" where that is currently not the case.
 enum x = 4;
 and
 int y = 4;

 That is explicitly two different programming contexts. One CT 
 and the other RT. But that is the fallacy. They are EXACTLY 
 identical programmatically.
static if (x) {} and if (y) {} are currently not identical grammatically: one introduces a new scope and one does not.
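A small D sketch of that difference as it stands today (the names are made up):

void main()
{
    enum flag = true;

    static if (flag)
    {
        int x = 1;      // static if does not introduce a new scope,
    }                   // so x is declared directly in main's scope
    assert(x == 1);     // fine: x is still visible here

    bool cond = true;
    if (cond)
    {
        int y = 1;      // regular if does introduce a scope,
    }                   // so y ends at the closing brace
    // assert(y == 1);  // would not compile: undefined identifier y
}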
 Only when y is modified at some point later in the code does 
 things potentially change. enum says we will never change x at 
 CT... but why can't the compiler figure that out automatically 
 rather than forcing us to play by it's rules and have a 
 separate context?
Defining a constant is not you informing the compiler you won't change the value; it's you telling the compiler to make sure that you don't. That can't be inferred by the compiler, because the act of modifying it would destroy the inference that it should not be modified. It's a catch-22.
May 17 2019
parent reply Alex <AJ gmail.com> writes:
On Friday, 17 May 2019 at 08:47:19 UTC, NaN wrote:
 On Thursday, 16 May 2019 at 09:35:16 UTC, Alex wrote:
 On Thursday, 16 May 2019 at 08:12:49 UTC, NaN wrote:

 No, you don't get it.
Unsurprisingly I feel that neither do you; at least you don't get what I'm saying, or rather you think I'm saying something that I'm not.
 We are talking about a hypothetical compiler that doesn't have 
 to have different contexts. In D, the contexts are arbitrarily 
 separated in the language...
I don't see how it can not have different contexts: it runs at compile time, produces a binary, and that runs at runtime. You can hide that fact, make it so code looks and works the same in each context as much as possible. But you can't not have that be what actually happens.
 What we are ultimately talking about here is that CT and RT is 
 not two different concepts but one concept with some very 
 minute distinction for RT.
Yes, exactly my point, there will be differences. Andrei did a talk on why C++'s static if was next to useless because it introduces a new scope. That's all I was pointing out: regular if and static if behave differently in D. And with good reason.
 As a programmer you shouldn't care if something is CT or RT 
 and hence you shouldn't even know there is a difference.
Like most things that "just work", you don't care until you do.
 What you are saying is that you have one if and two contexts. 
 What I'm saying is that you have one if and one context. That 
 the CT programming and runtime programming are NOT treated as 
 being two different universes with some overlap but the same 
 universe with a slight boundary.
100% it'd be great if RT/CT were "same rules apply", but I was just pointing out one of the "boundaries" where that is currently not the case.
 enum x = 4;
 and
 int y = 4;

 That is explicitly two different programming contexts. One CT 
 and the other RT. But that is the fallacy. They are EXACTLY 
 identical programmatically.
static if (x) {} and if (y) {} are currently not identical grammatically: one introduces a new scope and one does not.
 Only when y is modified at some point later in the code does 
 things potentially change. enum says we will never change x at 
 CT... but why can't the compiler figure that out automatically 
 rather than forcing us to play by it's rules and have a 
 separate context?
Defining a constant is not you informing the compiler you won't change the value; it's you telling the compiler to make sure that you don't. That can't be inferred by the compiler, because the act of modifying it would destroy the inference that it should not be modified. It's a catch-22.
There is a difference between what the D compiler does, what it should do, and what all compilers should do. Compilers must evolve, and that requires changes. The reason why static if does not create a new scope is because one cannot create variables inside a normal if and have them be seen outside. This is a limitation of if, not the other way around. For example, if D had two types of scoping, say [] and {}, then we would not need static if:

if (x) [ ] <- does not create a scope for variable creation
if (x) { } <- standard

Many times we have to declare the variable outside of an if statement just to get things to work. Now, of course this is not great practice, because the variable may not be created but used afterwards. static if already has that issue though.

The point is that you are trying to explain the issues that make it hard for D to do certain things when I'm talking about what D should do as if it didn't have the issues. D was designed with flaws in it... that has to be accepted. But if you take those flaws as set in stone then there is never any point in making D better. You basically assume that it has no flaws (if you assume they can't be changed then you are effectively assuming it is perfect). You need to learn to think outside the box a little more... thinking inside the box is easy. Everyone knows what D does and its flaws (more or less, but flaws are generally easy to spot). But if you can't imagine what D could be without the flaws then really, you don't see the flaws. To truly see the flaws in something requires that you also see beyond those flaws, as if D existed in an alternate universe and didn't have them. From there one can work on fixing those flaws and making D better.

So, yes, D has issues. We have static if and if, and CT and RT boundaries. Just arbitrarily trying to combine them will be impossible in D as we know it. That should be obvious (that is why the distinction exists and why we are talking about it; if it didn't have these issues we would be talking about other issues or none at all). But the question is, how to fix these issues in the right way? First, are they fixable? Well, everything is fixable, because one can just throw it away and start afresh. Is it worth fixing? That is up to the beholder. Generally though, people that want something better (to fix a flaw) work towards a solution first by trying to quantify the flaw and think outside the box, by thinking about why the flaw is a flaw. This requires moving past what is and towards what could be.

You are stuck in the IS part of the equation. I am stuck in the COULD BE. But why you are wrong is that you don't seem to realize this is a COULD BE thread. It's true that in some sense one ultimately has to deal with the practical issues of IS, but one has to get the order right. First one has to know where to go (the could be) before they actually start the travel (the IS). So, yes, everything you have said, in the context of what IS, is true. But it is not helpful. The whole discussion is about the flaw in what IS and how to get beyond it. In the context of COULD BE you are wrong, because you are staying at home not going anywhere (you are focusing on the flaw and letting the flaw stay a flaw by not thinking beyond it).

Your thinking pattern probably invades every aspect of your life. You focus on more practical rather than theoretical aspects of stuff. Hence it's hard for you to think outside the box, but you can think inside the box well.
You get into arguments with people because what they say doesn't make sense in your box. You fail to realize there are many boxes... many different levels to view and understand. I too have a similar problem, completely opposite in some sense. I am an outside thinker. I tend to forget that some people are inside thinkers. I can think inside the box, but it is not my preferred medium (it's very limiting to me). I think you should ask yourself if you can think outside the box. If not, then it is a flaw you have and you should work on fixing it, as it will make you a much more balanced and powerful human. If it's just your preference, then just try to keep in mind that you will interact with other types of thinkers in the world. (and try to parse which box people are thinking in based on context) [And the fact is everyone thinks in different boxes, and there tends to be a lot of confusion in the world because everyone assumes everyone else thinks in the same box]
May 17 2019
parent NaN <divide by.zero> writes:
On Friday, 17 May 2019 at 11:06:57 UTC, Alex wrote:
 Your thinking pattern probably invades every aspect of your 
 life. You focus on more practical rather than theoretical 
 aspects of stuff. Hence it's hard for you to think outside the 
 box but you can think inside the box well. You get in to 
 arguments with people because what they say doesn't make sense 
 in your box. You fail to realize there are many boxes... many 
 different levels to view, and understand. I too have a similar 
 problem, completely opposite in some sense. I am an outside 
 thinker. I tend to forget that some people are inside thinkers. 
 I can think inside the box, but it is not my preferred 
 medium(it's very limiting to me). I think you should ask 
 yourself if you can think outside the box. If not then it is a 
 flaw you have and you should work on fixing it as it will make 
 you a much more balanced and powerful human. If it's just your 
 preference then just try to keep in mind that you will interact 
 with other types of thinkers in the world. (and try to parse 
 which box people are thinking in based on context)
Actually what you seem to have forgotten is that telling someone what personal traits they need to work on "fixing" is a pretty dickish thing to do. :)
May 17 2019
prev sibling parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Wed, May 15, 2019 at 06:31:57PM +0000, NaN via Digitalmars-d wrote:
 On Tuesday, 14 May 2019 at 17:44:17 UTC, H. S. Teoh wrote:
 On Mon, May 13, 2019 at 08:35:39AM +0000, Martin Tschierschke via
 Digitalmars-d wrote:
 On Friday, 10 May 2019 at 00:33:04 UTC, H. S. Teoh wrote: [...]
 I haven't fully thought through this yet, but the basic concept
 is that there should be *no need* to distinguish between
 compile-time and runtime in the general case, barring a small
 number of situations where a decision has to be made.
[...] I was thinking and working in the same direction, when using regEx you can use the runtime (RT) or the compile time (CT) version, but why not let the compiler make the decisions?
It would be great if the compiler could make all of the decisions, but I think at some point, some level of control would be nice or even necessary.
If you envisage a regular if statement being able to be both CT/RT depending on whether the value inside its brackets is known at CT or not what do you do about whether it introduces a new scope or not? Static if does not, but regular if does, if you want them unified something has to give.
[...] I think the simplest solution would be for regular if to always introduce a new scope, and static if behaves as before. Due to their different semantics on the surrounding code, blindly unifying them would probably be a bad idea.

But taking a step back, there are really two (perhaps more) usages of static if:

(1) to statically select a branch of code based on a known CT value, or
(2) to statically inject declarations into the surrounding code based on some CT condition.

Case (1) is relatively straightforward to unify with regular if.

Case (2) would seem to be solely in the domain of static if, and I don't see much point in trying to unify it with regular if, even if such were possible. For example, static if can appear outside a function body, whereas regular if can't. Static if can therefore be used to choose a different set of declarations at the module level. If we were to hypothetically unify that with regular if, that would mean that we have to somehow support switching between two or more sets of likely-conflicting declarations at *runtime*. I don't see any way of making that happen without creating a huge mess, unclear semantics, and hard-to-understand code -- not to mention the result is unlikely to be very useful.

So I'd say static if in the sense of (2) should remain as-is, since it serves a unique purpose that isn't subsumed by regular if. Static if in the sense of (1), however, can probably be unified with regular if.

T

-- 
Genius may have its limitations, but stupidity is not thus handicapped. -- Elbert Hubbard
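To make the two usages concrete, a small D sketch (the switch, aliases and sizes are invented purely for illustration):

// Usage (2): static if outside a function body, injecting declarations.
// There is no runtime counterpart: one set of declarations simply exists.
enum bool smallBuild = true;        // hypothetical build-level switch

static if (smallBuild)
{
    alias Index = ubyte;
    enum size_t tableSize = 256;
}
else
{
    alias Index = size_t;
    enum size_t tableSize = 65_536;
}

// Usage (1): selecting a branch on a value that happens to be known at CT.
// This is the case that could, in principle, be unified with a regular if.
size_t chunkSize()
{
    static if (tableSize <= 256)
        return 16;
    else
        return 1024;
}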
May 15 2019
parent NaN <divide by.zero> writes:
On Wednesday, 15 May 2019 at 19:09:27 UTC, H. S. Teoh wrote:
 On Wed, May 15, 2019 at 06:31:57PM +0000, NaN via Digitalmars-d 
 wrote:

 Case (1) is relatively straightforward to unify with regular if.

 Case (2) would seem to be solely in the domain of static if, 
 and I don't see much point in trying to unify it with regular 
 if, even if such were possible.  For example, static if can 
 appear outside a function body, whereas regular if can't. 
 Static if can therefore be used to choose a different set of 
 declarations at the module level. If we were to hypothetically 
 unify that with regular if, that would mean that we have to 
 somehow support switching between two or more sets of 
 likely-conflicting declarations at *runtime*.  I don't see any 
 way of making that happen without creating a huge mess, unclear 
 semantics, and hard-to-understand code -- not to mention the 
 result is unlikely to be very useful.

 So I'd say static if in the sense of (2) should remain as-is, 
 since it serves a unique purpose that isn't subsumed by regular 
 if.  Static if in the sense of (1), however, can probably be 
 unified with regular if.
So what you really have is (1), which is an optimisation problem: enable parameters that can be either CT or RT, so that the same code can be used in each instance, and the compiler can do dead code elimination when a parameter is CT. Or (2), which is conditional compilation, which stays as it is. So it is more a question of unifying CT & RT parameters than it is maybe unifying CT & RT language constructs. In fact maybe it doesn't do anything for the latter? So it comes down to enabling parameters that can do both CT and RT, so the compiler can do dead code elimination on those parameters if CT. Because at the moment the choice of CT/RT is fixed by the function being called.
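A minimal D sketch of that status quo (function and parameter names invented here): the author of the function, not the caller, fixes whether the parameter is CT or RT, so the logic ends up written twice.

// Today the same logic has to be duplicated if both forms are wanted:
int scaledCT(int factor)(int x) { return x * factor; } // factor fixed at compile time
int scaledRT(int x, int factor) { return x * factor; } // factor only known at run time

void main()
{
    auto a = scaledCT!4(10);   // caller must opt in to the CT form via !()
    auto b = scaledRT(10, 4);  // or use the RT twin; the choice is baked into the API
}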
May 16 2019
prev sibling parent Alex <AJ gmail.com> writes:
On Monday, 13 May 2019 at 08:35:39 UTC, Martin Tschierschke wrote:
 On Friday, 10 May 2019 at 00:33:04 UTC, H. S. Teoh wrote:
 [...]

 I haven't fully thought through this yet, but the basic 
 concept is that there should be *no need* to distinguish 
 between compile-time and runtime in the general case, barring 
 a small number of situations where a decision has to be made.
[...] I was thinking and working in the same direction: when using regEx you can use the runtime (RT) or the compile-time (CT) version, but why not let the compiler make the decision? The solution was from someone else in the forum, but it resulted in the following code:

auto reg(alias var)()
{
    static if (__traits(compiles, {enum ctfeFmt = var;}))
    {
        // "Promotion" to compile-time value
        enum ctfeReg = var;
        pragma(msg, "ctRegex used");
        return ctRegex!ctfeReg;
    }
    else
    {
        pragma(msg, "regex used");
        return regex(var);
    }
}

The trick is the alias var, which can then be checked with __traits(compiles, {enum ctfeFmt = var;}): is it an immutable value present at compile time or not? Now you may use

auto myregex = reg!(Regex_expression);

at every place in your code, and depending on whether it compiles to the ctfeReg or not, the version is selected. So there is a way to promote a CT variable to runtime; if this were also possible with a runtime value, you would get rid of the '!' syntax. Regards mt.
The compiler should do this all internally. All it has to do is keep track of a simple "known" state for each variable. Any expression or line that is compiled uses other things. The compiler has to know these things, and it keeps track of them in the AST. But it should also know whether those things are known. That is what CTFE basically does, and also how things get optimized... But somewhere it breaks down. It most likely does this because of the way things are specified in D as being RT, e.g. ctRegex and regex... two different things that break the unity. The compiler isn't going to know they are related, and hence why you had to build a relation. This is a problem of the language itself.

All programming should start as if it were CT, with RT being an after-effect. One would even reason about obvious RT things like readln as if they were compile time. One should even think of program execution as just finishing the compilation process. We should even have compilers written with this mentality and languages designed with it. It puts the proper emphasis on writing code. One could say that programming language design got it backwards... we started with RT programming when it should have been about CT. [Of course, when dealing with punch cards it's hard to get it right]

By starting with CT and assuming everything is CT until it is proven not to be, the compiler knows it can optimize things (until it can't). When one starts with RT and assumes everything is RT, there is no optimization that can take place. Obviously most languages and compilers use some blend. Functional languages tend to focus more on the CT side and then have special cases, like IO and state, that deal with the RT.

A prototypical example would be readln. Why is that RT? Does it have to be? What if you want to mock up a program? Then the compiler could optimize it out! I don't mean

// string s = readln();
string s = "mock";

but rather

string s = readln();

where, at some point inside readln, it reads a mock file at compile time because of a flag and returns "mock" (and we don't have to modify source code to switch from RT to CT). As one digs down into readln, there will eventually be some code that cannot be compiled down into a known state at compile time. The compiler should be able to figure this out... e.g., if there is an OS call, and all OS calls are marked as "RT" (but we could replace those calls with a mocking set that is then known at compile time, and then it could be reduced).

All such things, though, require the compiler to be aware... compilers are far more aware than ever before, and will only get better over time as people make improvements, but it does require people to realize what can be improved.
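A rough approximation of that idea in today's D, purely as a sketch (MockInput is a made-up version identifier and readLine a made-up wrapper): the call site stays the same, and only a compiler switch decides which body exists.

string readLine()
{
    version (MockInput)
    {
        // Selected with -version=MockInput: the result is a literal the
        // compiler can treat as known, so callers can even run in CTFE.
        return "mock";
    }
    else
    {
        // The real thing: an OS-backed call, only resolvable at runtime.
        import std.stdio : readln;
        import std.string : chomp;
        return readln().chomp();
    }
}

void main()
{
    string s = readLine();   // same source either way
}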
May 15 2019
prev sibling next sibling parent reply Luís Marques <luis luismarques.eu> writes:
On Friday, 10 May 2019 at 00:33:04 UTC, H. S. Teoh wrote:
 Skimmed over Luís Marques' slides on "compile-time types", and 
 felt compelled to say the following in response:  I think we're 
 still approaching things from the wrong level of abstraction.  
 This whole divide between compile-time and runtime is IMO an 
 artificial one, and one that we must transcend in order to 
 break new ground.
Thanks for the feedback. I think your counter-proposal makes some sense. Whether it is the right way to attack the problem I don't know. My presentation was fairly high-level, but as part of my prototyping efforts I've gone over a lot of the details, the problems they would create, how they could be solved and so on. When I read your counter-proposal my knee jerk reaction was that it would address some deficiencies with my approach but also introduce other difficult practical problems. I'll think more about it, but in the end I believe the only way to adequately gauge the pros and cons of each approach is to actually experiment with them. That's something I'll be doing, albeit at a fairly slow pace. BTW, thank you also for the Wiki article, I think it's a little gem.
May 13 2019
parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Mon, May 13, 2019 at 10:26:11AM +0000, Luís Marques via Digitalmars-d wrote:
 On Friday, 10 May 2019 at 00:33:04 UTC, H. S. Teoh wrote:
 Skimmed over Luís Marques' slides on "compile-time types", and felt
 compelled to say the following in response:  I think we're still
 approaching things from the wrong level of abstraction.  This whole
 divide between compile-time and runtime is IMO an artificial one,
 and one that we must transcend in order to break new ground.
Thanks for the feedback. I think your counter-proposal makes some sense. Whether it is the right way to attack the problem I don't know. My presentation was fairly high-level, but as part of my prototyping efforts I've gone over a lot of the details, the problems they would create, how they could be solved and so on. When I read your counter-proposal my knee jerk reaction was that it would address some deficiencies with my approach but also introduce other difficult practical problems.
[...] I'd love to hear what difficult practical problems you have in mind, if you have any specific examples.

The underlying idea behind my proposal is to remove artificial distinctions and unify CT and RT so that, as far as is practical, they are symmetric to each other. Currently, the asymmetry between CT and RT leads to a lot of incidental complexity: we have std.algorithm.filter and std.meta.Filter, std.algorithm.map and std.meta.staticMap, and so on, which are needless duplications that exist only because of the artificial distinction between CT and RT. Also, UFCS only applies to RT values, so to chain std.meta.Filter we'd have to write ugly nested expressions where the RT counterpart is already miles ahead in terms of readability and writability. Then there's the ugly-looking !() vs. () split between CT and RT arguments. I'll admit !() was a very clever invention in the early days of D when templates were first introduced -- it's definitely much better than C++'s nasty, ambiguous <> syntax. But at the end of the day, it's still an artifact that only arose out of the artificial distinction between CT and RT parameters.

Looking forward, one asks: is this CT/RT distinction a *necessary* one? Certainly, at some point the compiler must know whether something is available at compile time or should be deferred to runtime. But is this a decision that must be made *every single time* you declare a bunch of parameters? Is it a decision so important that it has to be made *right then and there*? Perhaps not. Perhaps we can do better by passing this decision to the caller, who probably has a better idea of what context we're being called in, and who can make a more meaningful decision. Hence the idea of unifying CT and RT (to the extent possible) by not differentiating between them until necessary.


T

-- 
Why waste time learning, when ignorance is instantaneous? -- Hobbes, from Calvin & Hobbes
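P.S. To make the duplication concrete, here is the same "keep only what matches" idea written twice against today's Phobos (untested snippet, but only standard modules); the RT version chains nicely with UFCS, while the CT version has to nest:

import std.algorithm : filter;
import std.array : array;
import std.meta : AliasSeq, Filter;
import std.traits : isIntegral;

void main()
{
    // Runtime values: UFCS, reads left to right.
    auto evens = [1, 2, 3, 4, 5].filter!(x => x % 2 == 0).array;
    assert(evens == [2, 4]);

    // Compile-time types: no UFCS, so the calls nest inside out.
    alias Ints = Filter!(isIntegral, AliasSeq!(int, double, long, string));
    static assert(Ints.length == 2 && is(Ints[0] == int) && is(Ints[1] == long));
}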
May 14 2019
parent Timon Gehr <timon.gehr gmx.ch> writes:
On 14.05.19 19:58, H. S. Teoh wrote:
 The underlying idea behind my proposal is to remove artificial
 distinctions and unify CT and RT so that, as far as is practical, they
 are symmetric to each other.
As long as templates are not statically type checked, this does not seem particularly appealing. The main argument Andrei uses against typed templates is precisely that type checking them is not necessary because they are expanded at compile time. The vision itself makes sense, of course; it's basically system λ* [1].

[1] https://en.wikiversity.org/wiki/Foundations_of_Functional_Programming/Pure_type_systems#%CE%BB*_(na%C3%AFve_type_theory)
May 17 2019
prev sibling parent "Nick Sabalausky (Abscissa)" <SeeWebsiteToContactMe semitwist.com> writes:
On 5/9/19 8:33 PM, H. S. Teoh wrote:
 [...]
+1, pretty much the same conclusion I'd come to when mulling it over before. I haven't looked at the talk or slides yet though, and I'm def. interested in seeing that take on it, too.
May 15 2019