
digitalmars.D - Programming Language for Games, part 3

reply "bearophile" <bearophileHUGS lycos.com> writes:
Third part of the "A Programming Language for Games", by Jonathan 
Blow:
https://www.youtube.com/watch?v=UTqZNujQOlA

Discussions:
http://www.reddit.com/r/programming/comments/2kxi89/jonathan_blow_a_programming_language_for_games/

His language seems to disallow comparisons of different types:

void main() {
     int x = 10;
     assert(x == 10.0); // Refused.
}


I like the part about compile-time tests for printf:
http://youtu.be/UTqZNujQOlA?t=38m6s

The same strategy is used to validate game data statically:
http://youtu.be/UTqZNujQOlA?t=55m12s

A screenshot for the printf case:
http://oi57.tinypic.com/2m5b680.jpg

He writes a function that is called to verify, at compile time, the 
arguments of another function. This does the same thing I am asking 
for with a "static precondition", but it has some disadvantages and 
advantages. One advantage is that the testing function doesn't need 
to be in the same module as the function, unlike static enums. So you 
can have the function compiled separately (separate compilation). 
Perhaps it's time for a DIP.

Bye,
bearophile
Nov 01 2014
next sibling parent "Ola Fosheim Grøstad" writes:
On Saturday, 1 November 2014 at 11:31:32 UTC, bearophile wrote:
 Third part of the "A Programming Language for Games", by 
 Jonathan Blow:
 https://www.youtube.com/watch?v=UTqZNujQOlA
Thanks for the link. I only have time to skim it, but I think the 
region-based allocation that he was concerned about in the previous 
talk might be handled with some kind of tuple-magic?

bike := uniqptr_tuple<Frame,Wheel,Wheel>(myallocator)
// => uniq_ptr to tupleof(frameinstance, wheelinstance, wheelinstance)
Nov 01 2014
prev sibling next sibling parent reply Paulo Pinto <pjmlp progtools.org> writes:
On 01.11.2014 at 12:31, bearophile wrote:
 Third part of the "A Programming Language for Games", by Jonathan Blow:
 https://www.youtube.com/watch?v=UTqZNujQOlA

 Discussions:
 http://www.reddit.com/r/programming/comments/2kxi89/jonathan_blow_a_programming_language_for_games/


 His language seems to disallow comparisons of different types:

 void main() {
      int x = 10;
      assert(x == 10.0); // Refused.
 }


 I like the part about compile-time tests for printf:
 http://youtu.be/UTqZNujQOlA?t=38m6s

 The same strategy is used to validate game data statically:
 http://youtu.be/UTqZNujQOlA?t=55m12s

 A screenshot for the printf case:
 http://oi57.tinypic.com/2m5b680.jpg

 He writes a function that is called to verify at compile-time the
 arguments of another function. This does the same I am asking for a
 "static precondition", but it has some disadvantages and advantages. One
 advantage is that the testing function doesn't need to be in the same
 module as the function, unlike static enums. So you can have the
 function compiled (separated compilation). Perhaps it's time for DIP.

 Bye,
 bearophile
Just started watching the beginning, will watch the rest later.

I find it interesting that he also bases part of the language on how 
the ML languages look. So it seems that being C-like is out for 
language design, as most modern languages are following ML-like 
grammars.

Another trend, which I find positive, is how many people are now 
(finally!) acknowledging that C's spread into the industry was after 
all not that good, in terms of bugs per line of code.

Now we need another 30 years until D, Rust, Swift, Nim, <place 
language name here>, get to replace C and C++.

--
Paulo
Nov 01 2014
parent reply "Ola Fosheim Grøstad" writes:
On Saturday, 1 November 2014 at 17:17:34 UTC, Paulo Pinto wrote:
 Another trend, which I find positive, is how many people are 
 now (finally!) assuming that C widespread into the industry was 
 after all not that good, in terms of bugs/line of code.

 Now we need another 30 years until D, Rust, Swift, Nim, <place 
 language name here>, get to replace C and C++.
Jonathan referenced Mike Acton, who, when asked about C++ vs C, said 
he preferred C and that using C++ was cultural:

http://www.youtube.com/watch?v=rX0ItVEVjHc&feature=youtu.be&t=1m23s

He also stated time and time again that the hardware is the platform. 
I think that aspect is missing a bit from D, unfortunately. But in 30 
years hardware will have changed a lot…
Nov 01 2014
parent reply Paulo Pinto <pjmlp progtools.org> writes:
On 01.11.2014 at 22:20, "Ola Fosheim Grøstad" 
<ola.fosheim.grostad+dlang gmail.com> wrote:
 On Saturday, 1 November 2014 at 17:17:34 UTC, Paulo Pinto wrote:
 Another trend, which I find positive, is how many people are now
 (finally!) assuming that C widespread into the industry was after all
 not that good, in terms of bugs/line of code.

 Now we need another 30 years until D, Rust, Swift, Nim, <place
 language name here>, get to replace C and C++.
 Jonathan referenced Mike Acton, who, when asked about C++ vs C, said 
 he preferred C and that using C++ was cultural:
I mean in terms of unsafe code and the amount of money spent on 
research and bug fixing that could have been avoided if C weren't as 
it is.

I wouldn't be that vocal about C if:

- arrays were bounds checked (just use a compiler flag and dataflow 
to remove them, like any sane language)
- enums were strongly typed
- it had reference parameters
- it had namespaces or real modules
- it had no implicit type conversions
- it had a sane macro system

But I guess D already covers it...

--
Paulo
Nov 01 2014
next sibling parent Paulo Pinto <pjmlp progtools.org> writes:
On 01.11.2014 at 23:23, Paulo Pinto wrote:
 On 01.11.2014 at 22:20, "Ola Fosheim Grøstad" 
 <ola.fosheim.grostad+dlang gmail.com> wrote:
 On Saturday, 1 November 2014 at 17:17:34 UTC, Paulo Pinto wrote:
 Another trend, which I find positive, is how many people are now
 (finally!) assuming that C widespread into the industry was after all
 not that good, in terms of bugs/line of code.

 Now we need another 30 years until D, Rust, Swift, Nim, <place
 language name here>, get to replace C and C++.
 Jonathan referenced Mike Acton, who, when asked about C++ vs C, said 
 he preferred C and that using C++ was cultural:
 I mean in terms of unsafe code and the amount of money spent on 
 research and bug fixing that could have been avoided if C weren't as 
 it is.

 I wouldn't be that vocal about C if:

 - arrays were bounds checked (just use a compiler flag and dataflow 
 to remove them, like any sane language)
 - enums were strongly typed
 - it had reference parameters
 - it had namespaces or real modules
 - it had no implicit type conversions
 - it had a sane macro system

 But I guess D already covers it...

 --
 Paulo
Forgot to mention, proper strings.
Nov 01 2014
prev sibling parent reply "bearophile" <bearophileHUGS lycos.com> writes:
Paulo Pinto:

 - arrays were bound checked (just use a compiler flags and 
 dataflow to remove them like any sane language)
D removes very few bounds checks. No data flow is used for this.
 - enums were strong typed
D enums are only half strongly typed.
 - had namespaces or real modules
D's module system has holes like Swiss cheese. And its design is rather simplistic.
 - no implicit type conversions
D has a large part of the bad implicit type conversions of C.
 - had a sane macro system
There's no macro system in D. Mixins are an improvement over the preprocessor, but they lead to messy code.
 But I guess D already covers it...
D solves only part of the problems. And you have not listed several 
important things. There's still a long way to go before we have good 
enough system languages.

Bye,
bearophile
Nov 01 2014
next sibling parent reply Paulo Pinto <pjmlp progtools.org> writes:
On 01.11.2014 at 23:32, bearophile wrote:
 Paulo Pinto:

 - arrays were bound checked (just use a compiler flags and dataflow to
 remove them like any sane language)
D removes very little bound checks. No data flow is used for this.
 - enums were strong typed
D enums are only half strongly typed.
 - had namespaces or real modules
D module system has holes like Swiss cheese. And its design is rather simplistic.
 - no implicit type conversions
D has a large part of the bad implicit type conversions of C.
 - had a sane macro system
There's no macro system in D. Mixins are an improvement over the preprocessor, but they lead to messy code.
 But I guess D already covers it...
D solves only part of the problems. And you have not listed several important things. There's still a lot of way to go to have good enough system languages. Bye, bearophile
Maybe I should spend more time playing around with D, instead of just 
advocating it.

However, JVM/.NET languages with a grain of C++ salt for JNI/PInvoke 
are what my employer and our customers care about, so I can't justify 
any alternatives to our customers.

As for the issues, I was being nice to C, as those are the issues I 
find most problematic.

--
Paulo
Nov 01 2014
parent reply "Ola Fosheim Grøstad" writes:
On Saturday, 1 November 2014 at 22:50:27 UTC, Paulo Pinto wrote:
 However JVM/.NET languages with a grain of C++ salt for 
 JNI/PInvoke, are what my employer and our customers care about, 
 so I can't justify to our customers any alternatives.
I don't think anyone would say that C/C++ is an alternative to 
JVM/.NET. Which is what I find a bit frustrating about the D forums: 
whenever system-level programming is discussed, people are not really 
arguing from a performance perspective, but then I don't think they 
really need C/C++/D…

Anyway, I believe you can turn on bounds checks with some C compilers 
if you want them, but I don't think anyone who is looking for 
performance wants them in a release build.

Related to game programming: I noticed Jonathan being very negative 
about the web and web programming. Which is not without merit, but 
the funny thing is that the only thing keeping the web from being a 
solid gaming platform is the lack of a payment service with a very 
low threshold.

I see some clear benefits in browsers'/JavaScript's ability to 
compile directly to machine language on the fly. Just see what the 
demo scene is doing with code generators. So yeah, the code is 
slower, but perhaps not with skillful use of it. Maybe we'll see 4K 
demo compos for asm.js and WebGL.

Btw, I read the Oberon-07 spec the other day, and interestingly Wirth 
included add/subtract with carry. So Oberon has an edge there… :)
Nov 01 2014
next sibling parent "Ola Fosheim Grøstad" writes:
On Saturday, 1 November 2014 at 23:04:04 UTC, Ola Fosheim Grøstad 
wrote:
 I see some clear benefits with browsers/javascript's ability to 
 compile directly to machine language on the fly. Just see what 
 the demo scene are doing with code generators. So yeah, the 
 code is slower, but perhaps not skillful use of it. Maybe we'll 
 see 4K demo compos for asm.js and WebGL.
Oh well, I am out of touch with developments. Turns out they have 1K 
demo compos for JavaScript:

http://www.pouet.net/party.php?which=1570&when=2014
Nov 01 2014
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 11/1/2014 4:04 PM, "Ola Fosheim Grøstad" 
<ola.fosheim.grostad+dlang gmail.com>" wrote:
 Anyway, I believe you can turn on bound checks with some C-compilers if you
want
 it,
Won't work, because C arrays decay to pointers whenever passed to a 
function, so you lose all hope of bounds checking except in the most 
trivial of cases.

http://www.drdobbs.com/architecture-and-design/cs-biggest-mistake/228701625
Nov 01 2014
parent reply "Ola Fosheim Grøstad" writes:
On Sunday, 2 November 2014 at 00:47:16 UTC, Walter Bright wrote:
 On 11/1/2014 4:04 PM, "Ola Fosheim Grøstad" 
 <ola.fosheim.grostad+dlang gmail.com>" wrote:
 Anyway, I believe you can turn on bound checks with some 
 C-compilers if you want
 it,
Won't work, because C arrays decay to pointers whenever passed to a function, so you lose all hope of bounds checking except in the most trivial of cases.
There are bounds-checking extensions to GCC.
Nov 01 2014
next sibling parent reply "Ola Fosheim Grøstad" writes:
On Sunday, 2 November 2014 at 00:56:37 UTC, Ola Fosheim Grøstad 
wrote:
 On Sunday, 2 November 2014 at 00:47:16 UTC, Walter Bright wrote:
 On 11/1/2014 4:04 PM, "Ola Fosheim Grøstad" 
 <ola.fosheim.grostad+dlang gmail.com>" wrote:
 Anyway, I believe you can turn on bound checks with some 
 C-compilers if you want
 it,
Won't work, because C arrays decay to pointers whenever passed to a function, so you lose all hope of bounds checking except in the most trivial of cases.
There are bounds-checking extensions to GCC.
And papers: https://cseweb.ucsd.edu/~wchuang/HiPEAC-07-TaintBounds.pdf http://www3.imperial.ac.uk/pls/portallive/docs/1/18619746.PDF And projects: http://sourceforge.net/projects/boundschecking/
Nov 01 2014
next sibling parent reply "Ola Fosheim Grøstad" writes:
More papers on C bounds checking:

http://llvm.org/pubs/2006-05-24-SAFECode-BoundsCheck.html

Bounds checking on flight control software for Mars expedition:

http://ti.arc.nasa.gov/m/profile/ajvenet/pldi04.pdf
Nov 01 2014
parent reply Paulo Pinto <pjmlp progtools.org> writes:
On 02.11.2014 at 02:23, "Ola Fosheim Grøstad" 
<ola.fosheim.grostad+dlang gmail.com> wrote:
 More papers on C bounds checking:

 http://llvm.org/pubs/2006-05-24-SAFECode-BoundsCheck.html

 Bounds checking on flight control software for Mars expedition:

 http://ti.arc.nasa.gov/m/profile/ajvenet/pldi04.pdf
The amount of money that went into such a (bad) design decision...

And it won't stop bleeding as long as C and C++ exist.
Nov 02 2014
parent "Ola Fosheim Grøstad" writes:
On Sunday, 2 November 2014 at 07:29:25 UTC, Paulo Pinto wrote:
 The amount of money that went into such (bad) design decision...

 And it won't stop bleeding so long C and C++ exist.
Yes, that is true (if we ignore esoteric C dialects that add safer 
features). Ada is a better solution if you want reliable software.

On the plus side: the effort that goes into semantic analysis of C 
probably brings about some knowledge that is generally useful. But it 
is expensive, I agree.
Nov 02 2014
prev sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 11/1/2014 6:05 PM, "Ola Fosheim Grøstad" 
<ola.fosheim.grostad+dlang gmail.com>" wrote:
 On Sunday, 2 November 2014 at 00:56:37 UTC, Ola Fosheim Grøstad wrote:
 On Sunday, 2 November 2014 at 00:47:16 UTC, Walter Bright wrote:
 On 11/1/2014 4:04 PM, "Ola Fosheim Grøstad"
 <ola.fosheim.grostad+dlang gmail.com>" wrote:
 Anyway, I believe you can turn on bound checks with some C-compilers if you
 want
 it,
Won't work, because C arrays decay to pointers whenever passed to a function, so you lose all hope of bounds checking except in the most trivial of cases.
There are bounds-checking extensions to GCC.
I proposed a C extension, too. http://www.drdobbs.com/architecture-and-design/cs-biggest-mistake/228701625
Nov 01 2014
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 11/1/2014 5:56 PM, "Ola Fosheim Grøstad" 
<ola.fosheim.grostad+dlang gmail.com>" wrote:
 On Sunday, 2 November 2014 at 00:47:16 UTC, Walter Bright wrote:
 On 11/1/2014 4:04 PM, "Ola Fosheim Grøstad"
 <ola.fosheim.grostad+dlang gmail.com>" wrote:
 Anyway, I believe you can turn on bound checks with some C-compilers if you
want
 it,
Won't work, because C arrays decay to pointers whenever passed to a function, so you lose all hope of bounds checking except in the most trivial of cases.
There are bounds-checking extensions to GCC.
Yup, -fbounds-check, and it only works for local arrays. Once the 
array is passed to a function, poof! no more bounds checking.

http://www.delorie.com/gnu/docs/gcc/gcc_13.html
Nov 01 2014
parent reply "Ola Fosheim Grøstad" writes:
On Sunday, 2 November 2014 at 01:43:32 UTC, Walter Bright wrote:
 There are bounds-checking extensions to GCC.
Yup, -fbounds-check, and it only works for local arrays. Once the array is passed to a function, poof! no more bounds checking.
No. Please read the links. There are solutions that do full checking by checking every pointer access at runtime. And there are other solutions.
Nov 01 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 11/1/2014 11:13 PM, "Ola Fosheim Grøstad" 
<ola.fosheim.grostad+dlang gmail.com>" wrote:
 On Sunday, 2 November 2014 at 01:43:32 UTC, Walter Bright wrote:
 There are bounds-checking extensions to GCC.
Yup, -fbounds-check, and it only works for local arrays. Once the array is passed to a function, poof! no more bounds checking.
No. Please read the links. There are solutions that do full checking by checking every pointer access at runtime. And there are other solutions.
Yeah, I looked at them. For example, 
http://www3.imperial.ac.uk/pls/portallive/docs/1/18619746.PDF has the 
money quote:

"The 'A' series, which is a group of classic artificial benchmarks, 
and the 'B' series, which is a selection of CPU-intensive real-world 
code, performed particularly poorly, ranging from several hundred to 
several thousand times slower."

This is not a solution. C has successfully resisted all attempts to 
add bounds checking.
Nov 01 2014
parent reply "Ola Fosheim Grøstad" writes:
On Sunday, 2 November 2014 at 06:39:14 UTC, Walter Bright wrote:
 This is not a solution. C has successfully resisted all 
 attempts to add bounds checking.
That was a student project, but the paper presented an overview of 
techniques, which is why I linked to it. A realistic solution is 
probably 10-50 times slower on regular hardware and is suitable for 
debugging, and you can probably improve it a lot using global 
semantic analysis.

To quote the NASA paper's conclusion:

«We have shown in this paper that the array bound checking of large C 
programs can be performed with a high level of precision (around 80%) 
in nearly the same time as compilation. The key to achieve this 
result is the specialization of the analysis towards a particular 
family of software.»

So no, C has not resisted all attempts at adding bounds checking. 
People are doing it.
Nov 02 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 11/2/2014 12:06 AM, "Ola Fosheim Grøstad" 
<ola.fosheim.grostad+dlang gmail.com>" wrote:
 On Sunday, 2 November 2014 at 06:39:14 UTC, Walter Bright wrote:
 This is not a solution. C has successfully resisted all attempts to add bounds
 checking.
That was a student project, but the paper presented an overview of techniques which is why I linked to it.
Sorry, I had presumed you intended the links to be practical, workable solutions.
 A realistic solution is probably at 10-50 times
 slower on regular hardware and is suitable for debugging, and you can probably
 improve it a lot using global semantic analysis.

 To quote the Nasa paper's conclusion:

 «We have shown in this paper that the array bound checking of large C programs
 can be performed with a high level of precision (around 80%) in nearly the same
 time as compilation. The key to achieve this result is the specialization of
the
 analysis towards a particular family of software.»

 So no, C has not resisted all attempts at adding bounds checking.

 People are doing it.
10 to 50 times slower is not a solution. If your app can stand such a 
degradation, it would be better off written in Python. If there were 
a practical solution for C, it likely would have been incorporated 
into clang and gcc.
Nov 02 2014
parent reply "Ola Fosheim Grøstad" writes:
On Sunday, 2 November 2014 at 08:22:08 UTC, Walter Bright wrote:
 10 to 50 times slower is not a solution. If your app can stand 
 such a degradation, it would be better off written in Python. 
 If there was a practical solution for C, it likely would have 
 been incorporated into clang and gcc.
Python is a dynamic language… so I don't think it is more stable than 
C at runtime, but the consequences are less severe.

For a practical solution, this paper suggests only checking bounds 
when you write to an array, as a trade-off:

http://www4.comp.polyu.edu.hk/~csbxiao/paper/2005/ITCC-05.pdf

There are also some proprietary C compilers for embedded programming 
that claim to support bounds checks, but I don't know how far they go 
or whether they require language extensions/restrictions.
Nov 02 2014
parent "Ola Fosheim Grøstad" writes:
On Sunday, 2 November 2014 at 08:39:26 UTC, Ola Fosheim Grøstad 
wrote:
 There are also some proprietary C compilers for embedded 
 programming that claim to support bound checks, but I don't 
 know how far they go or if they require language 
 extensions/restrictions.
Btw, related to this are the efforts on bounded model checking:

http://llbmc.org/files/papers/VSTTE12.pdf

LLBMC apparently takes LLVM IR as input and checks the program using 
an SMT solver, basically the same type of solver that proof systems 
use. This is of course a more challenging problem than arrays, as it 
aims to check a lot of things at the cost of putting some limits on 
recursion depth etc.:

- Arithmetic overflow and underflow
- Logic or arithmetic shift exceeding the bit-width
- Memory access at invalid addresses
- Invalid memory allocation
- Invalid memory de-allocation
- Overlapping memory regions in memcpy
- Memory leaks
- User-defined assertions
- Insufficient specified bounds for the checker
- C assert()
Nov 02 2014
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 11/1/2014 3:32 PM, bearophile wrote:
 Paulo Pinto:

 - arrays were bound checked (just use a compiler flags and dataflow to remove
 them like any sane language)
D removes very little bound checks. No data flow is used for this.
This is false.
 - enums were strong typed
D enums are only half strongly typed.
This is on purpose, because otherwise about half of what enums are used for would no longer be possible - such as bit flags. A strongly typed enum can be made using a struct.
 - had namespaces or real modules
D module system has holes like Swiss cheese. And its design is rather simplistic.
Oh come on.
 - no implicit type conversions
D has a large part of the bad implicit type conversions of C.
D has removed implicit conversions that result in data loss. Removing the rest would force programs to use casting instead, which is far worse.
 - had a sane macro system
There's no macro system in D. Mixins are an improvement over the preprocessor, but they lead to messy code.
D doesn't have AST macros for very deliberate reasons, discussed at length here. It is not an oversight.
 But I guess D already covers it...
D solves only part of the problems. And you have not listed several important things. There's still a lot of way to go to have good enough system languages.
D does more than any other system language.
Nov 01 2014
next sibling parent reply "bearophile" <bearophileHUGS lycos.com> writes:
Walter Bright:

Thank you for your answers.

 D removes very little bound checks. No data flow is used for 
 this.
This is false.
Oh, good, what are the bounds checks removed by the D front-end? I 
remember only one case (and I wrote the enhancement request for it). 
Recently I argued that we should add a little more removal of 
redundant bounds checks. But probably the "GC and C++" mantra is more 
urgent than everything else.
 This is on purpose, because otherwise about half of what enums 
 are used for would no longer be possible - such as bit flags.
On the other hand we could argue that bit flags are a sufficiently 
different purpose to justify an annotation (as in C#) or a Phobos 
struct (like the one for bitfields) that implements them with a mixin 
(there is a pull request for Phobos, but I don't know how good it is).
 D module system has holes like Swiss cheese. And its design is 
 rather simplistic.
Oh come on.
ML modules are vastly more refined than D modules (and more refined 
than modules in most other languages). I am not asking to put 
ML-style modules in D (because ML modules are too complex for 
C++/Python programmers, and probably even unnecessary given the kind 
of template-based generics that D has), but arguing that D modules 
are refined is unsustainable. (And I still hope Kenji fixes some of 
their larger holes.)
 - no implicit type conversions
D has a large part of the bad implicit type conversions of C.
D has removed implicit conversions that result in data loss. Removing the rest would force programs to use casting instead, which is far worse.
This is a complex situation; there are several things that are 
suboptimal in D's management of implicit casts (one example is the 
signed/unsigned comparison situation). But I agree with you that this 
situation seems to ask for a middle-ground solution. Yet there are 
functional languages without implicit casts (does Rust allow implicit 
casts?); they use two kinds of casts, safe and unsafe. I think the 
size cast that loses bits is still regarded as safe.
 - had a sane macro system
There's no macro system in D. Mixins are an improvement over the preprocessor, but they lead to messy code.
D doesn't have AST macros for very deliberate reasons, discussed at length here. It is not an oversight.
I am not asking for AST macros in D. I was just answering a laundry 
list of things that C doesn't have (I was answering that D doesn't 
have them either).
 But I guess D already covers it...
D solves only part of the problems. And you have not listed several important things. There's still a lot of way to go to have good enough system languages.
D does more than any other system language.
Perhaps this is true (even though Rust is more refined/safer 
regarding memory tracking); that's why I am using D instead of other 
languages, despite all the problems. But fifteen years from now I 
hope to use something much better than D for system programming :-)

Bye,
bearophile
Nov 01 2014
parent Walter Bright <newshound2 digitalmars.com> writes:
On 11/1/2014 6:41 PM, bearophile wrote:
 Walter Bright:

 Thank you for your answers.

 D removes very little bound checks. No data flow is used for this.
This is false.
Oh, good, what are the bound checks removed by the D front-end?
It does some flow analysis based on previous bounds checks.
 This is on purpose, because otherwise about half of what enums are used for
 would no longer be possible - such as bit flags.
On the other hand we could argue that bit flags are a sufficiently different purpose to justify an annotation (as in C#) or a Phobos struct (like for the bitfields) that uses mixin that implements them (there is a pull request for Phobos, but I don't know how much good it is).
More annotations => more annoyance for programmers. Jonathan Blow characterizes this as "friction" and he's got a very good point. Programmers have a limited tolerance for friction, and D must be very careful not to step over the line into being a "bondage and discipline" language that nobody uses.
 D module system has holes like Swiss cheese. And its design is rather
 simplistic.
Oh come on.
ML modules are vastly more refined than D modules (and more refined than modules in most other languages). I am not asking to put ML-style modules in D (because ML modules are too much complex for C++/Python programmers and probably even unnecessary given the the kind of template-based generics that D has), but arguing that D modules are refined is unsustainable. (And I still hope Kenji fixes some of their larger holes).
I didn't say they were "refined", whatever that means. I did take issue with your characterization. I don't buy the notion that more complex is better. Simple and effective is the sweet spot.
 - no implicit type conversions
D has a large part of the bad implicit type conversions of C.
D has removed implicit conversions that result in data loss. Removing the rest would force programs to use casting instead, which is far worse.
This is a complex situation, there are several things that are suboptimal in D management of implicit casts (one example is the signed/unsigned comparison situation).
It is not suboptimal. There are a lot of tradeoffs with this, and it 
has been discussed extensively. D is at a reasonable optimum point 
for this. The implication that this is thoughtlessly thrown together 
against all reason is just not correct.
 I think the size casting that loses bits is still regarded as safe.
It is memory safe.
Nov 01 2014
prev sibling parent "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Sat, Nov 01, 2014 at 05:53:00PM -0700, Walter Bright via Digitalmars-d wrote:
 On 11/1/2014 3:32 PM, bearophile wrote:
Paulo Pinto:
[...]
- no implicit type conversions
D has a large part of the bad implicit type conversions of C.
D has removed implicit conversions that result in data loss. Removing the rest would force programs to use casting instead, which is far worse.
[...]

While D has removed *some* of the most egregious implicit conversions 
in C/C++, there's still room for improvement. For example, D still has 
implicit conversion between signed and unsigned types, which is a 
source of bugs. I argue that using casts to convert between signed and 
unsigned is a good thing, because it highlights the fact that things 
might go wrong, whereas right now the compiler happily accepts 
probably-wrong code like this:

    uint x;
    int y = -1;
    x = y;        // accepted with no error

D also allows direct assignment of non-character types to character 
types and vice versa, which is another source of bugs:

    int x = -1;
    dchar c = x;  // accepted with no error

Again, requiring the use of a cast in this case is a good thing. It 
highlights an operation that may potentially produce wrong or 
unexpected results. It also self-documents the intent of the code, 
rather than leaving it as a trap for the unwary.

On the other hand, D autopromotes arithmetic expressions involving 
sub-int quantities to int, thus requiring ugly casts everywhere such 
arithmetic is employed:

    byte a, b;
    a = b - a;  // Error: cannot implicitly convert expression
                // (cast(int)b - cast(int)a) of type int to byte

You are forced to write this instead:

    byte a, b;
    a = cast(byte)(b - a);

I know the rationale is to prevent inadvertent overflow of byte 
values, but if we're going to be so paranoid about correctness, why 
not also require explicit casts for conversion between 
signed/unsigned, or between character and non-character values, which 
are just as error-prone? Besides, expressions like (b - a) can 
overflow for int values too, yet the compiler happily accepts them 
rather than promoting to long and requiring casts.


T

-- 
What do you mean the Internet isn't filled with subliminal messages? 
What about all those buttons marked "submit"??
Nov 01 2014
prev sibling next sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 11/1/2014 4:31 AM, bearophile wrote:
 His language seems to disallow comparisons of different types:

 void main() {
      int x = 10;
      assert(x == 10.0); // Refused.
 }
More than that, he disallows mixing different integer types, even if no truncation would occur.
 I like the part about compile-time tests for printf:
 http://youtu.be/UTqZNujQOlA?t=38m6s
Unnecessary with D because writeln checks it all. Even so, if printf were a template function, D can also check these things at compile time.
 The same strategy is used to validate game data statically:
 http://youtu.be/UTqZNujQOlA?t=55m12s
D allows extensive use of compile time validation.
 He writes a function that is called to verify at compile-time the arguments of
 another function. This does the same I am asking for a "static precondition",
 but it has some disadvantages and advantages. One advantage is that the testing
 function doesn't need to be in the same module as the function, unlike static
 enums. So you can have the function compiled (separated compilation). Perhaps
 it's time for DIP.
D can run arbitrary functions at compile time even if they are in different files.
Nov 01 2014
prev sibling next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 11/1/2014 4:31 AM, bearophile wrote:
 Third part of the "A Programming Language for Games", by Jonathan Blow:
 https://www.youtube.com/watch?v=UTqZNujQOlA
Jonathan is reinventing D with a somewhat different syntax. Some points
on the video:

* The defer statement works pretty much exactly like D's scope guard:
  http://dlang.org/statement.html#ScopeGuardStatement

* "for n: 1..count" is the same as D's "foreach (n; 1..count)"

* dynamic arrays work pretty much the same as D's

* for over an array in D:
      foreach (it; results) ...

* D does the check function thing using compile time function execution
  to check template arguments.

* D also has full compile time function execution - it's a very heavily
  used feature. It's mainly used for metaprogramming, introspection,
  checking of template arguments, etc. Someone has written a ray tracer
  that runs at compile time in D. D's compile time execution doesn't go
  as far as running external functions in DLLs.

* D has static assert, which runs the code at compile time, too.

The space invaders won't run at compile time, because D's compile time
code running doesn't call external functions in DLLs. I actually suspect
that could be a problematic feature, because it allows the compiler to
execute user supplied code which can do anything to your system - a
great vector for supplying malware to an unsuspecting developer.

The ascii_map function will work, however.
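[For readers unfamiliar with D, the correspondences listed above can be
seen in a few lines. A minimal sketch; the `square` helper and the `log`
array are just for illustration:]

```d
import std.stdio;

// CTFE + static assert: square() is evaluated by the compiler here.
int square(int x) { return x * x; }
static assert(square(7) == 49);

void main()
{
    int[] log;

    // scope guard: D's counterpart of the talk's "defer" statement.
    {
        log ~= 1;
        scope(exit) log ~= 3; // runs when this scope is left
        log ~= 2;
    }
    assert(log == [1, 2, 3]);

    // foreach over an integer range and over a dynamic array.
    int[] results;
    foreach (n; 1 .. 5)
        results ~= square(n);
    foreach (it; results)
        writeln(it);
    assert(results == [1, 4, 9, 16]);
}
```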
Nov 01 2014
next sibling parent reply "bearophile" <bearophileHUGS lycos.com> writes:
Walter Bright:

 * for over an array in D:
     foreach (it; results) ...
D is better here, because it doesn't introduce magically named variables.
 * D does the check function thing using compile time function 
 execution to check template arguments.
This is not nearly enough. I have written a lot about this.
 * D also has full compile time function execution - it's a very 
 heavily used feature. It's mainly used for metaprogramming, 
 introspection, checking of template arguments, etc. Someone has 
 written a ray tracer that runs at compile time in D. D's 
 compile time execution doesn't go as far as running external 
 functions in DLLs.
His "compile time execution" is different and probably better: the whole language is available because it uses an intermediate bytecode. This makes it more flexible and avoids the need to have essentially two different implementations of the language.
 The ascii_map function will work, however.
The ASCII map example doesn't work in D, for reasons I have explained at
length in past posts.

Bye,
bearophile
Nov 01 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 11/1/2014 1:33 PM, bearophile wrote:
 Walter Bright:
 D is better here, because it doesn't introduce magically named variables.
I agree that the implicit variable is not good.
 * D does the check function thing using compile time function execution to
 check template arguments.
This is not nearly enough. I have written a lot about this.
I don't agree. Compile time checking can only be done on compile time
arguments (obviously), and template functions can arbitrarily check
compile time arguments. I know you've suggested extensive data flow
analysis, but Jonathan's language doesn't do that at all, and he neither
mentioned nor alluded to that concept.
 * D also has full compile time function execution - it's a very heavily used
 feature. It's mainly used for metaprogramming, introspection, checking of
 template arguments, etc. Someone has written a ray tracer that runs at compile
 time in D. D's compile time execution doesn't go as far as running external
 functions in DLLs.
His "compile time execution" is different and probably better: the whole language is available because it uses an intermediate bytecode. This makes it more flexible and avoids the need of having essentially two different implementations of the language.
He has two implementations - a bytecode interpreter, and a C code
generator.

D's CTFE restrictions are:

1. no global variables
2. no pointer math
3. no external code execution

While this prevents you from running space invaders at compile time, I
haven't seen much of any practical limitation for things that CTFE is
used for.
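[As a reference point, the CTFE mechanism being discussed needs no
special markup on the function itself; a minimal sketch, with `fib` as a
stand-in for any pure-D computation:]

```d
// The CTFE mechanism in a nutshell: any pure-D function can be forced
// to run at compile time simply by using its result in a compile-time
// context (enum, static assert, template argument, etc.).
ulong fib(ulong n)
{
    return n < 2 ? n : fib(n - 1) + fib(n - 2);
}

enum atCompileTime = fib(20);      // computed by the compiler (CTFE)
static assert(atCompileTime == 6765);

void main()
{
    auto atRunTime = fib(20);      // the very same function, run normally
    assert(atRunTime == atCompileTime);
}
```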
 The ascii_map function will work, however.
The ASCII map example doesn't work in D because of reasons I have explained a lot in past posts.
Like what?
Nov 01 2014
parent reply "bearophile" <bearophileHUGS lycos.com> writes:
Walter Bright:

 I know you've suggested extensive data flow analysis,
The "static enum" (and related ideas) I've suggested require no flow analysis.
 Compile time checking can only be done on compile time 
 arguments (obviously) and template functions can arbitrarily 
 check compile time arguments.
In D it's easy to define a function that you call at compile-time to
test that some compile-time data is well formed, I do this often. This
is a simple example:


import std.range, std.algorithm;

alias Nibble = ubyte; // 4 bits used.
alias SBox = immutable Nibble[16][8];

private bool _validateSBox(in SBox data) @safe pure nothrow @nogc {
    return data[].all!((ref row) => row[].all!(ub => ub < 16));
}

struct GOST(s...) if (s.length == 1 && s[0]._validateSBox) {
    private static generate(ubyte k)() @safe pure nothrow {
        return k87.length.iota
               .map!(i => (s[0][k][i >> 4] << 4) | s[0][k - 1][i & 0xF])
               .array;
    }
    // ...
}

void main() {
    enum SBox cbrf = [
        [ 4, 10,  9,  2, 13,  8,  0, 14,  6, 11,  1, 12,  7, 15,  5,  3],
        [14, 11,  4, 12,  6, 13, 15, 10,  2,  3,  8,  1,  0,  7,  5,  9],
        [ 5,  8,  1, 13, 10,  3,  4,  2, 14, 15, 12,  7,  6,  0,  9, 11],
        [ 7, 13, 10,  1,  0,  8,  9, 15, 14,  4,  6, 12, 11,  2,  5,  3],
        [ 6, 12,  7,  1,  5, 15, 13,  8,  4, 10,  9, 14,  0,  3, 11,  2],
        [ 4, 11, 10,  0,  7,  2,  1, 13,  3,  6,  8,  5,  9, 12, 15, 14],
        [13, 11,  4,  1,  3, 15,  5,  9,  0, 10, 14,  7,  6,  8,  2, 12],
        [ 1, 15, 13,  0,  5,  7, 10,  4,  9,  2,  3, 14,  6, 11,  8, 12]];

    GOST!cbrf g;
    // ...
}


But you can run such compile-time tests only on template arguments, or
on regular arguments of functions/constructors that are forced to run at
compile-time. But for me this is _not_ enough. You can't implement the
printf test example he shows (unless you turn the formatting string into
a template argument of printf; this introduces template bloat and
forbids you to have run-time format strings, or forces you to use two
different syntaxes or to create two different print functions). I'd like
a way to run compile-time tests for the arguments of a regular
function/constructor if they are known at compile-time.
So here I'd like a way to perform a compile-time test of the arguments
of the call of #1 (and to not perform them for the call #2 because its
argument is not a compile-time constant) (note that here both foo calls
are not run at compile-time, and this is good):


void main() {
    auto x = foo(1);  // #1
    int n = bar();
    auto y = foo(n);  // #2
}


Currently if you want to do the same thing in D you have to use
something like:


void main() {
    auto x = foo(ctEval!test(1));  // #1b
}


(Where "test" is a function that tests the argument and "ctEval" is a
little template that forces "test" to run at compile time (here "foo"
itself is not run). This becomes not very practical if you have arrays
of values, or lots of data, etc; it's not *transparent* at all for the
user, and the user can forget to use ctEval).

So this is useful in a large number of cases. If instead of foo() there's
a call to a constructor, we become able to verify "game data" at compile
time where possible while avoiding templates, and running the actual
functions only at run-time.

Probably there are various ways to solve this problem. A long time ago I
suggested an "enum precondition":


int foo(in int x)
enum in(x) {
    // Optional enum pre-condition.
} in {
    // Optional normal pre-condition.
} body {
    // Function body.
}


The idea is that if foo is called with literals or compile-time (enum)
arguments (here just the x argument is required to be known at
compile-time) then it performs the tests inside the enum precondition at
compile-time. If the arguments are run-time values then the enum
precondition is ignored (and eventually the normal pre-condition runs at
run-time. Sometimes the two pre-conditions contain the same code or call
the same testing function).

If you want to implement this idea very well, you can keep the enum
precondition as source code (like with templates) so you can run it at
compile-time when the arguments are known at compile-time.

Bye,
bearophile
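[The "ctEval" helper bearophile refers to is not shown in the post. One
possible sketch - the names `ctEval`, `test`, and `foo` are only
illustrative, and this variant takes the value as a template argument,
which differs slightly from the `ctEval!test(1)` spelling above:]

```d
// One possible sketch of the "ctEval" idiom described above: binding
// the validator's result to an enum forces it to run in CTFE, so an
// invalid literal argument becomes a compile-time error.
template ctEval(alias fun, args...)
{
    enum ctEval = fun(args); // enum initializer => compile-time evaluation
}

int test(int x)
{
    assert(x >= 0 && x < 100, "argument out of range");
    return x; // pass the value through so the call site stays readable
}

int foo(int x) { return x * 2; } // the ordinary run-time function

void main()
{
    auto a = foo(ctEval!(test, 1)); // the literal 1 is validated at compile time
    assert(a == 2);
    // foo(ctEval!(test, 500));     // would not compile: assert fires during CTFE
}
```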
Nov 01 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 11/1/2014 3:26 PM, bearophile wrote:
 But you can run such compile-time tests only on template arguments, or on
 regular arguments of functions/constructors that are forced to run at
 compile-time. But for me this is _not_ enough. You can't implement the printf
 test example he shows (unless you turn the formatting string into a template
 argument of printf, this introduces template bloat
D has writefln which does not have printf's issues. There's no reason to add a feature for printf. When I look at my code, it is very rare that I pass arguments to functions that would benefit from compile time checking. For those that might, there's always a rethinking of the feature, such as with printf/writefln.
Nov 01 2014
next sibling parent reply "bearophile" <bearophileHUGS lycos.com> writes:
Walter Bright:

 D has writefln which does not have printf's issues. There's no 
 reason to add a feature for printf.
The feature we are talking about is not just for D writeln, as I've
tried to explain several times. And D writeln is not verified at
compile-time, which is silly for a language that tries to be reliable.

(Rust's printing function is actually a macro and it verifies the
formatting string at compile-time when possible. That's the only
good-enough option for me for a modern statically compiled language.)
 When I look at my code,
As the designer of the language you have to look at code written by other people too! Because your D code is probably very different from mine. Take a look at Haskell code, Rust code, Erlang code, and learn new idioms and new paradigms. In the long run this will help D more than fixing a couple more bugs.
 it is very rare that I pass arguments to functions that would 
 benefit from compile time checking.
To me this happens. It doesn't happen all the time. As usual it's not easy to quantify the frequency of such cases. (On the other hand your "very rare" is unsupported by statistical evidence as well. Your judgement is probably better than mine, of course and I respect your opinions).
 For those that might, there's always a rethinking of the
 feature, such as with printf/writefln.
I regard D writefln as currently _broken_. D has static typing,
templates and compile-time execution, and yet such things are not used
enough in one of the most common functions, the one that prints to the
console. Now even GCC catches many of those printf usage bugs at
compile-time.

The desire for some compile-time enforcement of some contracts is not
replaced by rethinking.

Bye,
bearophile
Nov 01 2014
next sibling parent reply "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Sun, Nov 02, 2014 at 01:25:23AM +0000, bearophile via Digitalmars-d wrote:
[...]
 I regard D writefln as currently _broken_. D has static typing,
 templates and compile time execution, and yet such things are not used
 enough in one of the most common functions, the one to print on the
 console. Now even GCC catches many of those printf usage bugs at
 compile-time.
[...]

GCC's verification of printf usage bugs is a hack. It's something
hardcoded into the compiler that only works for printf formats. You
cannot extend it to statically verify other kinds of formats you might
also want to verify at compile-time.

While writefln can be improved (Andrei has preapproved my enhancement
request to support compile-time format strings, for example), there's no
way to make such improvements to GCC's format checking short of
modifying the compiler itself.


T

-- 
Real Programmers use "cat > a.out".
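[The kind of improvement alluded to here - a compile-time format string
- can be approximated with a template parameter. A deliberately
simplified sketch, not the Phobos checker: `countSpecifiers` only counts
specifiers, ignoring their types.]

```d
import std.stdio;

// Count the format specifiers in a string at compile time; "%%" is a
// literal percent sign and is not counted. Simplified on purpose.
size_t countSpecifiers(string fmt)
{
    size_t n = 0;
    for (size_t i = 0; i + 1 < fmt.length; ++i)
        if (fmt[i] == '%')
        {
            if (fmt[i + 1] == '%') ++i; // skip the escaped percent
            else ++n;
        }
    return n;
}

// The format string is a template argument, so the check runs in CTFE.
void checkedWritefln(string fmt, Args...)(Args args)
{
    static assert(countSpecifiers(fmt) == Args.length,
                  "format string does not match argument count");
    writefln(fmt, args);
}

void main()
{
    checkedWritefln!"x = %s, y = %s"(1, 2);  // fine
    // checkedWritefln!"x = %s, y = %s"(1);  // compile-time error
}
```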
Nov 01 2014
parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 11/1/14 6:48 PM, H. S. Teoh via Digitalmars-d wrote:
 While writefln can be improved (Andrei has preapproved my enhancement
 request to support compile-time format string, for example), there's no
 way to make such improvements to GCC's format checking short of
 modifying the compiler itself.
Oh so it was you :o). Great idea, time to follow with implementation! -- Andrei
Nov 02 2014
prev sibling next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 11/1/2014 6:25 PM, bearophile wrote:
 As the designer of the language you have to look at code written by
 other people too! Because your D code is probably very different from
 mine. Take a look at Haskell code, Rust code, Erlang code, and learn
 new idioms and new paradigms. In the long run this will help D more
 than fixing a couple more bugs.

I don't see the use cases, in mine or other code. There's a reason why
people always trot out printf - it's about the only one. Designing a
language feature around printf is a mistake.
 it is very rare that I pass arguments to functions that would benefit from
 compile time checking.
To me this happens. It doesn't happen all the time. As usual it's not easy to quantify the frequency of such cases. (On the other hand your "very rare" is unsupported by statistical evidence as well. Your judgement is probably better than mine, of course and I respect your opinions).
I've considered the feature, and looked at code. It just doesn't happen very often. All features have a cost/benefit to them. The costs are never zero. Laying on more and more features of minor benefit will destroy the language, and even you won't use it.
 For those that might, there's always a rethinking of the
 feature, such as with printf/writefln.
I regard D writefln as currently _broken_.
Oh come on. writefln is typesafe and will not crash. You could also
write:

    formattedwrite!"the format string %s %d"(args ...)

if you like. The fact that nobody has bothered suggests that it doesn't
add much value over writefln().
Nov 01 2014
parent reply "bearophile" <bearophileHUGS lycos.com> writes:
Walter Bright:

 I don't see the use cases, in mine or other code.
 Designing a language feature around printf is a mistake.
I agree. Let's talk about other use cases.
 I've considered the feature, and looked at code. It just 
 doesn't happen very often.
I have written plenty of D code (perhaps 240 thousand lines so far) and
I've seen several cases where the ideas I've suggested can be useful. If
you define a rangedInt struct:

void main() {
    alias I10 = rangedInt!(int, 0, 10);
    I10[] arr = [1, 5, 12, 3];
}

My ideas should be able to spot the bug in that array literal at
compile-time. Ada2012 is able to do the same. Currently D can't do that.
The same is possible with other values, including complex ones as kinds
of game data. If I define a finite state machine, the enum precondition
is able to spot malformed machines at compile time. I am able to give
you more usage examples on request. It happens often enough to justify a
similar feature in Ada2012.

(This is the main point of this whole discussion. The other parts of
this answer are less important.)
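[rangedInt is hypothetical; a sketch of how such a type could look in
today's D follows. Note that the array-literal form above
(`I10[] arr = [1, 5, 12, 3];`) still cannot be checked; the explicit
constructor calls below are the closest current approximation.]

```d
// Hypothetical sketch of a bounds-checked integer. The bounds are
// template parameters; when the constructor runs in CTFE (e.g. for an
// enum initializer) an out-of-range value becomes a compile-time error,
// otherwise it is a run-time assert.
struct RangedInt(T, T min, T max)
{
    private T value;

    this(T v)
    {
        assert(v >= min && v < max, "value out of range");
        value = v;
    }

    alias value this; // allow implicit conversion back to T
}

void main()
{
    alias I10 = RangedInt!(int, 0, 10);

    enum ok = I10(5);          // checked by the compiler via CTFE
    // enum bad = I10(12);     // compile-time error: assert fires in CTFE
    auto arr = [I10(1), I10(5), I10(3)];
    assert(arr[1] == 5);
}
```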
 All features have a cost/benefit to them. The costs are never 
 zero. Laying on more and more features of minor benefit will 
 destroy the language, and even you won't use it.
I agree. But freezing D design is not a good idea.

(Note: so far I don't care much for C++ interoperability. And I think a
good way to face GC-derived problems is to introduce memory ownership
tracking in the type system.)
 Oh come on. writefln is typesafe and will not crash.
It shows bugs at runtime, where they can be avoided (by turning them
into compile-time ones) at essentially no cost for the programmer. For
me this is a broken/unacceptable design (and I've been saying this for
years. Glad to see the Rust people agree. I think this is an example of
this phenomenon in the programming language design world:
http://en.wikipedia.org/wiki/Punctuated_equilibrium ).
 You could also write:
 
    formattedwrite!"the format string %s %d"(args ...)
 
 if you like. The fact that nobody has bothered to suggests that 
 it doesn't add much value over writefln().
Plenty of people have bothered, there's an implementation.
 It does some flow analysis based on previous bounds checks.
I didn't know this. I'll need to do some experiments :-)
 On the other hand we could argue that bit flags are a 
 sufficiently different
 purpose to justify an annotation (as in C#) or a Phobos struct 
 (like for the
 bitfields) that uses mixin that implements them (there is a 
 pull request for
 Phobos, but I don't know how much good it is).
More annotations => more annoyance for programmers. Jonathan Blow characterizes this as "friction" and he's got a very good point. Programmers have a limited tolerance for friction, and D must be very careful not to step over the line into being a "bondage and discipline" language that nobody uses.
The annotation is used only once at the definition point of the flags. So the "annotation" here is essentially a way to tell the compiler that you don't want a regular enumeration, but a flags. It's like having two different language constructs, enums and flags. So it's a way to offer the programmer a chance to express intent and make the code more expressive/readable. And this allows to make the semantics of enums more strict. It's a win-win-win situation. The real downside is increased language complexity, but as I explained in past, well designed clean features are not the main source of complexity. And formalizing a programmer idiom is often not a bad idea.
 I don't buy the notion that more complex is better. Simple and 
 effective is the sweet spot.
I am not asking for ML-style modules in D. But ML modules aren't complex for free, they introduce important qualities in ML language and its "generics".
 It is not suboptimal.
 D is at a reasonable optimum point for this.
In my opinion it has some faults. I am not alone with this opinion. So I think it's not at the optimum.
 There are lot of tradeoffs with this, and it has been discussed 
 extensively.
I agree, but the situation is not improving much so far. I see mostly stasis on this.
 The implication that this is thoughtlessly thrown together 
 against all reason is just not correct.
I didn't say D implicit casts are randomly designed :-) I said that they are currently not very good or the best possible.
 I think the size casting that loses bits is still regarded as 
 safe.
It is memory safe.
Probably that's why there are two kinds of casts in Haskell.

Bye,
bearophile
Nov 02 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 11/2/2014 5:44 AM, bearophile wrote:
 It happens often enough to justify a similar feature in Ada2012. (This is the
 main point of this whole discussion. The other parts of this answer are less
 important).
Why aren't you using Ada if this is critical to you? (I'm not being sarcastic, this is a fair question.)
 All features have a cost/benefit to them. The costs are never zero. Laying on
 more and more features of minor benefit will destroy the language, and even
 you won't use it.
I agree. But freezing D design is not a good idea.
I'm not saying "freeze the design". I'm saying that if things are wrapped in enough bubble wrap, few programmers will want to use the language. After all, I don't wear a firesuit or helmet when I drive my car.
 (Note: so far I don't care
 for C++ interoperativity much.
I understand. But poor C++ interop is preventing quite a few people from using D. Validating printf format strings is not.
 You could also write:

    formattedwrite!"the format string %s %d"(args ...)

 if you like. The fact that nobody has bothered to suggests that it doesn't add
 much value over writefln().
Plenty of people have bothered, there's an implementation.
1. I've heard proposals, but no implementation.

2. If it exists, why aren't you using it?

3. It is obviously doable in D. No language extension required. You can
even write one and contribute it!
 More annotations => more annoyance for programmers. Jonathan Blow
 characterizes this as "friction" and he's got a very good point. Programmers
 have a limited tolerance for friction, and D must be very careful not to step
 over the line into being a "bondage and discipline" language that nobody uses.
The annotation is used only once at the definition point of the flags. So the "annotation" here is essentially a way to tell the compiler that you don't want a regular enumeration, but a flags. It's like having two different language constructs, enums and flags.
I understand that - yet another basic type in the system. I believe you majorly underestimate the costs of these things, or even assign zero cost to them.
 So it's a way to offer the programmer a chance to
 express intent and make the code more expressive/readable. And this allows to
 make the semantics of enums more strict. It's a win-win-win situation.
Again, you badly underestimate the costs or just pretend they aren't there.
 The real
 downside is increased language complexity, but as I explained in past, well
 designed clean features are not the main source of complexity.
I must ask, have you ever designed a house? Everything is a tradeoff.
Want a bigger closet? What becomes smaller as a result? Do you want a
view from the kitchen window or do you want a convenient door from the
kitchen to the garage? If you give the view to the kitchen, are you
willing to give up the view from the study? Do you accept this change
will add $10,000 to the budget? This other change will require approval
from the zoning people, causing delays. How will the position of the
windows make the house look from the street? And on and on.

Language design is the same thing. You can't just "explain" that the
solution is to make "clean" features. Like a house design, every feature
in a language interacts with every other feature.
 And formalizing a programmer idiom is often not a bad idea.
Sorry, this is just hand-waving.
 It is not suboptimal.
 D is at a reasonable optimum point for this.
In my opinion it has some faults. I am not alone with this opinion. So I think it's not at the optimum.
A reasonable optimum point is not equal to "nobody can find any fault with it". A reasonable optimum point is where the faults are more acceptable than the known alternatives.
 The implication that this is thoughtlessly thrown together against all reason
 is just not correct.
I didn't say D implicit casts are randomly designed :-) I said that they are currently not very good
I strongly reject that notion.
 or the best possible.
It's the best anyone has come up with for now.
 I think the size casting that loses bits is still regarded as safe.
It is memory safe.
Probably that's why there are two kind of casts in Haskell.
x & 0xFF loses bits as well. What do you propose to do about that flaw?
Nov 02 2014
parent reply "bearophile" <bearophileHUGS lycos.com> writes:
Walter Bright:

 Why aren't you using Ada if this is critical to you? (I'm not 
 being sarcastic, this is a fair question.)
It's not critical... Ada is not fun, there's too much new stuff to learn
and to adapt to, and I can't influence Ada evolution in any way. My hope
for a preferred future system language is not in Ada. Still, Ada
contains some nice ideas.
 I'm not saying "freeze the design".
But we are currently close to this... Lately the only ideas I've seen seriously discussed are the ones about reference counting by Andrei. Even the proposal about the tracking of memory ownership was not much discussed.
 I'm saying that if things are wrapped in enough bubble wrap, 
 few programmers will want to use the language. After all, I 
 don't wear a firesuit or helmet when I drive my car.
The new pre-condition is optional (you use it for Phobos structs, etc),
and for the programmer that later uses the function/struct/class its
usage is totally transparent, so the wearing-of-firesuit/helmet metaphor
is not good enough.

If I define a library type named Nibble that is represented with an
ubyte and accepts only values in [0, 15] with an enum precondition, I
can use it like this (once Kenji's patch to convert arrays of structs is
merged):

    Nibble[] arr = [5, 18, 3, 1];

The usage is totally transparent for the user, no firesuits or helmets
are necessary or visible (yet that code will give a compile-time error).
 I understand. But poor C++ interop is preventing quite a few 
 people from using D.
I understand and I encourage this part of D design to keep going on.
 Validating printf format strings is not.
I don't care much of printf/writef. That was just an example, and not even the most important.
 2. If it exists, why aren't you using it?
If a writeln template with compile-time format string testing goes in Phobos I'll surely use it (despite the template bloat it will cause).
 I understand that - yet another basic type in the system.
Perhaps a good enough FlagsEnum can be implemented with pure D library code.
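[A minimal sketch of that idea - all names here are hypothetical and
unrelated to the Phobos pull request mentioned earlier: wrap an enum of
single-bit members and expose only bitwise operations, so ordinary
integer arithmetic on flags no longer typechecks.]

```d
// Library-only flags type: the wrapper stores the bits privately and
// defines only |=, &=, ^= and a membership test.
struct Flags(E) if (is(E == enum))
{
    private uint bits;

    ref Flags opOpAssign(string op)(E flag)
        if (op == "|" || op == "&" || op == "^")
    {
        mixin("bits " ~ op ~ "= cast(uint) flag;");
        return this;
    }

    bool has(E flag) const { return (bits & cast(uint) flag) != 0; }
}

enum Permission : uint { read = 1, write = 2, execute = 4 }

void main()
{
    Flags!Permission p;
    p |= Permission.read;
    p |= Permission.write;
    assert(p.has(Permission.read));
    assert(p.has(Permission.write));
    assert(!p.has(Permission.execute));
}
```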
 And formalizing a programmer idiom is often not a bad idea.
Sorry, this is just hand-waving.
I have seen it's often true ::-) New languages are essentially invented for this purpose: to turn programmer idioms into compiler-enforced features with a short nice syntax. The idiom of passing a pointer + length to a C function is replaced by a much better dynamic array of D. Even OOP is an idiom that used to be implemented badly in C by lot of people. This list of examples can become very long.
 A reasonable optimum point is not equal to "nobody can find any 
 fault with it". A reasonable optimum point is where the faults 
 are more acceptable than the known alternatives.
I think the free mixing of signed and unsigned integral values is not a
good idea in D. I think that there are various ways to refine immutable
value range propagation.

Bye,
bearophile
Nov 02 2014
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 11/2/2014 12:12 PM, bearophile wrote:
 I think the free mixing of signed and unsigned integral values is not a good
 idea in D.
It's simply not workable to put a wall between them. Every proposal for it has entailed various unfortunate, ugly, and arbitrary consequences. Languages like Java have "solved" the problem by simply not having any unsigned types. That isn't going to work for a systems programming language.
Nov 02 2014
parent reply Nick Treleaven <ntrel-pub mybtinternet.com> writes:
On 02/11/2014 20:33, Walter Bright wrote:
 On 11/2/2014 12:12 PM, bearophile wrote:
 I think the free mixing of signed and unsigned integral values is not
 a good
 idea in D.
It's simply not workable to put a wall between them. Every proposal for it has entailed various unfortunate, ugly, and arbitrary consequences.
We need warnings like gcc has:

"-Wsign-compare
    Warn when a comparison between signed and unsigned values could
    produce an incorrect result when the signed value is converted to
    unsigned.

-Wconversion
    Warn for implicit conversions that may alter a value. This includes
    ... conversions between signed and unsigned, like unsigned ui = -1
    ... Warnings about conversions between signed and unsigned integers
    can be disabled by using -Wno-sign-conversion."

It is really unfortunate that D is more bug-prone than gcc in this case.

There was some promising work here:
https://github.com/D-Programming-Language/dmd/pull/1913
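[For the record, the bug class these warnings target reproduces in D: in
a mixed signed/unsigned comparison the signed operand is implicitly
converted to unsigned, so a negative value compares as a huge positive
one.]

```d
void main()
{
    int i = -1;
    uint u = 1;

    // i is converted to uint: -1 becomes uint.max, so "i < u" is false.
    assert(!(i < u));
    assert(cast(uint) i == uint.max);

    // Widening both operands to a signed type gives the intended answer.
    assert(cast(long) i < cast(long) u);
}
```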
Nov 03 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 11/3/2014 10:03 AM, Nick Treleaven wrote:
 On 02/11/2014 20:33, Walter Bright wrote:
 It's simply not workable to put a wall between them. Every proposal for
 it has entailed various unfortunate, ugly, and arbitrary consequences.
We need warnings like gcc has: "-Wsign-compare Warn when a comparison between signed and unsigned values could produce an incorrect result when the signed value is converted to unsigned. -Wconversion Warn for implicit conversions that may alter a value. This includes ... conversions between signed and unsigned, like unsigned ui = -1 ... Warnings about conversions between signed and unsigned integers can be disabled by using -Wno-sign-conversion. "
I find these to suffer from the same problems as all the proposals to
"fix" the issue - they motivate the user to "fix" them with unfortunate,
ugly, and arbitrary consequences.

We need to be very careful with the idea of "just add a warning".
Warnings are a sure sign of wishy-washy language design where the
designers cannot make up their minds, so they dump it on the user. One
person's warning becomes another person's must-fix, and the language
becomes balkanized, which is not good for portability,
comprehensibility, and best practices.
 It is really unfortunate that D is more bug-prone than gcc in this case.
I'm afraid that is a matter of opinion.
 There was some promising work here:

 https://github.com/D-Programming-Language/dmd/pull/1913
Nov 03 2014
next sibling parent reply "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Mon, Nov 03, 2014 at 04:29:17PM -0800, Walter Bright via Digitalmars-d wrote:
 On 11/3/2014 10:03 AM, Nick Treleaven wrote:
On 02/11/2014 20:33, Walter Bright wrote:
It's simply not workable to put a wall between them. Every proposal
for it has entailed various unfortunate, ugly, and arbitrary
consequences.
 We need warnings like gcc has:

 "-Wsign-compare
     Warn when a comparison between signed and unsigned values could
     produce an incorrect result when the signed value is converted to
     unsigned.

 -Wconversion
     Warn for implicit conversions that may alter a value. This includes
     ... conversions between signed and unsigned, like unsigned ui = -1
     ... Warnings about conversions between signed and unsigned integers
     can be disabled by using -Wno-sign-conversion."
I find these to suffer from the same problems as all the proposals to "fix" the issue - they motivate the user to "fix" them with unfortunate, ugly, and arbitrary consequences. We need to be very careful with the idea of "just add a warning". Warnings are a sure sign of wishy-washy language design where the designers cannot make up their mind, so they dump it on the user. One person's warning become another person's must fix, and the language becomes balkanized, which is not good for portability, comprehensibility, and best practices.
[...]

Don't add a warning, just make it outright illegal to assign signed to
unsigned and vice versa unless an explicit cast is given. Code that
*needs* to assign signed to unsigned *should* be self-documented with a
cast indicating a reinterpretation of the bit representation of the
value, and code that *unintentionally* mixes signs is buggy and
therefore *should* result in a compile error so that the programmer can
fix the problem.

There are no "unfortunate", "ugly", or "arbitrary" consequences here.
Much like the recent (or not-so-recent) change of prohibiting implicit
conversion of a pointer to bool in an if-condition, or the requirement
of a default case in a non-final switch, or so many other improvements
in D over C/C++, such a change will (1) make problematic code an error
so that it will get fixed, and (2) force users to rewrite
non-problematic code to be more self-documenting so that their intent
is clearer. Sounds like a win-win situation to me.


T

-- 
Bomb technician: If I'm running, try to keep up.
Nov 03 2014
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 11/3/2014 4:49 PM, H. S. Teoh via Digitalmars-d wrote:
 Don't add a warning, just make it outright illegal to assign signed to
 unsigned and vice versa unless an explicit cast is given.
This has been proposed before.
 There are no "unfortunate", "ugly", or "arbitrary" consequences here.
 Much like the recent (or not-so-recent) change of prohibiting implicit
 conversion of a pointer to bool in an if-condition, or the requirement
 of a default case in a non-final switch, or so many other improvements
 in D over C/C++, such a change will (1) make problematic code an error
 so that it will get fixed, and (2) force users to rewrite
 non-problematic code to be more self-documenting so that their intent is
 clearer. Sounds like a win-win situation to me.
Should be careful with analogies like that. Each case is different. Your proposal (which has been proposed many times before) requires, as you say, explicit casting. You are glossing over and dismissing the problems with explicit casts, and the problems with overloading, etc.
Nov 03 2014
parent reply Nick Treleaven <ntrel-pub mybtinternet.com> writes:
On 04/11/2014 02:00, Walter Bright wrote:
 You are glossing over and dismissing the problems with explicit casts,
 and the problems with overloading, etc.
Can't solving any overloading problem be deferred? An incomplete solution is better than nothing.

As for explicit casts, they are easily avoided using std.conv:

    uint u = unsigned(-1);
    int i = signed(uint.max);

The compiler can recommend these instead of explicit casts. Also, please note that the pull request I linked tries hard, using VRP, to avoid nagging the user with warnings/errors that it can detect are unnecessary.

Given that Andrei pre-approved the design in April last year*, it seems surprising there's not yet been a solution.

* https://issues.dlang.org/show_bug.cgi?id=259#c35
Nov 06 2014
parent Walter Bright <newshound2 digitalmars.com> writes:
On 11/6/2014 9:16 AM, Nick Treleaven wrote:
 On 04/11/2014 02:00, Walter Bright wrote:
 You are glossing over and dismissing the problems with explicit casts,
 and the problems with overloading, etc.
Can't solving any overloading problem be deferred? An incomplete solution is better than nothing.
If the overloading issue can't reasonably be dealt with, then we have more problems.
Nov 06 2014
prev sibling parent "Dominikus Dittes Scherkl" writes:
On Tuesday, 4 November 2014 at 00:51:10 UTC, H. S. Teoh via 
Digitalmars-d wrote:
 On Mon, Nov 03, 2014 at 04:29:17PM -0800, Walter Bright via 
 Digitalmars-d wrote:
 On 11/3/2014 10:03 AM, Nick Treleaven wrote:
On 02/11/2014 20:33, Walter Bright wrote:
It's simply not workable to put a wall between them. Every 
proposal
for it has entailed various unfortunate, ugly, and arbitrary
consequences.
We need warnings like gcc has: "-Wsign-compare Warn when a comparison between signed and unsigned values could produce an incorrect result when the signed value is converted to unsigned. -Wconversion Warn for implicit conversions that may alter a value. This includes ... conversions between signed and unsigned, like unsigned ui = -1 ... Warnings about conversions between signed and unsigned integers can be disabled by using -Wno-sign-conversion. "
I find these to suffer from the same problems as all the proposals to "fix" the issue - they motivate the user to "fix" them with unfortunate, ugly, and arbitrary consequences. We need to be very careful with the idea of "just add a warning". Warnings are a sure sign of wishy-washy language design where the designers cannot make up their mind, so they dump it on the user. One person's warning becomes another person's must-fix, and the language becomes balkanized, which is not good for portability, comprehensibility, and best practices.
[...] Don't add a warning, just make it outright illegal to assign signed to unsigned and vice versa unless an explicit cast is given. Code that *needs* to assign signed to unsigned *should* be self-documented with a cast indicating a reinterpretation of the bit representation of the value, and code that *unintentionally* mixes signs is buggy and therefore *should* result in a compile error so that the programmer can fix the problem. There are no "unfortunate", "ugly", or "arbitrary" consequences here. Much like the recent (or not-so-recent) change of prohibiting implicit conversion of a pointer to bool in an if-condition, or the requirement of a default case in a non-final switch, or so many other improvements in D over C/C++, such a change will (1) make problematic code an error so that it will get fixed, and (2) force users to rewrite non-problematic code to be more self-documenting so that their intent is clearer. Sounds like a win-win situation to me.
Simply change the comparison to something that always works:

    import std.traits : CommonType, isIntegral, isSigned, isUnsigned, Signed;

    /// Returns a negative value if a < b, 0 if they are equal, or a
    /// positive value if a > b.
    /// This will always yield a correct result, no matter which integral
    /// types are compared. It uses one extra comparison operation if and
    /// only if one type is signed and the other unsigned but has a bigger
    /// max. For comparison with floating point values the built-in
    /// operations have no problem, so we don't handle them here.
    auto opCmp(T, U)(const(T) a, const(U) b) pure @safe @nogc nothrow
        if (isIntegral!T && isIntegral!U)
    {
        alias C = Signed!(CommonType!(T, U));
        static if (isSigned!T && isUnsigned!U && T.sizeof <= U.sizeof)
        {
            return (b > cast(U)T.max) ? -1 : cast(C)a - cast(C)b;
        }
        else static if (isUnsigned!T && isSigned!U && T.sizeof >= U.sizeof)
        {
            return (a > cast(T)U.max) ? 1 : cast(C)a - cast(C)b;
        }
        else
        {
            // Both signed, both unsigned, or the unsigned type is smaller
            // and can therefore be safely cast to the signed type.
            return cast(C)a - cast(C)b;
        }
    }
Nov 04 2014
prev sibling parent reply Johannes Pfau <nospam example.com> writes:
Am Mon, 03 Nov 2014 16:29:17 -0800
schrieb Walter Bright <newshound2 digitalmars.com>:

 On 11/3/2014 10:03 AM, Nick Treleaven wrote:
 On 02/11/2014 20:33, Walter Bright wrote:
 It's simply not workable to put a wall between them. Every
 proposal for it has entailed various unfortunate, ugly, and
 arbitrary consequences.
We need warnings like gcc has: "-Wsign-compare Warn when a comparison between signed and unsigned values could produce an incorrect result when the signed value is converted to unsigned. -Wconversion Warn for implicit conversions that may alter a value. This includes ... conversions between signed and unsigned, like unsigned ui = -1 ... Warnings about conversions between signed and unsigned integers can be disabled by using -Wno-sign-conversion. "
I find these to suffer from the same problems as all the proposals to "fix" the issue - they motivate the user to "fix" them with unfortunate, ugly, and arbitrary consequences. We need to be very careful with the idea of "just add a warning". Warnings are a sure sign of wishy-washy language design where the designers cannot make up their mind, so they dump it on the user. One person's warning becomes another person's must-fix, and the language becomes balkanized, which is not good for portability, comprehensibility, and best practices.
Although I might agree that warnings can indicate 'wishy-washy language design', you cannot simply assume the reverse. There's obviously a problem, and just not adding warnings doesn't magically solve this 'wishy-washy language design' issue. And as long as there is no other solution, warnings are better than simply ignoring the problem. But I think it's likely this check will be implemented in Dscanner (https://github.com/Hackerpilot/Dscanner/issues/204), and in the end it doesn't really matter where it's implemented.
Nov 04 2014
parent Nick Treleaven <ntrel-pub mybtinternet.com> writes:
On 04/11/2014 14:18, Johannes Pfau wrote:
 And as long as there is no other solution warnings are better than
 simply ignoring the problem.
+1
 But I think it's likely this check will be implemented in
 Dscanner (https://github.com/Hackerpilot/Dscanner/issues/204) and
 in the end it doesn't really matter where it's implemented.
While that's better than nothing, it does matter where it's implemented. It means you only detect problems when remembering to run Dscanner. If you wire it into your build system, you've added a new dependency* to keep track of. You have to hope Dscanner continues to be maintained and kept in sync with your D compiler (not just that it builds, but it understands the latest D syntax and semantic changes). * For this reason, in practice, most small D projects won't use Dscanner regularly.
Nov 06 2014
prev sibling parent "Meta" <jared771 gmail.com> writes:
On Sunday, 2 November 2014 at 20:12:17 UTC, bearophile wrote:
 Perhaps a good enough FlagsEnum can be implemented with pure D 
 library code.
https://github.com/D-Programming-Language/phobos/pull/2058
Nov 02 2014
prev sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 11/1/14 6:25 PM, bearophile wrote:
 Walter Bright:

 D has writefln which does not have printf's issues. There's no reason
 to add a feature for printf.
The feature we are talking about is not just for D writeln, as I've tried to explain several times.
Well maybe then it's time to reassess whether the point was valid and interesting.
 And D writeln is not verified at compile-time, this is silly for a
 language that tries to be reliable.
Wasn't there a pull request that allowed `writef!"%s %s"(1, 2)` in addition to what we have now? Should be easy to integrate.
 (Rust printing function is actually
 a macro and it verifies the formatting string at compile-time when
 possible. That's the only good enough option for me for a modern
 statically compiled language).
Is that a best-effort kind of approach? If so, that would be pretty bad... Andrei
Nov 02 2014
parent reply "bearophile" <bearophileHUGS lycos.com> writes:
Andrei Alexandrescu:

 Is that a best-effort kind of approach? If so, that would be 
 pretty bad...
I don't exactly know how that Rust macro works, sorry, I am still rather ignorant about Rust. Bye, bearophile
Nov 02 2014
next sibling parent "Paulo Pinto" <pjmlp progtools.org> writes:
On Sunday, 2 November 2014 at 22:11:51 UTC, bearophile wrote:
 Andrei Alexandrescu:

 Is that a best-effort kind of approach? If so, that would be 
 pretty bad...
I don't exactly know how that Rust macro works, sorry, I am still rather ignorant about Rust. Bye, bearophile
There is a guide in case you aren't aware of it, http://doc.rust-lang.org/guide-macros.html I think the plan is to have them comparable to Lisp macros in expressiveness. -- Paulo
Nov 02 2014
prev sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 11/2/14 11:11 PM, bearophile wrote:
 Andrei Alexandrescu:

 Is that a best-effort kind of approach? If so, that would be pretty
 bad...
I don't exactly know how that Rust macro works, sorry, I am still rather ignorant about Rust.
Then don't mention it in the first place. You either make points you can stand by or don't. Don't fumble around. -- Andrei
Nov 03 2014
parent reply "bearophile" <bearophileHUGS lycos.com> writes:
Andrei Alexandrescu:

 Then don't mention it in the first place. You either make 
points you can stand by or don't. Don't fumble around. -- Andrei
There's also a third option, offer the information I have, if it's valuable, even when it's not complete because others can get interested and find the full information themselves (note: this is how science works in laboratories). Bye, bearophile
Nov 03 2014
next sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 11/3/14 12:12 PM, bearophile wrote:
 Andrei Alexandrescu:

 Then don't mention it in the first place. You either make points you
can stand by or don't. Don't fumble around. -- Andrei
There's also a third option, offer the information I have, if it's valuable, even when it's not complete because others can get interested and find the full information themselves (note: this is how science works in laboratories).
That's fine so long as it comes with clear disclosure. -- Andrei
Nov 03 2014
prev sibling parent Shriramana Sharma via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Mon, Nov 3, 2014 at 3:42 PM, bearophile via Digitalmars-d
<digitalmars-d puremagic.com> wrote:
 There's also a third option, offer the information I have, if it's valuable,
 even when it's not complete because others can get interested and find the
 full information themselves
That's true. But when making points to high-level decision makers like Andrei, it is often more productive to do the research oneself and present the result and not just chime in with pointers because they don't have the time to follow those pointers. (pun intended) -- Shriramana Sharma ஶ்ரீரமணஶர்மா श्रीरमणशर्मा
Nov 03 2014
prev sibling parent reply "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Sat, Nov 01, 2014 at 06:04:21PM -0700, Walter Bright via Digitalmars-d wrote:
 On 11/1/2014 3:26 PM, bearophile wrote:
But you can run such compile-time tests only on template arguments,
or on regular arguments of functions/constructors that are forced to
run at compile-time. But for me this is _not_ enough. You can't
implement the printf test example he shows (unless you turn the
formatting string into a template argument of printf, this introduces
template bloat
D has writefln which does not have printf's issues. There's no reason to add a feature for printf. When I look at my code, it is very rare that I pass arguments to functions that would benefit from compile time checking. For those that might, there's always a rethinking of the feature, such as with printf/writefln.
I've been thinking about refactoring writefln (well, actually std.format.formattedWrite, which includes that and more) with compile-time validated format strings. I'd say that 90% of code that uses format strings uses a static format string, so there's no reason to force everyone to use runtime format strings as is currently done.

The advantages of compile-time format strings are:

1) Compile-time verification of format arguments -- passing the wrong number of arguments or arguments of mismatching type will force compilation failure. Currently, it will compile successfully but fail at runtime.

2) Minimize dependencies: the actual formatting routines needed for a particular format string can be determined at compile-time, so that only the code necessary to format that particular format string will be referenced in the generated code. This is particularly important w.r.t. function attributes: currently, you can't use std.string.format from nothrow or @nogc code, because parts of the formatting code may throw or allocate, even if your particular format string never actually reaches those parts. Analysing the format string at compile-time would enable us to decouple the @nogc/nothrow parts of format() from the allocating/throwing parts, and only pull in the latter when the format string requires it, thereby making format() usable from nothrow/@nogc code as long as your format string doesn't require allocation or formatting code that may throw.

3) Compile-time parsing of the format string: instead of the runtime code parsing the format string every time, you do it only once at compile-time, and at runtime it's just a sequential list of calls to the respective formatting functions without the parsing overhead. This gives a slight performance boost. Granted, this is not that big a deal, but it's a nice side-benefit of having compile-time format strings.
The best part about doing this in D is that the same codebase can be used for processing both compile-time format strings and runtime format strings, so we can minimize code duplication; whereas if it were C++, you'd have to implement format() twice, once in readable code, and once as an unreadable tangle of C++ recursive templates. T -- Answer: Because it breaks the logical sequence of discussion. Question: Why is top posting bad?
Nov 01 2014
parent reply "Paolo Invernizzi" <paolo.invernizzi no.address> writes:
On Sunday, 2 November 2014 at 01:28:15 UTC, H. S. Teoh via 
Digitalmars-d wrote:
 1) Compile-time verification of format arguments -- passing the 
 wrong
 number of arguments or arguments of mismatching type will force
 compilation failure. Currently, it will compile successfully  
 but fail at
 runtime.
+1000! That would be awesome! It would be a _great_ boost in productivity during the debugging phase, or when we are under pressure and can't do a great job in code coverage. --- Paolo
Nov 02 2014
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 11/2/14 2:11 AM, Paolo Invernizzi wrote:
 On Sunday, 2 November 2014 at 01:28:15 UTC, H. S. Teoh via Digitalmars-d
 wrote:
 1) Compile-time verification of format arguments -- passing the wrong
 number of arguments or arguments of mismatching type will force
 compilation failure. Currently, it will compile successfully but fail at
 runtime.
+1000! That would be awesome! It would be a _great_ boost in productivity during the debugging phase, or when we are under pressure and can't do a great job in code coverage.
Compile-time checking of format strings is nice to have, but I hardly see it as a major productivity boost. Maybe the better effect would be it serving as an example for other libraries to follow. -- Andrei
Nov 02 2014
parent "Paolo Invernizzi" <paolo.invernizzi no.address> writes:
On Sunday, 2 November 2014 at 22:02:05 UTC, Andrei Alexandrescu 
wrote:
 On 11/2/14 2:11 AM, Paolo Invernizzi wrote:
 On Sunday, 2 November 2014 at 01:28:15 UTC, H. S. Teoh via 
 Digitalmars-d
 wrote:
 1) Compile-time verification of format arguments -- passing 
 the wrong
 number of arguments or arguments of mismatching type will 
 force
 compilation failure. Currently, it will compile successfully 
 but fail at
 runtime.
+1000! That would be awesome! It would be a _great_ boost in productivity during the debugging phase, or when we are under pressure and can't do a great job in code coverage.
Compile-time checking of format strings is nice to have, but I hardly see it as a major productivity boost. Maybe the better effect would be it serving as an example for other libraries to follow. -- Andrei
For sure it is a boost: the raising of such exceptions is not so uncommon, and I assure you that here at work it is one of the top 5 cursed things when it happens. Top of the list, for cursing, when it happens in production. --- Paolo
Nov 02 2014
prev sibling next sibling parent Rikki Cattermole <alphaglosined gmail.com> writes:
 * D does the check function thing using compile time function execution
 to check template arguments.

 * D also has full compile time function execution - it's a very heavily
 used feature. It's mainly used for metaprogramming, introspection,
 checking of template arguments, etc. Someone has written a ray tracer
 that runs at compile time in D. D's compile time execution doesn't go as
 far as running external functions in DLLs.
The video has actually got me thinking about how we can expand CTFE's capabilities while also keeping it secure-ish. As an example, having blocks such as:

    __ctfe {
        pragma(msg, __ctfe.ast.getModules());
    }

could output at compile time all the module names currently being compiled. The way I'm looking at it is that files act how they do now, but will ignore __ctfe blocks unless that file was passed with e.g. -ctfe=mymodule.d. Of course, how we get symbols etc. into it is another thing altogether. Compiler plugin? Maybe. Or we do the dirty and go for extern support.
 * D has static assert, which runs the code at compile time, too. The
 space invaders won't run at compile time, because D's compile time code
 running doesn't call external functions in DLLs. I actually suspect that
 could be a problematic feature, because it allows the compiler to
 execute user supplied code which can do anything to your system - a
 great vector for supplying malware to an unsuspecting developer. The
 ascii_map function will work, however.
You really don't want arbitrary code to run with access to system libs. Agreed. A __ctfe block could be rather interesting in that it can only exist at compile time, and it is known it will execute only when it is passed via -ctfe. It could also remove part of my need for livereload, where it creates a file specifically to tell the binary what modules are compiled in. Not to mention it gets round the whole "but how do you know it's the final compilation?" issue. In the context of dub, to make it safe by default, just require a --with-ctfe switch on e.g. build. For people like me this would be really huge. Like ridiculously huge. But at the same time, I don't believe it's a good idea to make it so easy that we have people writing games that run at compile time and are multi-threaded. Of course, this does raise one question about __traits compared to __ctfe.ast functionality. It could be a little duplicative, but at the same time, you shouldn't be able to use __ctfe.ast outside of a __ctfe block. For reference, __traits is missing a LOT, to the point that I couldn't properly create a CTFE UML generator. So, to recap: the suggestion is to allow __ctfe blocks that can run code at compile time and utilise external code such as C functions, but to use them they must be specifically enabled on the compiler. The purpose of such functionality is generation of documentation, or registration of routes without any form of explicit registration. Perhaps even going so far as to say: don't bother importing e.g. Cmsed if you use the @Route UDA on a function. This needs to be refined a lot, but it could open up a lot of opportunities.
Nov 01 2014
prev sibling parent reply "John" <john.joyus gmail.com> writes:
On Saturday, 1 November 2014 at 20:14:15 UTC, Walter Bright wrote:
 Jonathan is reinventing D with a somewhat different syntax. 
 Some points on the video:
Maybe you should have a couple of beers with him too, just like you did with Andrei a long time ago! :)
Nov 03 2014
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 11/3/14 10:10 PM, John wrote:
 On Saturday, 1 November 2014 at 20:14:15 UTC, Walter Bright wrote:
 Jonathan is reinventing D with a somewhat different syntax. Some
 points on the video:
Maybe you should have a couple of beers with him too, just like you did
I'm considering writing an open letter responding to all videos, give Jonathan a first crack at reviewing it, and then meeting for beers. -- Andrei
Nov 03 2014
parent reply "MattCoder" <nospam mail.com> writes:
On Monday, 3 November 2014 at 21:11:07 UTC, Andrei Alexandrescu
wrote:
 and then meeting for beers. -- Andrei
Be aware that he doesn't drink (alcohol) too much: https://twitter.com/Jonathan_Blow/status/515268581525700608 Matheus.
Nov 03 2014
parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 11/3/14 11:27 PM, MattCoder wrote:
 On Monday, 3 November 2014 at 21:11:07 UTC, Andrei Alexandrescu
 wrote:
 and then meeting for beers. -- Andrei
Be aware that he doesn't drink (alcohol) too much: https://twitter.com/Jonathan_Blow/status/515268581525700608 Matheus.
Thanks. Coffee is even better! -- Andrei
Nov 03 2014
prev sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 11/3/2014 12:10 PM, John wrote:
 On Saturday, 1 November 2014 at 20:14:15 UTC, Walter Bright wrote:
 Jonathan is reinventing D with a somewhat different syntax. Some points on the
 video:
Maybe you should have a couple of beers with him too, just like you did with
I'd like that. Jonathan is quite a likable fellow, and we've been exchanging some nice emails.
Nov 03 2014
prev sibling next sibling parent "ponce" <contact gam3sfrommars.fr> writes:
On Saturday, 1 November 2014 at 11:31:32 UTC, bearophile wrote:
 Third part of the "A Programming Language for Games", by 
 Jonathan Blow:
 https://www.youtube.com/watch?v=UTqZNujQOlA

 Discussions:
 http://www.reddit.com/r/programming/comments/2kxi89/jonathan_blow_a_programming_language_for_games/
Impressive work, worth watching. I find it particularly telling that string mixins + CTFE seem to have been implemented before arrays or templates. (There is even an if (__ctfe) around 1:31:00.) It will be nice to see his take on objects, arrays, constness, and how he does templates.
Nov 02 2014
prev sibling next sibling parent reply Ary Borenszweig <ary esperanto.org.ar> writes:
On 11/1/14, 8:31 AM, bearophile wrote:
 Third part of the "A Programming Language for Games", by Jonathan Blow:
 https://www.youtube.com/watch?v=UTqZNujQOlA

 Discussions:
 http://www.reddit.com/r/programming/comments/2kxi89/jonathan_blow_a_programming_language_for_games/


 His language seems to disallow comparisons of different types:

 void main() {
      int x = 10;
      assert(x == 10.0); // Refused.
 }


 I like the part about compile-time tests for printf:
 http://youtu.be/UTqZNujQOlA?t=38m6s

 The same strategy is used to validate game data statically:
 http://youtu.be/UTqZNujQOlA?t=55m12s

 A screenshot for the printf case:
 http://oi57.tinypic.com/2m5b680.jpg
That is called a linter. A general linter works on an abstract syntax tree, possibly with type annotations. His "linter" only works on functions. I guess he will extend it later, but he's not inventing anything new.

My opinion is that he knows C++ a lot and he's tired of some of its stuff, so he's inventing a language around those complaints. I don't think that's a good way to design a language. D can run (some) stuff at compile time. Crystal can run (any) stuff at compile time. Rust too. Many modern languages already understood that it is very important to run things at compile time, be it to generate code or to check things. I can understand his excitement because I got excited too when I was able to run stuff at compile time :-)

About the bytecode he generates: as someone said in the reddit discussion, having to maintain two separate language implementations (compiled and interpreted) can lead to small and subtle bugs. And running code via an interpreter is slower than compiled code, even if the interpreter is really good. So I don't think the bytecode stuff is a really good idea.

Also, why have a dynamic array as a built-in? You can implement it yourself with pointers...
Nov 02 2014
parent "thedeemon" <dlang thedeemon.com> writes:
On Sunday, 2 November 2014 at 18:04:18 UTC, Ary Borenszweig wrote:

 About the bytecode he generates: as someone said in the reddit 
 discussion, having to maintain two separate language 
 implementations (compiled and interpreted) can lead to small 
 and subtle bugs. And, running code via an intepreter is slower 
 than compiled code, even if the interpreter is really good. So 
 I don't think the bytecode stuff is a really good idea.
Well, D maintains several implementations (interpreter for CTFE and the backends), and the interpreter doesn't even use byte code so it's probably even slower. Is it really a problem? Sometimes, probably, but not too often.
Nov 04 2014
prev sibling next sibling parent "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Sun, Nov 02, 2014 at 12:49:47PM -0800, Andrei Alexandrescu via Digitalmars-d
wrote:
 On 11/1/14 6:25 PM, bearophile wrote:
[...]
And D writeln is not verified at compile-time, this is silly for a
language that tries to be reliable.
Wasn't there a pull request that allowed `writef!"%s %s"(1, 2)` in addition to what we have now? Should be easy to integrate.
There's only an enhancement request, nowhere near a PR yet. I did post some proof-of-concept code, but obviously it's nowhere near usable in actual code at the moment. [...] On Sun, Nov 02, 2014 at 12:52:30PM -0800, Andrei Alexandrescu via Digitalmars-d wrote:
 On 11/1/14 6:48 PM, H. S. Teoh via Digitalmars-d wrote:
While writefln can be improved (Andrei has preapproved my enhancement
request to support compile-time format string, for example), there's
no way to make such improvements to GCC's format checking short of
modifying the compiler itself.
Oh so it was you :o). Great idea, time to follow with implementation! -- Andrei
I'll see what I can do, but my free time is very limited these days and I'm not sure when I'll get to it. T -- "A man's wife has more power over him than the state has." -- Ralph Emerson
Nov 02 2014
prev sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2014-11-01 12:31, bearophile wrote:
 Third part of the "A Programming Language for Games", by Jonathan Blow:
 https://www.youtube.com/watch?v=UTqZNujQOlA

 Discussions:
 http://www.reddit.com/r/programming/comments/2kxi89/jonathan_blow_a_programming_language_for_games/


 His language seems to disallow comparisons of different types:

 void main() {
      int x = 10;
      assert(x == 10.0); // Refused.
 }


 I like the part about compile-time tests for printf:
 http://youtu.be/UTqZNujQOlA?t=38m6s

 The same strategy is used to validate game data statically:
 http://youtu.be/UTqZNujQOlA?t=55m12s

 A screenshot for the printf case:
 http://oi57.tinypic.com/2m5b680.jpg

 He writes a function that is called to verify at compile-time the
 arguments of another function. This does the same I am asking for a
 "static precondition", but it has some disadvantages and advantages. One
 advantage is that the testing function doesn't need to be in the same
 module as the function, unlike static enums. So you can have the
 function compiled (separated compilation). Perhaps it's time for DIP.
LLVM has a JIT compiler, LDC uses LLVM. Perhaps time to see if it's possible to use the JIT compiler for CTFE. -- /Jacob Carlborg
Nov 04 2014
parent reply "Meta" <jared771 gmail.com> writes:
On Tuesday, 4 November 2014 at 08:26:36 UTC, Jacob Carlborg wrote:
 LLVM has a JIT compiler, LDC uses LLVM. Perhaps time to see if 
 it's possible to use the JIT compiler for CTFE.
Isn't SDC already able to do JIT compilation for CTFE? I swear I've seen Deadalnix mention it before...
Nov 04 2014
next sibling parent reply "deadalnix" <deadalnix gmail.com> writes:
On Tuesday, 4 November 2014 at 08:48:13 UTC, Meta wrote:
 On Tuesday, 4 November 2014 at 08:26:36 UTC, Jacob Carlborg 
 wrote:
 LLVM has a JIT compiler, LDC uses LLVM. Perhaps time to see if 
 it's possible to use the JIT compiler for CTFE.
Isn't SDC already able to do JIT compilation for CTFE? I swear I've seen Deadalnix mention it before...
Yes, SDC use LLVM's JIT capability to do CTFE.
Nov 04 2014
parent reply Jacob Carlborg <doob me.com> writes:
On 2014-11-04 10:08, deadalnix wrote:

 Yes, SDC use LLVM's JIT capability to do CTFE.
Can't it access parts of the system that DMD's CTFE cannot? -- /Jacob Carlborg
Nov 04 2014
parent reply "deadalnix" <deadalnix gmail.com> writes:
On Tuesday, 4 November 2014 at 17:18:25 UTC, Jacob Carlborg wrote:
 On 2014-11-04 10:08, deadalnix wrote:

 Yes, SDC use LLVM's JIT capability to do CTFE.
Can't it access parts of the system that DMD's CTFE cannot?
Yes, I have yet to implement a check for ctfeability.
Nov 04 2014
parent reply Jacob Carlborg <doob me.com> writes:
On 2014-11-04 23:18, deadalnix wrote:

 Yes, I have yet to implement a check for ctfeability.
Cool, perhaps you should not add it :) -- /Jacob Carlborg
Nov 04 2014
parent "deadalnix" <deadalnix gmail.com> writes:
On Wednesday, 5 November 2014 at 07:33:37 UTC, Jacob Carlborg
wrote:
 On 2014-11-04 23:18, deadalnix wrote:

 Yes, I have yet to implement a check for ctfeability.
Cool, perhaps you should not add it :)
My plan is to add the check for CTFE-ability, but as part of semantic analysis, not in the JIT part, so it is still possible to run code without the check if wanted from 3rd-party code (a REPL, anyone?).
Nov 05 2014
prev sibling parent Jacob Carlborg <doob me.com> writes:
On 2014-11-04 09:48, Meta wrote:

 Isn't SDC already able to do JIT compilation for CTFE? I swear I've seen
 Deadalnix mention it before...
Forgot about that. -- /Jacob Carlborg
Nov 04 2014