
digitalmars.D - D vs C++

reply Caligo <iteronvexor gmail.com> writes:
This is the page that would require your attention:
http://unthought.net/c++/c_vs_c++.html

I'm going to ignore the C version because it's ugly and uses a hash.  I'm
also going to ignore the fastest C++ version because it uses a digital trie
(it's very fast but extremely memory hungry; the complexity is constant over
the size of the input and linear over the length of the word being searched
for).  I just wanted to focus on the language and the std library and not
have to implement a data structure.

Here is the C++ code:

#include <unordered_set>
#include <string>
#include <iostream>
#include <stdio.h>

int main(int argc, char* argv[]){

  using namespace std;
  char buf[8192];
  string word;
  unordered_set<string> wordcount;
  while( scanf("%s", buf) != EOF ) wordcount.insert(buf);
  cout << "Words: " << wordcount.size() << endl;

  return 0;
}

For D I pretty much used the example from TDPL.  As far as I can tell, the
associative array used is closer to std::map (or maybe std::unordered_map?)
than std::unordered_set, but I don't know of any other data structures in D
for this (I'm still learning).
Here is the D code:

import std.stdio;
import std.string;

void main(){

  size_t[string] dictionary;
  foreach(line; stdin.byLine()){
    foreach(word; splitter(strip(line))){
      if(word in dictionary) continue;
      dictionary[word.idup] = 1;
    }
  }
  writeln("Words: ", dictionary.length);
}

Here are the measurements (average of 3 runs):

C++
===
Data size: 990K with 23K unique words
real    0m0.055s
user    0m0.046s
sys     0m0.000s

Data size: 9.7M with 23K unique words
real    0m0.492s
user    0m0.470s
sys     0m0.013s

Data size: 5.1M with 65K unique words
real    0m0.298s
user    0m0.277s
sys     0m0.013s

Data size: 51M with 65K unique words
real    0m2.589s
user    0m2.533s
sys     0m0.070s


DMD D 2.051
===
Data size: 990K with 23K unique words
real    0m0.064s
user    0m0.053s
sys     0m0.006s

Data size: 9.7M with 23K unique words
real    0m0.513s
user    0m0.487s
sys     0m0.013s

Data size: 5.1M with 65K unique words
real    0m0.305s
user    0m0.287s
sys     0m0.007s

Data size: 51M with 65K unique words
real    0m2.683s
user    0m2.590s
sys     0m0.103s


GDC D 2.051
===
Data size: 990K with 23K unique words
real    0m0.146s
user    0m0.140s
sys     0m0.000s

Data size: 9.7M with 23K unique words
Segmentation fault

Data size: 5.1M with 65K unique words
Segmentation fault

Data size: 51M with 65K unique words
Segmentation fault


GDC fails for some reason with a large number of unique words and/or large
data.  Also, GDC doesn't always give correct results; the word count is
usually off by a few hundred.

D and C++ are very close. Without scanf() (reading with C++ iostreams instead) the
C++ version is almost twice as slow, and using std::unordered_set rather than std::set almost doubles the performance.

I'm interested to see a better D version than the one I posted.
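A possible tweak (untested here, and not reflected in the timings above) is to shrink the AA payload to a bool so the associative array is used purely as a set. A minimal sketch, assuming splitter/strip resolve the same way they do in the version above:

import std.stdio;
import std.string;
import std.algorithm : splitter; // whitespace splitter; its home module has moved between Phobos releases

void main(){

  // bool instead of size_t: one byte of payload per key, since only the
  // set of keys matters for the final count.
  bool[string] dictionary;
  foreach(line; stdin.byLine()){
    foreach(word; splitter(strip(line))){
      if(word in dictionary) continue;
      dictionary[word.idup] = true;
    }
  }
  writeln("Words: ", dictionary.length);
}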

P.S.
No flame wars please.
Dec 24 2010
next sibling parent reply Iain Buclaw <ibuclaw ubuntu.com> writes:
== Quote from Caligo (iteronvexor gmail.com)'s article
 This is the page that would require your attention:
 http://unthought.net/c++/c_vs_c++.html
 [...]
 GDC fails for some reason with a large number of unique words and/or large
 data.  Also, GDC doesn't always give correct results; the word count is
 usually off by a few hundred.
 [...]
System details, compiler flags, and the test data you used would be helpful. Otherwise I can't be sure what you mean by "doesn't always give correct results". :~)
Dec 24 2010
parent reply Caligo <iteronvexor gmail.com> writes:
If there are, say, 14 unique words, then the executable compiled with GDC
doesn't always output the correct result, and sometimes it gives a segmentation
fault. 14 in this case would be the correct result, and 32 would not.  It
seems to work fine with very small data sets, but things start to go wrong
with larger ones.

As for the system, it's a 64-bit GNU/Linux, no multilib.  What else do you
need?

For GDC I've used gcc-4.4.5 and the following compiler flags:
'gdc -O2 -o count_d count.d'

I can't post the data because it's too large, but it shouldn't be too
difficult to generate it. A 1 MB text file should work.

On Fri, Dec 24, 2010 at 6:49 PM, Iain Buclaw <ibuclaw ubuntu.com> wrote:

 == Quote from Caligo (iteronvexor gmail.com)'s article
 [...]
System details, compiler flags, and the test data you used would be helpful. Otherwise I can't be sure what you mean by "doesn't always give correct results". :~)
Dec 24 2010
parent Iain Buclaw <ibuclaw ubuntu.com> writes:
== Quote from Caligo (iteronvexor gmail.com)'s article
 If there are, say, 14 unique words then the executable compiled with GDC
 doesn't always output the correct result and sometimes it gives segmentation
 fault. 14 in this case would be the correct result, and 32 would not.  It
 seems to work fine with very small data sets, but things start to go wrong
 with larger ones.
 As for the system, it's a 64-bit GNU/Linux, no multilib.  What else do you
 need?
 For GDC I've used gcc-4.4.5 and the following compiler flags:
 'gdc -O2 -o count_d count.d'
 I can't post the data because it's too large, but it shouldn't be too
 difficult to generate it. 1MB of text file should work.
As far as I'm aware, something either GC or TLS related is most likely to be the problem in the D runtime for you. The 64-bit runtime has been a bit flimsy in D2 since circa 2.040. No one's yet bisected the repository, probably expecting me to prod and find the bad commit merged from DMD using whatever 64-bit hardware I don't have. :~)
Dec 25 2010
prev sibling next sibling parent reply bearophile <bearophileHUGS lycos.com> writes:
Caligo:

 I'm going to ignore the C version because it's ugly and uses a hash.
Some of the others use a hash too. You can write nice-looking code in C as well, but you need more skill :-)
 I'm also going to ignore the fastest C++ version because it uses a digital trie
 (it's very fast but extremely memory hungry; the complexity is constant over
 the size of the input and linear over the length of the word being searched
 for).
The fastest C++ version uses more memory, but sometimes if you need more performance it may become the right choice.
 I just wanted to focus on the language and the std library and not
 have to implement a data structure.
One of the few advantages of D over Python is that in D you are able to implement efficient and custom data structures without leaving the D language itself :-)
 For D I pretty much used the example from TDPL.  As far as I can tell, the
 associate array used is closer to std::map (or maybe std::unordered_map ?)
D built-in AAs are a hash map, but they use comparisons to resolve collisions. This makes D AAs strong against malicious attacks. Python dicts are faster but they are a pure hash map.
 than std::unordered_set, but I don't know of any other data structures in D
 for this (I'm still learning).
An unordered_set is not present in std.collections yet.
 Here are the measurements (average of 3 runs):
Your timings lack information about the CPU, compilation switches used, and C++ compiler version used. Are those really averages?
 I'm interested to see a better D version than the one I posted.
If you want to use only the built-ins and the std lib, I think you can't improve your code a lot. To go faster you need to go lower level.

Regarding your code, break and continue statements are not Structured Programming, so it's better to avoid them when possible. I'd write your code like this:

import std.stdio, std.string;

void main() {
    size_t[string] dictionary;
    foreach (line; stdin.byLine())
        foreach (word; line.strip().splitter())
            if (word !in dictionary)
                dictionary[word.idup] = 1;
    writeln("Words: ", dictionary.length);
}

This Python2 version is as fast as the D-DMD version with an 8.7 MB file that contains about 120_000 words:

from sys import stdin
import psyco

def main():
    dictionary = {}
    for line in stdin:
        for word in line.split():
            if word not in dictionary:
                dictionary[word] = 1
    print "Words:", len(dictionary)

psyco.bind(main)
main()

Bye,
bearophile
Dec 24 2010
next sibling parent Caligo <iteronvexor gmail.com> writes:
On Sat, Dec 25, 2010 at 12:21 AM, bearophile <bearophileHUGS lycos.com>wrote:

 Caligo:


 Here are the measurements (average of 3 runs):
 Your timings lack information about the CPU, compilation switches used, and
 C++ compiler version used. Are those really averages?
I used gcc version 4.4.4 to compile my C++ code. The only switch to optimize
that I used is '-O2'. Same for GDC, but GDC was compiled with gcc 4.4.5. And
yes, those are averages. For DMD I used 'dmd -release count.d' to compile.

And here is my CPU info (processor 1 is identical to processor 0):

processor       : 0
vendor_id       : AuthenticAMD
cpu family      : 15
model           : 67
model name      : AMD Athlon(tm) 64 X2 Dual Core Processor 6400+
stepping        : 3
cpu MHz         : 3214.495
cache size      : 1024 KB
physical id     : 0
siblings        : 2
core id         : 0
cpu cores       : 2
fpu             : yes
fpu_exception   : yes
cpuid level     : 1
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov
                  pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext
                  fxsr_opt rdtscp lm 3dnowext 3dnow rep_good extd_apicid pni
                  cx16 lahf_lm cmp_legacy svm extapic cr8_legacy
bogomips        : 6428.99
TLB size        : 1024 4K pages
clflush size    : 64
cache_alignment : 64
address sizes   : 40 bits physical, 48 bits virtual
power management: ts fid vid ttp tm stc
Dec 24 2010
prev sibling next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 12/25/10 12:21 AM, bearophile wrote:
 D built-in AAs are a hash map, but they use comparisons to resolve collisions.
This makes D AAs strong against malicious attacks. Python dicts are faster but
they are a pure hash map.
What is a pure hash map?
 than std::unordered_set, but I don't know of any other data structures in D
 for this (I'm still learning).
A unordered_set is not present in stc.collections yet.
Well the built-in AAs are unordered sets.
 Regarding your code, break and continue statements are not Structured
Programming, so it's better to avoid them when possible.
I guess I'd be the guilty one :o). I like break and continue, not to mention scope, and I think structured programming is all too often noncritically accepted as "good". Anyway, this is minutia not to bother a new member with!

Andrei
Dec 25 2010
next sibling parent reply bearophile <bearophileHUGS lycos.com> writes:
Andrei:

 What is a pure hash map?
I meant that to implement the dict protocol in Python you just need to implement an equality method and a __hash__ method, because the collisions are not managed with a tree as in D. With "pure hash map" I meant that it doesn't contain trees and it doesn't need less-than comparisons.
 Well the built-in AAs are unordered sets.
Built-in AAs are not sets because they force you to keep a value associated with each key (so they use more memory than a set), their syntax requires a value for each key, and they don't support the normal set operations you expect from a set (intersection, union, and so on; see the operations provided by the Python built-in sets).
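As a rough illustration (hypothetical helpers, not anything in Phobos), union and intersection can be emulated on top of the keys of a bool[T] used as a set:

import std.stdio;

// Hypothetical helpers (not Phobos): emulate set union and intersection
// using the keys of bool[T] associative arrays treated as sets.
bool[T] setUnion(T)(bool[T] a, bool[T] b)
{
    bool[T] result;
    foreach (k; a.byKey) result[k] = true;
    foreach (k; b.byKey) result[k] = true;
    return result;
}

bool[T] setIntersection(T)(bool[T] a, bool[T] b)
{
    bool[T] result;
    foreach (k; a.byKey)
        if (k in b)
            result[k] = true;
    return result;
}

void main()
{
    bool[string] a = ["x": true, "y": true];
    bool[string] b = ["y": true, "z": true];
    writeln(setUnion(a, b).keys);        // x, y, z in some order
    writeln(setIntersection(a, b).keys); // just y
}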
 I guess I'd be the guilty one :o). I like break and continue, not to 
 mention scope, and I think structured programming is all too often 
 noncritically as "good".
Structured programming is good because it usually helps code readability. But it's not Verb, so in some less common cases a goto, break or continue help improve the code. Misra C Rules totally forbid break and continue, but more human coding guidelines just suggest to avoid them when possible, they are not evil. In the code shown on the original post the continue was worsening the code with no gain.
 Anyway, this is minutia not to bother a new member with!
What's the right moment to bother people with a good way to program? I think it's always the right time. Bye, bearophile
Dec 25 2010
next sibling parent reply spir <denis.spir gmail.com> writes:
On Sat, 25 Dec 2010 11:08:17 -0500
bearophile <bearophileHUGS lycos.com> wrote:

 Well the built-in AAs are unordered sets.

 Built-in AAs are not sets because they force you to keep a value associated
 with each key (so they use more memory than a set) and their syntax requires
 a value for each key, and they don't support the normal set operations you
 expect from a set (intersection, union, and so on. See the operations done
 by the Python built-in sets).
See https://bitbucket.org/denispir/denispir-d/src/b543fb352803/collections.d for a prototype Set type based on D AAs (just like python's).

Denis
-- -- -- -- -- -- --
vit esse estrany ☣

spir.wikidot.com
Dec 25 2010
parent bearophile <bearophileHUGS lycos.com> writes:
spir:

 See https://bitbucket.org/denispir/denispir-d/src/b543fb352803/collections.d
for a prototype Set type based on D AAs (just like python's).
I have a Python-like set in my dlibs1 too :-) But while the keys of an AA are a set, the AA itself is not a set.

Bye,
bearophile
Dec 25 2010
prev sibling next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 12/25/10 10:08 AM, bearophile wrote:
 Andrei:

 What is a pure hash map?
I meant that to implement the dict protocol in Python you just need to implement an equality and an __hash__ methods, because the collisions are not managed with a tree as in D. With pure hash map I meant that it doesn't contain trees and it doesn't need less-than comparisons.
This behavior was changed a few releases ago to use singly-linked lists for resolving collisions.

Andrei
Dec 25 2010
next sibling parent bearophile <bearophileHUGS lycos.com> writes:
Andrei:

 This behavior has been changed since a few releases ago to use 
 singly-linked lists for solving collisions.
I didn't know it, it seems I miss changes all the time :-) This page says: http://www.digitalmars.com/d/2.0/hash-map.html
Classes can be used as the KeyType. For this to work, the class definition must
override the following member functions of class Object:
* hash_t toHash()
* bool opEquals(Object)
* int opCmp(Object)

So that page now needs to list just toHash and opEquals; there's no need for opCmp to build an unsorted linked list. And now D AAs are about as fragile as Python dicts, because AAs can degenerate to O(n) behaviour. Are D AAs on average faster with this change?

Bye,
bearophile
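As a rough, present-day illustration (the exact signatures required have changed since that page was written), a struct key just needs toHash and opEquals:

import std.stdio;

struct Point
{
    int x, y;

    // Custom hash: combines the two fields.
    size_t toHash() const nothrow @safe
    {
        return cast(size_t)(x * 31 + y);
    }

    // Equality is used to resolve hash collisions.
    bool opEquals(const Point rhs) const
    {
        return x == rhs.x && y == rhs.y;
    }
}

void main()
{
    string[Point] names;
    names[Point(1, 2)] = "corner";
    writeln(Point(1, 2) in names ? "found" : "missing"); // prints "found"
}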
Dec 25 2010
prev sibling parent spir <denis.spir gmail.com> writes:
On Sat, 25 Dec 2010 11:46:43 -0600
Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> wrote:

 On 12/25/10 10:08 AM, bearophile wrote:
 Andrei:

 What is a pure hash map?
 I meant that to implement the dict protocol in Python you just need to
 implement an equality method and a __hash__ method, because the collisions
 are not managed with a tree as in D. With pure hash map I meant that it
 doesn't contain trees and it doesn't need less-than comparisons.

 This behavior was changed a few releases ago to use singly-linked lists
 for resolving collisions.
Did not know that. This change certainly simplifies implementation.

Then, with a slight change, it would probably be possible to implement an ordered AA, like in Ruby: http://www.igvita.com/2009/02/04/ruby-19-internals-ordered-hash/. The change is to add a series of 'next' pointers to keep insertion order (for iteration only). The cost in time is negligible (and happens only at insertion time); the cost in space is one pointer per node.

(I guess it was not possible in Python because their dict 'buckets' are not linked lists.)

denis
-- -- -- -- -- -- --
vit esse estrany ☣

spir.wikidot.com
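A rough sketch of that idea in D (a hypothetical OrderedAA wrapper; this is not how Ruby or the D runtime actually implements it):

import std.stdio;

// Hypothetical sketch: an AA wrapper that also records insertion order,
// so iteration replays keys in the order they were added.
struct OrderedAA(K, V)
{
    V[K] data;
    K[] order;          // one extra slot per key, as described above

    void opIndexAssign(V value, K key)
    {
        if (key !in data)
            order ~= key;
        data[key] = value;
    }

    V opIndex(K key) { return data[key]; }

    int opApply(int delegate(ref K, ref V) dg)
    {
        foreach (k; order)
        {
            auto v = data[k];
            if (auto r = dg(k, v)) return r;
        }
        return 0;
    }
}

void main()
{
    OrderedAA!(string, int) m;
    m["banana"] = 1;
    m["apple"] = 2;
    foreach (k, v; m)
        writeln(k, " = ", v);   // banana first, then apple: insertion order
}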
Dec 25 2010
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
bearophile wrote:
 Structured programming is good because it usually helps code readability. But
 it's not Verb, so in some less common cases a goto, break or continue help
 improve the code.
 
 Misra C Rules totally forbid break and continue, but more human coding
 guidelines just suggest to avoid them when possible, they are not evil.
I thought the idea that break and continue were bad died about 25 years ago. Pascal didn't allow them, and pretty much everyone hated the workaround of having to use flag variables.
Dec 25 2010
next sibling parent reply spir <denis.spir gmail.com> writes:
On Sat, 25 Dec 2010 14:03:42 -0800
Walter Bright <newshound2 digitalmars.com> wrote:

 bearophile wrote:
 Structured programming is good because it usually helps code readability. But
 it's not Verb, so in some less common cases a goto, break or continue help
 improve the code.

 Misra C Rules totally forbid break and continue, but more human coding
 guidelines just suggest to avoid them when possible, they are not evil.
 I thought the idea that break and continue were bad died about 25 years ago.
 Pascal didn't allow them, and pretty much everyone hated the workaround of
 having to use flag variables.
Sure, they're both equivalent to a goto. But what they mean makes sense, and it's clear. As you say, workarounds have always been ugly. For me, _that_ is important.

Denis
-- -- -- -- -- -- --
vit esse estrany ☣

spir.wikidot.com
Dec 25 2010
parent reply Daniel Gibson <metalcaedes gmail.com> writes:
On 26.12.2010 01:36, spir wrote:
 On Sat, 25 Dec 2010 14:03:42 -0800
 Walter Bright<newshound2 digitalmars.com>  wrote:

 bearophile wrote:
 Structured programming is good because it usually helps code readability. But
 it's not Verb, so in some less common cases a goto, break or continue help
 improve the code.

 Misra C Rules totally forbid break and continue, but more human coding
 guidelines just suggest to avoid them when possible, they are not evil.
I thought the idea that break and continue were bad died about 25 years ago. Pascal didn't allow them, and pretty much everyone hated the workaround of having to use flag variables.
Sure, they're both equivalent to a goto.
I don't think so. They're much cleaner and more readable than goto (they just restart or jump behind the current loop or, if you use them with labels, an outer loop - IMHO that's quite different from jumping to arbitrary labels). I guess this is the reason why break and continue are supported in Java but goto isn't.
 But what they mean makes sense, and it's clear. As you say, workarounds have
always been ugly. For me, _that_ is important.
I agree.
 Denis
Cheers, - Daniel
Dec 25 2010
parent reply Walter Bright <newshound2 digitalmars.com> writes:
Daniel Gibson wrote:
 I don't think so. They're much more clean and readable than goto (they 
 just restart/jump behind the current loop or, if you use them with 
 labels, an outer loop - IMHO that's quite different from jumping to 
 arbitrary labels).
 I guess this is the reason why break and continue are supported in Java 
 but goto isn't.
Use of break and continue guarantees two important characteristics over goto:

1. initialization of a variable cannot be skipped
2. loops can only have one entry point

The latter is called having a "reducible flow graph", which is an important requirement for many optimizations. (1), of course, can hide an ugly problem.
Dec 26 2010
parent reply Jonathan M Davis <jmdavisProg gmx.com> writes:
On Sunday 26 December 2010 02:38:53 Walter Bright wrote:
 Daniel Gibson wrote:
 I don't think so. They're much more clean and readable than goto (they
 just restart/jump behind the current loop or, if you use them with
 labels, an outer loop - IMHO that's quite different from jumping to
 arbitrary labels).
 I guess this is the reason why break and continue are supported in Java
 but goto isn't.
Use of break and continue guarantee two important characteristics over goto: 1. initialization of a variable cannot be skipped 2. loops can only have one entry point The latter is called having a "reducible flow graph", which is an important requirement for many optimizations. (1), of course, can hide an ugly problem.
Essentially any conditional or loop construct translates to jump commands in assembly. So, in that sense, _everything_ is a goto. However, by using if statements and for loops and the like, that jumping around is tightly controlled and doesn't make code hard to read or understand. Even statements such as goto case 2; are highly controlled in comparison to jumps in assembly code.

I do think that code should be written in a manner which is clear and does not have undue jumping around, but what we have generally leads to quite readable and understandable code unless someone is incompetent or purposely trying to be obtuse. The original complaints put forth about goto some 30+ years ago apply to a _very_ different time and a _very_ different coding style than _anything_ we see now.

Personally, I use break and continue all the time. Does my average loop use them? No. But they're frequently useful, and really the only way to get around them is to use extra variables in conditions, which just makes code _harder_ to read and more bug-prone. Sure, there's plenty of poorly-written code out there and it makes sense to complain about it when you have to deal with it, but that doesn't mean that the language constructs are bug-prone or poorly-designed, just that they're poorly used. _Any_ language construct can be abused or misused. break and continue are fine constructs, and the labeled break and continue in D just make them that much better, and I think that that extra ability actually _reduces_ bugs, because it allows you to simplify the code in many multi-level loops.

- Jonathan M Davis
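For illustration, a minimal sketch of a labeled break in D:

import std.stdio;

void main()
{
    int[][] rows = [[1, 2, 3], [4, -1, 6], [7, 8, 9]];

    // Labeled break: leave both loops as soon as a negative entry is found,
    // without a flag variable.
    outer:
    foreach (r, row; rows)
    {
        foreach (c, value; row)
        {
            if (value < 0)
            {
                writeln("negative at row ", r, ", column ", c);
                break outer;
            }
        }
    }
}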
Dec 26 2010
parent bearophile <bearophileHUGS lycos.com> writes:
Jonathan M Davis:

 and the labeled break and continue in D just makes them that 
 much better,
I agree that D's labeled break and continue are nice to have; I miss them in Python :-) In Python, where you need a labeled break you sometimes have to replace the whole block of code with a function and replace the labeled break with a return statement (on the other hand some people say this forces you to write shorter functions and more readable code).

Bye,
bearophile
Dec 26 2010
prev sibling parent reply bearophile <bearophileHUGS lycos.com> writes:
Walter Bright:

 I thought the idea that break and continue were bad died about 25 years ago. 
 Pascal didn't allow them, and pretty much everyone hated the workaround of 
 having to use flag variables.
You need to add some shades of grey to your palette. break, continue and goto are bad, and it's better to limit their usage. But unless you are using very strict coding guidelines, you can use them where not using them produces worse code. Computed gotos are worse than normal gotos, but they too are sometimes useful (right now there is a person asking for them in D.learn).

Bye,
bearophile
Dec 25 2010
next sibling parent reply Don <nospam nospam.com> writes:
bearophile wrote:
 Walter Bright:
 
 I thought the idea that break and continue were bad died about 25 years ago. 
 Pascal didn't allow them, and pretty much everyone hated the workaround of 
 having to use flag variables.
You need to add some shades of grey to your palette. break, continue and goto are bad,
Why are break and continue bad? I haven't heard anyone make that claim for a very long time. BTW everyone I've known who thought they were evil, also wanted to ban multiple return statements in a single function. Most of them didn't like case statements, either.
Dec 25 2010
next sibling parent bearophile <bearophileHUGS lycos.com> writes:
Don:

Why are break and continue bad?<
Using fuzzy logic they are "25% bad" :-) There is an interesting discussion on the C2 wiki, they mostly agree with you: http://c2.com/cgi/wiki?InternalLoopExitsAreOk
I haven't heard anyone make that claim for a very long time.<
If 50 years from now people are using the ZZ language that (like D) is essentially C plus some other things, then advice about C-class languages from 1980 will still be mostly good.
BTW everyone I've known who thought they were evil, also wanted to ban multiple
return statements in a single function.<
Multiple return statements are OK iff the function/method is not too long (or complex). There are less common situations where even in long functions multiple returns are an improvement.
Most of them didn't like case statements, either.<
They are good, if fall through is not the default and if there is some built-in way to make sure you have considered all cases. Bye, bearophile
Dec 25 2010
prev sibling parent reply foobar <foo bar.com> writes:
Don Wrote:

 bearophile wrote:
 Walter Bright:
 
 I thought the idea that break and continue were bad died about 25 years ago. 
 Pascal didn't allow them, and pretty much everyone hated the workaround of 
 having to use flag variables.
You need to add some shades of grey to your palette. break, continue and goto are bad,
Why are break and continue bad? I haven't heard anyone make that claim for a very long time. BTW everyone I've known who thought they were evil, also wanted to ban multiple return statements in a single function. Most of them didn't like case statements, either.
Isn't this subjective, depending on what you compare with and also on the use case?

Structured programming is considered a huge improvement over gotos and spaghetti code, and I thought that OO is considered better than structured programming. Isn't using polymorphism usually considered better than explicitly maintaining a switch statement?

Of course, all of that depends on your use case and on the programmer. For instance a compiler writer may make better use of gotos compared to structured programming, while an average programmer should stick with structured programming to avoid bugs.

My personal opinion is that D should not limit programming styles and should allow goto/break/continue/etc. Of course that doesn't mean that the official D style guide should recommend writing long functions with lots of control statements. :)
Dec 26 2010
next sibling parent bearophile <bearophileHUGS lycos.com> writes:
foobar:

 Structured programming is considered a huge improvement over gotos and
spaghetti code and I thought that OO is considered better than Structured
programming.
Unfortunately both biological evolution and software evolution are not a March of Progress :-) So OOP doesn't automatically mean "better". Well written OO code is better for certain kinds of large programs. There are other situations where OO leads to equally good or worse code. In some situations in D2 I prefer to use a functional style with mostly pure functions instead of OOP.
 Isn't using polymorphism considered usually better than explicitly maintaining
a switch statement?<
This is sometimes right, expecially if your compiler is able to perform devirtualization, or if that part of your code doesn't need max performance. Sometimes replacing a little switch with a lot of polymorphic code doesn't make the code simpler to understand. Bye, bearophile
Dec 26 2010
prev sibling parent Walter Bright <newshound2 digitalmars.com> writes:
foobar wrote:
 Isn't this subjective and depends on what you compare with and also depends
 on use cases?
I think we should be careful about deciding what constructs are "bug prone" and which aren't. My attitudes on it are based on being a programmer for 35 years - my own experience with bugs, working with programming teams in companies, doing compiler tech support, working on safety critical systems, looking at bug reports for various systems, and talking with professional programmers.

Break and continue have never been on the radar as being a source of confusion or bugs. On the other hand, things like:

    for (i = 0; i < 10; i++);
        foo();

do show up now and then, and cause much grief. I'd much rather address issues that are known to cause problems. Bearophile's post about break and continue being bug prone is the first complaint I've heard about it since around 1980. And C/C++/Java/etc programmers are *not* shy about complaining about things they think are causing them grief.
Dec 26 2010
prev sibling next sibling parent spir <denis.spir gmail.com> writes:
On Sun, 26 Dec 2010 00:45:42 -0500
bearophile <bearophileHUGS lycos.com> wrote:

 You need to add some shades of grey to your palette. break, continue and
 goto are bad, and it's better to limit their usage.
You need to soften your rocks of certitude, Bearophile.

    cycle (action0)
        if condition
            continue/break
        action

is a common scheme. The existence of continue/break allows simple, clear, correct expression of this scheme. By "correct", I mean they mirror what they mean. Without them, we are left with _wrong_, or even more wrong, convolutions that do not easily show the sense of the code.

Either there is no structured programming idiom for that, or it is precisely using those magic keywords. Lua has no continue: it is a pain and a constant request (even if far less often needed than break).

But I agree the originally posted code did not require it.

Denis
-- -- -- -- -- -- --
vit esse estrany ☣

spir.wikidot.com
Dec 26 2010
prev sibling parent Bruno Medeiros <brunodomedeiros+spam com.gmail> writes:
On 26/12/2010 05:45, bearophile wrote:
 Walter Bright:

 I thought the idea that break and continue were bad died about 25 years ago.
 Pascal didn't allow them, and pretty much everyone hated the workaround of
 having to use flag variables.
You need to add some shades of grey to your palette. break, continue and goto are bad, and it's better to limit their usage.
Ehh?! Awww man, not this crap again...
(http://www.digitalmars.com/d/archives/digitalmars/D/The_singleton_design_pattern_in_D_C_and_Java_113474.html#N115036)

-- 
Bruno Medeiros - Software Engineer
Jan 27 2011
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
Andrei Alexandrescu wrote:
 I guess I'd be the guilty one :o). I like break and continue,
If you must break the loop, do it to seize power; in all other cases continue. -- Julius C'ster
Dec 25 2010
parent reply dolive <dolive89 sina.com> writes:
Walter Bright Wrote:

 Andrei Alexandrescu wrote:
 I guess I'd be the guilty one :o). I like break and continue,
If you must break the loop, do it to seize power; in all other cases continue. -- Julius C'ster
How much longer can D be used for business development? 10 years or 20 years? Phobos, together with the D language, is too complex: only for great masters to use, not for general programmers. Does the D language still make sense to you?
Dec 25 2010
parent reply dolive <dolive89 sina.com> writes:
dolive Wrote:

 Walter Bright Wrote:
 
 Andrei Alexandrescu wrote:
 I guess I'd be the guilty one :o). I like break and continue,
If you must break the loop, do it to seize power; in all other cases continue. -- Julius C'ster
d How much longer can be used for business development? 10 years or 20 years? phobos to engage in complex with d language, only for the great master to use, not for general programmers. d language still make sense to you?
template+range is like a heap of dog feces; it becomes increasingly hard to understand.
Dec 25 2010
parent dolive <dolive89 sina.com> writes:
dolive Wrote:

 dolive Wrote:
 
 Walter Bright Wrote:
 
 Andrei Alexandrescu wrote:
 I guess I'd be the guilty one :o). I like break and continue,
If you must break the loop, do it to seize power; in all other cases continue. -- Julius C'ster
d How much longer can be used for business development? 10 years or 20 years? phobos to engage in complex with d language, only for the great master to use, not for general programmers. d language still make sense to you?
template+range Like a heap dog feces, become increasingly do not understand.
D's naming philosophy should learn from Java. Java may have many faults, but the benefit is that the whole world is able to understand its names. Abbreviations or odd words are not completely understandable to non-native English speakers.
Dec 25 2010
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
bearophile wrote:
 One of the few advantages of D over Python is that in D you are able to
 implement efficient and custom data structures without leaving the D language
 itself :-)
few?

How about:

1. scope guard
2. multithreaded programming (the GIL doesn't count)
3. inline assembler
4. immutability
5. purity
6. far faster performance
7. RAII
8. direct interface to C
9. templates
10. CTFE
11. generative programming
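As a quick illustration of item 1, a minimal scope-guard sketch:

import std.stdio;

void main()
{
    writeln("acquire resource");
    scope(exit)    writeln("release resource");   // always runs when the scope exits
    scope(failure) writeln("only on exception");  // skipped on a normal return

    writeln("use resource");
}   // prints: acquire resource, use resource, release resource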
Dec 26 2010
next sibling parent reply Seth Hoenig <seth.a.hoenig gmail.com> writes:
This is certainly a personal preference, but I would add static typing to
that list.



On Sun, Dec 26, 2010 at 2:06 PM, Walter Bright
<newshound2 digitalmars.com>wrote:

 bearophile wrote:

 One of the few advantages of D over Python is that in D you are able to
 implement efficient and custom data structures without leaving the D
 language
 itself :-)
few? How about: 1. scope guard 2. multithreaded programming (the GIL doesn't count) 3. inline assembler 4. immutability 5. purity 6. far faster performance 7. RAII 8. direct interface to C 9. templates 10. CTFE 11. generative programming
Dec 26 2010
parent reply Gour <gour atmarama.net> writes:
On Sun, 26 Dec 2010 14:33:25 -0600
 "Seth" =3D=3D Seth Hoenig <seth.a.hoenig gmail.com> wrote:
Seth> This is certainly a personal preference, but I would add static Seth> typing to that list. +1 --=20 Gour | Hlapicina, Croatia | GPG key: CDBF17CA ----------------------------------------------------------------
Dec 27 2010
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 12/27/10 2:19 AM, Gour wrote:
 On Sun, 26 Dec 2010 14:33:25 -0600
 "Seth" == Seth Hoenig<seth.a.hoenig gmail.com>  wrote:
 Seth> This is certainly a personal preference, but I would add static
 Seth> typing to that list.

 +1
Conversely, I wonder how we can improve the dynamic typing capabilities of D. For example, I'd be very interested in hearing experience with using Variant almost exclusively as the type of choice.

Andrei
Dec 27 2010
next sibling parent Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
I don't know about Variant, but D's auto is a real time-saver for me,
especially when I'm converting some C code to D (app code, not
libraries). It almost feels like coding in a dynamic language.

On 12/27/10, Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> wrote:
 On 12/27/10 2:19 AM, Gour wrote:
 On Sun, 26 Dec 2010 14:33:25 -0600
 "Seth" == Seth Hoenig<seth.a.hoenig gmail.com>  wrote:
Seth> This is certainly a personal preference, but I would add static Seth> typing to that list. +1
Conversely, I wonder how we can improve the dynamic typing capabilities of D. For example, I'd be very interested in hearing experience with using Variant almost exclusively as the type of choice. Andrei
Dec 27 2010
prev sibling next sibling parent reply bearophile <bearophileHUGS lycos.com> writes:
Andrei:

 Conversely, I wonder how we can improve the dynamic typing capabilities of D.
On the Lambda the Ultimate blog I have found a few interesting comments about dynamic typing (all the text below are quotations):


Dim objDoc
objDoc = CreateObject("Word.Application")
objDoc.Open(strFilename)

That's enough VBScript. The point here is that different versions of Word have different signatures for the "Open" method (the number of parameters increases with each successive version). If the parameters are "optional", then VBScript lets you get away with not supplying them; all bar the first parameter are optional. There is no common interface or base class that all the different versions implement, so we are left to do one of two things:

1. Simulate dynamic dispatch via reflection, or
2. Write adaptor classes for each of the different versions of Word we might find on the target system.

[...]

Frank goes a bridge too far when he says "Static typing is superior to dynamic typing in every way". Indeed it is superior, but only asymptotically, for software big enough. For tiny scripts, there is little advantage to justify typing more letters.

About the VBScript example: how about a static-typed language bundled with a library which is as powerful as the VBScript library? So we could type, with a few additional keystrokes:

COMObj objDoc = COMObj.create("Word.Application");
objDoc.invoke("Open", new COMData[] { new COMString(strFilename) });

Alas, the world is not so easy. What good are static types for OLE Automation? After all it is a dynamically typed invocation, so using it annihilates all the advantages of static typing (unless you have some additional type info). Similarly with SQL (dynamically typed), XML (dynamically typed mostly), and most external technologies -- even if they are statically typed, their type system is incompatible with yours or at least type info is unavailable at compile time.

Therefore, when writing pieces of code which merely glue together some external technologies (and certainly over 95% of software produced falls into this category), static typing is badly hindered. But if your software really does something on its own, a static type system is your friend.

--------------

There you go again. I think using the term "relax" to talk about increasing the expressivity of typing is exactly the wrong way to think about it. It's not about relaxing type systems so we can annotate more untyped programs; it's about making type systems more rigorous so we can express more dynamic behaviors.

That's precisely one of the points of disagreement. You're taking the above point as an article of faith, since you can't point to a type system that provides all the capabilities of dynamically-checked languages, including e.g. runtime evolvability. I'm not saying you're wrong, necessarily - but how many years will it be before you can demonstrate that you're right, with an actual language?

    The more you relax something, the less you can say about it.

Exactly. And that's a feature - when you're prototyping, for example, or when you're developing a system whose specification is evolving as you develop it, and there are many aspects of the system that you can't say much about. You earlier mentioned the idea of types as a skeleton for an application - well, in the real world, having an application's skeleton be very flexible, even weakly defined, can be an enormous asset!

The idea that more rigour is better is simply one of perspective - it doesn't apply in all situations. While you're figuring out how to statically type the universe, people have real projects to get done, and if we want them to take advantage of better type systems, more relaxed type systems are one of the things that are going to be needed.

Take a look at the holes in the Java and C++ type systems - some of them are there for a reason. Upcasting and downcasting etc. are not necessarily things to be eliminated, they're features! However, the rest of those type systems could presumably be done better. And you could probably usefully add more holes into those type systems to produce useful languages.

The problem with what I'm saying is that there are certainly no end of applications for which more rigour is better, and with the predilections of academics such as yourself, that's what's going to get focused on, and you'll be able to point to high-tech applications and say "see?" But then you shouldn't be surprised when this stuff doesn't translate into the mainstream - it's because it's not delivering some of the features that count in those contexts.

Bye,
bearophile
Dec 27 2010
next sibling parent Adam D. Ruppe <destructionator gmail.com> writes:
bearophile, let me reply to some of your quotations briefly.

I think those comments are directed toward older generation static
languages. D blows them out of the water. And surpasses older
dynamic languages, like Javascript, at the same time. Observe:


 to do one of two things [...]
Or 3) Use overloads or default parameters. Easy.
 COMObj objDoc = COMObj.create("Word.Application");
 objDoc.invoke("Open", new COMData[] { new COMString(strFilename) };
Gah, D could do the VBScript example literally using a variadic template on opDispatch. It's dynamic, but the dynamicness is limited to only the part of the program where it is needed. (Note that I've actually done this. My web.d code lets you call D functions by passing it a string[string] and my pretty.d from my dmdscript d2 port allows you to call script objects from D with a virtually identical syntax to calling them from inside javascript itself. My dom.d also uses opDispatch to allow easy access to XML attributes, very similarly to how you do it from inside Javascript. So this isn't an in-theory "could", this is something I use daily in production. Interestingly, this is *easier* to do in D than it is in Javascript too! Mozilla JS has something similar to opDispatch, but the other implementations don't, so it can't often be used in real world code...)
 Similarly is with SQL (dynamic typed)
With SQL, it is still advantageous to have a static type in some areas (not all) to confirm you are actually getting what you need to use. When I wrote Ruby, one of my biggest sources of bugs was due to a SQL query not returning the type I was expecting. In D, that's a simple compile error, not a runtime bug. Dynamicness is sometimes good, but D lets it flow in pretty easily where it belongs and keeps things sane everywhere else.
Dec 27 2010
prev sibling next sibling parent Jacob Carlborg <doob me.com> writes:
On 2010-12-27 18:58, bearophile wrote:
 Andrei:

 Conversely, I wonder how we can improve the dynamic typing capabilities of D.
 On the Lambda the Ultimate blog I have found a few interesting comments about
 dynamic typing (all the text below are quotations):
 [...]
 There is no common interface or base class that all the different versions
 implement, so we are left to do one of two things:
 1. Simulate dynamic dispatch via reflection, or
 2. Write adaptor classes for each of the different versions of Word we might
 find on the target system.
 [...]
default arguments and named arguments just to be able to solve the problem you mention above.
 [...]
-- 
/Jacob Carlborg
Dec 27 2010
prev sibling parent reply Jimmy Cao <jcao219 gmail.com> writes:
On Mon, Dec 27, 2010 at 11:58 AM, bearophile <bearophileHUGS lycos.com>wrote:

  we could type with a few additional keystrokes:

 COMObj objDoc = COMObj.create("Word.Application");
 objDoc.invoke("Open", new COMData[] { new COMString(strFilename) };
Now that it's been mentioned, dynamic typing abilities within D would be nice. Take a look:

dynamic wordapp = new Word.Application();
dynamic doc = wordapp.Documents.Open(FileName: "MyDoc.docx");

Take a look at the IronPython project (which I have a fair amount of experience with), initially sponsored by Microsoft (which later dropped its support). It works with ASP.NET and Silverlight. It is awesome (still a community project right now; it's still going on without Microsoft's official support).

Anyways, dynamic typing abilities in D would be very nice, and it opens up quite a number of possibilities.
Dec 28 2010
parent Walter Bright <newshound2 digitalmars.com> writes:
Jimmy Cao wrote:
 Anyways, dynamic typing abilities in D would be very nice, and it opens 
 up quite a number of possibilities.
And D2 has it! See opDispatch.
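A minimal sketch of the idea (a hypothetical Dynamic type; the names are made up):

import std.stdio;

// opDispatch: unknown "method" names are routed to a single handler at
// compile time, giving a dynamic-looking call syntax.
struct Dynamic
{
    void opDispatch(string name, Args...)(Args args)
    {
        writeln("called ", name, " with ", args.length, " argument(s)");
    }
}

void main()
{
    Dynamic d;
    d.open("MyDoc.docx");       // both calls compile down to opDispatch
    d.saveAs("Copy.docx", 42);
}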
Dec 28 2010
prev sibling parent reply Max Samukha <spambox d-coding.com> writes:
On 12/27/2010 07:09 PM, Andrei Alexandrescu wrote:
 On 12/27/10 2:19 AM, Gour wrote:
 On Sun, 26 Dec 2010 14:33:25 -0600
 "Seth" == Seth Hoenig<seth.a.hoenig gmail.com> wrote:
Seth> This is certainly a personal preference, but I would add static Seth> typing to that list. +1
Conversely, I wonder how we can improve the dynamic typing capabilities of D. For example, I'd be very interested in hearing experience with using Variant almost exclusively as the type of choice. Andrei
I have had some experience with Qt's analogue of Variant - QVariant. Variant looks superior to QVariant in almost all respects. Where it is lacking is implicit conversions from static types:

void foo(Variant v);
foo(1);

Quite a nuisance if one wants to use Variant exclusively.

Another QVariant feature I would like to see in Variant is a constructor taking the type descriptor and a void pointer to the value. For example, it is needed for constructing Variants from variadic arguments.
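For illustration, a small sketch of that nuisance with std.variant on a current compiler:

import std.stdio;
import std.variant;

void foo(Variant v)
{
    writeln("got: ", v);
}

void main()
{
    // No implicit conversion from a static type at the call site:
    // foo(1);            // this would not compile
    foo(Variant(1));      // explicit wrapping is required

    Variant v = 42;       // initialization/assignment does convert
    foo(v);
}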
Dec 28 2010
parent reply "Robert Jacques" <sandford jhu.edu> writes:
On Tue, 28 Dec 2010 04:49:54 -0700, Max Samukha <spambox d-coding.com>  
wrote:
 Another QVariant feature I would like to see in Variant is a constructor  
 taking the type descriptor and a void pointer to the value. For example,  
 it is needed for constructing Variants from variadic arguments.
For what it's worth, I've been working on improving Variant and added this to my to do list when I read Issue 2846 a while ago. I've also checked it off the to do list :)
Dec 28 2010
parent Max Samukha <spambox d-coding.com> writes:
On 12/29/2010 02:38 AM, Robert Jacques wrote:
 On Tue, 28 Dec 2010 04:49:54 -0700, Max Samukha <spambox d-coding.com>
 wrote:
 Another QVariant feature I would like to see in Variant is a
 constructor taking the type descriptor and a void pointer to the
 value. For example, it is needed for constructing Variants from
 variadic arguments.
For what it's worth, I've been working on improving Variant and added this to my to do list when I read Issue 2846 a while ago. I've also checked it off the to do list :)
I hope to see your improvements in the standard lib. Thanks!
Dec 29 2010
prev sibling next sibling parent reply =?UTF-8?B?IkrDqXLDtG1lIE0uIEJlcmdlciI=?= <jeberger free.fr> writes:
Walter Bright wrote:
 bearophile wrote:
 One of the few advantages of D over Python is that in D you are able to
 implement efficient and custom data structures without leaving the D
 language itself :-)

 few?

 How about:

 1. scope guard
Agreed
 2. multithreaded programming (the GIL doesn't count)
Agreed
 3. inline assembler
I have almost never used inline assembler even in languages that support it. Of course, this is only a sub-point of your point 6: using inline assembly in a language as slow as Python would be completely pointless.
 4. immutability
 5. purity
I would not count them as advantages per se. Some of their consequences might be seen as advantages once we have enough experience with them.
 6. far faster performance
Agreed
 7. RAII
Python has it too (since 2.6 IIRC, see the "with" keyword). Moreover, Python makes it clear that RAII is happening by requiring a special syntax at the call point.
 8. direct interface to C
Cython gives it too: it is as easy to write a Cython interface module as to write a D interface file for a C library.
 9. templates
Since Python uses duck typing everywhere, you could argue that everything in Python is a template.
 10. CTFE
This is not an advantage per se. It is useful because it allows generative programming, so see point 11.
 11. generative programming
Python has that (like most dynamic languages) through "eval".

	Well, that makes it 3 valid points out of 11 still ;)

		Jerome

--
mailto:jeberger free.fr
http://jeberger.free.fr
Jabber: jeberger jabber.fr
Dec 26 2010
next sibling parent Walter Bright <newshound2 digitalmars.com> writes:
Jérôme M. Berger wrote:
 3. inline assembler
I have almost never used inline assembler even in languages that support it. Of course, this is only a sub-point of your point 6: using inline assembly in a language as slow as Python would be completely pointless.
Inline assembly isn't just for speed. There are a lot of special system instructions.
 4. immutability
 5. purity
I would not count them as advantages per se. Some of their consequences might be seen as advantages once we have enough experience with them.
They are not new concepts, and have been well proven to be advantageous in other languages.
 7. RAII
Python has it too (since 2.6 IIRC, see the "with" keyword). Moreover, Python makes it clear that RAII is happening by requiring a special syntax at the call point.
The 'with' statement is extremely limited. For example, it can't be used in an expression, as a function parameter, etc. I wouldn't call it RAII.
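A small sketch of that difference (the Locked type and its members are made up): a D struct with a destructor can live as a temporary inside an expression, and its cleanup still runs when the expression finishes.

import std.stdio;

struct Locked
{
    this(string name) { writeln("acquire ", name); }
    ~this()           { writeln("release"); }
    int value()       { return 42; }
}

void use(int x) { writeln("got ", x); }

void main()
{
    // RAII in the middle of an expression: the temporary is destroyed,
    // and the resource released, once the full expression completes.
    use(Locked("resource").value());
}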
 8. direct interface to C
Cython gives it too: it is as easy to write a Cython interface module as to write a D interface file for a C library.
Cython is a separate language from Python.
 9. templates
Since Python uses duck typing everywhere, you could argue that everything in Python is a template.
Templates are far more than just generics.
 10. CTFE
This is not an advantage per se. It is useful because it allows generative programming, so see point 11.
That happens at compile time.
 11. generative programming
Python has that (like most dynamic languages) through "eval".
That happens at run time. D's happens at compile time.
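To make that last point concrete, a minimal sketch (the factorial function is only an illustration): the same D function can be forced to run inside the compiler or called normally at run time.

int factorial(int n)
{
    return n <= 1 ? 1 : n * factorial(n - 1);
}

// Forced to run inside the compiler: the value is baked into the binary.
enum precomputed = factorial(10);
static assert(precomputed == 3_628_800);

void main()
{
    // The very same function, now executed at run time.
    auto r = factorial(10);
    assert(r == precomputed);
}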
Dec 26 2010
prev sibling next sibling parent reply bearophile <bearophileHUGS lycos.com> writes:
J.M. Berger:

 	Well, that makes it 3 valid points out of 11 still ;)
Please Jerome, this time don't feed the list owner :-) Bye, bearophile
Dec 26 2010
parent "Jérôme M. Berger" <jeberger free.fr> writes:
bearophile wrote:
 J.M. Berger:
 	Well, that makes it 3 valid points out of 11 still ;)
 Please Jerome, this time don't feed the list owner :-)

 Bye,
 bearophile
	:D

		Jerome

--
mailto:jeberger free.fr
http://jeberger.free.fr
Jabber: jeberger jabber.fr
Dec 26 2010
prev sibling next sibling parent reply bearophile <bearophileHUGS lycos.com> writes:
Je'rome M. Berger:

I have almost never used inline assembler even in languages that support it. Of
course, this is only a sub-point of your point 6: using inline assembly in a
language as slow as Python would be completely pointless.<
For scientific computing this is better than D inline asm: http://www.corepy.org/
I would not count them as advantages per se. Some of their consequences might
be seen as advantages once we have enough experience with them.<
In Python frozensets, tuples, namedtuples and strings are built-in immutables. And it's easy to find frozendicts too. They cover many usages. (They are head-const, sometimes.)

And in the end Python design is based on different principles. If you look at Python and all you see is it lacking "const", "private" and "protected" then you miss the most important thing. A language doesn't fail because it lacks a feature, a language is like an old ecology, its parts are adapted to each other. This means that the lack of const is covered by other qualities of the language or its Zen. In practice I still create less bugs in Python than D. Google builds many systems using Python, and they work.
Python has it too (since 2.6 IIRC, see the "with" keyword). Moreover, Python
makes it clear that RAII is happening by requiring a special syntax at the call
point.<
CPython GC is a reference counter (+ cycle breaker), so deallocations are often deterministic.
Cython gives it too: it is as easy to write a Cython interface module as to
write a D interface file for a C library.<
This is built-in: http://docs.python.org/library/ctypes.html

It's not hard to embed or extend Python in C. Plus there are tens of ways of bridging the two, like SWIG, PIL, Boost Python, etc, plus there is ShedSkin, Cython, etc etc.

-------------------

Walter:
 Templates are far more than just generics.
But an army of people argue that using templates for more than generics is bad. In C++ you use templates for generic data structures and classes, for metaprogramming, for type-level computing, and probably for other things. For metaprogramming even D doesn't use templates much any more (after the introduction of CTFE); most other ways to perform metaprogramming are better than doing it with C++ templates. Type-level computing is better done with staged compilation, a type to represent a type, more flexible type systems, etc. See modern functional languages.
 That happens at compile time.
 That happens at run time. D's happens at compile time.
Python has a wonderful advantage over D: there is no compilation! You write your code and you run it! So no need to let things happen at compile-time. If you want to pre-compute things you can just split your program in two levels and run a level before another, or use eval/exec. So Python is better here. No compilation, no problems :-) Generative programming in Python is way better than D :-)

Generally I don't post a message in a sub-thread like this. In the end what's the purpose of this sub-thread? Is Python better than D? Who cares? They are very different languages, for different people doing different things. Even if D is ten times better than Python, the world will not stop using Python tomorrow. In the future, compiled languages, especially system languages that don't run on a VM, will be just a small percentage of the whole computing world. They will not go away, but for any program written in C++ or D, in the next years people will write 1000 or more programs in JavaScript, Python, Ruby, PHP, VB,

Bye,
bearophile
Dec 27 2010
next sibling parent reply Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
On 12/27/10, bearophile <bearophileHUGS lycos.com> wrote:
 In practice I still create less bugs in Python
 than D. Google builds many systems using Python, and they work.
If you used D for several years and then switched to Python, you would without a doubt create many bugs. In any case, let's not forget that Python is a 20 year old language (from its implementation) and had enough time to grow a huge community, which spawned all those projects you mentioned - C to Python linking, Python->C translators, different Py implementations, etcetera. D is still new, especially D2 (a baby!) which is the version we're comparing most of the time on these forums.
Dec 27 2010
next sibling parent reply bearophile <bearophileHUGS lycos.com> writes:
Andrej Mitrovic:

 If you used D for several years and then switched to Python, you would
 without a doubt create many bugs.
I'm using D for enough years, so I don't believe this argument any more. Bye, bearophile
Dec 27 2010
parent Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
On 12/27/10, bearophile <bearophileHUGS lycos.com> wrote:
 Andrej Mitrovic:

 If you used D for several years and then switched to Python, you would
 without a doubt create many bugs.
I'm using D for enough years, so I don't believe this argument any more. Bye, bearophile
Okay, but why do you often experience bugs? Is it because of a lack of good documentation, so you use the language/library incorrectly through no fault of your own? Are they mostly implementation bugs, or is it a fault of the language itself? I doubt the language itself is to blame; it has a great set of features to fight off bugs.

Do you often use invariants and unittests, for example? Do you write D code using safe D features, or do you often use pointers and casts, traverse arrays by hand (not using foreach, for example), and use unsafe C functions (printf)?

What I'm saying is that D code needs to be written in an idiomatic way to take advantage of all the safety features it provides. I'm pretty sure you can find a ton of potential bugs at compile time if you stick with the safe features of D. And implementation bugs are getting fixed, so the language itself shouldn't be judged based on that.
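For instance, a tiny sketch of the checks being referred to (the Account type is made up); unittest blocks run when compiling with -unittest, and the invariant is checked on member function calls unless contracts are disabled.

struct Account
{
    int balance;

    invariant()
    {
        assert(balance >= 0, "balance must never go negative");
    }

    void withdraw(int amount)
    {
        balance -= amount;
    }
}

unittest
{
    auto a = Account(100);
    a.withdraw(30);
    assert(a.balance == 70);
}

void main() {}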
Dec 27 2010
prev sibling parent Walter Bright <newshound2 digitalmars.com> writes:
Andrej Mitrovic wrote:
 On 12/27/10, bearophile <bearophileHUGS lycos.com> wrote:
  In practice I still create less bugs in Python
 than D. Google builds many systems using Python, and they work.
If you used D for several years and then switched to Python, you would without a doubt create many bugs.
From my own experience, it's true that anytime you learn a new language, you will be creating more bugs in it than the one you are well experienced in.
Dec 27 2010
prev sibling next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 12/27/10 2:57 AM, bearophile wrote:
 Je'rome M. Berger:

 I have almost never used inline assembler even in languages that support it.
Of course, this is only a sub-point of your point 6: using inline assembly in a
language as slow as Python would be completely pointless.<
For scientific computing this is better than D inline asm: http://www.corepy.org/
 I would not count them as advantages per se. Some of their consequences might
be seen as advantages once we have enough experience with them.<
In Python frozensets, tuples, namedtuples and strings are built-in immutables. And it's easy to find frozendicts too. They cover many usages. (They are head-const, sometimes). And in the end Python design is based on different principles. If you look at Python and all you see is it lacking "const", "private" and "protected" then you miss the most important thing. A language doesn't fail because it lacks a feature, a language is like an old ecology, its parts are adapted to each other. This means that the lack of const is covered by other qualities of the language or its Zen. In practice I still create less bugs in Python than D. Google builds many systems using Python, and they work.
 Python has it too (since 2.6 IIRC, see the "with" keyword). Moreover, Python
makes it clear that RAII is happening by requiring a special syntax at the call
point.<
CPython GC is a reference counter (+ cycle breaker), so deallocations are often deterministic.
 Cython gives it too: it is as easy to write a Cython interface module as to
write a D interface file for a C library.<
This is built-in: http://docs.python.org/library/ctypes.html It's not hard to embed or extend Python in C. Plus there are tens of ways of bridging the two, like SWIG, PIL, Boost Python, etc, plus there is ShedSkin, Cython, etc etc. ------------------- Walter:
 Templates are far more than just generics.
But an army of people argue that using templates for more than generics is bad. In C++ you use templates for generic data structures and classes, for metaprogramming, for type-level computing, and probably for other things. For metaprogramming even D doesn't use templates much any more (after the introduction of CTFE); most other ways to perform metaprogramming are better than doing it with C++ templates. Type-level computing is better done with staged compilation, a type to represent a type, more flexible type systems, etc. See modern functional languages.
 That happens at compile time.
 That happens at run time. D's happens at compile time.
Python has a wonderful advantage over D: there is no compilation! You write your code and you run it! So no need to let things happen at compile-time. If you want to pre-compute things you can just split your program in two levels and run a level before another, or use eval/exec. So Python is better here. No compilation, no problems :-) Generative programming in Python is way better than D :-)
With rdmd I have the feeling that you can say the same about D.
 Generally I don't post a message in a sub-thread like this. In the end what's
the purpose of this sub thread? Is Python better than D? Who cares? They are
very different languages, for different people doing different things. Even if
D is ten times better than Python, the world will not stop using Python
tomorrow. In future compiled languages, especially system languages that don't
run on a VM will be just a small percentage of the whole computing world. They
will not go away, but for any program written in C++ or D, in the next years
people will write 1000 or more programs in JavaScript, Python, Ruby, PHP, VB,

It's currently a growing niche as sequential speed doesn't scale anymore by Moore's law. Depending on the interplay of discoveries in the coming years, I believe it's not impossible that serial languages that spend CPU cycles on dynamic interpretation might become a historical curiosity caused by a fleeting context: (a) serial speed is large enough to allow wasting some of it, (b) I/O is much slower than CPU and dominates the performance profile of many programs, (c) many of today's computing needs are materially covered with relatively little CPU effort. Any and all such conditions may change in the future. Andrei
Dec 27 2010
parent reply foobar <foo bar.com> writes:
Andrei Alexandrescu Wrote:
 It's currently a growing niche as sequential speed doesn't scale anymore 
 by Moore's law. Depending on the interplay of discoveries in the coming 
 years, I believe it's not impossible that serial languages that spend 
 CPU cycles on dynamic interpretation might become a historical curiosity 
 caused by a fleeting context: (a) serial speed is large enough to allow 
 wasting some of it, (b) I/O is much slower than CPU and dominates the 
 performance profile of many programs, (c) many of today's computing 
 needs are materially covered with relatively little CPU effort. Any and 
 all such conditions may change in the future.
 
 
 Andrei
No one can predict the future, but I feel that your conclusion is in conflict with your above description. Because sequential speed does not scale, there is a search for non sequential solutions. Those steer _away_ from hand managed systems languages that make such programming harder. In fact, it makes even more sense to go dynamic to adapt the code for different platforms and scenarios. Erlang is an excellent example and is dynamic.
Dec 27 2010
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 12/27/10 2:34 PM, foobar wrote:
 Andrei Alexandrescu Wrote:
 It's currently a growing niche as sequential speed doesn't scale anymore
 by Moore's law. Depending on the interplay of discoveries in the coming
 years, I believe it's not impossible that serial languages that spend
 CPU cycles on dynamic interpretation might become a historical curiosity
 caused by a fleeting context: (a) serial speed is large enough to allow
 wasting some of it, (b) I/O is much slower than CPU and dominates the
 performance profile of many programs, (c) many of today's computing
 needs are materially covered with relatively little CPU effort. Any and
 all such conditions may change in the future.


 Andrei
No one can predict the future, but I feel that your conclusion is in conflict with your above description. Because sequential speed does not scale, there is a search for non sequential solutions. Those steer _away_ from hand managed systems languages that make such programming harder. In fact, it makes even more sense to go dynamic to adapt the code for different platforms and scenarios. Erlang is an excellent example and is dynamic.
Good point. Yet Erlang's dynamism has little to do with its concurrency capabilities and more to do with hot swapping. At any rate, the current crop of successful dynamic languages (Ruby, Python, PHP) seem to be worse equipped than the current statically-typed languages (Java, C++0x), which are rather ill-prepared themselves. I hope I placed a winning bet with D's NDS (no-default-sharing) concurrency model; only time will tell. Andrei
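As a rough illustration of what no-default-sharing looks like with std.concurrency (the worker logic is made up): threads share nothing by default and exchange values as messages.

import std.concurrency;
import std.stdio;

void worker(Tid owner)
{
    // Blocks until the owner sends an int; nothing is shared implicitly.
    auto n = receiveOnly!int();
    send(owner, n * 2);
}

void main()
{
    auto tid = spawn(&worker, thisTid);
    send(tid, 21);
    writeln(receiveOnly!int());   // prints 42
}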
Dec 27 2010
next sibling parent Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
On 12/27/10, Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> wrote:
 I hope I placed a winning bet with D's NDS (no-default-sharing)
 concurrency model; only time will tell.
Excerpt from a recent article: "Initial multicore chip architectures depended on a set of protocols that assures that each core has the same view of the system's memory, a technique called cache coherency. As more cores are added to chips, this approach becomes problematic insofar that "the protocol overhead per core grows with the number of cores, leading to a 'coherency wall' beyond which the overhead exceeds the value of adding cores," the paper accompanying Mattson's talk noted. Mattson has argued that a better approach would be to eliminate cache coherency and instead allow cores to pass messages among one another. " http://www.goodgearguide.com.au/article/368762/intel_1_000-core_processor_possible/
Dec 27 2010
prev sibling parent foobar <foo bar.com> writes:
Andrei Alexandrescu Wrote:

 On 12/27/10 2:34 PM, foobar wrote:
 Andrei Alexandrescu Wrote:
 It's currently a growing niche as sequential speed doesn't scale anymore
 by Moore's law. Depending on the interplay of discoveries in the coming
 years, I believe it's not impossible that serial languages that spend
 CPU cycles on dynamic interpretation might become a historical curiosity
 caused by a fleeting context: (a) serial speed is large enough to allow
 wasting some of it, (b) I/O is much slower than CPU and dominates the
 performance profile of many programs, (c) many of today's computing
 needs are materially covered with relatively little CPU effort. Any and
 all such conditions may change in the future.


 Andrei
No one can predict the future, but I feel that your conclusion is in conflict with your above description. Because sequential speed does not scale, there is a search for non sequential solutions. Those steer _away_ from hand managed systems languages that make such programming harder. In fact, it makes even more sense to go dynamic to adapt the code for different platforms and scenarios. Erlang is an excellent example and is dynamic.
Good point. Yet Erlang's dynamism has little to do with its concurrency capabilities and more to do with hot swapping. At any rate, the current crop of successful dynamic languages (Ruby, Python, PHP) seem to be worse equipped than the current statically-typed languages (Java, C++0x), which are rather ill-prepared themselves. I hope I placed a winning bet with D's NDS (no-default-sharing) concurrency model; only time will tell. Andrei
As you said, both groups are ill-prepared (I would've used stronger words...), but I don't agree that the dynamic languages are worse in this regard. Take a look at how Ruby changed its thread model in the transition to 1.9. It is easier to accomplish than in a compiled language. I agree that D's no-default-sharing is a _huge_ thing; this is also one of the big pros of Erlang. It is an important step, but it is not enough and there are many more aspects to consider.
Dec 27 2010
prev sibling next sibling parent Walter Bright <newshound2 digitalmars.com> writes:
bearophile wrote:
 Templates are far more than just generics.
But an army of people argue that using templates for more than generics is bad.
Not surprising considering how awful templates are in C++. Don't make the mistake of transferring that to D, which does things significantly differently.
 In C++ you use templates for generic data structures and classes, for
 metaprogramming, for type-level computing, and probably for other things. For
 metaprogramming even D doesn't use templates much any more (after the
 introduction of CTFE), most other ways to perform metaprogramming are better
 than doing it with C++ templates. Type level computing is better done with
 staged compilation, a type to represent a type, more flexible type systems,
 etc. See modern functional languages.
Nobody here is arguing that C++ nailed it with templates.
 That happens at compile time. That happens at run time. D's happens at
 compile time.
Python has a wonderful advantage over D: there is no compilation! You write your code and you run it!
The compilation being hidden from you doesn't mean it isn't happening.
 So no need to let things happen at compile-time. If
 you want to pre-compute things you can just split your program in two levels
 and run a level before another, or use eval/exec. So Python is better here. 
 No compilation, no problems :-) Generative programming in Python is way
 better than D :-)
There is no "pre-computing" of things in Python. It's all redone from scratch every time you run a Python program.
 Generally I don't post a message in a sub-thread like this. In the end what's
 the purpose of this sub thread? Is Python better than D? Who cares?
You started off this thread claiming that D had almost no advantages over Python.
Dec 27 2010
prev sibling parent reply Don <nospam nospam.com> writes:
bearophile wrote:
 Je'rome M. Berger:
 
 I have almost never used inline assembler even in languages that support it.
Of course, this is only a sub-point of your point 6: using inline assembly in a
language as slow as Python would be completely pointless.<
For scientific computing this is better than D inline asm: http://www.corepy.org/
Based on a quick look at the website, that looks _extremely_ unlikely to be true. It would have been fair enough to say "this is an option for Python programmers", and provide the link. I think we're all getting rather tired of these ridiculous, sweeping statements, made without presenting any evidence whatsoever.
Dec 28 2010
next sibling parent Jimmy Cao <jcao219 gmail.com> writes:
On Tue, Dec 28, 2010 at 3:03 AM, Don <nospam nospam.com> wrote:

 bearophile wrote:

 Je'rome M. Berger:

  I have almost never used inline assembler even in languages that support
 it. Of course, this is only a sub-point of your point 6: using inline
 assembly in a language as slow as Python would be completely pointless.<
For scientific computing this is better than D inline asm: http://www.corepy.org/
Based on a quick look at the website, that looks _extremely_ unlikely to be true. It would have been fair enough to say "this is an option for Python programmers", and provide the link. I think we're all getting rather tired of these ridiculous, sweeping statements, made without presenting any evidence whatsoever.
Well, at the University of Texas at Austin, they only use Perl/Python for the cloud computing genetic sequencing machine things. (I'm not in college yet, but a teacher at my school asked if I could help him do some genetic analysis using these machines with my Python skills, and that's how I know). So inherently, I believe scientific programming is one of the places where dynamic languages such as Python tend to excel in popularity. Speed is not too much of an issue, considering the optimizations available for Python.
Dec 28 2010
prev sibling parent reply Sean Kelly <sean invisibleduck.org> writes:
Don Wrote:

 bearophile wrote:
 Je'rome M. Berger:
 
 I have almost never used inline assembler even in languages that support it.
Of course, this is only a sub-point of your point 6: using inline assembly in a
language as slow as Python would be completely pointless.<
For scientific computing this is better than D inline asm: http://www.corepy.org/
Based on a quick look at the website, that looks _extremely_ unlikely to be true.
This seems like an extravagant claim: "CorePy. . . regularly outperforms compiled languages for common computational tasks (as hand-coded assembly often does)." They are talking about interpreted assembly code, correct?
Dec 28 2010
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 12/28/10 9:30 AM, Sean Kelly wrote:
 Don Wrote:

 bearophile wrote:
 Je'rome M. Berger:

 I have almost never used inline assembler even in languages that support it.
Of course, this is only a sub-point of your point 6: using inline assembly in a
language as slow as Python would be completely pointless.<
For scientific computing this is better than D inline asm: http://www.corepy.org/
Based on a quick look at the website, that looks _extremely_ unlikely to be true.
This seems like an extravagant claim: "CorePy. . . regularly outperforms compiled languages for common computational tasks (as hand-coded assembly often does)." They are talking about interpreted assembly code, correct?
It's generated during runtime and then run straight.

Andrei
Dec 28 2010
parent reply Sean Kelly <sean invisibleduck.org> writes:
Andrei Alexandrescu Wrote:

 On 12/28/10 9:30 AM, Sean Kelly wrote:
 Don Wrote:

 bearophile wrote:
 Je'rome M. Berger:

 I have almost never used inline assembler even in languages that support it.
Of course, this is only a sub-point of your point 6: using inline assembly in a
language as slow as Python would be completely pointless.<
For scientific computing this is better than D inline asm: http://www.corepy.org/
Based on a quick look at the website, that looks _extremely_ unlikely to be true.
This seems like an extravagant claim: "CorePy. . . regularly outperforms compiled languages for common computational tasks (as hand-coded assembly often does)." They are talking about interpreted assembly code, correct?
It's generated during runtime and then ran straight.
Yeah, I mulled it over and figured out how this works. For long-running sequences of code I imagine it's quite fast.
Dec 28 2010
next sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 12/28/10 9:48 AM, Sean Kelly wrote:
 Andrei Alexandrescu Wrote:

 On 12/28/10 9:30 AM, Sean Kelly wrote:
 Don Wrote:

 bearophile wrote:
 Je'rome M. Berger:

 I have almost never used inline assembler even in languages that support it.
Of course, this is only a sub-point of your point 6: using inline assembly in a
language as slow as Python would be completely pointless.<
For scientific computing this is better than D inline asm: http://www.corepy.org/
Based on a quick look at the website, that looks _extremely_ unlikely to be true.
This seems like an extravagant claim: "CorePy. . . regularly outperforms compiled languages for common computational tasks (as hand-coded assembly often does)." They are talking about interpreted assembly code, correct?
It's generated during runtime and then ran straight.
Yeah, I mulled it over and figured out how this works. For long-running sequences of code I imagine it's quite fast.
Also, it's not a contender to D's built-in inline asm. It's a library! If D needs to generate assembler dynamically, copying CorePy's API (which I find well thought out) is an easy proposition. Andrei
Dec 28 2010
prev sibling parent bearophile <bearophileHUGS lycos.com> writes:
Sean Kelly:

 Yeah, I mulled it over and figured out how this works.  For long-running
sequences of code I imagine it's quite fast.
CorePy also allows to write loops in a higher level style, to produce efficient code (unrolled, and maybe tiled too) with short code. Bye, bearophile
Dec 28 2010
prev sibling next sibling parent reply spir <denis.spir gmail.com> writes:
On Sun, 26 Dec 2010 22:44:04 +0100
"J=C3=A9r=C3=B4me M. Berger" <jeberger free.fr> wrote:

 8. direct interface to C
Cython gives it too: it is as easy to write a Cython interface module as to write a D interface file for a C library.
Hum, I do not agree at all. As I see it, D binds to C directly, Lua binds to C rather easily, Python binds to C "complicatedly". (Lua's C interface layer is far simpler than Python's, but it still cannot compare to D's direct calls in both directions. The only issue AFAIK is that types, qualifiers and conventions do not exactly match.)
 	Well, that makes it 3 valid points out of 11 still ;)
I would say 4 ;-)

Denis
-- -- -- -- -- -- --
vit esse estrany ☣
spir.wikidot.com
Dec 27 2010
parent reply "Jérôme M. Berger" <jeberger free.fr> writes:
spir wrote:
 On Sun, 26 Dec 2010 22:44:04 +0100
 "J=C3=A9r=C3=B4me M. Berger" <jeberger free.fr> wrote:
=20
 8. direct interface to C =20
Cython gives it too: it is as easy to write a Cython interface module as to write a D interface file for a C library.
=20 Hum, I do not agree at all. As I see it, D binds to C directly, Lua bin=
ds to C rather easily, Python binds to C "complicatedly". (Lua's C interf= ace layer is far simpler than Python's, but it still cannot compare to D'= s direct calls in both directions. The only issue AFAIK is that types, qu= alifiers and conventions do not exactly match.)
=20
cdef extern double fooC (int bar) def fooPy (bar): return fooC (bar) I don't know how Lua binds to C, but I doubt it is any easier. Or you could use swig which is even easier. Jerome --=20 mailto:jeberger free.fr http://jeberger.free.fr Jabber: jeberger jabber.fr
Dec 27 2010
next sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 12/27/10 3:19 PM, "Jérôme M. Berger" wrote:
 spir wrote:
 On Sun, 26 Dec 2010 22:44:04 +0100
 "Jérôme M. Berger"<jeberger free.fr>  wrote:

 8. direct interface to C
Cython gives it too: it is as easy to write a Cython interface module as to write a D interface file for a C library.
Hum, I do not agree at all. As I see it, D binds to C directly, Lua binds to C rather easily, Python binds to C "complicatedly". (Lua's C interface layer is far simpler than Python's, but it still cannot compare to D's direct calls in both directions. The only issue AFAIK is that types, qualifiers and conventions do not exactly match.)
cdef extern double fooC (int bar)

def fooPy (bar):
    return fooC (bar)

I don't know how Lua binds to C, but I doubt it is any easier. Or you could use swig which is even easier.

Jerome
How would one be able to pass pointers around? Andrei
Dec 27 2010
prev sibling parent reply KennyTM~ <kennytm gmail.com> writes:
On Dec 28, 10 05:19, "Jérôme M. Berger" wrote:
 spir wrote:
 On Sun, 26 Dec 2010 22:44:04 +0100
 "Jérôme M. Berger"<jeberger free.fr>  wrote:

 8. direct interface to C
Cython gives it too: it is as easy to write a Cython interface module as to write a D interface file for a C library.
Hum, I do not agree at all. As I see it, D binds to C directly, Lua binds to C rather easily, Python binds to C "complicatedly". (Lua's C interface layer is far simpler than Python's, but it still cannot compare to D's direct calls in both directions. The only issue AFAIK is that types, qualifiers and conventions do not exactly match.)
cdef extern double fooC (int bar)

def fooPy (bar):
    return fooC (bar)

I don't know how Lua binds to C, but I doubt it is any easier. Or you could use swig which is even easier.

Jerome
Cython ≠ Python.

In pure Python you bind C code with the 'ctypes' module.

from ctypes import *
xso = CDLL('x.so')
xso.fooC.restype = c_double
xso.fooC.argtypes = [c_int]
...
xso.fooC(4)
Dec 28 2010
parent Walter Bright <newshound2 digitalmars.com> writes:
KennyTM~ wrote:
 In pure Python you bind C code with the 'ctypes' module.
 
 from ctypes import *
 xso = CDLL('x.so')
 xso.fooC.restype = c_double
 xso.fooC.argtypes = [c_int]
 ...
 xso.fooC(4)
Compare that with:

extern(C) double foo(int);

foo(4);

and you don't need to build a .so either.
Dec 28 2010
prev sibling parent reply =?UTF-8?B?IkrDqXLDtG1lIE0uIEJlcmdlciI=?= <jeberger free.fr> writes:
Jérôme M. Berger wrote:
 ...
I should perhaps add a couple of points:
- I like D (or I would not be here);
- D has some advantages over Python (mostly to do with low level programming and performance);
- D and Python have some features that are on a par with each other;
- Python has some advantages over D too (reflection comes to mind).

We will not advance the cause of D by pretending that it is better at everything than all other languages. If we try to, we will simply annoy people who will see that we lied somewhere and simply assume that we lied everywhere. Seeing D's strengths (and they are many) is all very good, but we must not be blind to the fact that others have strengths too.

		Jerome

--
mailto:jeberger free.fr
http://jeberger.free.fr
Jabber: jeberger jabber.fr
Dec 27 2010
parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 12/27/10 3:33 PM, "Jérôme M. Berger" wrote:
 Jérôme M. Berger wrote:
 ...
 I should perhaps add a couple of points:
 - I like D (or I would not be here);
 - D has some advantages over Python (mostly to do with low level programming and performance);
 - D and Python have some features that are on a par with each other;
 - Python has some advantages over D too (reflection comes to mind).

 We will not advance the cause of D by pretending that it is better at everything than all other languages. If we try to, we will simply annoy people who will see that we lied somewhere and simply assume that we lied everywhere. Seeing D's strength (and they are many) is all very good, but we must not be blind to the fact that others have strengths too.

 Jerome
Strongly agree. What I think presses some people's buttons is the following pattern:

1. Some strong statement is aired on a subjective topic, e.g. in this case a certain comparative aspect of two languages. Many people aren't equally experienced in both, so they need to choose between going with the poster's assertiveness or spending time on due research.

2. If nobody answers, the strong statement "stays" and spreads possibly inaccurate rumor.

3. On occasion someone _will_ carry out the due diligence and reveal the issues with the claim.

4. In these rare instances, the poster subsequently dilutes the statement with qualifications, amendments, and retractions, sometimes relying on the ultimate placating device: "I still have a lot to learn".

It's a risk worth taking: most of the time everything stops at point 2, and in the worst case the person who spent time debunking is silenced by playing the modesty card.

Andrei
Dec 27 2010
prev sibling next sibling parent spir <denis.spir gmail.com> writes:
On Sun, 26 Dec 2010 12:06:04 -0800
Walter Bright <newshound2 digitalmars.com> wrote:

 11. generative programming
Does someone have a pointer to any kind of doc about this? (in D)

Denis
-- -- -- -- -- -- --
vit esse estrany ☣
spir.wikidot.com
Dec 27 2010
prev sibling next sibling parent Jonathan M Davis <jmdavisProg gmx.com> writes:
On Monday 27 December 2010 04:41:37 spir wrote:
 On Sun, 26 Dec 2010 12:06:04 -0800
 
 Walter Bright <newshound2 digitalmars.com> wrote:
 11. generative programming
Does someone have a pointer to any kind of doc about this? (in D)
Anything on templates, template mixins, and string mixins. All of them generate code. And some people have done some pretty crazy stuff with them (especially string mixins). - Jonathan M Davis
Dec 27 2010
prev sibling next sibling parent reply Caligo <iteronvexor gmail.com> writes:
On Mon, Dec 27, 2010 at 7:20 AM, Jonathan M Davis <jmdavisProg gmx.com>wrote:

 On Monday 27 December 2010 04:41:37 spir wrote:
 On Sun, 26 Dec 2010 12:06:04 -0800

 Walter Bright <newshound2 digitalmars.com> wrote:
 11. generative programming
Does someone have a pointer to any kind of doc about this? (in D)
Anything on templates, template mixins, and string mixins. All of them generate code. And some people have done some pretty crazy stuff with them (especially string mixins). - Jonathan M Davis
So is it like template metaprogramming in C++? A small D example would be helpful. There doesn't seem to be anything about it in TDPL.

As for CTFE, does this mean I could call 'writeln()' at compile time and have it print a message to stdout while compiling?
Dec 27 2010
next sibling parent reply Daniel Gibson <metalcaedes gmail.com> writes:
Am 27.12.2010 17:01, schrieb Caligo:
 On Mon, Dec 27, 2010 at 7:20 AM, Jonathan M Davis <jmdavisProg gmx.com
 <mailto:jmdavisProg gmx.com>> wrote:

     On Monday 27 December 2010 04:41:37 spir wrote:
      > On Sun, 26 Dec 2010 12:06:04 -0800
      >
      > Walter Bright <newshound2 digitalmars.com
     <mailto:newshound2 digitalmars.com>> wrote:
      > > 11. generative programming
      >
      > Does someone have a pointer to any kind of doc about this? (in D)

     Anything on templates, template mixins, and string mixins. All of
     them generate
     code. And some people have done some pretty crazy stuff with them
     (especially
     string mixins).

     - Jonathan M Davis


 So is it like template metaprogramming in C++?  a small D example would
 be helpful.  There doesn't seem to be anything about it in TDPL.
http://www.digitalmars.com/d/2.0/templates-revisited.html
 As for CTFE, does this mean I could call 'writeln()' at compile time and
 have it print a message to stdout while compiling?
writeln(), no. You can call pure functions (but not yet all of them). See http://www.digitalmars.com/d/2.0/function.html#interpretation for further information.
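Since a small example was asked for, here is a rough sketch of CTFE plus a string mixin (the generator function is made up): an ordinary D function builds source code as a string at compile time, and mixin() compiles it into the program.

import std.conv : to;
import std.stdio;

// Runs under CTFE when used inside mixin(): plain loops and string
// concatenation, no template recursion needed.
string makeConstants(string[] names)
{
    string code;
    foreach (i, name; names)
        code ~= "enum int " ~ name ~ " = " ~ to!string(i) ~ ";\n";
    return code;
}

mixin(makeConstants(["RED", "GREEN", "BLUE"]));

void main()
{
    writeln(RED, " ", GREEN, " ", BLUE);   // prints 0 1 2
}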
Dec 27 2010
parent reply Mariusz Gliwiński <alienballance gmail.com> writes:

Monday 27 December 2010   17:18:17 Daniel Gibson:
 Am 27.12.2010 17:01, schrieb Caligo:
 On Mon, Dec 27, 2010 at 7:20 AM, Jonathan M Davis <jmdavisProg gmx.com
     <mailto:newshound2 digitalmars.com>> wrote:
      > > 11. generative programming
      >
      > Does someone have a pointer to any kind of doc about this? (in D)
     Anything on templates, template mixins, and string mixins. All of
     them generate
     code. And some people have done some pretty crazy stuff with them
     (especially
     string mixins).
     - Jonathan M Davis
 So is it like template metaprogramming in C++?  a small D example would
 be helpful.  There doesn't seem to be anything about it in TDPL.
http://www.digitalmars.com/d/2.0/templates-revisited.html
Firstly, I admit I'm still new to programming, so treat me like that, but... To my peasant-like brain, if you can't store a compile-time variable and read it later from another template, or even another module, with normal language rules but at compile time, then it's not *fully* generative programming, is it? You can't do many things without that.

As I said, I'm new to programming, so maybe that's why, but D was my ideal language (one where I could express everything I imagined). This little thing makes templates only a small spice on top of what I've seen before, instead of a big step forward. I understand it might be hard to implement with clear rules of usage, but I abstracted that out.

Ps. I want the compile-time raytracer downloadable again, please :)

Sincerely,
Mariusz Gliwiński
Dec 27 2010
parent Don <nospam nospam.com> writes:
Mariusz Gliwiński wrote:
 Monday 27 December 2010   17:18:17 Daniel Gibson:
 Am 27.12.2010 17:01, schrieb Caligo:
 On Mon, Dec 27, 2010 at 7:20 AM, Jonathan M Davis <jmdavisProg gmx.com
     <mailto:newshound2 digitalmars.com>> wrote:
      > > 11. generative programming
      > 
      > Does someone have a pointer to any kind of doc about this? (in D)
     
     Anything on templates, template mixins, and string mixins. All of
     them generate
     code. And some people have done some pretty crazy stuff with them
     (especially
     string mixins).
     
     - Jonathan M Davis

 So is it like template metaprogramming in C++?  a small D example would
 be helpful.  There doesn't seem to be anything about it in TDPL.
http://www.digitalmars.com/d/2.0/templates-revisited.html
Firstly, I admit I'm still new in programming so treat me like that but... On my peasant-like brain, if You can't store compilation-time variable, to read it later... from other template, even module with normal language rules but in compile time, it's not *fully* generative programming, is it? You can't make many things without that.
You can store all compile-time results in local variables. (When a couple of implementation bugs get fixed, you'll be able to store it in heap-allocated variables as well). So yes, with C++ style template metaprogramming, there's not so much you can do. D CTFE metaprogramming is far more powerful, and it's also simple to understand.
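A tiny sketch of that point (the function is made up): inside a CTFE-evaluated function you use plain mutable locals, loops and arrays, and the finished result can be stored in a manifest constant and read by later compile-time code.

int[] squares(int n)
{
    int[] result;               // ordinary mutable local, filled in a loop
    foreach (i; 0 .. n)
        result ~= i * i;
    return result;
}

enum table = squares(10);       // computed entirely at compile time
static assert(table[3] == 9);   // and readable by later compile-time code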
 As I said, I'm new in programming so maybe that's why, but D was my ideal 
 language (so i could express everything i imagined). But this little thing 
 makes templates only small spice to what I've seen before, instead of big step 
 forward. I understand it might be hard to implement with clear rules of usage, 
 but I abstracted it out.
 
 Ps. I want compile-time raytracer downloadable again, please :)
BTW -- I don't recommend doing anything complicated with template metaprogramming. It becomes incomprehensible very quickly. CTFE, on the other hand, scales very nicely.
Dec 27 2010
prev sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 12/27/10 10:01 AM, Caligo wrote:
 On Mon, Dec 27, 2010 at 7:20 AM, Jonathan M Davis <jmdavisProg gmx.com
 <mailto:jmdavisProg gmx.com>> wrote:

     On Monday 27 December 2010 04:41:37 spir wrote:
      > On Sun, 26 Dec 2010 12:06:04 -0800
      >
      > Walter Bright <newshound2 digitalmars.com
     <mailto:newshound2 digitalmars.com>> wrote:
      > > 11. generative programming
      >
      > Does someone have a pointer to any kind of doc about this? (in D)

     Anything on templates, template mixins, and string mixins. All of
     them generate
     code. And some people have done some pretty crazy stuff with them
     (especially
     string mixins).

     - Jonathan M Davis


 So is it like template metaprogramming in C++?  a small D example would
 be helpful.  There doesn't seem to be anything about it in TDPL.
Look up the index for "mixin". Most, if not all, examples of string mixins are generative. The canonical example I give is std.bitmanip.bitfields, see http://www.dsource.org/projects/phobos/browser/trunk/phobos/std/bitmanip.d
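A usage sketch of that example (the field names here are made up): the string returned by bitfields is mixed in, and the accessor properties are generated at compile time.

import std.bitmanip;
import std.stdio;

struct Header
{
    mixin(bitfields!(
        uint, "version_", 4,
        uint, "flags",    4,
        uint, "length",   24));   // the widths must add up to 8, 16, 32 or 64
}

void main()
{
    Header h;                     // packed into a single 32-bit field
    h.version_ = 2;
    h.flags = 0b1010;
    h.length = 1_000_000;
    writeln(h.version_, " ", h.flags, " ", h.length);
}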
 As for CTFE, does this mean I could call 'writeln()' at compile time and
 have it print a message to stdout while compiling?
You can't because that's not pure. Incidentally you can use pragma(msg, "hello") as an alternate mechanism. Helped a lot while debugging std.bitmanip.bitfields :o). Andrei
Dec 27 2010
prev sibling next sibling parent Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
On 12/27/10, Caligo <iteronvexor gmail.com> wrote:
 As for CTFE, does this mean I could call 'writeln()' at compile time and
 have it print a message to stdout while compiling?
You need to use pragma(msg, "your message here") for that. Not everything in D is CTFE-able, there are some limitations.
Dec 27 2010
prev sibling parent reply Caligo <iteronvexor gmail.com> writes:
On Mon, Dec 27, 2010 at 6:41 AM, spir <denis.spir gmail.com> wrote:

 On Sun, 26 Dec 2010 12:06:04 -0800
 Walter Bright <newshound2 digitalmars.com> wrote:

 11. generative programming
Does someone have a pointer to any kind of doc about this? (in D) Denis -- -- -- -- -- -- -- vit esse estrany =E2=98=A3 spir.wikidot.com
I just read the section on mixins in chapter 3 and my jaw hit the floor.
Dec 28 2010
parent Christopher Nicholson-Sauls <ibisbasenji gmail.com> writes:
On 12/28/10 12:04, Caligo wrote:
 
 I just read the section on mixins in chapter 3 and my jaw hit the floor.
 
Yeah, I had that reaction as well. Combined with CTFE, mixins and string mixins can do some pretty amazing things. Sometimes the addition of Tuples can make it even better. For example, how nice is it to pre-generate a complicated partial argument list, without having to pre-generate the entire function call? Pretty darn nice.

-- Chris N-S
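One way to read the partial-argument-list remark, sketched with an alias tuple (all names below are made up): the pre-built list expands in place wherever it is used.

import std.stdio;
import std.typetuple;   // TypeTuple; later releases call it std.meta.AliasSeq

// A pre-generated partial argument list, built once...
alias windowRect = TypeTuple!(0, 0, 640, 480);

void draw(int x, int y, int w, int h, string label)
{
    writefln("%s at (%s,%s), size %sx%s", label, x, y, w, h);
}

void main()
{
    // ...and spliced into as many different calls as needed.
    draw(windowRect, "main window");
    draw(windowRect, "overlay");
}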
Dec 29 2010
prev sibling parent nobody <someone somewhere.com> writes:
On 12/25/2010 3:56 AM, Caligo wrote:
 This is the page that would require your attention:
 http://unthought.net/c++/c_vs_c++.html

 #include <unordered_set>
If you don't hesitate to use GNU extensions, then the standard unordered_set is much slower than GNU's pb_ds.

http://gcc.gnu.org/onlinedocs/libstdc++/ext/pb_ds/index.html

The C++ knucleotide benchmark on the Alioth Shootout uses pb_ds:

http://shootout.alioth.debian.org/u32q/benchmark.php?test=knucleotide&lang=all&lang2=gcc
Dec 25 2010