
digitalmars.D - Re: Stroustrup's talk on C++0x

reply Robert Fraser <fraserofthenight gmail.com> writes:
eao197 Wrote:

 Yes! But C++ is doing that without breaking the existing codebase. So a
 significant amount of C++ programmers needn't look to D -- they will have
 new advanced features without dropping their old tools, IDEs and libraries.

 I'm afraid that would play against D :(

 Current C++ is far behind D, but D is not stable, not mature, not as well
 equipped with tools/libraries as C++. So it will take several years to make D
 competitive with C++ in that area. But if in 2010 (it is only 2.5 years
 ahead) C++ will have things like lambdas and autos (and tons of libraries
 and an army of programmers), what will be D's 'killer feature' to attract
 C++ programmers? And not only C++; at this time D would compete with new
 versions of C#, Java, Scala, Nemerle (probably) and with some
 functional languages (like Haskell and OCaml).

You seem to forget that D is evolving, too. C++ might get a lot of the cool D features (albeit with ugly syntax), but by that time, D might have superpowers incomprehensible to the C++ mind.
Aug 20 2007
parent reply eao197 <eao197 intervale.ru> writes:
On Mon, 20 Aug 2007 23:26:33 +0400, Robert Fraser  
<fraserofthenight gmail.com> wrote:

 eao197 Wrote:

 Yes! But C++ is doing that without breaking the existing codebase. So a
 significant amount of C++ programmers needn't look to D -- they will have
 new advanced features without dropping their old tools, IDEs and libraries.

 I'm afraid that would play against D :(

 Current C++ is far behind D, but D is not stable, not mature, not as well
 equipped with tools/libraries as C++. So it will take several years to make D
 competitive with C++ in that area. But if in 2010 (it is only 2.5 years
 ahead) C++ will have things like lambdas and autos (and tons of libraries
 and an army of programmers), what will be D's 'killer feature' to attract
 C++ programmers? And not only C++; at this time D would compete with new
 versions of C#, Java, Scala, Nemerle (probably) and with some
 functional languages (like Haskell and OCaml).

You seem to forget that D is evolving, too. C++ might get a lot of the cool D features (albeit with ugly syntax), but by that time, D might have superpowers incomprehensible to the C++ mind.

I didn't. From my point of view, permanent evolution is D's main problem. I can't start using D in my work regularly because D and Tango are not stable enough. I can't start teaching students D because D 1.0 is obsolete and D 2.0 is not finished yet. To outperform C++ in 2009-2010, D must have full strength now and must be stable for some years to prove that strength in some killer applications. -- Regards, Yauheni Akhotnikau
Aug 20 2007
next sibling parent Charles D Hixson <charleshixsn earthlink.net> writes:
eao197 wrote:
 On Mon, 20 Aug 2007 23:26:33 +0400, Robert Fraser 
 <fraserofthenight gmail.com> wrote:
 
 eao197 Wrote:
 ...


I didn't. From my point of view, permanent evolution is D's main problem. I can't start using D in my work regularly because D and Tango are not stable enough. I can't start teaching students D because D 1.0 is obsolete and D 2.0 is not finished yet. To outperform C++ in 2009-2010, D must have full strength now and must be stable for some years to prove that strength in some killer applications.

dependable libraries. A secondary problem is the lack of run-time flexibility (a la Python, etc.), but that may be intractable in a language that intends to be fast. Well... the libraries problem is intractable, also. Just, perhaps, less so. OTOH, it is crucial that new releases not break working libraries. If they do, it will not only prevent the accumulation over time of working libraries, but will also discourage people from working on them.
Aug 20 2007
prev sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
eao197 wrote:
 On Mon, 20 Aug 2007 23:26:33 +0400, Robert Fraser 
 You seem to forget that D is evolving, too. C++ might get a lot of the 
 cool D features (albiet with ugly syntax), but by that time, D might 
 have superpowers incomprehensible to the C++ mind.

I didn't. From my point of view, permanent evolution is D's main problem. I can't start using D in my work regularly because D and Tango are not stable enough. I can't start teaching students D because D 1.0 is obsolete and D 2.0 is not finished yet.

I don't understand this. You could as well say that C++98 is obsolete and C++0x is not finished yet.
 To outperform C++ in 2009-2010, D must have full strength now and must be 
 stable for some years to prove that strength in some killer 
 applications.

C++0x's new features are essentially all present in D 1.0.
Aug 22 2007
next sibling parent reply Bill Baxter <dnewsgroup billbaxter.com> writes:
Walter Bright wrote:
 eao197 wrote:
 On Mon, 20 Aug 2007 23:26:33 +0400, Robert Fraser
 You seem to forget that D is evolving, too. C++ might get a lot of 
 the cool D features (albiet with ugly syntax), but by that time, D 
 might have superpowers incomprehensible to the C++ mind.

I didn't. From my point of view, permanent evolution is D's main problem. I can't start using D in my work regularly because D and Tango are not stable enough. I can't start teaching students D because D 1.0 is obsolete and D 2.0 is not finished yet.

I don't understand this. You could as well say that C++98 is obsolete and C++0x is not finished yet.
 To outperform C++ in 2009-2010, D must have full strength now and must 
 be stable for some years to prove that strength in some killer 
 applications.

C++0x's new features are essentially all present in D 1.0.

..but C++98's features that were missing from D are still missing (both good and bad ones). --bb
Aug 22 2007
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Bill Baxter wrote:
 Walter Bright wrote:
 C++0x's new features are essentially all present in D 1.0.

..but C++98's features that were missing from D are still missing (both good and bad ones).

Like what? Virtual base classes? Argument dependent lookup? #include files? C++ can keep them <g>.
Aug 23 2007
parent reply Bill Baxter <dnewsgroup billbaxter.com> writes:
Walter Bright wrote:
 Bill Baxter wrote:
 Walter Bright wrote:
 C++0x's new features are essentially all present in D 1.0.

..but C++98's features that were missing from D are still missing (both good and bad ones).

Like what? Virtual base classes? Argument dependent lookup? #include files? C++ can keep them <g>.

The things that have me banging my head most often are:

1) the few things preventing an implementation of smart pointers [destructors, copy constructors and opDot]. There are some cases where you just want to refcount objects. This is the one hole in D that I haven't heard any reasonable workaround for. I don't necessarily _want_ copy constructors in general, but they seem to be necessary for implementing automatic reference counting.
2) lack of a way to return a reference.
3) From what I can tell, "const ref" doesn't work for parameters in D 2.0. Oh, and
4) real struct constructors. Just a syntactic annoyance, but still an annoyance.

--bb
Aug 23 2007
next sibling parent reply Bill Baxter <dnewsgroup billbaxter.com> writes:
Bill Baxter wrote:
 Walter Bright wrote:
 Bill Baxter wrote:
 Walter Bright wrote:
 C++0x's new features are essentially all present in D 1.0.

..but C++98's features that were missing from D are still missing (both good and bad ones).

Like what? Virtual base classes? Argument dependent lookup? #include files? C++ can keep them <g>.

The things that have me banging my head most often are 1) the few things preventing an implementation of smart pointers [destructors, copy constructors and opDot]. There are some cases where you just want to refcount objects. This is the one hole in D that I haven't heard any reasonable workaround for. I don't necessarily _want_ copy constructors in general but they seem to be necessary for implementing automatic reference counting.

Sorry for the self-follow-up, but I just wanted to add that really C++ smart pointers are themselves kind of clunky, due to the fact that _all_ you have access to is that operator*/operator-> thing. So for instance if you make a boost::shared_ptr<std::map>, you end up always having to dereference to do anything interesting involving operator overloads: mymap["foo"] doesn't work, you need to use (*mymap)["foo"]. What you really want most of the time is something more like "smart references". This kind of thing is coming close to possibility with the reflection stuff some people are doing. Basically shared_ptr!(T) would do introspection on T and populate itself with basic forward-to-T implementations of all of T's methods. But that seems kind of heavyweight to me. All you really want to do is define a fallback -- when the compiler sees foo[x] and foo is a shared_ptr!(T), there should be a way to tell it to check T for an opIndex if the shared_ptr itself doesn't have one. That would handle the access syntax. But that still leaves the destructor/copy constructors necessary to get a real smart pointer.
 2) lack of a way to return a reference.

This would also be less critical given a way to fall-back to a member's implementation.
 3) From what I can tell, "const ref" doesn't work for parameters in D 
 2.0. Oh, and
 4) real struct constructors.  Just a syntactic annoyance, but still an 
 annoyance.

--bb
Aug 23 2007
parent reply Regan Heath <regan netmail.co.nz> writes:
Bill Baxter wrote:
 Bill Baxter wrote:
 Walter Bright wrote:
 Bill Baxter wrote:
 Walter Bright wrote:
 C++0x's new features are essentially all present in D 1.0.

..but C++98's features that were missing from D are still missing (both good and bad ones).

Like what? Virtual base classes? Argument dependent lookup? #include files? C++ can keep them <g>.

The things that have me banging my head most often are 1) the few things preventing an implementation of smart pointers [destructors, copy constructors and opDot]. There are some cases where you just want to refcount objects. This is the one hole in D that I haven't heard any reasonable workaround for. I don't necessarily _want_ copy constructors in general but they seem to be necessary for implementing automatic reference counting.

Sorry for the self-follow-up, but I just wanted to add that really C++ smart pointers are themselves kind of clunky, due to the fact that _all_ you have access to is that operator*/operator-> thing. So for instance if you make a boost::shared_ptr<std::map>, you end up always having to dereference to do anything interesting involving operator overloads: mymap["foo"] doesn't work, you need to use (*mymap)["foo"]. What you really want most of the time is something more like "smart references". This kind of thing is coming close to possibility with the reflection stuff some people are doing. Basically shared_ptr!(T) would do introspection on T and populate itself with basic forward-to-T implementations of all of T's methods. But that seems kind of heavyweight to me. All you really want to do is define a fallback -- when the compiler sees foo[x] and foo is a shared_ptr!(T), there should be a way to tell it to check T for an opIndex if the shared_ptr itself doesn't have one. That would handle the access syntax. But that still leaves the destructor/copy constructors necessary to get a real smart pointer.
 2) lack of a way to return a reference.

This would also be less critical given a way to fall-back to a member's implementation.

Funny, after reading your post I was thinking that you would provide a way to fall back by returning a reference :P E.g.

ref T opDereference() { return ptr; }

which would then automatically be called when using [], ., etc. on a T*. I guess we wait and see what Walter cooks up for us in 2.0 :)

Regan
Aug 24 2007
parent Bill Baxter <dnewsgroup billbaxter.com> writes:
Regan Heath wrote:
 Bill Baxter wrote:
 Bill Baxter wrote:
 Walter Bright wrote:
 Bill Baxter wrote:
 Walter Bright wrote:
 C++0x's new features are essentially all present in D 1.0.

..but C++98's features that were missing from D are still missing (both good and bad ones).

Like what? Virtual base classes? Argument dependent lookup? #include files? C++ can keep them <g>.

The things that have me banging my head most often are 1) the few things preventing an implementation of smart pointers [destructors, copy constructors and opDot]. There are some cases where you just want to refcount objects. This is the one hole in D that I haven't heard any reasonable workaround for. I don't necessarily _want_ copy constructors in general but they seem to be necessary for implementing automatic reference counting.

Sorry for the self-follow-up, but I just wanted to add that really C++ smart pointers are themselves kind of clunky, due to the fact that _all_ you have access to is that operator*/operator-> thing. So for instance if you make a boost::shared_ptr<std::map>, you end up always having to dereference to do anything interesting involving operator overloads: mymap["foo"] doesn't work, you need to use (*mymap)["foo"]. What you really want most of the time is something more like "smart references". This kind of thing is coming close to possibility with the reflection stuff some people are doing. Basically shared_ptr!(T) would do introspection on T and populate itself with basic forward-to-T implementations of all of T's methods. But that seems kind of heavyweight to me. All you really want to do is define a fallback -- when the compiler sees foo[x] and foo is a shared_ptr!(T), there should be a way to tell it to check T for an opIndex if the shared_ptr itself doesn't have one. That would handle the access syntax. But that still leaves the destructor/copy constructors necessary to get a real smart pointer.
 2) lack of a way to return a reference.

This would also be less critical given a way to fall-back to a member's implementation.

Funny, after reading your post I was thinking that you would provide a way to fall back by returning a reference :P E.g.

ref T opDereference() { return ptr; }

which would then automatically be called when using [], ., etc. on a T*. I guess we wait and see what Walter cooks up for us in 2.0 :)

Really I'd rather have something that gives a little more control. Returning a reference is like pulling down your pants in public. --bb
Aug 24 2007
prev sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
Bill Baxter wrote:
 The things that have me banging my head most often are
 1) the few things preventing an implementation of smart pointers 
 [destructors, copy constructors and opDot].  There are some cases where 
 you just want to refcount objects.  This is the one hole in D that I 
 haven't heard any reasonable workaround for.  I don't necessarily _want_ 
 copy constructors in general but they seem to be necessary for 
 implementing automatic reference counting.
 2) lack of a way to return a reference.
 3) From what I can tell, "const ref" doesn't work for parameters in D 
 2.0. Oh, and
 4) real struct constructors.  Just a syntactic annoyance, but still an 
 annoyance.

These will all be addressed in 2.0.
Aug 23 2007
parent Bill Baxter <dnewsgroup billbaxter.com> writes:
Walter Bright wrote:
 Bill Baxter wrote:
 The things that have me banging my head most often are
 1) the few things preventing an implementation of smart pointers 
 [destructors, copy constructors and opDot].  There are some cases 
 where you just want to refcount objects.  This is the one hole in D 
 that I haven't heard any reasonable workaround for.  I don't 
 necessarily _want_ copy constructors in general but they seem to be 
 necessary for implementing automatic reference counting.
 2) lack of a way to return a reference.
 3) From what I can tell, "const ref" doesn't work for parameters in D 
 2.0. Oh, and
 4) real struct constructors.  Just a syntactic annoyance, but still an 
 annoyance.

These will all be addressed in 2.0.

Hot diggity. Looking forward to it. --bb
Aug 24 2007
prev sibling next sibling parent reply Reiner Pope <some address.com> writes:
Walter Bright wrote:
 eao197 wrote:
 On Mon, 20 Aug 2007 23:26:33 +0400, Robert Fraser
 You seem to forget that D is evolving, too. C++ might get a lot of 
 the cool D features (albiet with ugly syntax), but by that time, D 
 might have superpowers incomprehensible to the C++ mind.

I didn't. From my point of view, permanent evolution is D's main problem. I can't start using D in my work regularly because D and Tango are not stable enough. I can't start teaching students D because D 1.0 is obsolete and D 2.0 is not finished yet.

I don't understand this. You could as well say that C++98 is obsolete and C++0x is not finished yet.
 To outperform C++ in 2009-2010, D must have full strength now and must 
 be stable for some years to prove that strength in some killer 
 applications.

C++0x's new features are essentially all present in D 1.0.

All except Concepts. I know there was a small discussion of Concepts here after someone posted a Doug Gregor video on Concepts, but other than that they haven't really got much attention. I know that a lot of the problems they solve in simplifying template error messages can be done alternatively in D with static-if, is() and now __traits, in conjunction with the 'static unittest' idiom. But even then, I think C++0x Concepts give a nicer syntax for expressing exactly what you want, and they also allow overloading on Concepts (which AFAIK there is no way to emulate in D). Two characteristic examples (the first one is in would-be D with Concepts):

// if D had Concepts
void sort(T :: RandomAccessIteratorConcept)(T t) {...}

// currently
void sort(T)(T t)
{
    static assert(IsRandomAccessIterator!(T),
                  T.stringof ~ " isn't a random access iterator");
    ...
}
alias sort!(MinimalRandomAccessIterator) _sort__UnitTest;

It isn't syntactically clean, so people won't be encouraged to support this idiom, and it doesn't allow the Concepts features of overloading or concept maps (I think concept maps can be emulated, but they currently break IFTI). I'm interested in knowing your thoughts/plans for this.

-- Reiner
Aug 23 2007
parent reply Reiner Pope <some address.com> writes:
Reiner Pope wrote:
 Walter Bright wrote:
 eao197 wrote:
 On Mon, 20 Aug 2007 23:26:33 +0400, Robert Fraser
 You seem to forget that D is evolving, too. C++ might get a lot of 
 the cool D features (albiet with ugly syntax), but by that time, D 
 might have superpowers incomprehensible to the C++ mind.

I didn't. From my point of view, permanent envolvement is a main D's problem. I can't start use D on my work regulary because D and Tango is not stable enough. I can't start teach students D because D 1.0 is obsolete and D 2.0 is not finished yet.

I don't understand this. You could as well say that C++98 is obsolete and C++0x is not finished yet.
 To outperform C++ in 2009-2010 D must have full strength now and must 
 be stable during some years to proof that strength in some killer 
 applications.

C++0x's new features are essentially all present in D 1.0.

All except Concepts. I know there was a small discussion of Concepts here after someone posted a Doug Gregor video on Concepts, but other than that they haven't really got much attention. I know that a lot of the problems they solve in simplifying template error messages can be done alternatively in D with static-if, is() and now __traits, in conjunction with the 'static unittest' idiom, but even then, I think C++0x Concepts give a nicer syntax for expressing exactly what you want, and they also allow overloading on Concepts (which AFAIK there is no way to emulate in D). Two characteristic examples (the first one is in would-be D with Concepts): // if D had Concepts void sort(T :: RandomAccessIteratorConcept)(T t) {...} // currently void sort(T)(T t) { static assert(IsRandomAccessIterator!(T), T.stringof ~ " isn't a random access iterator"); ... } alias sort!(MinimalRandomAccessIterator) _sort__UnitTest; It isn't syntactically clean, so people won't be encouraged to support this idiom, and it doesn't allow the Concepts features of overloading or concept maps (I think concept maps can be emulated, but they currently break IFTI). I'm interested in knowing your thoughts/plans for this. -- Reiner

I see Walter has now said elsewhere in this thread that 'concepts aren't a whole lot more than interface specialization, which is already supported in D.' True; what I'm really wondering, though, is:

1. Will specialisation be "fixed" to work with IFTI?
2. Will there be a way to support user-defined specialisations, for instance ones which don't depend on the inheritance hierarchy?

-- Reiner
Aug 23 2007
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Reiner Pope wrote:
  1. Will specialisation be "fixed" to work with IFTI?

You can simply specialize the parameter to the function.
 2. Will there be a way to support user-defined specialisations, for 
 instance ones which don't depend on the inheritance hierarchy?

I don't know what that means - interfaces are already user-defined.
Aug 23 2007
parent reply Reiner Pope <some address.com> writes:
Walter Bright wrote:
 Reiner Pope wrote:
  1. Will specialisation be "fixed" to work with IFTI?

You can simply specialize the parameter to the function.

I'm not sure what you mean. But what I refer to is the part of the spec (the templates page, under Function Templates) that says "Function template type parameters that are to be implicitly deduced may not have specializations:" and gives the example:

void Foo(T : T*)(T t) { ... }

int x, y;
Foo!(int*)(x); // ok, T is not deduced from function argument
Foo(&y);       // error, T has specialization

Perhaps you mean that you can write

void Foo(T)(T* t) { ... }
...
int x;
Foo(&x);

Sure. But the following doesn't work:

void Foo(T)(T t) { ... }
void Foo(T)(T* t) { /* different implementation for this specialisation */ }
...
int x;
Foo(x);
Foo(&x); // ambiguous

and using template parameter specialisation, IFTI breaks.
 
  2. Will there be a way to support user-defined specialisations, for 
 instance once which don't depend on the inheritance hierarchy?

I don't know what that means - interfaces are already user-defined.

They are, but they only allow you to stipulate requirements on the type's place in the inheritance hierarchy. Two things that inheritance doesn't cover are structural conformance and complicated predicates. Structural conformance is clearly important simply because templates make it possible and it avoids the overheads of inheriting from an interface. This is what C++ Concepts have on D interface specialisation. As to complicated predicates, I refer to the common idiom in D templates which looks like the following:

template Foo(T)
{
    static assert(SomeComplicatedRequirement!(T), "T doesn't meet condition");
    ... // implementation
}

(SomeComplicatedRequirement is something inexpressible with the inheritance system; something like "a static array with a size that is a multiple of 1KB".) Some people have suggested (Don Clugston, from memory) that failing the static assert should cause the compiler to try another template overload. I thought this would be easier if you allowed custom specialisations on templates. This would allow the above idiom to turn into something like

template Foo(T :: SomeComplicatedRequirement) { ... }

(The rest is just how I think it should work.) The user-defined specialisation would be an alias which must define two templates which can answer two questions:

 - does a given type meet the requirements of this specialisation?
 - is this specialisation a superset or subset of this other specialisation, or can't you tell? (giving the partial ordering rules)

This allows user-defined predicates to fit in neatly with partial ordering of templates.

-- Reiner
Aug 24 2007
next sibling parent Oskar Linde <oskar.lindeREM OVEgmail.com> writes:
Reiner Pope wrote:
 Walter Bright wrote:
 Reiner Pope wrote:
  1. Will specialisation be "fixed" to work with IFTI?

You can simply specialize the parameter to the function.

I'm not sure what you mean.

Neither am I... [snip]
 Sure. But the following doesn't work:
 
 void Foo(T)(T t) { ... }
 void Foo(T)(T* t) { /* different implementation for this specialisation 
  */ }
 ...
 int x;
 Foo(x);
 Foo(&x); // ambiguous
 
 and using template parameter specialisation, IFTI breaks.

This is the workaround I've been using:

void Foo_(T: T*)(T* a) { writefln("ptr"); }
void Foo_(T)(T a) { writefln("non-ptr"); }

// dispatcher
void Foo(T)(T x) { Foo_!(T)(x); }

void main()
{
    int x;
    Foo(x);
    Foo(&x);
}
  2. Will there be a way to support user-defined specialisations, for 
 instance once which don't depend on the inheritance hierarchy?

I don't know what that means - interfaces are already user-defined.

They are, but they only allow you to stipulate requirements on the type's place in the inheritance hierarchy. Two things that inheritance doesn't cover are structural conformance and complicated predicates. Structural conformance is clearly important simply because templates make it possible and it avoids the overheads of inheriting from an interface. This is what C++ Concepts have on D interface specialisation. As to complicated predicates, I refer to the common idiom in D templates which looks like the following:

template Foo(T)
{
    static assert(SomeComplicatedRequirement!(T), "T doesn't meet condition");
    ... // implementation
}

(SomeComplicatedRequirement is something inexpressible with the inheritance system; something like "a static array with a size that is a multiple of 1KB".) Some people have suggested (Don Clugston, from memory) that failing the static assert should cause the compiler to try another template overload. I thought this would be easier if you allowed custom specialisations on templates. This would allow the above idiom to turn into something like

template Foo(T :: SomeComplicatedRequirement) { ... }

 The user-defined specialisation would be an alias which must define two 
 templates which can answer the two questions:
 
  - does a given type meet the requirements of this specialisation?
  - is this specialisation a superset or subset of this other 
 specialisation, or can't you tell? (giving the partial ordering rules)
 
 This allows user-defined predicates to fit in neatly with partial 
 ordering of templates.

My suggestion has been the following:

template Foo(T : <compile time expression yielding boolean value>)

where the expression may depend on T. E.g.:

template Foo(T: RandomIndexableContainer!(T)) { ... }

template RandomIndexableContainer(T)
{
    const RandomIndexableContainer = HasMember!(T, "ValueType")
                                  && HasMember!(T, "length")
                                  && HasMember!(T, "opIndex", int);
}

Even something like this should be possible:

struct RandomIndexableContainerConcept {...}

template Foo(T: Implements!(T, RandomIndexableContainerConcept)) { }

or something. This suggestion lacks the partial ordering of specializations, but those could probably be imposed on a case-by-case basis by nesting the conditions.

-- Oskar
Aug 24 2007
prev sibling parent Walter Bright <newshound1 digitalmars.com> writes:
Reiner Pope wrote:
 Perhaps you mean that you can write
 
 void Foo(T)(T* t) { ... }
 ...
 int x;
 Foo(&x);
 
 Sure. But the following doesn't work:
 
 void Foo(T)(T t) { ... }
 void Foo(T)(T* t) { /* different implementation for this specialisation 
  */ }
 ...
 int x;
 Foo(x);
 Foo(&x); // ambiguous
 
 and using template parameter specialisation, IFTI breaks.

You can write the templates as:

void Foo(T)(T t) { ... }
void Foo(T, dummy = void)(T* t) { /* different implementation for this specialisation */ }

Not so pretty, but it works.
 As to complicated predicates, I refer to the common idiom in D templates 
 which looks like the following:

Sean Kelly had a solution for that of the form:
 More often, I use an additional value parameter to specialize against:
 
 template Foo(T, bool isValid : true = PassesSomeTest!(T)) {}

Aug 26 2007
prev sibling parent reply eao197 <eao197 intervale.ru> writes:
On Thu, 23 Aug 2007 10:14:39 +0400, Walter Bright  
<newshound1 digitalmars.com> wrote:

 eao197 wrote:
 On Mon, 20 Aug 2007 23:26:33 +0400, Robert Fraser
 You seem to forget that D is evolving, too. C++ might get a lot of the  
 cool D features (albiet with ugly syntax), but by that time, D might  
 have superpowers incomprehensible to the C++ mind.

problem. I can't start using D in my work regularly because D and Tango are not stable enough. I can't start teaching students D because D 1.0 is obsolete and D 2.0 is not finished yet.

I don't understand this. You could as well say that C++98 is obsolete and C++0x is not finished yet.

AFAIK, C++0x doesn't break compatibility with C++98. So if I teach students C++98 now, they could use C++0x. Moreover, they could use all their C++98 code in C++0x. Now I see D 2.0 as a very different language from D 1.0.
 To outperform C++ in 2009-2010, D must have full strength now and must  
 be stable for some years to prove that strength in some killer  
 applications.

C++0x's new features are essentially all present in D 1.0.

Yes, but C++ doesn't require programmers to change their language, tools and libraries. Such a change requires a lot of time and effort -- effort that could be applied to current projects instead of switching to D. But if D could offer something else, something completely missing from C++0x (like non-null references/pointers, some kind of functional programming (pattern matching) and so on), then such a switch would be much more attractive. I know that you work very hard on D, but D 1.0 took almost 7 years. D 2.0 started in 2007, so could the final D 2.0 be in 2014? -- Regards, Yauheni Akhotnikau
Aug 23 2007
next sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
eao197 wrote:
 AFAIK, C++0x doesn't break compatibility with C++98. So if I teach 
 students C++98 now, they could use C++0x. Moreover, they could use all 
 their C++98 code in C++0x.

It's not a perfect superset, but the breakage is very small.
 Now I see D 2.0 as very different language from D 1.0.

There is more breakage from 1.0 to 2.0, but the changes required are straightforward to find and correct.
 To outperform C++ in 2009-2010 D must have full strength now and must 
 be stable during some years to proof that strength in some killer 
 applications.

C++0x's new features are essentially all present in D 1.0.

Yes, but C++ doesn't require programmers to change their language, tools and libraries. Such a change requires a lot of time and effort -- effort that could be applied to current projects instead of switching to D. But if D could offer something else, something completely missing from C++0x (like non-null references/pointers, some kind of functional programming (pattern matching) and so on), then such a switch would be much more attractive.

D 1.0 provides a lot of things completely missing in C++0x:

1) unit tests
2) documentation generation
3) modules
4) string mixins
5) template string & floating point parameters
6) compile time function execution
7) contract programming
8) nested functions
9) inner classes
10) delegates
11) scope statement
12) try-finally statement
13) static if
14) exported templates that are implementable
15) compilation speeds that are an order of magnitude faster
16) unambiguous template syntax
17) easy creation of tools that need to parse D code
18) synchronized functions
19) template metaprogramming that can be done by mortals
20) comprehensive support for array slicing
21) inline assembler
22) no crazy quilt dependent/non-dependent 2 level lookup rules that major compilers still get wrong and for which I still regularly get 'bug' reports because DMC++ does it according to the Standard
23) standard I/O that runs several times faster
24) portable sizes for types
25) guaranteed initialization
26) out function parameters
27) imaginary types
28) forward referencing of declarations
 I know that you work very hard on D, but D 1.0 took almost 7 years. D 
 2.0 started in 2007, so final D 2.0 could be in 2014?

Even if it does take that long, D 1.0 is still far ahead, and is available now. To see how much more productive D is, compare Kirk McDonald's amazing PyD http://pyd.dsource.org/dconf2007/presentation.html with Boost Python. To see what D can do that C++ can't touch, see Don Clugston's incredible optimal code generator at http://s3.amazonaws.com/dconf2007/Don.ppt
Aug 25 2007
parent eao197 <eao197 intervale.ru> writes:
First of all -- thanks for your patience!

On Sun, 26 Aug 2007 10:35:47 +0400, Walter Bright  
<newshound1 digitalmars.com> wrote:

 Now I see D 2.0 as a very different language from D 1.0.

There is more breakage from 1.0 to 2.0, but the changes required are straightforward to find and correct.

Yes, but I mean changes not only in syntax but also in program design. See yet another comment on that below.
 C++0x's new features are essentially all present in D 1.0.

tools and libraries. Such a change requires a lot of time and effort. That effort could be applied to current projects instead of switching to D. But if D could offer something else, something completely missing from C++0x (like non-null references/pointers, some kind of functional programming (pattern matching) and so on), then switching would be much more attractive.

D 1.0 provides a lot of things completely missing in C++0x:

1) unit tests
2) documentation generation
3) modules
4) string mixins
5) template string & floating point parameters
6) compile time function execution
7) contract programming
8) nested functions
9) inner classes
10) delegates
11) scope statement
12) try-finally statement
13) static if
14) exported templates that are implementable
15) compilation speeds that are an order of magnitude faster
16) unambiguous template syntax
17) easy creation of tools that need to parse D code
18) synchronized functions
19) template metaprogramming that can be done by mortals
20) comprehensive support for array slicing
21) inline assembler
22) no crazy quilt dependent/non-dependent 2 level lookup rules that major compilers still get wrong and for which I still regularly get 'bug' reports because DMC++ does it according to the Standard
23) standard I/O that runs several times faster
24) portable sizes for types
25) guaranteed initialization
26) out function parameters
27) imaginary types
28) forward referencing of declarations

In November 2006, in a Russian developers' forum, I noted [1] the following advantages of D:

1) fixed and portable data type sizes (byte, short, ...)
2) type properties (like .min, .max, ...)
3) all variables and members have default init values
4) local variables can't be defined without initial values
5) type inference in 'auto' declarations and in foreach
6) unified type casting with 'cast'
7) strict 'typedef' and relaxed 'alias'
8) arrays have a 'length' property and slicing operations
9) an exception in switch if there is no appropriate 'case'
10) string values in 'case'
11) static constructors and destructors for classes/modules
12) class invariants
13) unit tests
14) static assert
15) Error as the root of all exception classes
16) scope constructs
17) nested classes, structs and functions
18) there are no macros; all symbols mean exactly what they say
19) typesafe variadic functions
20) floats and strings as template parameters
21) template parameter specialization

There are a lot of intersections in our lists ;)
 I know that you work very hard on D, but D 1.0 took almost 7 years. D  
 2.0 started in 2007, so final D 2.0 could be in 2014?

Even if it does take that long, D 1.0 is still far ahead, and is available now.

As I can see from your D conference presentation, D 2.0 is at the beginning of a long road. I've seen from your presentation what D will provide as an ultimate answer to C++ and some other languages. As for me, D 2.0 is a descendant of D (almost as D is a descendant of C++). So it is better to think that now we have the modern language D 1.0, and in time we will have the better language D 2.0 (maybe it is better to choose a new name for D 2.0, something like D-Bright ;) ). And now the key factor in making D successful is creating D 1.0 tools, libraries, docs and applications, and showing how D 1.0 outperforms C++ and others. If we do this, then D 2.0 will arrive on prepared ground. So it is time for pragmatists to focus on D 1.0 and let language enthusiasts play with D 2.0 prototypes. [1] http://www.rsdn.ru/forum/message/2222569.aspx -- Regards, Yauheni Akhotnikau
Aug 26 2007
prev sibling parent reply Don Clugston <dac nospam.com.au> writes:
eao197 wrote:
 I know that you work very hard on D, but D 1.0 took almost 7 years. D 
 2.0 started in 2007, so final D 2.0 could be in 2014?

It's very amusing to read how Walter described D 1.0, seven years ago. It wasn't going to have templates, for example.
Aug 29 2007
parent reply eao197 <eao197 intervale.ru> writes:
On Wed, 29 Aug 2007 15:56:29 +0400, Don Clugston <dac nospam.com.au> wrote:

 eao197 wrote:
 I know that you work very hard on D, but D 1.0 took almost 7 years. D  
 2.0 started in 2007, so final D 2.0 could be in 2014?

It's very amusing to read how Walter described D 1.0, seven years ago. It wasn't going to have templates, for example.

Unfortunately, I have been watching D's evolution since, perhaps, 2002 or 2003. It looks as if D has never been a stable language. -- Regards, Yauheni Akhotnikau
Aug 29 2007
parent reply Walter Bright <newshound1 digitalmars.com> writes:
eao197 wrote:
 Unfortunately, I have been watching D's evolution since, perhaps, 2002 or 2003. 
 It looks as if D has never been a stable language.

I don't know any language in wide use that is stable (i.e. not changing). A stable language is a dead language.
Aug 29 2007
next sibling parent kris <foo bar.com> writes:
Walter Bright wrote:
 eao197 wrote:
 Unfortunately, I have been watching D's evolution since, perhaps, 2002 or 2003. 
 It looks as if D has never been a stable language.

I don't know any language in wide use that is stable (i.e. not changing). A stable language is a dead language.

I guess there's "stable" and there's "stable"? The history of Simula67 illustrates what can happen when a language is nailed to the wall :)
Aug 29 2007
prev sibling parent reply eao197 <eao197 intervale.ru> writes:
On Wed, 29 Aug 2007 23:15:26 +0400, Walter Bright  
<newshound1 digitalmars.com> wrote:

 eao197 wrote:
 Unfortunately, I have been watching D's evolution since, perhaps, 2002 or 2003.  
 It looks as if D has never been a stable language.

I don't know any language in wide use that is stable (i.e. not changing). A stable language is a dead language.

I mean changes in languages which break compatibility with previous code. AFAIK, successful languages have always had periods (usually 2-3 years, sometimes more) when there were no additions to the language and a new major version didn't break existing code (for example: Java, C#, Ruby, Python, even C++ sometimes). -- Regards, Yauheni Akhotnikau
Aug 29 2007
next sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
eao197 wrote:
 On Wed, 29 Aug 2007 23:15:26 +0400, Walter Bright 
 <newshound1 digitalmars.com> wrote:
 
 eao197 wrote:
 Unfortunately, I have been watching D's evolution since, perhaps, 2002 or 
 2003. It looks as if D has never been a stable language.

I don't know any language in wide use that is stable (i.e. not changing). A stable language is a dead language.

I mean changes in languages which break compatibility with previous code. AFAIK, successful languages always had some periods (usually 2-3 years, sometimes more) when there were no additions to language and new major version didn't break existing code (for example: Java, C#, Ruby, Python, even C++ sometimes).

C++ has been around for 20+ years now. I'll grant that for maybe 2 of those years (10%) it was stable.

C++ has the rather dubious distinction that it is very hard to get two different compilers to compile non-trivial code without some sort of code customization. As evidence of that, just browse the STL and Boost sources. While the C++ standard has been stable for a couple of years (C++98, C++03), its being nearly impossible to implement has meant that the implementations have been unstable. For example, name lookup rules vary significantly *today* even among the major compilers. I regularly get bug reports that DMC++ does it wrong, even though it actually does it according to the Standard, and it's the other compilers that get it wrong.

On the other hand, when C++ has been stable, it rapidly lost ground relative to other languages. The recent about-face in rationale and the flurry of core language additions to C++0x is evidence of that.

I haven't programmed long term in the other languages, so I don't have a good basis for commenting on their stability. I have been programming in C++ since 1987. It's pretty normal to take a C++ project from the past and have to dink around with it to get it to compile with a modern compiler. The odds of taking a few thousand lines of C++ pulled off the web that's set up to compile with C++ Brand X and getting it to compile with C++ Brand Y without changes are about 0%.
Aug 30 2007
parent Leandro Lucarella <llucax gmail.com> writes:
Walter Bright, el 30 de agosto a las 00:07 me escribiste:
dead language.

languages always had some periods (usually 2-3 years, sometimes more) when there were no additions to language and new major version didn't break existing code (for example: Java, C#, Ruby, Python, even C++ sometimes).

C++ has been around for 20+ years now. I'll grant that for maybe 2 of those years (10%) it was stable. C++ has the rather dubious distinction of it being very hard to get two different compilers to compile non-trivial code without some sort of code customization needed. As evidence of that, just browse the STL and Boost sources. While the C++ standard has been stable for a couple years (C++98, C++03), it being nearly impossible to implement has meant the implementations have been unstable. For example, name lookup rules vary significantly *today* even among the major compilers. I regularly get bug reports that DMC++ does it wrong, even though it actually does it according to the Standard, and it's other compilers that get it wrong. On the other hand, when C++ has been stable, it rapidly lost ground relative to other languages. The recent about face in rationale and flurry of core language additions to C++0x is evidence of that. I haven't programmed long term in the other languages, so don't have a good basis for commenting on their stability.

Forget about C++ for a second. Try Python. It is a stable, or at least *predictable*, language. Its evolution is well structured, so you know you will have no surprises, and you know the language will evolve. Python is *really* community driven (besides the BDFL[1] ;). It has a formal proposal system for making changes to the language: PEPs[2]. When a PEP is approved, it's included in the next version and can be used *optionally* (if it could break backward compatibility). For example, you can already use the "future" behavior of division:
 >>> 10/3
 3
 >>> from __future__ import division
 >>> 10/3
 3.3333333333333335
In the next Python version, the new feature is included without needing to import __future__, and the old behavior is deprecated. (The same goes for libraries: when something changes, in the first version you can opt in to the new feature; in the second, the new feature is the default but you can fall back to the old behavior; and in the third, the old behavior is completely removed.) [1] http://en.wikipedia.org/wiki/BDFL [2] http://www.python.org/dev/peps/
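That staged deprecation lifecycle can be sketched with the standard warnings module. This is only a sketch of the pattern described above; split_words and its legacy flag are invented names, not any real library's API:

```python
import warnings

# Stage two of the lifecycle: the new behavior is the default, but
# callers can still ask for the old one -- and get a warning when they do.
def split_words(text, legacy=False):
    if legacy:
        warnings.warn(
            "legacy=True is deprecated and will be removed in the next "
            "major version; the new default splits on any whitespace",
            DeprecationWarning,
            stacklevel=2,
        )
        return text.split(" ")  # old behavior: split on single spaces only
    return text.split()         # new default: split on any whitespace run

print(split_words("a  b"))               # new default: ['a', 'b']
with warnings.catch_warnings():
    warnings.simplefilter("ignore", DeprecationWarning)
    print(split_words("a  b", legacy=True))  # old behavior: ['a', '', 'b']
```

In a later major version the legacy branch would simply be deleted, completing the three-step cycle.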
 I have been programming in C++ since 1987. It's pretty normal to take a C++ project from the past and have to dink around with it to get it to compile with a modern compiler. The odds of taking a few thousand lines of C++ pulled off the web that's set up to compile with C++ Brand X are about 0% for getting it to compile with C++ Brand Y without changes.

You are talking about 20 years. D evolves on a daily basis, and worse, this evolution follows no formal procedure. Forking D 2.0 was a huge improvement in this matter, but I think there is more work to be done before D can succeed as a long-term language (or at least be trusted).

Another good step in this direction would be to maintain phobos (or whatever the standard library turns out to be :P) as an open source project. You could create a repository (please use git! :) so people can track its development and send patches more easily. Same for the D frontend. It's almost impossible for someone who is used to collaborating on open source projects to do it with D. And that's a shame... -- Leandro Lucarella (luca) | Blog colectivo: http://www.mazziblog.com.ar/blog/ .------------------------------------------------------------------------, \ GPG: 5F5A8D05 // F8CD F9A7 BF00 5431 4145 104C 949E BFB6 5F5A 8D05 / '--------------------------------------------------------------------' Pa' ella cociné, pa' ella lavé, pa' ella soñe Paella completa, $2,50 Pero, la luz mala me tira, y yo? yo soy ligero pa'l trote La luz buena, está en el monte, allá voy, al horizonte
Aug 30 2007
prev sibling parent reply 0ffh <spam frankhirsch.net> writes:
eao197 wrote:
 I mean changes in languages which break compatibility with previous 
 code. AFAIK, successful languages always had some periods (usually 2-3 
 years, sometimes more) when there were no additions to language and new 
 major version didn't break existing code (for example: Java, C#, Ruby, 
 Python, even C++ sometimes).

I rather think that a "new major version" of any language that "doesn't break existing code" could hardly justify its new major version number. A complete rewrite of the compiler, e.g., would justify a major new compiler version, but not even a teeny-minor new language version. And D /does have/ a stable language version, D1. Regards, Frank
Aug 30 2007
parent reply eao197 <eao197 intervale.ru> writes:
On Thu, 30 Aug 2007 15:44:25 +0400, 0ffh <spam frankhirsch.net> wrote:

 eao197 wrote:
  I mean changes in languages which break compatibility with previous
  code. AFAIK, successful languages always had some periods (usually 2-3
  years, sometimes more) when there were no additions to language and new
  major version didn't break existing code (for example: Java, C#, Ruby,
  Python, even C++ sometimes).

 I rather think that a "new major version" of any language that "doesn't
 break existing code" could hardly justify its new major version number.
 A complete rewrite of the compiler, e.g., would justify a major new
 compiler version, but not even a teeny-minor new language version.

Java 1.5 (with generics) and C# 2.0 were major versions, but didn't break old code.

 And D /does have/ a stable language version, D1.

http://d.puremagic.com/issues/show_bug.cgi?id=302 -- a very strange bug for a _stable_ version. Try to imagine _stable_ Eiffel with broken DesignByContract support :-/ -- Regards, Yauheni Akhotnikau
Aug 30 2007
next sibling parent Downs <default_357-line yahoo.de> writes:

eao197 wrote:
 Java 1.5 (with generics) and C# 2.0 were major versions, but didn't
 break old code.
 
 And D /does have/ a stable language version, D1.

http://d.puremagic.com/issues/show_bug.cgi?id=302 -- a very strange bug for a _stable_ version. Try to imagine _stable_ Eiffel with broken DesignByContract support :-/

With every feature that's documented but not implemented yet (GC, I'm looking at you), or supposed to be working but broken in strange ways, I can't help thinking D isn't nearly 1.0 yet, let alone 2.0.
Aug 31 2007
prev sibling next sibling parent reply Don Clugston <dac nospam.com.au> writes:
eao197 wrote:
 On Thu, 30 Aug 2007 15:44:25 +0400, 0ffh <spam frankhirsch.net> wrote:
 
 eao197 wrote:
 I mean changes in languages which break compatibility with previous 
 code. AFAIK, successful languages always had some periods (usually 
 2-3 years, sometimes more) when there were no additions to language 
 and new major version didn't break existing code (for example: Java, 
 C#, Ruby, Python, even C++ sometimes).

I rather think that a "new major version" of any language that "doesn't break existing code" could hardly justify its new major version number. A complete rewrite of the compiler, e.g., would justify a major new compiler version, but not even a teeny-minor new language version.

Java 1.5 (with generics) and C# 2.0 were major versions, but didn't break old code.

Actually, I think new features that make old code obsolete (even if it still compiles and works perfectly) are even more of a problem -- breaking "mental compatibility". I don't think Java and C# have avoided this. It's certainly been a problem for C++ and D. If you get 500 compile errors you need to fix, that's annoying and tedious. But when your code uses a technique that still works, but isn't supported by recent libraries, you're locked into the past forever.
Sep 04 2007
parent eao197 <eao197 intervale.ru> writes:
On Tue, 04 Sep 2007 12:34:14 +0400, Don Clugston <dac nospam.com.au> wrote:

 If you get 500 compile errors you need to fix, that's annoying and  
 tedious.

If you get 500 compile errors in an old 10KLOC project, it's annoying. If you get 500 compile errors in each of tens of legacy projects, that is much more than simply 'annoying and tedious'.
 But when your code uses a technique that still works, but isn't  
 supported by recent libraries, you're locked into the past forever.

There is a good example in the C++ world: the ACE library. It was started a long time ago, it has been ported to various systems, and it has outlived many changes in the language and suffered through different compilers. Because of that, ACE uses C++ almost as "C++ with classes", even without exceptions. In comparison with modern (over)designed C++ libraries (like Crypto++ or parts of Boost), ACE is an ugly old monster. But it has no real competitors in C++, and it allows me to write complex software more easily than if I tried to write parts of ACE in modern C++ myself. So I don't think that the old ACE library locks me in the past (even if I can't use STL and exceptions with ACE). IMHO, the real power of any language is its code base -- all the projects which have been developed using the language. And any actions which discriminate against legacy code decrease the language's power. -- Regards, Yauheni Akhotnikau
Sep 04 2007
prev sibling next sibling parent Jari-Matti Mäkelä <jmjmak utu.fi.invalid> writes:
eao197 wrote:

 On Thu, 30 Aug 2007 15:44:25 +0400, 0ffh <spam frankhirsch.net> wrote:
 
 eao197 wrote:
 I mean changes in languages which break compatibility with previous
 code. AFAIK, successful languages always had some periods (usually 2-3
 years, sometimes more) when there were no additions to language and new
 major version didn't break existing code (for example: Java, C#, Ruby,
 Python, even C++ sometimes).

I rather think, that a "new major version" of any language that "doesn't break existing code" could hardly justify it's new major version number. A complete rewrite of the compiler, e.g., would justify a majer new compiler version, but not even a teeny-minor new language version.

Java 1.5 (with generics) and C# 2.0 were major versions, but didn't break old code.

Oh, btw, Java 1.5 did break old code. I used to use Gentoo during the transition phase so I had some experience compiling stuff. :) There were at least a couple of commonly used libraries and programs that broke. One minor problem was the new 'enum' keyword. Of course at least Sun Java compiler allows compiling in 1.4 mode too. I think Gentoo has a common practice nowadays to compile each Java program using the oldest compatible compiler profile for best compatibility. IIRC there were also some incompatible ABI changes because of the generics.
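For comparison, Python itself has broken old code the same way: 'with' became a reserved word in Python 2.6 (opt-in via __future__ in 2.5), so pre-2.6 code that used it as an ordinary identifier stopped parsing, just like Java 1.4 code that used 'enum'. A small sketch of that failure mode, checked with compile():

```python
# This statement was legal before Python 2.6, when 'with' was just a name.
# On any modern interpreter it no longer parses, because 'with' is a keyword.
old_code = "with = 5"

try:
    compile(old_code, "<pre-2.6 code>", "exec")
    print("still compiles")
except SyntaxError:
    print("no longer compiles: 'with' is a reserved word")
```

The point being the same as with Java's 'enum': a new keyword is a backward-incompatible change, however small.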
Sep 07 2007
prev sibling parent 0ffh <spam frankhirsch.net> writes:
eao197 wrote:
 On Thu, 30 Aug 2007 15:44:25 +0400, 0ffh <spam frankhirsch.net> wrote:
 I rather think, that a "new major version" of any language that "doesn't
 break existing code" could hardly justify it's new major version number.
 A complete rewrite of the compiler, e.g., would justify a majer new
 compiler version, but not even a teeny-minor new language version.

break old code.

Well, yeah, maybe (apart from what Jari-Matti said about Java 1.5 breaking code). But anyway, adding something to a language without breaking old code only works so often. C++ tried to add to C without breaking code (it still does, but it tried) and you can see what came of it. New language features tend to need new syntax. If you want to remain compatible, you have to find a way to introduce that new syntax without breaking the old one. This is usually quite hard to achieve without making the new syntax either cumbersome or fragile and hard to grok. Regards, Frank
Sep 07 2007