
digitalmars.D - Stroustrup's talk on C++0x

reply Bill Baxter <dnewsgroup billbaxter.com> writes:
A lot of you probably saw Bjarne Stroustrup's talk on C++0x on the web.
If not here's the link:
   http://csclub.uwaterloo.ca/media/C++0x%20-%20An%20Overview.html

I recommend hitting pause on the video and then go get some lunch while 
it buffers up enough that you won't get hiccups.  Or if you can figure 
out how to get those newfangled torrent thingys to work, that's probably 
a good option too.

--bb
Aug 19 2007
next sibling parent Lutger <lutger.blijdestijn gmail.com> writes:
Bill Baxter wrote:
 A lot of you probably saw Bjarne Stroustrup's talk on C++0x on the web.
 If not here's the link:
   http://csclub.uwaterloo.ca/media/C++0x%20-%20An%20Overview.html
Thanks for the link, missed it.
 I recommend hitting pause on the video and then go get some lunch while 
 it buffers up enough that you won't get hiccups.  Or if you can figure 
 out how to get those newfangled torrent thingys to work, that's probably 
 a good option too.
 
 --bb
With Opera you can just click on it and it works, if you don't want to figure things out.
Aug 19 2007
prev sibling next sibling parent reply "Saaa" <empty needmail.com> writes:
D programming people who don't understand torrents...

btw. pausing wasn't necessary here

 I recommend hitting pause on the video and then go get some lunch while it 
 buffers up enough that you won't get hiccups.  Or if you can figure out 
 how to get those newfangled torrent thingys to work, that's probably a 
 good option too.

 --bb 
Aug 19 2007
parent reply Bill Baxter <dnewsgroup billbaxter.com> writes:
Saaa wrote:
 D programming people who don't understand torrents...
:-). I think it's a firewall issue. I read the troubleshooting info that comes with a couple of bittorrent clients, and it all points to firewalls as the problem. One bittorrent client actually managed to cause all networking on my machine to shut down whenever I tried to turn it on. There's probably some way to get it working but... no thanks. Wake me up when there's a client that works as seamlessly as Skype. And no, I'm not going to install a whole browser just to try out its bittorrent client.
 btw. pausing wasn't necessary here
Ok. Well, it's probably just slow over here because I've got to pull it over the trans-pacific pipes. --bb
Aug 19 2007
parent reply "Saaa" <empty needmail.com> writes:
 D programming people who don't understand torrents...
:-). I think it's a firewall issue. I read the troubleshooting infos that come with a couple of bittorrent clients, and they all point to firewalls as the problem. One bittorrent client actually managed to cause all networking on my machine to shut down whenever I tried to turn it on. There's probably some way to get it working but... no thanks. Wake me up when there's a client that works as seamlessly as Skype. And no, I'm not going to install a whole browser just to try out its bittorrent client.
I somehow doubt this will ever happen :D
http://www.heise-security.co.uk/articles/82481

I can only recommend utorrent, and tell you it's probably not your software but your hardware firewall which needs tinkering. I had to forward a port, but if I understand it correctly, newer routers with upnp will work without any hassle.
Aug 19 2007
next sibling parent "Jb" <jb nowhere.com> writes:
"Saaa" <empty needmail.com> wrote in message 
news:fab0tc$1b0k$1 digitalmars.com...
 D programming people who don't understand torrents...
:-). I think it's a firewall issue. I read the troubleshooting infos that come with a couple of bittorrent clients, and they all point to firewalls as the problem. One bittorrent client actually managed to cause all networking on my machine to shut down whenever I tried to turn it on. There's probably some way to get it working but... no thanks. Wake me up when there's a client that works as seamlessly as Skype. And no, I'm not going to install a whole browser just to try out its bittorrent client.
I somehow doubt this will ever happen :D http://www.heise-security.co.uk/articles/82481 I can only recommend utorrent and tell you its probably not your software but hardware firewall which needs tinkling. I had to forward a port, but if I understand it correctly: newer routers with upnp will work without any hassle.
I second the uTorrent recommendation. By far the best client I've used.

Bill, if you do try it, open the 'Speed Guide' from the options menu; you can test whether the port is open / forwarded correctly from there.

jb
Aug 20 2007
prev sibling parent reply Chris Nicholson-Sauls <ibisbasenji gmail.com> writes:
Saaa wrote:
 D programming people who don't understand torrents...
:-). I think it's a firewall issue. I read the troubleshooting infos that come with a couple of bittorrent clients, and they all point to firewalls as the problem. One bittorrent client actually managed to cause all networking on my machine to shut down whenever I tried to turn it on. There's probably some way to get it working but... no thanks. Wake me up when there's a client that works as seamlessly as Skype. And no, I'm not going to install a whole browser just to try out its bittorrent client.
I somehow doubt this will ever happen :D http://www.heise-security.co.uk/articles/82481 I can only recommend utorrent and tell you its probably not your software but hardware firewall which needs tinkling. I had to forward a port, but if I understand it correctly: newer routers with upnp will work without any hassle.
Am I the only person who actually uses... BitTorrent, as my BitTorrent client? :) http://www.bittorrent.com/download I haven't had any issues with it, though that doesn't mean no one will. Azureus/2.x is good too... the new version is an abomination. (In My Humble Opinion) -- Chris Nicholson-Sauls
Aug 20 2007
parent reply Bill Baxter <dnewsgroup billbaxter.com> writes:
Chris Nicholson-Sauls wrote:
 Saaa wrote:
 D programming people who don't understand torrents...
:-). I think it's a firewall issue. I read the troubleshooting infos that come with a couple of bittorrent clients, and they all point to firewalls as the problem. One bittorrent client actually managed to cause all networking on my machine to shut down whenever I tried to turn it on. There's probably some way to get it working but... no thanks. Wake me up when there's a client that works as seamlessly as Skype. And no, I'm not going to install a whole browser just to try out its bittorrent client.
I somehow doubt this will ever happen :D http://www.heise-security.co.uk/articles/82481 I can only recommend utorrent and tell you its probably not your software but hardware firewall which needs tinkling. I had to forward a port, but if I understand it correctly: newer routers with upnp will work without any hassle.
Am I the only person who actually uses... BitTorrent, as my BitTorrent client? :) http://www.bittorrent.com/download I haven't had any issues with it, though that doesn't mean no one will. Azureus/2.x is good too... the new version is an abomination. (In My Humble Opinion)
Well, the troubleshooting links pointed to by utorrent were spot-on. It takes you right to a place that gives step-by-step instructions for setting up a huge number of different broadband routers. The others I tried just said vague things about needing to open up a port without suggesting how -- or suggested I talk to my "system administrator". That said, now that, thanks to utorrent, I've got the hole punched through my firewall, probably any client will work fine for me. --bb
Aug 21 2007
parent Regan Heath <regan netmail.co.nz> writes:
Bill Baxter wrote:
 Chris Nicholson-Sauls wrote:
 Saaa wrote:
 D programming people who don't understand torrents...
:-). I think it's a firewall issue. I read the troubleshooting infos that come with a couple of bittorrent clients, and they all point to firewalls as the problem. One bittorrent client actually managed to cause all networking on my machine to shut down whenever I tried to turn it on. There's probably some way to get it working but... no thanks. Wake me up when there's a client that works as seamlessly as Skype. And no, I'm not going to install a whole browser just to try out its bittorrent client.
I somehow doubt this will ever happen :D http://www.heise-security.co.uk/articles/82481 I can only recommend utorrent and tell you its probably not your software but hardware firewall which needs tinkling. I had to forward a port, but if I understand it correctly: newer routers with upnp will work without any hassle.
Am I the only person who actually uses... BitTorrent, as my BitTorrent client? :) http://www.bittorrent.com/download I haven't had any issues with it, though that doesn't mean no one will. Azureus/2.x is good too... the new version is an abomination. (In My Humble Opinion)
Well the troubleshooting links pointed to by utorrent were spot-on. It takes you right to a place that can give you step-by-step instructions about how to set up a huge number of different broadband routers. The others I tried just said vague things about needing to open up a port without suggesting how -- or suggesting I talk to my "system administratior". That said, now that thanks to utorrent I've got the hole punched through my firewall, probably any client will work fine for me.
uTorrent is my favourite client: it is small, fast, and fully featured, but set up in such a way as to be simple enough to use if you're new at this sort of thing.

Torrents don't require you to have an open inbound port, but without one you cannot receive connections from other peers. You can still connect to other peers, unless they too have no open ports, in which case you cannot form any connection with them, and as a result you may get lower speeds.

Just the other day I downloaded OpenOffice using a torrent; the download was fast, probably faster than getting it directly from any single website.

Regan
Aug 21 2007
prev sibling next sibling parent reply eao197 <eao197 intervale.ru> writes:
On Sun, 19 Aug 2007 23:36:07 +0400, Bill Baxter  
<dnewsgroup billbaxter.com> wrote:

 A lot of you probably saw Bjarne Stroustrup's talk on C++0x on the web.
 If not here's the link:
    http://csclub.uwaterloo.ca/media/C++0x%20-%20An%20Overview.html
BTW, there is a C++0x overview in Wikipedia:
http://en.wikipedia.org/wiki/C%2B%2B0x

It would be interesting to know which advantages D (2.0? 3.0? 4.0?) will have over C++0x. Maybe only high-speed compilation and GC?

-- Regards, Yauheni Akhotnikau
Aug 19 2007
next sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
eao197 wrote:
 BTW, there is a C++0x overview in Wikipedia: 
 http://en.wikipedia.org/wiki/C%2B%2B0x
 
 It is iteresting to know which advantages will have D (2.0? 3.0? 4.0?) 
 over C++0x? May be only high speed compilation and GC.
Looks like C++ is adding D features thick & fast!
Aug 19 2007
next sibling parent reply Bill Baxter <dnewsgroup billbaxter.com> writes:
Walter Bright wrote:
 eao197 wrote:
 BTW, there is a C++0x overview in Wikipedia: 
 http://en.wikipedia.org/wiki/C%2B%2B0x

 It is iteresting to know which advantages will have D (2.0? 3.0? 4.0?) 
 over C++0x? May be only high speed compilation and GC.
Looks like C++ is adding D features thick & fast!
Yeah, from the way Stroustrup was talking I really wouldn't be surprised if the spec isn't finished by year-end 2009. So, Walter, are you planning to update DMC when the spec is finished? --bb
Aug 19 2007
parent reply Sean Kelly <sean f4.ca> writes:
Bill Baxter wrote:
 Walter Bright wrote:
 eao197 wrote:
 BTW, there is a C++0x overview in Wikipedia: 
 http://en.wikipedia.org/wiki/C%2B%2B0x

 It is iteresting to know which advantages will have D (2.0? 3.0? 
 4.0?) over C++0x? May be only high speed compilation and GC.
Looks like C++ is adding D features thick & fast!
Yeh, from the way Stroustrup was talking I really wouldn't be surprised if they haven't finished the spec by year-end 2009.
It actually has to be finished by year end 2008, and they have committed to getting the standard done on time even if it means dropping features. In fact, last I heard, a few features were indeed being dropped for lack of time, but I can't recall what they were. I haven't been keeping that close an eye on the C++ standardization process recently, aside from the new memory model and atomic features.

As for the C++0x additions themselves, if D did not exist I might be excited. As it is, I can only cringe at the syntax in some of those examples and hope things turn out better than I fear they will.

Sean
Aug 20 2007
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Sean Kelly wrote:
 It actually has to be finished by year end 2008, and they have committed 
 to getting the standard done on time even if it means dropping features. 
  In fact, last I heard, a few features were indeed being dropped for 
 lack of time, but I can't recall what they were.  I haven't been keeping 
 that close an eye on the C++ standardization process recently, aside 
 from the new memory model and atomic features.
C++0x started out with the stated purpose of just a few core language tweaks, and a bunch of new libraries. Sometime in the last couple of years, that was abandoned wholesale and a big raft of complex new features were proposed. I think it's the success of D that lit the fire.
Aug 20 2007
next sibling parent reply Bill Baxter <dnewsgroup billbaxter.com> writes:
Walter Bright wrote:
 Sean Kelly wrote:
 It actually has to be finished by year end 2008, and they have 
 committed to getting the standard done on time even if it means 
 dropping features.  In fact, last I heard, a few features were indeed 
 being dropped for lack of time, but I can't recall what they were.  I 
 haven't been keeping that close an eye on the C++ standardization 
 process recently, aside from the new memory model and atomic features.
C++0x started out with the stated purpose of just a few core language tweaks, and a bunch of new libraries. Sometime in the last couple of years, that was abandoned wholesale and a big raft of complex new features were proposed. I think it's the success of D that lit the fire.
It probably gave them a nudge, but on the other hand, as is abundantly clear here on this newsgroup, everybody has a favorite feature. So if you throw a bunch of engineers and language designers into a room, the natural tendency is towards trying to add everything and the kitchen sink. And the success of other languages (Python, and Ruby, too) undoubtedly influenced people's votes when it came time to decide whether it was more important to have feature X or get the revision out sooner.

It is pretty scary, though, to hear Stroustrup saying that the C++ text books will need to become thicker than they already are; his own book was already about 3x as big as K&R's original book on C.

The one feature (or lack thereof) that surprises me about C++0x is nested functions. They're one of my favorite things about D, but they don't seem to be a part of C++0x. There can't be any fundamental reason for it, since I've heard g++ supports them. Maybe lambdas will serve that purpose?

As for standards vs standards-compliant compilers, note that MS still hasn't made a C99 compiler, 8 years after the standard. And implementing *that* standard looks like an undergrad homework assignment compared to what compiler writers will have to go through for C++0x.

--bb
Aug 20 2007
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Bill Baxter wrote:
 Walter Bright wrote:
 I think it's the success of D that lit the fire.
It probably gave them a nudge,
More than that. The active people on the C++ committee are well aware of D. Many have attended my presentations on D, correspond with me about it, and lurk in this n.g. Most of them will deny the influence, however, so feel free to decide what to believe <g>.
 but on the other hand, as is abundantly 
 clear here on this newsgroup, everybody has a favorite feature.  So if 
 you throw a bunch of engineers and language designers into a room, the 
 natural tendency is towards trying to add everything and the kitchen 
 sink.
One thing the C++ committee is good about is that the features they have added *are* targeted at glaring shortcomings. They really are not throwing in the kitchen sink. How well those shortcomings are addressed, however, is another matter. For example, look at the C++ proposal for doing a very limited form of compile time function evaluation, then compare it with D's.
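Roughly, D's side of that comparison: an ordinary function is evaluated by the compiler wherever a constant is required (a minimal sketch; factorial is just an illustrative example, not from either proposal):

int factorial(int n)
{
    return n <= 1 ? 1 : n * factorial(n - 1);
}

// Evaluated entirely at compile time:
const int f10 = factorial(10);
static assert(f10 == 3628800);

void main()
{
    int[factorial(5)] buf;      // array length computed by the compiler
    assert(buf.length == 120);
}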

 Python, and Ruby, too) undoubtedly influenced people's votes when it 
 came time to decide whether it was more important to have feature X or 
 get the revision out sooner.
GC is a prime example of that; C++ could no longer dismiss it. (And Hans Boehm, who I admire a lot, did a spectacular job of dealing with every objection to adding GC.)
 It is pretty scary, though, to hear Stroustrup saying that the C++ text 
 books will need to become thicker than they already are, which was 
 already about 3x as big as K&R's original book on C.
There are two phases to learning C++:

1) learning the language
2) learning all the idioms and conventions used to avoid the shortcomings

(One example we've discussed here recently is the slicing problem.)
 The one feature (or lack thereof) that surprises me about C++0x is 
 nested functions.  They're one of my favorite things about D, but they 
 don't seem to be a part of C++0x.  There can't be any fundamental reason 
 for it, since I've heard g++ supports them.  Maybe lambdas will serve 
 that purpose?
I was surprised to see lambdas without nested functions.
 As for standards vs standards-compliant compilers, note that MS still 
 hasn't made a C99 compiler, 8 years after the standard.  And 
 implementing *that* standard looks like an undergrad homework assignment 
 compared to what compiler writers will have to go through for C++0x.
It took 5 years for a C++98 compliant compiler to emerge. Extrapolating to C++09, that would be 2014 to get features that existed in D years ago. I obviously gave up waiting for such features from C++ long ago.
Aug 22 2007
next sibling parent reply Bill Baxter <dnewsgroup billbaxter.com> writes:
Walter Bright wrote:
 Bill Baxter wrote:
 Walter Bright wrote:
 GC is a prime example of that; C++ could no longer dismiss it. (And Hans 
 Boehm, who I admire a lot, did a spectacular job of dealing with every 
 objection to adding GC.)
I decided to download his GC for C++ recently to give it a try. I was amazed to find that the documentation is really quite bad from a user point of view, and what little user doc there was, was mostly about the C interface. If you care about implementation, there's tons to read, but not if you're interested in actually *using* it.

I expected a rather more pleasant user experience given how long it's been around, how much I hear about it here and there, and how often I've heard C++ people say that you don't need GC in the language because you can just download Boehm's library.
 It took 5 years for a C++98 compliant compiler to emerge. Extrapolating 
 to C++09, that would be 2014 to get features that existed in D years 
 ago. I obviously gave up waiting for such features from C++ long ago.
Well, that's true, but when comparing the availability of C++09 vs D, you should perhaps be a little more forgiving, given that D isn't quite done either. Sure, some C++09 features are available in D now, but some are also available in g++ now, I believe. And there are some features slated for C++09 that aren't on the roadmap for D at all (like concepts and thread stuff), which might appear in some C++ compiler before they appear in D. Furthermore, I'm pretty sure some partially conforming C++98 compilers existed before the end of '98.

So what I'm trying to say with all this is that if you're a programmer who's willing to work with an incompatible language that has an ever-evolving spec, then you're probably also willing to use a bleeding edge C++ compiler that only partially supports the C++09 spec. So there may be less of a wait than 2014 for the sort of bleeding edgers who would be interested in D in the first place. But either way it's still infinitely more waiting than "download and use it right now" -- the current situation with D.

--bb
Aug 22 2007
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Bill Baxter wrote:
 Walter Bright wrote:
 It took 5 years for a C++98 compliant compiler to emerge. 
 Extrapolating to C++09, that would be 2014 to get features that 
 existed in D years ago. I obviously gave up waiting for such features 
 from C++ long ago.
Well, that's true, but when comparing availability C++09 vs D, you should perhaps be a little more forgiving, given that D isn't quite done either. Sure, some C++09 features are available in D now,
Nearly all of them are, and D has quite a bit that isn't even on the horizon for C++. I should draw up a chart...
 but some are 
 also available in g++ now, I believe.  And there are some features 
 slated for  C++ 09 that aren't on the roadmap for D at all (like 
 concepts
Concepts aren't a whole lot more than interface specialization, which is already supported in D.
 and thread stuff), which might appear in some C++ compiler 
 before they appear D.  Furthermore, I'm pretty sure some partially 
 conforming C++98 compilers existed before the end of 93,
Partial, sure, including mine <g>.
 so what I'm 
 trying to say with all this is that if you're a programmer who's willing 
 to work with an incompatible language that is has an ever-evolving spec, 
 then you're probably also willing to use a bleeding edge C++ compiler 
 that only partially supports the C++09 spec.  So there may be less of a 
 wait than 2014 for the sort of bleeding edgers who would be interested 
 in D in the first place.  But either way its still infinitely more 
 waiting than "download and use it right now" -- the current situation 
 with D.
Yes. And D 2.0 isn't standing still, either.
Aug 23 2007
next sibling parent reply Bill Baxter <dnewsgroup billbaxter.com> writes:
Walter Bright wrote:
 Bill Baxter wrote:
 but some are also available in g++ now, I believe.  And there are some 
 features slated for  C++ 09 that aren't on the roadmap for D at all 
 (like concepts
Concepts aren't a whole lot more than interface specialization, which is already supported in D.
I'm not sure what you mean by that, but the feature that I liked most about it is interface checking. So:

A) Being able to document someplace that if you want to use my KewlContainer you must implement the KewlIteratorConcept, which means, say, you support opPostInc() and opSlice() (for dereferencing as x[]), and then, once that is documented,

B) being able to say that my class implements that concept and have the compiler check that indeed it does.

I suppose there may be some way to do all that in current D, but I think defining and implementing concepts should be as easy as defining and implementing a run-time interface.

Duck typing is nice, but if you look at even a scripting language founded on the idea, like Python, you'll find that where people are putting together large systems, they're also creating and using tools like zope.interface to get back some of the benefits of type checking. At the end of the day, even with duck typing, there are some requirements I have to fulfill to use my object with your function. You want to be able to specify those things and have the compiler check them.

--bb
Aug 23 2007
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Bill Baxter wrote:
 Walter Bright wrote:
 Bill Baxter wrote:
 but some are also available in g++ now, I believe.  And there are 
 some features slated for  C++ 09 that aren't on the roadmap for D at 
 all (like concepts
Concepts aren't a whole lot more than interface specialization, which is already supported in D.
I'm not sure what you mean by that, but the feature that I liked most about it is interface checking. So A) Being able to document someplace that if you want to use my KewlContainer you must implement the KewlIteratorConcept which means, say, you support opPostInc() and opSlice() (for dereferencing as x[]). and then once that is documented B) being able to say that my class implements that concept and have the compiler check that indeed it does. I suppose there may be some way to do all that in current D,
Yup:

interface KewlIteratorConcept
{
    T opPostInc();
    U opSlice();
}

class KewlContainer : KewlIteratorConcept
{
    T opPostInc() { ... }
    U opSlice() { ... }
}

class WrongContainer
{
}

template Foo(T : KewlIteratorConcept) { ... }

KewlContainer k;
WrongContainer w;

Foo!(k); // ok
Foo!(w); // error, w is not a KewlIteratorConcept
Aug 25 2007
next sibling parent reply Bill Baxter <dnewsgroup billbaxter.com> writes:
Walter Bright wrote:
 Bill Baxter wrote:
 Walter Bright wrote:
 Bill Baxter wrote:
 but some are also available in g++ now, I believe.  And there are 
 some features slated for  C++ 09 that aren't on the roadmap for D at 
 all (like concepts
Concepts aren't a whole lot more than interface specialization, which is already supported in D.
I'm not sure what you mean by that, but the feature that I liked most about it is interface checking. So A) Being able to document someplace that if you want to use my KewlContainer you must implement the KewlIteratorConcept which means, say, you support opPostInc() and opSlice() (for dereferencing as x[]). and then once that is documented B) being able to say that my class implements that concept and have the compiler check that indeed it does. I suppose there may be some way to do all that in current D,
Yup:

interface KewlIteratorConcept
{
    T opPostInc();
    U opSlice();
}

class KewlContainer : KewlIteratorConcept
{
    T opPostInc() { ... }
    U opSlice() { ... }
}

class WrongContainer
{
}

template Foo(T : KewlIteratorConcept) { ... }

KewlContainer k;
WrongContainer w;

Foo!(k); // ok
Foo!(w); // error, w is not a KewlIteratorConcept
Ok, but does that work if you want it to work with a built-in type too? Will a float be recognized as supporting opPostInc? --bb
Aug 25 2007
parent Walter Bright <newshound1 digitalmars.com> writes:
Bill Baxter wrote:
 Ok, but does that work if you want it to work with a built-in type too? 
  Will a float be recognized as supporting opPostInc?
No, it doesn't currently work with builtin types. But see Sean's approach!
Aug 26 2007
prev sibling parent reply Sean Kelly <sean f4.ca> writes:
Walter Bright wrote:
 Bill Baxter wrote:
 Walter Bright wrote:
 Bill Baxter wrote:
 but some are also available in g++ now, I believe.  And there are 
 some features slated for  C++ 09 that aren't on the roadmap for D at 
 all (like concepts
Concepts aren't a whole lot more than interface specialization, which is already supported in D.
I'm not sure what you mean by that, but the feature that I liked most about it is interface checking. So A) Being able to document someplace that if you want to use my KewlContainer you must implement the KewlIteratorConcept which means, say, you support opPostInc() and opSlice() (for dereferencing as x[]). and then once that is documented B) being able to say that my class implements that concept and have the compiler check that indeed it does. I suppose there may be some way to do all that in current D,
Yup:

interface KewlIteratorConcept
{
    T opPostInc();
    U opSlice();
}

class KewlContainer : KewlIteratorConcept
{
    T opPostInc() { ... }
    U opSlice() { ... }
}

class WrongContainer
{
}

template Foo(T : KewlIteratorConcept) { ... }
The obvious disadvantage to this approach is that it requires implementation of an interface by the creator of the object. More often, I use an additional value parameter to specialize against:

template Foo(T, bool isValid : true = PassesSomeTest!(T)) {}

This also works for non-class types. I'm not sure I like the syntax quite as much as concepts here, but it's good enough that I haven't really missed them.

Sean
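To flesh that out a bit (a minimal sketch; HasSlice is a made-up predicate, not a library template):

template HasSlice(T)
{
    // true at compile time iff T supports the slice operator []
    const bool HasSlice = is(typeof(T.init[]));
}

// The ": true" specialization means Foo only matches when the
// default argument HasSlice!(T) evaluates to true.
template Foo(T, bool isValid : true = HasSlice!(T))
{
    void bar(T t) { }
}

void main()
{
    int[] arr;
    Foo!(int[]).bar(arr);   // compiles: int[] passes the test
    // Foo!(int).bar(5);    // error: does not match the declaration
}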
Aug 26 2007
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Sean Kelly wrote:
 The obvious disadvantage to this approach is that is requires 
 implementation of an interface by the creator of the object.  More 
 often, I use an additional value parameter to specialize against:
 
 template Foo(T, bool isValid : true = PassesSomeTest!(T)) {}
 
 This also works for non-class types.  I'm not sure I like the syntax 
 quite as much as concepts here, but it's good enough that I haven't 
 really missed them.
This is a brilliant idea. It would make for a great article! Can I press you to write it? Doesn't have to be long, just explain the concept(!) and flesh it out with a few examples.
Aug 26 2007
next sibling parent James Dennett <jdennett acm.org> writes:
Walter Bright wrote:
 Sean Kelly wrote:
 The obvious disadvantage to this approach is that is requires
 implementation of an interface by the creator of the object.  More
 often, I use an additional value parameter to specialize against:

 template Foo(T, bool isValid : true = PassesSomeTest!(T)) {}

 This also works for non-class types.  I'm not sure I like the syntax
 quite as much as concepts here, but it's good enough that I haven't
 really missed them.
This is a brilliant idea. It would make for a great article! Can I press you to write it? Doesn't have to be long, just explain the concept(!) and flesh it out with a few examples.
Is this largely comparable to C++0x's enable_if (except that, as I understand it, D appears to be more flexible in how the compile-time test can work/be expressed)? enable_if certainly covers many of the simple use cases for Concepts (though not so elegantly as C++0x Concepts do).

-- James
Aug 26 2007
prev sibling parent Sean Kelly <sean f4.ca> writes:
Walter Bright wrote:
 Sean Kelly wrote:
 The obvious disadvantage to this approach is that is requires 
 implementation of an interface by the creator of the object.  More 
 often, I use an additional value parameter to specialize against:

 template Foo(T, bool isValid : true = PassesSomeTest!(T)) {}

 This also works for non-class types.  I'm not sure I like the syntax 
 quite as much as concepts here, but it's good enough that I haven't 
 really missed them.
This is a brilliant idea. It would make for a great article! Can I press you to write it? Doesn't have to be long, just explain the concept(!) and flesh it out with a few examples.
Sure thing. :-) Sean
Aug 26 2007
prev sibling parent James Dennett <jdennett acm.org> writes:
Walter Bright wrote:
 Bill Baxter wrote:
 Walter Bright wrote:
 It took 5 years for a C++98 compliant compiler to emerge.
 Extrapolating to C++09, that would be 2014 to get features that
 existed in D years ago. I obviously gave up waiting for such features
 from C++ long ago.
Well, that's true, but when comparing availability C++09 vs D, you should perhaps be a little more forgiving, given that D isn't quite done either. Sure, some C++09 features are available in D now,
Nearly all of them are, and D has quite a bit that isn't even on the horizon for C++. I should draw up a chart...
For marketing purposes, maybe ;)
 but some are also available in g++ now, I believe.  And there are some
 features slated for  C++ 09 that aren't on the roadmap for D at all
 (like concepts
Concepts aren't a whole lot more than interface specialization, which is already supported in D.
They're far, far more than that: more akin to an enhanced version of Haskell's typeclasses. -- James
Aug 23 2007
prev sibling parent reply Stephen Waits <steve waits.net> writes:
Walter Bright wrote:
 Bill Baxter wrote:
 Walter Bright wrote:
 I think it's the success of D that lit the fire.
It probably gave them a nudge,
More than that. The active people on the C++ committee are well aware of D. Many have attended my presentations on D, correspond with me about it, and lurk in this n.g. Most of them will deny the influence, however, so feel free to decide what to believe <g>.
FWIW, I corresponded with Bjarne a little over 3 years ago. I asked him for his opinion of D. He refused to give an opinion on the grounds that he didn't want to get into a flamewar about "Walter's language". I wrote him back to make sure he understood that I wasn't looking for a fight. I simply respected him and was curious about his opinion, but that I also understand why, in his position, he cannot comment on such things. --Steve
Aug 23 2007
parent Walter Bright <newshound1 digitalmars.com> writes:
Stephen Waits wrote:
 FWIW, I corresponded with Bjarne a little over 3 years ago.  I asked him 
 for his opinion of D.  He refused to give an opinion on the grounds that 
 he didn't want to get into a flamewar about "Walter's language".
 
 I wrote him back to make sure he understood that I wasn't looking for a 
 fight.  I simply respected him and was curious about his opinion, but 
 that I also understand why, in his position, he cannot comment on such 
 things.
I know Bjarne, and he's a class act. I have the greatest respect for him.
Aug 25 2007
prev sibling next sibling parent reply BCS <ao pathlink.com> writes:
Reply to Walter,

 Sean Kelly wrote:
 
 It actually has to be finished by year end 2008, and they have
 committed to getting the standard done on time even if it means
 dropping features. In fact, last I heard, a few features were indeed
 being dropped for lack of time, but I can't recall what they were.  I
 haven't been keeping that close an eye on the C++ standardization
 process recently, aside from the new memory model and atomic
 features.
 
C++0x started out with the stated purpose of just a few core language tweaks, and a bunch of new libraries. Sometime in the last couple of years, that was abandoned wholesale and a big raft of complex new features were proposed. I think it's the success of D that lit the fire.
Does that make the C++ crowd's main objective remaining the dominant language? Maybe somebody needs to enforce term limits on programming languages.
Aug 20 2007
parent reply Bill Baxter <dnewsgroup billbaxter.com> writes:
BCS wrote:
 Reply to Walter,
 
 Sean Kelly wrote:

 It actually has to be finished by year end 2008, and they have
 committed to getting the standard done on time even if it means
 dropping features. In fact, last I heard, a few features were indeed
 being dropped for lack of time, but I can't recall what they were.  I
 haven't been keeping that close an eye on the C++ standardization
 process recently, aside from the new memory model and atomic
 features.
C++0x started out with the stated purpose of just a few core language tweaks, and a bunch of new libraries. Sometime in the last couple of years, that was abandoned wholesale and a big raft of complex new features were proposed. I think it's the success of D that lit the fire.
Does that make the c++ crowd's main objective to remain as the dominant language?
I doubt it. I think the C++ crowd's main objective is to turn C++ into something that doesn't suck like a dozen turbine jets strapped together with duct tape. They want it to be a better language for themselves, because they have to use it every day. I'm thinking specifically of generic and meta-programming functionality; that looks like what will benefit most from the new language additions.
 Maybe somebody needs to enforce term limit on programming 
 languages.
"You don't vote for kings." -- King Arthur, Monty Python and the Holy Grail.
Aug 20 2007
parent BCS <ao pathlink.com> writes:
Reply to Bill,

 BCS wrote:
 
 Reply to Walter,
 
 Sean Kelly wrote:
 
 It actually has to be finished by year end 2008, and they have
 committed to getting the standard done on time even if it means
 dropping features. In fact, last I heard, a few features were
 indeed being dropped for lack of time, but I can't recall what they
 were.  I haven't been keeping that close an eye on the C++
 standardization process recently, aside from the new memory model
 and atomic features.
 
C++0x started out with the stated purpose of just a few core language tweaks, and a bunch of new libraries. Sometime in the last couple of years, that was abandoned wholesale and a big raft of complex new features were proposed. I think it's the success of D that lit the fire.
Does that make the c++ crowd's main objective to remain as the dominant language?
I doubt it. I think the C++ crowd's main objective is to turn C++ into something that doesn't suck like a dozen turbine jets strapped together with duct tape.
Nice <G>
 They want it to be a better language for
 themselves, because they have to use it every day.  I'm thinking
 specifically of generic and meta-programming functionality.  That's
 what looks like will get the most benefit from the new language
 additions.
 
Yah, I see your point. However, sometimes the best way to improve something is to take it out back and shoot it, not add more jet engines and duct tape.
 Maybe somebody needs to enforce term limit on programming languages.
 
"You don't vote for kings." -- King Arthur, Monty Python and the Holy Grail.
Aug 20 2007
prev sibling parent reply James Dennett <jdennett acm.org> writes:
Walter Bright wrote:
 Sean Kelly wrote:
 It actually has to be finished by year end 2008, and they have
 committed to getting the standard done on time even if it means
 dropping features.  In fact, last I heard, a few features were indeed
 being dropped for lack of time, but I can't recall what they were.  I
 haven't been keeping that close an eye on the C++ standardization
 process recently, aside from the new memory model and atomic features.
C++0x started out with the stated purpose of just a few core language tweaks, and a bunch of new libraries. Sometime in the last couple of years, that was abandoned wholesale and a big raft of complex new features were proposed. I think it's the success of D that lit the fire.
Most of these features have been in development for years; it's desires to improve C++ that have lit these fires, just as your urge to create D was based on other ideas about how to improve on C++98. For many involved in language design, it's language features that are more tempting than new library functionality.

I don't see much in C++0x that has much claim to being inspired by D. I look forward to type deduction with auto, but that dates from the 80's. Concepts will be great, but those have the most overlap with Haskell's typeclasses, not mirrored in D. The new for syntax reflects many languages (D, Perl, Java, sh, others) in some ways. GC for C++ predates D. The smart pointers have no counterpart in D, yet. D has cool metaprogramming facilities, and does some other things nicely, but C++ faces more competition than just D. It would, however, seem reasonable for C++ to pick up on good features of D when they are a match for C++, and so forth.

-- James
Aug 20 2007
parent reply Bruno Medeiros <brunodomedeiros+spam com.gmail> writes:
James Dennett wrote:
 Walter Bright wrote:
 Sean Kelly wrote:
 It actually has to be finished by year end 2008, and they have
 committed to getting the standard done on time even if it means
 dropping features.  In fact, last I heard, a few features were indeed
 being dropped for lack of time, but I can't recall what they were.  I
 haven't been keeping that close an eye on the C++ standardization
 process recently, aside from the new memory model and atomic features.
C++0x started out with the stated purpose of just a few core language tweaks, and a bunch of new libraries. Sometime in the last couple of years, that was abandoned wholesale and a big raft of complex new features were proposed. I think it's the success of D that lit the fire.
Most of these features have been in development for years; it's desires to improve C++ that have lit these fires, just as your urge to create D was based on other ideas about how to improve on C++98. For many involved in language design, it's language features that are more tempting than new library functionality. I don't see much in C++0x that has much claim to being inspired by D. I look forward to type deduction with auto, but that dates from the 80's. Concepts will be great, but those have most overlap with Haskell's typeclasses, not mirrored in D. The new for syntax reflects many languages (D, Perl, Java, sh, others) in some ways. GC for C++ predates D. The smart pointers have no counterpart in D, yet. D has cool metaprogramming facilities, and does some other things nicely, but C++ faces more competition It would, however, seem reasonable for C++ to pick up on good features of D, when they are a match for C++, forth. -- James
I think Walter wasn't saying that C++0x features were inspired by or based on D, just that D sped up the adoption of those features.

-- Bruno Medeiros - MSc in CS/E student
http://www.prowiki.org/wiki4d/wiki.cgi?BrunoMedeiros#D
Aug 21 2007
parent Walter Bright <newshound1 digitalmars.com> writes:
Bruno Medeiros wrote:
 I think Walter wasn't saying that C++0x features were inspired or based 
 on D, just that D speed up the adoption of those features.
D hasn't invented many truly *new* features. What it has done, however, is dramatically demonstrate that:

1) they fit well in a language that is very close to C++
2) they dramatically improve productivity

When one can point to real, live, *relevant* implementations of a feature, it tends to be convincing. After all, one can actually fly it rather than dreaming about paper airplanes. The further a language is from C++, the easier it is to dismiss a feature of that language as irrelevant.
Aug 22 2007
prev sibling parent reply eao197 <eao197 intervale.ru> writes:
On Mon, 20 Aug 2007 10:05:26 +0400, Walter Bright  
<newshound1 digitalmars.com> wrote:

 eao197 wrote:
 BTW, there is a C++0x overview in Wikipedia:  
 http://en.wikipedia.org/wiki/C%2B%2B0x
  It is iteresting to know which advantages will have D (2.0? 3.0? 4.0?)  
 over C++0x? May be only high speed compilation and GC.
Looks like C++ is adding D features thick & fast!
Yes! But C++ is doing that without breaking the existing codebase. So a significant number of C++ programmers needn't look to D -- they will have new advanced features without dropping their old tools, IDEs and libraries. I'm afraid that would play against D :(

Current C++ is far behind D, but D is not stable, not mature, not as well equipped with tools/libraries as C++. So it will take several years to make D competitive with C++ in that area. But if in 2010 (it is only 2.5 years ahead) C++ will have things like lambdas and autos (and tons of libraries and an army of programmers), what will be D's 'killer feature' to attract C++ programmers? And not only C++: at that time D would also compete with new functional languages (like Haskell and OCaml).

-- Regards, Yauheni Akhotnikau
Aug 20 2007
next sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
eao197 wrote:
 On Mon, 20 Aug 2007 10:05:26 +0400, Walter Bright 
 <newshound1 digitalmars.com> wrote:
 
 eao197 wrote:
 BTW, there is a C++0x overview in Wikipedia: 
 http://en.wikipedia.org/wiki/C%2B%2B0x
  It is iteresting to know which advantages will have D (2.0? 3.0? 
 4.0?) over C++0x? May be only high speed compilation and GC.
Looks like C++ is adding D features thick & fast!
Yes! But C++ is doing that without breaking existing codebase. So significant amount of C++ programmers needn't look to D -- they will have new advanced features without dropping their old tools, IDE and libraries. I'm affraid that would play against D :(
The trouble with the new features is that they don't fix the inscrutably awful syntax of complex C++ code; in fact, they make it worse. C++ will further become an "experts only" language.
 Current C++ is far behind D, but D is not stable, not mature, not 
 equiped by tools/libraries as C++. So it will took several years to make 
 D competitive with C++ in that area. But if in 2010 (it is only 2.5 year 
 ahead) C++ will have things like lambdas and autos (and tons of 
 libraries and army of programmers), what will be D 'killer feature' to 
 attract C++ programmers? And not only C++, at this time D would compete 

 of functional languages (like Haskell and OCaml).
The C++ standard will have those features. C++ compilers? Who knows. It took five years for C++98 to get implemented. C++'s problems are still in place, though. Like no modules, verbose and awkward syntax, very long learning curve, very difficult to do the simplest metaprogramming, etc.
Aug 20 2007
next sibling parent Uno <unodgs tlen.pl> writes:
 The C++ standard will have those features. C++ compilers? Who knows. It 
 took five years for C++98 to get implemented.
GCC 4.3 has some of the coming standard's features already implemented (like variadic templates). ConceptGCC has working concepts. So there is a chance at least one compiler will be available when the new standard comes out.

Uno
Aug 20 2007
prev sibling next sibling parent reply eao197 <eao197 intervale.ru> writes:
On Mon, 20 Aug 2007 20:44:22 +0400, Walter Bright  
<newshound1 digitalmars.com> wrote:

 eao197 wrote:
 On Mon, 20 Aug 2007 10:05:26 +0400, Walter Bright  
 <newshound1 digitalmars.com> wrote:

 eao197 wrote:
 BTW, there is a C++0x overview in Wikipedia:  
 http://en.wikipedia.org/wiki/C%2B%2B0x
  It is iteresting to know which advantages will have D (2.0? 3.0?  
 4.0?) over C++0x? May be only high speed compilation and GC.
Looks like C++ is adding D features thick & fast!
Yes! But C++ is doing that without breaking existing codebase. So significant amount of C++ programmers needn't look to D -- they will have new advanced features without dropping their old tools, IDE and libraries. I'm affraid that would play against D :(
The trouble with the new features is they don't fix the inscrutably awful syntax of complex C++ code, in fact, they make it worse. C++ will further become an "experts only" language.
It reminds me of 'Worse is Better' (http://en.wikipedia.org/wiki/Worse_is_Better). I'm not a C++ expert, but I haven't had any serious problems with C++. And such features allow me to write C++ more productively and keep using all my codebase. So I'm afraid many experienced C++ programmers will remain with C++. Because of that, D must be focused on a different programmer audience.
 Current C++ is far behind D, but D is not stable, not mature, not  
 equiped by tools/libraries as C++. So it will took several years to  
 make D competitive with C++ in that area. But if in 2010 (it is only  
 2.5 year ahead) C++ will have things like lambdas and autos (and tons  
 of libraries and army of programmers), what will be D 'killer feature'  
 to attract C++ programmers? And not only C++, at this time D would  

 with some of functional languages (like Haskell and OCaml).
The C++ standard will have those features. C++ compilers? Who knows. It took five years for C++98 to get implemented. C++'s problems are still in place, though. Like no modules, verbose and awkward syntax, very long learning curve, very difficult to do the simplest metaprogramming, etc.
Yes, but now there are only a few C++ compiler vendors (unlike in '98). There is hope that GCC will have almost all the new C++ features in the near future.

-- Regards, Yauheni Akhotnikau
Aug 20 2007
parent Walter Bright <newshound1 digitalmars.com> writes:
eao197 wrote:
 Yes, but now there are only few C++ compiler vendors (unlike 98). There 
 is hope that GCC will have almost all new C++ features in near future.
Every new revision to the C++ standard kills off more C++ vendors.
Aug 22 2007
prev sibling parent janderson <askme me.com> writes:
Walter Bright wrote:
 The C++ standard will have those features. C++ compilers? Who knows. It 
 took five years for C++98 to get implemented.
 
 C++'s problems are still in place, though. Like no modules, verbose and 
 awkward syntax, very long learning curve, very difficult to do the 
 simplest metaprogramming, etc.
Another awesome (and at the same time annoying) thing about D: real-time development. Well, not quite, but you know what I mean.

-Joel
Aug 20 2007
prev sibling parent reply "Craig Black" <cblack ara.com> writes:
"eao197" <eao197 intervale.ru> wrote in message 
news:op.txc0txbtsdcfd2 eao197nb2.intervale.ru...
 On Mon, 20 Aug 2007 10:05:26 +0400, Walter Bright 
 <newshound1 digitalmars.com> wrote:

 eao197 wrote:
 BTW, there is a C++0x overview in Wikipedia: 
 http://en.wikipedia.org/wiki/C%2B%2B0x
  It is iteresting to know which advantages will have D (2.0? 3.0? 4.0?) 
 over C++0x? May be only high speed compilation and GC.
Looks like C++ is adding D features thick & fast!
Yes! But C++ is doing that without breaking existing codebase. So significant amount of C++ programmers needn't look to D -- they will have new advanced features without dropping their old tools, IDE and libraries. I'm affraid that would play against D :( Current C++ is far behind D, but D is not stable, not mature, not equiped by tools/libraries as C++. So it will took several years to make D competitive with C++ in that area. But if in 2010 (it is only 2.5 year ahead) C++ will have things like lambdas and autos (and tons of libraries and army of programmers), what will be D 'killer feature' to attract C++ programmers? And not only C++, at this time D would compete with new functional languages (like Haskell and OCaml). -- Regards, Yauheni Akhotnikau
Agreed. The standard is moving faster and includes more improvements than I previously expected. What's more, major compilers have already begun working on these new features. Some of them will be available in GCC 4.3 and VC++ 2008 (expected February). I would not be surprised if most of the standard features are implemented in compilers by 2010 as you suggest.

However, the benefit of D over C++ is still cleaner, more powerful syntax in general. C++ might be getting more capability, but even so, it will not match D in clean expressive power. Additionally, two and a half years is also a long time for D to advance, and D's progression is very fast. Overall I have been very happy with D's progress over the past few years, both in compiler and library development.

That said, there are a number of things that I think would help aid the adoption of D over the next few years. First, D needs to at the very least match the features that are added to C++ with regards to parallelism and concurrency. Another thing that will aid in the adoption of D is to iron out whatever issues or perceived issues there are with D 2.0 so that it will be accepted by the D community enough for library writers to migrate their code.

A dead horse perhaps, but I still think it would serve D well to have better C++ integration. Granted this is a tough problem, as Walter emphasizes, but so was integrating Managed .NET C++ with native C++, which Microsoft was able to do rather well. Experience has taught me that there is always a solution to issues like this, but it sometimes requires us to think about the problem in a different way.

Other than that, fixing compiler bugs is probably the most important thing for D right now. I am especially looking forward to fixes that will make __traits usable (if that's still what it's called). One particular feature of personal interest is better support for structs (ctors, dtors, etc.). This will help with the complex mathematical data structures I use that must be uber-efficient.

As for these newfangled macros, D is so powerful already that I don't really know exactly what they will give us over what we already have. But perhaps I haven't given this as much thought as others have.

-Craig
Aug 20 2007
next sibling parent reply Ingo Oeser <ioe-news rameria.de> writes:
Craig Black wrote:

 First, D needs to at the very least match the features that are added to
 C++ with regards to parallelism and concurrency.
Yes, I already had some discussions and proposals about this in another thread.

- More explicit loop notation which can do map(), reduce(), filter(). These are well-understood and known idioms, which explicitly state what the data dependency is with just a single keyword. OpenMP basically does just this, plus some thread management for advanced stuff.
  -> Easy to add now, with support improved later by newer library functions.
  -> Also allows the compiler to do the optimisation via auto-vectorisation for simpler cases (like shorter loop bodies).

- Please remove the inXX() and outXX() intrinsics. They are one-liners in asm on X86 and not present on many architectures.

- The asm construct should be backend dependent. This will aid its optimal integration into the surrounding D code.

Reason: asm is very seldom used these days and usually states one of 3 things:

1. The optimizer of my compiler sucks.
2. I'm l33t.
3. I write sth. which can/should not be expressed in D, because it is highly machine dependent and a NOP on many machines (e.g. inp()/outp()).

Short term, 1. and 2. are acceptable, but long term only 3. is. Case 3. is "write once and never touch again" but has to integrate very tightly into the surrounding code, and thus has to answer many questions:

- How many delay slots are still free?
- Which registers are spilled, read, written?
- Where is the result?
- Which CPU units are used/unused?
- In what state is the pipeline of that unit?
...

Even GCC asm syntax cannot express all of this yet, AFAIK. Much platform-specific hard assembly stuff is written with GCC asm syntax, so being at least as powerful at defining it as opaquely as a mixin might be more useful here. Maybe sth. opaque like the mixin() statement, which passes everything there through to the backend, would be better. Esp. for DSP architectures.
 Other than that, fixing compiler bugs is probably the most important thing
 for D right now.  I am especially looking forward to fixes that will make
 __traits usable (if that's still what it's called).
Yes, progress there is most exciting for me at the moment and I think the developers do a good job there.
 One particular feature of pesonal interest is better support for structs
 (ctors, dtors, etc.)  This will help with complex mathematical data
 structures that I use that must be uber-efficient.
ctors which only assign and MUST assign all values might be very useful. Static initializers with C99 syntax will be very welcome, too.
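For comparison, D's existing struct initializers already come close to the C99 designated-initializer style (a small sketch; nothing here enforces the must-assign-all-fields ctor idea):

struct Point
{
    int x;
    int y;
    int z;
}

// Named members, C99-designated-initializer style:
static Point origin = { x:0, y:0, z:0 };
static Point unitX  = { x:1 };   // omitted fields get their .init value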
 As far as these new namfangled macros, D is so powerful already, I don't
 really know exactly what this will give us over what we already have.  But
 perhaps I haven't given this as much thought as others have.
Are there any articles about the current macro design decisions?

Best Regards

Ingo Oeser
Aug 21 2007
parent reply Downs <default_357-line yahoo.de> writes:

Ingo Oeser wrote:
 Craig Black wrote:
 
 First, D needs to at the very least match the features that are added to
 C++ with regards to parallelism and concurrency.
Yes, I already had some discussions and proposal about this in another thread. - More explicit loop notion which can do map(), reduce(), filter().
See also my "Some tools for D" post. I implemented those with iterators, but they're trivial to do as freestanding functions.
   These are well understood and known idioms, which explicitly state how the
   data dependency is with just a single keyword. OpenMP basically does just 
   this plus some thread management for advanced stuff.
   -> Easy to add now and improve later support with newer library functions.
   -> Allows also to let the compiler do the optimisation via auto 
      vectorisation for simpler cases (like shorter loop bodies).
GCC has autovectorization support in principle, so whenever the gdc maintainer gets around to fixing the tree format for statements, gdc will be able to take advantage of this.
 
 - Please remove the inXX() and outXX() intrinsics.
   They are oneliners in asm on X86 and not present on many architectures.
 
I don't know those; what do they do?
 - asm construct should be backend dependent.
[snip asm stuff]
 
 Other than that, fixing compiler bugs is probably the most important thing
 for D right now.
Definitely agreed.
 I am especially looking forward to fixes that will make
 __traits usable (if that's still what it's called).
Yes, progress there is most exciting for me at the moment and I think the developers do a good job there.
 One particular feature of pesonal interest is better support for structs
 (ctors, dtors, etc.)
Also agreed. _Please_.
 This will help with complex mathematical data
 structures that I use that must be uber-efficient.
ctors which only assign and MUST assign all values might be very useful. Static initializers with C99 syntax will be very welcome, too.
 As far as these new namfangled macros, D is so powerful already, I don't
 really know exactly what this will give us over what we already have.  But
 perhaps I haven't given this as much thought as others have.
Are there any articles about the current macro design decisions? Best Regards Ingo Oeser
--downs
Aug 22 2007
parent reply Ingo Oeser <ioe-news rameria.de> writes:
Downs wrote:
 Ingo Oeser wrote:
 - More explicit loop notation which can do map(), reduce(), filter().
See also my "Some tools for D" post. I implemented those with iterators, but they're trivial to do as freestanding functions.
Ok, with your tools, I could already CODE like that right now and remove the "import iter;" later. Thanks for that :-)

But I want to express that there isn't any defined order of execution, as long as the data flow is correct. And of course I would like them to be overloadable, to compose and distribute them driven by data flow. I just like to express the data flow more explicitly instead of depending on the optimizer to figure it out itself. Let's help the optimizer greatly here.
 GCC has autovectorization support in principle, so whenever the gdc
 maintainer gets around to fixing the tree format for statements, gdc
 will be able to take advantage of this.
I know. But that is just a side effect for me, and it is the compiler guessing by itself that I am doing map(), reduce(), filter(). The optimizer is more useful to the programmer doing (memory) copy elimination and lifetime analysis, so the programmer can write more readable code by using more temporaries.
 - Please remove the inXX() and outXX() intrinsics.
   They are one-liners in asm on X86 and not present on many architectures.
 
I don't know those; what do they do?
They provide port I/O (IN PORT and OUT PORT) on X86; in Intel syntax, for your reference:

OUT PORT:
    mov dx, IOport
    mov al, Value
    out dx, al

IN PORT:
    mov dx, IOport
    in  al, dx
    mov ReturnValue, al

These are of course privileged operations, and the compiler doesn't actually know your privilege level. So this kind of intrinsic is just nonsense. It is also nonsense in a standard library, since >99% of all programs out there don't need to do that and the rest codes it either in assembly or uses a special API for that.

Best Regards Ingo Oeser
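To illustrate the one-liner point, this is roughly what those wrappers look like with DMD's x86 inline assembler (a sketch, not DMD's actual implementation; the result of inp() is left in AL, which DMD uses as the return register for functions ending in asm, and the task of course still needs I/O privileges for this to work at all):

// sketch: write one byte to an x86 I/O port (privileged operation)
void outp(ushort port, ubyte value)
{
    asm
    {
        mov DX, port;
        mov AL, value;
        out DX, AL;
    }
}

// sketch: read one byte from an x86 I/O port
ubyte inp(ushort port)
{
    asm
    {
        mov DX, port;
        in  AL, DX;   // result is returned in AL
    }
}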
Aug 23 2007
next sibling parent reply 0ffh <spam frankhirsch.net> writes:
Ingo Oeser wrote [about port i/o]:
 These are of course privileged operations and the compiler 
 doesn't actually know your privilege level. So this kind of intrinsic
 is just nonsense. It is also nonsense in a standard library, since >99%
 of all programs out there don't need to do that and the rest codes
 it either in assembly or uses a special API for that.
I do not agree. I've been writing system level software on a number of architectures for more than 15 years, mostly in C, and of course these functions are used. Also, I never even heard of any "special API" for port i/o, and wonder what such a thing might be needed for (unless your compiler is crippled).

Regards, Frank
Aug 23 2007
parent reply Ingo Oeser <ioe-news rameria.de> writes:
0ffh wrote:
 Ingo Oeser wrote [about port i/o]:
 These are of course privileged operations and the compiler
 doesn't actually know your privilege level. So this kind of intrinsic
 is just nonsense. It is also nonsense in a standard library, since >99%
 of all programs out there don't need to do that and the rest codes
 it either in assembly or uses a special API for that.
I do not agree. I've been writing system level software on a number of architectures for more than 15 years, mostly in C, and of course these functions are used.
But they had quite different properties/semantics. You should have noticed that some architectures don't even have them and treat all I/O as special memory accesses. E.g. "There exists no such thing as port-based I/O on AVR32" and "The ARM doesn't have special IO access instructions". Or "On MIPS I/O ports are memory mapped, so we access them using normal load/store instructions". Just to quote some random comments from include/asm-*/io.h in Linux.

Oh, and some port i/o needs to be slowed down, byte swapped, or include barriers. How should an intrinsic handle that?
 Also, I never even heard of any "special API" for port i/o, and wonder
 what such a thing might be needed for (unless your compiler is crippled).
I mean a special API for ENABLING port i/o for your task. How does the intrinsic know whether you can actually DO port i/o in that task? Inline assembler functions are much better suited to that task. And how to do that in assembler is usually in the example collection of your system manual.

Even better is to treat that stuff as special memory and define unified i/o memory accessors somewhere in a library. That stuff isn't fast anyway, so the class overhead might not be too significant here, as long as it is bounded.

Best Regards Ingo Oeser
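A minimal sketch of such unified accessors, assuming the device region is already mapped into the address space (the class name and interface are invented for illustration; real code would also need volatile semantics and per-architecture barriers inside these methods):

// sketch: treat device registers as special memory behind a
// uniform read/write interface; byte swapping, slow-down and
// barriers could be hidden here per architecture
class IoRegion
{
    private ubyte* base;

    this(size_t mappedBase)
    {
        base = cast(ubyte*) mappedBase;
    }

    ubyte read8(size_t offset)           { return base[offset]; }
    void  write8(size_t offset, ubyte v) { base[offset] = v; }
}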
Aug 26 2007
parent reply 0ffh <spam frankhirsch.net> writes:
Ingo Oeser wrote:
 0ffh wrote:
 But they had quite different properties/semantics. You should have noticed
 that some architectures don't even have them and treat all I/O as special
 memory accesses.
Sure did. Doesn't prevent the compiler from supporting it. Some compilers support FP on architectures without FP. Good thing, too!
 Oh and some port i/o needs to be slowed down, byte swapped, or include barriers.
 How should an intrinsic handle that?
You normally use them as primitives and build more complex funs around them.
 I mean a special API for ENABLING port i/o for your task. How does the
 intrinsic know whether you can actually DO port i/o in that task?
It can't and why should it? BTW not all architectures (and all modes) need any special enabling.
 Inline assembler functions are much more suited to that task.
I have to use inline asm every time? IIRC DMD doesn't inline funs with inline asm, so no good.
 That stuff isn't fast anyway, so the class overhead might not be
 too significant here, as long as it is bounded.
I wouldn't count on it. Anyway, my basic point is: Why throw it out just because "99% don't need it anyway"? It's there already, and some people are quite happy about it!

It does not in any way make DMD less usable for the "99%", so why do you insist on treading on the "1%" minority?

Regards, Frank
Aug 26 2007
parent reply Ingo Oeser <ioe-news rameria.de> writes:
0ffh wrote:

 Ingo Oeser wrote:
 Oh and some port i/o needs to be slowed down, byte swapped, or include
 barriers. How should an intrinsic handle that?
 You normally use them as primitives and build more complex funs around them.
My problem is that this behavior isn't defined.
 I mean a special API for ENABLING port i/o for your task. How does the
 intrinsic know whether you can actually DO port i/o in that task?
It can't and why should it? BTW not all architectures (and all modes) need any special enabling.
I know. I'm a system programmer, too :-)
 Inline assembler functions are much more suited to that task.
I have to use inline asm every time? IIRC DMD doesn't inline funs with inline asm, so no good.
GDC does, so DMD might get it one day.
 Why throw it out just because "99% don't need it anyway"?
 It's there already, and some people are quite happy about it!
 
 It does not in any way make DMD less usable for the "99%",
 so why do you insist on treading on the "1%" minority?
Because every D compiler HAS to implement it for every architecture. And it has to fake it somehow on architectures which don't have port i/o. How to fake that correctly is simply not defined (in contrast to IEEE FP math).

Making inline assembly better (e.g. giving the compiler more knowledge about the instructions and their constraints, making it inlineable) is the more useful goal. These compiler refinements then work for ALL instructions on EVERY architecture and may someday even be more powerful than GCC's inline assembler syntax. This will reduce X86ism in D. DMD can learn A LOT from GCC in that area.

Best Regards Ingo Oeser
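Purely as a thought experiment, GCC-style constraint information bolted onto D's asm might look something like this (hypothetical, invented syntax only, accepted by no compiler; the point is just that the backend learns what is read, written and clobbered, so it can optimize around the block):

// hypothetical syntax -- not valid D in any compiler
asm (reads: port, value; clobbers: DX, AL)
{
    mov DX, port;
    mov AL, value;
    out DX, AL;
}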
Aug 26 2007
next sibling parent Lutger <lutger.blijdestijn gmail.com> writes:
Ingo Oeser wrote:
 Because every D compiler HAS to implement it for every architecture. And it has to fake it somehow on architectures which don't have port i/o. How to fake that correctly is simply not defined (in contrast to IEEE FP math).
Why must every D compiler implement them? I thought intrinsics are part of the Phobos / DMD implementation of D, not of the D language itself - contrary to FP math.
Aug 26 2007
prev sibling parent reply 0ffh <spam frankhirsch.net> writes:
Ingo Oeser wrote:
 Because every D compiler HAS to implement it for every architecture.
I concur with Lutger on this: IIRC intrinsics are a compiler thing, not a language thing. They are routinely used for compiler-specific stuff, I take that as a hint.
 Making inline assembly better [...] is the more useful goal. [...]
 This will reduce X86ism in D. DMD can learn A LOT from GCC in that area.
Ach, blast! I don't care to fight any more over this... come macros I don't even need inlineable functions anymore! ALL HAIL WALTER AND HIS AST MACROS! :-))) They might be just the kick ass feature to kick off the final take off... Regards, frank
Aug 26 2007
next sibling parent 0ffh <spam frankhirsch.net> writes:
0ffh wrote:
                ALL HAIL WALTER AND HIS AST MACROS! :-)))
YAY!!! Regards, frank p.s. Sorry for self-reply, I re-read it and couldn't resist! =)
Aug 26 2007
prev sibling parent reply Carlos Santander <csantander619 gmail.com> writes:
0ffh escribió:
 Ingo Oeser wrote:
 Because every D compiler HAS to implement it for every architecture.
I concur with Lutger on this: IIRC intrinsics are a compiler thing, not a language thing. They are routinely used for compiler-specific stuff, I take that as a hint.
I remember that Walter once said that everything in Phobos under std/ was standard, as a part of the D standard (I hope I'm getting my words right.) Thus, as Ingo said, a standard D compiler has to implement those things. This (not specifically inp/outp, but Phobos in general) was a problem when there were licensing issues (I wonder if there still are some), as other D implementations would not be able to provide those features. In this case, the architecture-specific parts would be the issue to overcome.

In a way, it would be like expecting all OSes to have a registry. The sound solution was to put the Windows Registry stuff under std.windows. A D compiler that doesn't run on Windows wouldn't need to provide those modules. The same could be done for these intrinsics: put them in std.arch.x86.intrinsics, or something like that.
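A sketch of how such a module could protect itself (the module name comes from the suggestion above; X86 is one of D's predefined version identifiers, and the declared functions are just placeholders):

// std/arch/x86/intrinsics.d -- sketch of an arch-gated module
module std.arch.x86.intrinsics;

version (X86)
{
    // x86-only declarations would live here, e.g. port I/O
    ubyte inp(ushort port);
    void outp(ushort port, ubyte value);
}
else
{
    static assert(false, "std.arch.x86.intrinsics is x86 only");
}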
 Making inline assembly better [...] is the more useful goal. [...]
 This will reduce X86ism in D. DMD can learn A LOT from GCC in that area.
Ach, blast! I don't care to fight any more over this... come macros I don't even need inlineable functions anymore! ALL HAIL WALTER AND HIS AST MACROS! :-))) They might be just the kick ass feature to kick off the final take off... Regards, frank
-- Carlos Santander Bernal
Aug 26 2007
parent Ingo Oeser <ioe-news rameria.de> writes:
Carlos Santander wrote:

 I remember that Walter once said that all in Phobos under std/ was
 standard, as a part of the D standard (I hope I'm getting my words right.)
 Thus, as Ingo said, a standard D compiler has to implement those things.
Yes, that's my point.
 In a way, it would be like expecting all OSes to have a registry. The
 sound solution was to put the Windows Registry stuff under std.windows. A
 D compiler that doesn't run on Windows wouldn't need to provide those
 modules. The same could be done for these intrinsics: put them in
 std.arch.x86.intrinsics, or something like that.
Yes, that sounds sane enough. And if you implement a D compiler for AVR, you don't need to implement that stuff.

Best Regards Ingo Oeser
Aug 28 2007
prev sibling parent Downs <default_357-line yahoo.de> writes:
Ingo Oeser wrote:
 Downs wrote:
 Ingo Oeser wrote:
 - More explicit loop notation which can do map(), reduce(), filter().
See also my "Some tools for D" post. I implemented those with iterators, but they're trivial to do as freestanding functions.
Ok, with your tools, I could already CODE like that right now and remove the "import iter;" later. Thanks for that :-)

But I want to express that there isn't any defined order of execution, as long as the data flow is correct. And of course I would like them to be overloadable, to compose and distribute them driven by data flow. I just like to express the data flow more explicitly instead of depending on the optimizer to figure it out itself. Let's help the optimizer greatly here.
FWIW, you can already run a MT foreach variation on the tools.threadpool class, and it wouldn't be that hard to write a tree reduce that does the same... but I see your point and agree. In the end, programming languages should come as close as possible to capturing your intentions with a given piece of code, which is why this kind of metadata would be extremely useful to a clever compiler.

--downs
Aug 24 2007
prev sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
Craig Black wrote:
 First, D needs to at the very least match the features that are added to C++ 
 with regards to parallelism and concurrency.
D will be addressing the problem by moving towards supporting pure functions, which are automatically parallelizable. I think this will be much more powerful than C++'s model. Also, D already implements a superset of some of C++0x's synchronization primitives.
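As a sketch of the intended shape (the pure keyword was still unimplemented when this was written; the example itself is mine, not Walter's):

// a pure function's result depends only on its arguments and it
// has no side effects, so calls on independent data could be
// scheduled on different cores without any locking
pure int dot(int[] a, int[] b)
{
    int sum = 0;
    foreach (i, x; a)
        sum += x * b[i];
    return sum;
}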
Aug 22 2007
next sibling parent reply serg kovrov <sergk mailinator.com> writes:
Walter Bright wrote:
 D will be addressing the problem by moving towards supporting pure 
 functions, which are automatically parallelizable. I think this will be 
 much more powerful than C++'s model.
Very interesting. Could you tell us more about automatically parallelizable functions? -- serg
Aug 23 2007
parent Walter Bright <newshound1 digitalmars.com> writes:
serg kovrov wrote:
 Walter Bright wrote:
 D will be addressing the problem by moving towards supporting pure 
 functions, which are automatically parallelizable. I think this will 
 be much more powerful than C++'s model.
 Very interesting. Could you tell us more about automatically parallelizable functions?
Andrei and I covered this in our presentation on D at the D conference, which we'll post here soon.
Aug 23 2007
prev sibling next sibling parent Stephen Waits <steve waits.net> writes:
Walter Bright wrote:
 D will be addressing the problem by moving towards supporting pure 
 functions, which are automatically parallelizable. I think this will be 
 much more powerful than C++'s model.
I heard the songs of angels in my mind when I read this. --Steve
Aug 23 2007
prev sibling parent reply Sean Kelly <sean f4.ca> writes:
Walter Bright wrote:
 Craig Black wrote:
 First, D needs to at the very least match the features that are added 
 to C++ with regards to parallelism and concurrency.
D will be addressing the problem by moving towards supporting pure functions, which are automatically parallelizable. I think this will be much more powerful than C++'s model. Also, D already implements a superset of some of C++0x's synchronization primitives.
And with inline asm and volatile, an atomic operations package is fairly easy to implement in D (most easily for x86 for obvious reasons). I really think D is in fairly good shape for concurrent programming even without a carefully established multithread-aware memory model. Sean
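For illustration, a compare-and-swap in that style for x86, using DMD's inline assembler (a sketch modeled on the usual lock/cmpxchg sequence; the boolean result is left in AL, which DMD uses as the return register for functions ending in asm):

// sketch: atomically set *dest = newval iff *dest == oldval;
// returns true if the swap happened.  x86 only.
bool cas(int* dest, int oldval, int newval)
{
    asm
    {
        mov EAX, oldval;     // cmpxchg compares [EDX] against EAX
        mov ECX, newval;
        mov EDX, dest;
        lock;                // make the exchange atomic across CPUs
        cmpxchg [EDX], ECX;
        setz AL;             // AL = 1 on success
    }
}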
Aug 25 2007
parent Brad Roberts <braddr puremagic.com> writes:
On Sat, 25 Aug 2007, Sean Kelly wrote:

 Walter Bright wrote:
 Craig Black wrote:
 First, D needs to at the very least match the features that are added to
 C++ with regards to parallelism and concurrency.
D will be addressing the problem by moving towards supporting pure functions, which are automatically parallelizable. I think this will be much more powerful than C++'s model. Also, D already implements a superset of some of C++0x's synchronization primitives.
And with inline asm and volatile, an atomic operations package is fairly easy to implement in D (most easily for x86 for obvious reasons). I really think D is in fairly good shape for concurrent programming even without a carefully established multithread-aware memory model. Sean
As long as you don't care about the performance of calling a function for a single asm operation, or about writing asm { ... } at each callsite for the atomic operations. The problem is that DMD won't inline functions with inline asm. GDC will, so all isn't lost. Luckily, a future 2.0 feature, macros, will make it easy to shove the asm inline. But yes, it's possible. But it's no better than C++ on that front... it's only on par.

Later, Brad
Aug 25 2007
prev sibling next sibling parent anonymous <foo bar.com> writes:
eao197 Wrote:

[...]
 BTW, there is a C++0x overview in Wikipedia:  
 http://en.wikipedia.org/wiki/C%2B%2B0x
 
 It is interesting to know which advantages D (2.0? 3.0? 4.0?) will have over C++0x. Maybe only high-speed compilation and GC.
First of all, thanks for the link. To me as a non-professional programmer, D is much more clearly and cleanly structured. Coming from Pascal/Delphi, it is far easier to understand. It also has a bit of the air of Perl: there is more than one way to do it. In C++, there seems to be only one right way, but that one is hard to understand.
Aug 20 2007
prev sibling next sibling parent reply Jari-Matti =?ISO-8859-1?Q?M=E4kel=E4?= <jmjmak utu.fi.invalid> writes:
eao197 wrote:

 On Sun, 19 Aug 2007 23:36:07 +0400, Bill Baxter
 <dnewsgroup billbaxter.com> wrote:
 
 A lot of you probably saw Bjarne Stroustrup's talk on C++0x on the web.
 If not here's the link:
    http://csclub.uwaterloo.ca/media/C++0x%20-%20An%20Overview.html
BTW, there is a C++0x overview in Wikipedia: http://en.wikipedia.org/wiki/C%2B%2B0x It is interesting to know which advantages D (2.0? 3.0? 4.0?) will have over C++0x. Maybe only high-speed compilation and GC.
I would put my hopes on the macros, the type system and other metaprogramming stuff. Those are areas in which C++ doesn't really shine. I think "We, the Gods, decided to give you 2 new keywords now - if you ask nicely, the next release might have 3 more." vs "We give you all the power to create your own constructs." becomes more apparent now that C++ has started to take its steps towards Lisp, expressiveness-wise. Still, if C++1x implements these too, there isn't much need for D anymore besides as a syntactic "skin" over the C++ ugliness.
Aug 20 2007
parent reply eao197 <eao197 intervale.ru> writes:
On Mon, 20 Aug 2007 16:35:41 +0400, Jari-Matti Mäkelä  
<jmjmak utu.fi.invalid> wrote:

 eao197 wrote:

 On Sun, 19 Aug 2007 23:36:07 +0400, Bill Baxter
 <dnewsgroup billbaxter.com> wrote:

 A lot of you probably saw Bjarne Stroustrup's talk on C++0x on the web.
 If not here's the link:
    http://csclub.uwaterloo.ca/media/C++0x%20-%20An%20Overview.html
BTW, there is a C++0x overview in Wikipedia: http://en.wikipedia.org/wiki/C%2B%2B0x It is interesting to know which advantages D (2.0? 3.0? 4.0?) will have over C++0x. Maybe only high-speed compilation and GC.
I would put my hopes on the macros, type system and other metaprogramming stuff.
If someone really needs a flexible macro and metaprogramming subsystem, it is better to look at Nemerle.
 Those are areas in which C++ doesn't really shine.
IMHO, macros and metaprogramming are areas which C++ simply does not need. It is much easier to write some small code generation script in Perl/Ruby/Python and include its result into C++ via '#include'.
 "We give you all the power to create your own constructs."
I'm afraid it would lead to another Lisp-like failure: each Lisper writes his own domain-specific language to solve exactly the same problem. -- Regards, Yauheni Akhotnikau
Aug 20 2007
next sibling parent Bruno Medeiros <brunodomedeiros+spam com.gmail> writes:
eao197 wrote:
 
 "We give you all the power to create your own constructs."
I'm afraid it would lead to another Lisp-like failure: each Lisper writes his own domain-specific language to solve exactly the same problem.
Those are my thoughts exactly on LISP's "power to create your own constructs" issue! -- Bruno Medeiros - MSc in CS/E student http://www.prowiki.org/wiki4d/wiki.cgi?BrunoMedeiros#D
Aug 20 2007
prev sibling parent reply Jari-Matti =?ISO-8859-1?Q?M=E4kel=E4?= <jmjmak utu.fi.invalid> writes:
eao197 wrote:

 On Mon, 20 Aug 2007 16:35:41 +0400, Jari-Matti Mäkelä
 <jmjmak utu.fi.invalid> wrote:
 
 eao197 wrote:

 On Sun, 19 Aug 2007 23:36:07 +0400, Bill Baxter
 <dnewsgroup billbaxter.com> wrote:

 A lot of you probably saw Bjarne Stroustrup's talk on C++0x on the web.
 If not here's the link:
    http://csclub.uwaterloo.ca/media/C++0x%20-%20An%20Overview.html
BTW, there is a C++0x overview in Wikipedia: http://en.wikipedia.org/wiki/C%2B%2B0x It is interesting to know which advantages D (2.0? 3.0? 4.0?) will have over C++0x. Maybe only high-speed compilation and GC.
I would put my hopes on the macros, type system and other metaprogramming stuff.
If someone really needs a flexible macro and metaprogramming subsystem, it is better to look at Nemerle.
It isn't well suited for system programming.
 
 Those are areas in which C++ doesn't really shine.
IMHO, macros and metaprogramming are areas which C++ simply does not need. It is much easier to write some small code generation script in Perl/Ruby/Python and include its result into C++ via '#include'.
Is it better for each C++ coder to write his own domain-specific build tool to solve exactly the same problem than to bolt that functionality into the core language?
 
 "We give you all the power to create your own constructs."
I'm afraid it would lead to another Lisp-like failure: each Lisper writes his own domain-specific language to solve exactly the same problem.
If C++ had had enough compile-time capabilities, many of the new feature proposals would have been implementable at the library level, and thus available to most currently available compilers without the transition costs/delays we see now.

What's so bad about DSLs? They're a typical programming idiom in Lisp, just as functions or assignments are in BCPL-like languages. Also, I think the number of algorithm implementations for Lisp can pretty much be explained by the age and popularity of the language. In case you haven't noticed, there are already 18+ GUI toolkit bindings/implementations for D (according to wiki4d), and D is 41 years younger than Lisp.
Aug 20 2007
parent reply eao197 <eao197 intervale.ru> writes:
On Mon, 20 Aug 2007 17:31:59 +0400, Jari-Matti Mäkelä  
<jmjmak utu.fi.invalid> wrote:

 I would put my hopes on the macros, type system and other  
 metaprogramming
 stuff.
If someone really needs a flexible macro and metaprogramming subsystem, it is better to look at Nemerle.
It isn't well suited for system programming.
What kind of system programming? Writing compilers and writing drivers are both examples of system programming, but Nemerle is well suited to the first, not the second. Does low-level system programming (like drivers) really need macros and/or metaprogramming?
 Those are areas in which C++ doesn't really shine.
IMHO, macros and metaprogramming are areas which C++ simply does not need. It is much easier to write some small code generation script in Perl/Ruby/Python and include its result into C++ via '#include'.

Is it better for each C++ coder to write his own domain-specific build tool to solve exactly the same problem than to bolt that functionality into the core language?
There are build tools more modern than make (like SCons) which make pre-compile-time code generation a smoother task. And after years of C++ development, I've come to the conclusion that using C++ plus some scripting language (like Ruby) is more flexible, easy and fast than using only one main language (like C++/Java).
 In case you haven't noticed,
 there are already 18+ GUI toolkit bindings/implementations (according to
 wiki4d) for D and D is 41 years younger than Lisp.
Do you think that 18+ GUI bindings is a good thing? I know that D has two standard libraries and this is not good at all :( -- Regards, Yauheni Akhotnikau
Aug 20 2007
next sibling parent reply Gregor Kopp <gregor.kopp chello.at> writes:
eao197 wrote:
 I know that D has two standard libraries and this is not good at all :(
That is really annoying! I think that this could break the neck of D.
Aug 20 2007
parent eao197 <eao197 intervale.ru> writes:
On Mon, 20 Aug 2007 18:21:13 +0400, Gregor Kopp <gregor.kopp chello.at>  
wrote:

 eao197 wrote:
 I know that D has two standard libraries and this is not good at all :(
That is really annoying! I think that this could break the neck of D.
I hope this issue will be solved during D Conference. -- Regards, Yauheni Akhotnikau
Aug 20 2007
prev sibling parent reply Jari-Matti =?ISO-8859-1?Q?M=E4kel=E4?= <jmjmak utu.fi.invalid> writes:
eao197 wrote:

 On Mon, 20 Aug 2007 17:31:59 +0400, Jari-Matti Mäkelä
 <jmjmak utu.fi.invalid> wrote:
 
 I would put my hopes on the macros, type system and other
 metaprogramming
 stuff.
If someone really needs a flexible macro and metaprogramming subsystem, it is better to look at Nemerle.
It isn't well suited for system programming.
What kind of system programming? Writing compilers and writing drivers are both examples of system programming, but Nemerle is well suited to the first, not the second. Does low-level system programming (like drivers) really need macros and/or metaprogramming?
If you look at e.g. the Linux sources, there's some heavy use of preprocessor macros. With proper language support that could be made less error prone and more productive. Even some formal proving might become possible.

Nemerle isn't bad as a language, but it doesn't scale to low level stuff as long as it runs on a VM. In my opinion, having a language that scales from driver/kernel level to end user GUI apps, and OTOH from asm opcode level to high level FP constructs, isn't a bad idea. For example, game development is one area where more speed and abstractions are always welcome.

I suppose most application programs will run in a VM in the distant future, but at the moment at least my PCs are crying for mercy if I try to run bigger Java apps (running code analysis for a relatively small Java GUI app took about 2 hours in Eclipse). Flash applets aren't any better: a 300x200 mpeg2 movie can choke the system even though mplayer plays several simultaneous 720p streams just fine. The only .NET app I've used was a video card control panel on Windows. It felt too unresponsive to be usable in the long term.
 
 Those are areas in which C++ doesn't really shine.
 IMHO, macros and metaprogramming are areas which C++ simply does not need. It is much easier to write some small code generation script in Perl/Ruby/Python and include its result into C++ via '#include'.
  Is it better for each C++ coder to write his own domain-specific build tool to solve exactly the same problem than to bolt that functionality into the core language?
There are build tools more modern than make (like SCons) which make pre-compile-time code generation a smoother task. And after years of C++ development, I've come to the conclusion that using C++ plus some scripting language (like Ruby) is more flexible, easy and fast than using only one main language (like C++/Java).
I agree. But I was on another abstraction level. Things like the active and atomic keywords in C++0x are features that are much easier to implement with builtin macro functionality than with 3rd-party preprocessors / build tools.
 
 In case you haven't noticed,
 there are already 18+ GUI toolkit bindings/implementations (according to
 wiki4d) for D and D is 41 years younger than Lisp.
 Do you think that 18+ GUI bindings is a good thing?
Probably not.
 I know that D has two standard libraries and this is not good at all :(
I know :/
Aug 20 2007
next sibling parent eao197 <eao197 intervale.ru> writes:
On Mon, 20 Aug 2007 18:39:44 +0400, Jari-Matti Mäkelä  
<jmjmak utu.fi.invalid> wrote:

 Nemerle isn't bad as a language, but it doesn't scale to low level stuff
 as long as it runs on a VM. In my opinion, having a language that scales
 from driver/kernel level to end user GUI apps, and OTOH from asm opcode
 level to high level FP constructs, isn't a bad idea. For example, game
 development is one area where more speed and abstractions are always
 welcome.
I don't know about game development, but I can mention another area: telecommunications. An SMS/MMS gateway requires many low-level bit/byte transformation operations and much high-level logic, like transaction routing. But I've learnt that more verbose code, written only with standard language features, is much more maintainable than more compact code written with some domain-specific extensions. But maybe it is just my karma :) -- Regards, Yauheni Akhotnikau
Aug 20 2007
prev sibling parent reply Robert Fraser <fraserofthenight gmail.com> writes:
 I suppose most application programs will run in a VM in the distant future,
 but at the moment at least my PCs are crying for mercy if I try to run
 bigger Java apps (running code analysis for a relatively small Java GUI app
 took about 2 hours in Eclipse). Flash applets aren't any better: a 300x200
 mpeg2 movie can choke the system even though mplayer plays several
 simultaneous 720p streams just fine. The only .Net app I've used was a
 video card control panel on Windows. It felt too unresponsive to be usable
 in the long term.
Really? JIT compilers (LaTTe, Sun Java 5+, probably IBM's JVMs, the .NET runtime, Mono...) compile things to native code, so besides a little compilation overhead at startup, they should run just as fast as native code, if not faster. I think one of the problems as far as speed is concerned is that the languages were not designed for pure efficiency, and make heavy use of heap allocation, etc.

That said, there are already some applications that run faster on a JIT VM because the VM can do certain optimizations that a native compiler can't, and, further, can do them transparently (without programmer interaction). Especially as multi-cores become more prevalent, VMs will be able to automatically vectorize/parallelize loops, which means that, if nothing else, they provide the power to spare the "average developer", who might not know so much about optimization or parallelism, from those horrors.

Flash is a different story, as I doubt ActionScript is JITed. As far as Eclipse goes, the IDE does a _lot_ behind the scenes (it compiles any changes every time you stop typing for a couple seconds, marks semantic errors & resolves bindings as you type, etc., etc.), so on slower computers it might sometimes feel a little sluggish. Java's String class is also partly to blame (40 bytes of heap-allocated overhead for every string is a lot), but that's more an issue with the coding style & standard library than with VMs in general.
Aug 20 2007
parent =?UTF-8?B?U3TDqXBoYW4gS29jaGVu?= <stephan kochen.nl> writes:
Robert Fraser schreef:
 That said, there are already some applications that run faster on a JIT VM
because the VM can do certain optimizations that a native compiler can't, and,
further, can do them transparently (without programmer interaction). Especially
as multi-cores become more prevalent, VMs will be able to automatically
vectorize/parallelize loops, which means that, if nothing else, they provide
the power to spare the "average developer", who might not know so much about
optimization or parallelism, from those horrors.
This is something LLVM [1] tries to fix, I think. I skimmed over some articles about it, and they talk about run-time optimization of native code. It's all over the site; a lot of interesting jargon to me. :)
 Flash is a different story, as I doubt ActionScript is JITed. As far as
Eclipse goes, the IDE does a _lot_ behind the scenes (it compiles any changes
every time you stop typing for a couple seconds, marks semantic errors &
resolves bindings as you type, etc., etc.), so on slower computers it might
sometimes feel a little sluggish. Java's String class is also partly to blame
(40 bytes of heap-allocated overhead for every string is a lot), but that's
more an issue with the coding style & standard library than with VMs in general.
Recently, Adobe donated the source of their ActionScript VM and (there it is) JIT compiler to Mozilla. The project is called Tamarin [2]. Lazy web, signing off. :) [1] http://llvm.org/ [2] http://mozilla.org/projects/tamarin/
Aug 20 2007
prev sibling next sibling parent reply Carlos Santander <csantander619 gmail.com> writes:
eao197 escribió:
 On Sun, 19 Aug 2007 23:36:07 +0400, Bill Baxter 
 <dnewsgroup billbaxter.com> wrote:
 
 A lot of you probably saw Bjarne Stroustrup's talk on C++0x on the web.
 If not here's the link:
    http://csclub.uwaterloo.ca/media/C++0x%20-%20An%20Overview.html
BTW, there is a C++0x overview in Wikipedia: http://en.wikipedia.org/wiki/C%2B%2B0x
C++0x will be an enormous, ugly, and scary language...
 It is interesting to know which advantages D (2.0? 3.0? 4.0?) will have over C++0x. Maybe only high-speed compilation and GC.
 
-- Carlos Santander Bernal
Aug 20 2007
parent reply eao197 <eao197 intervale.ru> writes:
On Mon, 20 Aug 2007 18:41:13 +0400, Carlos Santander  
<csantander619 gmail.com> wrote:

 eao197 escribió:
 On Sun, 19 Aug 2007 23:36:07 +0400, Bill Baxter  
 <dnewsgroup billbaxter.com> wrote:

 A lot of you probably saw Bjarne Stroustrup's talk on C++0x on the web.
 If not here's the link:
    http://csclub.uwaterloo.ca/media/C++0x%20-%20An%20Overview.html
BTW, there is a C++0x overview in Wikipedia: http://en.wikipedia.org/wiki/C%2B%2B0x
C++0x will be an enormous, ugly, and scary language...
Will be? It is! :) But it is here, and will be here. And D is just growing. -- Regards, Yauheni Akhotnikau
Aug 20 2007
parent Carlos Santander <csantander619 gmail.com> writes:
eao197 escribió:
 On Mon, 20 Aug 2007 18:41:13 +0400, Carlos Santander 
 <csantander619 gmail.com> wrote:
 
 eao197 escribió:
 On Sun, 19 Aug 2007 23:36:07 +0400, Bill Baxter 
 <dnewsgroup billbaxter.com> wrote:

 A lot of you probably saw Bjarne Stroustrup's talk on C++0x on the web.
 If not here's the link:
    http://csclub.uwaterloo.ca/media/C++0x%20-%20An%20Overview.html
BTW, there is a C++0x overview in Wikipedia: http://en.wikipedia.org/wiki/C%2B%2B0x
C++0x will be an enormous, ugly, and scary language...
Will be? It is! :)
Hehe, true.
 But it is here, and will be here. And D is just growing.
 
But it's not getting ugly or scary... Big difference... ;) -- Carlos Santander Bernal
Aug 20 2007
prev sibling parent reply Jascha Wetzel <"[firstname]" mainia.de> writes:
eao197 wrote:
 On Sun, 19 Aug 2007 23:36:07 +0400, Bill Baxter 
 <dnewsgroup billbaxter.com> wrote:
 
 A lot of you probably saw Bjarne Stroustrup's talk on C++0x on the web.
 If not here's the link:
    http://csclub.uwaterloo.ca/media/C++0x%20-%20An%20Overview.html
BTW, there is a C++0x overview in Wikipedia: http://en.wikipedia.org/wiki/C%2B%2B0x It is interesting to know which advantages D (2.0? 3.0? 4.0?) will have over C++0x. Maybe only high-speed compilation and GC.
bjarne stroustrup is talking about an ongoing discussion about GC features getting into the standard. but obviously, none of C++'s real problems will be fixed in C++0x, since that would require removing features, not adding them. on the other hand, they didn't even get the ABI/linking issues into the standard.

backward compatibility is bad. C++ and Windows are the most prominent examples of that. it's better to have cuts every now and then and provide separate tools that ease the transition. i don't see any of D's potential diminished by C++0x.

i think it's curious how much time bjarne stroustrup spends explaining how constrained C++'s language design process is.
Aug 20 2007
parent Sean Kelly <sean f4.ca> writes:
Jascha Wetzel wrote:
 
 i think it's curious how much time bjarne stroustrup spends explaining
 how constrained C++'s language design process is.
I don't :-) Bjarne may have created C++, but he hasn't had any real control over the language for perhaps the last fifteen years. Still, Bjarne is the one people look to when wondering why C++ doesn't have some feature they consider important (as this interview can attest). What else can he do but explain, again, why things are the way they are? Sean
Aug 20 2007
prev sibling next sibling parent reply Robert Fraser <fraserofthenight gmail.com> writes:
eao197 Wrote:

 Yes! But C++ is doing that without breaking the existing codebase. So a significant amount of C++ programmers needn't look to D -- they will have new advanced features without dropping their old tools, IDEs and libraries.

 I'm afraid that would play against D :(

 Current C++ is far behind D, but D is not stable, not mature, not as equipped with tools/libraries as C++. So it will take several years to make D competitive with C++ in that area. But if in 2010 (it is only 2.5 years ahead) C++ will have things like lambdas and autos (and tons of libraries and an army of programmers), what will be D's 'killer feature' to attract C++ programmers? And not only C++; at that time D would compete with new functional languages (like Haskell and OCaml).
Aug 20 2007
parent reply eao197 <eao197 intervale.ru> writes:
On Mon, 20 Aug 2007 23:26:33 +0400, Robert Fraser  
<fraserofthenight gmail.com> wrote:

 eao197 Wrote:

 Yes! But C++ is doing that without breaking the existing codebase. So a significant amount of C++ programmers needn't look to D -- they will have new advanced features without dropping their old tools, IDEs and libraries.

 I'm afraid that would play against D :(

 Current C++ is far behind D, but D is not stable, not mature, not as equipped with tools/libraries as C++. So it will take several years to make D competitive with C++ in that area. But if in 2010 (it is only 2.5 years ahead) C++ will have things like lambdas and autos (and tons of libraries and an army of programmers), what will be D's 'killer feature' to attract C++ programmers? And not only C++; at that time D would compete with new functional languages (like Haskell and OCaml).

You seem to forget that D is evolving, too. C++ might get a lot of the cool D features (albeit with ugly syntax), but by that time, D might have superpowers incomprehensible to the C++ mind.
I didn't. From my point of view, permanent evolution is D's main problem. I can't start using D in my work regularly because D and Tango are not stable enough. I can't start teaching students D because D 1.0 is obsolete and D 2.0 is not finished yet.

To outperform C++ in 2009-2010, D must have full strength now and must be stable for some years to prove that strength in some killer applications.

-- Regards, Yauheni Akhotnikau
Aug 20 2007
next sibling parent Charles D Hixson <charleshixsn earthlink.net> writes:
eao197 wrote:
 On Mon, 20 Aug 2007 23:26:33 +0400, Robert Fraser 
 <fraserofthenight gmail.com> wrote:
 
 eao197 Wrote:
 ...
I didn't. From my point of view, permanent envolvement is a main D's problem. I can't start use D on my work regulary because D and Tango is not stable enough. I can't start teach students D because D 1.0 is obsolete and D 2.0 is not finished yet. To outperform C++ in 2009-2010 D must have full strength now and must be stable during some years to proof that strength in some killer applications.
To me it seems that D's main current problem is lack of dependable libraries. A secondary problem is lack of run-time flexibility (ala Python, etc.), but that may be intractable in a language that intends to be fast. Well... the libraries problem is intractable, also. Just, perhaps, less so. OTOH, it is crucial that new releases not break working libraries. If they do it will not only prevent the accumulation over time of working libraries, but will also discourage people from working on them.
Aug 20 2007
prev sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
eao197 wrote:
 On Mon, 20 Aug 2007 23:26:33 +0400, Robert Fraser 
 You seem to forget that D is evolving, too. C++ might get a lot of the
 cool D features (albeit with ugly syntax), but by that time, D might
 have superpowers incomprehensible to the C++ mind.
I didn't. From my point of view, permanent evolution is D's main problem. I can't start using D in my work regularly because D and Tango are not stable enough. I can't start teaching students D because D 1.0 is obsolete and D 2.0 is not finished yet.
I don't understand this. You could as well say that C++98 is obsolete and C++0x is not finished yet.
 To outperform C++ in 2009-2010 D must have full strength now and must be
 stable for some years to prove that strength in some killer
 applications.
C++0x's new features are essentially all present in D 1.0.
Aug 22 2007
next sibling parent reply Bill Baxter <dnewsgroup billbaxter.com> writes:
Walter Bright wrote:
 eao197 wrote:
 On Mon, 20 Aug 2007 23:26:33 +0400, Robert Fraser
 You seem to forget that D is evolving, too. C++ might get a lot of
 the cool D features (albeit with ugly syntax), but by that time, D
 might have superpowers incomprehensible to the C++ mind.
I didn't. From my point of view, permanent evolution is D's main problem. I can't start using D in my work regularly because D and Tango are not stable enough. I can't start teaching students D because D 1.0 is obsolete and D 2.0 is not finished yet.
I don't understand this. You could as well say that C++98 is obsolete and C++0x is not finished yet.
 To outperform C++ in 2009-2010 D must have full strength now and must
 be stable for some years to prove that strength in some killer
 applications.
C++0x's new features are essentially all present in D 1.0.
..but C++98's features that were missing from D are still missing (both good and bad ones). --bb
Aug 22 2007
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Bill Baxter wrote:
 Walter Bright wrote:
 C++0x's new features are essentially all present in D 1.0.
..but C++98's features that were missing from D are still missing (both good and bad ones).
Like what? Virtual base classes? Argument dependent lookup? #include files? C++ can keep them <g>.
Aug 23 2007
parent reply Bill Baxter <dnewsgroup billbaxter.com> writes:
Walter Bright wrote:
 Bill Baxter wrote:
 Walter Bright wrote:
 C++0x's new features are essentially all present in D 1.0.
..but C++98's features that were missing from D are still missing (both good and bad ones).
Like what? Virtual base classes? Argument dependent lookup? #include files? C++ can keep them <g>.
The things that have me banging my head most often are:

1) The few things preventing an implementation of smart pointers [destructors, copy constructors and opDot]. There are some cases where you just want to refcount objects. This is the one hole in D that I haven't heard any reasonable workaround for. I don't necessarily _want_ copy constructors in general, but they seem to be necessary for implementing automatic reference counting.
2) Lack of a way to return a reference.
3) From what I can tell, "const ref" doesn't work for parameters in D 2.0.
4) Real struct constructors. Just a syntactic annoyance, but still an annoyance.

--bb
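To make point 1 concrete, this is roughly the struct one would like to be able to write, with the impossible parts commented out (the ctor/copy-ctor/dtor syntax below is invented for illustration; none of it exists in D 1.0):

// wishful sketch: a refcounted smart pointer as a struct.
// the commented members are exactly the missing features.
struct RefCounted(T)
{
    T* payload;
    int* count;

    // this(T* p)  { payload = p; count = new int; *count = 1; }  // struct ctor
    // this(this)  { ++*count; }                                  // copy ctor
    // ~this()     { if (--*count == 0) delete payload; }         // dtor
}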
Aug 23 2007
next sibling parent reply Bill Baxter <dnewsgroup billbaxter.com> writes:
Bill Baxter wrote:
 Walter Bright wrote:
 Bill Baxter wrote:
 Walter Bright wrote:
 C++0x's new features are essentially all present in D 1.0.
..but C++98's features that were missing from D are still missing (both good and bad ones).
Like what? Virtual base classes? Argument dependent lookup? #include files? C++ can keep them <g>.
The things that have me banging my head most often are 1) the few things preventing an implementation of smart pointers [destructors, copy constructors and opDot]. There are some cases where you just want to refcount objects. This is the one hole in D that I haven't heard any reasonable workaround for. I don't necessarily _want_ copy constructors in general but they seem to be necessary for implementing automatic reference counting.
Sorry for the self-follow-up, but I just wanted to add that really C++ smart pointers are themselves kind of klunky, due to the fact that _all_ you have access to is that operator*/operator-> thing. So for instance if you make a boost::shared_ptr<std::map>, you end up always having to dereference to do anything interesting involving operator overloads: mymap["foo"] doesn't work, you need to use (*mymap)["foo"].

What you really want most of the time is something more like "smart references". This kind of thing is coming close to possibility with the reflection stuff some people are doing. Basically shared_ptr!(T) would do introspection on T and populate itself with basic forward-to-T implementations of all of T's methods. But that seems kind of heavyweight to me. All you really want to do is define a fallback -- when the compiler sees foo[x] and foo is a shared_ptr!(T), there should be a way to tell it to check T for an opIndex if the shared_ptr itself doesn't have one. That would handle the access syntax. But that still leaves the destructor/copy constructors necessary to get a real smart pointer.
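Here's a minimal sketch of the manual forwarding that one would want the compiler to generate automatically (SmartRef and its type parameters are invented names; this version only handles opIndex, which is exactly the tedium the automatic fallback would remove):

// sketch: hand-written opIndex forwarding to the pointee
struct SmartRef(T, Key, Value)
{
    T* ptr;

    Value opIndex(Key k)
    {
        return (*ptr)[k];
    }

    void opIndexAssign(Value v, Key k)
    {
        (*ptr)[k] = v;
    }
}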
 2) lack of a way to return a reference.
This would also be less critical given a way to fall back to a member's implementation.
 3) From what I can tell "const ref" doesn't work for parameters in D 
 2.0. Oh, and
 4) real struct constructors.  Just a syntactic annoyance, but still an 
 annoyance.
--bb
Aug 23 2007
parent reply Regan Heath <regan netmail.co.nz> writes:
Bill Baxter wrote:
 Bill Baxter wrote:
 Walter Bright wrote:
 Bill Baxter wrote:
 Walter Bright wrote:
 C++0x's new features are essentially all present in D 1.0.
..but C++98's features that were missing from D are still missing (both good and bad ones).
Like what? Virtual base classes? Argument dependent lookup? #include files? C++ can keep them <g>.
The things that have me banging my head most often are 1) the few things preventing an implementation of smart pointers [destructors, copy constructors and opDot]. There are some cases where you just want to refcount objects. This is the one hole in D that I haven't heard any reasonable workaround for. I don't necessarily _want_ copy constructors in general but they seem to be necessary for implementing automatic reference counting.
Sorry for the self-follow-up, but I just wanted to add that really C++ smart pointers are themselves kind of klunky, due to the fact that _all_ you have access to is that operator*/operator-> thing. So for instance if you make a boost::shared_ptr<std::map>, you end up always having to dereference to do anything interesting involving operator overloads: mymap["foo"] doesn't work, you need to use (*mymap)["foo"].

What you really want most of the time is something more like "smart references". This kind of thing is coming close to possibility with the reflection stuff some people are doing. Basically shared_ptr!(T) would do introspection on T and populate itself with basic forward-to-T implementations of all of T's methods. But that seems kind of heavyweight to me. All you really want to do is define a fallback -- when the compiler sees foo[x] and foo is a shared_ptr!(T), there should be a way to tell it to check T for an opIndex if the shared_ptr itself doesn't have one. That would handle the access syntax. But that still leaves the destructor/copy constructors necessary to get a real smart pointer.
 2) lack of a way to return a reference.
This would also be less critical given a way to fall-back to a member's implementation.
Funny, after reading your post I was thinking that you would provide a way to fall back by returning a reference :P

e.g.

ref T opDereference() { return ptr; }

which would then automatically be called when using [], ., etc. on a T*.

I guess we wait and see what Walter cooks up for us in 2.0 :)

Regan
Aug 24 2007
parent Bill Baxter <dnewsgroup billbaxter.com> writes:
Regan Heath wrote:
 Bill Baxter wrote:
 Bill Baxter wrote:
 Walter Bright wrote:
 Bill Baxter wrote:
 Walter Bright wrote:
 C++0x's new features are essentially all present in D 1.0.
..but C++98's features that were missing from D are still missing (both good and bad ones).
Like what? Virtual base classes? Argument dependent lookup? #include files? C++ can keep them <g>.
The things that have me banging my head most often are 1) the few things preventing an implementation of smart pointers [destructors, copy constructors and opDot]. There are some cases where you just want to refcount objects. This is the one hole in D that I haven't heard any reasonable workaround for. I don't necessarily _want_ copy constructors in general but they seem to be necessary for implementing automatic reference counting.
Sorry for the self-follow-up, but I just wanted to add that really C++ smart pointers are themselves kind of klunky, due to the fact that _all_ you have access to is that operator*/operator-> thing. So for instance if you make a boost::shared_ptr<std::map>, you end up always having to dereference to do anything interesting involving operator overloads: mymap["foo"] doesn't work, you need to use (*mymap)["foo"].

What you really want most of the time is something more like "smart references". This kind of thing is coming close to possibility with the reflection stuff some people are doing. Basically shared_ptr!(T) would do introspection on T and populate itself with basic forward-to-T implementations of all of T's methods. But that seems kind of heavyweight to me. All you really want to do is define a fallback -- when the compiler sees foo[x] and foo is a shared_ptr!(T), there should be a way to tell it to check T for an opIndex if the shared_ptr itself doesn't have one. That would handle the access syntax. But that still leaves the destructor/copy constructors necessary to get a real smart pointer.
 2) lack of a way to return a reference.
This would also be less critical given a way to fall back to a member's implementation.
Funny, after reading your post I was thinking that you would provide a way to fall back by returning a reference :P e.g. ref T opDereference() { return ptr; } which would then automatically be called when using [], ., etc. on a T*. I guess we wait and see what Walter cooks up for us in 2.0 :)
Really I'd rather have something that gives a little more control. Returning a reference is like pulling down your pants in public. --bb
Aug 24 2007
prev sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
Bill Baxter wrote:
 The things that have me banging my head most often are
 1) the few things preventing an implementation of smart pointers 
 [destructors, copy constructors and opDot].  There are some cases where 
 you just want to refcount objects.  This is the one hole in D that I 
 haven't heard any reasonable workaround for.  I don't necessarily _want_ 
 copy constructors in general but they seem to be necessary for 
 implementing automatic reference counting.
 2) lack of a way to return a reference.
 3) From what I can tell "const ref" doesn't work for parameters in D 
 2.0. Oh, and
 4) real struct constructors.  Just a syntactic annoyance, but still an 
 annoyance.
These will all be addressed in 2.0.
Aug 23 2007
parent Bill Baxter <dnewsgroup billbaxter.com> writes:
Walter Bright wrote:
 Bill Baxter wrote:
 The things that have me banging my head most often are
 1) the few things preventing an implementation of smart pointers 
 [destructors, copy constructors and opDot].  There are some cases 
 where you just want to refcount objects.  This is the one hole in D 
 that I haven't heard any reasonable workaround for.  I don't 
 necessarily _want_ copy constructors in general but they seem to be 
 necessary for implementing automatic reference counting.
 2) lack of a way to return a reference.
 3) From what I can tell "const ref" doesn't work for parameters in D 
 2.0. Oh, and
 4) real struct constructors.  Just a syntactic annoyance, but still an 
 annoyance.
These will all be addressed in 2.0.
Hot diggity. Looking forward to it. --bb
Aug 24 2007
prev sibling next sibling parent reply Reiner Pope <some address.com> writes:
Walter Bright wrote:
 eao197 wrote:
 On Mon, 20 Aug 2007 23:26:33 +0400, Robert Fraser
 You seem to forget that D is evolving, too. C++ might get a lot of
 the cool D features (albeit with ugly syntax), but by that time, D
 might have superpowers incomprehensible to the C++ mind.
I didn't. From my point of view, permanent evolution is D's main problem. I can't start using D in my work regularly because D and Tango are not stable enough. I can't start teaching students D because D 1.0 is obsolete and D 2.0 is not finished yet.
I don't understand this. You could as well say that C++98 is obsolete and C++0x is not finished yet.
 To outperform C++ in 2009-2010 D must have full strength now and must
 be stable for some years to prove that strength in some killer
 applications.
C++0x's new features are essentially all present in D 1.0.
All except Concepts. I know there was a small discussion of Concepts here after someone posted a Doug Gregor video on Concepts, but other than that they haven't really got much attention.

I know that a lot of the problems they solve in simplifying template error messages can be done alternatively in D with static-if, is() and now __traits, in conjunction with the 'static unittest' idiom, but even then, I think C++0x Concepts give a nicer syntax for expressing exactly what you want, and they also allow overloading on Concepts (which AFAIK there is no way to emulate in D).

Two characteristic examples (the first one is in would-be D with Concepts):

// if D had Concepts
void sort(T :: RandomAccessIteratorConcept)(T t) {...}

// currently
void sort(T)(T t)
{
    static assert(IsRandomAccessIterator!(T),
                  T.stringof ~ " isn't a random access iterator");
    ...
}
alias sort!(MinimalRandomAccessIterator) _sort__UnitTest;

It isn't syntactically clean, so people won't be encouraged to support this idiom, and it doesn't allow the Concepts features of overloading or concept maps (I think concept maps can be emulated, but they currently break IFTI).

I'm interested in knowing your thoughts/plans for this.

-- Reiner
Aug 23 2007
parent reply Reiner Pope <some address.com> writes:
Reiner Pope wrote:
 Walter Bright wrote:
 eao197 wrote:
 On Mon, 20 Aug 2007 23:26:33 +0400, Robert Fraser
 You seem to forget that D is evolving, too. C++ might get a lot of
 the cool D features (albeit with ugly syntax), but by that time, D
 might have superpowers incomprehensible to the C++ mind.
I didn't. From my point of view, permanent evolution is D's main problem. I can't start using D in my work regularly because D and Tango are not stable enough. I can't start teaching students D because D 1.0 is obsolete and D 2.0 is not finished yet.
I don't understand this. You could as well say that C++98 is obsolete and C++0x is not finished yet.
 To outperform C++ in 2009-2010 D must have full strength now and must
 be stable for some years to prove that strength in some killer
 applications.
C++0x's new features are essentially all present in D 1.0.
All except Concepts. I know there was a small discussion of Concepts here after someone posted a Doug Gregor video on Concepts, but other than that they haven't really got much attention.

I know that a lot of the problems they solve in simplifying template error messages can be done alternatively in D with static-if, is() and now __traits, in conjunction with the 'static unittest' idiom, but even then, I think C++0x Concepts give a nicer syntax for expressing exactly what you want, and they also allow overloading on Concepts (which AFAIK there is no way to emulate in D).

Two characteristic examples (the first one is in would-be D with Concepts):

// if D had Concepts
void sort(T :: RandomAccessIteratorConcept)(T t) {...}

// currently
void sort(T)(T t)
{
    static assert(IsRandomAccessIterator!(T),
                  T.stringof ~ " isn't a random access iterator");
    ...
}
alias sort!(MinimalRandomAccessIterator) _sort__UnitTest;

It isn't syntactically clean, so people won't be encouraged to support this idiom, and it doesn't allow the Concepts features of overloading or concept maps (I think concept maps can be emulated, but they currently break IFTI).

I'm interested in knowing your thoughts/plans for this.
I see Walter has now said elsewhere in this thread that 'concepts aren't a whole lot more than interface specialization, which is already supported in D.' True; what I'm really wondering, though, is:

1. Will specialisation be "fixed" to work with IFTI?
2. Will there be a way to support user-defined specialisations, for instance ones which don't depend on the inheritance hierarchy?

-- Reiner
Aug 23 2007
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Reiner Pope wrote:
  1. Will specialisation be "fixed" to work with IFTI?
You can simply specialize the parameter to the function.
  2. Will there be a way to support user-defined specialisations, for 
 instance ones which don't depend on the inheritance hierarchy?
I don't know what that means - interfaces are already user-defined.
Aug 23 2007
parent reply Reiner Pope <some address.com> writes:
Walter Bright wrote:
 Reiner Pope wrote:
  1. Will specialisation be "fixed" to work with IFTI?
You can simply specialize the parameter to the function.
I'm not sure what you mean. But what I refer to is the part of the spec (the templates page, under Function Templates) that says "Function template type parameters that are to be implicitly deduced may not have specializations:" and gives the example:

void Foo(T : T*)(T t) { ... }

int x,y;
Foo!(int*)(x); // ok, T is not deduced from function argument
Foo(&y);       // error, T has specialization

Perhaps you mean that you can write

void Foo(T)(T* t) { ... }
...
int x;
Foo(&x);

Sure. But the following doesn't work:

void Foo(T)(T t) { ... }
void Foo(T)(T* t) { /* different implementation for this specialisation */ }
...
int x;
Foo(x);
Foo(&x); // ambiguous

and using template parameter specialisation, IFTI breaks.
 
  2. Will there be a way to support user-defined specialisations, for 
 instance ones which don't depend on the inheritance hierarchy?
I don't know what that means - interfaces are already user-defined.
They are, but they only allow you to stipulate requirements on the type's place in the inheritance hierarchy. Two things that inheritance doesn't cover are structural conformance and complicated predicates. Structural conformance is clearly important simply because templates make it possible and it avoids the overheads of inheriting from an interface. This is what C++ Concepts have over D interface specialisation. As to complicated predicates, I refer to the common idiom in D templates which looks like the following:

template Foo(T)
{
    static assert(SomeComplicatedRequirement!(T), "T doesn't meet condition");
    ... // implementation
}

(SomeComplicatedRequirement is something inexpressible with the inheritance system; something like "a static array with a size that is a multiple of 1KB")

Some people have suggested (Don Clugston, from memory) that failing the static assert should cause the compiler to try another template overload. I thought this would be easier if you allowed custom specialisations on templates. This would allow the above idiom to turn into something like

template Foo(T :: SomeComplicatedRequirement) { ... }

(The rest is just how I think it should work)

The user-defined specialisation would be an alias which must define two templates which can answer the two questions:

 - does a given type meet the requirements of this specialisation?
 - is this specialisation a superset or subset of this other specialisation, or can't you tell? (giving the partial ordering rules)

This allows user-defined predicates to fit in neatly with partial ordering of templates. -- Reiner
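(To make the idiom concrete, here is a compilable sketch with a stand-in predicate; the names and the condition are illustrative, not the 1KB static-array check.)

// stand-in for a predicate the inheritance system can't express
template SomeComplicatedRequirement(T)
{
    // illustrative condition: T supports addition and is wider than 2 bytes
    const bool SomeComplicatedRequirement =
        is(typeof(T.init + T.init)) && T.sizeof > 2;
}

template Foo(T)
{
    static assert(SomeComplicatedRequirement!(T),
        T.stringof ~ " doesn't meet condition");
    void foo(T t) { }
}

alias Foo!(int) _FooIntOk;       // passes the requirement
//alias Foo!(byte) _FooByteErr;  // would trip the static assert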
Aug 24 2007
next sibling parent Oskar Linde <oskar.lindeREM OVEgmail.com> writes:
Reiner Pope wrote:
 Walter Bright wrote:
 Reiner Pope wrote:
  1. Will specialisation be "fixed" to work with IFTI?
You can simply specialize the parameter to the function.
I'm not sure what you mean.
Neither am I... [snip]
 Sure. But the following doesn't work:
 
 void Foo(T)(T t) { ... }
 void Foo(T)(T* t) { /* different implementation for this specialisation 
  */ }
 ...
 int x;
 Foo(x);
 Foo(&x); // ambiguous
 
 and using template parameter specialisation, IFTI breaks.
This is the workaround I've been using:

import std.stdio;

void Foo_(T: T*)(T* a) { writefln("ptr"); }
void Foo_(T)(T a) { writefln("non-ptr"); }

// dispatcher
void Foo(T)(T x)
{
    Foo_!(T)(x);
}

void main()
{
    int x;
    Foo(x);
    Foo(&x);
}
  2. Will there be a way to support user-defined specialisations, for 
 instance ones which don't depend on the inheritance hierarchy?
I don't know what that means - interfaces are already user-defined.
They are, but they only allow you to stipulate requirements on the type's place in the inheritance hierarchy. Two things that inheritance doesn't cover are structural conformance and complicated predicates. Structural conformance is clearly important simply because templates make it possible and it avoids the overheads of inheriting from an interface. This is what C++ Concepts have over D interface specialisation. As to complicated predicates, I refer to the common idiom in D templates which looks like the following:

template Foo(T)
{
    static assert(SomeComplicatedRequirement!(T), "T doesn't meet condition");
    ... // implementation
}

(SomeComplicatedRequirement is something inexpressible with the inheritance system; something like "a static array with a size that is a multiple of 1KB")

Some people have suggested (Don Clugston, from memory) that failing the static assert should cause the compiler to try another template overload. I thought this would be easier if you allowed custom specialisations on templates. This would allow the above idiom to turn into something like

template Foo(T :: SomeComplicatedRequirement) { ... }
 The user-defined specialisation would be an alias which must define two 
 templates which can answer the two questions:
 
  - does a given type meet the requirements of this specialisation?
  - is this specialisation a superset or subset of this other 
 specialisation, or can't you tell? (giving the partial ordering rules)
 
 This allows user-defined predicates to fit in neatly with partial 
 ordering of templates.
My suggestion has been the following:

template Foo(T : <compile time expression yielding boolean value>)

where the expression may depend on T. E.g:

template Foo(T: RandomIndexableContainer!(T))
{
    ...
}

template RandomIndexableContainer(T)
{
    const RandomIndexableContainer =
        HasMember!(T, "ValueType") &&
        HasMember!(T, "length") &&
        HasMember!(T, "opIndex", int);
}

Even something like this should be possible:

struct RandomIndexableContainerConcept {...}

template Foo(T: Implements!(T, RandomIndexableContainerConcept))
{
}

or something. This suggestion lacks the partial ordering of specializations, but those could probably be imposed on a case-by-case basis by nesting the conditions. -- Oskar
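(HasMember is left undefined above; individual checks of that sort can be approximated in today's D with is(typeof(...)). A rough sketch, with names of my own choosing:)

// per-member approximations of a HasMember-style predicate
template HasLength(T)
{
    const bool HasLength = is(typeof(T.init.length));
}

template HasOpIndex(T, Index)
{
    const bool HasOpIndex = is(typeof(T.init[Index.init]));
}

static assert(HasLength!(int[]));        // arrays have .length
static assert(HasOpIndex!(int[], int));  // and indexing with an int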
Aug 24 2007
prev sibling parent Walter Bright <newshound1 digitalmars.com> writes:
Reiner Pope wrote:
 Perhaps you mean that you can write
 
 void Foo(T)(T* t) { ... }
 ...
 int x;
 Foo(&x);
 
 Sure. But the following doesn't work:
 
 void Foo(T)(T t) { ... }
 void Foo(T)(T* t) { /* different implementation for this specialisation 
  */ }
 ...
 int x;
 Foo(x);
 Foo(&x); // ambiguous
 
 and using template parameter specialisation, IFTI breaks.
You can write the templates as:

void Foo(T)(T t) { ... }
void Foo(T, dummy=void)(T* t) { /* different implementation for this specialisation */ }

Not so pretty, but it works.
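(A quick self-contained check of that workaround, assuming overload resolution behaves as described:)

import std.stdio;

void Foo(T)(T t) { writefln("general"); }
void Foo(T, dummy = void)(T* t) { writefln("pointer"); }

void main()
{
    int x;
    Foo(x);   // general overload, T = int
    Foo(&x);  // pointer overload, T = int, dummy = void
}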
 As to complicated predicates, I refer to the common idiom in D templates 
 which looks like the following:
Sean Kelly had a solution for that of the form:
 More often, I use an additional value parameter to specialize against:
 
 template Foo(T, bool isValid : true = PassesSomeTest!(T)) {}
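(Fleshed out with a hypothetical PassesSomeTest, that idiom might look like this:)

// hypothetical predicate
template PassesSomeTest(T)
{
    const bool PassesSomeTest = is(T == int) || is(T == long);
}

// only matches when the default argument evaluates to true
template Foo(T, bool isValid : true = PassesSomeTest!(T))
{
    void foo(T t) { }
}

alias Foo!(int) _ok;      // PassesSomeTest!(int) is true
//alias Foo!(char) _err;  // no match: isValid deduces to false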
Aug 26 2007
prev sibling parent reply eao197 <eao197 intervale.ru> writes:
On Thu, 23 Aug 2007 10:14:39 +0400, Walter Bright  
<newshound1 digitalmars.com> wrote:

 eao197 wrote:
 On Mon, 20 Aug 2007 23:26:33 +0400, Robert Fraser
 You seem to forget that D is evolving, too. C++ might get a lot of the  
 cool D features (albeit with ugly syntax), but by that time, D might  
 have superpowers incomprehensible to the C++ mind.
I didn't. From my point of view, permanent evolution is D's main problem. I can't start using D at work regularly because D and Tango are not stable enough. I can't start teaching students D because D 1.0 is obsolete and D 2.0 is not finished yet.
I don't understand this. You could as well say that C++98 is obsolete and C++0x is not finished yet.
AFAIK, C++0x doesn't break compatibility with C++98. So if I teach students C++98 now, they could use C++0x later. Moreover, they could use all their C++98 code in C++0x. Now I see D 2.0 as a very different language from D 1.0.
 To outperform C++ in 2009-2010 D must have full strength now and must  
 be stable during some years to prove that strength in some killer  
 applications.
C++0x's new features are essentially all present in D 1.0.
Yes, but C++ doesn't require programmers to change their language, tools and libraries. Such a change requires a lot of time and effort -- effort that could be applied to current projects instead of switching to D. But if D could offer something else, something completely missing from C++0x (like non-null references/pointers, some kind of functional programming (pattern-matching) and so on), then such switching would be much more attractive. I know that you work very hard on D, but D 1.0 took almost 7 years. D 2.0 started in 2007, so final D 2.0 could be in 2014? -- Regards, Yauheni Akhotnikau
Aug 23 2007
next sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
eao197 wrote:
 AFAIK, C++0x doesn't break compatibility with C++98. So if I teach 
 students C++98 now, they could use C++0x later. Moreover, they could 
 use all their C++98 code in C++0x.
It's not a perfect superset, but the breakage is very small.
 Now I see D 2.0 as a very different language from D 1.0.
There is more breakage from 1.0 to 2.0, but the changes required are straightforward to find and correct.
 To outperform C++ in 2009-2010 D must have full strength now and must 
 be stable during some years to prove that strength in some killer 
 applications.
C++0x's new features are essentially all present in D 1.0.
Yes, but C++ doesn't require programmers to change their language, tools and libraries. Such a change requires a lot of time and effort -- effort that could be applied to current projects instead of switching to D. But if D could offer something else, something completely missing from C++0x (like non-null references/pointers, some kind of functional programming (pattern-matching) and so on), then such switching would be much more attractive.
D 1.0 provides a lot of things completely missing in C++0x:

1) unit tests
2) documentation generation
3) modules
4) string mixins
5) template string & floating point parameters
6) compile time function execution
7) contract programming
8) nested functions
9) inner classes
10) delegates
11) scope statement
12) try-finally statement
13) static if
14) exported templates that are implementable
15) compilation speeds that are an order of magnitude faster
16) unambiguous template syntax
17) easy creation of tools that need to parse D code
18) synchronized functions
19) template metaprogramming that can be done by mortals
20) comprehensive support for array slicing
21) inline assembler
22) no crazy quilt dependent/non-dependent 2 level lookup rules that major compilers still get wrong and for which I still regularly get 'bug' reports because DMC++ does it according to the Standard
23) standard I/O that runs several times faster
24) portable sizes for types
25) guaranteed initialization
26) out function parameters
27) imaginary types
28) forward referencing of declarations
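(By way of illustration, a toy snippet touching a few of those items -- unit tests, CTFE, static if and the scope statement; a sketch for DMD 1.006 or later, not canonical style:)

import std.stdio;

// 6) compile time function execution
int square(int x) { return x * x; }

// 13) static if, with the condition evaluated at compile time
static if (square(4) == 16)
    const bool checked = true;

// 1) built-in unit tests, run with -unittest
unittest
{
    assert(square(3) == 9);
}

void main()
{
    scope(exit) writefln("done");  // 11) scope statement
    writefln("square(5) = %d", square(5));
}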
 I know that you work very hard on D, but D 1.0 took almost 7 years. D 
 2.0 started in 2007, so final D 2.0 could be in 2014?
Even if it does take that long, D 1.0 is still far ahead, and is available now. To see how much more productive D is, compare Kirk McDonald's amazing PyD http://pyd.dsource.org/dconf2007/presentation.html with Boost Python. To see what D can do that C++ can't touch, see Don Clugston's incredible optimal code generator at http://s3.amazonaws.com/dconf2007/Don.ppt
Aug 25 2007
parent eao197 <eao197 intervale.ru> writes:
First of all -- thanks for your patience!

On Sun, 26 Aug 2007 10:35:47 +0400, Walter Bright  
<newshound1 digitalmars.com> wrote:

 Now I see D 2.0 as a very different language from D 1.0.
There is more breakage from 1.0 to 2.0, but the changes required are straightforward to find and correct.
Yes, but I mean changes not only in syntax, but also in program design. See yet another comment on that below.
 C++0x's new features are essentially all present in D 1.0.
Yes, but C++ doesn't require programmers to change their language, tools and libraries. Such a change requires a lot of time and effort -- effort that could be applied to current projects instead of switching to D. But if D could offer something else, something completely missing from C++0x (like non-null references/pointers, some kind of functional programming (pattern-matching) and so on), then such switching would be much more attractive.
D 1.0 provides a lot of things completely missing in C++0x:

1) unit tests
2) documentation generation
3) modules
4) string mixins
5) template string & floating point parameters
6) compile time function execution
7) contract programming
8) nested functions
9) inner classes
10) delegates
11) scope statement
12) try-finally statement
13) static if
14) exported templates that are implementable
15) compilation speeds that are an order of magnitude faster
16) unambiguous template syntax
17) easy creation of tools that need to parse D code
18) synchronized functions
19) template metaprogramming that can be done by mortals
20) comprehensive support for array slicing
21) inline assembler
22) no crazy quilt dependent/non-dependent 2 level lookup rules that major compilers still get wrong and for which I still regularly get 'bug' reports because DMC++ does it according to the Standard
23) standard I/O that runs several times faster
24) portable sizes for types
25) guaranteed initialization
26) out function parameters
27) imaginary types
28) forward referencing of declarations
In November 2006 in a Russian developers forum I noticed [1] the following advantages of D:

1) fixed and portable data type sizes (byte, short, ...);
2) type properties (like .min, .max, ...);
3) all variables and members have default init values;
4) local variables can't be defined without initial values;
5) type inference in 'auto' declaration and in foreach;
6) unified type casting with 'cast';
7) strict 'typedef' and relaxed 'alias';
8) arrays have 'length' property and slicing operations;
9) exception in switch if no appropriate 'case';
10) string values in 'case';
11) static constructors and destructors for classes/modules;
12) class invariants;
13) unit tests;
14) static assert;
15) Error as root for all exception classes;
16) scope constructs;
17) nested classes, structs, functions;
18) there aren't macros, all symbols mean exactly what they mean;
19) typesafe variadic functions;
20) floats and strings as template parameters;
21) template parameters specialization.

There are a lot of intersections in our lists ;)
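(A few of those in one place -- items 3, 5 and 8 from the list above: default init values, auto/foreach type inference, and array length/slicing. A small sketch:)

import std.stdio;

void main()
{
    int n;                    // 3) default init value (0)
    auto m = n + 42;          // 5) type inference with 'auto'
    int[] a = [1, 2, 3, 4];   // 8) arrays with 'length' and slicing
    int[] mid = a[1 .. 3];
    foreach (v; mid)          // 5) element type inferred in foreach
        writefln("%d", v);
    writefln("length = %d", a.length);
}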
 I know that you work very hard on D, but D 1.0 took almost 7 years. D  
 2.0 started in 2007, so final D 2.0 could be in 2014?
Even if it does take that long, D 1.0 is still far ahead, and is available now.
As I can see from your D conf presentation, D 2.0 is at the beginning of a long road. I've seen from your presentation what D will provide as an ultimate answer to C++ and some other languages. As for me, D 2.0 is a descendant of D (almost as D is a descendant of C++). So it is better to think that now we have the modern language D 1.0 and we will have the better language D 2.0 in time (maybe it is better to choose a new name for D 2.0, something like D-Bright ;) ). And now the key factor in making D successful is creating D 1.0 tools, libraries, docs and applications, and showing how D 1.0 outperforms C++ and others. If we do this, then D 2.0 will arrive on prepared ground. So it is time for pragmatists to focus on D 1.0 and let language enthusiasts play with D 2.0 prototypes. [1] http://www.rsdn.ru/forum/message/2222569.aspx -- Regards, Yauheni Akhotnikau
Aug 26 2007
prev sibling parent reply Don Clugston <dac nospam.com.au> writes:
eao197 wrote:
 I know that you work very hard on D, but D 1.0 took almost 7 years. D 
 2.0 started in 2007, so final D 2.0 could be in 2014?
It's very amusing to read how Walter described D 1.0, seven years ago. It wasn't going to have templates, for example.
Aug 29 2007
parent reply eao197 <eao197 intervale.ru> writes:
On Wed, 29 Aug 2007 15:56:29 +0400, Don Clugston <dac nospam.com.au> wrote:

 eao197 wrote:
 I know that you work very hard on D, but D 1.0 took almost 7 years. D  
 2.0 started in 2007, so final D 2.0 could be in 2014?
It's very amusing to read how Walter described D 1.0, seven years ago. It wasn't going to have templates, for example.
Unfortunately, I have watched D's evolution since perhaps 2002 or 2003. It looks as if D has never been a stable language. -- Regards, Yauheni Akhotnikau
Aug 29 2007
parent reply Walter Bright <newshound1 digitalmars.com> writes:
eao197 wrote:
 Unfortunately, I have watched D's evolution since perhaps 2002 or 2003. 
 It looks as if D has never been a stable language.
I don't know any language in wide use that is stable (i.e. not changing). A stable language is a dead language.
Aug 29 2007
next sibling parent kris <foo bar.com> writes:
Walter Bright wrote:
 eao197 wrote:
 Unfortunately, I have watched D's evolution since perhaps 2002 or 2003. 
 It looks as if D has never been a stable language.
I don't know any language in wide use that is stable (i.e. not changing). A stable language is a dead language.
I guess there's "stable" and there's "stable"? The history of Simula67 illustrates what can happen when a language is nailed to the wall :)
Aug 29 2007
prev sibling parent reply eao197 <eao197 intervale.ru> writes:
On Wed, 29 Aug 2007 23:15:26 +0400, Walter Bright  
<newshound1 digitalmars.com> wrote:

 eao197 wrote:
 Unfortunately, I have watched D's evolution since perhaps 2002 or 2003.  
 It looks as if D has never been a stable language.
I don't know any language in wide use that is stable (i.e. not changing). A stable language is a dead language.
I mean changes in languages which break compatibility with previous code. AFAIK, successful languages always had some periods (usually 2-3 years, sometimes more) when there were no additions to the language and new major versions didn't break existing code (for example: Java, C#, Python, even C++ sometimes). -- Regards, Yauheni Akhotnikau
Aug 29 2007
next sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
eao197 wrote:
 On Wed, 29 Aug 2007 23:15:26 +0400, Walter Bright 
 <newshound1 digitalmars.com> wrote:
 
 eao197 wrote:
 Unfortunately, I have watched D's evolution since perhaps 2002 or 
 2003. It looks as if D has never been a stable language.
I don't know any language in wide use that is stable (i.e. not changing). A stable language is a dead language.
I mean changes in languages which break compatibility with previous code. AFAIK, successful languages always had some periods (usually 2-3 years, sometimes more) when there were no additions to the language and new major versions didn't break existing code (for example: Java, C#, Python, even C++ sometimes).
C++ has been around for 20+ years now. I'll grant that for maybe 2 of those years (10%) it was stable. C++ has the rather dubious distinction of it being very hard to get two different compilers to compile non-trivial code without some sort of code customization needed. As evidence of that, just browse the STL and Boost sources. While the C++ standard has been stable for a couple years (C++98, C++03), it being nearly impossible to implement has meant the implementations have been unstable. For example, name lookup rules vary significantly *today* even among the major compilers. I regularly get bug reports that DMC++ does it wrong, even though it actually does it according to the Standard, and it's other compilers that get it wrong. On the other hand, when C++ has been stable, it rapidly lost ground relative to other languages. The recent about face in rationale and flurry of core language additions to C++0x is evidence of that. I haven't programmed long term in the other languages, so don't have a good basis for commenting on their stability. I have been programming in C++ since 1987. It's pretty normal to take a C++ project from the past and have to dink around with it to get it to compile with a modern compiler. The odds of taking a few thousand lines of C++ pulled off the web that's set up to compile with C++ Brand X are about 0% for getting it to compile with C++ Brand Y without changes.
Aug 30 2007
parent Leandro Lucarella <llucax gmail.com> writes:
Walter Bright wrote to me on August 30 at 00:07:
 I don't know any language in wide use that is stable (i.e. not changing). A stable language is a dead language.
I mean changes in languages which break compatibility with previous code. AFAIK, successful languages always had some periods (usually 2-3 years, sometimes more) when there were no additions to the language and new major versions didn't break existing code (for example: Java, C#, Python, even C++ sometimes).
C++ has been around for 20+ years now. I'll grant that for maybe 2 of those years (10%) it was stable. C++ has the rather dubious distinction of it being very hard to get two different compilers to compile non-trivial code without some sort of code customization needed. As evidence of that, just browse the STL and Boost sources. While the C++ standard has been stable for a couple years (C++98, C++03), it being nearly impossible to implement has meant the implementations have been unstable. For example, name lookup rules vary significantly *today* even among the major compilers. I regularly get bug reports that DMC++ does it wrong, even though it actually does it according to the Standard, and it's other compilers that get it wrong. On the other hand, when C++ has been stable, it rapidly lost ground relative to other languages. The recent about face in rationale and flurry of core language additions to C++0x is evidence of that. I haven't programmed long term in the other languages, so don't have a good basis for commenting on their stability.
Forget about C++ for a second. Try Python. It is a stable, or at least *predictable*, language. Its evolution is well structured, so you know you will have no surprises, and you know the language will evolve. Python is *really* community driven (besides the BDFL[1] ;). It has a formal proposal system for making changes to the language: PEPs[2]. When a PEP is approved, it's included in the next version and can be used *optionally* (if it could break backward compatibility). For example, you can use the "future" behavior of division now:

>>> 10/3
3
>>> from __future__ import division
>>> 10/3
3.3333333333333335

In the next Python version, the new feature is included without the need to import __future__, and the old behavior is deprecated (for example, with libraries, when something changes, in the first version you can ask for the new feature, in the second the new feature is the default but you can fall back to the old behavior, and in the third version the old behavior is completely removed).

[1] http://en.wikipedia.org/wiki/BDFL
[2] http://www.python.org/dev/peps/
 I have been programming in C++ since 1987. It's pretty normal to take a C++ project from the past and have to dink around with it to get it to compile with a modern compiler. The odds of taking a few thousand lines of C++ pulled off the web that's set up to compile with C++ Brand X are about 0% for getting it to compile with C++ Brand Y without changes.
You are talking about 20 years. D evolves on a daily basis, and the worst part is that this evolution follows no formal procedure. Forking D 2.0 was a huge improvement in this matter, but I think there is more work to be done before D can succeed as a long-term language (or at least be trusted). Another good step forward would be to maintain phobos (or whatever the standard library will be :P) as an open source project. You could create a repository (please use git! :) so people can track its development and send patches more easily. Same for the D frontend. It's almost impossible for someone who is used to collaborating on open source projects to do so with D. And that's a shame... -- Leandro Lucarella (luca) | Blog colectivo: http://www.mazziblog.com.ar/blog/
Aug 30 2007
prev sibling parent reply 0ffh <spam frankhirsch.net> writes:
eao197 wrote:
 I mean changes in languages which break compatibility with previous
 code. AFAIK, successful languages always had some periods (usually 2-3
 years, sometimes more) when there were no additions to the language and
 new major versions didn't break existing code (for example: Java, C#,
 Python, even C++ sometimes).
I rather think that a "new major version" of any language that "doesn't break existing code" could hardly justify its new major version number. A complete rewrite of the compiler, e.g., would justify a major new compiler version, but not even a teeny-minor new language version. And D /does have/ a stable language version, D1. Regards, Frank
Aug 30 2007
parent reply eao197 <eao197 intervale.ru> writes:
On Thu, 30 Aug 2007 15:44:25 +0400, 0ffh <spam frankhirsch.net> wrote:

 eao197 wrote:
 I mean changes in languages which break compatibility with previous
 code. AFAIK, successful languages always had some periods (usually 2-3
 years, sometimes more) when there were no additions to the language and
 new major versions didn't break existing code (for example: Java, C#,
 Python, even C++ sometimes).
 I rather think that a "new major version" of any language that "doesn't
 break existing code" could hardly justify its new major version number.
 A complete rewrite of the compiler, e.g., would justify a major new
 compiler version, but not even a teeny-minor new language version.
But new major versions of Java, C# and Python didn't break old code.
 And D /does have/ a stable language version, D1.
http://d.puremagic.com/issues/show_bug.cgi?id=302 -- very strange bug for a _stable_ version. Try to imagine _stable_ Eiffel with broken DesignByContract support :-/ -- Regards, Yauheni Akhotnikau
Aug 30 2007
next sibling parent Downs <default_357-line yahoo.de> writes:
eao197 wrote:

 But new major versions of Java, C# and Python didn't break old code.
 
 And D /does have/ a stable language version, D1.
http://d.puremagic.com/issues/show_bug.cgi?id=302 -- very strange bug for a _stable_ version. Try to imagine _stable_ Eiffel with broken DesignByContract support :-/
I have to agree on this. Sometimes, when I write code and run into a feature that's documented, but not implemented yet (GC, I'm looking at you) or supposed to be working, but broken in strange ways, I can't help thinking D isn't nearly 1.0 yet, let alone 2.0.
Aug 31 2007
prev sibling next sibling parent reply Don Clugston <dac nospam.com.au> writes:
eao197 wrote:
 On Thu, 30 Aug 2007 15:44:25 +0400, 0ffh <spam frankhirsch.net> wrote:
 
 eao197 wrote:
 I mean changes in languages which break compatibility with previous 
 code. AFAIK, successful languages always had some periods (usually 
 2-3 years, sometimes more) when there were no additions to the language 
 and new major versions didn't break existing code (for example: Java, 
 C#, Python, even C++ sometimes).
I rather think that a "new major version" of any language that "doesn't break existing code" could hardly justify its new major version number. A complete rewrite of the compiler, e.g., would justify a major new compiler version, but not even a teeny-minor new language version.
But new major versions of Java, C# and Python didn't break old code.
Actually, I think new features that make old code obsolete (even if it still compiles and works perfectly) are even more of a problem -- breaking "mental compatibility" has been a problem for both C++ and D. If you get 500 compile errors you need to fix, that's annoying and tedious. But when your code uses a technique that still works, but isn't supported by recent libraries, you're locked into the past forever.
Sep 04 2007
parent eao197 <eao197 intervale.ru> writes:
On Tue, 04 Sep 2007 12:34:14 +0400, Don Clugston <dac nospam.com.au> wrote:

 If you get 500 compile errors you need to fix, that's annoying and  
 tedious.
If you get 500 compile errors in an old 10KLOC project, that's annoying. If you got 500 compile errors in each of tens of legacy projects, that would be much more than simply 'annoying and tedious'.
 But when your code uses a technique that still works, but isn't  
 supported by recent libraries, you're locked into the past forever.
There is a good example in the C++ world: the ACE library. It was started a long time ago, it has been ported to various systems, and it has outlived many changes in the language and suffered from different compilers. Because of that, ACE uses C++ almost as "C with classes", even without exceptions. In comparison with modern, (over)designed C++ libraries (like Crypto++ or parts of Boost), ACE is an ugly old monster. But it has no real competitors in C++, and it allows me to write complex software more easily than if I tried to rewrite parts of ACE in modern C++ myself. So I don't think the old ACE library locks me in the past (even if I can't use STL and exceptions with ACE). IMHO, the real power of any language is its code base -- all the projects which have been developed using the language. And any actions which discriminate against legacy code decrease the language's power. -- Regards, Yauheni Akhotnikau
Sep 04 2007
prev sibling next sibling parent Jari-Matti Mäkelä <jmjmak utu.fi.invalid> writes:
eao197 wrote:

 On Thu, 30 Aug 2007 15:44:25 +0400, 0ffh <spam frankhirsch.net> wrote:
 
 eao197 wrote:
 I mean changes in languages which break compatibility with previous
 code. AFAIK, successful languages always had some periods (usually 2-3
 years, sometimes more) when there were no additions to the language and new
 major versions didn't break existing code (for example: Java, C#,
 Python, even C++ sometimes).
I rather think that a "new major version" of any language that "doesn't break existing code" could hardly justify its new major version number. A complete rewrite of the compiler, e.g., would justify a major new compiler version, but not even a teeny-minor new language version.
But new major versions of Java, C# and Python didn't break old code.
Oh, btw, Java 1.5 did break old code. I used to use Gentoo during the transition phase so I had some experience compiling stuff. :) There were at least a couple of commonly used libraries and programs that broke. One minor problem was the new 'enum' keyword. Of course at least Sun Java compiler allows compiling in 1.4 mode too. I think Gentoo has a common practice nowadays to compile each Java program using the oldest compatible compiler profile for best compatibility. IIRC there were also some incompatible ABI changes because of the generics.
Sep 07 2007
prev sibling parent 0ffh <spam frankhirsch.net> writes:
eao197 wrote:
 On Thu, 30 Aug 2007 15:44:25 +0400, 0ffh <spam frankhirsch.net> wrote:
 I rather think that a "new major version" of any language that "doesn't
 break existing code" could hardly justify its new major version number.
 A complete rewrite of the compiler, e.g., would justify a major new
 compiler version, but not even a teeny-minor new language version.
But new major versions of Java, C# and Python didn't break old code.
Well, yeah, maybe (apart from what Jari-Matti said about Java 1.5 breaking code). But anyway, adding something to a language without breaking old code only works so often. C++ tried to add to C without breaking code (it still does, but it tried) and you can see what came from it. New language features tend to need new syntax. If you want to remain compatible, you'll have to find a way to introduce that new syntax without breaking the old one. This is usually quite hard to achieve without making the new syntax either cumbersome or fragile and hard to grok. Regards, Frank
Sep 07 2007
prev sibling parent janderson <askme me.com> writes:
Bill Baxter wrote:
 A lot of you probably saw Bjarne Stroustrup's talk on C++0x on the web.
 If not here's the link:
   http://csclub.uwaterloo.ca/media/C++0x%20-%20An%20Overview.html
 
 I recommend hitting pause on the video and then go get some lunch while 
 it buffers up enough that you won't get hiccups.  Or if you can figure 
 out how to get those newfangled torrent thingys to work, that's probably 
 a good option too.
 
 --bb
To me this shows why D may be the "better" language syntactically in the long run. While legacy code is a great thing, it is also a weight around C++'s neck. D still has the flexibility to take on many of these good features, something that would be improbable in C++ given all the parties involved. Although I hope that D takes a serious look at the new, much more complicated CPU architectures, because I'm afraid that is one area where it could be left behind. -Joel
Aug 20 2007