
digitalmars.D - Stroustrup's talk on C++0x

reply Bill Baxter <dnewsgroup billbaxter.com> writes:
A lot of you probably saw Bjarne Stroustrup's talk on C++0x on the web.
If not here's the link:
   http://csclub.uwaterloo.ca/media/C++0x%20-%20An%20Overview.html

I recommend hitting pause on the video and then go get some lunch while 
it buffers up enough that you won't get hiccups.  Or if you can figure 
out how to get those newfangled torrent thingys to work, that's probably 
a good option too.

--bb
Aug 19 2007
next sibling parent Lutger <lutger.blijdestijn gmail.com> writes:
Bill Baxter wrote:
 A lot of you probably saw Bjarne Stroustrup's talk on C++0x on the web.
 If not here's the link:
   http://csclub.uwaterloo.ca/media/C++0x%20-%20An%20Overview.html

Thanks for the link, missed it.
 I recommend hitting pause on the video and then go get some lunch while 
 it buffers up enough that you won't get hiccups.  Or if you can figure 
 out how to get those newfangled torrent thingys to work, that's probably 
 a good option too.
 
 --bb

With Opera you can just click on it and it works, if you don't want to figure things out.
Aug 19 2007
prev sibling next sibling parent reply "Saaa" <empty needmail.com> writes:
D programming people who don't understand torrents...

btw, pausing wasn't necessary here

 I recommend hitting pause on the video and then go get some lunch while it 
 buffers up enough that you won't get hiccups.  Or if you can figure out 
 how to get those newfangled torrent thingys to work, that's probably a 
 good option too.

 --bb 

Aug 19 2007
parent reply Bill Baxter <dnewsgroup billbaxter.com> writes:
Saaa wrote:
 D programming people who don't understand torrents...

:-). I think it's a firewall issue. I read the troubleshooting info that comes with a couple of bittorrent clients, and they all point to firewalls as the problem. One bittorrent client actually managed to cause all networking on my machine to shut down whenever I tried to turn it on.

There's probably some way to get it working but... no thanks. Wake me up when there's a client that works as seamlessly as Skype. And no, I'm not going to install a whole browser just to try out its bittorrent client.
 btw. here pausing wasn't necessary here

Ok. Well, it's probably just slow over here because I've got to pull it over the trans-Pacific pipes. --bb
Aug 19 2007
parent reply "Saaa" <empty needmail.com> writes:
 D programming people who don't understand torrents...

:-). I think it's a firewall issue. I read the troubleshooting infos that come with a couple of bittorrent clients, and they all point to firewalls as the problem. One bittorrent client actually managed to cause all networking on my machine to shut down whenever I tried to turn it on. There's probably some way to get it working but... no thanks. Wake me up when there's a client that works as seamlessly as Skype. And no, I'm not going to install a whole browser just to try out its bittorrent client.

I somehow doubt this will ever happen :D http://www.heise-security.co.uk/articles/82481

I can only recommend uTorrent, and tell you it's probably not your software but your hardware firewall which needs tinkering. I had to forward a port, but if I understand it correctly, newer routers with UPnP will work without any hassle.
Aug 19 2007
next sibling parent "Jb" <jb nowhere.com> writes:
"Saaa" <empty needmail.com> wrote in message 
news:fab0tc$1b0k$1 digitalmars.com...
 D programming people who don't understand torrents...

:-). I think it's a firewall issue. I read the troubleshooting infos that come with a couple of bittorrent clients, and they all point to firewalls as the problem. One bittorrent client actually managed to cause all networking on my machine to shut down whenever I tried to turn it on. There's probably some way to get it working but... no thanks. Wake me up when there's a client that works as seamlessly as Skype. And no, I'm not going to install a whole browser just to try out its bittorrent client.

I somehow doubt this will ever happen :D http://www.heise-security.co.uk/articles/82481 I can only recommend utorrent and tell you its probably not your software but hardware firewall which needs tinkling. I had to forward a port, but if I understand it correctly: newer routers with upnp will work without any hassle.

I second the uTorrent recommendation. By far the best client I've used. Bill, if you do try it, open the 'Speed Guide' from the options menu; you can test whether the port is open / forwarded correctly from there. jb
Aug 20 2007
prev sibling parent reply Chris Nicholson-Sauls <ibisbasenji gmail.com> writes:
Saaa wrote:
 D programming people who don't understand torrents...

that come with a couple of bittorrent clients, and they all point to firewalls as the problem. One bittorrent client actually managed to cause all networking on my machine to shut down whenever I tried to turn it on. There's probably some way to get it working but... no thanks. Wake me up when there's a client that works as seamlessly as Skype. And no, I'm not going to install a whole browser just to try out its bittorrent client.

I somehow doubt this will ever happen :D http://www.heise-security.co.uk/articles/82481 I can only recommend utorrent and tell you its probably not your software but hardware firewall which needs tinkling. I had to forward a port, but if I understand it correctly: newer routers with upnp will work without any hassle.

Am I the only person who actually uses... BitTorrent, as my BitTorrent client? :) http://www.bittorrent.com/download

I haven't had any issues with it, though that doesn't mean no one will. Azureus/2.x is good too... the new version is an abomination. (In My Humble Opinion)

-- Chris Nicholson-Sauls
Aug 20 2007
parent reply Bill Baxter <dnewsgroup billbaxter.com> writes:
Chris Nicholson-Sauls wrote:
 Saaa wrote:
 D programming people who don't understand torrents...

infos that come with a couple of bittorrent clients, and they all point to firewalls as the problem. One bittorrent client actually managed to cause all networking on my machine to shut down whenever I tried to turn it on. There's probably some way to get it working but... no thanks. Wake me up when there's a client that works as seamlessly as Skype. And no, I'm not going to install a whole browser just to try out its bittorrent client.

I somehow doubt this will ever happen :D http://www.heise-security.co.uk/articles/82481 I can only recommend utorrent and tell you its probably not your software but hardware firewall which needs tinkling. I had to forward a port, but if I understand it correctly: newer routers with upnp will work without any hassle.

Am I the only person who actually uses... BitTorrent, as my BitTorrent client? :) http://www.bittorrent.com/download I haven't had any issues with it, though that doesn't mean no one will. Azureus/2.x is good too... the new version is an abomination. (In My Humble Opinion)

Well, the troubleshooting links pointed to by uTorrent were spot-on. It takes you right to a place that gives you step-by-step instructions for setting up a huge number of different broadband routers. The others I tried just said vague things about needing to open up a port without suggesting how -- or suggested I talk to my "system administrator".

That said, now that thanks to uTorrent I've got the hole punched through my firewall, probably any client will work fine for me.

--bb
Aug 21 2007
parent Regan Heath <regan netmail.co.nz> writes:
Bill Baxter wrote:
 Chris Nicholson-Sauls wrote:
 Saaa wrote:
 D programming people who don't understand torrents...

infos that come with a couple of bittorrent clients, and they all point to firewalls as the problem. One bittorrent client actually managed to cause all networking on my machine to shut down whenever I tried to turn it on. There's probably some way to get it working but... no thanks. Wake me up when there's a client that works as seamlessly as Skype. And no, I'm not going to install a whole browser just to try out its bittorrent client.

I somehow doubt this will ever happen :D http://www.heise-security.co.uk/articles/82481 I can only recommend utorrent and tell you its probably not your software but hardware firewall which needs tinkling. I had to forward a port, but if I understand it correctly: newer routers with upnp will work without any hassle.

Am I the only person who actually uses... BitTorrent, as my BitTorrent client? :) http://www.bittorrent.com/download I haven't had any issues with it, though that doesn't mean no one will. Azureus/2.x is good too... the new version is an abomination. (In My Humble Opinion)

Well the troubleshooting links pointed to by utorrent were spot-on. It takes you right to a place that can give you step-by-step instructions about how to set up a huge number of different broadband routers. The others I tried just said vague things about needing to open up a port without suggesting how -- or suggesting I talk to my "system administratior". That said, now that thanks to utorrent I've got the hole punched through my firewall, probably any client will work fine for me.

uTorrent is my favourite client: it is small, fast, and fully featured, but set up in such a way as to be simple enough to use if you're new at this sort of thing.

Torrents don't require you to have an open inbound port, but without one you cannot receive connections from other peers. You can still connect to other peers, unless they too have no open ports, in which case you cannot form any connection with them, and as a result you may get lower speeds.

Just the other day I downloaded OpenOffice using a torrent; the download was fast, probably faster than getting it directly from any single website.

Regan
Aug 21 2007
prev sibling next sibling parent reply eao197 <eao197 intervale.ru> writes:
On Sun, 19 Aug 2007 23:36:07 +0400, Bill Baxter  
<dnewsgroup billbaxter.com> wrote:

 A lot of you probably saw Bjarne Stroustrup's talk on C++0x on the web.
 If not here's the link:
    http://csclub.uwaterloo.ca/media/C++0x%20-%20An%20Overview.html

BTW, there is a C++0x overview in Wikipedia: http://en.wikipedia.org/wiki/C%2B%2B0x

It is interesting to consider which advantages D (2.0? 3.0? 4.0?) will have over C++0x. Maybe only fast compilation and GC.

-- Regards, Yauheni Akhotnikau
Aug 19 2007
next sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
eao197 wrote:
 BTW, there is a C++0x overview in Wikipedia: 
 http://en.wikipedia.org/wiki/C%2B%2B0x
 
 It is iteresting to know which advantages will have D (2.0? 3.0? 4.0?) 
 over C++0x? May be only high speed compilation and GC.

Looks like C++ is adding D features thick & fast!
Aug 19 2007
next sibling parent reply Bill Baxter <dnewsgroup billbaxter.com> writes:
Walter Bright wrote:
 eao197 wrote:
 BTW, there is a C++0x overview in Wikipedia: 
 http://en.wikipedia.org/wiki/C%2B%2B0x

 It is iteresting to know which advantages will have D (2.0? 3.0? 4.0?) 
 over C++0x? May be only high speed compilation and GC.

Looks like C++ is adding D features thick & fast!

Yeah, from the way Stroustrup was talking, I really wouldn't be surprised if they haven't finished the spec by year-end 2009.

So, Walter, are you planning to update DMC when the spec is finished? --bb
Aug 19 2007
parent reply Sean Kelly <sean f4.ca> writes:
Bill Baxter wrote:
 Walter Bright wrote:
 eao197 wrote:
 BTW, there is a C++0x overview in Wikipedia: 
 http://en.wikipedia.org/wiki/C%2B%2B0x

 It is iteresting to know which advantages will have D (2.0? 3.0? 
 4.0?) over C++0x? May be only high speed compilation and GC.

Looks like C++ is adding D features thick & fast!

Yeh, from the way Stroustrup was talking I really wouldn't be surprised if they haven't finished the spec by year-end 2009.

It actually has to be finished by year end 2008, and they have committed to getting the standard done on time even if it means dropping features. In fact, last I heard, a few features were indeed being dropped for lack of time, but I can't recall what they were. I haven't been keeping that close an eye on the C++ standardization process recently, aside from the new memory model and atomic features.

As for the C++0x additions themselves, if D did not exist I might be excited. As it is, I can only cringe at the syntax in some of those examples and hope things turn out better than I fear they will.

Sean
Aug 20 2007
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Sean Kelly wrote:
 It actually has to be finished by year end 2008, and they have committed 
 to getting the standard done on time even if it means dropping features. 
  In fact, last I heard, a few features were indeed being dropped for 
 lack of time, but I can't recall what they were.  I haven't been keeping 
 that close an eye on the C++ standardization process recently, aside 
 from the new memory model and atomic features.

C++0x started out with the stated purpose of just a few core language tweaks, and a bunch of new libraries. Sometime in the last couple of years, that was abandoned wholesale and a big raft of complex new features were proposed. I think it's the success of D that lit the fire.
Aug 20 2007
next sibling parent reply Bill Baxter <dnewsgroup billbaxter.com> writes:
Walter Bright wrote:
 Sean Kelly wrote:
 It actually has to be finished by year end 2008, and they have 
 committed to getting the standard done on time even if it means 
 dropping features.  In fact, last I heard, a few features were indeed 
 being dropped for lack of time, but I can't recall what they were.  I 
 haven't been keeping that close an eye on the C++ standardization 
 process recently, aside from the new memory model and atomic features.

C++0x started out with the stated purpose of just a few core language tweaks, and a bunch of new libraries. Sometime in the last couple of years, that was abandoned wholesale and a big raft of complex new features were proposed. I think it's the success of D that lit the fire.

It probably gave them a nudge, but on the other hand, as is abundantly clear here on this newsgroup, everybody has a favorite feature. So if you throw a bunch of engineers and language designers into a room, the natural tendency is towards trying to add everything and the kitchen sink. But I agree that the fact that D is out there (and probably C#, Python, and Ruby, too) undoubtedly influenced people's votes when it came time to decide whether it was more important to have feature X or get the revision out sooner.

It is pretty scary, though, to hear Stroustrup saying that the C++ textbooks will need to become thicker than they already are, and they were already about 3x as big as K&R's original book on C.

The one feature (or lack thereof) that surprises me about C++0x is nested functions. They're one of my favorite things about D, but they don't seem to be a part of C++0x. There can't be any fundamental reason for it, since I've heard g++ supports them. Maybe lambdas will serve that purpose?

As for standards vs standards-compliant compilers, note that MS still hasn't made a C99 compiler, 8 years after the standard. And implementing *that* standard looks like an undergrad homework assignment compared to what compiler writers will have to go through for C++0x.

--bb
Aug 20 2007
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Bill Baxter wrote:
 Walter Bright wrote:
 I think it's the success of D that lit the fire.

It probably gave them a nudge,

More than that. The active people on the C++ committee are well aware of D. Many have attended my presentations on D, correspond with me about it, and lurk in this n.g. Most of them will deny the influence, however, so feel free to decide what to believe <g>.
 but on the other hand, as is abundantly 
 clear here on this newsgroup, everybody has a favorite feature.  So if 
 you throw a bunch of engineers and language designers into a room, the 
 natural tendency is towards trying to add everything and the kitchen 
 sink.

One thing the C++ committee is good about is the features they have added *are* targeted at glaring shortcomings. They really are not throwing in the kitchen sink. How well those shortcomings are addressed, however, is another matter. For example, look at the C++ proposal for doing a very limited form of compile time function evaluation, then compare it with D's.
 But I agree that the fact that D is out there (and probably C#, 
 Python, and Ruby, too) undoubtedly influenced people's votes when it 
 came time to decide whether it was more important to have feature X or 
 get the revision out sooner.

GC is a prime example of that; C++ could no longer dismiss it. (And Hans Boehm, who I admire a lot, did a spectacular job of dealing with every objection to adding GC.)
 It is pretty scary, though, to hear Stroustrup saying that the C++ text 
 books will need to become thicker than they already are, which was 
 already about 3x as big as K&R's original book on C.

There are two phases to learning C++:

1) learning the language
2) learning all the idioms and conventions used to avoid the shortcomings

(One example we've discussed here recently is the slicing problem.)
 The one feature (or lack thereof) that surprises me about C++0x is 
 nested functions.  They're one of my favorite things about D, but they 
 don't seem to be a part of C++0x.  There can't be any fundamental reason 
 for it, since I've heard g++ supports them.  Maybe lambdas will serve 
 that purpose?

I was surprised to see lambdas without nested functions.
 As for standards vs standards-compliant compilers, note that MS still 
 hasn't made a C99 compiler, 8 years after the standard.  And 
 implementing *that* standard looks like an undergrad homework assignment 
 compared to what compiler writers will have to go through for C++0x.

It took 5 years for a C++98 compliant compiler to emerge. Extrapolating to C++09, that would be 2014 to get features that existed in D years ago. I obviously gave up waiting for such features from C++ long ago.
Aug 22 2007
next sibling parent reply Bill Baxter <dnewsgroup billbaxter.com> writes:
Walter Bright wrote:
 Bill Baxter wrote:
 Walter Bright wrote:


 GC is a prime example of that; C++ could no longer dismiss it. (And Hans 
 Boehm, who I admire a lot, did a spectacular job of dealing with every 
 objection to adding GC.)

I decided to download his GC for C++ recently to give it a try. I was amazed to find that the documentation is really quite bad from a user point of view, and what little user doc there was was mostly about the C interface. If you care about implementation, there's tons to read, but not if you're interested in actually *using* it.

I expected a little more pleasant user experience given how long it's been around, how much I hear about it here and there, and how often I've heard C++ people say that you don't need GC in the language because you can just download Boehm's library.
 It took 5 years for a C++98 compliant compiler to emerge. Extrapolating 
 to C++09, that would be 2014 to get features that existed in D years 
 ago. I obviously gave up waiting for such features from C++ long ago.

Well, that's true, but when comparing the availability of C++09 vs D, you should perhaps be a little more forgiving, given that D isn't quite done either. Sure, some C++09 features are available in D now, but some are also available in g++ now, I believe. And there are some features slated for C++09 that aren't on the roadmap for D at all (like concepts and thread stuff), which might appear in some C++ compiler before they appear in D. Furthermore, I'm pretty sure some partially conforming C++98 compilers existed before the end of 93.

So what I'm trying to say with all this is that if you're a programmer who's willing to work with an incompatible language that has an ever-evolving spec, then you're probably also willing to use a bleeding edge C++ compiler that only partially supports the C++09 spec. So there may be less of a wait than 2014 for the sort of bleeding edgers who would be interested in D in the first place. But either way it's still infinitely more waiting than "download and use it right now" -- the current situation with D.

--bb
Aug 22 2007
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Bill Baxter wrote:
 Walter Bright wrote:
 It took 5 years for a C++98 compliant compiler to emerge. 
 Extrapolating to C++09, that would be 2014 to get features that 
 existed in D years ago. I obviously gave up waiting for such features 
 from C++ long ago.

Well, that's true, but when comparing availability C++09 vs D, you should perhaps be a little more forgiving, given that D isn't quite done either. Sure, some C++09 features are available in D now,

Nearly all of them are, and D has quite a bit that isn't even on the horizon for C++. I should draw up a chart...
 but some are 
 also available in g++ now, I believe.  And there are some features 
 slated for  C++ 09 that aren't on the roadmap for D at all (like 
 concepts

Concepts aren't a whole lot more than interface specialization, which is already supported in D.
 and thread stuff), which might appear in some C++ compiler 
 before they appear D.  Furthermore, I'm pretty sure some partially 
 conforming C++98 compilers existed before the end of 93,

Partial, sure, including mine <g>.
 so what I'm 
 trying to say with all this is that if you're a programmer who's willing 
 to work with an incompatible language that is has an ever-evolving spec, 
 then you're probably also willing to use a bleeding edge C++ compiler 
 that only partially supports the C++09 spec.  So there may be less of a 
 wait than 2014 for the sort of bleeding edgers who would be interested 
 in D in the first place.  But either way its still infinitely more 
 waiting than "download and use it right now" -- the current situation 
 with D.

Yes. And D 2.0 isn't standing still, either.
Aug 23 2007
next sibling parent reply Bill Baxter <dnewsgroup billbaxter.com> writes:
Walter Bright wrote:
 Bill Baxter wrote:

 but some are also available in g++ now, I believe.  And there are some 
 features slated for  C++ 09 that aren't on the roadmap for D at all 
 (like concepts

Concepts aren't a whole lot more than interface specialization, which is already supported in D.

I'm not sure what you mean by that, but the feature that I liked most about it is interface checking. So:

A) Being able to document someplace that if you want to use my KewlContainer you must implement the KewlIteratorConcept, which means, say, you support opPostInc() and opSlice() (for dereferencing as x[]), and then, once that is documented,

B) being able to say that my class implements that concept and have the compiler check that indeed it does.

I suppose there may be some way to do all that in current D, but I think defining and implementing concepts should be as easy as defining and implementing a run-time interface.

Duck typing is nice, but if you look at even a scripting language founded on the idea, like Python, you'll find that where people are putting systems together, they're also creating and using tools like zope.interface to get back some of the benefits of type checking. At the end of the day, even with duck typing, there are some requirements I have to fulfill to use my object with your function. You want to be able to specify those things and have the compiler check them.

--bb
Aug 23 2007
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Bill Baxter wrote:
 Walter Bright wrote:
 Bill Baxter wrote:

 but some are also available in g++ now, I believe.  And there are 
 some features slated for  C++ 09 that aren't on the roadmap for D at 
 all (like concepts

Concepts aren't a whole lot more than interface specialization, which is already supported in D.

I'm not sure what you mean by that, but the feature that I liked most about it is interface checking. So A) Being able to document someplace that if you want to use my KewlContainer you must implement the KewlIteratorConcept which means, say, you support opPostInc() and opSlice() (for dereferencing as x[]). and then once that is documented B) being able to say that my class implements that concept and have the compiler check that indeed it does. I suppose there may be some way to do all that in current D,

Yup:

interface KewlIteratorConcept
{
    T opPostInc();
    U opSlice();
}

class KewlContainer : KewlIteratorConcept
{
    T opPostInc() { ... }
    U opSlice() { ... }
}

class WrongContainer
{
}

template Foo(T : KewlIteratorConcept)
{
    ...
}

Foo!(KewlContainer);  // ok
Foo!(WrongContainer); // error, not a KewlIteratorConcept
Aug 25 2007
next sibling parent reply Bill Baxter <dnewsgroup billbaxter.com> writes:
Walter Bright wrote:
 Bill Baxter wrote:
 Walter Bright wrote:
 Bill Baxter wrote:

 but some are also available in g++ now, I believe.  And there are 
 some features slated for  C++ 09 that aren't on the roadmap for D at 
 all (like concepts

Concepts aren't a whole lot more than interface specialization, which is already supported in D.

I'm not sure what you mean by that, but the feature that I liked most about it is interface checking. So A) Being able to document someplace that if you want to use my KewlContainer you must implement the KewlIteratorConcept which means, say, you support opPostInc() and opSlice() (for dereferencing as x[]). and then once that is documented B) being able to say that my class implements that concept and have the compiler check that indeed it does. I suppose there may be some way to do all that in current D,

Yup:

interface KewlIteratorConcept
{
    T opPostInc();
    U opSlice();
}

class KewlContainer : KewlIteratorConcept
{
    T opPostInc() { ... }
    U opSlice() { ... }
}

class WrongContainer
{
}

template Foo(T : KewlIteratorConcept)
{
    ...
}

KewlContainer k;
WrongContainer w;

Foo!(k); // ok
Foo!(w); // error, w is not a KewlIteratorConcept

Ok, but does that work if you want it to work with a built-in type too? Will a float be recognized as supporting opPostInc? --bb
Aug 25 2007
parent Walter Bright <newshound1 digitalmars.com> writes:
Bill Baxter wrote:
 Ok, but does that work if you want it to work with a built-in type too? 
  Will a float be recognized as supporting opPostInc?

No, it doesn't currently work with builtin types. But see Sean's approach!
Aug 26 2007
prev sibling parent reply Sean Kelly <sean f4.ca> writes:
Walter Bright wrote:
 Bill Baxter wrote:
 Walter Bright wrote:
 Bill Baxter wrote:

 but some are also available in g++ now, I believe.  And there are 
 some features slated for  C++ 09 that aren't on the roadmap for D at 
 all (like concepts

Concepts aren't a whole lot more than interface specialization, which is already supported in D.

I'm not sure what you mean by that, but the feature that I liked most about it is interface checking. So A) Being able to document someplace that if you want to use my KewlContainer you must implement the KewlIteratorConcept which means, say, you support opPostInc() and opSlice() (for dereferencing as x[]). and then once that is documented B) being able to say that my class implements that concept and have the compiler check that indeed it does. I suppose there may be some way to do all that in current D,

Yup:

interface KewlIteratorConcept
{
    T opPostInc();
    U opSlice();
}

class KewlContainer : KewlIteratorConcept
{
    T opPostInc() { ... }
    U opSlice() { ... }
}

class WrongContainer
{
}

template Foo(T : KewlIteratorConcept)
{
    ...
}

The obvious disadvantage to this approach is that it requires implementation of an interface by the creator of the object. More often, I use an additional value parameter to specialize against:

template Foo(T, bool isValid : true = PassesSomeTest!(T)) {}

This also works for non-class types. I'm not sure I like the syntax quite as much as concepts here, but it's good enough that I haven't really missed them.

Sean
Aug 26 2007
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Sean Kelly wrote:
 The obvious disadvantage to this approach is that is requires 
 implementation of an interface by the creator of the object.  More 
 often, I use an additional value parameter to specialize against:
 
 template Foo(T, bool isValid : true = PassesSomeTest!(T)) {}
 
 This also works for non-class types.  I'm not sure I like the syntax 
 quite as much as concepts here, but it's good enough that I haven't 
 really missed them.

This is a brilliant idea. It would make for a great article! Can I press you to write it? Doesn't have to be long, just explain the concept(!) and flesh it out with a few examples.
Aug 26 2007
next sibling parent James Dennett <jdennett acm.org> writes:
Walter Bright wrote:
 Sean Kelly wrote:
 The obvious disadvantage to this approach is that is requires
 implementation of an interface by the creator of the object.  More
 often, I use an additional value parameter to specialize against:

 template Foo(T, bool isValid : true = PassesSomeTest!(T)) {}

 This also works for non-class types.  I'm not sure I like the syntax
 quite as much as concepts here, but it's good enough that I haven't
 really missed them.

This is a brilliant idea. It would make for a great article! Can I press you to write it? Doesn't have to be long, just explain the concept(!) and flesh it out with a few examples.

Is this largely comparable to C++0x's enable_if (except that, as I understand it, D appears to be more flexible in how the compile-time test can work/be expressed)? enable_if certainly covers many of the simple use cases for Concepts (though not so elegantly as C++0x Concepts do). -- James
Aug 26 2007
prev sibling parent Sean Kelly <sean f4.ca> writes:
Walter Bright wrote:
 Sean Kelly wrote:
 The obvious disadvantage to this approach is that is requires 
 implementation of an interface by the creator of the object.  More 
 often, I use an additional value parameter to specialize against:

 template Foo(T, bool isValid : true = PassesSomeTest!(T)) {}

 This also works for non-class types.  I'm not sure I like the syntax 
 quite as much as concepts here, but it's good enough that I haven't 
 really missed them.

This is a brilliant idea. It would make for a great article! Can I press you to write it? Doesn't have to be long, just explain the concept(!) and flesh it out with a few examples.

Sure thing. :-) Sean
Aug 26 2007
prev sibling parent James Dennett <jdennett acm.org> writes:
Walter Bright wrote:
 Bill Baxter wrote:
 Walter Bright wrote:
 It took 5 years for a C++98 compliant compiler to emerge.
 Extrapolating to C++09, that would be 2014 to get features that
 existed in D years ago. I obviously gave up waiting for such features
 from C++ long ago.

Well, that's true, but when comparing availability C++09 vs D, you should perhaps be a little more forgiving, given that D isn't quite done either. Sure, some C++09 features are available in D now,

Nearly all of them are, and D has quite a bit that isn't even on the horizon for C++. I should draw up a chart...

For marketing purposes, maybe ;)
 but some are also available in g++ now, I believe.  And there are some
 features slated for  C++ 09 that aren't on the roadmap for D at all
 (like concepts

Concepts aren't a whole lot more than interface specialization, which is already supported in D.

They're far, far more than that: more akin to an enhanced version of Haskell's typeclasses. -- James
Aug 23 2007
prev sibling parent reply Stephen Waits <steve waits.net> writes:
Walter Bright wrote:
 Bill Baxter wrote:
 Walter Bright wrote:
 I think it's the success of D that lit the fire.

It probably gave them a nudge,

More than that. The active people on the C++ committee are well aware of D. Many have attended my presentations on D, correspond with me about it, and lurk in this n.g. Most of them will deny the influence, however, so feel free to decide what to believe <g>.

FWIW, I corresponded with Bjarne a little over 3 years ago. I asked him for his opinion of D. He refused to give one, on the grounds that he didn't want to get into a flamewar about "Walter's language". I wrote him back to make sure he understood that I wasn't looking for a fight; I simply respected him and was curious about his opinion, but also that I understood why, in his position, he cannot comment on such things. --Steve
Aug 23 2007
parent Walter Bright <newshound1 digitalmars.com> writes:
Stephen Waits wrote:
 FWIW, I corresponded with Bjarne a little over 3 years ago.  I asked him 
 for his opinion of D.  He refused to give an opinion on the grounds that 
 he didn't want to get into a flamewar about "Walter's language".
 
 I wrote him back to make sure he understood that I wasn't looking for a 
 fight.  I simply respected him and was curious about his opinion, but 
 that I also understand why, in his position, he cannot comment on such 
 things.

I know Bjarne, and he's a class act. I have the greatest respect for him.
Aug 25 2007
prev sibling next sibling parent reply BCS <ao pathlink.com> writes:
Reply to Walter,

 Sean Kelly wrote:
 
 It actually has to be finished by year end 2008, and they have
 committed to getting the standard done on time even if it means
 dropping features. In fact, last I heard, a few features were indeed
 being dropped for lack of time, but I can't recall what they were.  I
 haven't been keeping that close an eye on the C++ standardization
 process recently, aside from the new memory model and atomic
 features.
 

C++0x started out with the stated purpose of just a few core language tweaks, and a bunch of new libraries. Sometime in the last couple of years, that was abandoned wholesale and a big raft of complex new features were proposed. I think it's the success of D that lit the fire.

Does that make the C++ crowd's main objective keeping C++ the dominant language? Maybe somebody needs to enforce term limits on programming languages.
Aug 20 2007
parent reply Bill Baxter <dnewsgroup billbaxter.com> writes:
BCS wrote:
 Reply to Walter,
 
 Sean Kelly wrote:

 It actually has to be finished by year end 2008, and they have
 committed to getting the standard done on time even if it means
 dropping features. In fact, last I heard, a few features were indeed
 being dropped for lack of time, but I can't recall what they were.  I
 haven't been keeping that close an eye on the C++ standardization
 process recently, aside from the new memory model and atomic
 features.

C++0x started out with the stated purpose of just a few core language tweaks, and a bunch of new libraries. Sometime in the last couple of years, that was abandoned wholesale and a big raft of complex new features were proposed. I think it's the success of D that lit the fire.

Does that make the C++ crowd's main objective keeping C++ the dominant language?

I doubt it. I think the C++ crowd's main objective is to turn C++ into something that doesn't suck like a dozen turbine jets strapped together with duct tape. They want it to be a better language for themselves, because they have to use it every day. I'm thinking specifically of generic and meta-programming functionality; that looks like what will benefit most from the new language additions.
 Maybe somebody needs to enforce term limits on programming languages.

"You don't vote for kings." -- King Arthur, Monty Python and the Holy Grail.
Aug 20 2007
parent BCS <ao pathlink.com> writes:
Reply to Bill,

 BCS wrote:
 
 Reply to Walter,
 
 Sean Kelly wrote:
 
 It actually has to be finished by year end 2008, and they have
 committed to getting the standard done on time even if it means
 dropping features. In fact, last I heard, a few features were
 indeed being dropped for lack of time, but I can't recall what they
 were.  I haven't been keeping that close an eye on the C++
 standardization process recently, aside from the new memory model
 and atomic features.
 

C++0x started out with the stated purpose of just a few core language tweaks, and a bunch of new libraries. Sometime in the last couple of years, that was abandoned wholesale and a big raft of complex new features were proposed. I think it's the success of D that lit the fire.

Does that make the C++ crowd's main objective keeping C++ the dominant language?

I doubt it. I think the C++ crowd's main objective is to turn C++ into something that doesn't suck like a dozen turbine jets strapped together with duct tape.

Nice <G>
 They want it to be a better language for
 themselves, because they have to use it every day.  I'm thinking
 specifically of generic and meta-programming functionality.  That's
 what looks like will get the most benefit from the new language
 additions.
 

Yah, I see your point. However, sometimes the best way to improve something is to take it out back and shoot it, not add more jet engines and duct tape.
 Maybe somebody needs to enforce term limits on programming languages.
 

Grail.

Aug 20 2007
prev sibling parent reply James Dennett <jdennett acm.org> writes:
Walter Bright wrote:
 Sean Kelly wrote:
 It actually has to be finished by year end 2008, and they have
 committed to getting the standard done on time even if it means
 dropping features.  In fact, last I heard, a few features were indeed
 being dropped for lack of time, but I can't recall what they were.  I
 haven't been keeping that close an eye on the C++ standardization
 process recently, aside from the new memory model and atomic features.

C++0x started out with the stated purpose of just a few core language tweaks, and a bunch of new libraries. Sometime in the last couple of years, that was abandoned wholesale and a big raft of complex new features were proposed. I think it's the success of D that lit the fire.

Most of these features have been in development for years; it's desires to improve C++ that have lit these fires, just as your urge to create D was based on other ideas about how to improve on C++98. For many involved in language design, it's language features that are more tempting than new library functionality.

I don't see much in C++0x that has much claim to being inspired by D. I look forward to type deduction with auto, but that dates from the 80's. Concepts will be great, but those have most overlap with Haskell's typeclasses, not mirrored in D. The new for syntax reflects many languages (D, Perl, Java, sh, others) in some ways. GC for C++ predates D. The smart pointers have no counterpart in D, yet.

D has cool metaprogramming facilities, and does some other things nicely, but C++ faces more competition at this time from C, C# and Java than it does from D. It would, however, seem reasonable for C++ to pick up on good features of D, when they are a match for C++, just as C# and Java have borrowed features back and forth. -- James
Aug 20 2007
parent reply Bruno Medeiros <brunodomedeiros+spam com.gmail> writes:
James Dennett wrote:
 Walter Bright wrote:
 Sean Kelly wrote:
 It actually has to be finished by year end 2008, and they have
 committed to getting the standard done on time even if it means
 dropping features.  In fact, last I heard, a few features were indeed
 being dropped for lack of time, but I can't recall what they were.  I
 haven't been keeping that close an eye on the C++ standardization
 process recently, aside from the new memory model and atomic features.

C++0x started out with the stated purpose of just a few core language tweaks, and a bunch of new libraries. Sometime in the last couple of years, that was abandoned wholesale and a big raft of complex new features were proposed. I think it's the success of D that lit the fire.

Most of these features have been in development for years; it's desires to improve C++ that have lit these fires, just as your urge to create D was based on other ideas about how to improve on C++98. For many involved in language design, it's language features that are more tempting than new library functionality. I don't see much in C++0x that has much claim to being inspired by D. I look forward to type deduction with auto, but that dates from the 80's. Concepts will be great, but those have most overlap with Haskell's typeclasses, not mirrored in D. The new for syntax reflects many languages (D, Perl, Java, sh, others) in some ways. GC for C++ predates D. The smart pointers have no counterpart in D, yet. D has cool metaprogramming facilities, and does some other things nicely, but C++ faces more competition at this time from C, C# and Java than it does from D. It would, however, seem reasonable for C++ to pick up on good features of D, when they are a match for C++, just as C# and Java have borrowed features back and forth. -- James

I think Walter wasn't saying that C++0x features were inspired or based on D, just that D sped up the adoption of those features. -- Bruno Medeiros - MSc in CS/E student http://www.prowiki.org/wiki4d/wiki.cgi?BrunoMedeiros#D
Aug 21 2007
parent Walter Bright <newshound1 digitalmars.com> writes:
Bruno Medeiros wrote:
 I think Walter wasn't saying that C++0x features were inspired or based 
 on D, just that D sped up the adoption of those features.

D hasn't invented many truly *new* features. What it has done, however, is dramatically demonstrate that:

1) they fit well in a language that is very close to C++
2) they dramatically improve productivity

When one can point to real, live, *relevant* implementations of a feature, it tends to be convincing. After all, one can actually fly it rather than dreaming about paper airplanes. The further a language is from C++, the easier it is to dismiss a feature of that language as irrelevant.
Aug 22 2007
prev sibling parent reply eao197 <eao197 intervale.ru> writes:
On Mon, 20 Aug 2007 10:05:26 +0400, Walter Bright  
<newshound1 digitalmars.com> wrote:

 eao197 wrote:
 BTW, there is a C++0x overview in Wikipedia:  
 http://en.wikipedia.org/wiki/C%2B%2B0x
 It is interesting to know which advantages D (2.0? 3.0? 4.0?) will have  
 over C++0x. Maybe only fast compilation and GC.

Looks like C++ is adding D features thick & fast!

Yes! But C++ is doing that without breaking the existing codebase. So a significant number of C++ programmers needn't look to D -- they will get new advanced features without dropping their old tools, IDEs and libraries. I'm afraid that would play against D :(

Current C++ is far behind D, but D is not stable, not mature, and not as well equipped with tools/libraries as C++. So it will take several years to make D competitive with C++ in that area. But if in 2010 (it is only 2.5 years ahead) C++ has things like lambdas and autos (and tons of libraries and an army of programmers), what will be D's 'killer feature' to attract C++ programmers? And not only C++: by that time D would compete with new versions of C#, Java, Scala, Nemerle (probably) and with some functional languages (like Haskell and OCaml). -- Regards, Yauheni Akhotnikau
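For what it's worth, two of those headline items already exist in today's D; a small sketch (D 1.x syntax):

```d
import std.stdio;

void main()
{
    auto x = 3.14;  // type inference: x is a double

    // function literal, roughly what C++0x lambdas will provide
    auto square = function int(int a) { return a * a; };

    // typesafe foreach, akin to the proposed range-based for
    int[] nums = [1, 2, 3];
    foreach (n; nums)
        writefln("%d squared is %d", n, square(n));
}
```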
Aug 20 2007
next sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
eao197 wrote:
 On Mon, 20 Aug 2007 10:05:26 +0400, Walter Bright 
 <newshound1 digitalmars.com> wrote:
 
 eao197 wrote:
 BTW, there is a C++0x overview in Wikipedia: 
 http://en.wikipedia.org/wiki/C%2B%2B0x
  It is iteresting to know which advantages will have D (2.0? 3.0? 
 4.0?) over C++0x? May be only high speed compilation and GC.

Looks like C++ is adding D features thick & fast!

Yes! But C++ is doing that without breaking the existing codebase. So a significant number of C++ programmers needn't look to D -- they will get new advanced features without dropping their old tools, IDEs and libraries. I'm afraid that would play against D :(

The trouble with the new features is they don't fix the inscrutably awful syntax of complex C++ code; in fact, they make it worse. C++ will become even more of an "experts only" language.
 Current C++ is far behind D, but D is not stable, not mature, and not as  
 well equipped with tools/libraries as C++. So it will take several years  
 to make D competitive with C++ in that area. But if in 2010 (it is only  
 2.5 years ahead) C++ has things like lambdas and autos (and tons of  
 libraries and an army of programmers), what will be D's 'killer feature'  
 to attract C++ programmers? And not only C++: by that time D would compete  
 with new versions of C#, Java, Scala, Nemerle (probably) and with some  
 functional languages (like Haskell and OCaml).

The C++ standard will have those features. C++ compilers? Who knows. It took five years for C++98 to get implemented. C++'s problems are still in place, though. Like no modules, verbose and awkward syntax, very long learning curve, very difficult to do the simplest metaprogramming, etc.
Aug 20 2007
next sibling parent Uno <unodgs tlen.pl> writes:
 The C++ standard will have those features. C++ compilers? Who knows. It 
 took five years for C++98 to get implemented.

GCC 4.3 has some of the coming standard's features already implemented (like variadic templates), and ConceptGCC has working concepts. So there is a chance at least one compiler will be available when the new standard comes out. Uno
Aug 20 2007
prev sibling next sibling parent reply eao197 <eao197 intervale.ru> writes:
On Mon, 20 Aug 2007 20:44:22 +0400, Walter Bright  
<newshound1 digitalmars.com> wrote:

 eao197 wrote:
 On Mon, 20 Aug 2007 10:05:26 +0400, Walter Bright  
 <newshound1 digitalmars.com> wrote:

 eao197 wrote:
 BTW, there is a C++0x overview in Wikipedia:  
 http://en.wikipedia.org/wiki/C%2B%2B0x
 It is interesting to know which advantages D (2.0? 3.0?  
 4.0?) will have over C++0x. Maybe only fast compilation and GC.

Looks like C++ is adding D features thick & fast!

a significant number of C++ programmers needn't look to D -- they will get new advanced features without dropping their old tools, IDEs and libraries. I'm afraid that would play against D :(

The trouble with the new features is they don't fix the inscrutably awful syntax of complex C++ code; in fact, they make it worse. C++ will become even more of an "experts only" language.

It reminds me of 'Worse is Better' (http://en.wikipedia.org/wiki/Worse_is_Better). I'm not a C++ expert, but I don't have any serious problems with C++, and such features allow me to write C++ more productively while keeping all my codebase. So I'm afraid many experienced C++ programmers will stay with C++. Because of that, D must be aimed at a different programmer audience, to compete with Java/C#/Scala...
 Current C++ is far behind D, but D is not stable, not mature, and not as  
 well equipped with tools/libraries as C++. So it will take several years  
 to make D competitive with C++ in that area. But if in 2010 (it is only  
 2.5 years ahead) C++ has things like lambdas and autos (and tons of  
 libraries and an army of programmers), what will be D's 'killer feature'  
 to attract C++ programmers? And not only C++: by that time D would compete  
 with new versions of C#, Java, Scala, Nemerle (probably) and with some  
 functional languages (like Haskell and OCaml).

The C++ standard will have those features. C++ compilers? Who knows. It took five years for C++98 to get implemented. C++'s problems are still in place, though. Like no modules, verbose and awkward syntax, very long learning curve, very difficult to do the simplest metaprogramming, etc.

Yes, but now there are only a few C++ compiler vendors (unlike in '98). There is hope that GCC will have almost all the new C++ features in the near future. -- Regards, Yauheni Akhotnikau
Aug 20 2007
parent Walter Bright <newshound1 digitalmars.com> writes:
eao197 wrote:
 Yes, but now there are only a few C++ compiler vendors (unlike in '98). There 
 is hope that GCC will have almost all the new C++ features in the near future.

Every new revision to the C++ standard kills off more C++ vendors.
Aug 22 2007
prev sibling parent janderson <askme me.com> writes:
Walter Bright wrote:
 The C++ standard will have those features. C++ compilers? Who knows. It 
 took five years for C++98 to get implemented.
 
 C++'s problems are still in place, though. Like no modules, verbose and 
 awkward syntax, very long learning curve, very difficult to do the 
 simplest metaprogramming, etc.

Another awesome (and at the same time annoying) thing about D: real-time development. Well, not quite, but you know what I mean. -Joel
Aug 20 2007
prev sibling parent reply "Craig Black" <cblack ara.com> writes:
"eao197" <eao197 intervale.ru> wrote in message 
news:op.txc0txbtsdcfd2 eao197nb2.intervale.ru...
 On Mon, 20 Aug 2007 10:05:26 +0400, Walter Bright 
 <newshound1 digitalmars.com> wrote:

 eao197 wrote:
 BTW, there is a C++0x overview in Wikipedia: 
 http://en.wikipedia.org/wiki/C%2B%2B0x
 It is interesting to know which advantages D (2.0? 3.0? 4.0?) will have 
 over C++0x. Maybe only fast compilation and GC.

Looks like C++ is adding D features thick & fast!

Yes! But C++ is doing that without breaking the existing codebase. So a significant number of C++ programmers needn't look to D -- they will get new advanced features without dropping their old tools, IDEs and libraries. I'm afraid that would play against D :( Current C++ is far behind D, but D is not stable, not mature, and not as well equipped with tools/libraries as C++. So it will take several years to make D competitive with C++ in that area. But if in 2010 (it is only 2.5 years ahead) C++ has things like lambdas and autos (and tons of libraries and an army of programmers), what will be D's 'killer feature' to attract C++ programmers? And not only C++: by that time D would compete with new versions of C#, Java, Scala, Nemerle (probably) and with some functional languages (like Haskell and OCaml). -- Regards, Yauheni Akhotnikau

Agreed. The standard is moving faster and includes more improvements than I previously expected. What's more, major compilers have already begun working on these new features. Some of these features will be available in GCC 4.3 and VC++ 2008 (expected February). I would not be surprised if most of the standard features are implemented in compilers by 2010 as you suggest.

However, the benefit of D over C++ is still cleaner, more powerful syntax in general. C++ might be getting more capability, but even so, it will not match D in clean expressive power. Additionally, two and a half years is a long time for D to advance as well, and D's progression is very fast. Overall I have been very happy with D's progress over the past few years, both in compiler and library development.

That said, there are a number of things that I think would aid the adoption of D over the next few years. First, D needs to at the very least match the features that are added to C++ with regard to parallelism and concurrency. Another is to iron out whatever issues or perceived issues there are with D 2.0, so that it will be accepted by the D community enough for library writers to migrate their code.

A dead horse perhaps, but I still think it would serve D well to have better C++ integration. Granted, this is a tough problem, as Walter emphasizes, but so was integrating Managed .NET C++ with native C++, which Microsoft was able to do rather well. Experience has taught me that there is always a solution to issues like this, but it sometimes requires us to think about the problem in a different way.

Other than that, fixing compiler bugs is probably the most important thing for D right now. I am especially looking forward to fixes that will make __traits usable (if that's still what it's called). One particular feature of personal interest is better support for structs (ctors, dtors, etc.). This will help with the complex mathematical data structures I use that must be uber-efficient.

As far as these newfangled macros, D is so powerful already, I don't really know exactly what this will give us over what we already have. But perhaps I haven't given this as much thought as others have. -Craig
Aug 20 2007
next sibling parent reply Ingo Oeser <ioe-news rameria.de> writes:
Craig Black wrote:

 First, D needs to at the very least match the features that are added to
 C++ with regards to parallelism and concurrency.

Yes, I already had some discussions and proposals about this in another thread.

- A more explicit loop notation which can do map(), reduce(), filter().
  These are well understood and known idioms, which state the data
  dependencies explicitly with just a single keyword. OpenMP basically
  does just this, plus some thread management for advanced stuff.
  -> Easy to add now, with improved support from newer library functions later.
  -> Also lets the compiler optimize via auto-vectorisation for simpler
     cases (like shorter loop bodies).

- Please remove the inXX() and outXX() intrinsics.
  They are one-liners in asm on X86 and not present on many architectures.

- The asm construct should be backend dependent. This will aid its optimal
  integration into the surrounding D code.

Reason: asm is very seldom used these days and usually states one of 3 things:

1. The optimizer of my compiler sucks
2. I'm l33t
3. I write something which can/should not be expressed in D, because it is
   highly machine dependent and a NOP on many machines (e.g. inp()/outp())

Short term, 1. and 2. are acceptable, but long term only 3. is. Case 3. is "write once and never touch again", but it has to integrate very tightly into the surrounding code and thus has to answer many questions:

- How many delay slots are still free?
- Which registers are spilled, read, written?
- Where is the result?
- Which CPU units are used/unused?
- In what state is the pipeline of that unit?
...

Even GCC asm syntax cannot express all of this yet, AFAIK, and much platform-specific hard assembly is written with GCC asm syntax. So something at least as powerful, but defined as opaquely as a mixin, might be more useful here. Maybe something opaque like the mixin() statement, which passes everything through to the backend, would be better. Especially for DSP architectures.
 Other than that, fixing compiler bugs is probably the most important thing
 for D right now.  I am especially looking forward to fixes that will make
 __traits usable (if that's still what it's called).

Yes, progress there is most exciting for me at the moment and I think the developers do a good job there.
 One particular feature of pesonal interest is better support for structs
 (ctors, dtors, etc.)  This will help with complex mathematical data
 structures that I use that must be uber-efficient.

ctors which only assign and MUST assign all values might be very useful. Static initializers with C99 syntax will be very welcome, too.
 As far as these newfangled macros, D is so powerful already, I don't
 really know exactly what this will give us over what we already have.  But
 perhaps I haven't given this as much thought as others have.

Are there any articles about the current macro design decisions? Best Regards Ingo Oeser
Aug 21 2007
parent reply Downs <default_357-line yahoo.de> writes:
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

Ingo Oeser wrote:
 Craig Black wrote:
 
 First, D needs to at the very least match the features that are added to
 C++ with regards to parallelism and concurrency.

Yes, I already had some discussions and proposal about this in another thread. - More explicit loop notion which can do map(), reduce(), filter().

but they're trivial to do as freestanding functions.
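Indeed, a freestanding map/reduce pair is only a few lines of D (these helper names are illustrative, not a library API):

```d
// Apply fn to every element, returning a new array
T[] map(T)(T[] arr, T delegate(T) fn)
{
    T[] result = new T[arr.length];
    foreach (i, v; arr)
        result[i] = fn(v);
    return result;
}

// Fold the array into a single value, starting from seed
T reduce(T)(T[] arr, T seed, T delegate(T, T) fn)
{
    T acc = seed;
    foreach (v; arr)
        acc = fn(acc, v);
    return acc;
}

// e.g. sum of squares:
// int s = reduce(map(nums, (int x) { return x * x; }),
//                0, (int a, int b) { return a + b; });
```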
   These are well understood and known idioms, which explicitly state how the
   data dependency is with just a single keyword. OpenMP basically does just 
   this plus some thread management for advanced stuff.
   -> Easy to add now and improve later support with newer library functions.
   -> Allows also to let the compiler do the optimisation via auto 
      vectorisation for simpler cases (like shorter loop bodies).

GCC has autovectorization support in principle, so whenever the gdc maintainer gets around to fixing the tree format for statements, gdc will be able to take advantage of this.
 
 - Please remove the inXX() and outXX() intrinsics.
   They are oneliners in asm on X86 and not present on many architectures.
 

 - asm construct should be backend dependent.

 
 Other than that, fixing compiler bugs is probably the most important thing
 for D right now.


 I am especially looking forward to fixes that will make
 __traits usable (if that's still what it's called).

Yes, progress there is most exciting for me at the moment and I think the developers do a good job there.
 One particular feature of pesonal interest is better support for structs
 (ctors, dtors, etc.)


 This will help with complex mathematical data
 structures that I use that must be uber-efficient.

ctors which only assign and MUST assign all values might be very useful. Static initializers with C99 syntax will be very welcome, too.
 As far as these newfangled macros, D is so powerful already, I don't
 really know exactly what this will give us over what we already have.  But
 perhaps I haven't given this as much thought as others have.

Are there any articles about the current macro design decisions? Best Regards Ingo Oeser

-----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.7 (GNU/Linux) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org iD8DBQFGzSrdpEPJRr05fBERAlXxAJ9uxf9C9IDQ4xwpS1U6ZR2ymGHFpQCgmSro TJhan50sNAM0WqVkuOGMD60= =7Eka -----END PGP SIGNATURE-----
Aug 22 2007
parent reply Ingo Oeser <ioe-news rameria.de> writes:
Downs wrote:
 Ingo Oeser wrote:
 - More explicit loop notion which can do map(), reduce(), filter().

but they're trivial to do as freestanding functions.

Ok, with your tools, I could already CODE like that right now and remove the "import iter;" later. Thanks for that :-) But I want to express that there isn't any defined order of execution, as long as the data flow is correct. And of course I would like them to be overloadable, to compose and distribute them driven by data flow. I just like to express the data flow more explicitly instead of depending on the optimizer to figure it out by itself. Let's help the optimizer greatly here.
 GCC has autovectorization support in principle, so whenever the gdc
 maintainer gets around to fixing the tree format for statements, gdc
 will be able to take advantage of this.

I know. But for me this is just a side effect, and it is the compiler guessing by itself that I do map(), reduce(), filter(). The optimizer is more useful to the programmer doing (memory) copy elimination and lifetime analysis, so the programmer can write more readable code by using more temporaries.
 - Please remove the inXX() and outXX() intrinsics.
   They are oneliners in asm on X86 and not present on many architectures.
 


They provide port I/O (IN PORT and OUT PORT) on X86; in Intel syntax, for your reference:

OUT PORT:
    mov dx, IOport
    mov al, Value
    out dx, al

IN PORT:
    mov dx, IOport
    in  al, dx
    mov ReturnValue, al

These are of course privileged operations, and the compiler doesn't actually know your privilege level. So this kind of intrinsic is just nonsense. It is also nonsense in a standard library, since >99% of all programs out there don't need to do that, and the rest code it either in assembly or use a special API for that. Best Regards Ingo Oeser
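For illustration, the one-liner wrappers in question might look like this in D's inline assembler (x86 only; outp8/inp8 are made-up names, and the instructions fault without I/O privilege):

```d
version (X86)
{
    // Write one byte to an x86 I/O port (privileged operation)
    void outp8(ushort port, ubyte value)
    {
        asm
        {
            mov DX, port;
            mov AL, value;
            out DX, AL;
        }
    }

    // Read one byte from an x86 I/O port; the result is left in AL,
    // which DMD accepts as the return value of a ubyte function
    ubyte inp8(ushort port)
    {
        asm
        {
            mov DX, port;
            in AL, DX;
        }
    }
}
```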
Aug 23 2007
next sibling parent reply 0ffh <spam frankhirsch.net> writes:
Ingo Oeser wrote [about port i/o]:
 These are of course privileged operations and the compiler 
 doesn't actually know your privilege level. So this kind of intrinsic
 is just nonsense. It is also non-sense in a standard library, since >99%
 of all programs out there don't need to do that and the rest codes
 it either in assembly or uses a special API for that.

I do not agree. I've been writing system-level software on a number of architectures for more than 15 years, mostly in C, and of course these functions are used. Also, I have never even heard of any "special API" for port i/o, and wonder what such a thing might be needed for (unless your compiler is crippled). Regards, Frank
Aug 23 2007
parent reply Ingo Oeser <ioe-news rameria.de> writes:
0ffh wrote:
 Ingo Oeser wrote [about port i/o]:
 These are of course privileged operations and the compiler
 doesn't actually know your privilege level. So this kind of intrinsic
 is just nonsense. It is also non-sense in a standard library, since >99%
 of all programs out there don't need to do that and the rest codes
 it either in assembly or uses a special API for that.

I do not agree. I've been writing system-level software on a number of architectures for more than 15 years, mostly in C, and of course these functions are used.

But they have quite different properties/semantics. You should have noticed that some architectures don't even have them and treat all I/O as special memory accesses. E.g. "There exists no such thing as port-based I/O on AVR32" and "The ARM doesn't have special IO access instructions". Or "On MIPS I/O ports are memory mapped, so we access them using normal load/store instructions". Just to quote some random comments from include/asm-*/io.h in Linux. Oh, and some port i/o needs to be slowed down, byte-swapped, or to include barriers. How should an intrinsic handle that?
 Also, I never even heard of any "special API" for port i/o, and wonder
 what such a thing might be needed for (unless your compiler is crippled).

I mean a special API for ENABLING port i/o for your task. How does the intrinsic know whether you can actually DO port i/o in that task? Inline assembler functions are much better suited to that task. And how to do it in assembler is usually in the example collection of your system manual. Even better is to treat that stuff as special memory and define unified i/o memory accessors somewhere in a library. That stuff isn't fast anyway, so the class overhead might not be too significant here, as long as it is bounded.
Aug 26 2007
parent reply 0ffh <spam frankhirsch.net> writes:
Ingo Oeser wrote:
 0ffh wrote:
 But they have quite different properties/semantics. You should have noticed
 that some architectures don't even have them and treat all I/O as special
 memory accesses.

Sure did. Doesn't prevent the compiler from supporting it. Some compilers support FP on architectures without FP. Good thing, too!
 Oh, and some port i/o needs to be slowed down, byte-swapped, or to include
 barriers. How should an intrinsic handle that?

You normally use them as primitives and build more complex functions around them.
 I mean special API for ENABLING port i/o for your task. How does the
 intrinsic know, whether you can actually DO port i/o in that task?

It can't and why should it? BTW not all architectures (and all modes) need any special enabling.
 Inline assembler functions are much more suited to that task.

I have to use inline asm every time? IIRC DMD doesn't inline funs with inline asm, so no good.
 That stuff isn't fast anyway, so the class overhead might not be
 too significant here, as long as it is bounded.

I wouldn't count on it. Anyway, my basic point is: why throw it out just because "99% don't need it anyway"? It's there already, and some people are quite happy about it! It does not in any way make DMD less usable for the "99%", so why do you insist on treading on the "1%" minority? Regards, Frank
Aug 26 2007
parent reply Ingo Oeser <ioe-news rameria.de> writes:
0ffh wrote:

 Ingo Oeser wrote:
 Oh, and some port i/o needs to be slowed down, byte-swapped, or to include
 barriers. How should an intrinsic handle that?


My problem is that this behavior isn't defined.
 I mean special API for ENABLING port i/o for your task. How does the
 intrinsic know, whether you can actually DO port i/o in that task?

It can't and why should it? BTW not all architectures (and all modes) need any special enabling.

I know. I'm a system programmer, too :-)
 Inline assembler functions are much more suited to that task.

I have to use inline asm every time? IIRC DMD doesn't inline funs with inline asm, so no good.

GDC does, so DMD might get it one day.
 Why throw it out just because "99% don't need it anyway"?
 It's there already, and some people are quite happy about it!
 
 It does not in any way make DMD less usable for the "99%",
 so why do you insist on treading on the "1%" minority?

Because every D compiler HAS to implement it for every architecture. And it has to fake it somehow on architectures which don't have port i/o. How to fake that correctly is simply not defined (in contrast to IEEE FP math). Making inline assembly better (e.g. giving the compiler more knowledge about the instructions and their constraints, making it inlineable) is the more useful goal. These compiler refinements then work for ALL instructions on EVERY architecture and may someday even be more powerful than GCC's inline assembler syntax. This will reduce X86ism in D. DMD can learn A LOT from GCC in that area. Best Regards Ingo Oeser
Aug 26 2007
next sibling parent Lutger <lutger.blijdestijn gmail.com> writes:
Ingo Oeser wrote:
 Because every D compiler HAS to implement it for every architecture.
 And it has to fake it somehow on architectures, which don't have
 port i/o. How to fake that correctly, is simply not defined 
 (in contrast to IEEE FP math).

Why must every D compiler implement them? I thought intrinsics are part of the Phobos / DMD implementation of D, not of the D language itself, unlike FP math.
Aug 26 2007
prev sibling parent reply 0ffh <spam frankhirsch.net> writes:
Ingo Oeser wrote:
 Because every D compiler HAS to implement it for every architecture.

I concur with Lutger on this: IIRC intrinsics are a compiler thing, not a language thing. They are routinely used for compiler-specific stuff, I take that as a hint.
 Making inline assembly better [...] is the more useful goal. [...]
 This will reduce X86ism in D. DMD can learn A LOT from GCC in that area.

Ach, blast! I don't care to fight any more over this... once macros arrive I won't even need inlineable functions anymore! ALL HAIL WALTER AND HIS AST MACROS! :-))) They might be just the kick-ass feature to kick off the final take-off... Regards, frank
Aug 26 2007
next sibling parent 0ffh <spam frankhirsch.net> writes:
0ffh wrote:
                ALL HAIL WALTER AND HIS AST MACROS! :-)))

YAY!!! Regards, frank p.s. Sorry for self-reply, I re-read it and couldn't resist! =)
Aug 26 2007
prev sibling parent reply Carlos Santander <csantander619 gmail.com> writes:
0ffh wrote:
 Ingo Oeser wrote:
 Because every D compiler HAS to implement it for every architecture.

I concur with Lutger on this: IIRC intrinsics are a compiler thing, not a language thing. They are routinely used for compiler-specific stuff, I take that as a hint.

I remember that Walter once said that everything in Phobos under std/ was standard, as a part of the D standard (I hope I'm getting my words right.) Thus, as Ingo said, a standard D compiler has to implement those things. This (not specifically inp/outp, but Phobos in general) was a problem when there were licensing issues (I wonder if there still are some), as other D implementations would not be able to provide those features. In this case, the architecture-specific parts would be the issue to overcome. In a way, it would be like expecting all OSes to have a registry. The sound solution was to put the Windows Registry stuff under std.windows. A D compiler that doesn't run on Windows wouldn't need to provide those modules. The same could be done for these intrinsics: put them in std.arch.x86.intrinsics, or something like that.
 Making inline assembly better [...] is the more useful goal. [...]
 This will reduce X86ism in D. DMD can learn A LOT from GCC in that area.

Ach, blast! I don't care to fight any more over this... come macros I don't even need inlineable functions anymore! ALL HAIL WALTER AND HIS AST MACROS! :-))) They might be just the kick ass feature to kick off the final take off... Regards, frank

-- Carlos Santander Bernal
Aug 26 2007
parent Ingo Oeser <ioe-news rameria.de> writes:
Carlos Santander wrote:

 I remember that Walter once said that all in Phobos under std/ was
 standard, as a part of the D standard (I hope I'm getting my words right.)
 Thus, as Ingo said, a standard D compiler has to implement those things.

Yes, that's my point.
 In a way, it would be like expecting all OSes to have a registry. The
 sound solution was to put the Windows Registry stuff under std.windows. A
 D compiler that doesn't run on Windows wouldn't need to provide those
 modules. The same could be done for these intrinsics: put them in
 std.arch.x86.intrinsics, or something like that.

Yes, that sounds sane enough. And if you implement a D-compiler for AVR, you don't need to implement that stuff. Best Regards Ingo Oeser
Aug 28 2007
prev sibling parent Downs <default_357-line yahoo.de> writes:

Ingo Oeser wrote:
 Downs wrote:
 Ingo Oeser wrote:
 - More explicit loop notion which can do map(), reduce(), filter().

but they're trivial to do as freestanding functions.

Ok, with your tools, I could already CODE like that right now and remove the "import iter;" later. Thanks for that :-) But I want to express that there isn't any defined order of execution, as long as the data flow is correct. And of course I would like them to be overloadable, so they can be composed and distributed driven by data flow. I just like to express the data flow more explicitly instead of depending on the optimizer to figure it out itself. Let's help the optimizer greatly here.

FWIW, you can already run a MT foreach variation on the tools.threadpool class, and it wouldn't be that hard to write a tree reduce that does the same .. but I see your point and agree. In the end, programming languages should come as close as possible to capturing your intentions with a given piece of code, which is why this kind of metadata would be extremely useful to a clever compiler. --downs
Aug 24 2007
prev sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
Craig Black wrote:
 First, D needs to at the very least match the features that are added to C++ 
 with regards to parallelism and concurrency.

D will be addressing the problem by moving towards supporting pure functions, which are automatically parallelizable. I think this will be much more powerful than C++'s model. Also, D already implements a superset of some of C++0x's synchronization primitives.
Aug 22 2007
next sibling parent reply serg kovrov <sergk mailinator.com> writes:
Walter Bright wrote:
 D will be addressing the problem by moving towards supporting pure 
 functions, which are automatically parallelizable. I think this will be 
 much more powerful than C++'s model.

Very interesting, could you tell more regarding automatically parallelizable functions? -- serg
Aug 23 2007
parent Walter Bright <newshound1 digitalmars.com> writes:
serg kovrov wrote:
 Walter Bright wrote:
 D will be addressing the problem by moving towards supporting pure 
 functions, which are automatically parallelizable. I think this will 
 be much more powerful than C++'s model.

Very interesting, could you tell more regarding automatically parallelizable functions?

Andrei and I covered this in our presentation on D at the D conference, which we'll post here soon.
Aug 23 2007
prev sibling next sibling parent Stephen Waits <steve waits.net> writes:
Walter Bright wrote:
 D will be addressing the problem by moving towards supporting pure 
 functions, which are automatically parallelizable. I think this will be 
 much more powerful than C++'s model.

I heard the songs of angels in my mind when I read this. --Steve
Aug 23 2007
prev sibling parent Sean Kelly <sean f4.ca> writes:
Walter Bright wrote:
 Craig Black wrote:
 First, D needs to at the very least match the features that are added 
 to C++ with regards to parallelism and concurrency.

D will be addressing the problem by moving towards supporting pure functions, which are automatically parallelizable. I think this will be much more powerful than C++'s model. Also, D already implements a superset of some of C++0x's synchronization primitives.

And with inline asm and volatile, an atomic operations package is fairly easy to implement in D (most easily for x86 for obvious reasons). I really think D is in fairly good shape for concurrent programming even without a carefully established multithread-aware memory model. Sean
Aug 25 2007
prev sibling next sibling parent anonymous <foo bar.com> writes:
eao197 Wrote:

[...]
 BTW, there is a C++0x overview in Wikipedia:  
 http://en.wikipedia.org/wiki/C%2B%2B0x
 
 It is interesting to know which advantages D (2.0? 3.0? 4.0?) will have
 over C++0x. Maybe only high-speed compilation and GC.

To this programmer, D is much clearer and more cleanly structured. Coming from Pascal/Delphi, it is far easier to understand. It also has a bit of the air of Perl: there is more than one way to do it. In C++, there seems to be only one right way, but that way is hard to understand.
Aug 20 2007
prev sibling next sibling parent reply Jari-Matti Mäkelä <jmjmak utu.fi.invalid> writes:
eao197 wrote:

 On Sun, 19 Aug 2007 23:36:07 +0400, Bill Baxter
 <dnewsgroup billbaxter.com> wrote:
 
 A lot of you probably saw Bjarne Stroustrup's talk on C++0x on the web.
 If not here's the link:
    http://csclub.uwaterloo.ca/media/C++0x%20-%20An%20Overview.html

BTW, there is a C++0x overview in Wikipedia: http://en.wikipedia.org/wiki/C%2B%2B0x It is interesting to know which advantages D (2.0? 3.0? 4.0?) will have over C++0x. Maybe only high-speed compilation and GC.

I would put my hopes on the macros, the type system and other metaprogramming stuff. Those are areas in which C++ doesn't really shine. I think the contrast between "We, the Gods, decided to give you 2 new keywords now; if you ask nicely, the next release might have 3 more." and "We give you all the power to create your own constructs." becomes more apparent now that C++ has started taking steps towards Lisp, expressiveness-wise. Still, if C++1x implements these too, there won't be much need for D anymore besides as a syntactic "skin" over the C++ ugliness.
Aug 20 2007
parent reply eao197 <eao197 intervale.ru> writes:
On Mon, 20 Aug 2007 16:35:41 +0400, Jari-Matti Mäkelä  
<jmjmak utu.fi.invalid> wrote:

 eao197 wrote:

 On Sun, 19 Aug 2007 23:36:07 +0400, Bill Baxter
 <dnewsgroup billbaxter.com> wrote:

 A lot of you probably saw Bjarne Stroustrup's talk on C++0x on the web.
 If not here's the link:
    http://csclub.uwaterloo.ca/media/C++0x%20-%20An%20Overview.html

BTW, there is a C++0x overview in Wikipedia: http://en.wikipedia.org/wiki/C%2B%2B0x It is interesting to know which advantages D (2.0? 3.0? 4.0?) will have over C++0x. Maybe only high-speed compilation and GC.

I would put my hopes on the macros, type system and other metaprogramming stuff.

If someone really needs a flexible macro and metaprogramming subsystem, it is better to look at Nemerle.
 Those are areas in which C++ doesn't really shine.

IMHO, macros and metaprogramming are areas which C++ simply does not need. It is much easier to write a small code-generation script in Perl/Ruby/Python and include its result into C++ via '#include'.
 "We give you all the power to create your own constructs."

I'm afraid it would lead to another Lisp-like failure: each Lisper writes his own domain-specific language to solve exactly the same problem. -- Regards, Yauheni Akhotnikau
Aug 20 2007
next sibling parent Bruno Medeiros <brunodomedeiros+spam com.gmail> writes:
eao197 wrote:
 
 "We give you all the power to create your own constructs."

 I'm afraid it would lead to another Lisp-like failure: each Lisper writes his own domain-specific language to solve exactly the same problem.

That's my thoughts exactly on LISP's "power to create your own constructs" issue! -- Bruno Medeiros - MSc in CS/E student http://www.prowiki.org/wiki4d/wiki.cgi?BrunoMedeiros#D
Aug 20 2007
prev sibling parent reply Jari-Matti Mäkelä <jmjmak utu.fi.invalid> writes:
eao197 wrote:

 On Mon, 20 Aug 2007 16:35:41 +0400, Jari-Matti Mäkelä
 <jmjmak utu.fi.invalid> wrote:
 
 eao197 wrote:

 On Sun, 19 Aug 2007 23:36:07 +0400, Bill Baxter
 <dnewsgroup billbaxter.com> wrote:

 A lot of you probably saw Bjarne Stroustrup's talk on C++0x on the web.
 If not here's the link:
    http://csclub.uwaterloo.ca/media/C++0x%20-%20An%20Overview.html

BTW, there is a C++0x overview in Wikipedia: http://en.wikipedia.org/wiki/C%2B%2B0x It is interesting to know which advantages D (2.0? 3.0? 4.0?) will have over C++0x. Maybe only high-speed compilation and GC.

I would put my hopes on the macros, type system and other metaprogramming stuff.

If someone really need flexible macro- and metaprogramming-subsystem it is better to look to Nemerle.

It isn't well suited for system programming.
 
 Those are areas in which C++ doesn't really shine.

IMHO, macros and metaprogramming are areas which C++ simply does not need. It is much easier to write a small code-generation script in Perl/Ruby/Python and include its result into C++ via '#include'.

Is it better for each C++ coder to write his own domain-specific build tool to solve exactly the same problem than to bolt that functionality into the core language?
 
 "We give you all the power to create your own constructs."

I'm afraid it would lead to another Lisp-like failure: each Lisper writes his own domain-specific language to solve exactly the same problem.

If C++ had had enough compile-time capabilities, many of the new feature proposals would have been implementable at the library level, and thus available to most currently available compilers without the kind of transition costs/delays we see now. What's so bad about DSLs? They're a typical programming idiom in Lisp, just as functions or assignments are in BCPL-like languages. Also, I think the number of algorithm implementations for Lisp can pretty much be explained by the age and popularity of the language. In case you haven't noticed, there are already 18+ GUI toolkit bindings/implementations for D (according to wiki4d), and D is 41 years younger than Lisp.
Aug 20 2007
parent reply eao197 <eao197 intervale.ru> writes:
On Mon, 20 Aug 2007 17:31:59 +0400, Jari-Matti Mäkelä  
<jmjmak utu.fi.invalid> wrote:

 I would put my hopes on the macros, type system and other  
 metaprogramming
 stuff.

If someone really need flexible macro- and metaprogramming-subsystem it is better to look to Nemerle.

It isn't well suited for system programming.

What kind of system programming? Writing compilers and writing drivers are both examples of system programming, but Nemerle is well suited to the first, not the second. Does low-level system programming (like drivers) really need macros and/or metaprogramming?
 Those are areas in which C++ doesn't really shine.

IMHO, macros and metaprogramming are areas which C++ simply does not need. It is much easier to write a small code-generation script in Perl/Ruby/Python and include its result into C++ via '#include'.

Is it better for each C++ coder to write his own domain-specific build tool to solve exactly the same problem than bolt that functionality to the core language?

There are build tools more modern than make (like SCons) which make pre-compile-time code generation a smoother task. And after years of C++ development I've come to the conclusion that using C++ together with some scripting language (like Ruby) is more flexible, easy and fast than using only one main language (like C++/Java).
 In case you haven't noticed,
 there are already 18+ GUI toolkit bindings/implementations (according to
 wiki4d) for D and D is 41 years younger than Lisp.

Do you think that 18+ GUI bindings are a good thing? I know that D has two standard libraries, and this is not good at all :( -- Regards, Yauheni Akhotnikau
Aug 20 2007
next sibling parent reply Gregor Kopp <gregor.kopp chello.at> writes:
eao197 wrote:
 I know that D has two standard libraries and this is not good at all :(

That is really annoying! I think that this could break the neck of D.
Aug 20 2007
parent eao197 <eao197 intervale.ru> writes:
On Mon, 20 Aug 2007 18:21:13 +0400, Gregor Kopp <gregor.kopp chello.at>  
wrote:

 eao197 wrote:
 I know that D has two standard libraries and this is not good at all :(

That is really annoying! I think that this could break the neck of D.

I hope this issue will be solved during D Conference. -- Regards, Yauheni Akhotnikau
Aug 20 2007
prev sibling parent reply Jari-Matti Mäkelä <jmjmak utu.fi.invalid> writes:
eao197 wrote:

 On Mon, 20 Aug 2007 17:31:59 +0400, Jari-Matti Mäkelä
 <jmjmak utu.fi.invalid> wrote:
 
 I would put my hopes on the macros, type system and other
 metaprogramming
 stuff.

If someone really need flexible macro- and metaprogramming-subsystem it is better to look to Nemerle.

It isn't well suited for system programming.

What kind of system programming? Writing compilers and writing drivers are both examples of system programming, but Nemerle is well suited to the first, not the second. Does low-level system programming (like drivers) really need macros and/or metaprogramming?

If you look at e.g. the Linux sources, there's some heavy use of preprocessor macros. With proper language support that could be made less error-prone and more productive. Even some formal proofs might become possible. Nemerle isn't bad as a language, but it doesn't scale to low-level stuff as long as it runs on a VM. In my opinion, having a language that scales from driver/kernel level to end-user GUI apps, and likewise from asm opcode level to high-level FP constructs, isn't a bad idea. For example, game development is one area where more speed and more abstraction are always welcome. I suppose most application programs will run in a VM in the distant future, but at the moment at least my PCs are crying for mercy if I try to run bigger Java apps (running code analysis for a relatively small Java GUI app took about 2 hours in Eclipse). Flash applets aren't any better: a 300x200 MPEG-2 movie can choke the system even though mplayer plays several simultaneous 720p streams just fine. The only .NET app I've used was a video card control panel on Windows. It felt too unresponsive to be usable in the long term.
 
 Those are areas in which C++ doesn't really shine.

IMHO, macros and metaprogramming are areas which C++ simply does not need. It is much easier to write a small code-generation script in Perl/Ruby/Python and include its result into C++ via '#include'.

Is it better for each C++ coder to write his own domain-specific build tool to solve exactly the same problem than bolt that functionality to the core language?

There are build tools more modern than make (like SCons) which make pre-compile-time code generation a smoother task. And after years of C++ development I've come to the conclusion that using C++ together with some scripting language (like Ruby) is more flexible, easy and fast than using only one main language (like C++/Java).

I agree. But I was on another abstraction level. Things like the active and atomic keywords in C++0x are features that are much easier to implement with built-in macro functionality than with 3rd-party preprocessors / build tools.
 
 In case you haven't noticed,
 there are already 18+ GUI toolkit bindings/implementations (according to
 wiki4d) for D and D is 41 years younger than Lisp.

Do you think that 18+ GUI bindings are a good thing?

Probably not.
 I know that D has two standard libraries and this is not good at all :(

I know :/
Aug 20 2007
next sibling parent eao197 <eao197 intervale.ru> writes:
On Mon, 20 Aug 2007 18:39:44 +0400, Jari-Matti Mäkelä  
<jmjmak utu.fi.invalid> wrote:

 Nemerle isn't bad as a language, but it doesn't scale to low level stuff  
 as
 long as it runs on a VM. In my opinion having a language that scales from
 driver/kernel level to end user GUI apps and OTOH from asm opcode level  
 to
 high level FP constructs isn't a bad idea. For example game development  
 is
 one area where more speed and abstractions are always welcome.

I don't know about game development, but I can mention another area: telecommunications. An SMS/MMS gateway requires many low-level bit/byte transformation operations and much high-level logic like transaction routing. But I've learnt that more verbose code, written only with standard language features, is much more maintainable than more compact code written with some domain-specific extensions. But maybe it is just my karma :) -- Regards, Yauheni Akhotnikau
Aug 20 2007
prev sibling parent reply Robert Fraser <fraserofthenight gmail.com> writes:
 I suppose most application programs will run in a VM in the distant future,
 but at the moment at least my PCs are crying for mercy if I try to run
 bigger Java apps (running code analysis for a relatively small Java GUI app
 took about 2 hours in Eclipse). Flash applets aren't any better: a 300x200
 mpeg2 movie can choke the system even though mplayer plays several
 simultaneous 720p streams just fine. The only .Net app I've used was a
 video card control panel on Windows. It felt too unresponsive to be usable
 in the long term.

Really? JIT compilers (LaTTe, Sun's Java 5+, probably IBM's JVMs, the .NET runtime, Mono...) compile things to native code, so besides a little compilation overhead at startup, they should run just as fast as native code, if not faster. I think one of the problems as far as speed is concerned is that the languages were not designed for pure efficiency, and make heavy use of heap allocation, etc. That said, there are already some applications that run faster on a JIT VM, because the VM can do certain optimizations that a native compiler can't, and, further, can do them transparently (without programmer interaction). Especially as multi-cores become more prevalent, VMs will be able to automatically vectorize/parallelize loops, which means that, if nothing else, they provide the power to spare the "average developer", who might not know so much about optimization or parallelism, from those horrors. Flash is a different story, as I doubt ActionScript is JITed. As far as Eclipse goes, the IDE does a _lot_ behind the scenes (it compiles any changes every time you stop typing for a couple of seconds, marks semantic errors & resolves bindings as you type, etc., etc.), so on slower computers it might sometimes feel a little sluggish. Java's String class is also partly to blame (40 bytes of heap-allocated overhead for every string is a lot), but that's more an issue with the coding style & standard library than with VMs in general.
Aug 20 2007
parent =?UTF-8?B?U3TDqXBoYW4gS29jaGVu?= <stephan kochen.nl> writes:
Robert Fraser wrote:
 That said, there are already some applications that run faster on a JIT VM
because the VM can do certain optimizations that a native compiler can't, and,
further, can do it transparently (without programmer interaction). Especially
as multi-cores become more prevalent, VMs will be able to automatically
vectorize/parallelize loops, which means that, if nothing else, they provide
the power to spare the "average developer" who might not know so much about
optimization or paralellism, from those horrors.

Have a look at LLVM [1]: I've read some articles about it, and they talk about run-time optimization of native code. It's all over the site; a lot of interesting jargon to me. :)
 Flash is a different story, as I doubt ActionScript is JITed. As far as
Eclipse goes, the IDE does a _lot_ behind the scenes (it compiles any changes
every time you stop typing for a couple seconds, marks semantic errors &
resolves bindings as you type, etc., etc.), so on slower computers it might
sometimes feel a little sluggish. Java's String class is also partly to blame
(40 bytes of heap-allocated overhead for every string is a lot), but that's
more an issue with the coding style & standard library than with VMs in general.

Also, Adobe donated its ActionScript JIT compiler (or whatever exactly it is) to Mozilla. The project is called Tamarin [2]. Lazy web, signing off. :) [1] http://llvm.org/ [2] http://mozilla.org/projects/tamarin/
Aug 20 2007
prev sibling next sibling parent reply Carlos Santander <csantander619 gmail.com> writes:
eao197 wrote:
 On Sun, 19 Aug 2007 23:36:07 +0400, Bill Baxter 
 <dnewsgroup billbaxter.com> wrote:
 
 A lot of you probably saw Bjarne Stroustrup's talk on C++0x on the web.
 If not here's the link:
    http://csclub.uwaterloo.ca/media/C++0x%20-%20An%20Overview.html

BTW, there is a C++0x overview in Wikipedia: http://en.wikipedia.org/wiki/C%2B%2B0x

C++0x will be an enormous, ugly, and scary language...
 It is interesting to know which advantages D (2.0? 3.0? 4.0?) will have
 over C++0x. Maybe only high-speed compilation and GC.
 

-- Carlos Santander Bernal
Aug 20 2007
parent reply eao197 <eao197 intervale.ru> writes:
On Mon, 20 Aug 2007 18:41:13 +0400, Carlos Santander  
<csantander619 gmail.com> wrote:

 eao197 wrote:
 On Sun, 19 Aug 2007 23:36:07 +0400, Bill Baxter  
 <dnewsgroup billbaxter.com> wrote:

 A lot of you probably saw Bjarne Stroustrup's talk on C++0x on the web.
 If not here's the link:
    http://csclub.uwaterloo.ca/media/C++0x%20-%20An%20Overview.html

http://en.wikipedia.org/wiki/C%2B%2B0x

C++0x will be an enormous, ugly, and scary language...

Will be? It is! :) But it is here, and will be here. And D is just growing. -- Regards, Yauheni Akhotnikau
Aug 20 2007
parent Carlos Santander <csantander619 gmail.com> writes:
eao197 wrote:
 On Mon, 20 Aug 2007 18:41:13 +0400, Carlos Santander 
 <csantander619 gmail.com> wrote:
 
 eao197 wrote:
 On Sun, 19 Aug 2007 23:36:07 +0400, Bill Baxter 
 <dnewsgroup billbaxter.com> wrote:

 A lot of you probably saw Bjarne Stroustrup's talk on C++0x on the web.
 If not here's the link:
    http://csclub.uwaterloo.ca/media/C++0x%20-%20An%20Overview.html

http://en.wikipedia.org/wiki/C%2B%2B0x

C++0x will be an enormous, ugly, and scary language...

Will be? It is! :)

Hehe, true.
 But it is here, and will be here. And D is just growing.
 

But it's not getting ugly or scary... Big difference... ;) -- Carlos Santander Bernal
Aug 20 2007
prev sibling next sibling parent reply Jascha Wetzel <"[firstname]" mainia.de> writes:
eao197 wrote:
 On Sun, 19 Aug 2007 23:36:07 +0400, Bill Baxter 
 <dnewsgroup billbaxter.com> wrote:
 
 A lot of you probably saw Bjarne Stroustrup's talk on C++0x on the web.
 If not here's the link:
    http://csclub.uwaterloo.ca/media/C++0x%20-%20An%20Overview.html

BTW, there is a C++0x overview in Wikipedia: http://en.wikipedia.org/wiki/C%2B%2B0x It is interesting to know which advantages D (2.0? 3.0? 4.0?) will have over C++0x. Maybe only high-speed compilation and GC.

bjarne stroustrup is talking about an ongoing discussion about GC features getting into the standard. but obviously, none of C++'s real problems will be fixed in C++0x, since that would require removing features, not adding them. on the other hand, they didn't even get the ABI/linking issues into the standard. backward compatibility is bad. C++ and Windows are the most prominent examples of that. it's better to have cuts every now and then and provide separate tools that ease the transition. i don't see any of D's potential diminished by C++0x. i think it's curious how much time bjarne stroustrup spends explaining how constrained C++'s language design process is.
Aug 20 2007
parent Sean Kelly <sean f4.ca> writes:
Jascha Wetzel wrote:
 
 i think it's curious how much time bjarne stroustrup spends explaining 
 how constrained C++'s language design process is.

I don't :-) Bjarne may have created C++, but he hasn't had any real control over the language for perhaps the last fifteen years. Still, Bjarne is the one people look to when wondering why C++ doesn't have some feature they consider important (as this interview can attest). What else can he do but explain, again, why things are the way they are? Sean
Aug 20 2007
prev sibling parent Brad Roberts <braddr puremagic.com> writes:
On Sat, 25 Aug 2007, Sean Kelly wrote:

 Walter Bright wrote:
 Craig Black wrote:
 First, D needs to at the very least match the features that are added to
 C++ with regards to parallelism and concurrency.

D will be addressing the problem by moving towards supporting pure functions, which are automatically parallelizable. I think this will be much more powerful than C++'s model. Also, D already implements a superset of some of C++0x's synchronization primitives.

And with inline asm and volatile, an atomic operations package is fairly easy to implement in D (most easily for x86 for obvious reasons). I really think D is in fairly good shape for concurrent programming even without a carefully established multithread-aware memory model. Sean

As long as you don't care about the performance of calling a function for a single asm operation or writing asm { ... } at each callsite for the atomic operations. The problem is that dmd won't inline functions with inline asm. GDC will, so all isn't lost. Luckily, a future 2.0 feature, macros, will make it easy to shove the asm inline. But yes, it's possible. But it's no better than c++ on that front.. it's only on par. Later, Brad
Aug 25 2007
prev sibling parent janderson <askme me.com> writes:
Bill Baxter wrote:
 A lot of you probably saw Bjarne Stroustrup's talk on C++0x on the web.
 If not here's the link:
   http://csclub.uwaterloo.ca/media/C++0x%20-%20An%20Overview.html
 
 I recommend hitting pause on the video and then go get some lunch while 
 it buffers up enough that you won't get hiccups.  Or if you can figure 
 out how to get those newfangled torrent thingys to work, that's probably 
 a good option too.
 
 --bb

To me this shows why D may be the "better" language syntactically in the long run. While legacy code is a great thing, it is also a weight around C++'s neck. D still has the flexibility to take on many of these good features that would be improbable in C++ due to all the parties involved. Although I hope that D takes a serious look at the new, much more complicated CPU architectures, because I'm afraid that is one area where it could be left behind. -Joel
Aug 20 2007