
digitalmars.D - OSNews article about C++09 degenerates into C++ vs. D discussion

reply Mars <nospam null.void> writes:
http://www.osnews.com/comment.php?news_id=16526
Nov 19 2006
next sibling parent "John Reimer" <terminal.node gmail.com> writes:
On Sun, 19 Nov 2006 12:25:03 -0800, Mars <nospam null.void> wrote:

 http://www.osnews.com/comment.php?news_id=16526

Degenerates? :( -JJR
Nov 19 2006
prev sibling next sibling parent reply BCS <BCS pathilink.com> writes:
Mars wrote:
 http://www.osnews.com/comment.php?news_id=16526

One issue brought up is that of D "requiring" the use of a GC. What would it take to prove that wrong by making a full blown standard lib that doesn't use a GC, and in fact doesn't have a GC? It would be painful to work with but no more so than in C++. OTOH with scope() and such, it might be easy. Anyway, just a thought.
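To make the "scope() and such" idea concrete, here is a minimal, hypothetical sketch of GC-free allocation, assuming the std.c.stdlib module of the era (core.stdc.stdlib in later D); the function and names are illustrative, not from the thread:

    import std.c.stdlib;   // malloc/free, bypassing the GC heap

    void work()
    {
        // allocate outside the GC heap
        int* p = cast(int*) malloc(100 * int.sizeof);
        if (p is null)
            throw new Exception("out of memory");

        // freed deterministically on every exit path - no GC involved
        scope(exit) free(p);

        p[0 .. 100] = 0;   // use the buffer as a slice
    }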
Nov 19 2006
next sibling parent reply "John Reimer" <terminal.node gmail.com> writes:
On Sun, 19 Nov 2006 14:59:19 -0800, BCS <BCS pathilink.com> wrote:

 Mars wrote:
 http://www.osnews.com/comment.php?news_id=16526

 One issue brought up is that of D "requiring" the use of a GC. What would it take to prove that wrong by making a full blown standard lib that doesn't use a GC, and in fact doesn't have a GC?

 It would be painful to work with but no more so than in C++. OTOH with scope() and such, it might be easy.

 Anyway, just a thought.

Honestly, first we have to settle on a standard library, which I'm not so sure Phobos is at this point (now that would be a valid criticism from the outside). :P

As for a non-gc based library, it might be a useful experiment; but otherwise, I don't see the motivation for that (other than, I guess, for those very special cases); that said, I wouldn't mind seeing a minimalist one implemented, maybe based off of Ares?

Note, however, that C++ users, many who have grown dependent on manual memory management, are looking for a reason to fault D. I've actually heard cases where C++ users lambast GC based languages: use of a GC apparently creates "bad programming practices" -- imagine the laziness of not cleaning up after yourself!

People are locked in a whole way of thinking, and you can't really fight it unless they're willing to open up their perspective.

-JJR
Nov 19 2006
parent reply Steve Horne <stephenwantshornenospam100 aol.com> writes:
On Sun, 19 Nov 2006 15:28:33 -0800, "John Reimer"
<terminal.node gmail.com> wrote:

On Sun, 19 Nov 2006 14:59:19 -0800, BCS <BCS pathilink.com> wrote:

 Mars wrote:
 http://www.osnews.com/comment.php?news_id=16526

One issue brought up is that of D "requiring" the use of a GC. What would it take to prove that wrong by making a full blown standard lib that doesn't use a GC, and in fact doesn't have a GC?


I don't know. Personally, I am all in favour of having the choice - but remember, it's not just a matter of creating that library. Maintaining two standard libraries would mean a lot of ongoing headaches.
Note, however, that C++ users, many who have grown dependent on manual  
memory management, are looking for a reason to fault D.  I've actually  
heard cases where C++ users lambast GC based languages: use of a GC  
apparently creates "bad programming practices" -- imagine the laziness of  
not cleaning up after yourself!

I agree - but I also strongly disagree.

The problem is that memory management isn't just about allocating and freeing memory. It is closely coupled with newing and deleting, with constructors and destructors, and therefore with wider resource management issues. Two problems can arise...

1.  Garbage collection isn't immediate. Resources can stay locked long after they should have been freed, because the garbage collector hasn't got around to destroying those objects yet. This can be a problem if you are trying to acquire further locks or whatever.

2.  Reference cycles. Take Java. It can garbage collect when there are reference cycles, sure, but it cannot know what order to destroy those objects in. Calling destructors in the wrong order could cause big problems. Solution - don't call the destructors (sorry, finalisers) at all. Just free the memory, since it doesn't matter what order you do that in.

    So that's it - Java doesn't guarantee to call finalisers. I don't know for sure that this is why, but it is the only good reason I can think of.

    If you think reference cycles are a theoretical rather than real problem, well, I'm afraid many practical data structures have them - even the humble doubly-linked list.

Either of these problems is sufficient on its own to mean that the garbage collector cannot be relied upon. As the programmer, you have to take responsibility for ensuring that the cleaning up is done. And that, according to black-and-white reasoning, defeats the whole point of garbage collection.

But then these problems, even counted together, only create issues for a minority of objects in most code.

Awkward persons might observe that the rate of problems tends to increase in lower level code, and that this is why the applications-oriented language Java has more problems than very-high-level languages that also do GC, such as Python. And those same awkward persons might then point out that D explicitly targets systems level code, aiming its sights at a somewhat lower level than Java.

But let's put that point to one side for a bit.

Someone intelligent enough to consider shades of grey might still argue that it is a good idea to develop good habits early, and to apply them consistently. It saves on having these problems arise as surprise bugs, perhaps as a result of third party libraries that you don't have source for and cannot fix.

I have a lot of sympathy with this point of view, and don't think it can be lightly dismissed. It isn't just a matter of taking sides and rejecting the other side no matter what. It is a valid view of the issue.

The trouble is that the non-GC way is also prone to surprise bugs. So, as far as I can see, neither approach is a clear and absolute winner. I know it can seem as if GC is the 'modern' way and that non-GC is a dinosaur, but good and bad isn't decided by fashions or bandwagons. Both GC and non-GC have problems.

Now to consider that point I put to one side. D is explicitly aimed at systems level code. Well, that's true, but in the context of GC we have a rather skewed sense of high-level vs low-level - low level would tend to mean data structures and resource management rather than bit twiddling and hardware access. D systems level programming is probably roughly as prone to GC problems as Java applications level programming.

In any case, D is a systems level language in the sense of down-to-and-including systems level. Most real world code has a mix of high-level and low-level. So in a single app, there can be a whole bunch of high-level code where GC is a near perfect approach, and a whole bunch of low-level code in which GC cannot be relied upon and is probably just an unwanted complication.

And when there are two equally valid approaches, each of which has its own advantages and disadvantages, and both of which could be valuable in the same application, which should the serious programmer demand? Particularly the systems-level programmer?

Right - Both!

But does it make sense to demand a separate non-GC standard library? That seems to suggest a world where an application is either all GC or all non-GC.

GC seems pointless if it doesn't happen by default, so the approach of opting out for specific classes when necessary seems, to me, to be as close to ideal as you can get. And even then, there's the proviso that you should stick to the default approach as much as possible and make damned sure that when you opt out, it's clear what you are doing and why. It's not a GC-is-superior thing, just a consistency thing - minimising confusion and complexity.

In that case, with GC as the default and with opting out being reserved for special cases, you're probably going to carry on using the GC standard library anyway.

As for embedded platforms, if malloc and free would work, so would garbage collection. If not, you probably can't use any conventional standard library (and certainly not data structure library code), and should be using a specialised embedded development library (probably tailored for the specific platform).

In other words, the only benefit I can see to having a separate non-GC library is marketing. And it seems that a better approach is to educate people about the benefits of dropping the old either-or thinking and choosing both.

AFAIK, there are two competitors in this having-both approach, and they are both C++: Managed C++, and C++ with a GC library. And they both get it wrong IMO - you have to opt in to GC, not opt out. If GC isn't the default, you get new classes of bugs - the 'oh - I thought that was GC, but apparently not' and 'damn - I forgot to specify GC for this' bugs.

So there we are, D is not only already perfect, it is the only language available that has achieved this amazing feat ;-)

--
Remove 'wants' and 'nospam' from e-mail.
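To make the per-class opt-out above concrete, here is a minimal, hypothetical sketch using the class allocator and deallocator overloads the D of this era provided (the feature was deprecated much later); the class name is illustrative:

    import std.c.stdlib;

    // instances live on the malloc heap instead of the GC heap,
    // while the rest of the program keeps using the collector
    class RawNode
    {
        new(size_t sz)
        {
            void* p = malloc(sz);
            if (p is null)
                throw new Exception("out of memory");
            return p;
        }

        delete(void* p)
        {
            if (p !is null)
                free(p);
        }
    }

A RawNode must then be deleted explicitly; the collector never frees or finalises it.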
Nov 21 2006
next sibling parent reply Kyle Furlong <kylefurlong gmail.com> writes:
Steve Horne wrote:
 [... Steve Horne's post, quoted in full, snipped ...]

Wow that was long, but good, make it an article, Walter?
Nov 21 2006
parent reply "John Reimer" <terminal.node gmail.com> writes:
On Tue, 21 Nov 2006 21:51:35 -0800, Kyle Furlong <kylefurlong gmail.com> wrote:

 Steve Horne wrote:

 [... Steve Horne's post, quoted in full, snipped ...]

Wow that was long, but good, make it an article, Walter?

It was too long, but with good points. If it were pared down, it would read easier and the points might hit home even harder.

Concerning D and GC:

The problem is that most D apologists /DO/ advertise D as having the best of both worlds when it comes to memory management, but C++ fans are bound and determined to see D as practically a GC-only language: the GC is one of the first points they always bring up. They keep seeing it in the same light as Java and other such languages. It's unfair and short-sighted, but a typical response.

If you really take an honest look at OSNEWS posts and others, you will realize that some of these people are literally annoyed at D and D promoters for a reason deeper and unrelated to the language. You can't argue with that. Some good considerations, like Steve's, just don't hit home with those boys.

-JJR
Nov 21 2006
next sibling parent reply Kyle Furlong <kylefurlong gmail.com> writes:
John Reimer wrote:
 On Tue, 21 Nov 2006 21:51:35 -0800, Kyle Furlong <kylefurlong gmail.com> 
 wrote:
 
 Steve Horne wrote:

 [... Steve Horne's post, quoted in full, snipped ...]

Wow that was long, but good, make it an article, Walter?

 It was too long, but with good points. If it were pared down, it would read easier and the points might hit home even harder.

 Concerning D and GC:

 The problem is that most D apologists /DO/ advertise D as having the best of both worlds when it comes to memory management, but C++ fans are bound and determined to see D as practically a GC-only language: the GC is one of the first points they always bring up. They keep seeing it in the same light as Java and other such languages. It's unfair and short-sighted, but a typical response.

 If you really take an honest look at OSNEWS posts and others, you will realize that some of these people are literally annoyed at D and D promoters for a reason deeper and unrelated to the language. You can't argue with that. Some good considerations, like Steve's, just don't hit home with those boys.

 -JJR

I seriously think there is a sizable group of people who use C++ at their workplace, and for their hobbies, and maybe have written a convoluted something or other for Boost. These people have invested a huge amount of time and effort to carve out something usable from the jungles that are the C++ lands.

These people fight D because they see how it will simply negate that time investment by making it irrelevant. When it comes down to it, someone who actually understands C++ in depth and can be productive in it is a very valuable person. If D becomes de facto, that skill set becomes much less valuable.

That's not to say that someone who is at that level of understanding in C++ can easily adapt to D, but the psychology of it is that they have spent so much time on actually getting C++ to work for them that it's an abhorrent idea for them to leave that behind.

Any reason they can grasp on to, they will. Any defect they can find, they'll point it out. Hopefully, over time, the smart ones will realize the dead end and move on to D.
Nov 21 2006
next sibling parent "John Reimer" <terminal.node gmail.com> writes:
On Tue, 21 Nov 2006 23:19:31 -0800, Kyle Furlong <kylefurlong gmail.com>  
wrote:

 I seriously think there is a sizable group of people who use C++ at  
 their workplace, and for their hobbies, and maybe have written a  
 convoluted something or other for Boost. These people have invested a  
 huge amount of time and effort to carve out something usable from the
 jungles that are the C++ lands.

 These people fight D because they see how it will simply negate that  
 time investment by making it irrelevant. When it comes down to it,  
 someone who actually understands C++ in depth and can be productive in  
 it is a very valuable person. If D becomes de facto, that skill set
 becomes much less valuable.

 That's not to say that someone who is at that level of understanding in
 C++ can easily adapt to D, but the psychology of it is that they have
 spent so much time on actually getting C++ to work for them that it's
 an abhorrent idea for them to leave that behind.

 Any reason they can grasp on to, they will. Any defect they can find,  
 they'll point it out. Hopefully, over time, the smart ones will realize  
 the dead end and move on to D.

You likely hit the nail on the head, Kyle. :) -JJR
Nov 21 2006
prev sibling next sibling parent reply Don Clugston <dac nospam.com.au> writes:
Kyle Furlong wrote:
 [... earlier posts in the thread snipped ...]

 I seriously think there is a sizable group of people who use C++ at their workplace, and for their hobbies, and maybe have written a convoluted something or other for Boost. These people have invested a huge amount of time and effort to carve out something usable from the jungles that are the C++ lands.

 These people fight D because they see how it will simply negate that time investment by making it irrelevant. When it comes down to it, someone who actually understands C++ in depth and can be productive in it is a very valuable person. If D becomes de facto, that skill set becomes much less valuable.

 That's not to say that someone who is at that level of understanding in C++ can easily adapt to D, but the psychology of it is that they have spent so much time on actually getting C++ to work for them that it's an abhorrent idea for them to leave that behind.

 Any reason they can grasp on to, they will. Any defect they can find, they'll point it out. Hopefully, over time, the smart ones will realize the dead end and move on to D.

Actually, I think that anyone who's put a lot of effort into Boost-style template code will have a huge list of C++ quirks that they wish would be fixed.

My first impression of D was "there's loads of cool stuff in here that I wish was in C++, but the templates aren't good enough, because there's no IFTI". My second impression was that there were enough interesting features (like mixins and static if) to give D a go despite the absence of IFTI.

Well, now that we have IFTI and tuples(!), I seriously don't think any template aficionado is likely to evaluate D negatively in that regard. Once the word gets around, I think there'll be a lot of defections.
Nov 22 2006
parent reply Bill Baxter <wbaxter gmail.com> writes:
Don Clugston wrote:

 Well, now that we have IFTI and tuples(!) I seriously don't think any 
 template aficionado is likely to evaluate D negatively in that regard.
 Once the word gets around, I think there'll be a lot of defections.

Metaprogramming in C++ is OOP in C all over again. Sure you can do it, but... they definitely didn't have that in mind when they designed the language, so it ain't gonna be pretty. --bb
Nov 22 2006
parent reply Steve Horne <stephenwantshornenospam100 aol.com> writes:
On Thu, 23 Nov 2006 01:52:01 +0900, Bill Baxter <wbaxter gmail.com>
wrote:

Don Clugston wrote:

 Well, now that we have IFTI and tuples(!) I seriously don't think any 
 template aficionado is likely to evaluate D negatively in that regard.
 Once the word gets around, I think there'll be a lot of defections.

Metaprogramming in C++ is OOP in C all over again. Sure you can do it, but... they definitely didn't have that in mind when they designed the language, so it ain't gonna be pretty.

I saw someone asking about the VC++ __if_exists and something like static if in a GCC discussion once, about whether GCC would support it. The reply was that template metaprogramming creates unmaintainable messes and shouldn't be encouraged.

And I thought to myself - but metaprogramming isn't going away, presumably because it is needed. And most of the reason for the mess is that conditional parts need to be handled using specialisation rather than simple conditionals. So why not make life simpler and more maintainable?

I was going to say so, but that would have meant registering and blah blah, and I put it off to the later that never happens. Which is a shame. It needed saying.

--
Remove 'wants' and 'nospam' from e-mail.
Nov 24 2006
parent Don Clugston <dac nospam.com.au> writes:
Steve Horne wrote:
 On Thu, 23 Nov 2006 01:52:01 +0900, Bill Baxter <wbaxter gmail.com>
 wrote:
 
 Don Clugston wrote:

 Well, now that we have IFTI and tuples(!) I seriously don't think any 
 template aficionado is likely to evaluate D negatively in that regard.
 Once the word gets around, I think there'll be a lot of defections.

 Metaprogramming in C++ is OOP in C all over again. Sure you can do it, but... they definitely didn't have that in mind when they designed the language, so it ain't gonna be pretty.

I saw someone asking about the VC++ __if_exists and something like static if in a GCC discussion once, about whether GCC would support it. The reply was that template metaprogramming creates unmaintainable messes and shouldn't be encouraged. And I thought to myself - but metaprogramming isn't going away, presumably because it is needed. And most of the reason for the mess is that conditional parts need to be handled using specialisation rather than simple conditionals. So why not make life simpler and more maintainable?

Exactly. In C++ metaprogramming, the only control structure you have is:

    (x == CONST_VALUE) ? func1() : func2()

where x must be an integer. No wonder C++ metaprogramming code is so disgusting. I've been amazed at how D metaprogramming on strings can sometimes be shorter than the equivalent C++ runtime code!
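As a hedged illustration (a textbook compile-time factorial, not an example from the thread), D's static if expresses the branch directly, where the C++ of the time needed a separate template specialisation for the base case:

    // compile-time factorial - the base case is a plain conditional,
    // not a specialisation
    template Fact(int n)
    {
        static if (n <= 1)
            const int Fact = 1;
        else
            const int Fact = n * Fact!(n - 1);
    }

    static assert(Fact!(5) == 120);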
 

Nov 25 2006
prev sibling parent reply Georg Wrede <georg.wrede nospam.org> writes:
Kyle Furlong wrote:
 I seriously think there is a sizable group of people who use C++ at 
 their workplace, and for their hobbies, and maybe have written a 
 convoluted something or other for Boost. These people have invested a 
 huge amount of time and effort to carve out something usable from the
 jungles that are the C++ lands.
 
 These people fight D because they see how it will simply negate that 
 time investment by making it irrelevant.

How true. And because it's on an emotional level, most may not even be aware of it, which makes it that much harder to handle, for us and for themselves.
 Any reason they can grasp on to, they will. Any defect they can find, 
 they'll point it out. Hopefully, over time, the smart ones will realize 
 the dead end and move on to D.

Here, as in so many other situations, we have to come out as winners (or at least not losers) from the arguments. These people will not be converted, but gradually many in the audience will convert. That's simply the dynamics of Public Debate, and it's been like that ever since talking was invented in the first place. So, fighting with diehards is something that simply belongs to the current phase in D's history. And it's not about winning them over, it's about the bystanders, the audience, the silent masses.
Nov 22 2006
parent Walter Bright <newshound digitalmars.com> writes:
Georg Wrede wrote:
 So, fighting with diehards is something that simply belongs to the 
 current phase in D's history. And it's not about winning them over, it's 
 about the bystanders, the audience, the silent masses.

That's right. I don't expect any of the career C++ people to ever use D. My purpose in engaging them is to:

1) correct misinformation
2) provide answers to important questions about D
3) discover where glaring weaknesses in D are so they can be dealt with

so that people who are evaluating which language to use will get the information they need.
Nov 22 2006
prev sibling next sibling parent reply Walter Bright <newshound digitalmars.com> writes:
John Reimer wrote:
 Wow that was long, but good, make it an article, Walter?


I think it is good material for an article.
 Concerning D and GC:
 
 The problem is that most D apologists /DO/ advertise D as having the 
 best of both worlds when it comes to memory management, but C++ fans are 
 bound and determined to see D as practically a GC-only language: the GC 
 is one of the first points they always bring up.  They keep seeing it in 
 the same light as Java and other such languages.  It's unfair and 
 short-sighted, but a typical response.

A common misconception that people have against D is that since D has core arrays, strings, and complex numbers, it is therefore not possible to create user defined types in the library. They'll say things like "I prefer to use C++ because I can create my own types!"

I patiently explain that this is not so, that there is nothing stopping one from creating their own user defined D types. And then they come back a week, a month later and repeat the same misinformation. Sigh.
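For instance, a minimal sketch of such a user defined type (illustrative only, using the opAdd operator name of this era; later D spells it opBinary):

    // a user-defined fixed-point number - nothing in D prevents this
    struct Fixed
    {
        int raw;   // value scaled by 256

        static Fixed from(int whole)
        {
            Fixed f;
            f.raw = whole << 8;
            return f;
        }

        Fixed opAdd(Fixed rhs)   // enables a + b
        {
            Fixed r;
            r.raw = raw + rhs.raw;
            return r;
        }
    }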
 If you really take an honest look at OSNEWS posts and others, you will
 realize that some of these people are literally annoyed at D and D
 promoters for a reason deeper and unrelated to the language.  You can't
 argue with that.  Some good considerations, like Steve's, just don't
 hit home with those boys.

That's to be expected. Many people have bet their careers on C++ being the greatest ever, and nothing can change their mind. D is a personal affront to them. It doesn't really matter, though, because if you attend a C++ conference, take a look around. They're old (my age <g>). Someone once did a survey of the ages of D adopters, and found out they are dominated by much younger folks. And that, my friends, is why D is the future.
Nov 22 2006
next sibling parent Kyle Furlong <kylefurlong gmail.com> writes:
Walter Bright wrote:
 John Reimer wrote:
 Wow that was long, but good, make it an article, Walter?


I think it is good material for an article.
 [... snip ...]

 And that, my friends, is why D is the future.

Because I'm the future? :D (21 here)
Nov 22 2006
prev sibling next sibling parent reply Sean Kelly <sean f4.ca> writes:
Walter Bright wrote:
 John Reimer wrote:
 Wow that was long, but good, make it an article, Walter?


I think it is good material for an article.
 Concerning D and GC:

 The problem is that most D apologists /DO/ advertise D as having the 
 best of both worlds when it comes to memory management, but C++ fans 
 are bound and determined to see D as practically a GC-only language: 
 the GC is one of the first points they always bring up.  They keep 
 seeing it in the same light as Java and other such languages.  It's 
 unfair and short-sighted, but a typical response.

 A common misconception that people have against D is that since D has core arrays, strings, and complex numbers, it is therefore not possible to create user defined types in the library. They'll say things like "I prefer to use C++ because I can create my own types!"

 I patiently explain that this is not so, that there is nothing stopping one from creating their own user defined D types. And then they come back a week, a month later and repeat the same misinformation. Sigh.

Shadows of c.l.c++.m? :-)

I think C++ does a fairly good job of allowing users to define pseudo-types, but am not convinced it is worth the consequences. Implementing a type correctly in C++ is somewhat complicated, what with operators, various ctors, dtors, etc. And this complexity spills over into how non-type objects must be defined, increasing the chance of bugs from a missed ctor or dtor. Perhaps this sort of thing should be done in a meta-language rather than user code.

As for D, I don't see the slippery slope that some C++ folks seem to. Floating point reals are a part of the language, so why not complex? It seems a natural fit. Same with dynamic arrays and maps. With GC, there's no reason not to have such features in a language. As you've said, if you don't want to use them there's no stopping you from creating a library version.
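For reference, the built-ins mentioned here looked roughly like this in the D of the time (cdouble and the i literal suffix were deprecated in later D in favour of a library type):

    cdouble z = 3.0 + 4.0i;    // built-in complex number
    double m2 = z.re * z.re + z.im * z.im;

    int[] arr;                 // built-in dynamic array
    arr ~= 42;                 // append

    int[char[]] ages;          // built-in associative array (map)
    ages["walter"] = 48;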
  > If you really take an honest look at OSNEWS posts and others, you will
  > realize that some of these people are literally annoyed at D and D
  > promoters for a reason deeper and unrelated to the language.  You can't
  > argue with that.  Some good considerations, like Steve's, just don't
  > hit home with those boys.
 
 That's to be expected. Many people have bet their careers on C++ being 
 the greatest ever, and nothing can change their mind. D is a personal 
 affront to them. It doesn't really matter, though, because if you attend 
 a C++ conference, take a look around. They're old (my age <g>). Someone 
 once did a survey of the ages of D adopters, and found out they are 
 dominated by much younger folks.

I don't think conference attendance is completely representative of typical C++ users. Conferences are viewed as "training", so attendees will typically be people who have been in the workplace for a while and are looking to improve their skills or learn new things. Conferences are also expensive, so young employees are less likely to get corporate funding to attend.

That said, I do agree that C++ is an "older" language in terms of its users, and that D is much "younger" in this respect. It makes perfect sense. C++ has been around for a long time and D has not, and few professional programmers seem inclined to learn new things. If anything, they're more likely to refine their existing skill set and stick to their "specialty."

One thing that struck me about SDWest is that few of the C++ conference attendees were what I'd consider "experts." And the few that were, were all c.l.c++.m regulars and many were even involved in organizing things somehow. I did meet a few young programmers--one was an electrical engineer with far more experience in C than C++ and wanted to learn more about the language, and the others were Bay Area locals from one particular company, somewhat more competent with C++, but looking to learn a bit as well. I suppose this is to be expected with a conference, but for the C++ series I expected attendance to be a bit different.

Sean
Nov 22 2006
next sibling parent Walter Bright <newshound digitalmars.com> writes:
Sean Kelly wrote:
 I don't think conference attendance is completely representative of 
 typical C++ users.

You're right, it isn't, and it's surely unscientific to use them as a representative sample. On the other hand, I've been attending such conferences for 20 years, and the crowd seems to have gotten older with me.
Nov 22 2006
prev sibling parent reply Steve Horne <stephenwantshornenospam100 aol.com> writes:
On Wed, 22 Nov 2006 09:53:41 -0800, Sean Kelly <sean f4.ca> wrote:

Walter Bright wrote:

 That's to be expected. Many people have bet their careers on C++ being 
 the greatest ever, and nothing can change their mind.


That said, I do agree that C++ is an "older" 
language in terms of its users, and that D is much "younger" in this 
respect.  It makes perfect sense.  C++ has been around for a long time 
and D has not, and few professional programmers seem inclined to learn 
new things.  If anything, they're more likely to refine their existing 
skill set and stick to their "specialty."

Oh no. C++ is the new COBOL. AAARRRGGGHHH!!! -- Remove 'wants' and 'nospam' from e-mail.
Nov 24 2006
parent reply "John Reimer" <terminal.node gmail.com> writes:
On Fri, 24 Nov 2006 19:29:07 -0800, Steve Horne  
<stephenwantshornenospam100 aol.com> wrote:

 On Wed, 22 Nov 2006 09:53:41 -0800, Sean Kelly <sean f4.ca> wrote:

 Walter Bright wrote:

 That's to be expected. Many people have bet their careers on C++ being
 the greatest ever, and nothing can change their mind.


 That said, I do agree that C++ is an "older"
 language in terms of its users, and that D is much "younger" in this
 respect.  It makes perfect sense.  C++ has been around for a long time
 and D has not, and few professional programmers seem inclined to learn
 new things.  If anything, they're more likely to refine their existing
 skill set and stick to their "specialty."

Oh no. C++ is the new COBOL. AAARRRGGGHHH!!!

He he.. It's inevitable... the languages start to date developers. The same thing works for operating systems. When I mention I used some of the very first Slackware linux releases in the 1990s (maybe around 1993 or 94) because I was desperate to move away from the DOS/Win16 platform... well, even a minor thing like that starts dating me among the younger generation of linux gurus (a linux guru, I am not... still after all these years). At 31, I'm an in-betweener... not that old... but old enough that computer history is leaving its mark in my memories. :) -JJR
Nov 24 2006
parent reply Georg Wrede <georg.wrede nospam.org> writes:
John Reimer wrote:
 On Fri, 24 Nov 2006 19:29:07 -0800, Steve Horne  
 <stephenwantshornenospam100 aol.com> wrote:
 
 On Wed, 22 Nov 2006 09:53:41 -0800, Sean Kelly <sean f4.ca> wrote:

 Walter Bright wrote:

 That's to be expected. Many people have bet their careers on C++ being
 the greatest ever, and nothing can change their mind.


 That said, I do agree that C++ is an "older"
 language in terms of its users, and that D is much "younger" in this
 respect.  It makes perfect sense.  C++ has been around for a long time
 and D has not, and few professional programmers seem inclined to learn
 new things.  If anything, they're more likely to refine their existing
 skill set and stick to their "specialty."

Oh no. C++ is the new COBOL. AAARRRGGGHHH!!!

He he.. It's inevitable... the languages start to date developers. The same thing works for operating systems. When I mention I used some of the very first Slackware linux releases in the 1990s (maybe around 1993 or 94) because I was desperate to move away from the DOS/Win16 platform... well, even a minor thing like that starts dating me among the younger generation of linux gurus (a linux guru, I am not... still after all these years). At 31, I'm an in-betweener... not that old... but old enough that computer history is leaving its mark in my memories. :)

Uh-oh. Reading that made me feel ancient. I wrote my first programs in FORTRAN, back in the 1960's. And I still have three different computers that run CP/M (what we had before Microsoft). And yes, I still play with them occasionally. Last week I spent two hours reading the CP/M ASM listing of the Osborne. And I regularly have my HP 28S calculator near when I do programming. (Made in 1986.) It mixes a version of Forth and RPN, and I feel it's hugely more usable than even today's calculators. Guess I'm simply a prehistoric relic.
Nov 27 2006
parent Steve Horne <stephenwantshornenospam100 aol.com> writes:
On Tue, 28 Nov 2006 02:28:21 +0200, Georg Wrede
<georg.wrede nospam.org> wrote:

Uh-oh.

Reading that made me feel ancient. I wrote my first programs in FORTRAN, 
back in the 1960's. And I still have three different computers that run 
CP/M (what we had before Microsoft). And yes, I still play with them 
occasionally. Last week I spent two hours reading the CP/M ASM listing 
of the Osborne.

And I regularly have my HP 28S calculator near when I do programming. 
(Made in 1986.) It mixes a version of Forth and RPN, and I feel it's 
hugely more usable than even today's calculators.

Guess I'm simply a prehistoric relic.

Well, I wasn't born until 71, and I'm basically a child of the Basic/Pascal/C and 8-bit micros generation, but I'm feeling like a prehistoric relic too. I'll tell you what, I'll be the fossil from the Cretaceous - you can be Jurassic ;-) -- Remove 'wants' and 'nospam' from e-mail.
Nov 27 2006
prev sibling parent reply Mike Capp <mike.capp gmail.com> writes:
Walter Bright (and assorted quoted people) wrote:
 [John Reimer] most D apologists /DO/ advertise D as having the
 best of both worlds when it comes to memory management, but C++ fans are
 bound and determined to see D as practically a GC-only language: the GC
 is one of the first points they always bring up. [...] It's unfair and
 short-sighted, but a typical response.


It's not that unfair. D has good support for RAII now - possibly better than C++'s on balance, though with different strengths and weaknesses. But GC-less programming (as opposed to GC+manual) is ignored - no compiler checking, no standard library beyond C's unless you're willing to vet the source with a fine-toothed comb. The post John is applauding here states this assumption explicitly: that there's no need for or value in a GC-less library. What scares the bejesus out of me is a future combination of: 1) a copying collector, 2) multiple threads and 3) raw pointers. We talked about this about a year ago, and didn't come up with an obvious solution. (Though I may have missed one in between drive-by lurkings.) Manual pinning is waaay too easy to get wrong, IMHO.
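
For reference, the scope-guard flavour of D's RAII credited above looks like this - a minimal sketch over the C stdio bindings:

import std.c.stdio;

void writeLog(char* msg)
{
    FILE* f = fopen("log.txt", "a");
    if (f is null)
        return;
    scope(exit) fclose(f);  // runs on every exit path, even a throw

    fputs(msg, f);
}

The cleanup is deterministic at scope exit, with no GC involvement.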
  > some of these people are literally annoyed at D and D
  > promoters

Not all of those people are diehards, though. I like D, and I sometimes get annoyed by D promoters. There does seem to be an underlying attitude among some of the younger and more enthusiastic posters here that D is essentially perfect for everything, that anyone expressing reservations is automatically a closed-minded fogey, and that no amount of experience with other languages is relevant because D is a whole new paradigm.
 That's to be expected. Many people have bet their careers on C++ being
 the greatest ever, and nothing can change their mind.

Some may see it that way, but it's a bit of a non sequitur. Even if no new C++ projects were launched from now until the end of time, there's more than enough legacy code out there to keep a competent C++ programmer (un)comfortably employed maintaining it for the rest of their career, if that's really what they want to be working on. It's COBOL all over again. If anything, their knowledge becomes _more_ valuable if people coming onto the job market are learning D instead; basic supply and demand.
 It doesn't really matter, though, because if you attend
 a C++ conference, take a look around. They're old (my age <g>). Someone
 once did a survey of the ages of D adopters, and found out they are
 dominated by much younger folks.

Well, yes. Every new language is dominated by younger folks, whether it's eventually successful or not. Something about the combination of copious free time and unscarred optimism...
Nov 22 2006
next sibling parent reply Kyle Furlong <kylefurlong gmail.com> writes:
Mike Capp wrote:
 Walter Bright (and assorted quoted people) wrote:
 [John Reimer] most D apologists /DO/ advertise D as having the
 best of both worlds when it comes to memory management, but C++ fans are
 bound and determined to see D as practically a GC-only language: the GC
 is one of the first points they always bring up. [...] It's unfair and
 short-sighted, but a typical response.


It's not that unfair. D has good support for RAII now - possibly better than C++'s on balance, though with different strengths and weaknesses. But GC-less programming (as opposed to GC+manual) is ignored - no compiler checking, no standard library beyond C's unless you're willing to vet the source with a fine-toothed comb. The post John is applauding here states this assumption explicitly: that there's no need for or value in a GC-less library. What scares the bejesus out of me is a future combination of: 1) a copying collector, 2) multiple threads and 3) raw pointers. We talked about this about a year ago, and didn't come up with an obvious solution. (Though I may have missed one in between drive-by lurkings.) Manual pinning is waaay too easy to get wrong, IMHO.
  > some of these people are literally annoyed at D and D
  > promoters

Not all of those people are diehards, though. I like D, and I sometimes get annoyed by D promoters. There does seem to be an underlying attitude among some of the younger and more enthusiastic posters here that D is essentially perfect for everything, that anyone expressing reservations is automatically a closed-minded fogey, and that no amount of experience with other languages is relevant because D is a whole new paradigm.

If this is talking about my first post in this thread, that's not what I said. I merely said that trying to apply the conventional wisdom of C++ to D is misguided. Is that incorrect?
 That's to be expected. Many people have bet their careers on C++ being
 the greatest ever, and nothing can change their mind.

Some may see it that way, but it's a bit of a non sequitur. Even if no new C++ projects were launched from now until the end of time, there's more than enough legacy code out there to keep a competent C++ programmer (un)comfortably employed maintaining it for the rest of their career, if that's really what they want to be working on. It's COBOL all over again. If anything, their knowledge becomes _more_ valuable if people coming onto the job market are learning D instead; basic supply and demand.
 It doesn't really matter, though, because if you attend
 a C++ conference, take a look around. They're old (my age <g>). Someone
 once did a survey of the ages of D adopters, and found out they are
 dominated by much younger folks.

Well, yes. Every new language is dominated by younger folks, whether it's eventually successful or not. Something about the combination of copious free time and unscarred optimism...

Nov 22 2006
parent Mike Capp <mike.capp gmail.com> writes:
Kyle Furlong wrote:

 If this is talking about my first post in this

I wasn't ranting at you in particular, no. :-) It's more of a general vibe around here sometimes.
 thread, thats not what I said. I merely said that
 trying to apply the conventional wisdom of C++
 to D is misguided.
 Is that incorrect?

I can't remember the context of your original comment (and the web interface to this NG doesn't do threading), so I'm not sure what "conventional wisdom of C++" you were talking about. If it was some specific rote-learned rule like "every new must have a delete" then you're right, that's clearly daft. If it was a general statement then I disagree. They're two different languages, of course, but aimed at similar niches and subject to similar constraints. The design of D was in large part driven by "conventional C++ wisdom", both in terms of better built-in support for the useful idioms that have evolved in C++ and of avoiding widely accepted what-the-hell-were-they-thinking howlers. (I don't know many if any C++ "fans", in the sense that D or Ruby has fans; it's usually more of a pragmatic "better the devil you know" attitude.) Also, there's not yet any experience using D in big (mloc) software systems, or maintaining it over many many years, or guaranteeing ridiculous levels of uptime. Those raise issues - build scalability, dependency management, source and binary compatibility, heap fragmentation - that just don't come up in small exploratory projects. I fully agree with you that such experience doesn't port across directly, but as a source of flags for problems that *might* come up it's better than nothing. cheers Mike
Nov 22 2006
prev sibling next sibling parent reply "John Reimer" <terminal.node gmail.com> writes:
On Wed, 22 Nov 2006 11:10:36 -0800, Mike Capp <mike.capp gmail.com> wrote:

 Walter Bright (and assorted quoted people) wrote:
 [John Reimer] most D apologists /DO/ advertise D as having the
 best of both worlds when it comes to memory management, but C++ fans are
 bound and determined to see D as practically a GC-only language: the GC
 is one of the first points they always bring up. [...] It's unfair and
 short-sighted, but a typical response.


It's not that unfair. D has good support for RAII now - possibly better than C++'s

Huh? I'm not following. I said it's unfair that C++ users frequently see D as GC-only. Your response seems to indicate that this is not unfair, but I can't determine your line of reasoning.
 on balance, though with different strengths and weaknesses. But GC-less
 programming (as opposed to GC+manual) is ignored - no compiler
 checking, no standard library beyond C's unless you're willing to vet
 the source with a fine-toothed comb. The post John is applauding here
 states this assumption explicitly: that there's no need for or value in
 a GC-less library.

I'm sorry, Mike. What post are you saying I'm applauding? I can't see how relating my applauding to the conclusion in that sentence makes any sense. Is there something implied or did you mean to point something out? Confused, -JJR
Nov 22 2006
parent reply Mike Capp <mike.capp gmail.com> writes:
John Reimer wrote:

 Huh?  I'm not following.  I said it's unfair that
 C++ users frequently see D as GC-only.  Your
 response seems to indicate that this is not unfair,
 but I can't determine your line of reasoning.

I may be being naive. There's a difference between "a D program CAN ONLY allocate memory on the GC heap" and "a D program WILL allocate memory on the GC heap". The first statement is plain wrong, and once you point out that malloc is still available there's not much to discuss, so I can't believe that this is what C++ users have a problem with. The second statement is technically wrong, but only if you carefully avoid certain language features and don't use the standard library. Hence, if you don't want GC (because of concerns about pausing, or working-set footprint, or whatever) then you're not using the language as it was intended to be used. GC and GC-less approaches are not on an equal footing.
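
To make the malloc point concrete, a minimal sketch of allocating entirely outside the GC heap in D:

import std.c.stdlib;

void main()
{
    int* buf = cast(int*) malloc(100 * int.sizeof);
    if (buf is null)
        return;
    scope(exit) free(buf);  // manual, deterministic release

    buf[0] = 42;
}

The GC never sees that memory - which also means it won't scan it for pointers, so storing the only reference to a GC object there is a bug.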
 I'm sorry, Mike.  What post are you saying I'm
 applauding?  I can't see how relating my applauding
 to the conclusion in that sentence makes any
 sense.  Is there something implied or did you mean
 to point something out?

Steve Horne's post - http://www.digitalmars.com/pnews/read.php?server=news.digitalmars.com&group=digitalmars.D&artnum=44644 - which I thought you were agreeing with. If I misread or got muddled about attribution, apologies. My point was just this: regardless of whether GC-less D is a reasonable thing to want, if you take the attitude that it's not worth supporting then it's hard to see why a C++ users' perception of D as "GC whether you want it or not" is unfair.
Nov 22 2006
parent reply "John Reimer" <terminal.node gmail.com> writes:
On Wed, 22 Nov 2006 14:17:26 -0800, Mike Capp <mike.capp gmail.com> wrote:

 John Reimer wrote:

 Huh?  I'm not following.  I said it's unfair that
 C++ users frequently see D as GC-only.  Your
 response seems to indicate that this is not unfair,
 but I can't determine your line of reasoning.

 I may be being naive. There's a difference between "a D program CAN
 ONLY allocate memory on the GC heap" and "a D program WILL allocate
 memory on the GC heap".

 The first statement is plain wrong, and once you point out that malloc
 is still available there's not much to discuss, so I can't believe that
 this is what C++ users have a problem with.

 The second statement is technically wrong, but only if you carefully
 avoid certain language features and don't use the standard library.

Avoid new/delete, dynamic arrays, array slice operations, and array concatenation. I think that's it... Further, new/delete can be reimplemented to provide custom allocators (a sketch follows below). You can use stack based variables if necessary (soon with scope attribute, I hope). For those special cases, knowing how to do this would be important anyway: careful programming knowledge and practice is a requirement regardless. It is clearly possible in D. You do not need to use the GC, if you feel the situation warrants such avoidance.

What I find strange is that some C++ users, who do not use D, make complaints about D in this area, then fall into a debate about C++ versus D memory management (when you mention the possibility of malloc or otherwise); and no, the argument is not over: when you prove the point that D is flexible here, they then digress into a discussion on how they can improve on C++ default memory management by implementing a /custom/ solution in their own C++ programs. How is this different than in D? I think D makes it even easier to do this. These guys likely would be serious D wizards if they ever tried it out.

They do find much to discuss whatever point is made! That's why I said that it's not all about D; it's more about being entrenched in something they are comfortable with.
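
A sketch of such a custom allocator, following the class-allocator pattern from the D memory-management documentation (error handling kept minimal):

import std.c.stdlib;
import std.outofmemory;

class Foo
{
    new(size_t sz)
    {
        void* p = std.c.stdlib.malloc(sz);
        if (p is null)
            throw new OutOfMemoryException();
        return p;
    }

    delete(void* p)
    {
        if (p)
            std.c.stdlib.free(p);
    }
}

Instances of Foo now live on the C heap and must be deleted explicitly; the GC neither owns nor scans them.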
 Hence, if you don't want GC (because of concerns about pausing, or
 working-set footprint, or whatever) then you're not using the language
 as it was intended to be used. GC and GC-less approaches are not on an
 equal footing.

D is used however someone wants to use it. For D based kernels, you must avoid using a GC and interface with a custom memory management system of some sort. Does that mean kernels should not be programmed in D, because they avoid using a GC (apparently an intended part of the D language)? It's no problem avoiding the GC, so that tells me that's part of the intended workings of D as well. I believe D was /intended/ to work in both situations (though it's not a language that was intended to work without a gc /all the time/).

It takes a different mindset to learn how to do this -- when and when not to use a gc, even in different parts of a program. D developers working on 3D games/libraries seem to realize this (and have worked successfully with the gc). Some C++ users seem to want to argue from their C++ experience only. So what we have is two groups debating from different perspectives. Neither side is necessarily wrong. But the argument is rather lame. And C++ users that don't use D don't have the perspective of a D user -- which, yes, makes their accusations unfair.

GC is indeed the preferred way to go. But as a systems language, D cannot promote that as the one and only way. It wisely leaves room for other options. I support that. And D has so many powerful features that subtracting a precious few of them to accommodate non-gc based programming is hardly damaging to D's expressiveness.
 I'm sorry, Mike.  What post are you saying I'm
 applauding?  I can't see how relating my applauding
 to the conclusion in that sentence makes any
 sense.  Is there something implied or did you mean
 to point something out?

 Steve Horne's post -
 http://www.digitalmars.com/pnews/read.php?server=news.digitalmars.com&group=digitalmars.D&artnum=44644
 - which I thought you were agreeing with. If I misread or got muddled
 about attribution, apologies.

I supported what I thought was his conclusion. Maybe I was confused? I supported that he thought D was optimal for its support of both gc based memory management and manual memory management at the same time, that no one dimension should be forced over another. I agreed with that.
 My point was just this: regardless of whether GC-less D is a reasonable
 thing to want, if you take the attitude that it's not worth supporting
 then it's hard to see why a C++ users' perception of D as "GC whether
 you want it or not" is unfair.

It appears my "unfair" statement is slowly being dragged into broader realms. I'm not so sure it's maintaining its original application anymore. The discussion about making a GC-less standard library was debated here as to whether it was worth supporting or not. That was another discussion. A standard non-gc based library is likely not going to meet with massive approval from those already in the community who have experienced D for themselves, especially merely to attract the attention of a skeptical C++ crowd (although I admitted that it would be an interesting experiment; and Walter seems to have stated that it wasn't even necessary). I doubt that would work anyway.

If it is "unfair" for C++ skeptics to be told that there isn't support for a non-gc library, given your expression of how they perceive the message ("GC way or the highway"), I'd have to disagree. C++ skeptics are still operating from a different perspective -- one they are quite committed to. If they are spreading misinformation about how D works because they are unwilling to test a different perspective, then it's kind of hard to feel sorry for them if they don't get what they want from a language they have most certainly decided they will never use anyway. Naturally, the result for the D apologist is that he'll never convince the C++ skeptic. But as others have mentioned in other posts, that goal is futile anyway. The positive side-effect of the debate does appear to be education of others that might be interested in D -- it's an opportunity to intercept misinformation.

In summary, the GC is there and available for use if you want it. But you don't have to use it. A good mix of GC and manual memory management is fully supported and recommended as the situation requires. There is no reason to fear being stuck with the gc in D.

I do have my own misgivings about some things in D, but, as you probably guessed, this is not one of them. And for the most part, D is a very enjoyable language to use. :)

-JJR
Nov 22 2006
parent reply Mike Capp <mike.capp gmail.com> writes:
John Reimer wrote:

 Avoid new/delete, dynamic arrays, array slice
 operations, and array concatenation.  I think
 that's it...

Also associative arrays. I'm not convinced, though, that slices require GC. I'd expect GC-less slicing to be safe so long as the programmer ensured that the lifetime of the whole array exceeded that of the slice. In many cases that's trivial to do. I don't like the fact that the same piece of code that initializes a stack array in C will create a dynamic array in D. Porting accident waiting to happen.
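
A minimal sketch of that GC-less slicing case - the slice borrows from a fixed-size stack array, so nothing is allocated:

void main()
{
    int[8] whole;                 // stack storage, no GC allocation
    int[] slice = whole[2 .. 6];  // just a pointer/length pair
    slice[0] = 42;
    assert(whole[2] == 42);
}   // fine, as long as 'slice' never outlives 'whole'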
 What I find strange is that some C++ users [...]
 then digress into a discussion on how they can
 improve on C++ default memory management by
 implementing a /custom/ solution in their own C++
 programs.  How is this different than in D?

Dunno; I'd never argue otherwise. I've never found a need to implement a custom allocator in C++ (especially STL allocators, which turned out to be a complete waste of paper).
 D makes it even easier to do this.  These guys
 likely would be serious D wizards if they ever
 tried it out.

Yeah. Similarly, I think Walter's too pessimistic when he doubts that the "career" C++ people will ever switch. C++ has been a peculiar language lately; for all practical intents and purposes it forked a few years back into two diverging dialects. One is fairly conservative and is used to write production code that needs to be maintained by people not named Andrei. The other is a cloud-cuckoo-land research language used to see what can be done with template metaprogramming. The people drawn to the latter dialect, from Alexandrescu all the way back to Stepanov, enjoy pushing a language's features beyond what the original designer anticipated. Given D's different and richer feature set, I'd be amazed if they didn't have fun with it.
 It's no problem avoiding the GC,

No? Hypothetical: your boss dumps a million lines of D code in your lap and says, "Verify that this avoids the GC in all possible circumstances". What do you do? What do you grep for? What tests do you run? That's not rhetorical; I'm not saying there isn't an answer. I just don't see what it is.
 It appears my "unfair" statement is slowly being
 dragged into broader realms. I'm not so sure it's
 maintaining its original application anymore.

Fair enough; I don't mean to drag you kicking and screaming out of context.
 A standard non-gc based library is likely not
 going to meet with massive approval from those
 already in the community

I should clarify. I'm not proposing that the entire standard library should have a GC-less implementation, just that it should be as useful as possible without introducing GC usage when the user is trying to avoid it.

Crude analogy: imagine everything in Phobos being wrapped in a version(GC) block. Phobos is normally built with -version=GC, so no change for GC users. GC-less users use Phobos built without GC, so none of the library is available, which is where they were anyway.

Now, a GC-less user feels the need for std.foo.bar(). They look at the source, and find that std.foo.bar() won't ever use GC because all it does is add two ints, so they take it out of the version(GC) block and can now use it. Or they write a less efficient but GC-less implementation in a version(NOGC) block. Either way, they get what they want without affecting GC users. It's incremental, and the work is done by the people (if any) who care. (A sketch of the idea follows below.)

cheers
Mike
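
A sketch of the analogy, with GC as the hypothetical version identifier (build with -version=GC to get the first branch):

version (GC)
{
    // GC build: convenient interface, result is GC-allocated
    char[] duplicate(char[] s)
    {
        return s.dup;  // .dup allocates from the GC heap
    }
}
else
{
    // GC-less build: the caller supplies the destination buffer
    char[] duplicate(char[] s, char[] dest)
    {
        dest[0 .. s.length] = s[];
        return dest[0 .. s.length];
    }
}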
Nov 22 2006
next sibling parent reply Sean Kelly <sean f4.ca> writes:
Mike Capp wrote:
 John Reimer wrote:
 
 Avoid new/delete, dynamic arrays, array slice
 operations, and array concatenation.  I think
 that's it...

Also associative arrays. I'm not convinced, though, that slices require GC. I'd expect GC-less slicing to be safe so long as the programmer ensured that the lifetime of the whole array exceeded that of the slice. In many cases that's trivial to do. I don't like the fact that the same piece of code that initializes a stack array in C will create a dynamic array in D. Porting accident waiting to happen.
 What I find strange is that some C++ users [...]
 then digress into a discussion on how they can
 improve on C++ default memory management by
 implementing a /custom/ solution in their own C++
 programs.  How is this different than in D?

Dunno; I'd never argue otherwise. I've never found a need to implement a custom allocator in C++ (especially STL allocators, which turned out to be a complete waste of paper).

STL allocators can be useful for adapting the containers to work with shared memory. And I created a debug allocator I use from time to time. But the complexity allocators add to container implementation is significant, for arguably little return.
 D makes it even easier to do this.  These guys
 likely would be serious D wizards if they ever
 tried it out.

Yeah. Similarly, I think Walter's too pessimistic when he doubts that the "career" C++ people will ever switch.

I suppose it depends on how you define "career." If it's simply that a person uses the language because it is the best fit for their particular problem domain and they are skilled at using it, then I think such a user would consider D (I certainly have). But if you define it as someone who is clinging to their (possibly limited) knowledge of C++ and is not interested in learning new skills then probably not. Frankly, I think the greatest obstacle D will have is gaining traction in existing projects. With a decade or two of legacy C++ code to support, switching to D is simply not feasible. Training is another issue. A small development shop could switch languages relatively easily, but things are much more difficult if a build team, QA, various levels of programmers, etc, all need to learn a new language or tool set. At that point the decision has more to do with short-term cost than anything.
 C++ has been a peculiar language lately; for all practical intents and
 purposes it forked a few years back into two diverging dialects. One is
 fairly conservative and is used to write production code that needs to
 be maintained by people not named Andrei. The other is a
 cloud-cuckoo-land research language used to see what can be done with
 template metaprogramming.

LOL. Pretty much. Professionally, it's uncommon that I'll meet C++ programmers that have much experience with STL containers, let alone knowledge of template metaprogramming. Even techniques that I consider commonplace, like RAII, seem to be unknown to many/most C++ programmers. I think the reality is that most firms still write C++ code as if it were 1996, not 2006.
 It's no problem avoiding the GC,

No? Hypothetical: your boss dumps a million lines of D code in your lap and says, "Verify that this avoids the GC in all possible circumstances". What do you do? What do you grep for? What tests do you run?

I'd probably begin by hooking the GC collection routine and dumping data on what was being cleaned up non-deterministically. This should at least point out specific types of objects which may need to be managed some other way. But it still leaves bits of dynamic arrays discarded by resizes, AA segments, etc, to fall through the cracks. A more complete solution would be to hook both allocation and collection and generate reports, or rely on a tool like Purify. Probably pretty slow work in most cases.
 I should clarify. I'm not proposing that the entire standard library
 should have a GC-less implementation, just that it should be as useful
 as possible without introducing GC usage when the user is trying to
 avoid it.

This is really almost situational. Personally, I try to balance elegance and intended usage with optional destination buffer parameters and such. In the algorithm code I've written, for example, the only routines that allocate are pushHeap, unionOf (set union), and intersectionOf (set intersection). It would be possible to allow for optional destination buffers for the set operations, but with optional comparison predicates already in place, I'm faced with deciding whether to put the buffer argument before or after the predicate... and neither is ideal. I guess a bunch of overloads could solve the problem, but what a mess. Sean
Nov 22 2006
next sibling parent Bill Baxter <wbaxter gmail.com> writes:
Sean Kelly wrote:
 Mike Capp wrote:

 Dunno; I'd never argue otherwise. I've never found a need to implement 
 a custom
 allocator in C++ (especially STL allocators, which turned out to be a 
 complete
 waste of paper).

STL allocators can be useful for adapting the containers to work with shared memory. And I created a debug allocator I use from time to time. But the complexity allocators add to container implementation is significant, for arguably little return.

I ended up doing just that thing for a project once. That is, I used STL custom allocators to allocate the memory from SGI's shmem shared memory pools. It was not at all fun, though, especially since the allocator comes at the very end of every parameter list. So using allocators means you have to specify *all* the parameters of every STL container you use. --bb
Nov 23 2006
prev sibling parent reply Steve Horne <stephenwantshornenospam100 aol.com> writes:
On Wed, 22 Nov 2006 19:07:59 -0800, Sean Kelly <sean f4.ca> wrote:


 No? Hypothetical: your boss dumps a million lines of D code in your lap
 and says, "Verify that this avoids the GC in all possible
 circumstances". What do you do? What do you grep for? What tests do you
 run?

I'd probably begin by hooking the GC collection routine and dumping data on what was being cleaned up non-deterministically.

1. If possible, relink without the GC library. If that fails, it doesn't necessarily mean the GC gets used, so relink with the GC library patched so that any attempt to allocate memory from the GC heap fails with a loud noise. 2. Run all unit tests, and check that full coverage is achieved. Of course that assumes that there *are* unit tests... -- Remove 'wants' and 'nospam' from e-mail.
Nov 24 2006
parent reply %u <hal nospam.gmail.com> writes:
 No? Hypothetical: your boss dumps a million lines of D code in your lap
 and says, "Verify that this avoids the GC in all possible
 circumstances". What do you do? What do you grep for? What tests do you
 run?

I'd probably begin by hooking the GC collection routine and dumping data on what was being cleaned up non-deterministically.

1. If possible, relink without the GC library. If that fails, it doesn't necessarily mean the GC gets used, so relink with the GC library patched so that any attempt to allocate memory from the GC heap fails with a loud noise. 2. Run all unit tests, and check that full coverage is achieved. Of course that assumes that there *are* unit tests...

It should be as simple as choosing a compile option and letting the compiler complain about any use of the GC.
Nov 26 2006
parent reply Alexander Panek <a.panek brainsware.org> writes:
%u, (:P)

as soon as you compile to object files and do the linking yourself, you 
will in any case get undefined references to some GC functions whenever 
you try to use GC-enabled features. And as this thread is about OSnews 
discussions: in OS development, you *do* link yourself anyway 
(gcc -c, ld -Tlinker-script).

Kind regards,
Alex

%u wrote:
 No? Hypothetical: your boss dumps a million lines of D code in your lap
 and says, "Verify that this avoids the GC in all possible
 circumstances". What do you do? What do you grep for? What tests do you
 run?

I'd probably begin by hooking the GC collection routine and dumping data on what was being cleaned up non-deterministically.

1. If possible, relink without the GC library. If that fails, it doesn't necessarily mean the GC gets used, so relink with the GC library patched so that any attempt to allocate memory from the GC heap fails with a loud noise. 2. Run all unit tests, and check that full coverage is achieved. Of course that assumes that there *are* unit tests...

It should be as simple as choosing a compile option and letting the compiler complain about any use of the GC.

Nov 27 2006
parent Alexander Panek <a.panek brainsware.org> writes:
Arr!
gdc -c, of course.

Alexander Panek wrote:
 %u, (:P)
 
 as soon as you compile to object files and do the linking yourself, you 
 are in any way getting undefined references to some GC functions, as 
 soon as you try to use GC-enabled features, anyways. And as this thread 
 is about OSnews discussions, in OS development, you *do* link yourself 
 anyways (gcc -c, ld -Tlinker-script).
 
 Kind regards,
 Alex
 
 %u wrote:
 No? Hypothetical: your boss dumps a million lines of D code in your 
 lap and says,
 "Verify that this avoids the GC in all possible circumstances". 
 What do you do?
 What do you grep for? What tests do you run?

I'd probably begin by hooking the GC collection routine and dumping data on what was being cleaned up non-deterministically.

1. If possible, relink without the GC library. If that fails, it doesn't necessarily mean the GC gets used, so relink with the GC library patched so that any attempt to allocate memory from the GC heap fails with a loud noise. 2. Run all unit tests, and check that full coverage is achieved. Of course that assumes that there *are* unit tests...

It should be as simple as choosing a compile option and letting the compiler complain about any use of the GC.


Nov 27 2006
prev sibling parent Georg Wrede <georg.wrede nospam.org> writes:
Mike Capp wrote:
 John Reimer wrote:
A standard non-gc based library is likely not
going to meet with massive approval from those
already in the community

I should clarify. I'm not proposing that the entire standard library should have a GC-less implementation, just that it should be as useful as possible without introducing GC usage when the user is trying to avoid it. Crude analogy: imagine everything in Phobos being wrapped in a version(GC) block. Phobos is normally built with -version=GC, so no change for GC users. GC-less users use Phobos built without GC, so none of the library is available, which is where they were anyway. Now, a GC-less user feels the need for std.foo.bar(). They look at the source, and find that std.foo.bar() won't ever use GC because all it does is add two ints, so they take it out of the version(GC) block and can now use it. Or they write a less efficient but GC-less implementation in a version(NOGC) block. Either way, they get what they want without affecting GC users. It's incremental, and the work is done by the people (if any) who care.

If DMD were an old fashioned shrink wrapped product, this would be solved by having a little note in the library documentation next to each Phobos function, stating whether GC {will|may|won't} get used. (The last statement is not strictly correct, it should rather be something like "allocate from the heap", "GC-safe", or whatever, but the phrasing serves the point here.) If somebody were to actually check Phobos, the obvious first thing to do is to grep for "new". But what's the next thing?
Nov 22 2006
prev sibling parent Steve Horne <stephenwantshornenospam100 aol.com> writes:
On Wed, 22 Nov 2006 19:10:36 +0000 (UTC), Mike Capp
<mike.capp gmail.com> wrote:

  > some of these people are literally annoyed at D and D
  > promoters

Not all of those people are diehards, though. I like D, and I sometimes get annoyed by D promoters. There does seem to be an underlying attitude among some of the younger and more enthusiastic posters here that D is essentially perfect for everything, that anyone expressing reservations is automatically a closed-minded fogey, and that no amount of experience with other languages is relevant because D is a whole new paradigm.

It sounds like you have a lot of sympathy with this view... http://www.perl.com/pub/a/2000/12/advocacy.html Me too, but it's not all one side. The anti-advocacy-resistance can overreact as well by becoming defensive and adopting a the-best-defence-is-a-strong-offense approach. And if they're attacking, well, we need to defend ourselves - and again, the best defence is a strong offense. And now they know that we're definitely on the attack, so... The problem is one of human nature, as opposed to the people on one side or the other. Just be glad that bullets don't work over the internet ;-) -- Remove 'wants' and 'nospam' from e-mail.
Nov 24 2006
prev sibling parent Steve Horne <stephenwantshornenospam100 aol.com> writes:
On Tue, 21 Nov 2006 22:55:27 -0800, "John Reimer"
<terminal.node gmail.com> wrote:

It was too long, but with good points.  If it were pared down, it would  
read easier and the points might hit home even harder.

That's my writing style, normally - except when no-one agrees with the 'good points' bit, anyway. Trouble is, if I were to go through and try to pare down, it would get longer. I'd worry that actually there is a narrow range of platforms and applications where non-GC might work but GC not - those that are right on the edge of coping with malloc/free and unable to bear any GC overhead. It's an Asperger's thing. People misunderstand what you say, so you get more verbose to try and avoid the misunderstandings. You have no common sense yourself so you can't know what can be left to the common sense of others. Besides, odd non-verbals tend to trigger mistrust, and that triggers defensiveness in the form of nit-picking every possible gap in your reasoning.
If you really take an honest look at OSNEWS posts and others, you will  
realize that some of these people are literally annoyed at D and D  
promoters for a reason deeper and unrelated to the language.  You can't  
argue with that.

D is openly embracing something that people have stereotyped as a feature of scripting languages. Sure, some of those scripting languages are good for full applications, and sure, Java is much more aimed at applications, and sure, there's all those 'academic' languages too, but a lot of systems level programmers had GC tagged as something for 'lesser' programmers who might manage the odd high level app, or academic geeks who never write a line of real-world code. Stereotypes. Status. In-groups. Non-GC has become a symbol, really. When I first encountered D, and read that it is a systems-level language with GC, at first I laughed and then all the 'reasons' why that's bad went through my head. Looking back, that sounds like a defence mechanism to me. Why should I need a defence mechanism? Perhaps I felt under attack? This is just me, of course, and I got over it, but anyone care to bet that I'm the only one? Of course putting this kind of thing out as advocacy is a bad idea. When people feel under attack, the worst thing you can do is accuse them of being irrational. -- Remove 'wants' and 'nospam' from e-mail.
Nov 23 2006
prev sibling parent reply Boris Kolar <boris.kolar globera.com> writes:
== Quote from Steve Horne (stephenwantshornenospam100 aol.com)'s article
 Most real world code has a mix of
 high-level and low-level.

True. It feels so liberating when you at least have an option to cast reference to int, mirror internal structure of another class, or mess with stack frames. Those are all ugly hacks, but ability to use them makes programming much more fun. The ideal solution would be to have a safe language with optional unsafe features, so hacks like that would have to be explicitly marked as unsafe. Maybe that's a good idea for D 2.0 :) If D's popularity keeps rising, there will be eventually people who will want Java or .NET backend. With unsafe features, you can really put a lot of extra power in the language (opAssign, opIdentity,...) that may work or may not work as intended - but it's programmer's error if it doesn't (intimate knowledge of compiler internals is assumed).
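
For instance, the reference-to-int hack is already a one-liner in today's D - a minimal sketch, with no special annotation required:

import std.stdio;

class Foo
{
    int x = 42;
}

void main()
{
    Foo f = new Foo;
    // reinterpret the reference as a raw address - one of those ugly hacks
    size_t addr = cast(size_t) cast(void*) f;
    writefln("Foo instance lives at 0x%x", addr);
}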
Nov 22 2006
next sibling parent reply Steve Horne <stephenwantshornenospam100 aol.com> writes:
On Wed, 22 Nov 2006 09:20:17 +0000 (UTC), Boris Kolar
<boris.kolar globera.com> wrote:

== Quote from Steve Horne (stephenwantshornenospam100 aol.com)'s article
 Most real world code has a mix of
 high-level and low-level.

True. It feels so liberating when you at least have an option to cast reference to int, mirror internal structure of another class, or mess with stack frames. Those are all ugly hacks, but ability to use them makes programming much more fun. The ideal solution would be to have a safe language with optional unsafe features, so hacks like that would have to be explicitly marked as unsafe. Maybe that's a good idea for D 2.0 :) If D's popularity keeps rising, there will be eventually people who will want Java or .NET backend. With unsafe features, you can really put a lot of extra power in the language (opAssign, opIdentity,...) that may work or may not work as intended - but it's programmer's error if it doesn't (intimate knowledge of compiler internals is assumed).

Hmmm C# does that safe vs. unsafe thing, doesn't it. My reaction was basically that I never used the 'unsafe' stuff at all. I learned it, but for anything that would need 'unsafe' I avoided .NET altogether. Why? Well, what I didn't learn is exactly what impact it has on users. As soon as I realised there is an impact on users, I felt very nervous. If code could be marked as unsafe, and then be allowed to use some subset of unsafe features, I'd say that could be a good thing. But it should be an issue for developers to deal with, not users. D is in a good position for this, since basically unsafe blocks should be highlighted in generated documentation (to ensure they get more attention in code reviews etc). Also, possibly there should be a white-list of unsafe blocks to be allowed during compilation - something that each developer can hack for his own testing builds, but for the main build comes from some central source. Unsafe modules that aren't on the list should trigger either errors or warnings, depending on compiler options. -- Remove 'wants' and 'nospam' from e-mail.
Nov 23 2006
next sibling parent Steve Horne <stephenwantshornenospam100 aol.com> writes:
On Thu, 23 Nov 2006 21:25:29 +0000, Steve Horne
<stephenwantshornenospam100 aol.com> wrote:

Well, what I didn't learn is exactly what impact it has on users. As
soon as I realised there is an impact on users, I felt very nervous.

Just to expand on it... If you use the 'obsolete' keyword, your users don't get a "this application uses obsolete code!!!" message every time they start it. The keyword triggers warnings for developers, not users. -- Remove 'wants' and 'nospam' from e-mail.
Nov 23 2006
prev sibling parent reply "John Reimer" <terminal.node gmail.com> writes:
On Thu, 23 Nov 2006 13:25:29 -0800, Steve Horne 
<stephenwantshornenospam100 aol.com> wrote:

 On Wed, 22 Nov 2006 09:20:17 +0000 (UTC), Boris Kolar
 <boris.kolar globera.com> wrote:

 == Quote from Steve Horne (stephenwantshornenospam100 aol.com)'s article

 Most real world code has a mix of
 high-level and low-level.

 True. It feels so liberating when you at least have an option to cast
 reference to int, mirror internal structure of another class, or
 mess with stack frames. Those are all ugly hacks, but ability to
 use them makes programming much more fun.
 The ideal solution would be to have a safe language with optional
 unsafe features, so hacks like that would have to be explicitly marked
 as unsafe. Maybe that's a good idea for D 2.0 :) If D's popularity
 keeps rising, there will be eventually people who will want Java or
 .NET backend. With unsafe features, you can really put a lot of extra
 power in the language (opAssign, opIdentity,...) that may work or may
 not work as intended - but it's programmer's error if it doesn't
 (intimate knowledge of compiler internals is assumed).

 Hmmm C# does that safe vs. unsafe thing, doesn't it. My reaction was
 basically that I never used the 'unsafe' stuff at all. I learned it,
 but for anything that would need 'unsafe' I avoided .NET altogether.
 Why?

 Well, what I didn't learn is exactly what impact it has on users. As
 soon as I realised there is an impact on users, I felt very nervous.

 If code could be marked as unsafe, and then be allowed to use some
 subset of unsafe features, I'd say that could be a good thing. But it
 should be an issue for developers to deal with, not users.

 D is in a good position for this, since basically unsafe blocks should
 be highlighted in generated documentation (to ensure they get more
 attention in code reviews etc).

 Also, possibly there should be a white-list of unsafe blocks to be
 allowed during compilation - something that each developer can hack
 for his own testing builds, but for the main build comes from some
 central source. Unsafe modules that aren't on the list should trigger
 either errors or warnings, depending on compiler options.

I think one problem with labeling things "unsafe" is that it could imply "buggy" or something else perhaps. It's an unknown, and the implication might /not/ be a popular one in large software projects. I'm not quite sure how to express that thought completely, though. I just wonder if developers might look at an "unsafe" code section and think "why is it even here if it's unsafe?". Would NASA like to have "unsafe" code in their projects? What does it really mean? Does it have different meanings for different languages? I know Modula 3 went that route and obviously thought it was a good idea; but, then again, Modula 3 didn't catch on.

Another issue: C# is designed to be a VM-based language, while D is designed to be both a general purpose language and a systems programming language. It seems to me that an "unsafe" keyword has different implications on each language platform, given their different focus. While I understand the idea of "unsafe", I wonder how such a term might affect D's representation as a systems programming language. What might work well for a VM-based language might not be very good for D. Or it may be the other way around. Either way, I don't think comparing C# and D can be done on the same level in this regard.

D likely has to work out these ideas on its own merit, given the domain it's creating for itself.

-JJR
Nov 24 2006
parent Steve Horne <stephenwantshornenospam100 aol.com> writes:
On Fri, 24 Nov 2006 11:52:20 -0800, "John Reimer"
<terminal.node gmail.com> wrote:

I think one problem with labeling things "unsafe" is that it could imply  
"buggy" or something else perhaps.

Well, that is just terminology though. Change 'unsafe' to 'permit', as in perhaps some kind of 'permit <feature-name>' block. Reframing it as security permissions as opposed to unsafe code doesn't really change anything, but could make it sound more acceptable.
Another issue: C# is designed to be a VM-based language, while D is  
designed to be a both a general purpose language and a systems programming  
language.  It seems to me that an "unsafe" keyword has different  
implications on each language platform, given their different focus.

To me, it's mostly about what users see. In particular, I don't want them to see... """ WARNING - WARNING - EVIL UNSAFE TERRORIST CODE IS TRYING TO INVADE YOUR COMPUTER!!!! PANIC IMMEDIATELY!!!! """ -- Remove 'wants' and 'nospam' from e-mail.
Nov 24 2006
prev sibling parent reply Dave <Dave_member pathlink.com> writes:
Boris Kolar wrote:
 == Quote from Steve Horne (stephenwantshornenospam100 aol.com)'s article
 Most real world code has a mix of
 high-level and low-level.

True. It feels so liberating when you at least have an option to cast reference to int, mirror internal structure of another class, or mess with stack frames. Those are all ugly hacks, but ability to use them makes programming much more fun. The ideal solution would be to have a safe language with optional unsafe features, so hacks like that would have to be explicitly marked as unsafe. Maybe that's a good idea for D 2.0 :) If D's popularity keeps rising, there will be eventually people who will want Java or .NET backend. With unsafe features, you can really put a lot of extra

Good Gosh, I hope not, not if that means wrecking the language to conform to those runtimes. Look at what MS has done with (or to!) C++.Net - yikes! D is aimed primarily at the native compilation / systems programming space, with great support for general application programming. Just like C/C++. And there will be plenty of room for all of the best native / JIT / interpreted languages for a long time to come. It's the old 80-20 rule - 20% of the available .Net and Java libraries are used for 80% of development work. So if most of the effort is concentrated on the 20% most often used, D libraries will be a reasonable alternative for 80% of the applications out there. The other library fluff can come later. Actually I wouldn't be surprised to learn that it's more like 90-10.
 power in the language (opAssign, opIdentity,...) that may work or may
 not work as intended - but it's programmer's error if it doesn't
 (intimate knowledge of compiler internals is assumed).

Nov 24 2006
parent reply Don Clugston <dac nospam.com.au> writes:
Dave wrote:
 Boris Kolar wrote:
 == Quote from Steve Horne (stephenwantshornenospam100 aol.com)'s article
 Most real world code has a mix of
 high-level and low-level.

True. It feels so liberating when you at least have an option to cast reference to int, mirror internal structure of another class, or mess with stack frames. Those are all ugly hacks, but ability to use them makes programming much more fun. The ideal solution would be to have a safe language with optional unsafe features, so hacks like that would have to be explicitly marked as unsafe. Maybe that's a good idea for D 2.0 :) If D's popularity keeps rising, there will be eventually people who will want Java or .NET backend. With unsafe features, you can really put a lot of extra

Good Gosh, I hope not, not if that means wrecking the language to conform to those runtimes. Look at what MS has done with (or to!) C++.Net - yikes!.

I think it's even worse than that. The opposite of 'unsafe' is *not* safe! My brother has worked with medical software containing software bugs which kill people. And the bugs are NOT 'dangling pointers', they are incorrect mathematics (wrong dosage, etc). The code is 'safe', yet people have been taken out in body bags. I think this whole "safe"/"unsafe" concept can be distracting -- the goal is software with no bugs! It's just a tool to reduce a specific class of bugs. D has many features which help to reduce bugs; the concept of 'safe' code just isn't one of them.
 D is aimed primarily at the native compilation / systems programming 
 space, with great support for general application programming. Just like 
 C/++. And there will be plenty of room for all of the best native / JIT 
 / interpreted languages for a long time to come.
 
 It's the old 80-20 rule - 20% of the available .Net and Java libraries 
 are used for 80% of development work. So if most of the effort is 
 concentrated on the 20% most often used, D libraries will be a 
 reasonable alternative for 80% of the applications out there. The other 
 library fluff can come later. Actually I wouldn't be surprised to learn 
 that it's more like 90-10.

That's an excellent point.
Nov 25 2006
parent reply Benji Smith <dlanguage benjismith.net> writes:
Don Clugston wrote:
 I think it's even worse than that. The opposite of 'unsafe' is *not* safe!
 
 My brother has worked with medical software which contain software bugs 
 which kill people. And the bugs are NOT 'dangling pointers', they are 
 incorrect mathematics (wrong dosage, etc). The code is 'safe', yet 
 people have been taken out in body bags.
 
 I think this whole "safe"/"unsafe" concept can be distracting -- the 
 goal is software with no bugs! It's just a tool to reduce a specific 
 class of bugs. D does many features which help to reduce bugs, the 
 concept of 'safe' code just isn't one of them.

I actually like the "unsafe" keyword in C# (never used C++.NET). The words "safe" and "unsafe" refer only to type-safety, so it would be more accurate (but cumbersome) if the keyword was "untypesafe" to indicate blocks of code circumventing the type system. It's nice to know that the default assumption in C# is that nearly all code will subject itself to the compiler's static type checking. Sure, sometimes it's necessary circumvent the type system by casting pointers, but I think it helps enforce good programming practice that those untypesafe operations have to be specifically annotated before the compiler will accept them. --benji
Nov 27 2006
parent Sean Kelly <sean f4.ca> writes:
Benji Smith wrote:
 
 It's nice to know that the default assumption in C# is that nearly all 
 code will subject itself to the compiler's static type checking. Sure, 
 sometimes it's necessary circumvent the type system by casting pointers, 
 but I think it helps enforce good programming practice that those 
 untypesafe operations have to be specifically annotated before the 
 compiler will accept them.

But isn't the presence of a cast annotation in itself? Sean
Nov 27 2006
prev sibling next sibling parent reply Mike Capp <mike.capp gmail.com> writes:
 One issue brought up is that of D "requiring" the
 use of a GC. What would it take to prove that wrong
 by making a full blown standard lib that doesn't
 use a GC, and in fact doesn't have a GC?

 It would be painful to work with but no more so
 than in C++. OTOH with scope() and such, it might
 be easy.

It's not just a library issue, and in some ways I think it *would* be significantly more painful than in C++. D has a lot of innocuous-looking syntax that allocates heap memory non-obviously; AA and dynamic array initialization syntax, for instance. Without a GC, that's a lot of leaks and/or runtime errors waiting to happen. I don't think it's an impossible sell, though. It would help to have a -nogc compiler switch or syntax attribute that disallowed usage of these constructs. And the scope stuff is coming on nicely; if Walter extends it to support RAII member data, as he's mentioned a few times, it'd be great. The other big issue mentioned in the article comments is that D's GC implementation is lacking. It's hard to say whether a language with native pointers would really play nice with a copying collector, f'rinstance. Now that Java is GPL it'll be interesting to see whether the JVM's GC implementation can be yanked out for use by other languages: it's generational, copying and pretty well-tuned. There's some other stuff that's been buzzing around in my head about thread-local heaps lately, but it's not coherent enough to constitute a suggestion yet.
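
To illustrate the innocuous-looking syntax point above, a few lines that each allocate from the GC heap with no 'new' in sight (a minimal sketch):

void hiddenAllocations(char[] a, char[] b)
{
    int[char[]] ages;
    ages["alice"] = 30;  // inserting into an AA allocates

    int[] xs;
    xs.length = 100;     // growing a dynamic array allocates

    char[] s = a ~ b;    // array concatenation allocates
}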
Nov 19 2006
next sibling parent "Unknown W. Brackets" <unknown simplemachines.org> writes:
But those things *are* in Phobos.

One could, for example, throw an exception or generate an assert when 
they are used, at the very least, were they to be disallowed.

Or, those things could be garbage collected, and manual collection runs 
could be scheduled for any such. In any case, the rest of the library 
could have no GC usage.

It's really any time you get something for nothing, after all.  It's not 
really that difficult to tell what uses memory, whether it has "new" in 
it or not - imho.

-[Unknown]


 It's not just a library issue, and in some ways I think it *would* be
 significantly more painful than in C++. D has a lot of innocuous-looking syntax
 that allocates heap memory non-obviously; AA and dynamic array initialization
 syntax, for instance. Without a GC, that's a lot of leaks and/or runtime errors
 waiting to happen.

Nov 19 2006
prev sibling parent Dave <Dave_member pathlink.com> writes:
Mike Capp wrote:
 One issue brought up is that of D "requiring" the
 use of a GC. What would it take to prove that wrong
 by making a full blown standard lib that doesn't
 use a GC, and in fact doesn't have a GC?

 It would be painful to work with but no more so
 than in C++. OTOH with scope() and such, it might
 be easy.

It's not just a library issue, and in some ways I think it *would* be significantly more painful than in C++. D has a lot of innocuous-looking syntax that allocates heap memory non-obviously; AA and dynamic array initialization syntax, for instance. Without a GC, that's a lot of leaks and/or runtime errors waiting to happen. I don't think it's an impossible sell, though. It would help to have a -nogc compiler switch or syntax attribute that disallowed usage of these constructs. And the scope stuff is coming on nicely; if Walter extends it to support RAII member data, as he's mentioned a few times, it'd be great. The other big issue mentioned in the article comments is that D's GC implementation is lacking. It's hard to say whether a language with native pointers would really play nice with a copying collector, f'rinstance.

IIRC, Walter has mentioned that he has some ideas for that. Anyhow, I think the restrictions listed in the D GC doc cover most/all of the concerns about pointers and a moving collector: http://digitalmars.com/d/garbage.html
 Now that Java is GPL it'll be interesting to see whether the JVM's GC implementation can be yanked out for use by other languages: it's generational, copying and pretty well-tuned.
 
 There's some other stuff that's been buzzing around in my head about thread-local heaps lately, but it's not coherent enough to constitute a suggestion yet.

Nov 19 2006
prev sibling next sibling parent reply Georg Wrede <georg.wrede nospam.org> writes:
BCS wrote:
 Mars wrote:
 
 http://www.osnews.com/comment.php?news_id=16526

One issue brought up is that of D "requiring" the use of a GC. What would it take to prove that wrong by making a full blown standard lib that doesn't use a GC, and in fact doesn't have a GC? It would be painful to work with but no more so than in C++. OTOH with scope() and such, it might be easy. Anyway, just a thought.

Having such a library would make a huge difference in every C++ vs D discussion! The opposition would have a lot less ammunition against us.
Nov 19 2006
next sibling parent reply Mike Capp <mike.capp gmail.com> writes:
Georg Wrede wrote:

 Having such a library would make a huge difference
 in every C++ vs D discussion! The opposition would
 have a lot less ammunition against us.

Huh? When did this become an adversarial thing? If you're trying to convince someone that D is a better fit for their needs, labelling them "the opposition" probably isn't going to help.

Bear in mind that a lot of C++ programmers lived through the whole Java hypefest and tend to start twitching when told that GC will magically solve all their problems and they don't need to worry about it. There's a reason why Java's "simple" memory model sprouted PhantomReferences and SoftReferences and WeakReferences a few years down the line: resource lifetime management is a tricksy area with a lot of dark corners, and one size does not fit all. If D is successful it'll sprout similar accommodations in time, though hopefully cleaner ones.

Until then, accept that D might not be the right answer for everyone just yet, and the discussion will be less combative and more constructive.

cheers
Mike
Nov 19 2006
next sibling parent reply Bill Baxter <dnewsgroup billbaxter.com> writes:
Mike Capp wrote:
 Georg Wrede wrote:
 
 Having such a library would make a huge difference
 in every C++ vs D discussion! The opposition would
 have a lot less ammunition against us.

Huh? When did this become an adversarial thing? If you're trying to convince someone that D is a better fit for their needs, labelling them "the opposition" probably isn't going to help. Bear in mind that a lot of C++ programmers lived through the whole Java hypefest and tend to start twitching when told that GC will magically solve all their problems and they don't need to worry about it. There's a reason why Java's "simple" memory model sprouted PhantomReferences and SoftReferences and WeakReferences a few years down the line: resource lifetime management is a tricksy area with a lot of dark corners, and one size does not fit all. If D is successful it'll sprout similar accommodations in time, though hopefully cleaner ones. Until then, accept that D might not be the right answer for everyone just yet, and the discussion will be less combative and more constructive. cheers Mike

Indeed. The people who really need to be GC-free are a tiny minority. Wasting a lot of manpower creating a non-GC library just so people who don't want to convert from C++ will have one less justification for not using D seems like a huge waste of effort to me. --bb
Nov 19 2006
next sibling parent reply Walter Bright <newshound digitalmars.com> writes:
Bill Baxter wrote:
 Indeed.  The people who really need to be GC-free are a tiny minority. 
 Wasting a lot of manpower creating a non-GC library just so people who 
 don't want to convert from C++ will have one less justification for not 
 using D seems like a huge waste of effort to me.

I translated Empire from C to D. It worked fine, and did not use the gc at all. It didn't require a new runtime library.
Nov 19 2006
parent Georg Wrede <georg.wrede nospam.org> writes:
Walter Bright wrote:
 Bill Baxter wrote:
 
 Indeed.  The people who really need to be GC-free are a tiny minority. 
 Wasting a lot of manpower creating a non-GC library just so people who 
 don't want to convert from C++ will have one less justification for 
 not using D seems like a huge waste of effort to me.

I translated Empire from C to D. It worked fine, and did not use the gc at all. It didn't require a new runtime library.

Cool! And I guess I'm not the only one who'd never have guessed. Shouldn't this get a whole lot more exposure than now? This ought to be the first thing one thinks of anytime somebody takes up the GC "issue".
Nov 20 2006
prev sibling parent reply Jeff <jeffrparsons optusnet.com.au> writes:
 Indeed.  The people who really need to be GC-free are a tiny minority. 
 Wasting a lot of manpower creating a non-GC library just so people who 
 don't want to convert from C++ will have one less justification for not 
 using D seems like a huge waste of effort to me.

What about game developers? Although use of malloc/free isn't strictly deterministic either, aren't you far less likely with it than with the current GC to end up with huge (by game standards, and hence unacceptable) pauses to execution? Preallocation of everything isn't always possible.
Nov 20 2006
parent Steve Horne <stephenwantshornenospam100 aol.com> writes:
On Mon, 20 Nov 2006 20:37:11 +1100, Jeff
<jeffrparsons optusnet.com.au> wrote:

 Indeed.  The people who really need to be GC-free are a tiny minority. 
 Wasting a lot of manpower creating a non-GC library just so people who 
 don't want to convert from C++ will have one less justification for not 
 using D seems like a huge waste of effort to me.

What about game developers? Although use of malloc/free isn't strictly deterministic either, aren't you far less likely with it than with the current GC to end up with huge (by game standards, and hence unacceptable) pauses to execution? Preallocation of everything isn't always possible.

You do the allocation/deallocation at times when you can afford the pauses, such as between levels. But then, sometimes there are no such times.

I've never seen the source of the GTA games, of course, but here is some reasoning based on playing them...

Consider Vice City. When crossing from one side of the map to another, you get a delay for loading etc. Now consider San Andreas - a much larger game that avoids those delays. But if you drive fast enough in San Andreas, you can outrun the loading for LOD and textures, and can end up driving through low-detail scenery and crashing into things you can't see.

That is, San Andreas basically has (1) a high priority thread for rendering and up-to-the-millisecond game logic, and (2) a low priority thread that lags behind, loading new scenery using left-over time.

Thread (1) must read dynamically allocated memory - after all, the scenery that is loaded must get rendered - but I doubt that it dynamically allocates or frees any memory at all for itself.

Thread (2) could use GC or malloc/free. Either way, it has essentially the same issue. It is designed as a slow process that doesn't need to keep bang up to date, and so in some situations it may lag.

Thread 1 cannot afford to do dynamic allocation and freeing. Thread 2 could use either GC or malloc/free and it doesn't matter.

I suspect that that's a very common pattern.

-- 
Remove 'wants' and 'nospam' from e-mail.
Nov 23 2006
prev sibling parent Steve Horne <stephenwantshornenospam100 aol.com> writes:
On Sun, 19 Nov 2006 23:56:22 +0000 (UTC), Mike Capp
<mike.capp gmail.com> wrote:

Georg Wrede wrote:

 Having such a library would make a huge difference
 in every C++ vs D discussion! The opposition would
 have a lot less ammunition against us.

Huh? When did this become an adversarial thing? If you're trying to convince someone that D is a better fit for their needs, labelling them "the opposition" probably isn't going to help.

Yeah, but, well... I often get told I'm oversensitive and I take things too literally. My view is that it happens to everyone. There obviously is an 'us' in this issue, and at least one 'them'. And there is a disagreement between the 'us' and 'them' groups. No matter how benign and constructive the intent is, adversarial language can be hard to avoid. -- Remove 'wants' and 'nospam' from e-mail.
Nov 23 2006
prev sibling next sibling parent reply Dave <Dave_member pathlink.com> writes:
Georg Wrede wrote:
 BCS wrote:
 Mars wrote:

 http://www.osnews.com/comment.php?news_id=16526

One issue brought up is that of D "requiring" the use of a GC. What would it take to prove that wrong by making a full blown standard lib that doesn't use a GC, and in fact doesn't have a GC? It would be painful to work with but no more so than in C++. OTOH with scope() and such, it might be easy. Anyway, just a thought.

Having such a library would make a huge difference in every C++ vs D discussion! The opposition would have a lot less ammunition against us.

But the whole concern centers around two canards: a) GC is really slow and b) malloc/free offer deterministic performance for real-time applications. I actually think that the best defense is dispelling those two myths. a) for D will come in time and b) is just plain not true for general purpose malloc/free implementations on modern operating systems.
Nov 19 2006
next sibling parent "John Reimer" <terminal.node gmail.com> writes:
On Sun, 19 Nov 2006 19:27:40 -0800, Dave <Dave_member pathlink.com> wrote:

 Georg Wrede wrote:
 BCS wrote:
 Mars wrote:

 http://www.osnews.com/comment.php?news_id=16526

 One issue brought up is that of D "requiring" the use of a GC. What would it take to prove that wrong by making a full blown standard lib that doesn't use a GC, and in fact doesn't have a GC?

 It would be painful to work with but no more so than in C++. OTOH with scope() and such, it might be easy.

 Anyway, just a thought.

 Having such a library would make a huge difference in every C++ vs D discussion! The opposition would have a lot less ammunition against us.

 But the whole concern centers around two canards: a) GC is really slow and b) malloc/free offer deterministic performance for real-time applications.

 I actually think that the best defense is dispelling those two myths. a) for D will come in time and b) is just plain not true for general purpose malloc/free implementations on modern operating systems.

Good points!

-JJR
Nov 19 2006
prev sibling next sibling parent reply Walter Bright <newshound digitalmars.com> writes:
Dave wrote:
 But the whole concern centers around two canards: a) GC is really slow 
 and b) malloc/free offer deterministic performance for real-time 
 applications.
 
 I actually think that the best defense is dispelling those two myths. a) 
 for D will come in time and b) is just plain not true for general 
 purpose malloc/free implementations on modern operating systems.

If you talk to the people who actually do real time software, they don't use malloc/free precisely because they are not deterministic. They preallocate all data.
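A minimal sketch of what that preallocation can look like in D (the Pool type and its sizes are made up for illustration, not a certified real-time design):

// fixed-size pool: all memory is acquired up front; grab/release are
// O(1) pointer moves that never call into malloc or the GC
struct Pool
{
    void*[] blocks;   // stack of available blocks
    size_t  top;

    static Pool create(size_t blockSize, size_t count)
    {
        Pool p;
        p.blocks = new void*[count];
        for (size_t i = 0; i < count; i++)
            p.blocks[i] = (new ubyte[blockSize]).ptr;  // all allocation happens here, at startup
        p.top = count;
        return p;
    }

    void* grab()
    {
        assert(top > 0);  // a real-time system sizes the pool so this never fires
        return blocks[--top];
    }

    void release(void* b)
    {
        blocks[top++] = b;
    }
}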
Nov 19 2006
next sibling parent reply Don Clugston <dac nospam.com.au> writes:
Walter Bright wrote:
 Dave wrote:
 But the whole concern centers around two canards: a) GC is really slow 
 and b) malloc/free offer deterministic performance for real-time 
 applications.

 I actually think that the best defense is dispelling those two myths. 
 a) for D will come in time and b) is just plain not true for general 
 purpose malloc/free implementations on modern operating systems.

If you talk to the people who actually do real time software, they don't use malloc/free precisely because they are not deterministic. They preallocate all data.

I'd like to see a standard response on the website along these lines. Something like:

FAQ:

Q. Isn't GC slow and non-deterministic?

A. Yes, but *all* dynamic memory management is slow and non-deterministic. If you talk to the people who actually do real time software, they don't use malloc/free precisely because they are not deterministic. They preallocate all data. However, the use of GC instead of malloc enables advanced language constructs (especially, more powerful array syntax), which greatly reduce the number of memory allocations which need to be made.
Nov 19 2006
next sibling parent reply Walter Bright <newshound digitalmars.com> writes:
Don Clugston wrote:
 I'd like to see a standard response on the website along these lines.

Done.
Nov 20 2006
parent reply Pierre Rouleau <prouleau impathnetworks.com> writes:
Walter Bright wrote:

 Don Clugston wrote:
 
 I'd like to see a standard response on the website along these lines.

Done.

Where is it Walter? On Digital Mars' site or in the OSNews thread? I've been looking and couldn't see it (http://www.digitalmars.com/d/memory.html was last updated Nov 15: before the email). Thanks.
Nov 21 2006
parent Don Clugston <dac nospam.com.au> writes:
Pierre Rouleau wrote:
 Walter Bright wrote:
 
 Don Clugston wrote:

 I'd like to see a standard response on the website along these lines.

Done.

 Where is it Walter? On Digital Mars' site or in the OSNews thread? I've been looking and couldn't see it (http://www.digitalmars.com/d/memory.html was last updated Nov 15: before the email). Thanks.

Nov 22 2006
prev sibling parent reply Miles <_______ _______.____> writes:
Don Clugston wrote:
 However, the use of GC instead of malloc enables advanced language
 constructs (especially, more powerful array syntax), which greatly
 reduce the number of memory allocations which need to be made.

Could you explain precisely what GC enables that is not possible with malloc? This just doesn't compute for me.
Nov 20 2006
parent reply Frits van Bommel <fvbommel REMwOVExCAPSs.nl> writes:
Miles wrote:
 Don Clugston wrote:
 However, the use of GC instead of malloc enables advanced language
 constructs (especially, more powerful array syntax), which greatly
 reduce the number of memory allocations which need to be made.

Could you explain precisely what GC enables that is not possible with malloc? This just doesn't compute for me.

Array slicing, for one. While technically possible with malloc, it would require maintaining a reference count and an extra pointer to the start of the allocated memory area to call free() on. And that still doesn't solve the problem of cyclic references (if the array doesn't just contain primitive data), as well as requiring synchronization in multi-threaded applications.
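A quick illustrative sketch of how cheap GC-backed slicing is:

void main()
{
    int[] a = new int[100];
    int[] s = a[10 .. 20];  // no copy, no refcount: just a pointer and a length
    s[0] = 42;              // writes through to a[10]
    a = null;               // the block stays alive, because the GC sees s pointing into it
}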
Nov 20 2006
parent reply Miles <_______ _______.____> writes:
Frits van Bommel wrote:
 Array slicing, for one.
 While technically possible with malloc, it would require maintaining a
 reference count and an extra pointer to the start of the allocated
 memory area to call free() on. And that still doesn't solve the problem
 of cyclic references (if the array doesn't just contain primitive data),
 as well as requiring synchronization in multi-threaded applications.

Ok, I see. But looking at the problem, as someone from algorithms, I think it is possible and feasible to implement array slicing using malloc allocation. An extra field will be needed anyway since you need a reference count for the array; just put it along with the beginning of the array and only a single pointer will be needed for both reference count and beginning of array.

Cyclic references are a problem only when using GC. If the programmer wants to use a malloc-based allocation, he knows he should handle cyclic references himself. Not a problem.

As for synchronization, I think this is more a problem when GC is used than when it is not. Malloc-based allocation needs synchronization, of course, but I think GC-based does also. Deallocation is very atomic and can be implemented without synchronization and still be thread-safe. GC-based, OTOH, needs to freeze the whole application (not just the threads doing allocation) in order to collect.
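A rough sketch of the layout Miles describes (all names are hypothetical, and the thread synchronization he mentions is deliberately omitted):

import std.c.stdlib;

struct Slice
{
    int*   header;  // refcount stored at the start of the malloc'd block
    ubyte* ptr;     // this view's first byte
    size_t len;

    static Slice alloc(size_t n)
    {
        // one block holds [refcount][data...]
        int* h = cast(int*) malloc(int.sizeof + n);
        *h = 1;
        Slice s;
        s.header = h;
        s.ptr = cast(ubyte*)(h + 1);
        s.len = n;
        return s;
    }

    Slice opSlice(size_t from, size_t to)
    {
        ++*header;  // the block gains a reference (unsynchronized here!)
        Slice s;
        s.header = header;
        s.ptr = ptr + from;
        s.len = to - from;
        return s;
    }

    void release()
    {
        if (--*header == 0)
            free(header);  // last reference frees the whole block
    }
}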
Nov 20 2006
next sibling parent reply xs0 <xs0 xs0.com> writes:
Miles wrote:
 An extra field will be needed anyway since you need
 reference count for the array, just put it along with the beginning of
 the array and only a single pointer will be needed for both reference
 count and beginning of array.

But with GC, the extra field is not "needed anyway".. Furthermore, you can "slice" an arbitrary region of memory, and it still behaves like any other slice; with malloc, you can only slice regions specifically enabled for slicing (i.e. those that have the reference counter)
 Cyclic references are a problem only when using GC. If the programmer
 wants to use a malloc-based allocation, he knows he should handle cyclic
 references himself. Not a problem.

Cyclic references are a problem only when using reference counting. And you can't just say it's not a problem, because one knows it needs to be dealt with.. That's like saying it's really not a problem to win a lottery, because you know you need to have the right numbers on your ticket.
 As for synchronization, I think this is more a problem when GC is used
 than when it is not. Malloc-based allocation needs synchronization, of
 course, but I think GC-based does also. 

Well, for allocation it depends on implementation in both cases, but with reference counting you also need to synchronize on every reference count update, which can be very often.
 GC-based, OTOH, needs to freeze the whole application (not just the
 threads doing allocation) in order to collect.

It's not strictly necessary, though the current implementation does.. xs0
Nov 20 2006
parent Steve Horne <stephenwantshornenospam100 aol.com> writes:
On Mon, 20 Nov 2006 16:57:36 +0100, xs0 <xs0 xs0.com> wrote:

Miles wrote:
 An extra field will be needed anyway since you need
 reference count for the array, just put it along with the beginning of
 the array and only a single pointer will be needed for both reference
 count and beginning of array.

But with GC, the extra field is not "needed anyway".. Furthermore, you can "slice" an arbitrary region of memory, and it still behaves like any other slice; with malloc, you can only slice regions specifically enabled for slicing (i.e. those that have the reference counter)

I haven't really tried to understand all these extra field and other bits about slices. I just can't help being reminded about an old COM criticism.

In COM, you only ever have pointers to interfaces, which are not considered to equate to the object itself. The object is created when you get the first interface, and deleted when the reference count drops to zero. If you want an object for some reason, but don't want any particular interface, you acquire the IUnknown interface.

Reference counting is one of the main hassles in COM. It sounds like GC, but the programmer has to ensure for himself that every increment is correctly matched by a decrement. According to one ex-Microsoft guy (sorry, I forget who), he pointed out the mistake in this before COM was ever released, but they had already committed a lot to it and didn't make the proposed change.

The point is this - instead of reference counting, the programmer should explicitly create the object once. Then, he should request the needed interfaces. Requesting and releasing interfaces should not require reference counting. It's the programmer's responsibility to not use interfaces after the object has been destroyed (though it would be possible to break the links to the interfaces when the object is destroyed, so using dead interfaces would trigger an exception).

If this sounds like hard work, consider that programmers do exactly this all the time. You shouldn't use a record lock when the file has been closed, for instance, or (in Windows) send a message to a window that has been destroyed.

The same could apply to a slice. The slice is valid only as long as the sliced object is valid. If the slice will outlive the original object, you have to make a copy. Call it a 'slice pointer' or 'slice handle' and it should be understood that it is only valid as long as the original object is valid.

And there are efficiency arguments for doing this - the original object isn't artificially kept hanging around when you only need the slice. Just as there are other efficiency arguments for leaving it to the GC, of course. It's horses for courses again.

And while I doubt I'd ever really do slicing like that in D (or rather, I doubt I'd be concerned about losing the special syntax), it's nice to know that I can opt out of using the GC if I ever find a special case where I care.

-- 
Remove 'wants' and 'nospam' from e-mail.
Nov 23 2006
prev sibling next sibling parent reply Frits van Bommel <fvbommel REMwOVExCAPSs.nl> writes:
Miles wrote:
 Frits van Bommel wrote:
 Array slicing, for one.
 While technically possible with malloc, it would require maintaining a
 reference count and an extra pointer to the start of the allocated
 memory area to call free() on. And that still doesn't solve the problem
 of cyclic references (if the array doesn't just contain primitive data),
 as well as requiring synchronization in multi-threaded applications.

Ok, I see. But looking at the problem, being someone from algorithms, I think it is possible and feasible to implement array slicing using malloc allocation. An extra field will be needed anyway since you need reference count for the array, just put it along with the beginning of the array and only a single pointer will be needed for both reference count and beginning of array.

With malloc/refcounting you need:

For every allocation:
* Reference count (possibly in array)
* Some kind of synchronization structure (may be just a bit in the refcount, since 31 bits is probably enough :) )

In every reference:
* Pointer to array (for refcount & free when refcount == 0)
* Pointer to start of slice
* Length of slice or pointer to end of slice (or one byte further)

Current D implementation has (AFAIK):
* No extra overhead per allocation

In every reference:
* Pointer to start of slice
* Length of slice

So malloc/refcount takes an extra 4 bytes per allocation (assuming 32-bit refcount + synch) plus an extra 4 bytes per _reference_ (assuming 32-bit pointer), on top of the synch issues discussed below.
 Cyclic references are a problem only when using GC. If the programmer
 wants to use a malloc-based allocation, he knows he should handle cyclic
 references himself. Not a problem.

Actually, cyclic references are not a problem for GC. Not having to handle them is in fact one of the benefits of GC, or (depending on how you look at it), having to handle them is a problem with reference counting.
 As for synchronization, I think this is more a problem when GC is used
 than when it is not. Malloc-based allocation needs synchronization, of
 course, but I think GC-based does also. Deallocation is very atomic and
 can be implemented without synchronization and still be thread-safe.
 GC-based, OTOH, needs to freeze the whole application (not just the
 threads doing allocation) in order to collect.

I wasn't talking about allocation, I was talking about slicing arrays, copying of references, passing references as parameters, deleting objects containing references, returning from functions holding references and returning references. (though return-value optimization may remove that last one, if only one reference to it existed in the function)
Nov 20 2006
parent reply Steve Horne <stephenwantshornenospam100 aol.com> writes:
On Mon, 20 Nov 2006 17:25:26 +0100, Frits van Bommel
<fvbommel REMwOVExCAPSs.nl> wrote:

 Cyclic references are a problem only when using GC. If the programmer
 wants to use a malloc-based allocation, he knows he should handle cyclic
 references himself. Not a problem.

Actually, cyclic references are not a problem for GC. Not having to handle them is in fact one of the benefits of GC, or (depending on how you look at it), having to handle them is a problem with reference counting.

The problem is ensuring proper cleanup, such as closing files, releasing handles and locks, etc. when using an RAII approach. The GC cannot know what order to run the destructors/finalisers/whatever in. The Java solution is not to bother - it doesn't guarantee that finalisers will be called.

BUT - how do cyclic references occur in relation to slicing? It sounds very odd. I can certainly see the point of chains (slices of slices) but I can't see how cycles could arise at all.

Cycles are only common in certain types of programming, such as data structure handling (which should normally be packaged up in a container library). They can happen in user interface stuff (child and parent windows having references to each other) but even this should be easily avoidable. Don't save the parent/child references in your own objects - trust the platform's GUI data structures and request the references you need when you need them.

-- 
Remove 'wants' and 'nospam' from e-mail.
Nov 23 2006
parent reply Frits van Bommel <fvbommel REMwOVExCAPSs.nl> writes:
Steve Horne wrote:
 BUT - how do cyclic references occur in relation to slicing? It sounds
 very odd. I can certainly see the point of chains (slices of slices)
 but I can't see how cycles could arise at all.

Take this type:

struct S
{
    S[] slice;
    // ... some more stuff
}

Then just allocate some arrays of these and set members of their elements to slices of them.
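Spelled out, such a cycle could be built like this (illustrative only):

struct S
{
    S[] slice;
}

void main()
{
    S[] arr = new S[4];
    arr[0].slice = arr[1 .. 3];  // elements hold slices of the very array
    arr[2].slice = arr[0 .. 2];  // that contains them, so the allocation
                                 // now references itself - a cycle
}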
Nov 24 2006
parent Steve Horne <stephenwantshornenospam100 aol.com> writes:
On Fri, 24 Nov 2006 10:19:33 +0100, Frits van Bommel
<fvbommel REMwOVExCAPSs.nl> wrote:

Steve Horne wrote:
 BUT - how do cyclic references occur in relation to slicing? It sounds
 very odd. I can certainly see the point of chains (slices of slices)
 but I can't see how cycles could arise at all.

Take this type:

struct S
{
    S[] slice;
    // ... some more stuff
}

Then just allocate some arrays of these and set members of their elements to slices of them.

OK, that certainly answers my question. OTOH why would you do that? This doesn't look to me like something that would arise in practice.

If you have an object of type S that contains (a reference to) an array of type S, that suggests some kind of digraph data structure - e.g. a tree - to me. Recursive types are almost always data structure related. And in this case, it's a type that probably needs rethinking - it is generally much better for each node to have a (fixed size) array of references to other nodes rather than a single reference to an array of other nodes.

Anyway, if this is a data structure node, you really don't want to assign a slice to that array. If you remove items by slicing (using a reference-based slicing method) you never really remove dead items. You only hide them, since the original array remains so that the slice can reference it. To do genuine removal you need to either modify the original array, or else copy from the slice to create a replacement array.

-- 
Remove 'wants' and 'nospam' from e-mail.
Nov 24 2006
prev sibling next sibling parent Benji Smith <dlanguage benjismith.net> writes:
Miles wrote:
 But looking at the problem, being someone from algorithms, I think it is
 possible and feasible to implement array slicing using malloc
 allocation. An extra field will be needed anyway since you need
 reference count for the array, just put it along with the beginning of
 the array and only a single pointer will be needed for both reference
 count and beginning of array.
 
 Cyclic references are a problem only when using GC. If the programmer
 wants to use a malloc-based allocation, he knows he should handle cyclic
 references himself. Not a problem.

Both of these comments indicate an assumption that the GC will be using a reference count anyhow, so you may as well explicitly manage a reference count where necessary. But that's not the case for D's GC. It doesn't use reference counts at all.

If I remember correctly, the D garbage collector uses a mark-and-sweep collector. These kinds of collectors are generally more performant and more deterministic than reference-counting collectors. And they have no problem with cyclic references.

--benji
Nov 20 2006
prev sibling parent Sean Kelly <sean f4.ca> writes:
Miles wrote:
 
 As for synchronization, I think this is more a problem when GC is used
 than when it is not. Malloc-based allocation needs synchronization, of
 course, but I think GC-based does also.

Theoretically, neither requires synchronization so long as they maintain per-thread heaps. The obvious consequence is a greater amount of unused memory in the application. GC collection obviously requires synchronization, however.
 Deallocation is very atomic and
 can be implemented without synchronization and still be thread-safe.
 GC-based, OTOH, needs to freeze the whole application (not just the
 threads doing allocation) in order to collect.

Yup. There are GC designs which do not require this, but they don't seem terribly compatible with D. Instead, the focus is more on minimizing the time that any "stop the world" phase requires. There are a bunch of different GC designs and refinements to accomplish this, and I expect we will see more of them as D matures. Sean
Nov 20 2006
prev sibling parent reply Boris Kolar <boris.kolar globera.com> writes:
== Quote from Walter Bright (newshound digitalmars.com)'s article
 If you talk to the people who actually do real time software, they don't
 use malloc/free precisely because they are not deterministic. They
 preallocate all data.

Preallocating all data is a lot of pain. Please consider adding something to the language that would solve the problem. Some suggestions involve opAssign and/or implicit casts. Also take a look at my suggestion for value classes: http://www.digitalmars.com/pnews/read.php?server=news.digitalmars.com&group=digitalmars.D&artnum=44163

Performance reason: I actually did some benchmarking, comparing C++ stack allocated classes versus Java classes - C++ was ~30x faster than Java and Java was ~20x faster than D.

I don't understand why a good RIAA is not already a part of D. C++ has it, so it obviously can be done. The new 'scoped' keyword is insufficient for me, because I'd like to make all my classes scoped. I can usually develop 99% of my C++ code without a single 'new' or 'malloc'. Most of my classes are small (average ~2 fields and ~4 methods) and only used locally, so I'm really angry when I think about the negative performance impact they will have in D simply because a decent RIAA is missing.
Nov 20 2006
parent reply Walter Bright <newshound digitalmars.com> writes:
Boris Kolar wrote:
 I don't understand why a good RIAA is not already a part of D. C++ has it, so
 it obviously can be done. The new 'scoped' keyword is insufficient for me,
 because I'd like to make all my classes scoped. I can usually develop
 99% of my C++ code without a single 'new' or 'malloc'. Most of my classes
 are small (average ~2 fields and ~4 methods) and only used locally, so I'm
 really angry when I think about negative performance impact they will have
 in D simply because a decent RIAA is missing.

Have you considered using structs instead of classes? They are allocated on the stack.
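For instance, a minimal sketch:

struct Point
{
    int x, y;
    int dot(Point o) { return x * o.x + y * o.y; }
}

void main()
{
    Point a;          // plain stack values: no 'new', no GC involvement
    a.x = 3; a.y = 4;
    Point b;
    b.x = 1; b.y = 2;
    int d = a.dot(b);
}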
Nov 20 2006
next sibling parent reply Don Clugston <dac nospam.com.au> writes:
Walter Bright wrote:
 Boris Kolar wrote:
 I don't understand why a good RIAA is not already a part of D. C++ has 
 it, so
 it obviously can be done. The new 'scoped' keyword is insufficient for 
 me,
 because I'd like to make all my classes scoped. I can usually develop
 99% of my C++ code without a single 'new' or 'malloc'. Most of my classes
 are small (average ~2 fields and ~4 methods) and only used locally, so 
 I'm
 really angry when I think about negative performance impact they will 
 have
 in D simply because a decent RIAA is missing.

Have you considered using structs instead of classes? They are allocated on the stack.

Can you do RAII with them? I thought that a struct cannot have a destructor, but reading the spec again I notice there's an entry for StructAllocator and StructDeallocator, but no indication of how to use it.
Nov 20 2006
next sibling parent Bruno Medeiros <brunodomedeiros+spam com.gmail> writes:
Don Clugston wrote:
 Walter Bright wrote:
 Boris Kolar wrote:
 I don't understand why a good RIAA is not already a part of D. C++ 
 has it, so
 it obviously can be done. The new 'scoped' keyword is insufficient 
 for me,
 because I'd like to make all my classes scoped. I can usually develop
 99% of my C++ code without a single 'new' or 'malloc'. Most of my 
 classes
 are small (average ~2 fields and ~4 methods) and only used locally, 
 so I'm
 really angry when I think about negative performance impact they will 
 have
 in D simply because a decent RIAA is missing.

Have you considered using structs instead of classes? They are allocated on the stack.

Can you do RAII with them? I thought that a struct cannot have a destructor, but reading the spec again I notice there's an entry for StructAllocator and StructDeallocator, but no indication of how to use it.

They are the same as class allocators and deallocators (which are not *ctors): http://www.digitalmars.com/d/class.html#allocators -- Bruno Medeiros - MSc in CS/E student http://www.prowiki.org/wiki4d/wiki.cgi?BrunoMedeiros#D
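Going by that page, a class allocator/deallocator sketch might look like the following (error handling simplified; an illustration, not a reference implementation):

import std.c.stdlib;

class Foo
{
    new(size_t sz)
    {
        // instances come from malloc instead of the GC heap; if Foo held
        // references into GC memory, the block would also need to be
        // registered with the collector (std.gc.addRange)
        void* p = malloc(sz);
        assert(p !is null);
        return p;
    }

    delete(void* p)
    {
        if (p)
            free(p);  // runs on 'delete foo;'
    }
}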
Nov 20 2006
prev sibling parent reply Walter Bright <newshound digitalmars.com> writes:
Don Clugston wrote:
 Walter Bright wrote:
 Boris Kolar wrote:
 I don't understand why a good RIAA is not already a part of D. C++ 
 has it, so
 it obviously can be done. The new 'scoped' keyword is insufficient 
 for me,
 because I'd like to make all my classes scoped. I can usually develop
 99% of my C++ code without a single 'new' or 'malloc'. Most of my 
 classes
 are small (average ~2 fields and ~4 methods) and only used locally, 
 so I'm
 really angry when I think about negative performance impact they will 
 have
 in D simply because a decent RIAA is missing.

Have you considered using structs instead of classes? They are allocated on the stack.

Can you do RAII with them?

No. But most RAII usage is for managing memory, and Boris didn't say why he needed RAII for the stack allocated objects.
Nov 20 2006
parent reply Boris Kolar <boris.kolar globera.com> writes:
== Quote from Walter Bright (newshound digitalmars.com)'s article
 No. But most RAII usage is for managing memory, and Boris didn't say why
 he needed RAII for the stack allocated objects.

Mostly for closing OS handles, locking, caching and stuff. Like:

getFile("a.txt").getText()

Normally, one would have to:
1. open the file (which may allocate caching buffers, lock the file, etc.)
2. use the file (efficiently)
3. close the file (frees handles, buffers, releases locks, etc.)

It's a common design pattern, really.
Nov 21 2006
next sibling parent reply Walter Bright <newshound digitalmars.com> writes:
Boris Kolar wrote:
 == Quote from Walter Bright (newshound digitalmars.com)'s article
 No. But most RAII usage is for managing memory, and Boris didn't say why
 he needed RAII for the stack allocated objects.

Mostly for closing OS handles, locking, caching and stuff. Like: getFile("a.txt").getText() Normally, one would have to: 1. open the file (which may allocate caching buffers, lock the file, etc.) 2. use the file (efficiently) 3. close the file (frees handles, buffers, releases locks, etc.) It's a common design pattern, really.

A lot of file reads and writes can be done atomically with the functions in std.file, without need for RAII.
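For example, a minimal sketch using std.file's whole-file calls:

import std.file;

void main()
{
    // open/close happen inside the call; nothing to clean up afterwards
    char[] text = cast(char[]) read("a.txt");
    write("a.txt.bak", text);
}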
Nov 21 2006
parent reply Boris Kolar <boris.kolar globera.com> writes:
== Quote from Walter Bright (newshound digitalmars.com)'s article
 A lot of file reads and writes can be done atomically with the functions
 in std.file, without need for RAII.

I know, but I rarely use standard libraries directly. One of the first things I do when I start programming in a new language is abstracting most of the std libraries. For most programmers, a file is something on your disk. For me, a file is an abstract concept: it may be something on a network, it may be something calculated on demand,... Some "files" need opening/closing, some don't. I usually even go as far as defining a template File(T) (a file of elements of type T).

Anyway, File is not the only example; there are also locks, widgets, sockets,.... All of them just as abstract, if not more :) Sometimes I need RIAA, sometimes I don't. Because of my programming style I very frequently encounter a situation where I need very small classes, like selection (a (from, to) pair) or parser event (an (event, selection) pair) - these classes are just abstract enough that they can't be structs and simple enough that they shouldn't trigger GC pauses. A vast majority of such classes are immutable (some having copy-on-write semantics) and are often returned from functions.

One very recent specific example: I created a socket class and 3 implementations (socket over TCP, socket over Netbios, socket over buffer). The last one (socket over buffer) doesn't need to open/close connections, but the other two do. In my scenario, a real socket reads encrypted data and writes decrypted data to a buffer, and a "fake" socket reads the buffer as if it was an unencrypted connection.

Anyway, my almost 20 years of programming experience has taught me enough that I can tell when some missing feature is making my life harder. And I'm not a feature freak - I wouldn't miss goto or even array slicing (you guessed it, I abstract arrays as well ;), but I do miss a decent RIAA and deterministic object destruction.
Nov 21 2006
parent reply Walter Bright <newshound digitalmars.com> writes:
Boris Kolar wrote:
 Anyway, my almost 20 years of programming experience has taught me enough that
 I can tell when some missing feature is making my life harder. And I'm not a
 feature freak - I wouldn't miss goto or even array slicing (you guessed it,
 I abstract arrays as well ;), but I do miss a decent RIAA and deterministic
 object destruction.

I hear you. The best suggestion I can make is to use the RIAA features the compiler has now, and wait for it to be upgraded to true stack allocation. Then, your code will just need a recompile.
Nov 21 2006
parent reply BCS <BCS pathilink.com> writes:
Walter Bright wrote:
 
 I hear you. The best suggestion I can make is to use the RIAA features 
 the compiler has now, 

s/RIAA/RAII/ But I wonder what RIAA features DMD could add.
Nov 21 2006
parent Kyle Furlong <kylefurlong gmail.com> writes:
BCS wrote:
 Walter Bright wrote:
 I hear you. The best suggestion I can make is to use the RIAA features 
 the compiler has now, 

s/RIAA/RAII/ But I wonder what RIAA features DMD could add.

Hopefully none... :-/
Nov 21 2006
prev sibling next sibling parent reply Sean Kelly <sean f4.ca> writes:
Boris Kolar wrote:
 == Quote from Walter Bright (newshound digitalmars.com)'s article
 No. But most RAII usage is for managing memory, and Boris didn't say why
 he needed RAII for the stack allocated objects.

Mostly for closing OS handles, locking, caching and stuff. Like: getFile("a.txt").getText() Normally, one would have to: 1. open the file (which may allocate caching buffers, lock the file, etc.) 2. use the file (efficiently) 3. close the file (frees handles, buffers, releases locks, etc.) It's a common design pattern, really.

A lot of this can be handled by "scope." Though I grant that using objects for the rest and relying on the GC for clean-up is possibly not ideal for resources that must be cleaned up in a timely manner. Sean
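A sketch of the 'scope' form Sean mentions (the LogFile class here is hypothetical):

class LogFile
{
    this(char[] name) { /* open the OS handle */ }
    ~this()           { /* close it */ }
}

void work()
{
    scope LogFile log = new LogFile("app.log");
    // ... use log ...
}   // log's destructor runs right here, deterministically, not at GC time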
Nov 21 2006
parent reply Mike Capp <mike.capp gmail.com> writes:
Sean Kelly wrote:

 A lot of this can be handled by "scope."  Though I grant that using
 objects for the rest and relying on the GC for clean-up is possibly not
 ideal for resources that must be cleaned up in a timely manner.

Indeed. Long, long ago I suggested disallowing destructors for classes not declared 'scope' (or 'auto', as it was then), on the grounds that if you need stuff done there you really don't want to rely on the GC to do it. It was a bit of a Devil's Advocate thing, but the response was surprisingly positive, and (as I recall) nobody came up with a counterexample where a dtor was needed but timeliness wasn't.
Nov 21 2006
parent Sean Kelly <sean f4.ca> writes:
Mike Capp wrote:
 Sean Kelly wrote:
 
 A lot of this can be handled by "scope."  Though I grant that using
 objects for the rest and relying on the GC for clean-up is possibly not
 ideal for resources that must be cleaned up in a timely manner.

Indeed. Long, long ago I suggested disallowing destructors for classes not declared 'scope' (or 'auto', as it was then), on the grounds that if you need stuff done there you really don't want to rely on the GC to do it. It was a bit of a Devil's Advocate thing, but the response was surprisingly positive, and (as I recall) nobody came up with a counterexample where a dtor was needed but timeliness wasn't.

I've actually got a test build of Ares (not sure if it's in SVN) that hooks the GC collection process so the user can be notified when an object is being cleaned up and can optionally prevent the object's dtor from being run. The intent is to allow the user to detect "leaks" of resources intended to have deterministic scope and to allow the dtors of such objects to perform activities normally not allowed in GCed objects. I haven't used the feature much yet in testing, but it seems a good compromise between the current D behavior and your suggestion. Sean
Nov 21 2006
prev sibling next sibling parent reply David Medlock <noone nowhere.com> writes:
Boris Kolar wrote:
 == Quote from Walter Bright (newshound digitalmars.com)'s article
 
No. But most RAII usage is for managing memory, and Boris didn't say why
he needed RAII for the stack allocated objects.

Mostly for closing OS handles, locking, caching and stuff. Like: getFile("a.txt").getText() Normally, one would have to: 1. open the file (which may allocate caching buffers, lock the file, etc.) 2. use the file (efficiently) 3. close the file (frees handles, buffers, releases locks, etc.) It's a common design pattern, really.

I know Sean suggested scope (RAII) but how about:

File file = new FileStream(...);
scope(exit) { file.close(); }
...

-DavidM
Nov 21 2006
parent Boris Kolar <boris.kolar globera.com> writes:
== Quote from David Medlock (noone nowhere.com)'s article
 File file = new FileStream(...);
 scope(exit) { file.close(); }
 ...
 -DavidM

Now we can't return that file:

File file = new FileStream(...);
scope(exit) { file.close(); }
return file;  // :(
Nov 22 2006
prev sibling parent Steve Horne <stephenwantshornenospam100 aol.com> writes:
On Tue, 21 Nov 2006 09:01:58 +0000 (UTC), Boris Kolar
<boris.kolar globera.com> wrote:

== Quote from Walter Bright (newshound digitalmars.com)'s article
 No. But most RAII usage is for managing memory, and Boris didn't say why
 he needed RAII for the stack allocated objects.

Mostly for closing OS handles, locking, caching and stuff. Like: getFile("a.txt").getText() Normally, one would have to: 1. open the file (which may allocate caching buffers, lock the file, etc.) 2. use the file (efficiently) 3. close the file (frees handles, buffers, releases locks, etc.) It's a common design pattern, really.

I do a lot of the same too. And then I find that there are cases where I really wish I could switch off the constructor and destructor, since I want uninitialised memory ready to hold an object that I can't construct yet. In C++ I fix that by using placement new and explicit destructor calls, and using types along the lines of char[sizeof(blah)].

In D it works out a bit differently. I use a struct instead of a class, and so I opt out of having initialisation and cleanup by default. If I want them, I can use the scope statement to ensure they get done. Of course that also means opting out of having inheritance etc.

Anyway, beyond a certain point, you cannot eliminate complexity. You can only move it from one place to another. In the case of stack-allocated objects, there is (currently) a bit more complexity in D. You can't really do RAII except in the slightly defeating-the-point approach of explicit initialisation and cleanup (though the scope statement at least allows you to keep them together in the code).

It can be a bit annoying, I agree. But then, D is being actively developed and stack allocation of objects does seem to be on the to-do list.

-- 
Remove 'wants' and 'nospam' from e-mail.
Nov 23 2006
prev sibling parent Boris Kolar <boris.kolar globera.com> writes:
== Quote from Walter Bright (newshound digitalmars.com)'s article
 Have you considered using structs instead of classes? They are allocated
 on the stack.

Yes, but I need to override things too. There is often some functionality that I need to keep abstract. I actually use structs quite often. Sometimes I use structs in combination with interfaces, like:

struct Foo
{
    interface Model
    {
        void foo();
    }

    static Foo opCall(Model foo)
    {
        Foo result;
        result._foo = foo;
        return result;
    }

    void foo()
    {
        if (_foo)
            _foo.foo();
    }

    private Model _foo;
}

... but that doesn't solve the problem, because there are no destructors in structs (and it's too verbose for my taste too).
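For completeness, using the sketch above might look like this (Impl is hypothetical):

class Impl : Foo.Model
{
    void foo() { /* the overridable behaviour lives here */ }
}

void use()
{
    Foo f = Foo(new Impl);  // static opCall builds the value
    f.foo();                // dispatch stays virtual through Model
}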
Nov 20 2006
prev sibling parent Lutger <lutger.blijdestijn gmail.com> writes:
Dave wrote:
 But the whole concern centers around two canards: a) GC is really slow 
 and b) malloc/free offer deterministic performance for real-time 
 applications.
 
 I actually think that the best defense is dispelling those two myths. a) 
 for D will come in time and b) is just plain not true for general 
 purpose malloc/free implementations on modern operating systems.

Although I don't think these are big problems, I'm not convinced myself that they are myths.

As for the slow GC getting fixed, I hope that will come, but it is still a valid point in response to the idea that D is usable *right now*, as it will be when D gets the 1.0 label. To say the problem will be fixed doesn't mean it is not there.

The point about deterministic memory allocation (as in realtime) may be a myth, but a lot of C++ users argue that C++ has a lot more means to control memory allocation, not just the C way of malloc/free. Boost offers some allocators that can be used with the STL out of the box, for instance. Techniques of resource management based on scope are just better supported in C++ (not in C!) than in D. I see no myth in that point; it's just a different way of handling the problem. Well, this is actually a different point than the ones you addressed, but it is raised sometimes.
Nov 20 2006
prev sibling parent reply Walter Bright <newshound digitalmars.com> writes:
Georg Wrede wrote:
 BCS wrote:
 One issue brought up is that of D "requiring" the use of a GC.
 What would it take to prove that wrong by making a full blown standard 
 lib that doesn't use a GC, and in fact doesn't have a GC?

 It would be painful to work with but no more so than in C++. OTOH with 
 scope() and such, it might be easy.

 Anyway, just a thought.

Having such a library would make a huge difference in every C++ vs D discussion! The opposition would have a lot less ammunition against us.

The ones who don't want to use D will find the first excuse, valid or not. Fix the excuse, and they'll just move on to the next excuse, valid or not. It's a fool's game. I've been around that circle before. The people we should listen to are the people who actually *use* D, not the ones who just glanced at a chart looking for fault. The poster who claimed that conservative gc is somehow incompatible with cryptographic software is misinformed. Even if he were correct, the cryptographic buffers could be allocated with malloc() and would then have no effect whatsoever on the gc.
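A sketch of that approach (sizes and names are made up):

import std.c.stdlib;

void process()
{
    size_t n = 1024 * 1024;
    // the buffer lives outside the GC heap, so the collector never scans
    // it and its noise-like contents can't accidentally pin other objects
    ubyte* buf = cast(ubyte*) malloc(n);
    assert(buf !is null);

    // ... fill buf with ciphertext / sensor noise and work on it ...

    free(buf);  // manual, deterministic release
}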
Nov 19 2006
next sibling parent "John Reimer" <terminal.node gmail.com> writes:
On Sun, 19 Nov 2006 22:38:46 -0800, Walter Bright  
<newshound digitalmars.com> wrote:

 The ones who don't want to use D will find the first excuse, valid or  
 not. Fix the excuse, and they'll just move on to the next excuse, valid  
 or not. It's a fool's game. I've been around that circle before. The  
 people we should listen to are the people who actually *use* D, not the  
 ones who just glanced at a chart looking for fault.

You're so right. :( -JJR
Nov 20 2006
prev sibling parent reply Miles <_______ _______.____> writes:
Walter Bright wrote:
 The ones who don't want to use D will find the first excuse, valid or
 not. Fix the excuse, and they'll just move on to the next excuse, valid
 or not. It's a fool's game. I've been around that circle before. The
 people we should listen to are the people who actually *use* D, not the
 ones who just glanced at a chart looking for fault.

People who are using any tool today are people who benefit from its features and are not affected by its shortcomings. Any other person won't use that tool because it is not appropriate. It is the same with D. If you keep listening only to people who actually use D, D will end up as a niche programming language: nobody outside that niche will ever use it because D doesn't fit their needs, and their needs are of no concern to the current users of D.

In my example, I would like to unplug the GC from D sometimes, or have a more predictable reference-count-based GC (even if it would mean disabling unions with pointers and a few other constructs).
 The poster who claimed that conservative gc is somehow incompatible with
 cryptographic software is misinformed. Even if he were correct, the
 cryptographic buffers could be allocated with malloc() and would then
 have no effect whatsoever on the gc.

Using two allocation methods in the same process address space looks really bad, not to say hackish. And you don't need cryptographic buffers or multimedia data: a single int variable is enough to hold a large block of unused data in memory, and the larger the block is, the easier it is for this to happen. Even if there were only a 1/2^32 chance of this happening, it still will happen.
Nov 20 2006
parent reply Walter Bright <newshound digitalmars.com> writes:
Miles wrote:
 The poster who claimed that conservative gc is somehow incompatible with
 cryptographic software is misinformed. Even if he were correct, the
 cryptographic buffers could be allocated with malloc() and would then
 have no effect whatsoever on the gc.

Using two allocation methods in the same process address space looks really bad, not to say hackish.

I strongly disagree. A complex application has different needs for different structures in the program. Just like OOP isn't the solution for every programming problem, one allocation method isn't either.
 And you don't need cryptographic buffers
 or multimedia data, a single int variable is enough to hold a large
 block of unused data in memory, and the larger the block is, the easier
 it is for this to happen. Even if there were only a 1/2^32 chance of this
 happening, it still will happen.

In real, long lived gc applications I've been involved with, this is much more of a theoretical problem than an actual one. I found it surprising how little of a problem it actually was in practice.

The reason for this is not so obvious. It isn't the case that integers (and other types) contain "random" values with an even distribution, which would give a 1/2^32 probability of a false pointer. They don't. The overwhelming majority of ints have values that are between -100 and +100. The most common value is 0. Those values are nowhere near where the gc pools are located.
Nov 20 2006
next sibling parent reply Georg Wrede <georg.wrede nospam.org> writes:
Walter Bright wrote:
 In real, long lived gc applications I've been involved with, this is 
 much more of a theoretical problem than an actual one. I found it 
 surprising how little of a problem it actually was in practice.

If you found it surprising, then we could consider it legitimate and expected that "the C++ crowd" has a hard time believing this. Maybe we should be more vociferous about it. The quote from Don is a good first step.
Nov 20 2006
parent Kyle Furlong <kylefurlong gmail.com> writes:
Georg Wrede wrote:
 Walter Bright wrote:
 In real, long lived gc applications I've been involved with, this is 
 much more of a theoretical problem than an actual one. I found it 
 surprising how little of a problem it actually was in practice.

If you found it surprising, then we could consider it legitimate and expected that "the C++ crowd" has a hard time believing this. Maybe we should be more vociferous about it. The quote from Don is a good first step.

The problem with this thread is that it is the new guys coming from C++ land pitting their conventional wisdom against our experience. D is not C++. The conventional wisdom you gleaned, your "gut feeling" about what is The Right Way(tm), does not necessarily apply here. Please leave your biases at the door, and look at D as what it is: a NEW language, with its own character.
Nov 20 2006
prev sibling parent Steve Horne <stephenwantshornenospam100 aol.com> writes:
On Mon, 20 Nov 2006 11:35:03 -0800, Walter Bright
<newshound digitalmars.com> wrote:

The reason for this is not so obvious. It isn't the case that integers 
(and other types) contain "random" values with an even distribution, 
which would give a 1/2^32 probability of a false pointer. They don't. 
The overwhelming majority of ints have values that are between -100 and 
+100. The most common value is 0. Those values are nowhere near where 
the gc pools are located.

It's also worth bearing in mind that most random-looking data doesn't hang around in current objects too long. For example, if you're doing encryption or compression, you're probably streaming that data in/out of a file or socket or whatever. -- Remove 'wants' and 'nospam' from e-mail.
Nov 24 2006
prev sibling parent Brad Roberts <braddr puremagic.com> writes:
On Mon, 27 Nov 2006, Benji Smith wrote:

 Date: Mon, 27 Nov 2006 12:59:45 -0700
 From: Benji Smith <dlanguage benjismith.net>
 Reply-To: digitalmars.D <digitalmars-d puremagic.com>
 To: digitalmars-d puremagic.com
 Newsgroups: digitalmars.D
 Subject: Re: OSNews article about C++09 degenerates into C++ vs. D discussion
 
 Don Clugston wrote:
 I think it's even worse than that. The opposite of 'unsafe' is *not* safe!
 
 My brother has worked with medical software which contains software bugs
 that kill people. And the bugs are NOT 'dangling pointers', they are
 incorrect mathematics (wrong dosage, etc). The code is 'safe', yet people
 have been taken out in body bags.
 
 I think this whole "safe"/"unsafe" concept can be distracting -- the goal is
 software with no bugs! It's just a tool to reduce a specific class of bugs.
 D has many features which help to reduce bugs; the concept of 'safe' code
 just isn't one of them.

I actually like the "unsafe" keyword in C# (never used C++.NET). The words "safe" and "unsafe" refer only to type-safety, so it would be more accurate (but cumbersome) if the keyword were "untypesafe" to indicate blocks of code circumventing the type system. It's nice to know that the default assumption in C# is that nearly all code will subject itself to the compiler's static type checking. Sure, sometimes it's necessary to circumvent the type system by casting pointers, but I think it helps enforce good programming practice that those untypesafe operations have to be specifically annotated before the compiler will accept them. --benji

(Sorry Benji.. using your post to reply to this thread. I'm not specifically replying to your post, just gotta have that hook.) I really hate the term 'safe'. It's ambiguous. What's safe? How safe? It's just as useless a term as 'managed'. Both terms are specifically designed to induce that warm fuzzy feeling while sidestepping the issue of what they actually mean. I recognize that a major portion of my own personal bias against VM based runtime environments is due to the frequent association with this sort of need for warm fuzzies and a careful avoidance of specifying the exact real gained benefits. I fully recognize that there _are_ benefits, just that the conflation with non-specific benefits diminishes the whole picture in my world-view. Grumble, Brad
Nov 27 2006
prev sibling parent reply Julio César Carrascal Urquijo writes:
Mars wrote:
 http://www.osnews.com/comment.php?news_id=16526

RE[2]: Not much of an update
 By luzr (2.10) on 2006-11-19 19:44:17 UTC in reply to "RE: Not much of
 an update"
 I second that. D is a very nice language with a clear focus. My first 
 impression was that it has the best of Java, the best of C++ and none 
 of their major weaknesses.
 
 Adds one major weakness - its memory model is based on conservative GC, 
 which makes it unpredictable and in reality unusable for some important 
 applications (like cryptography or any other software that deals with 
 noise-like data).

This is one thing that bothers me with the current GC. If you store data with a lot of entropy in an array (Sound, encrypted data, sensor data, etc...) you start to experience memory leaks because the GC starts to see the data as references to other objects.

Is there a way to tell the garbage collector "don't look for references here" without using malloc and friends? This would be for a standard sliceable garbage collected array with any kind of data except references. Something like gc.doNotLookForReferences(myArray) would be nice.
Nov 19 2006
next sibling parent reply "Unknown W. Brackets" <unknown simplemachines.org> writes:
Yes.

std.gc.removeRange(myArray.ptr);

As far as I recall.

But, iirc you do have to do this on a full range (e.g., not a sliced 
array but the whole allocated array.)

-[Unknown]


 Mars wrote:
 http://www.osnews.com/comment.php?news_id=16526

 RE[2]: Not much of an update
 By luzr (2.10) on 2006-11-19 19:44:17 UTC in reply to "RE: Not much of an update"
 
 I second that. D is a very nice language with a clear focus. My first impression was that it has the best of Java, the best of C++ and none of their major weaknesses.
 
 Adds one major weakness - its memory model is based on conservative GC, which makes it unpredictable and in reality unusable for some important applications (like cryptography or any other software that deals with noise-like data).
 
 This is one thing that bothers me with the current GC. If you store data with a lot of entropy in an array (Sound, encrypted data, sensor data, etc...) you start to experience memory leaks because the GC starts to see the data as references to other objects.
 
 Is there a way to tell the garbage collector "don't look for references here" without using malloc and friends? This would be for a standard sliceable garbage collected array with any kind of data except references. Something like gc.doNotLookForReferences(myArray) would be nice.

Nov 19 2006
parent "Unknown W. Brackets" <unknown simplemachines.org> writes:
My mistake -- this will only work for ranges that were registered with 
addRange(); memory in the GC's own pools gets scanned regardless.
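
So for noise-like data the dependable route today is to keep it out of the GC pools altogether. A minimal sketch, assuming nothing beyond the C heap:

import std.c.stdlib;

// malloc'd memory lives outside the GC pools, so the collector never
// scans it. std.gc.addRange() would only be needed if this buffer itself
// held pointers -- which, for entropy data, it does not.
ubyte[] allocNoScan(size_t len)
{
    ubyte* p = cast(ubyte*) std.c.stdlib.malloc(len);
    assert(p !is null);
    return p[0 .. len];
}

void freeNoScan(ubyte[] buf)
{
    std.c.stdlib.free(buf.ptr);
}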

-[Unknown]


 Yes.
 
 std.gc.removeRange(myArray.ptr);
 
 As far as I recall.
 
 But, iirc you do have to do this on a full range (e.g., not a sliced 
 array but the whole allocated array.)
 
 -[Unknown]
 
 
 Mars wrote:
 http://www.osnews.com/comment.php?news_id=16526

 RE[2]: Not much of an update
 By luzr (2.10) on 2006-11-19 19:44:17 UTC in reply to "RE: Not much of an update"
 
 I second that. D is a very nice language with a clear focus. My first impression was that it has the best of Java, the best of C++ and none of their major weaknesses.
 
 Adds one major weakness - its memory model is based on conservative GC, which makes it unpredictable and in reality unusable for some important applications (like cryptography or any other software that deals with noise-like data).
 
 This is one thing that bothers me with the current GC. If you store data with a lot of entropy in an array (Sound, encrypted data, sensor data, etc...) you start to experience memory leaks because the GC starts to see the data as references to other objects.
 
 Is there a way to tell the garbage collector "don't look for references here" without using malloc and friends? This would be for a standard sliceable garbage collected array with any kind of data except references. Something like gc.doNotLookForReferences(myArray) would be nice.


Nov 19 2006
prev sibling next sibling parent reply Bill Baxter <dnewsgroup billbaxter.com> writes:
Julio César Carrascal Urquijo wrote:
 Mars wrote:
 http://www.osnews.com/comment.php?news_id=16526

This is one thing that bothers me with the current GC. If you store data with a lot of entropy in an array (Sound, encrypted data, sensor data, etc...) you start to experience memory leaks because the GC starts to see the data as references to other objects.

This is the kind of comment that scares me. How does one reconcile this with Walter's comment "The GC has been pretty heavily tested. It's 6 years old, and it's stood up extremely well." --(digitalmars.com digitalmars.D:43916) --bb
Nov 19 2006
next sibling parent "John Reimer" <terminal.node gmail.com> writes:
On Sun, 19 Nov 2006 19:03:18 -0800, Bill Baxter  
<dnewsgroup billbaxter.com> wrote:

 Julio César Carrascal Urquijo wrote:
 Mars wrote:
 http://www.osnews.com/comment.php?news_id=16526

 This is one thing that bothers me with the current GC. If you store data with a lot of entropy in an array (Sound, encrypted data, sensor data, etc...) you start to experience memory leaks because the GC starts to see the data as references to other objects.

This is the kind of comment that scares me. How does one reconcile this with Walter's comment "The GC has been pretty heavily tested. It's 6 years old, and it's stood up extremely well." --(digitalmars.com digitalmars.D:43916) --bb

The GC may be reliable for what it does, but it's certainly not optimal. I think Walter has admitted that in the past also. -JJR
Nov 19 2006
prev sibling next sibling parent reply Kyle Furlong <kylefurlong gmail.com> writes:
Bill Baxter wrote:
 Julio César Carrascal Urquijo wrote:
 Mars wrote:
 http://www.osnews.com/comment.php?news_id=16526

This is one thing that bothers me with the current GC. If you store data with a lot of entropy in an array (Sound, encrypted data, sensor data, etc...) you start to experience memory leaks because the GC starts to see the data as references to other objects.

This is the kind of comment that scares me. How does one reconcile this with Walter's comment "The GC has been pretty heavily tested. It's 6 years old, and it's stood up extremely well." --(digitalmars.com digitalmars.D:43916) --bb

The problem is that if the data isn't typed, the GC cannot say absolutely that the data is not pointers into the GC allocated space. Since a collector that freed all these "ambiguous" roots would eventually free live objects, one must treat these roots as pointers. The good news is that most data that falls into this category is temporary, such that the region kept alive by the false pointer will be freed on a subsequent collection. Now in the case of arrays, I think that probably the current collector is being far too conservative. Since each array is typed, with the exception of void[], the GC should differentiate between, for example, a byte[] and a Class[] or void*[] and treat them appropriately. So the solution for your example is to store the data not in a void[] but in a more appropriately typed container, possibly byte[]. Of course the optimal solution depends on your application.
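
As a quick illustration (note the "could be skipped" part is the proposed behaviour, not what the collector does today):

void example()
{
    void[] blob  = new void[4096]; // element type unknown: must be scanned
    byte[] noise = new byte[4096]; // provably pointer-free: could be skipped
}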
Nov 19 2006
parent reply Sean Kelly <sean f4.ca> writes:
Kyle Furlong wrote:
 Bill Baxter wrote:
 Julio César Carrascal Urquijo wrote:
 Mars wrote:
 http://www.osnews.com/comment.php?news_id=16526

This is one thing that bothers me with the current GC. If you store data with a lot of entropy in an array (Sound, encrypted data, sensor data, etc...) you start to experience memory leaks because the GC starts to see the data as references to other objects.

This is the kind of comment that scares me. How does one reconcile this with Walter's comment "The GC has been pretty heavily tested. It's 6 years old, and it's stood up extremely well." --(digitalmars.com digitalmars.D:43916)

The problem is that if the data isn't typed, the GC cannot say absolutely that the data is not pointers into the GC allocated space.

See my other post. So long as the user doesn't try to pack pointers into a byte array or something similar, simply using element size can rule out a significant portion of GCed memory. At the very least, char strings would be ignored.
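
A sketch of that heuristic, with invented names (the real check would live in the compiler runtime, not user code):

// Invented names -- a sketch of the element-size idea, not actual gcx.d code.
// An element smaller than a pointer cannot hold a whole pointer, so an
// array of such elements need not be scanned (unless, as noted, the user
// hand-packs pointer bytes into it).
bool needsScan(size_t elemSize)
{
    return elemSize >= (void*).sizeof;
}

unittest
{
    assert(!needsScan(char.sizeof));   // char[] strings: skipped
    assert(!needsScan(short.sizeof));  // short[]: skipped
    assert(needsScan((void*).sizeof)); // pointer-sized elements: still scanned
}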
 Now in the case of arrays, I think that probably the current collector 
 is being far too conservative. Since each array is typed, with the 
 exception of void[], the GC should differentiate between, for example, a 
 byte[] and a Class[] or void*[] and treat them appropriately.

The type isn't currently available in the compiler runtime or GC code, but element size is. Passing in a TypeInfo object for allocations may be a bit more specific, but I'm not sure the additional complexity would be worthwhile. I suppose it depends on the application. Sean
Nov 19 2006
parent Dave <Dave_member pathlink.com> writes:
Sean Kelly wrote:
 Kyle Furlong wrote:
 Bill Baxter wrote:
 Julio César Carrascal Urquijo wrote:
 Mars wrote:
 http://www.osnews.com/comment.php?news_id=16526

This is one thing that bothers me with the current GC. If you store data with a lot of entropy in an array (Sound, encrypted data, sensor data, etc...) you start to experience memory leaks because the GC starts to see the data as references to other objects.

This is the kind of comment that scares me. How does one reconcile this with Walter's comment "The GC has been pretty heavily tested. It's 6 years old, and it's stood up extremely well." --(digitalmars.com digitalmars.D:43916)

The problem is that if the data isn't typed, the GC cannot say absolutely that the data is not pointers into the GC allocated space.

See my other post. So long as the user doesn't try to pack pointers into a byte array or something similar, simply using element size can rule out a significant portion of GCed memory. At the very least, char strings would be ignored.
 Now in the case of arrays, I think that probably the current collector 
 is being far too conservative. Since each array is typed, with the 
 exception of void[], the GC should differentiate between, for example, 
 a byte[] and a Class[] or void*[] and treat them appropriately.

The type isn't currently available in the compiler runtime or GC code,

The TypeInfo is passed in for AA's (aaA.d), so maybe it's a smaller step than we think? Take your idea of how to skip scanning roots in gcx.d, and add to that TypeInfo for the API in gc.d and it may be readily do-able.
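
Purely as a sketch of the API shape -- none of these names exist in today's gc.d, and aaA.d uses its TypeInfo for key hashing and comparison, but the same plumbing could carry one to the allocator:

// Hypothetical sketch only; these names are not in the current runtime.
void markNoScan(void* p)
{
    // A real collector would set a "don't scan" bit in the block's
    // pool metadata here.
}

void* gcMallocTyped(size_t size, TypeInfo elem)
{
    void* p = (new void[size]).ptr; // stand-in for the real pool allocator
    if (elem !is null && elem.tsize() < (void*).sizeof)
        markNoScan(p);
    return p;
}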
 but element size is.  Passing in a TypeInfo object for allocations may 
 be a bit more specific, but I'm not sure the additional complexity would 
 be worthwhile.  I suppose it depends on the application.
 

Right now I think a different allocation routine is called for byte[] vs. char[] anyway so maybe for strings it could be done as-is?
 
 Sean

Nov 19 2006
prev sibling parent reply Julio César Carrascal Urquijo writes:
Bill Baxter wrote:
 This is the kind of comment that scares me.
 How does one reconcile this with Walter's comment "The GC has been 
 pretty heavily tested. It's 6 years old, and it's stood up extremely well."
   --(digitalmars.com digitalmars.D:43916)
 
 --bb

Even if a piece of software is reliable and has been heavily tested, that doesn't mean it can't be improved. The current GC has its shortcomings, and that has been acknowledged by Walter. That doesn't mean it is unusable (on the contrary), just that under certain circumstances you should use other implementations, and that's why a pluggable architecture is needed for garbage collection in the language.
Nov 20 2006
parent Sean Kelly <sean f4.ca> writes:
Julio César Carrascal Urquijo wrote:
 Bill Baxter wrote:
 This is the kind of comment that scares me.
 How does one reconcile this with Walter's comment "The GC has been 
 pretty heavily tested. It's 6 years old, and it's stood up extremely 
 well."
   --(digitalmars.com digitalmars.D:43916)

Even if a piece of software is reliable and has been heavily tested, that doesn't mean it can't be improved. The current GC has its shortcomings, and that has been acknowledged by Walter. That doesn't mean it is unusable (on the contrary), just that under certain circumstances you should use other implementations, and that's why a pluggable architecture is needed for garbage collection in the language.

Agreed. And for what it's worth, I think this plugging should occur at link-time, not run-time. Hot-swapping GCs while the app is running just raises too many weird issues. Sean
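
P.S. For what that seam might look like: the runtime would call only a handful of extern (C) entry points, and whichever GC library you link supplies them. The names below are illustrative guesses, not an existing interface:

// Illustrative only -- the exact names and signatures are assumptions.
extern (C)
{
    void  gc_init();              // called by the runtime at startup
    void  gc_term();              // called at shutdown
    void* gc_malloc(size_t size); // all allocations funnel through here
    void  gc_free(void* p);
    void  gc_fullCollect();
}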
Nov 20 2006
prev sibling parent reply Sean Kelly <sean f4.ca> writes:
Julio César Carrascal Urquijo wrote:
 
 Is there a way to tell the garbage collector "don't look for references 
 here" without using malloc and friends?

Not in its current form, but the modifications to allow this are fairly simple. The compiler can even set the same flag for dynamic arrays containing elements smaller than pointer size. Sean
Nov 19 2006
parent Julio César Carrascal Urquijo writes:
Sean Kelly wrote:
 Julio César Carrascal Urquijo wrote:
 Is there a way to tell the garbage collector "don't look for 
 references here" without using malloc and friends?

Not in its current form, but the modifications to allow this are fairly simple. The compiler can even set the same flag for dynamic arrays containing elements smaller than pointer size. Sean

So as long as I use byte[] or short[] this would be possible in a future implementation of the GC, right? Well, that's better, at least for cryptographic data. Still, sound data is often represented as int or long samples, which could pose problems on 32-bit or 64-bit platforms. Anyway, we still need a better implementation of the GC to address these concerns.
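
For the record, under an element-size rule what gets skipped depends on the platform's pointer width; a quick check in plain D:

import std.stdio;

void main()
{
    // Elements strictly smaller than a pointer can never be mistaken
    // for one; pointer-sized or larger elements would still be scanned.
    writefln("pointer size: ", (void*).sizeof);
    writefln("byte[]  skippable: ", byte.sizeof  < (void*).sizeof);
    writefln("short[] skippable: ", short.sizeof < (void*).sizeof);
    writefln("int[]   skippable: ", int.sizeof   < (void*).sizeof);  // false on 32-bit
    writefln("long[]  skippable: ", long.sizeof  < (void*).sizeof);  // false on 32- and 64-bit
}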
Nov 20 2006