
digitalmars.D - D Language 2.0

Daniel <dan.from.tokyo gmail.com> writes:
I don't think I like D 2.0 over 1.0.  Before you all run out to get me some
tissue, I figured I'd explain my
rationale.



The cool parts:


Closures are great.

I like that we moved D's global data into thread-local storage (TLS).



The bad parts:


D still takes 80kb just to print "hello world" to the prompt.  When are we
going to fix things so code is only linked in if it's used?  I went to write a
small program for an embedded device, but because of this I had to abandon it
and rewrite it in C - and I still have to dig through a bit in IDA Pro to find
where the program actually starts.

Const does not provide me any guarantees that I couldn't already get with
in/out/inout and function contracts.  It does not guarantee that a value will
not change, because D doesn't control the execution environment; what const
does is declare that the value won't be changed by the program itself.  We
could already check that automatically.  My problem is that now we have six
dozen extra rules and a few more keywords because of it.

D is still aimed at the i486, and is just starting to handle threading.  My CPU
is a Core i7, which is a quad-core.
It also has SSE4.2.  It's been 20 years.



Things D missed:


cent and ucent should be available via SSE.  Over 97% of computers now have it
and I still can't even assign to an
SSE register?

Don Clugston had some excellent code written up in BLADE, found on dsource 2
years ago.  Shouldn't that have become part of native D?

D ought to have a way to express math without automatically executing it.  Some
math can't execute on an x86,
and sometimes we just want to reduce instead of execute.  Hell, if a function
were something we could reduce
and see, that would count.  This also has value for optimization.
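Expressing math without executing it can be sketched even in today's D with templates and operator overloading (roughly the trick Don Clugston's BLADE used): overloaded operators build a type-level expression tree instead of computing a value. A minimal sketch, with all names hypothetical:

```d
// Placeholder for a named variable in an expression tree.
struct Sym(string name)
{
    // x + rhs builds an Add node instead of computing anything.
    Add!(Sym, T) opBinary(string op : "+", T)(T rhs)
    {
        return Add!(Sym, T)(this, rhs);
    }
}

// An unevaluated addition: just a pair of sub-expressions.
struct Add(L, R)
{
    L lhs;
    R rhs;
}

void main()
{
    Sym!("x") x;
    Sym!("y") y;

    // Nothing is computed here; expr's *type* is the expression tree,
    // which compile-time code can walk, reduce, or optimize.
    auto expr = x + y;
    static assert(is(typeof(expr) == Add!(Sym!("x"), Sym!("y"))));
}
```

From a tree like this, compile-time code can reduce or rewrite the expression before any of it runs, which is the optimization angle mentioned above.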

An AST?

D missed big on short-circuit evaluation because they want to typedef bool.  I
was hoping I'd be able to see
things like ||=, &&=, and x = y || z.
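For what it's worth, || does short-circuit in D today, including inside an assignment; only the compound ||=/&&= spellings are absent, and they can be written out by hand. A minimal sketch:

```d
void main()
{
    int calls;

    bool expensive()
    {
        ++calls;
        return true;
    }

    bool y = true;
    bool x = y || expensive();   // || short-circuits: expensive() is not called
    assert(calls == 0);

    // There is no ||= operator, but the spelled-out form behaves the same:
    x = x || expensive();        // x is already true, still no call
    assert(calls == 0);
}
```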

I also kind of hoped we'd see standard real world unit types in D.  Like sqft,
meters, ohms, kg, L, Hz, seconds,
etc. native to D.  Being able to intrinsically convert types between these
things would make D the most
approachable General Programming Language for real world problems.
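A library-level sketch of such unit types is possible with D templates (everything here is hypothetical illustration, not an existing library): dimension exponents live in the type, so mixing incompatible units becomes a compile-time error.

```d
// Quantity carries its dimensions (metre and second exponents) in its type.
struct Quantity(int M, int S)
{
    double value;

    // Addition is only defined for identical dimensions.
    Quantity opBinary(string op : "+")(Quantity rhs)
    {
        return Quantity(value + rhs.value);
    }

    // Multiplication adds the exponents: metres * seconds -> m^1 s^1, etc.
    Quantity!(M + M2, S + S2) opBinary(string op : "*", int M2, int S2)(Quantity!(M2, S2) rhs)
    {
        return Quantity!(M + M2, S + S2)(value * rhs.value);
    }
}

alias Quantity!(1, 0) Metres;
alias Quantity!(0, 1) Seconds;

void main()
{
    auto d  = Metres(100.0) + Metres(20.0);  // fine: both are metres
    auto ms = d * Seconds(2.0);              // type is Quantity!(1, 1)
    // auto bad = d + Seconds(2.0);          // compile error: metres + seconds
}
```

Conversions (sqft to square metres, etc.) would be scale factors on top of the same dimension machinery.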

Thanks for your time,
Dan
Jan 17 2010
bearophile <bearophileHUGS lycos.com> writes:
Daniel:

I'll answer just the things I know and understand, and leave the other points
to other people, sorry.

 D still takes 80kb just to print "hello world" to the prompt.
When D2 has been out of alpha for some time, I think the devs will start to think about this problem too. Currently it's low priority, because the language isn't finished yet :-)
 I went to write a small program for an embedded device, but because of this I had to abandon and rewrite it in C
I am not sure D2 will be fit for embedded devices... even 32 bit ones.
 D is still aimed at the i486, and is just starting to handle threading.  My
CPU is a Core i7, which is a quad-core. It also has SSE4.2.  It's been 20
years.<
The LDC compiler (still D1) manages SSE registers well (here the problem is the opposite: it's the x86 FPU that's badly used, but most of the time today this doesn't matter much). When D2 is out of alpha, I presume some people will create a D2 compiler based on LLVM with good SSE support. Regarding multi-processing, it's being worked on now.
 cent and ucent should be available via SSE.<
LLVM already supports such values, so I think it's not hard to add them to ldc. The problem is that I don't often need to sum 128 bit signed integers. I may need to perform a bitwise AND among SSE registers, but you can't tie a language to the width of the current CPUs. What if tomorrow the registers become 256 or 1024 bits wide? The D language must still be usable 10 years from now, when you may have 2048 bit wide registers too; you can't keep adding wider and wider built-in types. It's better to have a way to represent bit arrays and to perform bitwise operations among them, as LLVM already does. Wait, that's what array operation syntax is already designed to do :-) It just needs to be implemented better (so, for example, if the compiler knows that two fixed-size arrays of 256 bits each are ANDed together, it can implement that with two inlined SSE bitwise ops). This doesn't require a language change, but a somewhat better optimization stage; it's a matter of language implementation, not language design. So what are the purposes of a true cent/ucent type again?
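The array-operation idea can be made concrete: a 256-bit value modelled as a fixed-size array, with D's array operation syntax expressing the element-wise AND. Whether a given compiler actually lowers this to SSE instructions is, as noted, a question of implementation quality, so the comment below is an aspiration, not a guarantee:

```d
import std.stdio;

void main()
{
    // Two 256-bit values, each represented as eight 32-bit words.
    uint[8] a = [0xFFFF0000, 1, 2, 3, 4, 5, 6, 7];
    uint[8] b = [0x0000FFFF, 1, 2, 3, 4, 5, 6, 7];
    uint[8] c;

    // Element-wise bitwise AND over the whole fixed-size array; a
    // compiler is free to emit this as two 128-bit SSE AND instructions.
    c[] = a[] & b[];

    writeln(c[0]);   // prints 0: the two halves of word 0 don't overlap
}
```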
Don Clungston had some excellent code written up in BLADE found on dsource 2
years ago.  Shouldn't that have become part of D native?<
Things change, as time passes.
I also kind of hoped we'd see standard real world unit types in D.<
A mostly-std-lib solution can be acceptable, if there's a bit of support from the language. I am sure Andrei doesn't want to do what was done for complex numbers. The problem is that I don't like solutions at either extreme: I don't like complex numbers to be fully lib-based with a raw syntax, and I don't think I need them fully built in, which increases compiler complexity. So I'd like just a bit of support from the language to improve the syntax of unit types. Twice in the recent past NASA has broken things because of programs that mixed miles with kilometres and similar things. With a language that enforces unit types very well, that class of bugs can be avoided.

Bye,
bearophile
Jan 17 2010
retard <re tard.com.invalid> writes:
Mon, 18 Jan 2010 00:36:47 -0500, bearophile wrote:

 What if
 tomorrow the registers become 256 or 1024 bits wide? D language must be
 used 10 years from now, when you have 2048 bits wide registers too, you
 can't keep adding wider and wider built in types. It's better to have a
 way to represent bit arrays and to perform bitwise operations among
 them, as LLVM already does. Wait, that's what array operation syntax is
 already designed to do :-)
The wider registers will most likely be vector registers in the future. It often makes little sense to express wider value ranges than 64-bit ints can represent. Even 16 or 32 bits is large enough in many cases.
Jan 17 2010
BCS <none anon.com> writes:
Hello bearophile,

 Daniel:
 
 I also kind of hoped we'd see standard real world unit types in D.<
 
A mostly std lib solution can be acceptable,
Here's my (compile time) run at this: http://www.dsource.org/projects/scrapple/browser/trunk/units I'd be happy to fix up the ownership/license stuff and let it get added to phobos and/or tango (it's more or less independent of all that), and if that happened I'd be very willing to clean up the API (that "dynamic" name thing has me thinking of ways to use this newfangled D2.0 thing).
Jan 17 2010
bearophile <bearophileHUGS lycos.com> writes:
BCS:
 http://www.dsource.org/projects/scrapple/browser/trunk/units
With a little more compiler support to improve the syntax this stuff can be more usable. Is the usage of units common enough and important enough to deserve a bit of compiler/language support? (I don't know yet what a nice syntax can be.)

Bye,
bearophile
Jan 18 2010
BCS <none anon.com> writes:
Hello bearophile,

 BCS:
 
 http://www.dsource.org/projects/scrapple/browser/trunk/units
 
With a little more compiler support to improve the syntax this stuff can be more usable. Is the usage of units common enough and important enough to deserve a bit of compiler/language support? (I don't know yet what a nice syntax can be.)
Language features just for that? No, never! Something more general, maybe. But as I already said, if y'all are interested, I'll take a crack at cleaning it up using the features we have NOW (and I think it might actually make compiling it faster to boot).
 
 Bye,
 bearophile
Jan 18 2010
Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
BCS wrote:
 Hello bearophile,
 
 Daniel:

 I also kind of hoped we'd see standard real world unit types in D.<
A mostly std lib solution can be acceptable,
Here's my (compile time) run at this: http://www.dsource.org/projects/scrapple/browser/trunk/units I'd be happy to fix up the ownership/license stuff and let it get added to phobos and/or tango (it's more or less independent of all that), and if that happened I'd be very willing to clean up the API (that "dynamic" name thing has me thinking of ways to use this newfangled D2.0 thing).
I think that's great. Phobos could really use a solid units library. Andrei
Jan 18 2010
"Nick Sabalausky" <a a.a> writes:
 "Daniel" <dan.from.tokyo gmail.com> wrote in message 
news:hj0pvc$2sil$1 digitalmars.com...

Haven't we already had enough posts about "I don't like D2 because it 
doesn't add *enough* stuff"? Fuck, people, this shit takes time!

 D still takes 80kb just to print "hello world" to the prompt.
We *just* had a large debate over whether or not 200k was a problem for a "hello world". And now we're getting bellyaching over a measly 80k? WTF? Sure, it certainly makes a big difference on embedded stuff, but...well, see a couple paragraphs below:
 When are we going to fix things so stuff is only
 imported if it's used?
When there aren't far more important things to take care of.
 I went to write a small program for an embedded device, but because of 
 this I had to
 abandon and rewrite it in C - and I have to work through a bit to find 
 where the program actually starts in IDA
 Pro.
D isn't even available for embedded platforms yet, AFAIK. Certainly not D2 or DMD or in a real robust form anyway. And yea, I'd be one of the first people to agree that's a *major* missed opportunity, but there's only two real things to do about it: 1. help make it happen or 2. not whine about it. But until real embedded D actually does happen, trying to trim out a few k is worthless.
 Const does not provide me any guarantees that I couldn't already get with 
 in/out/inout and function contracts.
 It does not guarantee that a value will not change because D doesn't 
 control the execution environment, what
 const does is declare that it won't be changed by the program itself.  We 
 can already check that automatically.
 My problem is that now we have six dozen extra rules and a few more 
 keywords because of it.
Function contracts are run-time. As such, they are absolutely no substitute for const, which is compile-time. If you don't have any problem with 1. bogging down execution with useless processing that could have already been handled at compile-time and 2. ignoring errors at compile-time so that they may-or-may-not be caught at run-time, then that's what those dynamic-language toys are for. The in/out/inout are only for function parameters and are not as fine-grained as const, so they're no substitute either. And if you don't care for const, don't use it.
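The compile-time/run-time split can be shown with a small sketch (hypothetical code, using D2's const and contract syntax):

```d
// Run-time check: the `in` contract fires on each call, and the body
// is still free to mutate the caller's array.
int first(int[] a)
in { assert(a.length > 0); }
body
{
    a[0] = 0;              // legal, silently changes the caller's data
    return a[0];
}

// Compile-time check: const makes mutation a compile error, with no
// run-time cost at all.
int firstConst(const(int)[] a)
{
    // a[0] = 0;           // error: cannot modify const data
    return a[0];
}

void main()
{
    int[] xs = [42, 7];
    assert(first(xs) == 0);        // contract passed, data was mutated
    assert(firstConst(xs) == 0);   // xs[0] is already 0 from above
}
```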
 D is still aimed at the i486,
Yea! That's terrible! Let's just deprecate anything less than 64-bit quad-core, because anything less than top-of-the-line is useless. And I don't want to hear that damn "but that's all that the stores sell!" strawman that I've heard a thousand times because everyone knows damn well that's not all that's *actually in use*.
 and is just starting to handle threading.
It's been no worse at threading than C/C++ for quite some time. It's just starting to have a threading model that kicks the crap out of the threading in the vast majority of languages out there. But that's a hell of a far cry from "just starting to handle threading".
 My CPU is a Core i7, which is a quad-core.
 It also has SSE4.2
1. Good for you. 2. Next week I'll sell my house to buy a 50-unit cluster of quintuple-cores with 1TB RAM each, and start bitching about any software or language that supports a pathetic, measly non-cluster with merely 8GB or so of RAM. Then I'll buy a Porsche and bitch and moan and carry on about any road that allows people to drive on it with a Corvette or, god-forbid, a sedan or anything else with a top-speed below 250+ mph.
 It's been 20 years.
Let's all be consumer whores! Remember kids: "old" means "unpopular", and above all else, we certainly don't want to be unpopular!
 cent and ucent should be available via SSE.  Over 97% of computers now 
 have it and I still can't even assign to an
 SSE register?
Submit a patch.
 Don Clungston had some excellent code written up in BLADE found on dsource 
 2 years ago.  Shouldn't that have
 become part of D native?
Maybe. Update it, merge it, and submit a patch.
 D ought to have a way to express math without automatically executing it. 
 Some math can't execute on an x86,
 and sometimes we just want to reduce instead of execute.  Hell, if a 
 function were something we could reduce
 and see, that would count.  This also has value for optimization.
That would belong in a math library. See if it's already supported in one of the existing math libs, and if not, write it and put it on dsource.
 An AST?
Already an intended future feature. But we obviously can't have everything and have it all right now.
 D missed big on short circuit evaluation because they want to typedef 
 bool.
D does do short-circuit evaluation. Not sure what you mean about it wanting to typedef bool.
 I was hoping I'd be able to see
 things like ||=  &&=
Submit a patch or put a feature request on bugzilla. If there's already a ticket for it on bugzilla, then vote on it.
 and x = y || z
That works just fine:

void main()
{
    bool x, y, z;
    x = y || z;
}

If it isn't doing short-circuit eval on that, then submit a patch or a bugzilla ticket, etc.
 I also kind of hoped we'd see standard real world unit types in D.  Like 
 sqft, meters, ohms, kg, L, Hz, seconds,
 etc. native to D.  Being able to intrinsically convert types between these 
 things would make D the most
 approachable General Programming Language for real world problems.
1. Considering how few languages support that, a lack of it is hardly something to hold against D. 2. Someone here has already made/posted a lib that does that. Don't remember who though, maybe they'll reply here. Or try searching the newsgroup for it. I know it's around somewhere.
 Thanks for your time,
 Dan
If this post comes across overly harsh and inhospitable (and I realize it probably does), then I really do apologize. But it's just that we've had these very same discussions ("A: The problem with D is it doesn't have X!" "B: These things take time. / Then help out.") sooo many times here already, I'm getting quite tired of it.
Jan 17 2010
BCS <none anon.com> writes:
Hello Nick,
 2. Someone here has already made/posted a lib that does that. Don't
 remember who though, maybe they'll reply here.
Me (at compile time, see reply to bearophile) and IIRC, someone else did a runtime version.
Jan 17 2010
Bane <branimir.milosavljevic gmail.com> writes:
Nick Sabalausky Wrote:

Good points and colorful choice of words. I enjoyed reading this.
Jan 18 2010
retard <re tard.com.invalid> writes:
Mon, 18 Jan 2010 02:05:14 -0500, Nick Sabalausky wrote:

 "Daniel" <dan.from.tokyo gmail.com> wrote in message
 news:hj0pvc$2sil$1 digitalmars.com...

 Haven't we already had enough posts about "I don't like D2 because it
 doesn't add *enough* stuff"? Fuck, people, this shit takes time!
 
 D still takes 80kb just to print "hello world" to the prompt.
We *just* had a large debate over whether or not 200k was a problem for a "hello world". And now we're getting bellyaching over a measly 80k? WTF?
The size matters when the max instruction memory size is, for example, 32 or 64 kB. Embedded systems are the most widely used computers in the world today.
 The in/out/inout are only for function parameters and are not as
 fine-grained as const, so they're no substitute either.
Function contracts are pretty much orthogonal to const. Even with both enabled, this doesn't guarantee 100% bug-free code. High-level errors in human reasoning usually cannot be spotted with low-level verification systems. Both also come with syntactic and semantic burden. It's not always wrong to argue that fewer features are more. Const doesn't always lend itself well to various use cases.
 D is still aimed at the i486,
Yea! That's terrible! Let's just deprecate anything less than 64-bit quad-core, because anything less than top-of-the-line is useless. And I don't want to hear that damn "but that's all that the stores sell!" strawman that I've heard a thousand times because everyone knows damn well that's not all that's *actually in use*.
The last 486 generations were manufactured 16 years ago. When Walter started D, something similar to what the i486 represents today was the VIC-20 or the previous generation of machines with about 1..2 kB of RAM. Back in 1999, the last manufactured Commodore 64 was already 13 years old and the VIC-20 (with 3583 bytes available to the user!) 14 years old - and even back in 1999 the famous platforms of the 80s were considered dead outside the demoscene.

Let's be honest, the 486 isn't widely used anywhere anymore. Many embedded applications have been upgraded to a better platform. The majority of new operating systems (including Linux distributions, which default to i586, i686, or x86-64) won't run on it. It still has its uses in some places, but one shouldn't focus on it too much anymore.

Those machines had hard disks with on average at most 500 MB of unpartitioned space. The best models came with about 16 MB of RAM. Most of the 486s had a more modest set of features (mine had 33 MHz, 4 MB RAM, and a 100 MB disk - so a typical DWT hello world app would consume all of the available RAM) and I think a large part of them lacked FPU support. The FPU performance was so bad that lookup tables were often faster than computing directly on the FPU. The cache behavior differs quite a bit from what we have in Pentiums, Core2, or Core i7.

Why should we care about all that now? The embedded/legacy market requires a totally different kind of compiler - possibly without RTTI and a garbage collector. C, BitC, C--, Forth, Assembler et al fill this niche nicely.
 
 and is just starting to handle threading.
It's been no worse at threading than C/C++ for quite some time. It's just starting to have a threading model that kicks the crap out of the threading in the vast majority of languages out there. But that's a hell of a far cry from "just starting to handle threading".
I don't think D will replace languages with for example data-flow concurrency support.
 
 My CPU is a Core i7, which is a quad-core.
 It also has SSE4.2
1. Good for you.
I also have a Core i7. Jeff Atwood has one (http://www.codinghorror.com/blog/archives/001316.html) - and he represents the average Joe Six-pack developer. They're pretty common these days; the cheapest will cost you far less than $700..$800. I just upgraded from 12 GB to 24 GB of RAM - these have as many as six memory slots!
Jan 18 2010
"Nick Sabalausky" <a a.a> writes:
"retard" <re tard.com.invalid> wrote in message 
news:hj1hfe$1bp4$1 digitalmars.com...
 Mon, 18 Jan 2010 02:05:14 -0500, Nick Sabalausky wrote:

 "Daniel" <dan.from.tokyo gmail.com> wrote in message
 news:hj0pvc$2sil$1 digitalmars.com...

 Haven't we already had enough posts about "I don't like D2 because it
 doesn't add *enough* stuff"? Fuck, people, this shit takes time!

 D still takes 80kb just to print "hello world" to the prompt.
We *just* had a large debate over whether or not 200k was a problem for a "hello world". And now we're getting bellyaching over a measly 80k? WTF?
The size matters when max instruction memory size is for example 32 or 64 kB.. embedded systems are most widely used computers in the world of today.
Right, but like I had said below that, D isn't really usable for embedded ATM, so until that happens (and I'm *really* anxious to see that happen), D is still desktop-only, and even on the lowest-end desktops 80k is nothing.
 The in/out/inout are only for function parameters and are not as
 fine-grained as const, so they're no substitute either.
Function contracts are pretty much orthogonal to const. Even with both enabled, this doesn't guarantee 100% bug free code. High level errors in human reasoning usually cannot be spotted with low level verification systems. Both also come with both syntactical and semantical burden. It's not always wrong to argue that less features is more. Const doesn't always lend itself well to various use cases.
Right.
 D is still aimed at the i486,
Yea! That's terrible! Let's just deprecate anything less than 64-bit quad-core, because anything less than top-of-the-line is useless. And I don't want to hear that damn "but that's all that the stores sell!" strawman that I've heard a thousand times because everyone knows damn well that's not all that's *actually in use*.
Last 486 generations were manufactured 16 years ago. When Walter started D, something similar to what i486 represents today was VIC-20 or the previous generation machines with about 1..2 kB of RAM. Back in the 1999, the last manufactured Commodore 64 was already 13 yo and VIC-20 (with 3583 bytes available to the user!) 14 yo - and even back in 1999 the famous platforms of 80s were considered dead outside demoscene. Let's be honest, 486 isn't widely used anywhere anymore. Many embedded applications have been upgraded to a better platform. Majority of new operating systems (including Linux distributions which default to i586, i686, or x86-64) won't run on it. It still has its uses in some places, but one shouldn't focus on it too much anymore. Those machines had hard disks with on avg max 500 MB of unpartitioned space. Best models came with about 16 MB of RAM. Most of the 486s had more modest set of features (mine had 33Mhz, 4MB RAM, and a 100MB disk - so a typical DWT hello world app would consume all of the available RAM) and I think a large part of them lacked FPU support. The FPU performance was so bad that lookup tables were often faster than the ones performed directly by the FPU. The cache behavior differs pretty much from what we have in Pentiums, Core2, or Core i7. Why should we care about all that now? The embedded/legacy market requires a totally different kind of compiler - possibly without RTTI and garbage collector. C, BitC, C--, Forth, Assembler et al fill this niche nicely.
I can totally accept that 486 in particular is pretty much dead, but unless there's some specific advantage that can be only be gained by breaking 486 support, I see no reason for "It supports 486" to be something worth whining about.
 My CPU is a Core i7, which is a quad-core.
 It also has SSE4.2
1. Good for you.
I also have a Core i7. Jeff Atwood has one (http://www.codinghorror.com/ blog/archives/001316.html) - and he represents the average Joe Six-pack developer. They're pretty common these days, cheapest will cost you far less than $700..$800. I just upgraded from 12 GB to 24 GB of RAM - these have as much as six memory slots!
Or, people like me who can do everything they need to do on a 32-bit single-core just fine, can just stick with it and use those hundreds of dollars on something that actually fucking matters. Where the fuck are you people living that you can act as if $700-$800 were pocket change to be casually tossed around? LA? Tokyo? Hawaii? Your own little private islands? Delusion-land, most likely.
Jan 18 2010
Walter Bright <newshound1 digitalmars.com> writes:
Nick Sabalausky wrote:
 Right, but like I had said below that, D isn't really usable for embedded 
 ATM, so until that happens (and I'm *really* anxious to see that happen), D 
 is still desktop-only, and even on the lowest-end desktops 80k is nothing.
I didn't know you were an embedded systems developer. I haven't done embedded systems since the 6800! So I don't know what's involved these days. Can you spell it out for me exactly what needs to be done to support this with DMD? (Yes, I know, do the ARM instruction set, but what about embedded x86?)
Jan 18 2010
"Nick Sabalausky" <a a.a> writes:
"Walter Bright" <newshound1 digitalmars.com> wrote in message 
news:hj2ef2$4eg$1 digitalmars.com...
 Nick Sabalausky wrote:
 Right, but like I had said below that, D isn't really usable for embedded 
 ATM, so until that happens (and I'm *really* anxious to see that happen), 
 D is still desktop-only, and even on the lowest-end desktops 80k is 
 nothing.
I didn't know you were an embedded systems developer. I haven't done embedded systems since the 6800! So I don't know what's involved these days. Can you spell it out for me exactly what needs to be done to support this with DMD? (Yes, I know, do the ARM instruction set, but what about embedded x86?)
Well, unfortunately, my embedded experience so far has been limited to hobbyist stuff, like homebrew GameBoy Advance (ARM) and VCS/2600, and Parallax's Propeller microcontroller (which can only just barely do C, and it comes at a cost), with very little on the tool-development side. Plus it's been a while since I've had a chance to do anything in that whole area anyway (damn web jobs...), so I can't really say. I can say I'd definitely like to see D usable for Wii/XBox/Playstation/etc, because console games (*real* ones, that is) are still realistically limited to just C++ right now, and games are probably the biggest reason I'm interested in embedded D. (And yea, Wii can do Flash, but that's very limited.) But again, I've been too far removed lately to really be sure what exactly would be needed. A C-generating backend would probably help though, although I assume that might be more work than adding another instruction set.
Jan 18 2010
"Jérôme M. Berger" <jeberger free.fr> writes:
Walter Bright wrote:
 Nick Sabalausky wrote:
 Right, but like I had said below that, D isn't really usable for
 embedded ATM, so until that happens (and I'm *really* anxious to see
 that happen), D is still desktop-only, and even on the lowest-end
 desktops 80k is nothing.
 I didn't know you were an embedded systems developer. I haven't done embedded systems since the 6800! So I don't know what's involved these days. Can you spell it out for me exactly what needs to be done to support this with DMD? (Yes, I know, do the ARM instruction set, but what about embedded x86?)
Embedded x86 is an oxymoron. Yes, I know, it exists (and btw, 8 years ago they were still selling 486s as "embedded" processors) but mostly it doesn't need any special support (except possibly on the binary size front, and even there 80k is nothing to the XXX megabytes used by the off-the-shelf OS+GUI+Web browser). Face it, there are two kinds of embedded developers:

- Those who want performance at very low power usage, who use ARM and C with a specialized OS. Those won't use D, period. Most of the time, they won't even use malloc or most of the C standard library (not saying they're right here, but I doubt you will change them);

- Those who only care about cost, who use x86 with Windows or Linux, off-the-shelf software and an AJAX GUI, and wonder why their systems are so slow and won't even run a full day before needing to be plugged to a power outlet. Those won't use D because "nobody uses it" and anyway it takes too much space (don't ask me to explain the logic behind that statement, I don't understand it either).

More seriously, I don't expect D to see much usage in the embedded market unless it becomes a huge success on the PC first (if then). But nothing you can do on the technical front will change that: it's mostly due to prejudice and preconceptions, not an actual cost-benefit evaluation of the language.

Jerome

PS: At work, we mustn't use C++ because:
- It's slow;
- Its standard library is too big (100k);
- In a future product, we might want to reuse this module and not have C++ (Oh, yes I didn't tell you that we *do* have the C++ stdlib in our products because the web browser they bought to run their HTML+Javascript+HTML+XML+C+XML+C+XML+C GUI uses it, but *we* aren't allowed to, fckng morons)

--
mailto:jeberger free.fr
http://jeberger.free.fr
Jabber: jeberger jabber.fr
Jan 18 2010
next sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
Jérôme M. Berger wrote:
 	Embedded x86 is an oxymoron. Yes, I know, it exists (and btw, 8
 years ago they were still selling 486s as "embedded" processors) but
 mostly it doesn't need any special support (except possibly on the
 binary size front and even there 80k is nothing to the XXX megabytes
 used by the off-the-shelf OS+GUI+Web browser). Face it, there are
 two kinds of embedded developers:
 
 - Those who want performance at very low power usage, who use ARM
 and C with a specialized OS. Those won't use D, period. Most of the
 time, they won't even use malloc or most of the C standard library
 (not saying they're right here, but I doubt you will change them);
I've looked at some embedded ARM evaluation boards that have Linux on them. Don't know much else about them. What about things like phones, game machines?
 - Those who only care about cost, who use x86 with Windows or Linux,
  off-the-shelf software and an AJAX GUI and wonder why their systems
 are so slow and won't even run a full day before needing to be
 plugged to a power outlet. Those won't use D because "nobody uses
 it" and anyway it takes too much space (don't ask me to explain the
 logic behind that statement, I don't understand it either).
 
 	More seriously, I don't expect D to see much usage in the embedded
 market unless it becomes a huge success on the PC first (if then).
 But nothing you can do on the technical front will change that: it's
 mostly due to prejudice and preconceptions, not an actual
 cost-benefit evaluation of the language.
Yeah, I know, I run into the pseudo-problem all the time of D using garbage collection. I point out that you can call/use malloc in D as easily as you can in C++, but it makes no difference. They're convinced that gc will send their app to hell. <g> Perhaps it's because those languages make it really hard to use malloc. They just don't believe that malloc is trivial to use in D. I get this perspective even from career experts in programming.
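For illustration, here is a minimal sketch of the point Walter is making (not part of the original post): calling C's malloc from D is a one-liner. Module paths follow current druntime (core.stdc.stdlib); the D1-era path was std.c.stdlib.

```d
// Using the C heap directly from D, bypassing the GC entirely.
import core.stdc.stdlib : malloc, free;

struct Point { double x, y; }

void main()
{
    // Allocate a Point on the C heap; the collector never sees it.
    auto p = cast(Point*) malloc(Point.sizeof);
    if (p is null) assert(0, "out of memory");
    scope (exit) free(p);   // deterministic release, no collector involved

    p.x = 1.0;
    p.y = 2.0;
    assert(p.x + p.y == 3.0);
}
```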
 PS: At work, we mustn't use C++ because:
 - It's slow;
 - Its standard library is too big (100k);
 - In a future product, we might want to reuse this module and not
 have C++ (Oh, yes I didn't tell you that we *do* have the C++ stdlib
 in our products because the web browser they bought to run their
 HTML+Javascript+HTML+XML+C+XML+C+XML+C GUI uses it, but *we* aren't
 allowed to, fckng morons)
!!
Jan 18 2010
next sibling parent Lutger <lutger.blijdestijn gmail.com> writes:
On 01/18/2010 11:31 PM, Walter Bright wrote:
Jérôme M. Berger wrote:
 Embedded x86 is an oxymoron. Yes, I know, it exists (and btw, 8
 years ago they were still selling 486s as "embedded" processors) but
 mostly it doesn't need any special support (except possibly on the
 binary size front and even there 80k is nothing to the XXX megabytes
 used by the off-the-shelf OS+GUI+Web browser). Face it, there are
 two kinds of embedded developers:

 - Those who want performance at very low power usage, who use ARM
 and C with a specialized OS. Those won't use D, period. Most of the
 time, they won't even use malloc or most of the C standard library
 (not saying they're right here, but I doubt you will change them);
I've looked at some embedded ARM evaluation boards that have Linux on them. Don't know much else about them. What about things like phones, game machines?
This should be possible. I have a N900 which runs C, C++ and even Python apps. Nokia is going Qt (ergo C++) all the way for mobile (Symbian and Maemo). D fits in very well. These are not exactly embedded devices, but performance still matters and the platform is open enough to try to sneak in. Same goes for Android; I believe they lifted the Java-only restriction, so it may be possible. If I ever have the time I'll attempt to get GDC or LDC running on the N900.
Jan 18 2010
prev sibling next sibling parent reply Leandro Lucarella <llucax gmail.com> writes:
Walter Bright, el 18 de enero a las 14:31 me escribiste:
	More seriously, I don't expect D to see much usage in the embedded
market unless it becomes a huge success on the PC first (if then).
But nothing you can do on the technical front will change that: it's
mostly due to prejudice and preconceptions, not an actual
cost-benefit evaluation of the language.
Yeah, I know, I run into the pseudo-problem all the time of D using garbage collection. I point out that you can call/use malloc in D as easily as you can in C++, but it makes no difference. They're convinced that gc will send their app to hell. <g> because those languages make it really hard to use malloc. They just don't believe that malloc is trivial to use in D.
Well, I think it's a little D's fault too, because several language features use the GC (and all the stdlib as well). It's trivial to use malloc in D, but, even when it's possible, it's not so easy to completely avoid the GC. You just have to be too careful, and I think the features that use the GC are not very well documented (same for Phobos). So I think the GC-will-get-in-the-way fear in D is not totally unjustified, even when malloc is trivial to use. -- Leandro Lucarella (AKA luca) http://llucax.com.ar/ ---------------------------------------------------------------------- GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145 104C 949E BFB6 5F5A 8D05) ---------------------------------------------------------------------- Vivimos en una época muy contemporánea, Don Inodoro... -- Mendieta
Jan 19 2010
parent reply "Craig Black" <craigblack2 cox.net> writes:
"Leandro Lucarella" <llucax gmail.com> wrote in message 
news:20100119173057.GD14794 llucax.com.ar...
 Walter Bright, el 18 de enero a las 14:31 me escribiste:
 More seriously, I don't expect D to see much usage in the embedded
market unless it becomes a huge success on the PC first (if then).
But nothing you can do on the technical front will change that: it's
mostly due to prejudice and preconceptions, not an actual
cost-benefit evaluation of the language.
Yeah, I know, I run into the pseudo-problem all the time of D using garbage collection. I point out that you can call/use malloc in D as easily as you can in C++, but it makes no difference. They're convinced that gc will send their app to hell. <g> because those languages make it really hard to use malloc. They just don't believe that malloc is trivial to use in D.
Well, I think it's a little D's fault too, because several language features use the GC (and all the stdlib as well). It's trivial to use malloc in D, but, even when it's possible, it's not so easy to completely avoid the GC. You just have to be too careful, and I think the features that use the GC are not very well documented (same for Phobos). So I think the GC-will-get-in-the-way fear in D is not totally unjustified, even when malloc is trivial to use.
I would have to agree and this is one of my causes for hesitation in adopting D. The code I write requires the highest performance possible. I am concerned that when I port it over to D, I will have to avoid using a lot of D features that use the GC (built-in arrays, closures, standard library features, etc.) in order to get the best possible performance. D does not adhere to the C++ zero-overhead principle, and I see this as a risk. So if/when I end up porting my code to D I may evolve my own dialect of D that uses only the subset of features that tend to provide the best performance. I don't know if the GC will send my code to hell, but if the performance drops by more than 20% by porting it from C++ to D, then I would be disappointed. The GC is a huge concern for me, and it's not an unreasonable concern. I got about a 15% performance improvement by switching from the system allocator to nedmalloc, so my code is very sensitive to heap allocation performance. I use dynamic arrays quite a lot, and if I switched all of these to use D's built-in GC-based arrays, I think I would see a tremendous performance drop. Even so, I do hope to use D when I see its stability improve to the point where I can sell it to my boss and my coworkers. There are a lot of great features and a lot of good work being done. The rich feature set of the language is very attractive to me. It can be frustrating that people tend to focus only on the negative aspects. I think D is moving forward at a very good pace, especially when I compare it to C++. The D language and the community that surrounds it have qualities that I have not seen anywhere else. Altogether it seems headed in a promising direction, and I am appreciative of all the work that so many have put into it. -Craig
Jan 19 2010
next sibling parent dsimcha <dsimcha yahoo.com> writes:
== Quote from Craig Black (craigblack2 cox.net)'s article
 I would have to agree and this is one of my causes for hesitation in
 adopting D.  The code I write requires the highest performance possible.  I
 am concerned that when I port it over to D, I will have to avoid using a lot
 of D features that use the GC (built-in arrays, closures, standard library
 features, etc.)  in order to get the best possible performance.  D does not
 adhere to the C++ zero-overhead principle, and I see this as a risk.  So
 if/when I end up porting my code to D I may evolve my own dialect of D that
 uses only the subset of features that tend to provide the best performance.
D's garbage collector is admittedly not that good, but there are some pretty important mitigating factors that you should be aware of: 1. You can use your limited dialect only in the performance-critical parts of your code and program in a more Java-style "just allocate whatever needs to be allocated and let it get GC'd whenever it gets GC'd" way in the other 80% of your code. 2. D provides enough low-level features (most importantly the ability to allocate an untyped memory block) that you can write some pretty efficient custom memory management schemes. You can do object pools pretty well. You can do mark-release pretty well. If you're doing a lot of numerics, you can implement a second stack (similar to Andrei's proposed SuperStack, or my TempAlloc) to efficiently allocate temporary workspace arrays. Furthermore, D provides enough abilities to make these hacks well-encapsulated that they start to appear significantly less ugly than they would in C or C++, where the encapsulation would be weaker. 3. You can use C's malloc (or nedmalloc) from D, though you do have to be careful about making sure any region of the C heap that contains pointers into the GC heap is marked with addRange().
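Point 3 above is worth a small sketch (not from the original post; it uses the current druntime module core.memory, whose GC.addRange/GC.removeRange match what dsimcha describes): a malloc'd block that stores pointers into the GC heap must be registered so the collector can scan it.

```d
import core.memory : GC;
import core.stdc.stdlib : malloc, free;

void main()
{
    int* gcObj = new int;        // lives on the GC heap
    *gcObj = 42;

    // A C-heap slot that will hold a pointer into the GC heap:
    auto slot = cast(int**) malloc((int*).sizeof);
    GC.addRange(slot, (int*).sizeof);   // let the collector scan this block
    *slot = gcObj;                      // gcObj is now kept alive via slot

    // ... use **slot here; it won't be collected out from under us ...
    assert(**slot == 42);

    GC.removeRange(slot);               // unregister before freeing
    free(slot);
}
```

Forgetting the addRange call is the classic failure mode: the collector cannot see the pointer stored in the C heap and may reclaim the object while it is still reachable.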
Jan 19 2010
prev sibling next sibling parent reply Leandro Lucarella <llucax gmail.com> writes:
Craig Black, el 19 de enero a las 17:27 me escribiste:
 
 "Leandro Lucarella" <llucax gmail.com> wrote in message
 news:20100119173057.GD14794 llucax.com.ar...
Walter Bright, el 18 de enero a las 14:31 me escribiste:
 More seriously, I don't expect D to see much usage in the embedded
market unless it becomes a huge success on the PC first (if then).
But nothing you can do on the technical front will change that: it's
mostly due to prejudice and preconceptions, not an actual
cost-benefit evaluation of the language.
Yeah, I know, I run into the pseudo-problem all the time of D using garbage collection. I point out that you can call/use malloc in D as easily as you can in C++, but it makes no difference. They're convinced that gc will send their app to hell. <g> because those languages make it really hard to use malloc. They just don't believe that malloc is trivial to use in D.
Well, I think it's a little D's fault too, because several language features use the GC (and all the stdlib as well). It's trivial to use malloc in D, but, even when it's possible, it's not so easy to completely avoid the GC. You just have to be too careful, and I think the features that use the GC are not very well documented (same for Phobos). So I think the GC-will-get-in-the-way fear in D is not totally unjustified, even when malloc is trivial to use.
I would have to agree and this is one of my causes for hesitation in adopting D. The code I write requires the highest performance possible. I am concerned that when I port it over to D, I will have to avoid using a lot of D features that use the GC (built-in arrays, closures, standard library features, etc.) in order to get the best possible performance. D does not adhere to the C++ zero-overhead principle, and I see this as a risk. So if/when I end up porting my code to D I may evolve my own dialect of D that uses only the subset of features that tend to provide the best performance.
One thing that can help a lot here is an option for the compiler to avoid compiling stuff that implicitly calls the GC (LDC has an option to avoid runtime calls altogether, -noruntime, but maybe that's too extreme). That just helps avoid hidden GC usage; you still have to use your own dialect and you probably have to avoid Phobos too.
 I don't know if the GC will send my code to hell, but if the
 performance drops by more than 20% by porting it from C++ to D, then
 I would be disappointed.  The GC is a huge concern for me, and it's
 not an unreasonable concern.  I got about a 15% performance
 improvement by switching from the system allocator to nedmalloc, so
 my code is very sensitive to heap allocation performance.  I use
 dynamic arrays quite a lot, and if I switched all of these to use
 D's built-in GC-based arrays, I think I would see a tremendous
 performance drop.
Allocation could be very slow, especially in MT programs because of the GC lock. But you can preallocate or use other tricks like the ones that David mentioned. But my point still stands: even when possible, it is far from trivial to code in D while avoiding the GC. -- Leandro Lucarella (AKA luca) http://llucax.com.ar/ ---------------------------------------------------------------------- GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145 104C 949E BFB6 5F5A 8D05) ---------------------------------------------------------------------- RENUNCIO PARA IR A REZARLE A SAN CAYETANO -- Crónica TV
Jan 19 2010
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Leandro Lucarella wrote:
 Craig Black, el 19 de enero a las 17:27 me escribiste:
 "Leandro Lucarella" <llucax gmail.com> wrote in message
 news:20100119173057.GD14794 llucax.com.ar...
 Walter Bright, el 18 de enero a las 14:31 me escribiste:
 More seriously, I don't expect D to see much usage in the embedded
 market unless it becomes a huge success on the PC first (if then).
 But nothing you can do on the technical front will change that: it's
 mostly due to prejudice and preconceptions, not an actual
 cost-benefit evaluation of the language.
Yeah, I know, I run into the pseudo-problem all the time of D using garbage collection. I point out that you can call/use malloc in D as easily as you can in C++, but it makes no difference. They're convinced that gc will send their app to hell. <g> because those languages make it really hard to use malloc. They just don't believe that malloc is trivial to use in D.
Well, I think it's a little D's fault too, because several language features use the GC (and all the stdlib as well). It's trivial to use malloc in D, but, even when it's possible, it's not so easy to completely avoid the GC. You just have to be too careful, and I think the features that use the GC are not very well documented (same for Phobos). So I think the GC-will-get-in-the-way fear in D is not totally unjustified, even when malloc is trivial to use.
I would have to agree and this is one of my causes for hesisation in adopting D. The code I write requires the highest performance possible. I am concerned that when I port it over to D, I will have to avoid using a lot of D features that use the GC (built-in arrays, closures, standard library features, etc.) in order to get the best possible performance. D does not adhere to the C++ zero-overhead principle, and I see this as a risk. So if/when I end up porting my code to D I may evolve my own dialect of D that uses only the subset of features that tend to provide the best performance.
One thing that can help a lot here is an option for the compiler to avoid compiling stuff that implicitly call the GC (LDC has an option to avoid runtime calls altogether, -noruntime, but maybe that's too extreme). That just helps to avoiding hidden GC usage, you still have to use your own dialect and you probably have to avoid Phobos too.
I'd love -nogc. Then we can think of designing parts of Phobos to work under that regime. Andrei
Jan 19 2010
next sibling parent "Craig Black" <craigblack2 cox.net> writes:
"Andrei Alexandrescu" <SeeWebsiteForEmail erdani.org> wrote in message 
news:hj5llq$tqb$1 digitalmars.com...
 Leandro Lucarella wrote:
 Craig Black, el 19 de enero a las 17:27 me escribiste:
 "Leandro Lucarella" <llucax gmail.com> wrote in message
 news:20100119173057.GD14794 llucax.com.ar...
 Walter Bright, el 18 de enero a las 14:31 me escribiste:
 More seriously, I don't expect D to see much usage in the embedded
 market unless it becomes a huge success on the PC first (if then).
 But nothing you can do on the technical front will change that: it's
 mostly due to prejudice and preconceptions, not an actual
 cost-benefit evaluation of the language.
Yeah, I know, I run into the pseudo-problem all the time of D using garbage collection. I point out that you can call/use malloc in D as easily as you can in C++, but it makes no difference. They're convinced that gc will send their app to hell. <g> because those languages make it really hard to use malloc. They just don't believe that malloc is trivial to use in D.
Well, I think it's a little D's fault too, because several language features use the GC (and all the stdlib as well). It's trivial to use malloc in D, but, even when it's possible, it's not so easy to completely avoid the GC. You just have to be too careful, and I think the features that use the GC are not very well documented (same for Phobos). So I think the GC-will-get-in-the-way fear in D is not totally unjustified, even when malloc is trivial to use.
I would have to agree and this is one of my causes for hesisation in adopting D. The code I write requires the highest performance possible. I am concerned that when I port it over to D, I will have to avoid using a lot of D features that use the GC (built-in arrays, closures, standard library features, etc.) in order to get the best possible performance. D does not adhere to the C++ zero-overhead principle, and I see this as a risk. So if/when I end up porting my code to D I may evolve my own dialect of D that uses only the subset of features that tend to provide the best performance.
One thing that can help a lot here is an option for the compiler to avoid compiling stuff that implicitly call the GC (LDC has an option to avoid runtime calls altogether, -noruntime, but maybe that's too extreme). That just helps to avoiding hidden GC usage, you still have to use your own dialect and you probably have to avoid Phobos too.
I'd love -nogc. Then we can think of designing parts of Phobos to work under that regime. Andrei
That would be great. Especially if there was a GC-free implementation of D's built-in arrays. Or at the very least a GC-free template for dynamic arrays in Phobos. -Craig
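A hedged sketch of what such a GC-free dynamic array template could look like; RawArray and its methods are invented here for illustration, and nothing of this exact shape ships in Phobos:

```d
import core.stdc.stdlib : free, realloc;

// A minimal malloc/realloc-backed growable array, never touching the GC.
struct RawArray(T)
{
    private T* ptr;
    private size_t len, cap;

    void push(T value)
    {
        if (len == cap)
        {
            cap = cap ? cap * 2 : 4;              // geometric growth
            ptr = cast(T*) realloc(ptr, cap * T.sizeof);
            assert(ptr !is null, "out of memory");
        }
        ptr[len++] = value;
    }

    size_t length() const { return len; }
    ref T opIndex(size_t i) { return ptr[i]; }

    void dispose() { free(ptr); ptr = null; len = cap = 0; }
}

void main()
{
    RawArray!int a;
    scope (exit) a.dispose();   // deterministic cleanup, no collector
    foreach (i; 0 .. 10) a.push(i);
    assert(a.length == 10 && a[9] == 9);
}
```

Note that if T itself contained pointers into the GC heap, the buffer would additionally need to be registered with GC.addRange, as discussed earlier in the thread.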
Jan 19 2010
prev sibling next sibling parent Leandro Lucarella <llucax gmail.com> writes:
Andrei Alexandrescu, el 19 de enero a las 17:18 me escribiste:
 I would have to agree and this is one of my causes for hesitation in
adopting D.  The code I write requires the highest performance
possible.  I am concerned that when I port it over to D, I will have
to avoid using a lot of D features that use the GC (built-in arrays,
closures, standard library features, etc.)  in order to get the best
possible performance.  D does not adhere to the C++ zero-overhead
principle, and I see this as a risk.  So if/when I end up porting my
code to D I may evolve my own dialect of D that uses only the subset
of features that tend to provide the best performance.
One thing that can help a lot here is an option for the compiler to avoid compiling stuff that implicitly call the GC (LDC has an option to avoid runtime calls altogether, -noruntime, but maybe that's too extreme). That just helps to avoiding hidden GC usage, you still have to use your own dialect and you probably have to avoid Phobos too.
I'd love -nogc. Then we can think of designing parts of Phobos to work under that regime.
It's nice to know there is some interest in this. Maybe some day we can have an EmbeddeD ;) -- Leandro Lucarella (AKA luca) http://llucax.com.ar/ ---------------------------------------------------------------------- GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145 104C 949E BFB6 5F5A 8D05) ---------------------------------------------------------------------- Y Gloria Carrá, Gloria Estephan, Gloria Gaynor y Gloria Trevi. -- Peperino Pómoro
Jan 19 2010
prev sibling parent reply bearophile <bearophileHUGS lycos.com> writes:
Andrei Alexandrescu:
 I'd love -nogc. Then we can think of designing parts of Phobos to work 
 under that regime.
But you must do this with a lot of care: programmers coming from C++ will be tempted to write code that uses those GC-free parts of Phobos a lot; the end result will be a lot of D code in the wild that's like C++ or worse. So when you want to use one of those modules or libraries, you may need to dance their no-GC dance. This can invalidate the good idea of designing a GC-based language. A better strategy is first of all to improve the D GC a lot, if necessary introducing other details in the language to help the design of a more efficient GC (like giving ways to tell apart pinned objects from normal ones, making the unpinned ones the default, and modifying the type system so mixing pinned-memory and unpinned-memory pointers is generally safe, etc). Only when further improvements to the GC become too hard can you start to write no-GC parts of Phobos, a few years from now. I have seen many cases where Java code run with HotSpot is faster than very similar D1 code compiled with LDC. Avoiding the GC is an easy shortcut, but I think it's not a good long-term strategy for D. Bye, bearophile
Jan 19 2010
next sibling parent reply BCS <none anon.com> writes:
Hello bearophile,

 Andrei Alexandrescu:
 
 I'd love -nogc. Then we can think of designing parts of Phobos to
 work under that regime.
 
But you must do this with lot of care: programmers coming from C++ will be tempted to write code that uses those GC-free parts of Phobos a lot, the end result will be a lot of D code in the wild that's like C++ or worse. So when you want to use one of those modules or libraries, you may need to dance their no-GC dance. This can invalidate the good idea of designing a GC-based language.
I think the approach would be to take whatever parts of Phobos you can make work without the GC and *without making them suck* and do so. Also, given that -nogc could be done just as a static check without any effect on the emitted code, any code that is valid without a GC is valid with it (aside from the issue of the GC not being able to find pointers, but I don't think that applies here).
 A better strategy is first of all to improve a lot the D GC, if
That's true regardless :)
 necessary to introduce in the language other details to help the
 design of a more efficient GC (like giving ways to tell apart pinned
 objects from normal ones, make the unpinned ones the default ones, and
 modify the type system so mixing pinned-memory and unpinned-memory
 pointers is generally safe, etc). Only when further improvements to
 the GC become too much hard, you can start to write no-GC parts of
 Phobos, few years from now.
 
 I have seen many cases where Java code run with HotSpot is faster than
 very similar D1 code compiled with LDC. Avoiding the GC is a easy
 shortcut, but I think it's not a good long-term strategy for D.
Being ABLE to avoid it is always a plus. One use I see is perf-critical code kernels being compiled with -nogc and linked to from non-critical code compiled without -nogc.
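For illustration, the kind of allocation-free kernel BCS describes might look like the following sketch. The @nogc attribute shown is the per-function static check D eventually gained, which realizes exactly this pattern; it did not exist at the time of this thread.

```d
// A performance-critical kernel: statically checked to perform no GC
// allocation, yet freely callable from ordinary GC-using code.
@nogc pure double dot(const double[] a, const double[] b)
{
    assert(a.length == b.length);
    double s = 0;
    foreach (i; 0 .. a.length)
        s += a[i] * b[i];
    return s;
}

void main()
{
    // The call site may still use GC-allocated arrays as usual:
    auto a = [1.0, 2.0, 3.0];
    auto b = [4.0, 5.0, 6.0];
    assert(dot(a, b) == 32.0);
}
```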
 
 Bye,
 bearophile
Jan 19 2010
parent reply bearophile <bearophileHUGS lycos.com> writes:
BCS:
 A better strategy is first of all to improve a lot the D GC, if
That's true regardless :)
I don't agree, because that idea of mine can be wrong :-) What I was saying is that first you improve the GC performance (if necessary modifying the language too) and you don't write parts of Phobos designed to not use the GC. And a few years later (when there's already some amount of D2 code in the wild that uses the GC), when improving the GC is not possible anymore, then you try to squeeze the lemon some more by avoiding the GC :-) Bye, bearophile
Jan 19 2010
parent BCS <none anon.com> writes:
Hello bearophile,

 BCS:
 
 A better strategy is first of all to improve a lot the D GC, if
 
That's true regardless :)
I don't agree, because that idea of mine can be wrong :-) What I was saying is that first you improve the GC performance (if necessary modifying the language too) and you don't write parts of Phobos designed to not use the GC. And few years later (when there's already some amount of D2 code in the wild that uses the GC) when improving the GC is not possible anymore then you try to squeeze the lemon some more avoiding to use the GC :-)
I'm not following. Improving the GC (without changing the language) benefits everyone who uses it and hurts no one. Also, I'm puzzled as to why anyone would be opposed to having parts of phobos that don't use the GC. I could see a point if someone wanted to make something that wouldn't work if you DO have the GC running or if it required some ugly hacks to get rid of it but I don't think that's what anyone is talking about.
 Bye,
 bearophile
Jan 20 2010
prev sibling next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
bearophile wrote:
 Andrei Alexandrescu:
 I'd love -nogc. Then we can think of designing parts of Phobos to work 
 under that regime.
But you must do this with lot of care: programmers coming from C++ will be tempted to write code that uses those GC-free parts of Phobos a lot, the end result will be a lot of D code in the wild that's like C++ or worse. So when you want to use one of those modules or libraries, you may need to dance their no-GC dance. This can invalidate the good idea of designing a GC-based language. A better strategy is first of all to improve a lot the D GC, if necessary to introduce in the language other details to help the design of a more efficient GC (like giving ways to tell apart pinned objects from normal ones, make the unpinned ones the default ones, and modify the type system so mixing pinned-memory and unpinned-memory pointers is generally safe, etc). Only when further improvements to the GC become too much hard, you can start to write no-GC parts of Phobos, few years from now. I have seen many cases where Java code run with HotSpot is faster than very similar D1 code compiled with LDC. Avoiding the GC is a easy shortcut, but I think it's not a good long-term strategy for D. Bye, bearophile
Walter and I talked for hours about a no-gc model for D, and the conclusion was that with only a little compiler support, things can be arranged such that, by swapping in different object.d implementations, the entire D allocation model can be switched between traditional GC and reference counting. But there's a long way from here to there. One essential thing to be done is to transform built-in arrays into normal types defined in object.d. (Walter just did this for associative arrays.) Then some special steps must be taken about Object and the semantics of new. Essentially all allocation primitives must forward to functions or template functions defined in object.d. With such a system in place, object.d can essentially gain complete control over a program's memory management policy. I don't have the time to pursue this at the moment, but I'm sure I will. Walter and I are very convinced that the approach based on rewriting/lowering is very promising. Andrei
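The lowering idea can be sketched roughly as follows; allocateInstance is an invented name for the kind of hook object.d could define, not an actual druntime symbol, and a real implementation would involve much more (Object, arrays, closures):

```d
import core.stdc.stdlib : malloc;

// Hypothetical hook as object.d might define it under a malloc-based
// policy; a GC-based or refcounting object.d would define the same name
// with a different body, swapping the whole allocation model.
T* allocateInstance(T)()
{
    auto p = cast(T*) malloc(T.sizeof);
    assert(p !is null, "out of memory");
    *p = T.init;                 // default-initialize, as `new` would
    return p;
}

struct S { int x = 7; }

void main()
{
    // What the compiler could lower `new S` into:
    auto s = allocateInstance!S();
    assert(s.x == 7);
}
```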
Jan 19 2010
next sibling parent bearophile <bearophileHUGS lycos.com> writes:
Andrei Alexandrescu:
With such a system in place, object.d can essentially gain entire control about
an entire program's memory management policy.<
This is interesting, thank you for your answer. Today you have to design a language not just for what's good for the single programmer, and not even just for what's good for a good-sized group of programmers working on a single program, but also for what's good for the community of programmers that will use the language and how they will share reusable components of programs (like the modules you can find in the Python and Perl communities); you need an ecological point of view too :-) Sometimes what's good for the community is not the best for the single programmer. Languages today become successful thanks to their community and the way they are developed (for example Guido van Rossum is very good at creating an open source community of people interested in developing the Python language and its interpreter. Walter has been improving in this regard in the last few years, but some further improvements can be quite useful or even necessary if he wants D2/D3 to succeed. The good thing is that Walter is not frozen in place yet; he slowly keeps improving still, and this is probably the second most important thing to do to have success in life). So this whole gc/refcount design has to take into account the needs of good modular programming too: what happens if you want to create a program using modules and packages written by different people that have different ideas/needs regarding how to manage memory? An advantage of the current design, with one GC, is that I think such inter-part interaction problems are much less present. Bye, bearophile
Jan 20 2010
prev sibling next sibling parent reply retard <re tard.com.invalid> writes:
Tue, 19 Jan 2010 23:17:44 -0800, Andrei Alexandrescu wrote:

 Walter and I are very convinced that the approach based on
 rewriting/lowering is very promising.
This sounds really good. I'm sure this could be extended to other built-in types as well.
Jan 20 2010
parent reply dennis luehring <dl.soluz gmx.net> writes:
On 20.01.2010 12:42, retard wrote:
 Tue, 19 Jan 2010 23:17:44 -0800, Andrei Alexandrescu wrote:

  Walter and I are very convinced that the approach based on
  rewriting/lowering is very promising.
This sounds really good. I'm sure this could be extended to other built-in types as well.
I don't think it's needed for types other than arrays, because of the slicing.
Jan 20 2010
parent Tuple!() <tuple dataty.pe> writes:
dennis luehring Wrote:

 Am 20.01.2010 12:42, schrieb retard:
 Tue, 19 Jan 2010 23:17:44 -0800, Andrei Alexandrescu wrote:

  Walter and I are very convinced that the approach based on
  rewriting/lowering is very promising.
This sounds really good. I'm sure this could be extended to other built-in types as well.
I don't think it's needed for types other than arrays, because of the slicing.
Runtime tuples come to mind. Also complex numbers etc. But we already know that the complex number issue has already been decided, so let's concentrate on the tuple answer from Walter :)
Jan 20 2010
prev sibling parent reply Leandro Lucarella <llucax gmail.com> writes:
Andrei Alexandrescu, el 19 de enero a las 23:17 me escribiste:
 bearophile wrote:
Andrei Alexandrescu:
I'd love -nogc. Then we can think of designing parts of Phobos
to work under that regime.
But you must do this with a lot of care: programmers coming from C++ will be tempted to write code that uses those GC-free parts of Phobos a lot, and the end result will be a lot of D code in the wild that's like C++ or worse. So when you want to use one of those modules or libraries, you may need to dance their no-GC dance. This can invalidate the good idea of designing a GC-based language. A better strategy is first of all to improve the D GC a lot, if necessary introducing other details into the language to help the design of a more efficient GC (like giving ways to tell pinned objects apart from normal ones, making unpinned objects the default, and modifying the type system so that mixing pinned-memory and unpinned-memory pointers is generally safe, etc.). Only when further improvements to the GC become too hard should you start to write no-GC parts of Phobos, a few years from now. I have seen many cases where Java code run with HotSpot is faster than very similar D1 code compiled with LDC. Avoiding the GC is an easy shortcut, but I think it's not a good long-term strategy for D. Bye, bearophile
Walter and I talked for hours about a no-gc model for D, and the conclusion was that with only a little compiler support, things can be arranged such that swapping different object.d implementations, the entire D allocation model can be swapped between traditional GC and reference counting.
Again? RC is *not* -nogc, it's -anothergc. And reference counting won't do the trick unless you add a backing GC to free cycles. What I mean by -nogc is *no* GC: "please, mr compiler, give me an error when a GC facility is used". -- Leandro Lucarella (AKA luca) http://llucax.com.ar/ ---------------------------------------------------------------------- GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145 104C 949E BFB6 5F5A 8D05) ---------------------------------------------------------------------- <Damian_Des> Me anDa MaL eL CaPSLoCK
Jan 20 2010
next sibling parent reply "Danny Wilson" <danny decube.net> writes:
On Wed, 20 Jan 2010 14:18:52 +0100, Leandro Lucarella <llucax gmail.com>  
wrote:

 Again? RC is *not* -nogc, is -anothergc. And reference counting won't do
 the trick unless you add a backing GC to free cycles. What I mean about
 -nogc is *no* GC, is "please, mr compiler, give me an error when a GC
 facility is used".
I guess a custom object.d which static asserts as soon as a GC facility is used would do that trick?
Jan 20 2010
parent reply Leandro Lucarella <llucax gmail.com> writes:
Danny Wilson, el 20 de enero a las 16:44 me escribiste:
 On Wed, 20 Jan 2010 14:18:52 +0100, Leandro Lucarella
 <llucax gmail.com> wrote:
 
Again? RC is *not* -nogc, is -anothergc. And reference counting won't do
the trick unless you add a backing GC to free cycles. What I mean about
-nogc is *no* GC, is "please, mr compiler, give me an error when a GC
facility is used".
I guess a custom object.d which static asserts as soon as a GC facility is used would do that trick?
I don't see how you can do that, all the code calling the GC is in the runtime AFAIK. -- Leandro Lucarella (AKA luca) http://llucax.com.ar/ ---------------------------------------------------------------------- GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145 104C 949E BFB6 5F5A 8D05) ---------------------------------------------------------------------- Your success is measured by your ability to finish things
Jan 20 2010
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Leandro Lucarella wrote:
 Danny Wilson, el 20 de enero a las 16:44 me escribiste:
 On Wed, 20 Jan 2010 14:18:52 +0100, Leandro Lucarella
 <llucax gmail.com> wrote:

 Again? RC is *not* -nogc, is -anothergc. And reference counting won't do
 the trick unless you add a backing GC to free cycles. What I mean about
 -nogc is *no* GC, is "please, mr compiler, give me an error when a GC
 facility is used".
I guess a custom object.d which static asserts as soon as a GC facility is used would do that trick?
I don't see how you can do that, all the code calling the GC is in the runtime AFAIK.
Things can be arranged such that all calls originate in object.d. Andrei
Jan 20 2010
parent reply Leandro Lucarella <llucax gmail.com> writes:
Andrei Alexandrescu, el 20 de enero a las 17:39 me escribiste:
 Leandro Lucarella wrote:
Danny Wilson, el 20 de enero a las 16:44 me escribiste:
On Wed, 20 Jan 2010 14:18:52 +0100, Leandro Lucarella
<llucax gmail.com> wrote:

Again? RC is *not* -nogc, is -anothergc. And reference counting won't do
the trick unless you add a backing GC to free cycles. What I mean about
-nogc is *no* GC, is "please, mr compiler, give me an error when a GC
facility is used".
I guess a custom object.d which static asserts as soon as a GC facility is used would do that trick?
I don't see how you can do that, all the code calling the GC is in the runtime AFAIK.
Things can be arranged such that all calls originate in object.d.
But that would mean moving a lot of functionality from the runtime to object.d, which doesn't seem to be practical. -- Leandro Lucarella (AKA luca) http://llucax.com.ar/ ---------------------------------------------------------------------- GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145 104C 949E BFB6 5F5A 8D05) ---------------------------------------------------------------------- VECINOS RESCATARON A CABALLITO ATROPELLADO -- Crónica TV
Jan 20 2010
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Leandro Lucarella wrote:
 Andrei Alexandrescu, el 20 de enero a las 17:39 me escribiste:
 Leandro Lucarella wrote:
 Danny Wilson, el 20 de enero a las 16:44 me escribiste:
 On Wed, 20 Jan 2010 14:18:52 +0100, Leandro Lucarella
 <llucax gmail.com> wrote:

 Again? RC is *not* -nogc, is -anothergc. And reference counting won't do
 the trick unless you add a backing GC to free cycles. What I mean about
 -nogc is *no* GC, is "please, mr compiler, give me an error when a GC
 facility is used".
I guess a custom object.d which static asserts as soon as a GC facility is used would do that trick?
I don't see how you can do that, all the code calling the GC is in the runtime AFAIK.
Things can be arranged such that all calls originate in object.d.
But that would mean moving a lot of funcionality from the runtime to object.d, which doesn't seems to be practical.
Well someone needs to do it to assess practicality and other consequences. Andrei
Jan 20 2010
parent reply Leandro Lucarella <llucax gmail.com> writes:
Andrei Alexandrescu, el 20 de enero a las 19:13 me escribiste:
 Leandro Lucarella wrote:
Andrei Alexandrescu, el 20 de enero a las 17:39 me escribiste:
Leandro Lucarella wrote:
Danny Wilson, el 20 de enero a las 16:44 me escribiste:
On Wed, 20 Jan 2010 14:18:52 +0100, Leandro Lucarella
<llucax gmail.com> wrote:

Again? RC is *not* -nogc, is -anothergc. And reference counting won't do
the trick unless you add a backing GC to free cycles. What I mean about
-nogc is *no* GC, is "please, mr compiler, give me an error when a GC
facility is used".
I guess a custom object.d which static asserts as soon as a GC facility is used would do that trick?
I don't see how you can do that, all the code calling the GC is in the runtime AFAIK.
Things can be arranged such that all calls originate in object.d.
But that would mean moving a lot of funcionality from the runtime to object.d, which doesn't seems to be practical.
Well someone needs to do it to assess practicality and other consequences.
Having half the runtime in object.d is not practical, come on! object.d was supposed to be the file defining the Object class, that's it! -- Leandro Lucarella (AKA luca) http://llucax.com.ar/ ---------------------------------------------------------------------- GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145 104C 949E BFB6 5F5A 8D05) ---------------------------------------------------------------------- Y2K - what a disappointment... i had at least expected one nuclear plant to blow
Jan 21 2010
parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Leandro Lucarella wrote:
 Andrei Alexandrescu, el 20 de enero a las 19:13 me escribiste:
 Leandro Lucarella wrote:
 Andrei Alexandrescu, el 20 de enero a las 17:39 me escribiste:
 Leandro Lucarella wrote:
 Danny Wilson, el 20 de enero a las 16:44 me escribiste:
 On Wed, 20 Jan 2010 14:18:52 +0100, Leandro Lucarella
 <llucax gmail.com> wrote:

 Again? RC is *not* -nogc, is -anothergc. And reference counting won't do
 the trick unless you add a backing GC to free cycles. What I mean about
 -nogc is *no* GC, is "please, mr compiler, give me an error when a GC
 facility is used".
I guess a custom object.d which static asserts as soon as a GC facility is used would do that trick?
I don't see how you can do that, all the code calling the GC is in the runtime AFAIK.
Things can be arranged such that all calls originate in object.d.
But that would mean moving a lot of funcionality from the runtime to object.d, which doesn't seems to be practical.
Well someone needs to do it to assess practicality and other consequences.
Having half the runtime in object.d is not practical, come on! object.d was supposed to be the file defining the Object class, that's it!
It doesn't have to contain the runtime, only the entry points to it. Andrei
Jan 21 2010
prev sibling next sibling parent "Adam D. Ruppe" <destructionator gmail.com> writes:
On Wed, Jan 20, 2010 at 10:18:52AM -0300, Leandro Lucarella wrote:
 Walter and I talked for hours about a no-gc model for D, and the
 conclusion was that with only a little compiler support, things can
 be arranged such that swapping different object.d implementations,
 the entire D allocation model can be swapped between traditional GC
 and reference counting.
Again? RC is *not* -nogc, is -anothergc. And reference counting won't do the trick unless you add a backing GC to free cycles. What I mean about -nogc is *no* GC, is "please, mr compiler, give me an error when a GC facility is used".
The changes done to the compiler to support this should open the window for nogc too. If all those types were in object.d, we should (hopefully) have the option of using an object.d with the relevant functions stubbed with static assert(0); -- Adam D. Ruppe http://arsdnet.net
Jan 20 2010
prev sibling next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Leandro Lucarella wrote:
 Again? RC is *not* -nogc, is -anothergc.
I agree. With reference counting, you'd be no worse off than a C++ project that decided to use refcounted smart pointers for all allocated objects. That sounds good to me.
 And reference counting won't do
 the trick unless you add a backing GC to free cycles.
Well there are techniques for lifting cycles but really I think it wouldn't be bad if the user were given the possibility (e.g. weak pointers).
 What I mean about
 -nogc is *no* GC, is "please, mr compiler, give me an error when a GC
 facility is used".
I know. That could be another object.d implementation that would disable certain functions. The nice part about refcounting is that for the most part you don't need to cripple the language. Andrei
Jan 20 2010
next sibling parent reply BCS <none anon.com> writes:
Hello Andrei,

 The nice part about refcounting is that for the most part you don't
 need to cripple the language.
 
I think people are trying to say that disallowing use of GC stuff wouldn't cripple the language. Also, there is one thing that -nogc would have over what you are talking about: you could use it on some modules and not others. If I have some performance-critical code where attempting to use the GC would break its perf contract, I can put it in its own module, compile just that module with -nogc, and then link it in with code that does use the GC.
Jan 20 2010
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
BCS wrote:
 Hello Andrei,
 
 The nice part about refcounting is that for the most part you don't
 need to cripple the language.
I think people are trying to say that disallowing use of GC stuff wouldn't cripple the language.
Well it's a fact that there would be fewer idioms and options accessible. So I didn't mean it in a derogatory way as much as a factual statement.
 Also there is one thing that -nogc would have over what you are talking 
 about; you could use it on some modules and not others. If I have some 
 performance critical code where attempting to use the GC would break 
 it's perf contract, I can put it in it's own module and compile just it 
 with -nogc and then link it in with code that does use the GC.
Meh. This has been discussed in the C++ standardization committee, and it gets really tricky really fast when you e.g. use several libraries together, each with its own view of memory management. My impression: don't. Andrei
Jan 20 2010
next sibling parent reply "Craig Black" <craigblack2 cox.net> writes:
"Andrei Alexandrescu" <SeeWebsiteForEmail erdani.org> wrote in message 
news:hj7vnu$2008$1 digitalmars.com...
 BCS wrote:
 Hello Andrei,

 The nice part about refcounting is that for the most part you don't
 need to cripple the language.
I think people are trying to say that disallowing use of GC stuff wouldn't cripple the language.
Well it's a fact that there would be fewer idioms and options accessible. So I didn't mean it in a derogatory way as much as a factual statement.
 Also there is one thing that -nogc would have over what you are talking 
 about; you could use it on some modules and not others. If I have some 
 performance critical code where attempting to use the GC would break it's 
 perf contract, I can put it in it's own module and compile just it 
 with -nogc and then link it in with code that does use the GC.
Meh. This has been discussed in the C++ standardization committee, and it gets really tricky real fast when you e.g. use together several libraries, each with its own view of memory management. My impression: don't.
There are certainly challenges, even perhaps some that I haven't thought of, with mixing manual memory management and GC code. But perhaps there is a bigger problem with the approach you describe. Correct me if I'm wrong, but it seems that what you propose is a -nogc option that would fragment all D code into two incompatible groups. If my -nogc code could not be used with anyone else's D code, then what I have is not a dialect of D at all, but a different incompatible language. I know this issue is a challenging problem from a language design standpoint, and I see why you would have some disdain for it. However, this is not an impossible problem to solve. For example, I believe Managed C++ does this, albeit with some weird syntax, but this proves that it can be done. Anyway, just my 2 cents. -Craig
Jan 20 2010
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Craig Black wrote:
 
 "Andrei Alexandrescu" <SeeWebsiteForEmail erdani.org> wrote in message 
 news:hj7vnu$2008$1 digitalmars.com...
 BCS wrote:
 Hello Andrei,

 The nice part about refcounting is that for the most part you don't
 need to cripple the language.
I think people are trying to say that disallowing use of GC stuff wouldn't cripple the language.
Well it's a fact that there would be fewer idioms and options accessible. So I didn't mean it in a derogatory way as much as a factual statement.
 Also there is one thing that -nogc would have over what you are 
 talking about; you could use it on some modules and not others. If I 
 have some performance critical code where attempting to use the GC 
 would break it's perf contract, I can put it in it's own module and 
 compile just it with -nogc and then link it in with code that does 
 use the GC.
Meh. This has been discussed in the C++ standardization committee, and it gets really tricky real fast when you e.g. use together several libraries, each with its own view of memory management. My impression: don't.
There are certainly challenges, even perhaps some that I haven't thought of, with mixing manual memory management and GC code. But perhaps there is a bigger problem with the approach you describe. Correct me if I'm wrong, but it seems that what you propose is a -nogc option that would fragment all D code into two incompatible groups. If my -nogc code could not be used with anyone else's D code, then what I have is not a dialect of D at all, but a different incompatible language. I know this issue is a challenging problem from a language design standpoint, and I see why you would have some disdain for it. However, this is not an impossible problem to solve. For example, I believe Managed C++ does this, albeit with some weird syntax, but this proves that it can be done. Anyway, just my 2 cents.
It's reasonable to say that you decide at application design level what memory management approach you want to choose. That doesn't fragment the community. The decision is similar to many others made at the same level: libraries used, build flags, target platform(s), pointer size (32 vs. 64, not an option yet for dmd), etc. Andrei
Jan 20 2010
next sibling parent reply BCS <none anon.com> writes:
Hello Andrei,

 It's reasonable to say that you decide at application design level
 what memory management approach you want to choose. That doesn't
 fragment the community. The decision is similar to many others made at
 the same level: libraries used, build flags, target platform(s),
 pointer size (32 vs. 64, not an option yet for dmd), etc.
 
IIRC you can right now point to code that makes the memory management choice at a much finer level: lots of graphics and other types of RT code have large chunks of code that do zero allocation (they don't even use malloc) but rather keep externally supplied buffers around to work with. In D, where detecting allocations is harder than just grepping for malloc, having compiler support would be really nice.
Jan 20 2010
parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
BCS wrote:
 Hello Andrei,
 
 It's reasonable to say that you decide at application design level
 what memory management approach you want to choose. That doesn't
 fragment the community. The decision is similar to many others made at
 the same level: libraries used, build flags, target platform(s),
 pointer size (32 vs. 64, not an option yet for dmd), etc.
IIRC you can right now point to code that makes the memory management choice at a much finer level: lots of graphics and other types of RT code have large chunks of code that do zero allocation (they don't even uses malloc) but rather keep externally supplied buffers around to work with.
Nonono. That's not a memory management choice. It's a choice to avoid any memory management altogether.
 In D where detecting allocations is harder than just greping for 
 malloc, having compiler support would be really nice.
I don't think the language needs to cater to all special needs. Tomorrow someone comes and says they don't want to deal with unsigned integers. Oh wait... Andrei
Jan 20 2010
prev sibling parent reply "Craig Black" <craigblack2 cox.net> writes:
"Andrei Alexandrescu" <SeeWebsiteForEmail erdani.org> wrote in message 
news:hj8gd7$2soe$1 digitalmars.com...
 Craig Black wrote:
 "Andrei Alexandrescu" <SeeWebsiteForEmail erdani.org> wrote in message 
 news:hj7vnu$2008$1 digitalmars.com...
 BCS wrote:
 Hello Andrei,

 The nice part about refcounting is that for the most part you don't
 need to cripple the language.
I think people are trying to say that disallowing use of GC stuff wouldn't cripple the language.
Well it's a fact that there would be fewer idioms and options accessible. So I didn't mean it in a derogatory way as much as a factual statement.
 Also there is one thing that -nogc would have over what you are talking 
 about; you could use it on some modules and not others. If I have some 
 performance critical code where attempting to use the GC would break 
 it's perf contract, I can put it in it's own module and compile just it 
 with -nogc and then link it in with code that does use the GC.
Meh. This has been discussed in the C++ standardization committee, and it gets really tricky real fast when you e.g. use together several libraries, each with its own view of memory management. My impression: don't.
There are certainly challenges, even perhaps some that I haven't thought of, with mixing manual memory management and GC code. But perhaps there is a bigger problem with the approach you describe. Correct me if I'm wrong, but it seems that what you propose is a -nogc option that would fragment all D code into two incompatible groups. If my -nogc code could not be used with anyone else's D code, then what I have is not a dialect of D at all, but a different incompatible language. I know this issue is a challenging problem from a language design standpoint, and I see why you would have some disdain for it. However, this is not an impossible problem to solve. For example, I believe Managed C++ does this, albeit with some weird syntax, but this proves that it can be done. Anyway, just my 2 cents.
It's reasonable to say that you decide at application design level what memory management approach you want to choose. That doesn't fragment the community. The decision is similar to many others made at the same level: libraries used, build flags, target platform(s), pointer size (32 vs. 64, not an option yet for dmd), etc. Andrei
Are we talking about the same thing here? The things you mention here do not create two incompatible styles of programming. I would like to point out how great it is that Phobos and Tango can be compiled together in the same application. Now they are interoperable. If -nogc code can't be used together with GC code, then we create a schism similar to having two incompatible "standard" libraries. -Craig
Jan 21 2010
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Craig Black wrote:
 
 "Andrei Alexandrescu" <SeeWebsiteForEmail erdani.org> wrote in message 
 news:hj8gd7$2soe$1 digitalmars.com...
 Craig Black wrote:
 "Andrei Alexandrescu" <SeeWebsiteForEmail erdani.org> wrote in 
 message news:hj7vnu$2008$1 digitalmars.com...
 BCS wrote:
 Hello Andrei,

 The nice part about refcounting is that for the most part you don't
 need to cripple the language.
I think people are trying to say that disallowing use of GC stuff wouldn't cripple the language.
Well it's a fact that there would be fewer idioms and options accessible. So I didn't mean it in a derogatory way as much as a factual statement.
 Also there is one thing that -nogc would have over what you are 
 talking about; you could use it on some modules and not others. If 
 I have some performance critical code where attempting to use the 
 GC would break it's perf contract, I can put it in it's own module 
 and compile just it with -nogc and then link it in with code that 
 does use the GC.
Meh. This has been discussed in the C++ standardization committee, and it gets really tricky real fast when you e.g. use together several libraries, each with its own view of memory management. My impression: don't.
There are certainly challenges, even perhaps some that I haven't thought of, with mixing manual memory management and GC code. But perhaps there is a bigger problem with the approach you describe. Correct me if I'm wrong, but it seems that what you propose is a -nogc option that would fragment all D code into two incompatible groups. If my -nogc code could not be used with anyone else's D code, then what I have is not a dialect of D at all, but a different incompatible language. I know this issue is a challenging problem from a language design standpoint, and I see why you would have some disdain for it. However, this is not an impossible problem to solve. For example, I believe Managed C++ does this, albeit with some weird syntax, but this proves that it can be done. Anyway, just my 2 cents.
It's reasonable to say that you decide at application design level what memory management approach you want to choose. That doesn't fragment the community. The decision is similar to many others made at the same level: libraries used, build flags, target platform(s), pointer size (32 vs. 64, not an option yet for dmd), etc. Andrei
Are we talking about the same thing here? The things you mention here do not create two incompatible styles of programming. I would like to point out how great it is that Phobos and Tango can be compiled together in the same application. Now they are interoperable. If -nogc code can't be used together with GC code, then we create a similar schism as having two incompatible "standard" libraries.
Well, if we get into details we'll find that things must be quite different for different memory management models. For example, Object in ref-counted mode is not a class anymore, it's a struct. So now there's going to be two parts in an app: those in which Object is a class, and those in which Object is a struct. Reconciling those would be a tall order. Andrei
Jan 21 2010
next sibling parent reply BCS <none anon.com> writes:
Hello Andrei,

 Well if we get into details we'll figure that things must be quite
 different for different memory management models. For example Object
 in ref-counted mode is not a class anymore, it's a struct. So now
 there's going to be two parts in an app: those in which Object is a
 class, and those in which Object is a struct. Reconciling those would
 be a tall order.
I think we *are* talking about different things, because that argument doesn't hold for the -nogc I'm thinking of: ref-counted objects are still GCed objects, and allocating them would also be disallowed in a module compiled under -nogc.
 
 Andrei
 
Jan 21 2010
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
BCS wrote:
 Hello Andrei,
 
 Well if we get into details we'll figure that things must be quite
 different for different memory management models. For example Object
 in ref-counted mode is not a class anymore, it's a struct. So now
 there's going to be two parts in an app: those in which Object is a
 class, and those in which Object is a struct. Reconciling those would
 be a tall order.
I think we *are* taking about different things because that argument doesn't hold for the -nogc I'm thinking of, ref counted objects are still GCed objects, allocating them would also be disallows in a module compiled under -nogc.
Yah, but say the module is calling a function in another module. Could that function allocate stuff? Then who's releasing it? Could that function return allocated stuff into the -nogc module? Then who's releasing that guy? Andrei
Jan 21 2010
parent BCS <none anon.com> writes:
Hello Andrei,

 BCS wrote:
 
 Hello Andrei,
 
 Well if we get into details we'll figure that things must be quite
 different for different memory management models. For example Object
 in ref-counted mode is not a class anymore, it's a struct. So now
 there's going to be two parts in an app: those in which Object is a
 class, and those in which Object is a struct. Reconciling those
 would be a tall order.
 
I think we *are* taking about different things because that argument doesn't hold for the -nogc I'm thinking of, ref counted objects are still GCed objects, allocating them would also be disallows in a module compiled under -nogc.
Yah, but say the module is calling a function in another module. Could that function allocate stuff?
Yes.
 Then who's releasing it?
The gc.
 Could that
 function return allocated stuff into the -nogc module?
Yes.
 Then who's
 releasing that guy?
The gc. The GC is not turned off, it's just not accessible. It's the same case as you have right now calling D from C.
Jan 21 2010
prev sibling next sibling parent reply dsimcha <dsimcha yahoo.com> writes:
== Quote from Andrei Alexandrescu (SeeWebsiteForEmail erdani.org)'s article
 Well if we get into details we'll figure that things must be quite
 different for different memory management models. For example Object in
 ref-counted mode is not a class anymore, it's a struct. So now there's
 going to be two parts in an app: those in which Object is a class, and
 those in which Object is a struct. Reconciling those would be a tall order.
 Andrei
????? Isn't the **whole point** of Object to be the root of the **class** hierarchy?????
Jan 21 2010
parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
dsimcha wrote:
 == Quote from Andrei Alexandrescu (SeeWebsiteForEmail erdani.org)'s article
 Well if we get into details we'll figure that things must be quite
 different for different memory management models. For example Object in
 ref-counted mode is not a class anymore, it's a struct. So now there's
 going to be two parts in an app: those in which Object is a class, and
 those in which Object is a struct. Reconciling those would be a tall order.
 Andrei
????? Isn't the **whole point** of Object to be the root of the **class** hierarchy?????
It is. Object will be a wrapper refcounted struct that will contain a reference to a cloaked class object. The compiler will ensure that all accesses to the class object will be through the struct. The this(this) and the destructor of the struct will take care of incrementing and decrementing the intrusive reference counter embedded inside the class. Andrei
Jan 21 2010
prev sibling parent Craig Black <cblack ara.com> writes:
Andrei Alexandrescu Wrote:

 Craig Black wrote:
 
 "Andrei Alexandrescu" <SeeWebsiteForEmail erdani.org> wrote in message 
 news:hj8gd7$2soe$1 digitalmars.com...
 Craig Black wrote:
 "Andrei Alexandrescu" <SeeWebsiteForEmail erdani.org> wrote in 
 message news:hj7vnu$2008$1 digitalmars.com...
 BCS wrote:
 Hello Andrei,

 The nice part about refcounting is that for the most part you don't
 need to cripple the language.
I think people are trying to say that disallowing use of GC stuff wouldn't cripple the language.
Well it's a fact that there would be fewer idioms and options accessible. So I didn't mean it in a derogatory way as much as a factual statement.
 Also there is one thing that -nogc would have over what you are 
 talking about; you could use it on some modules and not others. If 
 I have some performance critical code where attempting to use the 
 GC would break it's perf contract, I can put it in it's own module 
 and compile just it with -nogc and then link it in with code that 
 does use the GC.
Meh. This has been discussed in the C++ standardization committee, and it gets really tricky real fast when you e.g. use together several libraries, each with its own view of memory management. My impression: don't.
There are certainly challenges, even perhaps some that I haven't thought of, with mixing manual memory management and GC code. But perhaps there is a bigger problem with the approach you describe. Correct me if I'm wrong, but it seems that what you propose is a -nogc option that would fragment all D code into two incompatible groups. If my -nogc code could not be used with anyone else's D code, then what I have is not a dialect of D at all, but a different incompatible language. I know this issue is a challenging problem from a language design standpoint, and I see why you would have some disdain for it. However, this is not an impossible problem to solve. For example, I believe Managed C++ does this, albeit with some weird syntax, but this proves that it can be done. Anyway, just my 2 cents.
It's reasonable to say that you decide at application design level what memory management approach you want to choose. That doesn't fragment the community. The decision is similar to many others made at the same level: libraries used, build flags, target platform(s), pointer size (32 vs. 64, not an option yet for dmd), etc. Andrei
Are we talking about the same thing here? The things you mention here do not create two incompatible styles of programming. I would like to point out how great it is that Phobos and Tango can be compiled together in the same application. Now they are interoperable. If -nogc code can't be used together with GC code, then we create a similar schism as having two incompatible "standard" libraries.
Well if we get into details we'll figure that things must be quite different for different memory management models. For example Object in ref-counted mode is not a class anymore, it's a struct. So now there's going to be two parts in an app: those in which Object is a class, and those in which Object is a struct. Reconciling those would be a tall order. Andrei
That's the problem. We ARE talking about two different things. You are thinking -nogc should denote reference counting. If we are talking about reference counting, then I agree with you. However, circular references become an issue, and D code compiled this way will have to be fixed somehow to resolve this, perhaps using weak pointers as you mentioned earlier. -Craig
Jan 22 2010
prev sibling parent reply BCS <none anon.com> writes:
Hello Andrei,

 BCS wrote:
 
 Also there is one thing that -nogc would have over what you are
 talking about; you could use it on some modules and not others. If I
 have some performance critical code where attempting to use the GC
would break its perf contract, I can put it in its own module and
 compile just it with -nogc and then link it in with code that does
 use the GC.
 
Meh. This has been discussed in the C++ standardization committee, and it gets really tricky real fast when you e.g. use together several libraries, each with its own view of memory management. My impression: don't.
Why would having one chunk of code get checked for calls to the GC and another not be any more complicated than mixing malloc/free+add/removeRoot with normal GC? I'm beginning to wonder if I'm calling for something different than other people are. What I'm thinking of would have zero effect on the generated code, the only effect it would have is to cause an error when some code would normally attempt to invoke the GC.
Jan 20 2010
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
BCS wrote:
 Hello Andrei,
 
 BCS wrote:

 Also there is one thing that -nogc would have over what you are
 talking about; you could use it on some modules and not others. If I
 have some performance critical code where attempting to use the GC
would break its perf contract, I can put it in its own module and
 compile just it with -nogc and then link it in with code that does
 use the GC.
Meh. This has been discussed in the C++ standardization committee, and it gets really tricky real fast when you e.g. use together several libraries, each with its own view of memory management. My impression: don't.
Why would having one chunk of code get checked for calls to the GC and another not be any more complicated than mixing malloc/free+add/removeRoot with normal GC? I'm beginning to wonder if I'm calling for something different than other people are. What I'm thinking of would have zero effect on the generated code, the only effect it would have is to cause an error when some code would normally attempt to invoke the GC.
It's much more complicated than that. What if a library returns an object or an array to another library? Memory allocation strategy is a cross-cutting concern. Andrei
Jan 20 2010
next sibling parent reply Leandro Lucarella <llucax gmail.com> writes:
Andrei Alexandrescu, on January 20 at 20:48, you wrote to me:
 BCS wrote:
Hello Andrei,

BCS wrote:

Also there is one thing that -nogc would have over what you are
talking about; you could use it on some modules and not others. If I
have some performance critical code where attempting to use the GC
would break its perf contract, I can put it in its own module and
compile just it with -nogc and then link it in with code that does
use the GC.
Meh. This has been discussed in the C++ standardization committee, and it gets really tricky real fast when you e.g. use together several libraries, each with its own view of memory management. My impression: don't.
Why would having one chunk of code get checked for calls to the GC and another not be any more complicated than mixing malloc/free+add/removeRoot with normal GC? I'm beginning to wonder if I'm calling for something different than other people are. What I'm thinking of would have zero effect on the generated code, the only effect it would have is to cause an error when some code would normally attempt to invoke the GC.
It's much more complicated than that. What if a library returns an object or an array to another library?
The same thing that happens in C now: memory management is part of the interface, and you should state whether the returned object's memory is managed by the library or the user. This introduces a problem D doesn't have now, but let's keep the focus here: this will be used mostly in embedded environments, where this problem is already present, so it's not too bad. People using "regular" D probably won't be using any "embedded library". The other way around could be a little more likely, and since embedded guys are tough, they should be able to cope with this issue (or avoid using "regular" D libraries that use the GC).
 Memory allocation strategy is a cross-cutting concern.
Yes. -- Leandro Lucarella (AKA luca) http://llucax.com.ar/ ---------------------------------------------------------------------- GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145 104C 949E BFB6 5F5A 8D05) ---------------------------------------------------------------------- TOWARDS NEUQUÉN: ON THURSDAY A CARAVAN WITH DOGS WILL LEAVE THE CAPITAL IN SUPPORT OF THE PUPPY SENTENCED TO DEATH -- Crónica TV
Jan 21 2010
next sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Leandro Lucarella wrote:
 Andrei Alexandrescu, on January 20 at 20:48, you wrote to me:
 BCS wrote:
 Hello Andrei,

 BCS wrote:

 Also there is one thing that -nogc would have over what you are
 talking about; you could use it on some modules and not others. If I
 have some performance critical code where attempting to use the GC
would break its perf contract, I can put it in its own module and
 compile just it with -nogc and then link it in with code that does
 use the GC.
Meh. This has been discussed in the C++ standardization committee, and it gets really tricky real fast when you e.g. use together several libraries, each with its own view of memory management. My impression: don't.
Why would having one chunk of code get checked for calls to the GC and another not be any more complicated than mixing malloc/free+add/removeRoot with normal GC? I'm beginning to wonder if I'm calling for something different than other people are. What I'm thinking of would have zero effect on the generated code, the only effect it would have is to cause an error when some code would normally attempt to invoke the GC.
It's much more complicated than that. What if a library returns an object or an array to another library?
The same thing that happens in C now: memory management is part of the interface, and you should state whether the returned object's memory is managed by the library or the user. This introduces a problem D doesn't have now, but let's keep the focus here: this will be used mostly in embedded environments, where this problem is already present, so it's not too bad. People using "regular" D probably won't be using any "embedded library". The other way around could be a little more likely, and since embedded guys are tough, they should be able to cope with this issue (or avoid using "regular" D libraries that use the GC).
I very much see people using a reference-counted approach to GC in a non-embedded application. Andrei
Jan 21 2010
prev sibling parent sclytrack <idiot hotmail.com> writes:
 It's much more complicated than that. What if a library returns an
 object or an array to another library?
The same that happens in C now, memory management is part of the interface and you should state if the returned object's memory is managed by the library or the user.
Could we have some "typedef" to distinguish "library" from "user" ownership, or is it not practical?
 This introduces a problem D doesn't have now, but let's keep the focus
 here, this will be used mostly only in embedded environments, where this
 problem is already present, so it's not too bad. People using "regular"
 D probably won't be using any "embedded library". The other way around
 could be a little more likely, and since embedded guys are tough, they
 should be able to cope with this issue (or avoid using "regular"
 D libraries that use the GC).
 Memory allocation strategy is a cross-cutting concern.
Yes.
Jan 21 2010
prev sibling parent Johan Granberg <lijat.meREM OVEgmail.com> writes:
Andrei Alexandrescu wrote:

 BCS wrote:
 Hello Andrei,
 
 BCS wrote:

 Also there is one thing that -nogc would have over what you are
 talking about; you could use it on some modules and not others. If I
 have some performance critical code where attempting to use the GC
would break its perf contract, I can put it in its own module and
 compile just it with -nogc and then link it in with code that does
 use the GC.
Meh. This has been discussed in the C++ standardization committee, and it gets really tricky real fast when you e.g. use together several libraries, each with its own view of memory management. My impression: don't.
Why would having one chunk of code get checked for calls to the GC and another not be any more complicated than mixing malloc/free+add/removeRoot with normal GC? I'm beginning to wonder if I'm calling for something different than other people are. What I'm thinking of would have zero effect on the generated code, the only effect it would have is to cause an error when some code would normally attempt to invoke the GC.
It's much more complicated than that. What if a library returns an object or an array to another library? Memory allocation strategy is a cross-cutting concern. Andrei
If allocating memory with new in the no-gc module is causing trouble, why not disallow it so it becomes a compilation error? Programmers can use object pools or no-allocation strategies in those sections of their code. I personally would very much like a flag that disables allocations and collections caused by code in selected modules. I was working on a class project involving realtime physics simulation about a year ago, and the GC was a major source of slowdowns; some of the allocations were quite hard to track down, wasting a lot of time for me. Thank you for your good work on the language though, I don't want to come across as too critical. /Johan Granberg
Jan 21 2010
prev sibling parent reply Michel Fortin <michel.fortin michelf.com> writes:
On 2010-01-20 23:43:58 -0500, BCS <none anon.com> said:

 Why would having one chunk of code get checked for calls to the GC and 
 another not be any more complicated than mixing 
 malloc/free+add/removeRoot with normal GC? I'm beginning to wonder if 
 I'm calling for something different than other people are.
 
 What I'm thinking of would have zero effect on the generated code, the 
 only effect it would have is to cause an error when some code would 
 normally attempt to invoke the GC.
Theoretically, I think you should be able to avoid GC calls in a function by using nothrow:

void func() nothrow {
    auto a = new char[1]; // error: may throw
}

Unfortunately, it doesn't seem to always work:

void func(string a, string b) nothrow {
    auto c = a ~ b; // no error?
}

But that's probably just a bug somewhere. -- Michel Fortin michel.fortin michelf.com http://michelf.com/
Jan 20 2010
next sibling parent BCS <none anon.com> writes:
Hello Michel,

 Theoretically, I think you should be able to avoid GC calls in a
 function by using nothrow:
 
 void func() nothrow {
 auto a = new char[1]; // error: may throw
 }
 Unfortunately, it doesn't seem to always work:
 
 void func(string a, string b) nothrow {
 auto c = a ~ b; // no error?
 }
 But that's probably just a bug somewhere.
 
IIRC there was a big long thread on whether allocation failure was a fatal enough error that it should throw something you aren't supposed to catch anyway.
Jan 20 2010
prev sibling next sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Michel Fortin wrote:
 On 2010-01-20 23:43:58 -0500, BCS <none anon.com> said:
 
 Why would having one chunk of code get checked for calls to the GC and 
 another not be any more complicated than mixing 
 malloc/free+add/removeRoot with normal GC? I'm beginning to wonder if 
 I'm calling for something different than other people are.

 What I'm thinking of would have zero effect on the generated code, the 
 only effect it would have is to cause an error when some code would 
 normally attempt to invoke the GC.
Theoretically, I think you should be able to avoid GC calls in a function by using nothrow:

void func() nothrow {
    auto a = new char[1]; // error: may throw
}

Unfortunately, it doesn't seem to always work:

void func(string a, string b) nothrow {
    auto c = a ~ b; // no error?
}

But that's probably just a bug somewhere.
I may be wrong about the current implementation, but allocation errors aren't supposed to be sensed by nothrow. Andrei
Jan 20 2010
prev sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
Michel Fortin wrote:
 But that's probably just a bug somewhere.
We decided that gc allocations are allowable inside a "nothrow" function. The idea is that there are two classifications of exceptions - recoverable and non-recoverable. "nothrow" only refers to recoverable ones. Out of memory is non-recoverable. (Assert errors are also non-recoverable.)

Yes, you can argue that o-o-m should be recoverable. But in practice, it rarely is. Furthermore, by classifying o-o-m as recoverable, it makes "nothrow" functions next to useless. It seems a very reasonable tradeoff to make it unrecoverable. So no, it's not a bug, it's a deliberate design choice.

===================================

It's very, very difficult to design a program that can recover from out of memory if it allocates memory in bits and pieces all over the place. Any one of those can fail, and then the recovery code will fail as well, as it cannot allocate memory either. Often programs that purportedly can recover from oom actually cannot, because they were never tested and the recovery code doesn't work. It's kinda hard to devise a test suite that will try failing at every single point memory is allocated. Hardly anyone bothers.

You can design a system that has "free these blobs of memory I'm keeping in reserve if I run out and hopefully that will be enough", but that strategy needs to be part of the gc itself, not user recovery code.

Generally, what programs need to do if they run out of memory is try to abort as gracefully as possible, and hopefully restart the program. You don't need exception unwinding to make that happen.
Jan 21 2010
next sibling parent reply bearophile <bearophileHUGS lycos.com> writes:
Walter Bright:
 You can design a system that has "free these blobs of memory I'm keeping 
 in reserve if I run out and hopefully that will be enough", but that 
 strategy needs to be part of the gc itself, not user recovery code.
Do you mean that the D2 GC API needs to grow something to tell it that a less important memory block (object, array) can be deallocated by the GC in low-memory conditions? How can this work in the user code? (And it could be positive to define a standard way the D GC talks with the virtual memory subsystem of the operating system, to avoid useless swaps from and to disk.) Bye, bearophile
Jan 21 2010
parent reply BCS <none anon.com> writes:
Hello bearophile,

 Walter Bright:
 
 You can design a system that has "free these blobs of memory I'm
 keeping in reserve if I run out and hopefully that will be enough",
 but that strategy needs to be part of the gc itself, not user
 recovery code.
 
Do you mean that the D2 GC API needs to grow something to tell that a less important memory block (object, array) can be deallocated by the GC in low memory conditions? How can this work in the user code?
I'd add some way to register GC panic delegates that get called until enough memory is available, maybe from some sort of priority queue.
 (And it can be positive to define a standard way the D GC talks with
 the virtual memory subsystem of the operating system, to avoid
 useless swaps from and to disk).
IIRC virtual memory and swapping have little or nothing to do with OOM errors.
 
 Bye,
 bearophile
Jan 21 2010
parent reply retard <re tard.com.invalid> writes:
Thu, 21 Jan 2010 18:38:07 +0000, BCS wrote:

 Hello bearophile,
 (And it can be positive to define a standard way the D GC talks with
 the virtual memory subsystem of the operating system, to avoid useless
 swaps from and to disk).
IIRC virtual memeory and swapping has little or nothing to do with OOM errors.
On Linux the processes almost always stay on main memory, and only start to fill swap when running out of main memory. So unless you have no swap set up, OOM cannot happen unless the swap is >95% filled. OOM inside the GC's virtual memory space can happen earlier, of course.
Jan 21 2010
parent reply Walter Bright <newshound1 digitalmars.com> writes:
retard wrote:
 On Linux the processes almost always stay on main memory, and only start 
 to fill swap when running out of main memory. So unless you have no swap 
 set up, OOM cannot happen unless the swap is >95% filled. OOM inside the 
 GC's virtual memory space can happen earlier, of course.
Yeah, that's another thing I should have mentioned. When you're running Windows or Linux at the edge of running out of virtual memory, which is when the gc would fail to allocate memory, the system tends to go unstable anyway. This is because (as I mentioned before) few apps handle out of memory properly.
Jan 21 2010
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Walter Bright wrote:
 retard wrote:
 On Linux the processes almost always stay on main memory, and only 
 start to fill swap when running out of main memory. So unless you have 
 no swap set up, OOM cannot happen unless the swap is >95% filled. OOM 
 inside the GC's virtual memory space can happen earlier, of course.
Yeah, that's another thing I should have mentioned. When you're running Windows or Linux at the edge of running out of virtual memory, which is when the gc would fail to allocate memory, the system tends to go unstable anyway. This is because (as I mentioned before) few apps handle out of memory properly.
Please stop spreading that information. Even if it has truth to it, it's not a reason to throw our hands in the air. In my field apps routinely encounter and handle the problem of running tight on memory. Let me make it very clear: I have had malloc return 0 on me. Andrei
Jan 21 2010
next sibling parent reply Bill Baxter <wbaxter gmail.com> writes:
On Thu, Jan 21, 2010 at 2:43 PM, Andrei Alexandrescu
<SeeWebsiteForEmail erdani.org> wrote:
 Walter Bright wrote:
 retard wrote:
 On Linux the processes almost always stay on main memory, and only start
 to fill swap when running out of main memory. So unless you have no swap set
 up, OOM cannot happen unless the swap is >95% filled. OOM inside the GC's
 virtual memory space can happen earlier, of course.
Yeah, that's another thing I should have mentioned. When you're running Windows or Linux at the edge of running out of virtual memory, which is when the gc would fail to allocate memory, the system tends to go unstable anyway. This is because (as I mentioned before) few apps handle out of memory properly.
Please stop spreading that information. Even if it has truth to it, it's not a reason to throw our hands in the air. In my field apps routinely encounter and handle the problem of running tight on memory. Let me make it very clear: I have had malloc return 0 on me.
... and recovered? Or didn't? --bb
Jan 21 2010
parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Bill Baxter wrote:
 On Thu, Jan 21, 2010 at 2:43 PM, Andrei Alexandrescu
 <SeeWebsiteForEmail erdani.org> wrote:
 Walter Bright wrote:
 retard wrote:
 On Linux the processes almost always stay on main memory, and only start
 to fill swap when running out of main memory. So unless you have no swap set
 up, OOM cannot happen unless the swap is >95% filled. OOM inside the GC's
 virtual memory space can happen earlier, of course.
Yeah, that's another thing I should have mentioned. When you're running Windows or Linux at the edge of running out of virtual memory, which is when the gc would fail to allocate memory, the system tends to go unstable anyway. This is because (as I mentioned before) few apps handle out of memory properly.
Please stop spreading that information. Even if it has truth to it, it's not a reason to throw our hands in the air. In my field apps routinely encounter and handle the problem of running tight on memory. Let me make it very clear: I have had malloc return 0 on me.
... and recovered? Or didn't?
Sure as heck recovered. I couldn't afford to lose all of the good work I'd done in the previous 7 hours. Andrei
Jan 21 2010
prev sibling parent Rainer Deyke <rainerd eldwood.com> writes:
Andrei Alexandrescu wrote:
 Please stop spreading that information. Even if it has truth to it, it's
 not a reason to throw our hands in the air. In my field apps routinely
 encounter and handle the problem of running tight on memory.
I think in general it's better to detect and respond to low memory conditions before actually running out of memory. That said, I've also had malloc return 0, and recovered from it. -- Rainer Deyke - rainerd eldwood.com
Jan 21 2010
prev sibling parent Leandro Lucarella <llucax gmail.com> writes:
Walter Bright, on January 21 at 13:00, you wrote to me:
 retard wrote:
On Linux the processes almost always stay on main memory, and only
start to fill swap when running out of main memory. So unless you
have no swap set up, OOM cannot happen unless the swap is >95%
filled. OOM inside the GC's virtual memory space can happen
earlier, of course.
Yeah, that's another thing I should have mentioned. When you're running Windows or Linux at the edge of running out of virtual memory, which is when the gc would fail to allocate memory, the system tends to go unstable anyway.
You can run your program with a memory limit, using ulimit for example. Then an allocation error doesn't only happen when the whole system is going down... -- Leandro Lucarella (AKA luca) http://llucax.com.ar/
Jan 22 2010
prev sibling parent Leandro Lucarella <llucax gmail.com> writes:
Walter Bright, on January 21 at 02:15, you wrote to me:
 Often programs that purportedly can recover from oom actually
 cannot, because they were never tested and the recovery code doesn't
 work.
Unless you use fault injection. It's *not* that rare... -- Leandro Lucarella (AKA luca) http://llucax.com.ar/
Jan 21 2010
prev sibling parent reply Leandro Lucarella <llucax gmail.com> writes:
Andrei Alexandrescu, on January 20 at 08:32, you wrote to me:
 Leandro Lucarella wrote:
Again? RC is *not* -nogc, it's -anothergc.
I agree. With reference counting, you'd be no worse than a C++ project that decided to use refcounted smart pointers for all allocated objects. That sounds good to me.
And reference counting won't do
the trick unless you add a backing GC to free cycles.
Well there are techniques for lifting cycles but really I think it wouldn't be bad if the user were given the possibility (e.g. weak pointers).
What I mean about
-nogc is *no* GC, is "please, mr compiler, give me an error when a GC
facility is used".
I know. That could be another object.d implementation that would disable certain functions. The nice part about refcounting is that for the most part you don't need to cripple the language.
But I don't think people that *really* need to be in full control would see an RC GC as something tempting. As long as there is an option to (easily) avoid the GC, I'm happy; if you want to provide an RC implementation, then great. I can't see an RC implementation fitting very well in D (because of slicing and other features, mostly the very same features that make the D GC very conservative and inefficient). -- Leandro Lucarella (AKA luca) http://llucax.com.ar/
Jan 20 2010
next sibling parent Rainer Deyke <rainerd eldwood.com> writes:
Leandro Lucarella wrote:
 But I don't think people that *really* need to be in full control would
 see a RC GC as something tempting.
I don't need full control. I need RAII for dynamically allocated objects, with destructors that really work. C++ with reference counting can give me that and D with garbage collection can't, so I use C++ instead of D. If the optional reference counting scheme in D solves this issue, then I will seriously consider D for my next project (once D2 is released and stable). -- Rainer Deyke - rainerd eldwood.com
Jan 20 2010
prev sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
Leandro Lucarella wrote:
 But I don't think people that *really* need to be in full control would
 see a RC GC as something tempting. As long as there is an option to
 (easily) avoid the GC, I'm happy, if you want to provice an RC
 implementation then, great. I can't see an RC implementation fitting very
 well in D (because of slicing and other features, mostly the very same
 features that makes the D GC very conservative and inefficient).
There is one way to avoid the gc now - remove it from druntime, and the linker will give you "undefined symbol" errors if there are any references to it. If you're doing an embedded system, you'll probably want to customize druntime anyway.
Jan 21 2010
parent Leandro Lucarella <llucax gmail.com> writes:
Walter Bright, on January 21 at 02:18, you wrote to me:
 Leandro Lucarella wrote:
But I don't think people that *really* need to be in full control would
see a RC GC as something tempting. As long as there is an option to
(easily) avoid the GC, I'm happy, if you want to provice an RC
implementation then, great. I can't see an RC implementation fitting very
well in D (because of slicing and other features, mostly the very same
features that makes the D GC very conservative and inefficient).
There is one way to avoid the gc now - remove it from druntime, and the linker will give you "undefined symbol" errors if there are any references to it. If you're doing an embedded system, you'll probably want to customize druntime anyway.
I know you can do it now, but hacking the runtime is not what I call "easy". -- Leandro Lucarella (AKA luca) http://llucax.com.ar/
Jan 21 2010
prev sibling parent reply Craig Black <cblack ara.com> writes:
Leandro Lucarella Wrote:

 Andrei Alexandrescu, on January 19 at 23:17, you wrote to me:
 bearophile wrote:
Andrei Alexandrescu:
I'd love -nogc. Then we can think of designing parts of Phobos
to work under that regime.
But you must do this with a lot of care: programmers coming from C++ will be tempted to write code that uses those GC-free parts of Phobos a lot, and the end result will be a lot of D code in the wild that's like C++ or worse. So when you want to use one of those modules or libraries, you may need to dance their no-GC dance. This can invalidate the good idea of designing a GC-based language. A better strategy is first of all to improve the D GC a lot, if necessary introducing other details into the language to help the design of a more efficient GC (like giving ways to tell apart pinned objects from normal ones, making the unpinned ones the default, and modifying the type system so mixing pinned-memory and unpinned-memory pointers is generally safe, etc.). Only when further improvements to the GC become too hard can you start to write no-GC parts of Phobos, a few years from now. I have seen many cases where Java code run with HotSpot is faster than very similar D1 code compiled with LDC. Avoiding the GC is an easy shortcut, but I think it's not a good long-term strategy for D. Bye, bearophile
Walter and I talked for hours about a no-gc model for D, and the conclusion was that with only a little compiler support, things can be arranged such that swapping different object.d implementations, the entire D allocation model can be swapped between traditional GC and reference counting.
Again? RC is *not* -nogc, it's -anothergc. And reference counting won't do the trick unless you add a backing GC to free cycles. What I mean by -nogc is *no* GC: "please, mr compiler, give me an error when a GC facility is used".
Yeah, this is what I thought -nogc meant as well. Not that reference counting wouldn't be useful, but reference counting has its own problems. I would be interested in a -refcounting option or something like that, though. It would be useful to compare the performance of the two systems. -Craig
Jan 20 2010
parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Craig Black wrote:
 Leandro Lucarella Wrote:
 
 Andrei Alexandrescu, on January 19 at 23:17, you wrote to me:
 bearophile wrote:
 Andrei Alexandrescu:
 I'd love -nogc. Then we can think of designing parts of Phobos
 to work under that regime.
But you must do this with a lot of care: programmers coming from C++ will be tempted to write code that uses those GC-free parts of Phobos a lot, and the end result will be a lot of D code in the wild that's like C++ or worse. So when you want to use one of those modules or libraries, you may need to dance their no-GC dance. This can invalidate the good idea of designing a GC-based language. A better strategy is first of all to improve the D GC a lot, if necessary introducing other details into the language to help the design of a more efficient GC (like giving ways to tell apart pinned objects from normal ones, making the unpinned ones the default, and modifying the type system so mixing pinned-memory and unpinned-memory pointers is generally safe, etc.). Only when further improvements to the GC become too hard can you start to write no-GC parts of Phobos, a few years from now. I have seen many cases where Java code run with HotSpot is faster than very similar D1 code compiled with LDC. Avoiding the GC is an easy shortcut, but I think it's not a good long-term strategy for D. Bye, bearophile
Walter and I talked for hours about a no-gc model for D, and the conclusion was that with only a little compiler support, things can be arranged so that by swapping in different object.d implementations, the entire D allocation model can be switched between traditional GC and reference counting.
Again? RC is *not* -nogc, it's -anothergc. And reference counting won't do the trick unless you add a backing GC to free cycles. What I mean by -nogc is *no* GC, as in "please, mr compiler, give me an error when a GC facility is used".
Yeah, this is what I thought -nogc meant as well. Not that I think reference counting wouldn't be useful, but reference counting has its own problems. I would be interested in a -refcounting option or something like that, though. It would be useful to compare the performance of the two systems. -Craig
The actual option used would be one that selects which object.d should be used (even -I works). Comparing and contrasting would be a matter of compiling with different flags. I'm glad this is being discussed. I'd forgotten many details of how I was thinking of doing this. Andrei
Jan 20 2010
prev sibling parent Leandro Lucarella <llucax gmail.com> writes:
bearophile, el 20 de enero a las 01:51 me escribiste:
 A better strategy is first of all to improve a lot the D GC
It's not a better strategy, it's another strategy. Improving the GC is as important as (or even more important than) having a -nogc option. But even an extremely efficient GC will not be suitable for all purposes (especially for embedded systems, which are the origin of this thread). That's just a fact, "there is no silver bullet". So ignoring the need for a -nogc option is not going to make things better. Both needs should be acknowledged: a better GC, and making it easy to completely avoid the GC (which, I agree, is less important than the former, because most D code will use the GC). -- Leandro Lucarella (AKA luca) http://llucax.com.ar/ ---------------------------------------------------------------------- GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145 104C 949E BFB6 5F5A 8D05) ---------------------------------------------------------------------- El techo de mi cuarto lleno de universos
Jan 20 2010
prev sibling parent reply bearophile <bearophileHUGS lycos.com> writes:
Craig Black:

 I would have to agree, and this is one of my causes for hesitation in adopting
D.  The code I write requires the highest performance possible.  I am concerned
that when I port it over to D, I will have to avoid using a lot of D features
that use the GC (built-in arrays, closures, standard library features, etc.) 
in order to get the best possible performance.  D does not adhere to the C++
zero-overhead principle, and I see this as a risk.  So if/when I end up porting
my code to D I may evolve my own dialect of D that uses only the subset of
features that tend to provide the best performance.<
The following comments are mostly about D1 :-) There can be spots in a program where performance is very important; in those parts it can indeed be better in D to manage memory manually, to avoid heap-allocated closures (absent in D1), interfaces, and array joining, and most importantly of all to program in a lower-level style. Doing this, I've often been able to get more performance from D1 code compiled with LDC than from C++ compiled with G++ 4.3.2 (where "often" means more than five times out of ten smallish/medium programs). Generally built-in arrays aren't a problem (you just need to use something like an array appender if you want to append items to them; where performance matters a lot, you have to avoid appends and keep an index to the last item used). If you use this style for a significant percentage of your whole program (well, more than 10-15% of it, unless it's less than 5000 lines long), then you are writing C++ in D, and I think it's better for you to avoid D, and to use C++ or C or asm :-)
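The "array appender / keep an index" idea bearophile mentions can be sketched in present-day D with std.array.Appender; a minimal sketch, with an illustrative function name:

```d
// The "preallocate, append via an appender" pattern bearophile describes.
// collectSquares is an illustrative name, not a library function.
import std.array : appender;

int[] collectSquares(size_t n)
{
    auto buf = appender!(int[])();
    buf.reserve(n);                  // one up-front reservation
    foreach (i; 0 .. n)
        buf.put(cast(int)(i * i));   // amortized appends, no per-item realloc
    return buf[];
}

void main()
{
    assert(collectSquares(4) == [0, 1, 4, 9]);
}
```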
I don't know if the GC will send my code to hell, but if the performance drops
by more than 20% by porting it from C++ to D, then I would be disappointed.<
You can try to write D programs that are faster than the C++ ones, instead. A ±20% performance difference is not that important; you can aim for a bigger difference.
I got about a 15% performance improvement by switching from the system
allocator to nedmalloc, so my code is very sensitive to heap allocation
performance.<
Try using a more custom allocator, like pools, arenas, and so on; they can be much better than nedmalloc. I have seen programs get about twice as fast doing this.
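For the curious, a toy bump ("arena") allocator of the kind bearophile is suggesting, as a sketch only: the struct and its names are illustrative, not a Phobos API, and a real arena would also handle alignment and growth.

```d
// Toy arena allocator: allocation is a single pointer bump, and there is
// no per-object free -- the whole arena is released at once.
import core.stdc.stdlib : malloc, free;

struct Arena
{
    ubyte* base;
    size_t used, capacity;

    static Arena create(size_t bytes)
    {
        Arena a;
        a.base = cast(ubyte*) malloc(bytes);
        a.capacity = bytes;
        return a;
    }

    // Returns null when the arena is exhausted; no alignment handling here.
    void* alloc(size_t bytes)
    {
        if (used + bytes > capacity) return null;
        auto p = base + used;
        used += bytes;
        return p;
    }

    void destroy()
    {
        free(base);
        base = null;
        used = capacity = 0;
    }
}

void main()
{
    auto a = Arena.create(1024);
    auto p = cast(int*) a.alloc(int.sizeof);
    *p = 42;
    assert(*p == 42 && a.used == int.sizeof);
    a.destroy();
}
```

The speed win comes from replacing a general-purpose free-list search with one addition and one comparison per allocation.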
I use dynamic arrays quite a lot, and if I switched all of these to use D's
built-in GC-based arrays, I think I would see a tremendous performance drop.<
I am a bit suspicious of this. GC scans can slow things down a little, but I'm not seeing this as a big problem so far. You can test and benchmark some of your theories. A problem I've seen is caused by the imprecise nature of the GC: wrong pointers keeping dead things alive. Despite the amount of practice I've had so far with LDC, I can be wrong in many ways, so performing experiments is a good way for you to be sure. D2 being in alpha stage, it can't care too much about performance yet. There are things that will be improved. Bye, bearophile
Jan 19 2010
parent "Craig Black" <craigblack2 cox.net> writes:
 If you do this and use this style for a significant percentage (well, more 
 than 10-15% of it, unless it's less than 5000 lines long) of your whole 
 program, then you are using C++ in D, and I think it's better for you to 
 avoid using D, and to use C++ or C or asm :-)
No, I think that even using a limited subset of D would be quite liberating compared with the hoops I have to jump through in my C++ code. The new C++ standard will help some, but C++ isn't evolving fast enough.
 You can try to write D programs that are faster than the C++ ones, 
 instead. +-20% performance difference is not that important, you can aim 
 to a higher difference.
I always aim for high performance, but it may prove a challenge with D compiler/technology in its current state. I know this is improving and will continue to do so.
 Try to use a more custom allocator, like pools, arenas, and so on, they 
 can be much better than nedmalloc. I have seen programs get about twice 
 faster doing this.
The main thing that I do on the memory side is to try to keep my memory use compact and contiguous, which is why I use arrays so much.
I use dynamic arrays quite a lot, and if I switched all of these to use 
D's built-in GC-based arrays, I think I would see a tremendous performance 
drop.<
I am a bit suspicious of this. GC scans can slow things down a little, but I'm not seeing this as a big problem so far. You can test and benchmark some of your theories. A problem I've seen is caused by the imprecise nature of the GC: wrong pointers keeping dead things alive.
One thing that I noticed while benchmarking is that when I remove the call to nedfree in my overloaded delete operator, application performance suffers by 17%. You might expect this to increase performance if memory is never being freed. Instead what happens is that locality of reference suffers. Thus, if D's GC never ran a single collection cycle, it would still be 17% slower than nedmalloc. A moving GC would help this situation, but that's not the case with D. Thanks for the pointers! -Craig
Jan 20 2010
prev sibling parent "Jérôme M. Berger" <jeberger free.fr> writes:
Walter Bright wrote:
 Jérôme M. Berger wrote:
     Embedded x86 is an oxymoron. Yes, I know, it exists (and btw, 8
 years ago they were still selling 486s as "embedded" processors) but
 mostly it doesn't need any special support (except possibly on the
 binary size front and even there 80k is nothing to the XXX megabytes
 used by the off-the-shelf OS+GUI+Web browser). Face it, there are
 two kinds of embedded developers:

 - Those who want performance at very low power usage, who use ARM
 and C with a specialized OS. Those won't use D, period. Most of the
 time, they won't even use malloc or most of the C standard library
 (not saying they're right here, but I doubt you will change them);
I've looked at some embedded ARM evaluation boards that have Linux on them. Don't know much else about them. What about things like phones, game machines?
Well, I was caricaturing a bit. There is a lot of Linux and/or Windows Mobile development on ARM platforms, but I don't think those targets need anything special beyond ARM code generation any more than the x86 target. The reason why they use ARM is often that they have all-in-one chips with the ARM CPU and all the specialized hardware they need in one package (especially true for mobile phones). Once they start using a heavyweight OS and applications, they fall in the second category. Jerome -- mailto:jeberger free.fr http://jeberger.free.fr Jabber: jeberger jabber.fr
Jan 19 2010
prev sibling parent reply dsimcha <dsimcha yahoo.com> writes:
== Quote from "Jérôme M. Berger" (jeberger free.fr)'s article
 PS: At work, we mustn't use C++ because:
 - It's slow;
 - Its standard library is too big (100k);
 - In a future product, we might want to reuse this module and not
 have C++ (Oh, yes I didn't tell you that we *do* have the C++ stdlib
 in our products because the web browser they bought to run their
 HTML+Javascript+HTML+XML+C+XML+C+XML+C GUI uses it, but *we* aren't
 allowed to, fckng morons)
This is a great point and deserves to be highlighted: D was meant to be a better C++, not a better C. If someone won't use C++ instead of C (apparently there are a decent amount of these people), then there's not a snowball's chance in hell they'd use D, even if we fixed the binary size issue, made D more usable without a GC, and in general made it in every way at least as efficient as C++.
Jan 18 2010
parent reply Walter Bright <newshound1 digitalmars.com> writes:
dsimcha wrote:
 This is a great point and deserves to be highlighted:  D was meant to be a
better
 C++, not a better C.  If someone won't use C++ instead of C (apparently there
are
 a decent amount of these people), then there's not a snowball's chance in hell
 they'd use D, even if we fixed the binary size issue, made D more usable
without a
 GC, and in general made it in every way at least as efficient as C++.
D executes code every bit as efficiently as C++ does. Any variations are due to which back end is used, not the language. I agree with your point that people who are wedded to C and won't look at C++ will not look at D, either. Also, if you're only writing a few K of code, D's advantages aren't that compelling over C (and neither are C++'s). It's when the size of the program increases that D's strengths really begin to dominate.
Jan 18 2010
next sibling parent reply dsimcha <dsimcha yahoo.com> writes:
== Quote from Walter Bright (newshound1 digitalmars.com)'s article
 Also, if you're only writing a few K of code, D's advantages aren't that
 compelling over C (and neither are C++'s). It's when the size of the
 program increases that D's strengths really begin to dominate.
???? For small projects, D is still a huge improvement over C. Templates, arrays that "just work", a sane import system, an OO system, and most importantly a standard library built to take advantage of these, are useful even in tiny 100-line programs. Even if all you're doing is writing a command line app to read in data from a file, perform a few calculations, and print the results to stdout, do you really want to deal with C's horribly low-level string and file I/O handling?
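A sketch of the kind of tiny program dsimcha means: parse some numbers and report a result in a handful of lines. The input is inline here for brevity, and `total` is an illustrative name, not a library function.

```d
// Parse one number per line and sum them -- the whole "read, compute,
// print" utility in a few declarative lines.
import std.algorithm : map, sum;
import std.conv : to;
import std.string : splitLines, strip;

double total(string text)
{
    return text.splitLines.map!(l => l.strip.to!double).sum;
}

void main()
{
    assert(total("1.5\n2.5\n4") == 8);
}
```

In real use the string would come from `std.file.readText("data.txt")`; the point is that none of the buffer management or `strtod` error handling from the C version appears here.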
Jan 18 2010
parent reply Walter Bright <newshound1 digitalmars.com> writes:
dsimcha wrote:
 == Quote from Walter Bright (newshound1 digitalmars.com)'s article
 Also, if you're only writing a few K of code, D's advantages aren't that
 compelling over C (and neither are C++'s). It's when the size of the
 program increases that D's strengths really begin to dominate.
???? For small projects, D is still a huge improvement over C. Templates, arrays that "just work", a sane import system, an OO system, and most importantly a standard library built to take advantage of these, is useful even in tiny 100-line programs. Even if all you're doing is writing a command line app to read in data from a file, perform a few calculations, and print the results to stdout, do you really want to deal with C's horribly low-level string and file I/O handling?
I suppose C is like a screwdriver. If it's in my hand, and there's just one screw, I'll just go ahead and use it rather than go to the garage to get my power screwdriver. More than one screw, and I'll get the power screwdriver. If they're sitting next to each other, I'll grab the power screwdriver regardless.
Jan 19 2010
parent retard <re tard.com.invalid> writes:
Tue, 19 Jan 2010 01:52:24 -0800, Walter Bright wrote:

 dsimcha wrote:
 == Quote from Walter Bright (newshound1 digitalmars.com)'s article
 Also, if you're only writing a few K of code, D's advantages aren't
 that compelling over C (and neither are C++'s). It's when the size of
 the program increases that D's strengths really begin to dominate.
???? For small projects, D is still a huge improvement over C. Templates, arrays that "just work", a sane import system, an OO system, and most importantly a standard library built to take advantage of these, is useful even in tiny 100-line programs. Even if all you're doing is writing a command line app to read in data from a file, perform a few calculations, and print the results to stdout, do you really want to deal with C's horribly low-level string and file I/O handling?
I suppose C is like a screwdriver. If it's in my hand, and there's just one screw, I'll just go ahead and use it rather than go to the garage to get my power screwdriver. More than one screw, and I'll get the power screwdriver. If they're sitting next to each other, I'll grab the power screwdriver regardless.
At least there's much less trouble setting up the development environment for a C program. This is important when you distribute the program in source form. The same thing happens with version control systems and build systems. You could use DSSS or darcs, but in many cases the potential users only have 'make' and svn or only know how to use those. That's why sometimes inferior technology has to be chosen for the project.
Jan 19 2010
prev sibling parent bearophile <bearophileHUGS lycos.com> writes:
Walter Bright:
 Also, if you're only writing a few K of code, D's advantages aren't that 
 compelling over C (and neither are C++'s). It's when the size of the 
 program increases that D's strengths really begin to dominate.
I don't agree at all. D is (and has to be) fitter for short programs too; that means that even in short programs it has to show its advantages. This is one of the first programs I've written in D, a long time ago, to solve a problem that's already present on your site (but there the solution is much less nice). This program is very short, but if you try to use C to write the same code you will produce a lot more code, several bugs, and a headache:

import std.stdio, std.stream, std.string, std.ctype, std.gc;

void traduct(char[] n, char[] digits, int start, char[][] words,
             char[][][char[]] gdict) {
    if (start >= digits.length)
        writefln(n, ": ", words.join(" "));
    else {
        auto found_word = false;
        for (auto i = start; i < digits.length; i++)
            if (digits[start .. i+1] in gdict) {
                found_word = true;
                foreach (hit; gdict[digits[start .. i+1]])
                    traduct(n, digits, i+1, words ~ [hit], gdict);
            }
        if (!found_word &&
            (!words || (words && !std.ctype.isdigit(words[words.length-1][0]))))
            traduct(n, digits, start+1, words ~ [digits[start .. start+1]], gdict);
    }
}

void main() {
    std.gc.disable(); // to speed up the program a bit
    auto gtable = maketrans("ejnqrwxdsyftamcivbkulopghzEJNQRWXDSYFTAMCIVBKULOPGHZ",
                            "0111222333445566677788899901112223334455666777888999");
    char[][][char[]] gdict;
    foreach (char[] w; new BufferedFile("dictionary.txt"))
        gdict[w.translate(gtable, "\"")] ~= w.dup;
    foreach (char[] n; new BufferedFile("input.txt"))
        traduct(n, n.removechars("/-"), 0, [], gdict);
}

Bye, bearophile
Jan 18 2010
prev sibling parent Walter Bright <newshound1 digitalmars.com> writes:
Nick Sabalausky wrote:
 I can totally accept that 486 in particular is pretty much dead, but unless 
 there's some specific advantage that can be only be gained by breaking 486 
 support, I see no reason for "It supports 486" to be something worth whining 
 about.
The compiler does a pretty good job scheduling Pentium instructions for the U-V pipe. I had thought that was hopelessly obsolete, but then along came the Intel Atom where, guess what, doing Pentium scheduling is a big win!
Jan 18 2010
prev sibling parent BCS <none anon.com> writes:
Hello retard,

 I also have a Core i7. Jeff Atwood has one
 (http://www.codinghorror.com/ blog/archives/001316.html) - and he
 represents the average Joe Six-pack developer.
That's fine as long as you are writing code for developers. If you are writing code for users (the other 599M people in the US), that doesn't matter at all. What does the Joe Six-pack USER have on their table? What we need is a language that works with what will be common in 5-30 years and a compiler that works with what we have NOW.
Jan 18 2010
prev sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Nick Sabalausky wrote:
[snip]
 It's been no worse at threading than C/C++ for quite some time. It's just 
 starting to have a threading model that kicks the crap out of the threading 
 in the vast majority of languages out there.
BTW, that effort is going quite well. For example, a producer-consumer file copy program using the proposed API has 20 lines, correctness and all.

import std.algorithm, std.concurrency, std.stdio;

void main() {
    enum bufferSize = 1024 * 100;
    auto tid = spawn(&writer);
    // Read loop
    auto src = stdin.by!(ubyte)();
    for (;;) {
        auto buffer = UniqueArray!(ubyte)(bufferSize);
        auto length = copy(take(src, bufferSize), buffer).length;
        send(tid, move(buffer));
        if (length == 0) break;
    }
}

void writer() {
    // Write loop
    auto tgt = stdout.by!(ubyte)();
    for (;;) {
        auto buffer = receiveOnly!(UniqueArray!ubyte)();
        copy(buffer, tgt);
    }
}

Andrei
Jan 18 2010
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Andrei Alexandrescu wrote:
 Nick Sabalausky wrote:
 [snip]
 It's been no worse at threading than C/C++ for quite some time. It's 
 just starting to have a threading model that kicks the crap out of the 
 threading in the vast majority of languages out there.
BTW, that effort is going quite well. For example, a producer-consumer file copy program using the proposed API has 20 lines, correctness and all. import std.algorithm, std.concurrency, std.stdio; void main() { enum bufferSize = 1024 * 100; auto tid = spawn(&writer); // Read loop auto src = stdin.by!(ubyte)(); for (;;) { auto buffer = UniqueArray!(ubyte)(bufferSize); auto length = copy(take(src, bufferSize), buffer).length; send(tid, move(buffer)); if (length == 0) break; } } void writer() { // Write loop auto tgt = stdout.by!(ubyte)(); for (;;) { auto buffer = receiveOnly!(UniqueArray!ubyte)(); copy(buffer, tgt); } } Andrei
Sorry for the monologue. Actually I reworked the example into the even simpler:

import std.concurrency, std.stdio;

void main() {
    enum bufferSize = 1024 * 100;
    auto tid = spawn(&writer);
    // Read loop
    foreach (immutable(ubyte)[] buffer; stdin.byChunk(bufferSize)) {
        send(tid, buffer);
    }
}

void writer() {
    // Write loop
    for (;;) {
        auto buffer = receiveOnly!(immutable(ubyte)[])();
        tgt.rawWrite(buffer);
    }
}

We actually have implemented all the pieces to make this work. Andrei
Jan 18 2010
parent reply "Jérôme M. Berger" <jeberger free.fr> writes:
Andrei Alexandrescu wrote:
 Andrei Alexandrescu wrote:
 Nick Sabalausky wrote:
 [snip]
 It's been no worse at threading than C/C++ for quite some time. It's
 just starting to have a threading model that kicks the crap out of
 the threading in the vast majority of languages out there.
 BTW, that effort is going quite well. For example, a producer-consumer
 file copy program using the proposed API has 20 lines, correctness and
 all.
 
 import std.algorithm, std.concurrency, std.stdio;
 
 void main() {
    enum bufferSize = 1024 * 100;
    auto tid = spawn(&writer);
    // Read loop
    auto src = stdin.by!(ubyte)();
    for (;;) {
       auto buffer = UniqueArray!(ubyte)(bufferSize);
       auto length = copy(take(src, bufferSize), buffer).length;
       send(tid, move(buffer));
       if (length == 0) break;
    }
 }
 
 void writer() {
    // Write loop
    auto tgt = stdout.by!(ubyte)();
    for (;;) {
       auto buffer = receiveOnly!(UniqueArray!ubyte)();
       copy(buffer, tgt);
    }
 }
 
 
 Andrei
 
 Sorry for the monologue. Actually I reworked the example into the even
 simpler:
 
 import std.concurrency, std.stdio;
 
 void main() {
    enum bufferSize = 1024 * 100;
    auto tid = spawn(&writer);
    // Read loop
    foreach (immutable(ubyte)[] buffer; stdin.byChunk(bufferSize)) {
       send(tid, buffer);
    }
 }
 
 void writer() {
    // Write loop
    for (;;) {
       auto buffer = receiveOnly!(immutable(ubyte)[])();
       tgt.rawWrite(buffer);
    }
 }
 
 We actually have implemented all the pieces to make this work.
Shouldn't you declare "tgt" somewhere (you did in your first example)... Jerome -- mailto:jeberger free.fr http://jeberger.free.fr Jabber: jeberger jabber.fr
Jan 18 2010
parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Jérôme M. Berger wrote:
 Andrei Alexandrescu wrote:
 Andrei Alexandrescu wrote:
 Nick Sabalausky wrote:
 [snip]
 It's been no worse at threading than C/C++ for quite some time. It's
 just starting to have a threading model that kicks the crap out of
 the threading in the vast majority of languages out there.
BTW, that effort is going quite well. For example, a producer-consumer file copy program using the proposed API has 20 lines, correctness and all. import std.algorithm, std.concurrency, std.stdio; void main() { enum bufferSize = 1024 * 100; auto tid = spawn(&writer); // Read loop auto src = stdin.by!(ubyte)(); for (;;) { auto buffer = UniqueArray!(ubyte)(bufferSize); auto length = copy(take(src, bufferSize), buffer).length; send(tid, move(buffer)); if (length == 0) break; } } void writer() { // Write loop auto tgt = stdout.by!(ubyte)(); for (;;) { auto buffer = receiveOnly!(UniqueArray!ubyte)(); copy(buffer, tgt); } } Andrei
Sorry for the monologue. Actually I reworked the example into the even simpler: import std.concurrency, std.stdio; void main() { enum bufferSize = 1024 * 100; auto tid = spawn(&writer); // Read loop foreach (immutable(ubyte)[] buffer; stdin.byChunk(bufferSize)) { send(tid, buffer); } } void writer() { // Write loop for (;;) { auto buffer = receiveOnly!(immutable(ubyte)[])(); tgt.rawWrite(buffer); } } We actually have implemented all the pieces to make this work.
Shouldn't you declare "tgt" somewhere (you did in your first example... Jerome
I meant stdout instead of tgt. Andrei
Jan 18 2010
prev sibling next sibling parent Don <nospam nospam.com> writes:
Daniel wrote:
 I don't think I like D 2.0 over 1.0.  Before you all run out to get me some
tissue, I figured I'd explain my
 rationale.
[snip]
 Things D missed:
 
 cent and ucent should be available via SSE.  Over 97% of computers now have it
and I still can't even assign to an
 SSE register?
SSE does NOT have cent or ucent. What it does have is float[4], which became a value type just a few compiler releases ago.
 Don Clungston had some excellent code written up in BLADE found on dsource 2
years ago.  Shouldn't that have
 become part of D native?
In a sense, it has. Array operations are the perfect front-end for it, which has made my original code largely obsolete. The back-end code generation is still not in place, but it's a matter of time. There are so many things to do...
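For reference, the array-operation syntax Don is referring to is whole-slice arithmetic with no explicit loop, which a back end is then free to lower to SSE. This minimal example (and the name `scaleAdd`) is illustrative, not from BLADE:

```d
// Whole-slice ("vector") operations: the compiler, not the programmer,
// writes the element loop, and may vectorize it.
float[4] scaleAdd(float[4] a, float[4] b)
{
    float[4] c;
    c[] = a[] * 2 + b[];  // element-wise: c[i] = a[i] * 2 + b[i]
    return c;
}

void main()
{
    assert(scaleAdd([1, 2, 3, 4], [10, 20, 30, 40]) == [12f, 24f, 36f, 48f]);
}
```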
Jan 18 2010
prev sibling next sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
Daniel wrote:
 I also kind of hoped we'd see standard real world unit types in D.  Like sqft,
meters, ohms, kg, L, Hz, seconds,
 etc. native to D.  Being able to intrinsically convert types between these
things would make D the most
 approachable General Programming Language for real world problems.
Here it is:

// by Oskar Linde Aug 2006
// This is just a quick hack to test
// IFTI operators opMul and opDel

import std.stdio;
import std.math;
import std.string;

version = unicode;

struct SiQuantity(T, int e1, int e2, int e3, int e4, int e5, int e6, int e7)
{
    T value = 0;
    alias T ValueType;
    const exp1 = e1;
    const exp2 = e2;
    const exp3 = e3;
    const exp4 = e4;
    const exp5 = e5;
    const exp6 = e6;
    const exp7 = e7;

    static assert(SiQuantity.sizeof == ValueType.sizeof);

    template AddDimensions(int mul, U)
    {
        static assert(is(U.ValueType == ValueType) || is(U == ValueType),
                      "incompatible value types");
        static if (is(U == ValueType))
            alias SiQuantity AddDimensions;
        else
            alias SiQuantity!(T, exp1+mul*U.exp1, exp2+mul*U.exp2,
                              exp3+mul*U.exp3, exp4+mul*U.exp4,
                              exp5+mul*U.exp5, exp6+mul*U.exp6,
                              exp7+mul*U.exp7) AddDimensions;
    }

    SiQuantity opAddAssign(SiQuantity rhs) { value += rhs.value; return this; }
    SiQuantity opSubAssign(SiQuantity rhs) { value -= rhs.value; return this; }

    const
    {
        SiQuantity opAdd(SiQuantity rhs) { SiQuantity ret; ret.value = value + rhs.value; return ret; }
        SiQuantity opSub(SiQuantity rhs) { SiQuantity ret; ret.value = value - rhs.value; return ret; }
        SiQuantity opNeg() { SiQuantity ret; ret.value = -value; return ret; }
        SiQuantity opPos() { typeof(return) ret; ret.value = value; return ret; }

        int opCmp(SiQuantity rhs)
        {
            if (value > rhs.value) return 1;
            if (value < rhs.value) return -1;
            return 0; // BUG: NaN
        }

        AddDimensions!(+1, Rhs) opMul(Rhs)(Rhs rhs)
        {
            AddDimensions!(+1, Rhs) ret;
            static if (is(Rhs : T)) ret.value = value * rhs;
            else ret.value = value * rhs.value;
            return ret;
        }

        AddDimensions!(-1, Rhs) opDiv(Rhs)(Rhs rhs)
        {
            AddDimensions!(-1, Rhs) ret;
            static if (is(Rhs : T)) ret.value = value / rhs;
            else ret.value = value / rhs.value;
            return ret;
        }

        SiQuantity opMul_r(T lhs) { SiQuantity ret; ret.value = lhs * value; return ret; }

        SiQuantity!(T,-e1,-e2,-e3,-e4,-e5,-e6,-e7) opDiv_r(T lhs)
        {
            SiQuantity!(T,-e1,-e2,-e3,-e4,-e5,-e6,-e7) ret;
            ret.value = lhs / value;
            return ret;
        }

        string toString()
        {
            string prefix = "";
            T multiplier = 1;
            T value = this.value;
            string unit;
            static if (is(typeof(UnitName!(SiQuantity))))
                unit = UnitName!(SiQuantity);
            else
            {
                value *= pow(cast(real)1e3, cast(uint)e2); // convert kg -> g
                // Take mass (e2) first to handle kg->g prefix issue
                if (e2 != 0) unit ~= format("·g^%s", e2);
                if (e1 != 0) unit ~= format("·m^%s", e1);
                if (e3 != 0) unit ~= format("·s^%s", e3);
                if (e4 != 0) unit ~= format("·A^%s", e4);
                if (e5 != 0) unit ~= format("·K^%s", e5);
                if (e6 != 0) unit ~= format("·mol^%s", e6);
                if (e7 != 0) unit ~= format("·cd^%s", e7);
                if (unit) unit = unit[2..$].split("^1").join("");
            }
            if (value >= 1e24)      { prefix = "Y"; multiplier = 1e24; }
            else if (value >= 1e21) { prefix = "Z"; multiplier = 1e21; }
            else if (value >= 1e18) { prefix = "E"; multiplier = 1e18; }
            else if (value >= 1e15) { prefix = "P"; multiplier = 1e15; }
            else if (value >= 1e12) { prefix = "T"; multiplier = 1e12; }
            else if (value >= 1e9)  { prefix = "G"; multiplier = 1e9; }
            else if (value >= 1e6)  { prefix = "M"; multiplier = 1e6; }
            else if (value >= 1e3)  { prefix = "k"; multiplier = 1e3; }
            else if (value >= 1)    { }
            else if (value >= 1e-3) { prefix = "m"; multiplier = 1e-3; }
            else if (value >= 1e-6)
            {
                version(unicode) prefix = "µ";
                else prefix = "u";
                multiplier = 1e-6;
            }
            else if (value >= 1e-9)  { prefix = "n"; multiplier = 1e-9; }
            else if (value >= 1e-12) { prefix = "p"; multiplier = 1e-12; }
            else if (value >= 1e-15) { prefix = "f"; multiplier = 1e-15; }
            else if (value >= 1e-18) { prefix = "a"; multiplier = 1e-18; }
            else if (value >= 1e-21) { prefix = "z"; multiplier = 1e-21; }
            else if (value >= 1e-24) { prefix = "y"; multiplier = 1e-24; }
            return format("%.3s %s%s", value/multiplier, prefix, unit);
        }
    }
}

// length                    meter    m
// mass                      kilogram kg
// time                      second   s
// electric current          ampere   A
// thermodynamic temperature kelvin   K
// amount of substance       mole     mol
// luminous intensity        candela  cd

// SI base quantities
alias SiQuantity!(real,1,0,0,0,0,0,0) Length;
alias SiQuantity!(real,0,1,0,0,0,0,0) Mass;
alias SiQuantity!(real,0,0,1,0,0,0,0) Time;
alias SiQuantity!(real,0,0,0,1,0,0,0) Current;
alias SiQuantity!(real,0,0,0,0,1,0,0) Temperature;
alias SiQuantity!(real,0,0,0,0,0,1,0) AmountOfSubstance;
alias SiQuantity!(real,0,0,0,0,0,0,1) Intensity;
alias SiQuantity!(real,0,0,0,0,0,0,0) UnitLess;

// Derived quantities
alias typeof(Length*Length) Area;
alias typeof(Length*Area) Volume;
alias typeof(Mass/Volume) Density;
alias typeof(Length*Mass/Time/Time) Force;
alias typeof(1/Time) Frequency;
alias typeof(Force/Area) Pressure;
alias typeof(Force*Length) Energy;
alias typeof(Energy/Time) Power;
alias typeof(Time*Current) Charge;
alias typeof(Power/Current) Voltage;
alias typeof(Charge/Voltage) Capacitance;
alias typeof(Voltage/Current) Resistance;
alias typeof(1/Resistance) Conductance;
alias typeof(Voltage*Time) MagneticFlux;
alias typeof(MagneticFlux/Area) MagneticFluxDensity;
alias typeof(MagneticFlux/Current) Inductance;
alias typeof(Intensity*UnitLess) LuminousFlux;
alias typeof(LuminousFlux/Area) Illuminance;

// SI fundamental units
const Length meter = {1};
const Mass kilogram = {1};
const Time second = {1};
const Current ampere = {1};
const Temperature kelvin = {1};
const AmountOfSubstance mole = {1};
const Intensity candela = {1};

// Derived units
const Frequency hertz = {1};
const Force newton = {1};
const Pressure pascal = {1};
const Energy joule = {1};
const Power watt = {1};
const Charge coulomb = {1};
const Voltage volt = {1};
const Capacitance farad = {1};
const Resistance ohm = {1};
const Conductance siemens = {1};
const MagneticFlux weber = {1};
const MagneticFluxDensity tesla = {1};
const Inductance henry = {1};
const LuminousFlux lumen = {1};
const Illuminance lux = {1};

template UnitName(U:Frequency)   { const UnitName = "Hz"; }
template UnitName(U:Force)       { const UnitName = "N"; }
template UnitName(U:Pressure)    { const UnitName = "Pa"; }
template UnitName(U:Energy)      { const UnitName = "J"; }
template UnitName(U:Power)       { const UnitName = "W"; }
template UnitName(U:Charge)      { const UnitName = "C"; }
template UnitName(U:Voltage)     { const UnitName = "V"; }
template UnitName(U:Capacitance) { const UnitName = "F"; }
version(unicode) {
    template UnitName(U:Resistance) { const UnitName = "Ω"; }
} else {
    template UnitName(U:Resistance) { const UnitName = "ohm"; }
}
template UnitName(U:Conductance) { const UnitName = "S"; }
template UnitName(U:MagneticFlux){ const UnitName = "Wb"; }
template UnitName(U:MagneticFluxDensity) { const UnitName = "T"; }
template UnitName(U:Inductance)  { const UnitName = "H"; }

void main()
{
    Area a = 25 * meter * meter;
    Length l = 10 * 1e3 * meter;
    Volume vol = a * l;
    Mass m = 100 * kilogram;
    assert(!is(typeof(vol / m) == Density));
    //Density density = vol / m; // dimension error -> syntax error
    Density density = m / vol;
    writefln("The volume is %s", vol.toString);
    writefln("The mass is %s", m.toString);
    writefln("The density is %s", density.toString);

    writef("\nElectrical example:\n\n");
    Voltage v = 5 * volt;
    Resistance r = 1 * 1e3 * ohm;
    Current i = v/r;
    Time ti = 1 * second;
    Power w = v*v/r;
    Energy e = w * ti;
    // One wishes the .toString was unnecessary...
    writefln("A current of ", i.toString);
    writefln("through a voltage of ", v.toString);
    writefln("requires a resistance of ", r.toString);
    writefln("and produces ", w.toString, " of heat.");
    writefln("Total energy used in ", ti.toString, " is ", e.toString);

    writef("\nCapacitor time curve:\n\n");
    Capacitance C = 0.47 * 1e-6 * farad; // Capacitance
    Voltage V0 = 5 * volt;               // Starting voltage
    Resistance R = 4.7 * 1e3 * ohm;      // Resistance
    for (Time t; t < 51 * 1e-3 * second; t += 1e-3 * second)
    {
        Voltage Vt = V0 * exp((-t / (R*C)).value);
        writefln("at %5s the voltage is %s", t.toString, Vt.toString);
    }
}
Jan 18 2010
next sibling parent Trass3r <un known.com> writes:
On 18.01.2010 12:08, Walter Bright  
<newshound1 digitalmars.com> wrote:

 Here it is:

 // by Oskar Linde Aug 2006
 // This is just a quick hack to test
 // IFTI operators opMul and opDel
Wow, that looks very neat. Couldn't/shouldn't that be added to Phobos?
Jan 18 2010
prev sibling parent reply BCS <none anon.com> writes:
Hello Walter,

 Daniel wrote:
 
 I also kind of hoped we'd see standard real world unit types in D.
 Like sqft, meters, ohms, kg, L, Hz, seconds,
 
 etc. native to D.  Being able to intrinsically convert types between
 these things would make D the most
 
 approachable General Programming Language for real world problems.
 
 Here it is:
 
 // by Oskar Linde Aug 2006
It might be a case of NIH, but that doesn't handle fractional units (for example MPa*m^1/2 is used in fracture mechanics and I suspect that they are fairly common as intermediate values in solving equations in odd directions).
Jan 18 2010
parent reply retard <re tard.com.invalid> writes:
Mon, 18 Jan 2010 19:43:54 +0000, BCS wrote:

 Hello Walter,
 
 Daniel wrote:
 
 I also kind of hoped we'd see standard real world unit types in D.
 Like sqft, meters, ohms, kg, L, Hz, seconds,
 
 etc. native to D.  Being able to intrinsically convert types between
 these things would make D the most
 
 approachable General Programming Language for real world problems.
 
 
 Here it is:
 
 // by Oskar Linde Aug 2006
It might be a case of NIH, but that doesn't handle fractional units (for example MPa*m^1/2 is used in fracture mechanics and I suspect that they are fairly common as intermediate values in solving equations in odd directions).
You can fix that by encoding the power of each unit as a fraction. No big deal. That's a good observation, though.
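Concretely, "encoding the power as a fraction" means carrying each dimension's exponent as a reduced numerator/denominator pair of compile-time integers, so m^(1/2) gets its own distinct type. A minimal one-dimension sketch in D2 (all names here are hypothetical illustrations, not taken from the actual si2.d):

```d
import std.stdio;

// Compile-time gcd so exponents stay in lowest terms and
// equal dimensions map to the same template instance.
int gcdCT(int a, int b) { return b == 0 ? (a < 0 ? -a : a) : gcdCT(b, a % b); }

// Only one dimension (length) shown; a full version would repeat
// the num/den pair for each base dimension.
struct Qty(int num, int den = 1)
{
    real value;

    // a^(n1/d1) * a^(n2/d2) = a^((n1*d2 + n2*d1)/(d1*d2)), then reduce.
    auto opBinary(string op : "*", int n2, int d2)(Qty!(n2, d2) rhs)
    {
        enum n = num * d2 + n2 * den;
        enum d = den * d2;
        enum g = gcdCT(n, d);
        return Qty!(n / g, d / g)(value * rhs.value);
    }
}

void main()
{
    auto sqrtMeter = Qty!(1, 2)(2.0);               // something in m^(1/2)
    auto meter = sqrtMeter * sqrtMeter;             // exponents add: 1/2 + 1/2
    static assert(is(typeof(meter) == Qty!(1, 1))); // reduced back to m^1
    writefln("%s", meter.value);                    // prints 4
}
```

The reduction by gcd is the important part: without it, m^(2/2) and m^(1/1) would be different types even though they are the same dimension.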
Jan 18 2010
parent reply "Lars T. Kyllingstad" <public kyllingen.NOSPAMnet> writes:
retard wrote:
 Mon, 18 Jan 2010 19:43:54 +0000, BCS wrote:
 
 Hello Walter,

 Daniel wrote:

 I also kind of hoped we'd see standard real world unit types in D.
 Like sqft, meters, ohms, kg, L, Hz, seconds,

 etc. native to D.  Being able to intrinsically convert types between
 these things would make D the most

 approachable General Programming Language for real world problems.
Here it is: // by Oskar Linde Aug 2006
It might be a case of NIH, but that doesn't handle fractional units (for example MPa*m^1/2 is used in fracture mechanics and I suspect that they are fairly common as intermediate values in solving equations in odd directions).
You can fix that by encoding the power of each unit as a fraction. No big deal. That's a good observation, though.
He already did. :) http://www.dsource.org/projects/scrapple/browser/trunk/units -Lars
Jan 18 2010
parent reply Chad J <chadjoan __spam.is.bad__gmail.com> writes:
Lars T. Kyllingstad wrote:
 retard wrote:
 Mon, 18 Jan 2010 19:43:54 +0000, BCS wrote:

 It might be a case of NIH, but that doesn't handle fractional units (for
 example MPa*m^1/2 is used in fracture mechanics and I suspect that they
 are fairly common as intermediate values in solving equations in odd
 directions).
You can fix that by encoding the power of each unit as a fraction. No big deal. That's a good observation, though.
He already did. :) http://www.dsource.org/projects/scrapple/browser/trunk/units -Lars
Alright. One other nit then. What if I want to extend it to add more types of units?

I've had occasion to use odd units like "tiles" or "pixels" or "sectors" and things like that when writing games. Then there's money. Also all of the imperial units that I don't care about, as well as the more obscure physical units.

I don't expect any of this to come out of the box, but it'd be nice to be able to easily add new types of units by defining what kind of quantity they describe (perhaps as a compile-time string) and what the scaling factor is to another known unit (unless it's axiomatic).

At any rate, I'd love to have a guarantee that my units are correct. This kind of stuff has cost me soooo much debugging and frustration in the past.
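The wish above (a compile-time string naming the quantity kind, plus a scale factor to a reference unit) can be sketched in a few lines of D2. Everything here is a hypothetical illustration, not the si2.d API; the 1 tile = 32 pixels ratio is an assumed example value:

```d
// A quantity kind is a compile-time string tag; a unit is that tag
// plus a scale factor to the kind's reference unit.
struct Unit(string kind)   { real scale; }
struct Amount(string kind) { real value; } // stored in reference units

// Build an amount from a count and a unit; mixing kinds is a type error.
Amount!kind times(string kind)(real n, Unit!kind u)
{
    return Amount!kind(n * u.scale);
}

void main()
{
    enum pixel = Unit!"screen-length"(1.0);  // reference unit
    enum tile  = Unit!"screen-length"(32.0); // assumed: 1 tile = 32 px

    auto w = times(3.0, tile);               // 96 reference units (pixels)
    auto h = times(10.0, pixel);
    assert(w.value + h.value == 106.0);      // same kind: fine to add

    // Different kinds don't mix, even though both wrap a real:
    enum dollar = Unit!"money"(1.0);
    static assert(!__traits(compiles,
        { Amount!"screen-length" x = times(1.0, dollar); }));
}
```

Because amounts are normalised to the reference unit at construction, tiles and pixels interoperate freely while "money" stays a distinct, incompatible type.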
Jan 18 2010
parent reply BCS <none anon.com> writes:
First:

I've updated my units type to d2.0 and cleaned it up a bit.

http://www.dsource.org/projects/scrapple/browser/trunk/units/si2.d

I'm offering this for inclusion in Phobos and/or Tango and am willing to 
license it as needed to make that work.

If anyone shows any interest at all I'll start a bugzilla ticket.

-------
Hello Chad,

 One other nit then.  What if I want to extend it to add more types of
 units?
A generally extendable type is overkill in my opinion.
 
 I've had occasion to use odd units like "tiles" or "pixels" or
 "sectors" and things like that when writing games.
I don't have a good answer to that one, but...
  Then there's
 money. 
Money when? Now? At some point in the future? And what interest rate are you using? I actually considered that one when I wrote my lib and quickly decided that it's WAY too complex.
 Also all of the imperial units that I don't care about, as
 well as the more obscure physical units.
My lib has every SI unit I could find and all the units I found that I recognised. The current version can have a new unit added as a single line of code.
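In the style of the {1} constants in the si.d code quoted upthread, "a single line" presumably means one scaled constant of an already-defined quantity type. These are hypothetical examples, not lines from the actual library (the conversion factors themselves are standard):

```d
// Each new unit is one scaled constant of an existing quantity type:
const Length   inch    = {0.0254};   // 1 in  = 0.0254 m (exact)
const Length   mile    = {1609.344}; // 1 mi  = 1609.344 m (exact)
const Pressure bar     = {1e5};      // 1 bar = 100 000 Pa
const Energy   calorie = {4.184};    // 1 cal = 4.184 J (thermochemical)
```

After that, `Length l = 3 * inch;` type-checks and converts exactly like the built-in SI units do.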
  I don't expect any of this
 to come out of the box, but it'd be nice to be able to easily add new
 types of units by defining what kind of quantity they describe
 (perhaps as a compile time string) and what the scaling factor is to
 another known unit (unless it's axiomatic).
That's (almost) exactly what I have.
 
 At any rate I'd love to have guarantee that my units are correct.
 This kind of stuff has cost me soooo much debugging and frustration in
 the past.
 
Jan 19 2010
next sibling parent reply SiegeLord <ps335 cornell.edu> writes:
BCS Wrote:

 Also all of the imperial units that I don't care about, as
 well as the more obscure physical units.
My lib has every SI unit I could find and all the units I found that I recognised. The current version can have a new unit added as a single line of code.
SI has 7 base units, your library has only 5 of them. You are missing luminosity (candelas) and amounts (moles).
Jan 19 2010
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
SiegeLord wrote:
 BCS Wrote:
 
 Also all of the imperial units that I don't care about, as
 well as the more obscure physical units.
My lib has every SI unit I could find and all the units I found that I recognised. The current version can have a new unit added as a single line of code.
SI has 7 base units, your library has only 5 of them. You are missing luminosity (candelas) and amounts (moles).
We need those. Andrei
Jan 19 2010
parent BCS <none anon.com> writes:
Hello Andrei,

 SiegeLord wrote:
 
 BCS Wrote:
 
 Also all of the imperial units that I don't care about, as well as
 the more obscure physical units.
 
My lib has every SI unit I could find and all the units I found that I recognised. The current version can have a new unit added as a single line of code.
SI has 7 base units, your library has only 5 of them. You are missing luminosity (candelas) and amounts (moles).
We need those.
Yah, you can make a better case for radians.
Jan 19 2010
prev sibling next sibling parent reply BCS <none anon.com> writes:
Hello SiegeLord,

 BCS Wrote:
 
 Also all of the imperial units that I don't care about, as well as
 the more obscure physical units.
 
My lib has every SI unit I could find and all the units I found that I recognised. The current version can have a new unit added as a single line of code.
SI has 7 base units, your library has only 5 of them. You are missing luminosity (candelas) and amounts (moles).
They would be easy to add. That said, all the other unit systems (CGS, MKS, imperial, etc.) only have those 5, and I find claiming that the last two are base units just a bit silly (candelas are defined in terms of power, steradians and an arbitrary curve, and moles are a count). I left them out in protest, but I'll add them if anyone cares enough to make much of an issue of it.
Jan 19 2010
next sibling parent "Simen kjaeraas" <simen.kjaras gmail.com> writes:
On Wed, 20 Jan 2010 03:47:58 +0100, BCS <none anon.com> wrote:

 Hello SiegeLord,

 BCS Wrote:

 Also all of the imperial units that I don't care about, as well as
 the more obscure physical units.
My lib has every SI unit I could find and all the units I found that I recognised. The current version can have a new unit added as a single line of code.
SI has 7 base units, your library has only 5 of them. You are missing luminosity (candelas) and amounts (moles).
They would be easy to add. That said, all the other unit systems (CGS, MKS, imperial, etc.) only have those 5, and I find claiming that the last two are base units just a bit silly (candelas are defined in terms of power, steradians and an arbitrary curve, and moles are a count). I left them out in protest, but I'll add them if anyone cares enough to make much of an issue of it.
I can see reasons to add the candela (though it is simply a function of other units, it is not as simple as an SI derived unit), but the mole is merely the Avogadro constant. I'd vote for this solution:

enum Mole = 6.022_141_79e23;
enum MolarMass = OfType.gram( 1 / Mole );

-- 
Simen
Jan 20 2010
prev sibling parent reply BCS <none anon.com> writes:
Hello SiegeLord,

 BCS Wrote:
 
 Also all of the imperial units that I don't care about, as well as
 the more obscure physical units.
 
My lib has every SI unit I could find and all the units I found that I recognised. The current version can have a new unit added as a single line of code.
SI has 7 base units, your library has only 5 of them. You are missing luminosity (candelas) and amounts (moles).
Because you complained I added the other two, but because I think they are silly I made it so you can switch them on and off (they are placed under a "version(FullSI)").

-- 
<IXOYE><
Jan 28 2010
parent reply bearophile <bearophileHUGS lycos.com> writes:
BCS:
 Because you complained I added the other two, but because I think they are 
silly I made it so you can switch them on and off (they are placed under 
 a "version(FullSI)").
What's the definition of "silly SI units"? Bye, bearophile
Jan 29 2010
parent BCS <none anon.com> writes:
Hello bearophile,

 BCS:
 
 Because you complained I added the other two, but because I think
they are silly I made it so you can switch them on and off (they are
 placed under a "version(FullSI)").
 
What's the definition of "silly SI units"?
Base SI units that are or can be defined in terms of other SI base units: mole and candela. SI is the only unit system that has them. -- <IXOYE><
Jan 29 2010
prev sibling parent reply SiegeLord <ps335 cornell.edu> writes:
Simen kjaeraas Wrote:

 On Wed, 20 Jan 2010 03:47:58 +0100, BCS <none anon.com> wrote:
 
 Hello SiegeLord,

 BCS Wrote:

 Also all of the imperial units that I don't care about, as well as
 the more obscure physical units.
My lib has every SI unit I could find and all the units I found that I recognised. The current version can have a new unit added as a single line of code.
SI has 7 base units, your library has only 5 of them. You are missing luminosity (candelas) and amounts (moles).
They would be easy to add. That said, all the other unit systems (CGS, MKS, imperial, etc.) only have those 5, and I find claiming that the last two are base units just a bit silly (candelas are defined in terms of power, steradians and an arbitrary curve, and moles are a count). I left them out in protest, but I'll add them if anyone cares enough to make much of an issue of it.
I can see reasons to add the candela (though it is simply a function of other units, it is not as simple as an SI derived unit), but the mole is merely the Avogadro constant. I'd vote for this solution:

enum Mole = 6.022_141_79e23;
enum MolarMass = OfType.gram( 1 / Mole );

-- 
Simen
Avogadro's number is not known precisely, and for that very reason the unit of mole is used. It is not even known precisely enough to fill out the 15 digits of precision that the double type provides, which just makes it unacceptable as a hard-coded constant (unlike, say, pi).

Secondly, by defining it as a constant like that you are robbing the user of 9 digits of precision for no good reason. With that approach, amounts like 1.234567 mol are not representable in a double, almost as bad as using a float! Chemists will not be pleased.

Thirdly, it's an SI unit. If you claim that the library supports the SI units, then you should do that. There are 7 base SI units, your personal objections notwithstanding. MKS and CGS do exist, but I've never seen them used in chemistry in my experience. Similarly, I've never seen chemists converting moles to the numbers of atoms they represent.

On a related note, and as a large limitation of this library, there are these things called 'natural' units, which are SI derived units with some complex constant in front. See here Particle_Physics.29 ) for example. It is impractical to treat them as derived units, primarily due to precision losses, so they must also be base units. Since there is an innumerable set of these (scientifically) valid units, it really should be possible for the user to define custom units somehow.

-SiegeLord
Jan 20 2010
next sibling parent SiegeLord <ps335 cornell.edu> writes:
Actually, nevermind on the precision arguments, precision doesn't work like
that. There may be other arguments for natural units that I can't think of now,
though...

-SiegeLord
Jan 20 2010
prev sibling parent BCS <none anon.com> writes:
Hello SiegeLord,

 Avogadro's number is not known precisely, and because of that very
 reason the unit of mole is used. It is not even known precisely enough
 to fill out the 15 digits of precision that double type provides,
 which just makes it unacceptable as a hard-coded constant (unlike say,
 pi).
Aside from precision considerations (and IIRC there's a team working on moving off an artifact-defined kg which, as a side effect, will solve that), your argument holds just as well for angles.
 
 Secondly, defining it as a constant like that you are robbing the user
 of 9 digits of precision for no good reason. With that approach,
 amounts like 1.234567 mol are not representable in a double, almost as
 bad as using a float! Chemists will not be pleased.
Why are they not representable as a double? Also, treating the mole as unitless with a magnitude of one gets you almost everything you could want (in fact the only difference you get is that 1 and 1 mole are different types; the math, the bits in memory and everything else is identical).
 
 Thirdly, it's an SI unit. If you claim that the library supports the
 SI units, then you should do that. There are 7 base SI units, your
 personal objections non-withstanding. MKS and CGS do exist, but I've
 never seen them used in Chemistry in my experience. Similarly, I've
 never seen chemists converting moles to the numbers of atoms they
 represent.
Good point; however, aside from naming the type "SI" (and I can change that), I never made that claim.
 
 On a related note, and as a large limitation of this library, there
 are these things called 'natural' units, which are SI derived units
 with some complex constant in front. See here

 ticle_Physics.29 ) for example. It is impractical to treat them as
 derived units, primarily due to precision losses, so they must also be
 base units. Since there's an innumerable number of these
 (scientifically) valid units, it really should be possible to define
 custom units by the user somehow.
I /think/ you can show that as long as you stick to only SI or only natural units, the added error from not having natural units will be approximately equal to the round-off error from a few multiplications and an equal number of divisions, and will be independent of the math in between and of the precision of the conversion constant used.

I'm adamantly opposed to a system that allows adding arbitrary units, particularly one that treats meters and some natural length unit as different types. I'm opposed because those units are in fact the same, and the only reason not to treat them the same is an implementation detail: the limitations of a discrete system.

That said, it wouldn't be too hard to set things up so that you can have an alternate set of base units (you would have one type that internally converts to SI and another, different, type that converts to, for example, the natural units for particle physics).
Jan 20 2010
prev sibling next sibling parent reply Ben Hanson <Ben.Hanson tfbplc.co.uk> writes:
dsimcha Wrote:

 == Quote from "Jérôme M. Berger" (jeberger free.fr)'s article
 PS: At work, we mustn't use C++ because:
 - It's slow;
 - Its standard library is too big (100k);
 - In a future product, we might want to reuse this module and not
 have C++ (Oh, yes I didn't tell you that we *do* have the C++ stdlib
 in our products because the web browser they bought to run their
 HTML+Javascript+HTML+XML+C+XML+C+XML+C GUI uses it, but *we* aren't
 allowed to, fckng morons)
This is a great point and deserves to be highlighted: D was meant to be a better C++, not a better C. If someone won't use C++ instead of C (apparently there are a decent number of these people), then there's not a snowball's chance in hell they'd use D, even if we fixed the binary size issue, made D more usable without a GC, and in general made it in every way at least as efficient as C++.
That's all very well - you don't have to look further than Linus Torvalds for the attitude that C is the be-all and end-all and anything else is just cheating...

*But* there are plenty of C++ programmers who really do care about every little bit of performance. Indeed, C++ programmers who are paranoid about performance and perhaps write C++ a little too much in the C style are often criticised and told to modernise - 'the compiler will take care of it' etc.

When I wrote lexertl (http://www.benhanson.net/lexertl.html) it was using VC++ 6 and I ended up with the following case:

void remove_duplicates ()
{
    const CharT *start_ = _charset.c_str ();
    const CharT *end_ = start_ + _charset.size ();

    // Optimisation for very large charsets:
    // sorting via pointers is much quicker than
    // via iterators...
    std::sort (const_cast<CharT *> (start_),
        const_cast<CharT *> (end_));
    _charset.erase (std::unique (_charset.begin (),
        _charset.end ()), _charset.end ());
}

Later I was told 'oh, VC++ 6 is dead, no need for such crufty hacks, modern compilers take care of it'. But what's this? Microsoft now consider the STL to be unsafe and have added loads of range checking, slowing down C++ massively. Now it's true that most of that checking occurs in Debug, but even in Release some of it is there. This kind of thing really burns those coders who are trying to save every cycle.

Again, for the really hard-core C coders' point of view see this: http://blogs.msdn.com/oldnewthing/archive/2005/06/13/428534.aspx Scroll to the last comment.

The point is coder mentality is a sliding scale. Don't give the C bigots any more ammunition than they already have... Please!

Regards,

Ben
Jan 20 2010
parent Ben Hanson <Ben.Hanson tfbplc.co.uk> writes:
By the way I forgot a link to 'Secure STL':
http://channel9.msdn.com/shows/Going+Deep/STL-Iterator-Debugging-and-Secure-SCL/

'nuff said.

Regards,

Ben
Jan 20 2010
prev sibling next sibling parent Craig Black <cblack ara.com> writes:
 I am a bit suspicious of this. GC scans can slow down a little, but I'm not
seeing this as a big problem so far. You can test and benchmark some of your
theories. A problem I've seen is caused by the not precise nature of the GC,
wrong pointers keeping dead things alive.
Here's an interesting thing I noticed when benchmarking my app. If I comment out my call to nedfree in my overloaded delete operator, the application slows down by 17%. This is because when memory is being freed as the program runs, allocated memory tends to have better locality of reference. Thus, even if a GC scan never takes place, a GC will still be 17% slower than nedmalloc in my application.

-Craig
Jan 20 2010
prev sibling parent reply asd <asd example.invalid> writes:
Andrei Alexandrescu Wrote:

 Why would having one chunk of code get checked for calls to the GC and 
 another not be any more complicated than mixing 
 malloc/free+add/removeRoot with normal GC? I'm beginning to wonder if 
 I'm calling for something different than other people are.
 
 What I'm thinking of would have zero effect on the generated code, the 
 only effect it would have is to cause an error when some code would 
 normally attempt to invoke the GC.
It's much more complicated than that. What if a library returns an object or an array to another library? Memory allocation strategy is a cross-cutting concern.
Optional GC is in Objective-C already and it works great!

NB: In this context by GC I mean automatic mark-sweep garbage collection and *not* refcounting. I'm ignoring Cocoa's retain/release here, because it's not directly relevant.

There are 3 options:

-no-gc. Like pure C - explicit manual memory management is used exclusively. Can't talk to GC code at all. (I don't recommend that for D.)

-gc-supported. The program's memory is not garbage collected, but the program is able to talk to GCd OS and libraries. You use manual memory management, but the compiler inserts write barriers and other stuff needed for seamless mixing with GCd pointers. (I imagine that'd be D's -nogc.)

-gc-only. Like D (-gc-supported code looks like C from D's perspective).
Jan 24 2010
parent BCS <none anon.com> writes:
Hello asd,

 Optional GC is in Objective-C already and it works great!
 
 NB: In this context by GC I mean automatic mark-sweep garbage
 collection and *not* refcouting. I'm ignoring here Cocoa's
 retain/release, because it's not directly relevant.
 
 There are 3 options:
 
 -no-gc. Like pure C - explicit manual memory management is used
 exclusively. Can't talk to GC code at all. (I don't recommend that for
 D).
OK, I can tell right there that you are not talking about what I'm talking about. The case I'm looking at would never stop one chunk of code from calling another.
 
 -gc-supported. Program's memory is not garbage collected, but the
 program is able to talk to GCd OS and libraries. You use manual memory
 management, but compiler inserts write barriers and other stuff needed
 for seamless mixing with GCd pointers. (I imagine that'd be D's
 -nogc).
Two bit's here are different than what I want: 1) the generated code wouldn't change, no write barriers nothing. And 2) there would be no difference at any level between GCd pointers and non GCd pointers. You could have the same pointer contain a reference to GCd data at one point and non GCd data at another and the compiler wouldn't even notice. The reason I'm liking the idea is that it adds some useful abilities and *all* of the complexities that it entails already exist in D right now and have existed almost from the get go. ---- <IXOYE><
Jan 24 2010