
digitalmars.D.announce - GDC release 0.23

David Friedman <dvdfrdmn users.ess-eff.net> writes:
GDC now supports 64-bit targets! A new x86_64 Linux binary is
available and the MacOS X binary supports x86_64 and ppc64.

http://sourceforge.net/project/showfiles.php?group_id=154306

Changes:
   * Added support for 64-bit targets
   * Added multilib support
   * Updated to DMD 1.007
   * Fixed Bugzilla 984, 1013
Mar 05 2007
renoX <renosky free.fr> writes:
David Friedman wrote:
 GDC now supports 64-bit targets! A new x86_64 Linux binary is
 available and the MacOS X binary supports x86_64 and ppc64.
 
 http://sourceforge.net/project/showfiles.php?group_id=154306
 
 Changes:
   * Added support for 64-bit targets
   * Added multilib support
   * Updated to DMD 1.007
   * Fixed Bugzilla 984, 1013
Thanks for your hard work! renoX
Mar 05 2007
Anders F Björklund <afb algonet.se> writes:
David Friedman wrote:

 GDC now supports 64-bit targets! A new x86_64 Linux binary is
 available and the MacOS X binary supports x86_64 and ppc64.
Excellent news! I'll try it on ppc64 Linux too (Fedora Core)

Question: It says "The MacOS X universal binary package requires
XCode 2.4.1 or the equivalent version of cctools."
But the GCC version (5363) is from Xcode 2.4 (not 2.4.1) ?
(as far as I know Apple hasn't released gcc-5367 sources)

--anders
Mar 05 2007
Anders F Björklund <afb algonet.se> writes:
 GDC now supports 64-bit targets! A new x86_64 Linux binary is
 available and the MacOS X binary supports x86_64 and ppc64.
Excellent news! I'll try it on ppc64 Linux too (Fedora Core)
Except for some strange (temporary?) build error with soft-float,
it built just fine for powerpc64-unknown-linux-gnu (with FC5/PPC*)

Could post some binaries later if wanted, the config diffs were:
http://www.algonet.se/~afb/d/gdc-0.23-powerpc-config.diff (Phobos)

Only tried "Hello World", but it seems to be working as advertised.
(i.e. -m32 gives you a 32-bit and -m64 gives you a 64-bit binary)

So you can probably add "PowerPC64" to the list of Linux platforms!

--anders

* The FC5/PPC installation supports both ppc and ppc64 platforms,
  unlike FC5/x86 which has one "i386" and one "x86_64" install.
Mar 06 2007
Sean Kelly <sean f4.ca> writes:
Anders F Björklund wrote:
 GDC now supports 64-bit targets! A new x86_64 Linux binary is
 available and the MacOS X binary supports x86_64 and ppc64.
Excellent news! I'll try it on ppc64 Linux too (Fedora Core)
Except for some strange (temporary?) build error with soft-float, it built just fine for powerpc64-unknown-linux-gnu (with FC5/PPC*)
That reminds me.  Is it really a good idea to map the GCC/PPC "long
double" to "real" in D?  I know this has come up before:

http://www.digitalmars.com/d/archives/digitalmars/D/20790.html

and the data type seems like an aberration.  Here is some more info:

http://lists.apple.com/archives/Darwin-development/2001/Jan/msg00499.html

And from the ELF ABI:

This "Extended precision" differs from the IEEE 754 Standard in the
following ways:

* The software support is restricted to round-to-nearest mode.
  Programs that use extended precision must ensure that this rounding
  mode is in effect when extended-precision calculations are performed.
* Does not fully support the IEEE special numbers NaN and INF. These
  values are encoded in the high-order double value only. The low-order
  value is not significant.
* Does not support the IEEE status flags for overflow, underflow, and
  other conditions. These flags have no meaning in this format.

I can't claim to have the maths background of some folks here, but
this suggests to me that this 128-bit representation isn't truly
IEEE-754 compliant and therefore probably shouldn't be a default data
type in D?

Sean
Mar 06 2007
Frits van Bommel <fvbommel REMwOVExCAPSs.nl> writes:
Sean Kelly wrote:
 Anders F Björklund wrote:
 GDC now supports 64-bit targets! A new x86_64 Linux binary is
 available and the MacOS X binary supports x86_64 and ppc64.
Excellent news! I'll try it on ppc64 Linux too (Fedora Core)
Except for some strange (temporary?) build error with soft-float, it built just fine for powerpc64-unknown-linux-gnu (with FC5/PPC*)
That reminds me. Is it really a good idea to map the GCC/PPC "long double" to "real" in D? I know this has come up before: http://www.digitalmars.com/d/archives/digitalmars/D/20790.html and the data type seems like an aberration. Here is some more info:
[snip references]
 
 I can't claim to have the maths background of some folks here, but this 
 suggests to me that this 128-bit representation isn't truly IEEE-754 
 compliant and therefore probably shouldn't be a default data type in D?
From reading that I get the impression that this "long double" is
actually two doubles that software pretends to be a more-precise
single number. If that's correct, I think this may indeed be a bad idea.

I'm also pretty sure it's in fact against the spec:
http://www.digitalmars.com/d/type.html describes 'real' as the
"largest hardware implemented floating point size". And this data type
seems to be software-implemented rather than hardware-implemented...
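
For illustration, the representation being described can be sketched
roughly like this in D -- a hypothetical toy type, not what GDC
actually emits:

 struct DoubleDouble
 {
     double hi;   // leading part: the value rounded to a double
     double lo;   // trailing error term, |lo| <= half an ulp of hi
 }

 // Knuth's two-sum: the exact sum of two doubles, split into hi + lo.
 DoubleDouble twoSum(double a, double b)
 {
     DoubleDouble r;
     r.hi = a + b;
     double bv = r.hi - a;                 // portion of b absorbed by hi
     r.lo = (a - (r.hi - bv)) + (b - bv);  // what was rounded away
     return r;
 }

The hardware only ever sees ordinary doubles; the extra precision
lives entirely in the hi/lo bookkeeping.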
Mar 06 2007
Anders F Björklund <afb algonet.se> writes:
Sean Kelly wrote:

 That reminds me.  Is it really a good idea to map the GCC/PPC "long 
 double" to "real" in D?  I know this has come up before:
 
 http://www.digitalmars.com/d/archives/digitalmars/D/20790.html
No, it's not a good idea to do this on the PowerPC, if the
-mlong-double-128 option is used. (i.e. instead of 64 bits)

I haven't checked what the default ABI is on later versions,
but I'm afraid that "double double" might be the default now...

Staying clear of the "real" type on non-X86 sounds like an idea.
 I can't claim to have the maths background of some folks here, but this 
 suggests to me that this 128-bit representation isn't truly IEEE-754 
 compliant and therefore probably shouldn't be a default data type in D?
The D definition of real as the "largest hardware supported type"
indicates that it should be defined as double on PowerPC (32/64).

At least until there is some kind of new model that does 128-bit
floating point in hardware, instead of faking it with two 64-bit.

I think that things would have been better with an 80-bit D type.

--anders
Mar 06 2007
Don Clugston <dac nospam.com.au> writes:
Sean Kelly wrote:
 Is it really a good idea to map the GCC/PPC "long double" to "real"
 in D? [snip]
 I can't claim to have the maths background of some folks here, but
 this suggests to me that this 128-bit representation isn't truly
 IEEE-754 compliant and therefore probably shouldn't be a default data
 type in D?
Sean, thanks for posting that; I was about to do the same.

The IEEE-754 standard is pretty clear on the fact that these
double-doubles are *NOT* IEEE-754 compliant. (In fact, storing the
significand in two parts is already enough to be non-compliant).

Allowing this as a 'real' is a bit of a disaster; for example, it
means that IEEE status flags and precision modes cannot be used
portably on any platform, and it makes support for even basic
mathematical functions extremely difficult.

The stupid thing is, that the underlying double-double type exists on
all platforms! There would be great benefit someday in supporting
double-doubles in the compiler -- just not as a 'real'.

IMHO, 'real' should map to 'double' on PPC.
Mar 06 2007
David Friedman <dvdfrdmn users.ess-eff.net> writes:
Sean Kelly wrote:
 Is it really a good idea to map the GCC/PPC "long double" to "real"
 in D? [snip]
 I can't claim to have the maths background of some folks here, but
 this suggests to me that this 128-bit representation isn't truly
 IEEE-754 compliant and therefore probably shouldn't be a default data
 type in D?
The double+double type has caused me no end of trouble, but I think it
is important to maintain interoperability with C.  If I make the D
'real' implementation IEEE double, there would be no way to interact
with C code that uses 'long double'.  I could add another floating
point type for this purpose, but that would diverge from the D spec
more than what I have now.

David
Mar 06 2007
Daniel Keep <daniel.keep.lists gmail.com> writes:
David Friedman wrote:
 [snip]
 The double+double type has caused me no end of trouble, but I think it
 is important to maintain interoperability with C.  If I make the D
 'real' implementation IEEE double, there would be no way to interact
 with C code that uses 'long double'.  I could add another floating
 point type for this purpose, but that would diverge from the D spec
 more than what I have now.

 David
I might be off-base, but couldn't you do something like this?
 version( GDC )
     pragma(gdc_enable, gdc_longdouble);

 // ...

 extern(C) gdc_longdouble foo(gdc_longdouble);
Yes, you'd be adding a new type, but at least it's hidden behind a
pragma.

Interfacing with C code is fairly important to D, but I'd hate to have
fp code that works fine under DMD and then mysteriously breaks with
gdc--I think compatibility with other D implementations should come
before interfacing with C.

Just my AU$0.02.

  -- Daniel

-- 
Unlike Knuth, I have neither proven or tried the above; it may not
even make sense.

v2sw5+8Yhw5ln4+5pr6OFPma8u6+7Lw4Tm6+7l6+7D
i28a2Xs3MSr2e4/6+7t4TNSMb6HTOp5en5g6RAHCP
http://hackerkey.com/
Mar 06 2007
Sean Kelly <sean f4.ca> writes:
David Friedman wrote:
 
 The double+double type has caused me no end of trouble, but I think it 
 is important to maintain interoperability with C.  If I make the D 
 'real' implementation IEEE double, there would be no way interact with C 
 code that uses 'long double'.  I could add another floating point type 
 for this purpose, but that would diverge from the D spec more than what 
 I have now.
Yeah that doesn't sound like a very attractive option.  Some of the
later replies in the Darwin thread mention a compiler switch:

http://lists.apple.com/archives/Darwin-development/2001/Jan/msg00471.html

Is that a possibility?  Or did that switch not make it into an actual
release?

Sean
Mar 06 2007
Anders F Björklund <afb algonet.se> writes:
Sean Kelly wrote:

 The double+double type has caused me no end of trouble, but I think it 
 is important to maintain interoperability with C.  If I make the D 
 'real' implementation IEEE double, there would be no way interact with 
 C code that uses 'long double'.  I could add another floating point 
 type for this purpose, but that would diverge from the D spec more 
 than what I have now.
Yeah that doesn't sound like a very attractive option. Some of the later replies in the Darwin thread mention a compiler switch: http://lists.apple.com/archives/Darwin-development/2001/Jan/msg00471.html Is that a possibility? Or did that switch not make it into an actual release?
There are two switches: -mlong-double-64 and -mlong-double-128,
just that the second one ("double-double") is now the default...

So if you changed the meaning of "long double" back to the old one
(i.e. same as "double"), it wouldn't be compatible with C/C++ ABI ?

This is similar to the -m96bit-long-double and -m128bit-long-double
for Intel, but those just change the padding (not the 80-bit format)
But on the X86_64 architecture, a "long double" is now padded to 16
bytes instead of the previous 12 bytes (the actual data is 10 bytes)

These were all known problems with adding "real" as a built-in, though.
In all the D specs I've seen, it's pretty much #defined to long double.
Such as http://www.digitalmars.com/d/htod.html
        http://www.digitalmars.com/d/interfaceToC.html

Might as well keep the real <-> long double one-to-one mapping, and
recommend *not* using real/ireal/creal types for any portable code ?

--anders
Mar 07 2007
Don Clugston <dac nospam.com.au> writes:
Anders F Björklund wrote:
 [snip]

 Might as well keep the real <-> long double one-to-one mapping, and
 recommend *not* using real/ireal/creal types for any portable code ?
No, that does not work. double is *not* portable! I'll say it again,
because it's such a widespread myth: **double is not portable**.

Only about 20% of computers world-wide have native support for
calculations at 64-bit precision! More than 90% have native support
for 80-bit precision. (The most common with 64-bit precision are PPC
and Pentium4. Earlier Intel/AMD CPUs do not support it).

Suppose you have the code

 double a;
 a = expr1 + expr2;

where expr1 and expr2 are expressions. Then you want to split this
expression in two:

 b = expr1;
 a = b + expr2;

Q. What type should 'b' be, so that the value of 'a' is unchanged?
A. For x87, it should be an 80-bit fp number. For PPC, it should be a
64-bit fp number.

Using 'double' on x87 for intermediate results causes roundoff to
occur twice. That's what 'real' is for -- it prevents weird things
happening behind your back. There is no choice -- intermediate
calculations are done at 'real' precision, and the precision of 'real'
is not constant across platforms.

In adding 'real' to D, Walter hasn't just provided the possibility to
use 80-bit floating point numbers -- that's actually a minor issue.
'real' reflects the underlying reality of the hardware.
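
To make that concrete, here is a sketch (the values are arbitrary,
just stand-ins for real sub-expressions):

 void example()
 {
     double x = 0.1, y = 0.3;        // illustrative values only

     double a1 = (x / y) + (y / x);  // rounded to double once, at the end

     real   b  = x / y;              // correct split: temporary is 'real'
     double a2 = b + (y / x);        // should match a1 on x87 and on PPC

     double c  = x / y;              // wrong split: rounds too early on x87
     double a3 = c + (y / x);        // may differ from a1 (double rounding)
 }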
Mar 07 2007
Anders F Björklund <afb algonet.se> writes:
Don Clugston wrote:

 Might as well keep the real <-> long double one-to-one mapping, and
 recommend *not* using real/ireal/creal types for any portable code ?
No, that does not work. double is *not* portable! I'll say it again, because it's such a widespread myth: **double is not portable**. Only about 20% of computers world-wide have native support for calculations at 64-bit precision! More than 90% have native support for 80-bit precision. (The most common with 64-bit precision are PPC and Pentium4. Earlier Intel/AMD CPUs do not support it).
The actual suggestion made was to make "real" into an *alias* instead.

That is, you would have one type "extended" that would be 80-bit
(and not available* on PowerPC/SPARC except with software emulation)
and one type "double" that would be 64 and one type "float"/32...

Then "quad" could be reserved as a future keyword for IEEE 128-bit,
just as "cent" is reserved for 128-bit integers. (it's not important)
Using extended precision floats on Intel is not a bad thing at all.

I think we agree that using double on X86 (or long double on others)
isn't optimal, because of the round-off (or even missing exceptions).
So you probably will need different types on different architectures.

But as it is now, "real" in D is the same as "long double" in C/C++.

So you would have to make a new alias for the D floating point type,
and then alias it over to real on X86 and to double on the others ?

Or perhaps use "float" instead, for vectorization opportunities ? :-)

--anders

* not available or using double-double, it doesn't matter much ?
  software emulated or not fully IEEE, either way not for "real"
  (i.e. the "largest hardware implemented floating point size")
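
To show the shape of that idea in D -- the name "extended" and the
version identifiers are hypothetical, nothing more than a sketch:

 version (X86)
     alias real extended;      // true 80-bit hardware type
 else version (X86_64)
     alias real extended;
 else
     alias double extended;    // PPC/SPARC: largest true hardware format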
Mar 07 2007
Don Clugston <dac nospam.com.au> writes:
Anders F Björklund wrote:
 Don Clugston wrote:
 
 Might as well keep the real <-> long double one-to-one mapping, and
 recommend *not* using real/ireal/creal types for any portable code ?
No, that does not work. double is *not* portable! I'll say it again, because it's such a widespread myth: **double is not portable**. Only about 20% of computers world-wide have native support for calculations at 64-bit precision! More than 90% have native support for 80-bit precision. (The most common with 64-bit precision are PPC and Pentium4. Earlier Intel/AMD CPUs do not support it).
The actual suggestion made was to make "real" into an *alias* instead.
OK. In many ways that would be better; in reality, when writing a math library, you always have to know what precision you're using.
 That is, you would have one type "extended" that would be 80-bit
 (and not available* on PowerPC/SPARC except with software emulation)
 and one type "double" that would be 64 and one type "float"/32...
 
 Then "quad" could be reserved as a future keyword for IEEE 128-bit,
 just as "cent" is reserved for 128-bit integers. (it's not important)
 Using extended precision floats on Intel is not a bad thing at all.
 
 I think we agree that using double on X86 (or long double on others)
 isn't optimal, because of the round-off (or even missing exceptions).
 So you probably will need different types on different architectures.
 
 But as it is now, "real" in D is the same as "long double" in C/C++.

 So you would have to make a new alias for the D floating point type,
 and then alias it over to real on X86 and to double on the others ?
I think that could work. Although for the others, it might need to be a typedef rather than an alias, so that you can overload real + double without causing compilation problems? (I'm not sure about this). And you want 'real' to appear in error messages.
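
A tiny sketch of the difference, with a made-up name 'hwreal':

 typedef double hwreal;     // 'typedef' introduces a distinct type...
 // alias  double hwreal;   // ...whereas 'alias' would make these collide

 void f(double x) {}
 void f(hwreal x) {}        // fine with typedef; a duplicate with alias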
Mar 07 2007
Sean Kelly <sean f4.ca> writes:
Anders F Björklund wrote:
 [snip]
There are two switches: -mlong-double-64 and -mlong-double-128, just that the second one ("double-double") is now the default...
Oh I see.  That thread above suggested the opposite.

Could GDC simply key the size of real off this switch as well then?
If the point is for real to map to double-double, then it must be
aware of it, correct?  I know it's not ideal to have the size of any
variable change dynamically, but this seems like a case where doing so
may actually be desirable.

Sean
Mar 07 2007
Anders F Björklund <afb algonet.se> writes:
Sean Kelly wrote:

 Yeah that doesn't sound like a very attractive option.  Some of the 
 later replies in the Darwin thread mention a compiler switch:

 http://lists.apple.com/archives/Darwin-development/2001/Jan/msg00471.html 

 Is that a possibility?  Or did that switch not make it into an actual 
 release?
There are two switches: -mlong-double-64 and -mlong-double-128, just that the second one ("double-double") is now the default...
Oh I see. That thread above suggested the opposite. Could GDC simply key the size of real off this switch as well then? If the point is for real to map to double-double, then it must be aware of it, correct? I know it's not ideal to have the size of any variable change dynamically, but this seems like a case where doing so may actually be desirable.
The thread was old, things change. Especially: from GCC 3.3 to GCC 4.0

http://developer.apple.com/releasenotes/DeveloperTools/RN-GCC4/index.html

"In previous releases of GCC, the long double type was just a synonym
 for double. GCC 4.0 now supports true long double. In GCC 4.0 long
 double is made up of two double parts, arranged so that the number of
 bits of precision is approximately twice that of double."

(this was for Apple GCC, but Linux PPC went through a similar change)

Older versions of PPC operating systems used 64-bit for "long double",
newer versions use 128-bit. Both are still in use, so we won't know.

And since the D "real" type simply maps over to C/C++ "long double",
it means that it will be either 64-bit, 80-bit or 128-bit. Varying.

--anders
Mar 07 2007
Don Clugston <dac nospam.com.au> writes:
Anders F Björklund wrote:
 [snip]

 Older versions of PPC operating systems used 64-bit for "long double",
 newer versions use 128-bit. Both are still in use, so we won't know.
Ugh. That's really horrible.
 And since the D "real" type simply maps over to C/C++ "long double",
 it means that it will be either 64-bit, 80-bit or 128-bit. Varying.
We've got to keep that piece of lunacy out of D somehow. Could we
define it as __longdouble or something? (Ideally only for PPC, so that
error messages remain sensible on other platforms).

At least in D, we can static-if an alias or typedef on the basis of
mant_dig.
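
Something along those lines -- a sketch, with 'hwreal' as a made-up
name:

 // Pick a "hardware real" at compile time from real.mant_dig:
 //   64  -> x87 80-bit extended       (keep real)
 //   53  -> real is already a double  (keep real)
 //   106 -> PPC double-double         (fall back to plain double)
 static if (real.mant_dig == 64 || real.mant_dig == 53)
     alias real hwreal;
 else
     alias double hwreal;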
Mar 07 2007
Anders F Björklund <afb algonet.se> writes:
Don Clugston wrote:

 Older versions of PPC operating systems used 64-bit for "long double",
 newer versions use 128-bit. Both are still in use, so we won't know.
Ugh. That's really horrible.
It's called progress :-)
 And since the D "real" type simply maps over to C/C++ "long double",
 it means that it will be either 64-bit, 80-bit or 128-bit. Varying.
We've got to keep that piece of lunacy out of D somehow. Could we define it as __longdouble or something? (Ideally only for PPC, so that error messages remain sensible on other platforms). At least in D, we can static-if an alias or typedef on the basis of mant_dig.
I'll let David decide which one wins: D real === C long double, or
the definition of "largest hardware implemented floating point size"

But that real.sizeof varies between D platforms, that is in the very
definition of the type. (similar to how int used to work, 16/32)

It'll be 10/12/16 bytes on Intel, and 8/16 on those other CPU types.
(i.e. 80-bits with padding on X86/x86_64, and 1 or 2 doubles on PPC)

--anders
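
A quick way to see what any given build actually gives you
(a small sketch, using Phobos' std.stdio):

 import std.stdio;

 void main()
 {
     // e.g. "real: 12 bytes, 64 mantissa bits" with GDC on 32-bit x86 Linux
     writefln("real: %s bytes, %s mantissa bits",
              real.sizeof, real.mant_dig);
 }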
Mar 07 2007
Sean Kelly <sean f4.ca> writes:
Anders F Björklund wrote:
 Don Clugston wrote:
 
 Older versions of PPC operating systems used 64-bit for "long double",
 newer versions use 128-bit. Both are still in use, so we won't know.
Ugh. That's really horrible.
It's called progress :-)
*snicker*
 And since the D "real" type simply maps over to C/C++ "long double",
 it means that it will be either 64-bit, 80-bit or 128-bit. Varying.
We've got to keep that piece of lunacy out of D somehow. Could we define it as __longdouble or something? (Ideally only for PPC, so that error messages remain sensible on other platforms). At least in D, we can static-if an alias or typedef on the basis of mant_dig.
I'll let David decide which one wins: D real === C long double, or the definition of "largest hardware implemented floating point size"
I think the salient point is "hardware implemented."  However, as
David mentioned, C compatibility is important as well.  Perhaps it
would be best to restrict real to the actual maximum size supported in
hardware and to add an alias, typedef, struct, whatever, to allow C
interop.  This would be similar to what we already have to do for
interfacing with C long/ulong across 32 and 64 bit platforms.

My only question here, then, is how translation might be performed
between a 64 bit "hardware" real and a 128 bit "software" real?  And
is this distinction truly useful?  Given the description of the
current 128 bit doubles, I can't see ever wanting to use them.

Sean
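
For reference, the existing C 'long' pattern looks roughly like this
(the alias name c_long is illustrative, and the sketch assumes LP64
platforms such as Linux/Mac, not Win64):

 version (X86_64)           // LP64: C 'long' is 64-bit
 {
     alias long  c_long;
     alias ulong c_ulong;
 }
 else                       // 32-bit targets: C 'long' is 32-bit
 {
     alias int   c_long;
     alias uint  c_ulong;
 }

 extern (C) c_long sysconf(int name);   // a C prototype that uses 'long'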
Mar 07 2007
Don Clugston <dac nospam.com.au> writes:
Anders F Björklund wrote:
 Don Clugston wrote:
 
 Older versions of PPC operating systems used 64-bit for "long double",
 newer versions use 128-bit. Both are still in use, so we won't know.
Ugh. That's really horrible.
It's called progress :-)
I was referring to the entire section, not just that line.
 
 And since the D "real" type simply maps over to C/C++ "long double",
 it means that it will be either 64-bit, 80-bit or 128-bit. Varying.
We've got to keep that piece of lunacy out of D somehow. Could we define it as __longdouble or something? (Ideally only for PPC, so that error messages remain sensible on other platforms). At least in D, we can static-if an alias or typedef on the basis of mant_dig.
I'll let David decide which one wins: D real === C long double, or the definition of "largest hardware implemented floating point size"
There's just no way D real can be equal to C long double in general,
when the C long double is playing silly games. The only reason gcc can
even do that, is that the C spec for floating point is ridiculously
vague, and consequently no-one actually uses long double. To mimic it
is to immediately break D's support for IEEE.

Seriously, the viability of D as a numeric computing platform is at
stake here.
Mar 07 2007
Walter Bright <newshound digitalmars.com> writes:
Don Clugston wrote:
 There's just no way D real can be equal to C long double in general, 
 when the C long double is playing silly games. The only reason gcc can 
 even do that, is that the C spec for floating point is ridiculously 
 vague, and consequently no-one actually uses long double. To mimic it is 
 to immediately break D's support for IEEE.
 
 Seriously, the viability of D as a numeric computing platform is at 
 stake here.
I don't really understand the issues with the PPC 'double double', but:

1) real.sizeof is 10 on Win32, and 12 on Linux. This caused some (now
fixed) compiler bugs, but shouldn't have affected any user code.

2) D floating point arithmetic is supposed to be IEEE 754. Some wacky
real type for the PPC is going to cause grief and break code in
irritating and unforeseeable ways.

3) The D implementation is allowed to evaluate floating point ops in
higher precision than that specified by the type. FP algorithms should
be coded to not break if more accurate answers are delivered.

4) I suggest supporting the wacky PPC double double with:

a) a special type, like __doubledouble, that is NOT real, ireal, or
creal.

b) real should map to C's double, but still be a distinct type to the
compiler

c) if doing __doubledouble is a pain as an inbuilt type, perhaps do it
as a struct and a library type?
Mar 07 2007
Anders F Björklund <afb algonet.se> writes:
Walter Bright wrote:

 I don't really understand the issues with the PPC 'double double', but:
 
 1) real.sizeof is 10 on Win32, and 12 on Linux. This caused some (now 
 fixed) compiler bugs, but shouldn't have affected any user code.
And real.sizeof is now 16 on Linux for X86_64 (even *more* padding)
 2) D floating point arithmetic is supposed to be IEEE 754. Some wacky 
 real type for the PPC is going to cause grief and break code in 
 irritating and unforeseeable ways.
So basically we need to change PPC use of "long double" to "double"
 3) The D implementation is allowed to evaluate floating point ops in 
 higher precision than that specified by the type. FP algorithms should 
 be coded to not break if more accurate answers are delivered.
But those higher evaluations should still be IEEE standard compatible.
 4) I suggest supporting the wacky PPC double double with:
 a) a special type, like __doubledouble, that is NOT real, ireal, or creal.
 b) real should map to C's double, but still be a distinct type to the 
 compiler
 c) if doing __doubledouble is a pain as an inbuilt type, perhaps do it 
 as a struct and a library type?
That should pretty much settle it, then. GDC's "real" needs changing,
so that it uses C "double" on PPC even if "long double" is available.

(no idea how much work a __doubledouble or such D type would be to do,
but it is probably needed for interop with C "long double" routines...)

I think we can use the -mlong-double-64 option in the interim, minus
the C/C++ compatibility part (but not much uses "long double" on PPC)

--anders
Mar 07 2007
David Friedman <dvdfrdmn users.ess-eff.net> writes:
Walter Bright wrote:
 Don Clugston wrote:
 
 There's just no way D real can be equal to C long double in general, 
 when the C long double is playing silly games. The only reason gcc can 
 even do that, is that the C spec for floating point is ridiculously 
 vague, and consequently no-one actually uses long double. To mimic it 
 is to immediately break D's support for IEEE.

 Seriously, the viability of D as a numeric computing platform is at 
 stake here.
I don't really understand the issues with the PPC 'double double', but: 1) real.sizeof is 10 on Win32, and 12 on Linux. This caused some (now fixed) compiler bugs, but shouldn't have affected any user code. 2) D floating point arithmetic is supposed to be IEEE 754. Some wacky real type for the PPC is going to cause grief and break code in irritating and unforeseeable ways.
Alright. Given the strict requirement for IEEE, D's real cannot be implemented as double+double.
 3) The D implementation is allowed to evaluate floating point ops in 
 higher precision than that specified by the type. FP algorithms should 
 be coded to not break if more accurate answers are delivered.
 
 4) I suggest supporting the wacky PPC double double with:
 a) a special type, like __doubledouble, that is NOT real, ireal, or creal.
 b) real should map to C's double, but still be a distinct type to the 
 compiler
 c) if doing __doubledouble is a pain as an inbuilt type, perhaps do it 
 as a struct and a library type?
If I do implement it as a primitive type, how do you suggest I mangle it and the imaginary/complex variants? Should there be an 'implementation specific extension' mangle character? David
Mar 07 2007
Walter Bright <newshound digitalmars.com> writes:
David Friedman wrote:
 If I do implement it as a primitive type, how do you suggest I mangle it 
  and the imaginary/complex variants?  Should there be an 'implementation 
 specific extension' mangle character?
How about giving it a leading _ ?
Mar 07 2007
Lionello Lunesu <lio lunesu.remove.com> writes:
David Friedman wrote:
 If I do implement it as a primitive type, how do you suggest I mangle it 
  and the imaginary/complex variants?  Should there be an 'implementation 
 specific extension' mangle character?
 
 David
I'm not following this discussion 100%, but if "double double" is only needed for interfacing with C, then why would you need the imaginary/complex variants? L.
Mar 08 2007
Frits van Bommel <fvbommel REMwOVExCAPSs.nl> writes:
Lionello Lunesu wrote:
 David Friedman wrote:
 If I do implement it as a primitive type, how do you suggest I mangle 
 it  and the imaginary/complex variants?  Should there be an 
 'implementation specific extension' mangle character?
I'm not following this discussion 100%, but if "double double" is only needed for interfacing with C, then why would you need the imaginary/complex variants?
Google for "_Complex _Imaginary C99" (without the quotes). C99 added some keywords to support complex & imaginary types.
Mar 08 2007
Anders F Björklund <afb algonet.se> writes:
Don Clugston wrote:

 Older versions of PPC operating systems used 64-bit for "long double",
 newer versions use 128-bit. Both are still in use, so we won't know.
Ugh. That's really horrible.
It's called progress :-)
I was referring to the entire section, not just that line.
Seriously, there was a lot of effort put into supporting 128-bit long
doubles - but I guess it was done with less emphasis on it being IEEE
correct and handling exceptions than what D demands ?

The D spec should probably mention whether it absolutely requires the
type to conform to the IEEE floating-point standard or not...
i.e. whether it should just map over to C/C++ "long double" or not.
 I'll let David decide which one wins: D real === C long double, or
 the definition of "largest hardware implemented floating point size"
There's just no way D real can be equal to C long double in general, when the C long double is playing silly games. The only reason gcc can even do that, is that the C spec for floating point is ridiculously vague, and consequently no-one actually uses long double. To mimic it is to immediately break D's support for IEEE.
We can equate D "real" with C "long double", and just avoid using it on those platforms where it doesn't match the hardware size ? But then "real" would be a bad name for it, that I agree with...
 Seriously, the viability of D as a numeric computing platform is at 
 stake here.
Only on the PowerPC platform, though. Maybe on SPARC too, not sure.
(it might have real 128-bit ?) But not on the DMD platform: Intel,
there it will have full 80-bit support (even if not too portable).

--anders
Mar 07 2007
Daniel Keep <daniel.keep.lists gmail.com> writes:
Anders F Björklund wrote:
 ...
 
 Seriously, the viability of D as a numeric computing platform is at
 stake here.
Only on the PowerPC platform, though. Maybe on SPARC too, not sure. (it might have real 128-bit ?) But not on the DMD platform: Intel, there it will have full 80-bit support (even if not too portable). --anders
Make that x86...

I read somewhere a few days ago that there IS no 80-bit real type on
x86-64 machines; AMD basically went "80-bit?  Who the hell uses that
junk?  Just use SSE you pack of anachronistic pansies!" and nuked the
x87 in 64-bit code (you can still use it, but ONLY in 32-bit code).

And of course, what does SSE use?  IEEE 32-bit or 64-bit floats.  Ugh.

On the bright side, at least float and double are pretty safe :P

  -- Daniel

-- 
Unlike Knuth, I have neither proven or tried the above; it may not
even make sense.

v2sw5+8Yhw5ln4+5pr6OFPma8u6+7Lw4Tm6+7l6+7D
i28a2Xs3MSr2e4/6+7t4TNSMb6HTOp5en5g6RAHCP
http://hackerkey.com/
Mar 07 2007
kris <foo bar.com> writes:
Daniel Keep wrote:
 
 [snip]

 I read somewhere a few days ago that there IS no 80-bit real type on
 x86-64 machines; AMD basically went "80-bit?  Who the hell uses that
 junk?  Just use SSE you pack of anachronistic pansies!" and nuked the
 x87 in 64-bit code (you can still use it, but ONLY in 32-bit code).
Yep.  Read this:
http://blogs.msdn.com/ericflee/archive/2004/07/02/172206.aspx

It clearly states Win64 intended to eschew x87 register-saves (during
a context switch), but may have had a change of heart?

AMD have been pretty clear about x86-64, in 64bit mode, having *no*
support for x87 -- refer to page 20 of this:
http://www.amd.com/us-en/assets/content_type/DownloadableAssets/dwamd_AMD_GDC_2004_MW.pdf

That pretty much rules out the use of 80bit real on future platforms?
Looks like things have to revert to double after all?
Mar 07 2007
Walter Bright <newshound digitalmars.com> writes:
Daniel Keep wrote:
 
 Anders F Björklund wrote:
 ...

 Seriously, the viability of D as a numeric computing platform is at
 stake here.
Only on the PowerPC platform, though. Maybe on SPARC too, not sure. (it might have real 128-bit ?) But not on the DMD platform: Intel, there it will have full 80-bit support (even if not too portable). --anders
Make that x86... I read somewhere a few days ago that there IS no 80-bit real type on x86-64 machines; AMD basically went "80-bit? Who the hell uses that junk? Just use SSE you pack of anachronistic pansies!" and nuked the x87 in 64-bit code (you can still use it, but ONLY in 32-bit code).
The 80 bit ops are available on the AMD-64 in 64 bit mode.

But Microsoft's original plan for 80 bit reals was to not save them
during a context switch, meaning you couldn't use them. I know a few
people at Microsoft <g>, and made a pitch that the 80 bit regs should
be saved, and they fixed it.
Mar 07 2007
kris <foo bar.com> writes:
Walter Bright wrote:
 Daniel Keep wrote:
 
 Anders F Björklund wrote:

 ...

 Seriously, the viability of D as a numeric computing platform is at
 stake here.
Only on the PowerPC platform, though. Maybe on SPARC too, not sure. (it might have real 128-bit ?) But not on the DMD platform: Intel, there it will have full 80-bit support (even if not too portable). --anders
Make that x86... I read somewhere a few days ago that there IS no 80-bit real type on x86-64 machines; AMD basically went "80-bit? Who the hell uses that junk? Just use SSE you pack of anachronistic pansies!" and nuked the x87 in 64-bit code (you can still use it, but ONLY in 32-bit code).
The 80 bit ops are available on the AMD-64 in 64 bit mode. But Microsoft's original plan for 80 bit reals was to not save them during a context switch, meaning you couldn't use them. I know a few people at Microsoft <g>, and made a pitch that the 80 bit regs should be saved, and they fixed it.
Looks like AMD adjusted their perspective and presentations at some
point also. Page 211 of this document clarifies the current situation:
http://www.amd.com/us-en/assets/content_type/white_papers_and_tech_docs/26569.pdf

However; x87 is referred to as "legacy" support. I guess we should
hope that native 128bit FP will be supported by the time that legacy
support is dropped?
Mar 07 2007
Don Clugston <dac nospam.com.au> writes:
Anders F Björklund wrote:
 Don Clugston wrote:
 
 Older versions of PPC operating systems used 64-bit for "long double",
 newer versions use 128-bit. Both are still in use, so we won't know.
Ugh. That's really horrible.
It's called progress :-)
I was referring to the entire section, not just that line.
Seriously, there was a lot of effort put into supporting 128-bit long doubles - but I guess it was done with less emphasis on it being IEEE correct and handling exceptions then what D demands ? The D spec should probably mention whether it absolutely requires the type to conform to the IEEE floating-point standard or not... i.e. whether it should just map over to C/C++ "long double" or not.
 I'll let David decide which one wins: D real === C long double, or
 the definition of "largest hardware implemented floating point size"
There's just no way D real can be equal to C long double in general, when the C long double is playing silly games. The only reason gcc can even do that, is that the C spec for floating point is ridiculously vague, and consequently no-one actually uses long double. To mimic it is to immediately break D's support for IEEE.
We can equate D "real" with C "long double", and just avoid using it on those platforms where it doesn't match the hardware size ? But then "real" would be a bad name for it, that I agree with...
 Seriously, the viability of D as a numeric computing platform is at 
 stake here.
Only on the PowerPC platform, though. Maybe on SPARC too, not sure. (it might have real 128-bit ?)
SPARC has true 128-bit IEEE reals, but they're not actually
implemented (!). Actually the Linux 16-byte 80-bit reals are binary
compatible with 128-bit IEEE reals (just that the final 128-80 bits
are always set to zero).

 But not on the DMD platform: Intel,
 there it will have full 80-bit support (even if not too portable).

My perspective is that I've been writing most of the Tango math
library, trying to make it portable -- but it's just infeasible when
this double+double type pops up. It just has almost nothing in common
with the hardware real types!
 
 --anders
Mar 07 2007
Don Clugston <dac nospam.com.au> writes:
David Friedman wrote:
 [snip]
The double+double type has caused me no end of trouble, but I think it is important to maintain interoperability with C.
Agreed, but we need to be able to do it without wrecking interoperability with D!
  If I make the D 
 'real' implementation IEEE double, there would be no way interact with C 
 code that uses 'long double'.  I could add another floating point type 
 for this purpose, but that would diverge from the D spec more than what 
 I have now.
I disagree -- superficially, a new type looks like a bigger
divergence, but when it actually comes to writing FP code, it's much
less of a divergence, because it wouldn't mess with the semantics of
existing floating point types.

In fact, since all CPUs are capable of using double+double, I would
like to see it become a regular part of the language at some point --
it's a portable data type.
Mar 07 2007
David Friedman <dvdfrdmn users.ess-eff.net> writes:
Anders F Björklund wrote:
 David Friedman wrote:
 
 GDC now supports 64-bit targets! A new x86_64 Linux binary is
 available and the MacOS X binary supports x86_64 and ppc64.
Excellent news! I'll try it on ppc64 Linux too (Fedora Core) Question: It says "The MacOS X universal binary package requires XCode 2.4.1 or the equivalent version of cctools." But the GCC version (5363) is from Xcode 2.4 (not 2.4.1) ? (as far as I know Apple hasn't released gcc-5367 sources) --anders
Xcode 2.4 is probably enough, but I built with 2.4.1 so that is what I listed. David
Mar 06 2007
Anders F Björklund <afb algonet.se> writes:
David Friedman wrote:

 Question: It says "The MacOS X universal binary package
 requires XCode 2.4.1 or the equivalent version of cctools."
 But the GCC version (5363) is from Xcode 2.4 (not 2.4.1) ?
 (as far as I know Apple hasn't released gcc-5367 sources)
Xcode 2.4 is probably enough, but I built with 2.4.1 so that is what I listed.
Okay, just a slight misunderstanding - I thought you meant what source
code to use, but you were talking about which tool to use. All cool
then, using Xcode 2.4.1 tools and Xcode 2.4 sources. (downloaded from
http://www.opensource.apple.com/darwinsource/)

Not that I'm rebuilding it anymore anyway, I was just curious...
(gdcmac/gdcwin/gdcgnu packages now providing the dgcc binaries)

--anders
Mar 06 2007
John Reimer <terminal.node gmail.com> writes:
On Tue, 06 Mar 2007 01:16:07 -0500, David Friedman wrote:

 GDC now supports 64-bit targets! A new x86_64 Linux binary is
 available and the MacOS X binary supports x86_64 and ppc64.
 
 http://sourceforge.net/project/showfiles.php?group_id=154306
 
 Changes:
    * Added support for 64-bit targets
    * Added multilib support
    * Updated to DMD 1.007
    * Fixed Bugzilla 984, 1013
Spectacular! :D
Mar 06 2007
Neal Becker <ndbecker2 gmail.com> writes:
David Friedman wrote:

 GDC now supports 64-bit targets! A new x86_64 Linux binary is
 available and the MacOS X binary supports x86_64 and ppc64.
 
 http://sourceforge.net/project/showfiles.php?group_id=154306
 
 Changes:
    * Added support for 64-bit targets
    * Added multilib support
    * Updated to DMD 1.007
    * Fixed Bugzilla 984, 1013
Thanks!

Any suggested installation procedure on linux?  Extracting the linux
64-bit binary gives:

dmd/ bin/ include/ lib/ lib64/ libexec/ man/ share/

The obvious choice of

mv dmd/bin/* /usr/bin
mv dmd/include/* /usr/include
mv dmd/lib/* /usr/lib      <<< ooops!

That last one will cause conflicts.  So what's the recommended
procedure?
Mar 06 2007
Lionello Lunesu <lio lunesu.remove.com> writes:
David Friedman wrote:
 GDC now supports 64-bit targets! A new x86_64 Linux binary is
 available and the MacOS X binary supports x86_64 and ppc64.
 
 http://sourceforge.net/project/showfiles.php?group_id=154306
 
 Changes:
   * Added support for 64-bit targets
   * Added multilib support
   * Updated to DMD 1.007
   * Fixed Bugzilla 984, 1013
Ah, cool!

I want to try this under Windows x64, but I have no idea how to
install GDC. The installers on sf.net are only "partial", what else do
I need? MSYS? Tried that once, didn't work.

L.
Mar 06 2007
Lionello Lunesu <lio lunesu.remove.com> writes:
Lionello Lunesu wrote:
 David Friedman wrote:
 GDC now supports 64-bit targets! A new x86_64 Linux binary is
 available and the MacOS X binary supports x86_64 and ppc64.

 http://sourceforge.net/project/showfiles.php?group_id=154306

 Changes:
   * Added support for 64-bit targets
   * Added multilib support
   * Updated to DMD 1.007
   * Fixed Bugzilla 984, 1013
Ah, cool! I want to try this under Windows x64, but I have no idea how to install GDC. The installer on sf.net are only "partial", what else do I need? MSYS? Tried that once, didn't work. L.
OK, so it was actually easier to install Ubuntu x64 and GDC. This took me less time than trying to get GDC (even x86) to work under vista/XP. L.
Mar 06 2007
David Friedman <dvdfrdmn users.ess-eff.net> writes:
Lionello Lunesu wrote:
 David Friedman wrote:
 
 GDC now supports 64-bit targets! A new x86_64 Linux binary is
 available and the MacOS X binary supports x86_64 and ppc64.

 http://sourceforge.net/project/showfiles.php?group_id=154306

 Changes:
   * Added support for 64-bit targets
   * Added multilib support
   * Updated to DMD 1.007
   * Fixed Bugzilla 984, 1013
Ah, cool! I want to try this under Windows x64, but I have no idea how to install GDC. The installer on sf.net are only "partial", what else do I need? MSYS? Tried that once, didn't work. L.
I don't think there is a 64-bit MinGW target for GCC yet.  GDC on
64-bit Cygwin might work.

David
Mar 06 2007
kenny <funisher gmail.com> writes:
David Friedman wrote:
 GDC now supports 64-bit targets! A new x86_64 Linux binary is
 available and the MacOS X binary supports x86_64 and ppc64.
 
 http://sourceforge.net/project/showfiles.php?group_id=154306
 
 Changes:
   * Added support for 64-bit targets
   * Added multilib support
   * Updated to DMD 1.007
   * Fixed Bugzilla 984, 1013
super awesome! thanks david!

The annoying thing about this for me is as soon as I see this
announcement, I want to write layman -S and get the latest ebuild of
gdc. It seems like Anders and I are the ones who generally update
bugzilla with the latest gdc release (coming soon, btw).

Who would I email to get commit access to add gdc to the dlang overlay
for layman? I have no problems updating these as soon as the new
versions of dmd/gdc come out either -- as I notice layman is usually a
few days behind.

kenny
Mar 06 2007
Carlos Santander <csantander619 gmail.com> writes:
David Friedman wrote:
 GDC now supports 64-bit targets! A new x86_64 Linux binary is
 available and the MacOS X binary supports x86_64 and ppc64.
 
 http://sourceforge.net/project/showfiles.php?group_id=154306
 
 Changes:
   * Added support for 64-bit targets
   * Added multilib support
   * Updated to DMD 1.007
   * Fixed Bugzilla 984, 1013
Great! Thanks! -- Carlos Santander Bernal
Mar 06 2007
Sean Kelly <sean f4.ca> writes:
David Friedman wrote:
 GDC now supports 64-bit targets! A new x86_64 Linux binary is
 available and the MacOS X binary supports x86_64 and ppc64.
 
 http://sourceforge.net/project/showfiles.php?group_id=154306
 
 Changes:
   * Added support for 64-bit targets
   * Added multilib support
   * Updated to DMD 1.007
   * Fixed Bugzilla 984, 1013
Great work! Sean
Mar 06 2007
Krzysztof Szukiełojć <krzysztof.szukielojc gmail.com> writes:
David Friedman wrote:

 GDC now supports 64-bit targets! A new x86_64 Linux binary is
 available and the MacOS X binary supports x86_64 and ppc64.
 
 http://sourceforge.net/project/showfiles.php?group_id=154306
 
 Changes:
    * Added support for 64-bit targets
    * Added multilib support
    * Updated to DMD 1.007
    * Fixed Bugzilla 984, 1013
hurray! Now I can finally get all I wanted (muaha). :D
Mar 06 2007
John Reimer <terminal.node gmail.com> writes:
On Tue, 06 Mar 2007 01:16:07 -0500, David Friedman wrote:

 GDC now supports 64-bit targets! A new x86_64 Linux binary is
 available and the MacOS X binary supports x86_64 and ppc64.
 
 http://sourceforge.net/project/showfiles.php?group_id=154306
 
 Changes:
    * Added support for 64-bit targets
    * Added multilib support
    * Updated to DMD 1.007
    * Fixed Bugzilla 984, 1013
I just realized that gdc still hasn't arrived at 1.0 yet even though
the stated criteria for a gdc 1.0 was 64-bit support. :)

I guess gdc will remain pre-1.0 for awhile to see if any 64-bit bugs
surface?

-JJR
Mar 06 2007
Brad Roberts <braddr puremagic.com> writes:
John Reimer wrote:
 On Tue, 06 Mar 2007 01:16:07 -0500, David Friedman wrote:
 
 GDC now supports 64-bit targets! A new x86_64 Linux binary is
 available and the MacOS X binary supports x86_64 and ppc64.

 http://sourceforge.net/project/showfiles.php?group_id=154306

 Changes:
    * Added support for 64-bit targets
    * Added multilib support
    * Updated to DMD 1.007
    * Fixed Bugzilla 984, 1013
I just realized that gdc still hasn't arrived at 1.0 yet even though the stated criteria for a gdc 1.0 was 64-bit support. :) I guess gdc will remain pre-1.0 for awhile to see if any 64-bit bugs surface? -JJR
This has come up before. :)

1.0 must have 64 bit support, but 64 bit support doesn't imply 1.0.

You'd have to dig into the archives to find the post from David to see
if it clarified what other requirements he had in mind before a 1.0
release.

Later,
Brad
Mar 06 2007
Frits van Bommel <fvbommel REMwOVExCAPSs.nl> writes:
John Reimer wrote:
 On Tue, 06 Mar 2007 01:16:07 -0500, David Friedman wrote:
 
 GDC now supports 64-bit targets! A new x86_64 Linux binary is
 available and the MacOS X binary supports x86_64 and ppc64.
[snip]
 
 I just realized that gdc still hasn't arrived at 1.0 yet even though the
 stated criteria for a gdc 1.0 was 64-bit support. :)
It was *a* stated criteria, not *the* stated criteria :P.

In
http://www.digitalmars.com/webnews/newsgroups.php?art_group=D.gnu&article_id=2324
David stated:
---
I still want 64-bit and workable cross-compilation for a 1.00 release.
---

So I guess the next question would be "What's the status of
cross-compilation?" :).
 I guess gdc will remain pre-1.0 for awhile to see if any 64-bit bugs
 surface?
Always a good idea. No need to rush that sort of stuff.
Mar 07 2007
John Reimer <terminal.node gmail.com> writes:
On Wed, 07 Mar 2007 09:01:18 +0100, Frits van Bommel wrote:

 John Reimer wrote:
 On Tue, 06 Mar 2007 01:16:07 -0500, David Friedman wrote:
 
 GDC now supports 64-bit targets! A new x86_64 Linux binary is
 available and the MacOS X binary supports x86_64 and ppc64.
[snip]
 
 I just realized that gdc still hasn't arrived at 1.0 yet even though the
 stated criteria for a gdc 1.0 was 64-bit support. :)
It was *a* stated criteria, not *the* stated criteria :P.

In
http://www.digitalmars.com/webnews/newsgroups.php?art_group=D.gnu&article_id=2324
David stated:
---
I still want 64-bit and workable cross-compilation for a 1.00 release.
---

So I guess the next question would be "What's the status of
cross-compilation?" :).
Ah, I did miss the second criteria. Oops. :) -JJR
Mar 07 2007
Frits van Bommel <fvbommel REMwOVExCAPSs.nl> writes:
David Friedman wrote:
 GDC now supports 64-bit targets! A new x86_64 Linux binary is
 available and the MacOS X binary supports x86_64 and ppc64.
 
 http://sourceforge.net/project/showfiles.php?group_id=154306
 
 Changes:
   * Added support for 64-bit targets
   * Added multilib support
   * Updated to DMD 1.007
   * Fixed Bugzilla 984, 1013
While everyone is talking about 64-bit support, I haven't seen anyone
make any mention of multilib support. So I thought I'd ask: what does
that mean?

Great work, by the way.
Mar 07 2007
David Friedman <dvdfrdmn users.ess-eff.net> writes:
Frits van Bommel wrote:
 David Friedman wrote:
 GDC now supports 64-bit targets! A new x86_64 Linux binary is
 available and the MacOS X binary supports x86_64 and ppc64.

 http://sourceforge.net/project/showfiles.php?group_id=154306

 Changes:
   * Added support for 64-bit targets
   * Added multilib support
   * Updated to DMD 1.007
   * Fixed Bugzilla 984, 1013
While everyone is talking about 64-bit support, I haven't seen anyone make an mention of multilib support. So I thought I'd ask: what does that mean? Great work, by the way.
Multilib refers to multiple architecture variants in a single GCC deployment. Often this is 32/64-bit, but it would have been an issue before 0.23 for targets like ARM which have ARM and Thumb code generation.
Mar 07 2007