
digitalmars.D - 80 Bit Challenge

reply "Bob W" <nospam aol.com> writes:
Thread "Exotic floor() function - D is different" went
into general discussion about the internal FP format.
So I have moved this over to a new thread:


"Walter" <newshound digitalmars.com> wrote in message 
news:d2l1ds$2479$1 digitaldaemon.com...
 ...... The x86 FPU *wants* to evaluate things to 80 bits.

 The D compiler's internal paths fully support 80 bit arithmetic, that means
 there are no surprising "choke points" where it gets truncated to 64 bits.
 If the type of a literal is specified to be 'double', which is the case for
 no suffix, then you get 64 bits of precision. I hope you'll agree that that
 is the least surprising thing to do.
I would have agreed a couple of days ago. But after carefully thinking it over, I've come to the following conclusions:

- It was a good move to open up (almost) everything in D to handle the 80 bit FP format (real format).

- I'm also with you when you have mentioned the following: "... intermediate values generated are allowed to be evaluated to the largest precision available...."

- But - they are by default not allowed to accept the largest precision available, because the default format for literals w/o suffix is double.

- In my opinion there is no single reason why literals w/o suffix have to be treated as doubles (except maybe for C legacy).

- It has never harmed floats to be fed with doubles, consequently it will not harm doubles to accept reals. The FPU gladly will take care of this.

- Forget C for a moment: Doesn't it look strange parsing unsuffixed literals as doubles, converting them and evaluating them internally to reals (FPU) and eventually passing the precision-impaired results to a real? (See the sketch below.)

- This is why I'd like to see default (unsuffixed) literals to be parsed and evaluated in "the highest precision available" (whatever this will be in future, real for now).

- Since everything else is prepared for 80 bits in D, casting and/or a double suffix would be the logical way in the rare cases when double generation has to be enforced.

- Experience shows that there will be a loss of precision in the final result whenever double values are converted to real and evaluated further. But this is of no concern when reals are truncated to doubles.

Finally I'd like to say that although I am convinced that the above would be worthwhile to implement, I am actually more concerned about the 32 bit integers. The 64 bit CPUs are coming and they'll change our way of thinking just the way the 32 bit engines have done. Internal int format 32 bits? Suffixes for 64 bit int's? For now it is maybe still a yes, in the not so distant future maybe not. I just hope D can cope and will still be "young of age" when this happens.

I like the slogan "D fully supports 80 bit reals", but a marketing guy would probably suggest to change this to "D fully supports 64 bit CPUs".
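A minimal D sketch of the precision point referred to above (variable names are made up; this assumes an x86 target where real is the 80-bit x87 format and unsuffixed literals are doubles, as described in the thread):

    void main()
    {
        real viaDouble = 0.3;   // 0.3 is parsed as a double first, then widened to real
        real direct    = 0.3L;  // 0.3L is parsed at full real (80-bit) precision

        // viaDouble carries the double rounding error of 0.3 along with it,
        // so on such a target the two values are not equal.
        assert(viaDouble != direct);
    }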
Apr 02 2005
next sibling parent reply "Ben Hinkle" <ben.hinkle gmail.com> writes:
 - In my opinion there is no single reason why literals
  w/o suffix have to be treated as doubles (except maybe
  for C legacy).
 - This is why I'd like to see default (unsuffixed) literals
  to be parsed and evaluated in "the highest precision
  available" (whatever this will be in future, real for now).
and human legacy. Personally I'm used to .3 being a double. If I had three overloaded functions func(float), func(double) and func(real) and I wrote func(.3) I'd be surprised it chose the real one, just because I'm used to literals being doubles.

But that's my only complaint about your proposal. Since D doesn't have to worry about legacy code we can make .3 parse as whatever we want technically.
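A small D sketch of the overload situation being described (the func overloads are hypothetical; this assumes the current rule that an unsuffixed literal like .3 is a double):

    void func(float x)  { /* ... */ }
    void func(double x) { /* ... */ }
    void func(real x)   { /* ... */ }

    void main()
    {
        func(.3);   // .3 is a double today, so func(double) is the exact match;
                    // if unsuffixed literals became real, func(real) would be chosen instead
        func(.3f);  // the f suffix selects func(float)
        func(.3L);  // and the L suffix selects func(real)
    }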
 The 64 bit CPUs are coming and they'll change our way
 of thinking just the way the 32 bit engines have done.
 Internal int format 32 bits? Suffixes for 64 bit int's?
 For now it is maybe still a yes, in the not so distant
 future maybe not. I just hope D can cope and will still
 be "young of age" when this happens.
I'm sure people would get thrown for a loop if given a choice between func(int) and func(long) the code func(1) called func(long). Even on a 64 bit platform. If one really didn't care which was chosen then

   import std.stdint;
   ...
   func(cast(int_fast32_t)1);

would be a platform-independent way of choosing the "natural" size for 1 (assuming "fast32" would be 64 bits on 64 bit platforms). And more explicit, too.
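A self-contained sketch of that alternative (assuming Phobos' std.stdint provides the int_fast32_t alias; which overload the cast ends up selecting depends on what the alias is on a given platform):

    import std.stdint;

    void func(int x)  { /* ... */ }
    void func(long x) { /* ... */ }

    void main()
    {
        func(1);                      // 1 is an int literal, so func(int) is called
        func(cast(int_fast32_t) 1);   // resolves to whichever type int_fast32_t aliases here
    }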
Apr 02 2005
next sibling parent reply "Walter" <newshound digitalmars.com> writes:
"Ben Hinkle" <ben.hinkle gmail.com> wrote in message
news:d2m8c1$9jj$1 digitaldaemon.com...
 - In my opinion there is no single reason why literals
  w/o suffix have to be treated as doubles (except maybe
  for C legacy).
 - This is why I'd like to see default (unsuffixed) literals
  to be parsed and evaluated in "the highest precision
  available" (whatever this will be in future, real for now).
and human legacy. Personally I'm used to .3 being a double. If I had three overloaded function func(float), func(double) and func(real) and I wrote func(.3) I'd be surprised it chose the real one just because I'm used to literals being doubles. But that's my only complaint about your proposal. Since D doesn't have to worry about legacy code we can make .3 parse as whatever we want technically.
I've been thinking about this. The real issue is not the precision of .3, but its type. Suppose it was kept internally with full 80 bit precision, participated in constant folding as a full 80 bit type, and was only converted to 64 bits when a double literal needed to be actually inserted into the .obj file? This would tend to mimic the runtime behavior of intermediate value evaluation. It will be numerically superior.

An explicit cast would still be honored: cast(double).3 will actually truncate the bits in the internal representation.
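A sketch of what that would mean for a few declarations (the comments describe the proposed behaviour, not what the compiler does at the time of writing):

    void main()
    {
        real   r = 1.0 / 3.0;                // folded at 80 bits, r would keep the full result
        double d = 1.0 / 3.0;                // folded at 80 bits, rounded once when stored as double
        real   t = cast(double)(1.0 / 3.0);  // an explicit cast would still truncate to 64 bits first
    }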
 The 64 bit CPUs are coming and they'll change our way
 of thinking just the way the 32 bit engines have done.
 Internal int format 32 bits? Suffixes for 64 bit int's?
 For now it is maybe still a yes, in the not so distant
 future maybe not. I just hope D can cope and will still
 be "young of age" when this happens.
I'm sure people would get thrown for a loop if given a choice between func(int) and func(long) the code func(1) called func(long). Even on a 64 bit platform.
I agree. The 'fuzzy' nature of C's int size has caused endless grief, porting bugs, and misguided coding styles over the last 20 years. Portability is significantly enhanced by giving a reliable, predictable size to it.
 If one really didn't care which was chosen then
   import std.stdint;
   ...
   func(cast(int_fast32_t)1);
 would be a platform-independent way of choosing the "natural" size for 1
 (assuming "fast32" would be 64 bits on 64 bit platforms). And more
explicit,
 too.
Interestingly, current C compilers for AMD and Intel's 64 bit CPUs still put int's at 32 bits. I think I was proven right <g>.
Apr 02 2005
Anders F Björklund <afb algonet.se> writes:
Walter wrote:

 I agree. The 'fuzzy' nature of C's int size has caused endless grief,
 porting bugs, and misguided coding styles over the last 20 years.
 Portability is significantly enhanced by giving a reliable, predictable size
 to it.
Which is why it's so strange to have a "real" FP type built-in to D, that does not have a fixed size but is instead highly CPU dependent ?

My suggestion was to name the 80-bit (or more: 128) type "extended", and to make "real" into an alias - just like for instance size_t is ? As a side effect, it would also fix the "ireal" and "creal" types... (as it would mean that the "extended" types only exist on X86 CPUs)

But currently, a "real" *could* be exactly the same as a "double"... (i.e. how it works in DMD 0.11x and GDC 0.10, using C's long double)

1) Revert the type names back to the old ones:
     * real -> extended
     * ireal -> iextended
     * creal -> cextended
2)

    // GCC: LONG_DOUBLE_TYPE_SIZE
    version (GNU_BitsPerReal80) // DMD: "all"
    {
        alias extended real;
        alias iextended imaginary;
        alias cextended complex;
    }
    else version (GNU_BitsPerReal64) // DMD: "none"
    {
        alias double real;
        alias idouble imaginary;
        alias cdouble complex;
    }
    else static assert(0);

3) for reference, these already exist:

    // GCC: POINTER_SIZE
    version (GNU_BitsPerPointer64) // DMD: "AMD64"
    {
        alias ulong size_t;
        alias long ptrdiff_t;
    }
    else version (GNU_BitsPerPointer32) // DMD: "X86"
    {
        alias uint size_t;
        alias int ptrdiff_t;
    }
    else static assert(0);

And we would still know how to say "80-bit extended precision" ? (and avoid the "imaginary real" and "complex real" embarrassment)

--anders
Apr 02 2005
parent reply "Ben Hinkle" <ben.hinkle gmail.com> writes:
"Anders F Björklund" <afb algonet.se> wrote in message 
news:d2mt6p$sq6$1 digitaldaemon.com...
 Walter wrote:

 I agree. The 'fuzzy' nature of C's int size has caused endless grief,
 porting bugs, and misguided coding styles over the last 20 years.
 Portability is significantly enhanced by giving a reliable, predictable 
 size
 to it.
Which is why it's so strange to have a "real" FP type built-in to D, that does not have a fixed size but is instead highly CPU dependant ?
Specifying the floating point numeric model for portability is what Java did and it ended in disaster - it hosed performance on Intel chips. Allowing for more is the only sane thing to do (or SANE for those who remember the old Apple API and the 68881 96 bit extended type - ah that precision rocked!). Of course if you actually have to worry about a few bits of precision my own personal philosophy is to use GMP and double the precision until roundoff is no longer even close to a problem. For those who haven't been following at home I'll plug my D wrapper for GMP (the GNU multi-precision library): http://home.comcast.net/~benhinkle/gmp-d/
Apr 02 2005
Anders F Björklund <afb algonet.se> writes:
Ben Hinkle wrote:

Which is why it's so strange to have a "real" FP type built-in to D,
that does not have a fixed size but is instead highly CPU dependant ?
Specifying the floating point numeric model for portability is what Java did and it ended in disaster - it hosed performance on Intel chips. Allowing for more is the only sane thing to do (or SANE for those who remember the old Apple API and the 68881 96 bit extended type - ah that precision rocked!).
You misunderstood. I think that having an 80-bit floating point type is a *good* thing. I just think it should be *fixed* at 80-bit, and not be 64-bit on some platforms and 80-bit on some platforms ? And rename it... (again, "real" is not the name problem here - "ireal" and "creal" are)

I still think the main reason why Java does not have 80-bit floating point is that the SPARC chip doesn't have it, so Sun didn't bother ? :-) And the PowerPC 128-bit "long double" is not fully IEEE-compliant... (and for portability, it would be nice if D's extended.sizeof was 16 ?)

--anders

PS. GCC says:
 -m96bit-long-double, -m128bit-long-double
 
 These switches control the size of long double type. The i386
 application binary interface specifies the size to be 96 bits, so
 -m96bit-long-double is the default in 32 bit mode.
 
 Modern architectures (Pentium and newer) would prefer long double to be
 aligned to an 8 or 16 byte boundary. In arrays or structures conforming
 to the ABI, this would not be possible. So specifying a
 -m128bit-long-double will align long double to a 16 byte boundary by
 padding the long double with an additional 32 bit zero.
 
 In the x86-64 compiler, -m128bit-long-double is the default choice as
 its ABI specifies that long double is to be aligned on 16 byte boundary.
 
 Notice that neither of these options enable any extra precision over the
 x87 standard of 80 bits for a long double.
http://gcc.gnu.org/onlinedocs/gcc-3.4.3/gcc/i386-and-x86_002d64-Options.html

Note that D uses an "80bit-long-double" by default (i.e. REALSIZE is 10)
Apr 02 2005
next sibling parent reply "Walter" <newshound digitalmars.com> writes:
"Anders F Björklund" <afb algonet.se> wrote in message
news:d2o7iv$21pg$1 digitaldaemon.com...
 You misunderstood. I think that having an 80-bit floating point type is
 a *good* thing. I just think it should be *fixed* at 80-bit, and not be
 64-bit on some platforms and 80-bit on some platforms ? And rename it...
Unfortunately, that just isn't practical. In order to implement D efficiently, the floating point size must map onto what the native hardware supports. We can get away with specifying the size of ints, longs, floats, and doubles, but not of the extended floating point type.
Apr 03 2005
Anders F Björklund <afb algonet.se> writes:
Walter wrote:

You misunderstood. I think that having an 80-bit floating point type is
a *good* thing. I just think it should be *fixed* at 80-bit, and not be
64-bit on some platforms and 80-bit on some platforms ? And rename it...
Unfortunately, that just isn't practical. In order to implement D efficiently, the floating point size must map onto what the native hardware supports. We can get away with specifying the size of ints, longs, floats, and doubles, but not of the extended floating point type.
I understand this, my "solution" there was to use an alias instead... e.g. "real" would map to 80-bit on X86, and to 64-bit on PPC (in reality, it does this already in GDC. Just implicitly) --anders
Apr 03 2005
parent reply "Walter" <newshound digitalmars.com> writes:
"Anders F Björklund" <afb algonet.se> wrote in message
news:d2obfj$24ee$1 digitaldaemon.com...
 Walter wrote:

You misunderstood. I think that having an 80-bit floating point type is
a *good* thing. I just think it should be *fixed* at 80-bit, and not be
64-bit on some platforms and 80-bit on some platforms ? And rename it...
Unfortunately, that just isn't practical. In order to implement D efficiently, the floating point size must map onto what the native hardware supports. We can get away with specifying the size of ints, longs, floats, and doubles, but not of the extended floating point type.
I understand this, my "solution" there was to use an alias instead... e.g. "real" would map to 80-bit on X86, and to 64-bit on PPC (in reality, it does this already in GDC. Just implicitly)
Why does GDC do this, since gcc on linux supports 80 bit long doubles?
Apr 03 2005
Anders F Björklund <afb algonet.se> writes:
Walter wrote:

I understand this, my "solution" there was to use an alias instead...

e.g. "real" would map to 80-bit on X86, and to 64-bit on PPC
      (in reality, it does this already in GDC. Just implicitly)
Why does GDC do this, since gcc on linux supports 80 bit long doubles?
Just me being vague again... Make that "GDC on PowerPC" (and probably other CPU families too, like SPARC or so)

GDC just falls back on what GCC reports for long double support, so you can compile with AIX long-double-128 if you like (unfortunately those are pretty darn buggy in GCC, and not IEEE-754 compliant even in IBM AIX either)

But for GDC with X87 hardware, everything should be normal.

--anders
Apr 03 2005
parent "Walter" <newshound digitalmars.com> writes:
"Anders F Björklund" <afb algonet.se> wrote in message
news:d2pbds$2v38$1 digitaldaemon.com...
 GDC just falls back on what GCC reports for long double
 support, so you can compile with AIX long-double-128 if
 you like (unfortunately those are pretty darn buggy in
 GCC, and not IEEE-754 compliant even in IBM AIX either)
That's what I expected it to do. Thanks for clearing that up.
Apr 03 2005
prev sibling parent reply "Ben Hinkle" <ben.hinkle gmail.com> writes:
"Anders F Björklund" <afb algonet.se> wrote in message 
news:d2o7iv$21pg$1 digitaldaemon.com...
 Ben Hinkle wrote:

Which is why it's so strange to have a "real" FP type built-in to D,
that does not have a fixed size but is instead highly CPU dependant ?
Specifying the floating point numeric model for portability is what Java did and it ended in disaster - it hosed performance on Intel chips. Allowing for more is the only sane thing to do (or SANE for those who remember the old Apple API and the 68881 96 bit extended type - ah that precision rocked!).
You misunderstood. I think that having an 80-bit floating point type is a *good* thing. I just think it should be *fixed* at 80-bit, and not be 64-bit on some platforms and 80-bit on some platforms ?
Let me rephrase my point. Fixing the precision on a platform that doesn't support that precision is a bad idea. What if the hardware supports 96 bit extended precision and D fixes the value at 80? We should just do what the hardware will support since it varies so much between platforms.
And rename it...
 (again, "real" is not the name problem here - "ireal" and "creal" are)
That's another thread :-)
 I still think the main reason why Java does not have 80-bit floating point 
 is that the SPARC chip doesn't have it, so Sun didn't bother ? :-)
 And the PowerPC 128-bit "long double" is not fully IEEE-compliant...
could be.
 (and for portability, it would be nice if D's extended.sizeof was 16 ?)

 --anders

 PS. GCC says:

 -m96bit-long-double, -m128bit-long-double

 These switches control the size of long double type. The i386
 application binary interface specifies the size to be 96 bits, so
 -m96bit-long-double is the default in 32 bit mode.

 Modern architectures (Pentium and newer) would prefer long double to be
 aligned to an 8 or 16 byte boundary. In arrays or structures conforming
 to the ABI, this would not be possible. So specifying a
 -m128bit-long-double will align long double to a 16 byte boundary by
 padding the long double with an additional 32 bit zero.

 In the x86-64 compiler, -m128bit-long-double is the default choice as
 its ABI specifies that long double is to be aligned on 16 byte boundary.

 Notice that neither of these options enable any extra precision over the
 x87 standard of 80 bits for a long double.
http://gcc.gnu.org/onlinedocs/gcc-3.4.3/gcc/i386-and-x86_002d64-Options.html Note that D uses a "80bit-long-double", by default (i.e. REALSIZE is 10)
This section looks like a quote of a previous post since it is indented using > but I think you are quoting another source. I've noticed you use > to quote replies, which is the character I use, too. Please use > only for quoting replies since it is misleading to indent other content using >. It looks like you are putting words into other people's posts.
Apr 03 2005
Anders F Björklund <afb algonet.se> writes:
Ben Hinkle wrote:

 Let me rephrase my point. Fixing the precision on a platform that doesn't 
 support that precision is a bad idea. What if the hardware supports 96 bit 
 extended precision and D fixes the value at 80? We should just do what the 
 hardware will support since it varies so much between platforms.
All floating point precision in D is minimum. So 96 bit would be fine, just as 128 bit would also be just fine. Again, making them all use 128 bits for storage would simplify things - even if only 80 are used ? But allowing 64 bit too for extended, like now, is somewhat confusing...
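For reference, one can inspect what "real" actually is on a given platform through its built-in properties; a minimal sketch (the commented values are what one would expect on x86 DMD versus a target where real is just double):

    import std.stdio;

    void main()
    {
        writefln("real.sizeof   = ", real.sizeof);    // 10 on x86 DMD, 8 where real == double
        writefln("real.mant_dig = ", real.mant_dig);  // 64 for x87 extended, 53 if real == double
    }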
 This section looks like a quote of a previous post since it is indented 
 using > but I think you are quoting another source. I've noticed you use > 
 to quote replies, which is the character I use, too. Please use > only for 
 quoting replies since it is misleading to indent other content using >. It 
 looks like you are putting words into other people's posts. 
I use the '>' character (actually I just use Quote / Paste as Quotation), but I also quote several sources - with attributions: A wrote:
 foo
B wrote:
 bar
C wrote:
 baz
Sorry if you find this confusing, and I'll try to make it clearer... (if there's no such attribution, it's quoted from the previous post) --anders
Apr 03 2005
parent reply "Ben Hinkle" <ben.hinkle gmail.com> writes:
 I use the '>' character (actually I just use Quote / Paste as Quotation), 
 but I also quote several sources - with attributions:

 A wrote:
 foo
B wrote:
 bar
C wrote:
 baz
Sorry if you find this confusing, and I'll try to make it clearer... (if there's no such attribution, it's quoted from the previous post)
thanks - what newsreader do you use by the way?
Apr 03 2005
Anders F Björklund <afb algonet.se> writes:
Ben Hinkle wrote:

 thanks - what newsreader do you use by the way? 
I am using Thunderbird:

   User-Agent: Mozilla Thunderbird 1.0.2 (Macintosh/20050317)

And I see you use LookOut:

   X-Newsreader: Microsoft Outlook Express 6.00.2900.2180

I can recommend both Thunderbird and Firefox, from Mozilla. http://www.mozilla.org/products/

--anders
Apr 03 2005
Sean Kelly <sean f4.ca> writes:
In article <d2mn51$nju$1 digitaldaemon.com>, Walter says...
I agree. The 'fuzzy' nature of C's int size has caused endless grief,
porting bugs, and misguided coding styles over the last 20 years.
Portability is significantly enhanced by giving a reliable, predictable size
to it.
I agree, though it's worth noting that some few architectures that C has been ported to don't have 8 bit bytes. On such machines, I imagine it may be difficult to conform to standard size requirements.
Interestingly, current C compilers for AMD and Intel's 64 bit CPUs still put
int's at 32 bits. I think I was proven right <g>.
At the very least, I think it's likely that future architectures will be more consistent and the odd byte size problem will likely go away, if it ever really existed in the first place. I find it very useful to have standard size requirements for primitives as it reduces a degree of unpredictability (or the need for preprocessor code) for cross-platform code. Sean
Apr 02 2005
parent "Walter" <newshound digitalmars.com> writes:
"Sean Kelly" <sean f4.ca> wrote in message
news:d2n5ok$14si$1 digitaldaemon.com...
 In article <d2mn51$nju$1 digitaldaemon.com>, Walter says...
I agree. The 'fuzzy' nature of C's int size has caused endless grief,
porting bugs, and misguided coding styles over the last 20 years.
Portability is significantly enhanced by giving a reliable, predictable
size
to it.
 I agree, though it's worth noting that some few architectures that C has been
 ported to don't have 8 bit bytes.  On such machines, I imagine it may be
 difficult to conform to standard size requirements.
I've worked on such a machine, the PDP-10, with 36 bit sized 'ints'. They were beautiful machines for their day, the 1970's, but they went obsolete 25+ years ago. But I don't think anyone is going to make an odd bit size machine anymore - just try running Java on it.
Apr 02 2005
prev sibling parent reply "Bob W" <nospam aol.com> writes:
"Ben Hinkle" <ben.hinkle gmail.com> wrote in message 
news:d2m8c1$9jj$1 digitaldaemon.com...
 - In my opinion there is no single reason why literals
  w/o suffix have to be treated as doubles (except maybe
  for C legacy).
 - This is why I'd like to see default (unsuffixed) literals
  to be parsed and evaluated in "the highest precision
  available" (whatever this will be in future, real for now).
and human legacy. Personally I'm used to .3 being a double. If I had three overloaded function func(float), func(double) and func(real) and I wrote func(.3) I'd be surprised it chose the real one just because I'm used to literals being doubles.
Would you even notice in most cases? The FPU will happily accept your real and do with it whatever it is instructed to do. On the other hand, if your .3 defaults to a double, a rounding error should not surprise you. If I was convinced that overloading is more often found than literals in mainstream (=moderately sophisticated) programs, then I'd give it more of a thought.
 But that's my only complaint about your proposal. Since D doesn't have
 to worry about legacy code we can make .3 parse as whatever we want 
 technically.
Exactly. I'd also be concerned how to explain to someone interested in D, supposedly a much more modern language than C, the following: The compiler offers an 80 bit type, the FPU calculates only in 80 bit format, but default literals are parsed for some illogical reason to double precision values. That would not really impress me.
 The 64 bit CPUs are coming and they'll change our way
 of thinking just the way the 32 bit engines have done.
 Internal int format 32 bits? Suffixes for 64 bit int's?
 For now it is maybe still a yes, in the not so distant
 future maybe not. I just hope D can cope and will still
 be "young of age" when this happens.
I'm sure people would get thrown for a loop if given a choice between func(int) and func(long) the code func(1) called func(long). Even on a 64 bit platform. If one really didn't care which was chosen then

   import std.stdint;
   ...
   func(cast(int_fast32_t)1);

would be a platform-independent way of choosing the "natural" size for 1 (assuming "fast32" would be 64 bits on 64 bit platforms). And more explicit, too.
Not too long from now we'll be averaging 16GB of main memory, 32 bit computers will be gone and I bet the average programmer will not be bothered using anything else than 64 bits for his integer of choice.

I doubt that there are many people left who are still trying to use 16 bit variables for integer calculations, even if they'd fit their requirements. The same thing will happen to 32 bit formats in PC-like equipment, I'm sure. (I am not talking about UTF-32 formats here.)

The main reason is that for the first time ever the integer range will be big enough for almost anything, no overflow at 128 nor 32768 nor 2+ billion. What a relief!
Apr 02 2005
Georg Wrede <georg.wrede nospam.org> writes:
Bob W wrote:
 "Ben Hinkle" <ben.hinkle gmail.com> wrote in message 
But that's my only complaint about your proposal. Since D doesn't have
to worry about legacy code we can make .3 parse as whatever we want 
technically.
Exactly. I'd also be concerned how to explain to someone interested in D, supposedly a much more modern language than C, the following: The compiler offers an 80 bit type, the FPU calculates only in 80 bit format, but default literals are parsed for some illogical reason to double precision values. That would not really impress me.
Well put. It's plain embarrassing. Makes D look home-made.

Ever since I started using D it never crossed my mind to even suspect that they'd be anything else than 80 bit. Luckily, most of my programs use integers, but had I unexpectedly stumbled upon this...

It's like you're on a sunny picnic with your family, and around comes this 4 year old. Suddenly he's got a semiautomatic shotgun, and he empties it in your stomach. You'd die with a face expressing utter disbelief.
The 64 bit CPUs are coming and they'll change our way
of thinking just the way the 32 bit engines have done.
Internal int format 32 bits? Suffixes for 64 bit int's?
For now it is maybe still a yes, in the not so distant
future maybe not. I just hope D can cope and will still
be "young of age" when this happens.
I'm sure people would get thrown for a loop if given a choice between 
func(int) and func(long) the code func(1) called func(long). Even on a 64 
bit platform. If one really didn't care which was chosen then
 import std.stdint;
 ...
 func(cast(int_fast32_t)1);
I'd've said "if one really _did_ care". :-)
would be a platform-independent way of choosing the "natural" size for 1 
(assuming "fast32" would be 64 bits on 64 bit platforms). And more 
explicit, too.
Actually, I wish there were a pragma to force the default -- and it would force all internal stuff too. I hate it when somebody else knows better what I want to calculate with. And hate not trusting what is secretly cast to what and when. What if some day I'm using DMD on a Sparc, and had to read through the asm listings of my own binaries, just because my client needs to know for sure.
 Not too long from now we'll be averaging 16GB of
 main memory, 32 bit computers will be gone and I
 bet the average programmer will not be bothered
 using anything else than 64 bits for his integer
 of choice.
This'll happen too fast. When M$ gets into 64 bits on the desktop, no self-respecting suit, office clerk, or other jerk wants to be even seen with a 32 bit computer. Need'em or not.
 I doubt that there are many people left who are
 still trying to use 16 bit variables for integer
 calculations, even if they'd fit their requirements.
 The same thing will happen to 32 bit formats in
 PC-like equipment, I'm sure. (I am not talking
 about UTF-32 formats here.)
At first it took some getting used to when writing in D:

   for (int i=0; i<8; i++) {.....}

knowing I'm "wasting", but now it seems natural. Things change.
 The main reason is that for the first time
 ever the integer range will be big enough for
 almost anything, no overflow at 128 nor 32768
 nor 2+ billion. What a relief!
That will change some things for good. For example, then it will be perfectly reasonable to do all money calculations with int64. There's enough room to do the World Budget, counted in cents. That'll make it so much easier to do serious and fast bean counting software.
Apr 03 2005
Anders F Björklund <afb algonet.se> writes:
Georg Wrede wrote:

 The compiler offers an 80 bit type,
(some compilers)
 the FPU calculates only in 80 bit format,
(some FPUs)
 but default literals are parsed for some
 illogical reason to double precision values.
The default precision is double, f is for single and l is for extended. I'm not sure it makes sense to have the default be a non-portable type ?
 That would not really impress me.
Well put. It's plain embarrassing. Makes D look home-made.
I don't see how picking a certain default (which happens to be the same default as in most other C-like languages) is "home-made" ?
 This'll happen too fast. When M$ gets into 64 bits on the desktop, no 
 self-respecting suit, office clerk, or other jerk wants to be even seen 
 with a 32 bit computer. Need'em or not.
Currently there is a real shortage of 64-bit Windows drivers, though... However, nobody wants to be seen with a *16-bit* computer for sure :-) --anders
Apr 03 2005
next sibling parent "Bob W" <nospam aol.com> writes:
"Anders F Björklund" <afb algonet.se> wrote in message 
news:d2oas2$245e$1 digitaldaemon.com...
 Georg Wrede wrote:

 The compiler offers an 80 bit type,
(some compilers)
 the FPU calculates only in 80 bit format,
(some FPUs)
I would never advocate the real for internal calculation on compilers or target systems without 80 bit FPU (although it would not harm using 80 bit emulation during compile time, except for a slight compiler performance degradation). I am just certain that you need to use the highest precision available on the (compiler) system to represent the maximum number precision correctly.

Remember: you will not introduce errors in your double values by using real for evaluation, but you'll definitely have inaccuracies in most fractional real values if they are derived from a double.

And yes, the compiler would be able to evaluate the expression

   double d = 1e999/1e988;

correctly, because the result is a valid double value. Currently it doesn't unless you override your defaults.
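A sketch of that example with and without the suffix (the commented line reflects the situation being described, where an unsuffixed 1e999 is treated as an out-of-range double):

    void main()
    {
        // double d = 1e999 / 1e988;  // as described above, this does not give the intended value today
        double d = 1e999L / 1e988L;   // folded at real precision; the quotient (about 1e11) is an ordinary double
    }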
 but default literals are parsed for some
 illogical reason to double precision values.
The default precision is double, f is for single and l is for extended. I'm not sure it makes sense to have the default be a non-portable type ?
Again, there are no portability issues, except if you want to introduce inaccuracies as a compiler feature. A fractional value like 1.2 is its double (and float) equivalent even if derived from a real. But you will be way off precision compared to 1.2L if you intentionally or unintentionally create a real from a double. "real r=1.2" simply does not work properly and should be flagged by the compiler with at least a warning, if you guys for some reason have to mimic C in this respect. (Any Delphi programmers out there to comment?)
 That would not really impress me.
Well put. It's plain embarrassing. Makes D look home-made.
I don't see how picking a certain default (which happens to be the same default as in most other C-like languages) is "home-made" ?
C looks "home made" at times, but you'd have to expect that from a language which is several decades old. Why would D want to start like this in the very first place?
 This'll happen too fast. When M$ gets into 64 bits on the desktop, no 
 self-respecting suit, office clerk, or other jerk wants to be even seen 
 with a 32 bit computer. Need'em or not.
Currently there is a real shortage of 64-bit Windows drivers, though... However, nobody wants to be seen with a *16-bit* computer for sure :-) --anders
That's what I've told my parents in my student times, but they have never bought me that Ferrari ....
Apr 03 2005
Georg Wrede <georg.wrede nospam.org> writes:
Anders F Björklund wrote:
 Georg Wrede wrote:
 The default precision is double, f is for single and l is for extended.
 I'm not sure it makes sense to have the default be a non-portable type ?
My dream would be that depending on the FPU, the default would be the "best" -- i.e. 80 on Intel, 64 on, say, Sparc -- and that an undecorated float literal would be of _this_type_ by default. That'd make me euphorious. (Pardon the pun.)

So, I'd like a "default floating type", that is automatically aliased to be the smartest choice on the current platform. AND that all internal confusion would be removed.

As a user, I want to write 2.3 and _know_ that the system understands that it means "whatever precision we happen to use on this particular platform for floating operations anyway". If I had another opinion of my own, I'd damn well tell the compiler. If I need to work with 64 bit floats on an 80 bit machine, then I'll specify that (pragma, decorated literal, whatever), and trust that "everything" then happens with 64 bits.

There's too much C legacy clouding this room. ;-) I mean, when's the last time anyone did half their float math with 64 bits and half with 80, in the same program?
Apr 03 2005
Anders F Björklund <afb algonet.se> writes:
Georg Wrede wrote:

 My dream would be that depending on the FPU, the default would be the 
 "best" -- i.e. 80 on Intel, 64 on, say, Sparc -- and that an undecorated 
 float literal would be of _this_type_ by default.
You seem to be ignoring 32-bit floating point types ? That would be a mistake, they are very useful for sound and image processing, for instance ? Using 64 or more bits per channel would be overkill...

Also, with vector units one can process like 4 floats at a time. That is not too bad, either... (speedwise) Unfortunately, D does not support SSE/AltiVec (yet ?)

Or maybe it's just a little side-effect of your dislike of having to type an extra 'L' to get "extended" constants ? (as has been pointed out, 1.0 is universal "C" code for "double")
 So, I'd like a "default floating type", that is automatically aliased to 
 be the smartest choice on the current platform. AND that all internal 
 confusion would be removed.
From what I have seen, D is not about "automatically choosing the smartest type". It's about letting the programmer choose which type is the smartest to use ? Even if that means that one has to pick from like 5 integer types, 4 floating point types, 3 string types and even 3 boolean types... (choices, choices)

And sometimes you have to cast those literals. Like for instance: -1U, or cast(wchar[]) "string". Kinda annoying, but not very complex - and avoids complicating the compiler ?

D can be a pretty darn low-level language at times, IMHO...

--anders
Apr 03 2005
Georg Wrede <georg.wrede nospam.org> writes:
Anders F Björklund wrote:
 Georg Wrede wrote:
 
 My dream would be that depending on the FPU, the default would be the 
 "best" -- i.e. 80 on Intel, 64 on, say, Sparc -- and that an 
 undecorated float literal would be of _this_type_ by default.
You seem to be ignoring 32-bit floating point types ?
No, I just wanted to keep focused on the 80 vs 64 issue.
 That would be a mistake, they are very useful for
 sound and image processing, for instance ? Using
 64 or more bits per channel would be overkill...
 
 Also, with vector units one can process like 4 floats
 at a time. That is not too bad, either... (speedwise)
 Unfortunately, D does not support SSE/AltiVec (yet ?)
 
 Or maybe it's just a little side-effect of your dislike
 of having to type an extra 'L' to get "extended" constants ?
 (as been pointed out, 1.0 is universal "C" code for "double")
<Sigh.> Dislike indeed. And I admit, mostly the L or not, makes no difference. So one ends up not using L. And the one day it makes a difference, one will look everywhere else for own bugs, D bugs, hardware bugs, before noticing it was the missing L _this_ time. And then one gets a hammer and bangs one's head real hard.
 So, I'd like a "default floating type", that is automatically aliased 
 to be the smartest choice on the current platform. AND that all 
 internal confusion would be removed.
From what I have seen, D is not about "automatically choosing the smartest type". It's about letting the programmer choose which type is the smartest to use ? Even if that means that one has to pick from like 5 integer types, 4 floating point types, 3 string types and even 3 boolean types... (choices, choices) And sometimes you have to cast those literals. Like for instance: -1U, or cast(wchar[]) "string". Kinda annoying, but not very complex - and avoids complicating the compiler ? D can be a pretty darn low-level language at times, IMHO...
:-) "A practical language for practical programmers." Maybe it's just me. When I read the specs, it is absolutely clear, and I agree with it. But when I write literals having a decimal point I somehow can't help "feeling" that they're full 80 bit. Even when I know they're not. Somehow I don't seem to have this same problem in C. Maybe I should see a doctor. :-(
Apr 04 2005
Anders F Björklund <afb algonet.se> writes:
Georg Wrede wrote:

 And I admit, mostly the L or not, makes no difference. So one ends up 
 not using L. And the one day it makes a difference, one will look 
 everywhere else for own bugs, D bugs, hardware bugs, before noticing it 
 was the missing L _this_ time. And then one gets a hammer and bangs 
 one's head real hard.
D lives in a world of two schools. The string literals, for instance, they are untyped and only spring into existence when you actually do assign them to anything. But the two numeric types are "different"...

To be compatible with C they default to "int" and "double", and then you have to either cast them or use the 'L' suffixes to make them use "long" or "extended" instead. Annoying, but same as before ?

BTW; In Java, you get an error when you do "float f = 1.0;" I'm not sure that is all better, but more "helpful"... Would you prefer it if you had to cast your constants ?

Maybe one of those new D warnings would be in place here ? "warning - implicit conversion of expression 1.0 of type double to type extended can cause loss of data"

--anders
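A small sketch of the difference being discussed (D accepts all of these; the Java comparison is only in the comments):

    void main()
    {
        float f = 1.0;   // D: fine, the double literal converts implicitly to float
                         // (Java rejects this line and demands a 1.0f suffix)
        real  r = 1.0;   // D: also fine, even though 1.0 was already rounded to double precision
        real  x = 1.0L;  // the L suffix is what actually gives full real precision
    }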
Apr 04 2005
Georg Wrede <georg.wrede nospam.org> writes:
Anders F Björklund wrote:
 Georg Wrede wrote:
 
 And I admit, mostly the L or not, makes no difference. So one ends up 
 not using L. And the one day it makes a difference, one will look 
 everywhere else for own bugs, D bugs, hardware bugs, before noticing 
 it was the missing L _this_ time. And then one gets a hammer and bangs 
 one's head real hard.
D lives in a world of two schools. The string literals, for instance, they are untyped and only spring into existance when you actually do assign them to anything. But the two numeric types are "different"... To be compatible with C they default to "int" and "double", and then you have to either cast them or use the 'L' suffixes to make them use "long" or "extended" instead. Annoying, but same as before ? BTW; In Java, you get an error when you do "float f = 1.0;" I'm not sure that is all better, but more "helpful"... Would you prefer it if you had to cast your constants ?
Any time I write

   numerictype variable = 2.3;

I want the literal implicitly to be taken as being of "numerictype". I don't want to decorate the literal ANYWHERE ELSE than when I for some reason want it to be "of unexpected" type.

What if I wrote

   int v = -7;

and found out that "-7" is first converted to int16, then to int. Would you blame me for murdering the compiler writer? I just refuse to see what's so different here.

-------------

Shit! On proofreading I noticed that my "-7" example doesn't even work the way I meant. Maybe I should murder me, and let D have L's all over the source code. Let's just assume v suddenly holds the value 65529.
Apr 04 2005
Anders F Björklund <afb algonet.se> writes:
Georg Wrede wrote:

 Any time I write
 
 numerictype variable = 2.3;
 
 I want the literal implicitly to be taken as being of "numerictype".
That's how D strings work...

   char[]  s = "hello"; // s.ptr now holds: "hello\0"
   wchar[] t = "hello"; // t.ptr now holds: "\0h\0e\0l\0l\0o\0\0"

And "hello" is simply untyped. But it's *not* how numbers work...
 What if I wrote
 
 int v = -7;
 
 and found out that "-7" is first converted to int16, then to int.
-7, like all other such small integer literals, is of type "int". Therefore, you can't assign it to - for instance - an "uint" ?

   // cannot implicitly convert expression -7 of type int to uint

And for similar historical reasons, floating literals are "double". Unfortunately, it's harder to tell double/extended apart than e.g. byte/short/int/long. (or char/wchar/dchar, for character literals)

Another thing is that a Unicode string can be converted from char[] to wchar[] without any loss, and the same is true for a (small) integer ? But it's not true for extended.

For instance, the compiler "knows" e.g. that 0x0FFFFFFFF is an uint and that 0x100000000 is a long. But it doesn't know what type that 1.0 has. And since there is no good way to tell, it simply picks the default one.

But if there was no C legacy, then D literals could probably always default to "real" and "long", or even unnamed floating and unnamed integer types - instead of the current choices of "double" and "int". Then again, there is. (legacy)

--anders
Apr 04 2005
Georg Wrede <georg.wrede nospam.org> writes:
Anders F Björklund wrote:
 Georg Wrede wrote:
 
 Any time I write

 numerictype variable = 2.3;

 I want the literal implicitly to be taken as being of "numerictype".
That's how D strings work...
Right!
 But it's *not* how numbers work...
They should.
 What if I wrote

 int v = -7;

 and found out that "-7" is first converted to int16, then to int.
-7, like all other such small integer literals, is of type "int".
Yes. But what if they weren't. To rephrase, what if I wrote

   uint w = 100000;

and found out that it gets the value 34464. And the docs would say "decimal literals without decimal point and without minus, are read in as int16, and then cast to the needed type". And the docs would say "this is for historical reasons bigger than us".

---

Heck, Walter's done bolder things in the past. And this can't possibly be hard to implement either. I mean, either have the compiler parse it as what is "wanted" (maybe this can't be done with a context independent lexer, or whatever), or, have it parse them as the largest supported type. (This would slow down compiling, but not too much, IMHO.)
Apr 04 2005
prev sibling parent reply "Bob W" <nospam aol.com> writes:
"Anders F Björklund" <afb algonet.se> wrote in message 
news:d2roel$2g1f$1 digitaldaemon.com...
 Georg Wrede wrote:

 Any time I write

 numerictype variable = 2.3;

 I want the literal implicitly to be taken as being of "numerictype".
That's how D strings work... char[] s = "hello"; // s.ptr now holds: "hello\0" wchar[] t = "hello"; // t.ptr now holds: "\0h\0e\0l\0l\0o\0\0" And "hello" is simply untyped. But it's *not* how numbers work...
I cannot see why not:

   float  f = 2.3;  // f now holds the float value of 2.3
   double d = 2.3;  // d now holds D's default precision value
   real   r = 2.3;  // r should now get what it deserves
   otherprecision o = 2.3;  // even this should work (if implemented)

That's how numbers (could) work.
 -7, like all other such small integer literals, is of type "int".

 Therefore, you can't assign it to - for instance - an "uint" ?
 // cannot implicitly convert expression -7 of type int to uint

 And for similar historical reasons, floating literals are "double".
In C you could assign it to unsigned int. You probably don't want that feature back in D just for historical reasons. But you can assign a value with impaired accuracy (double -> extended) and the compiler stays mute. For historical reasons?
 Unfortunately, it's harder to tell double/extended apart than e.g.
 byte/short/int/long. (or char/wchar/dchar, for character literals)
Now tell me how you can possibly tell the difference between 1 1 and 1 (0x01 0x0001 0x00000001) ? You cannot, but the 1 byte, 2 bytes or 4 bytes somehow tend to find their correct destination.

Or just tell me why you possibly need to see the difference between 2.3 2.3 and 2.3 (float double extended) at compile time ?

Simple solution: parse the values by assuming the highest implemented precision and move them into the precision required. It's as simple as that. Of course you would have to do away first with that C compliant legacy parser.
 Another thing is that a Unicode string can be converted from char[] to
 wchar[] without any loss, and the same is true for a (small) integer ?
 But it's not true for extended.
But you can always convert an extended to float or double without consequences for the resulting value. That is what the compiler should do if it is allowed to do so. Why restrict it to just being able to assign a double to a float?
 For instance, the compiler "knows" e.g. that 0x0FFFFFFFF is an uint and
 that 0x100000000 is a long. But it doesn't know what type that 1.0 has.
Strange example. How does the compiler know that 0xFFFFFFFF is NOT going to be a long? I can take your own example: The compiler knows that 1e300 is a double and that 1e400 is extended. But it does not know what type 1 has.
 And since there no good way to tell, it simply picks the default one.
And since there is no good way to tell it picks (should pick) the maximum implemented precision in case of 2.3 . Don't worry, it will produce the proper double value if this is the required type.
 But if there was no C legacy, then D literals could probably always
 default to "real" and "long", or even unnamed floating and unnamed
 integer types - instead of the current choices of "double" and "int".
 Then again, there is. (legacy)
Legacy? Why would we need or want this? I have a huge choice of C compilers if I want legacy. D however, I would want to see as user friendly as possible or as modern as possible. This means that it shouldn't be designed just to accommodate C veterans (even including myself).
Apr 04 2005
Anders F Björklund <afb algonet.se> writes:
Bob W wrote:

But it's *not* how numbers work...
[...]
 That's how numbers (could) work.
True, just not how they do just yet.
 You probably don't want that feature back in D
 just for historical reasons.
No, not really :-) (Adding a 'U' is simple enough)
 Or just tell me why you possibly need to see the
 difference between 2.3  2.3  and  2.3  (float double
 extended) at compile time ?  Simple solution: parse
 the values by assuming the highest implemented precision
 and move them in the precision required. It's as simple
 as that. Of course you would have to do away first
 with that C compliant legacy perser.
You would have to ask Walter, or someone knowing the parser ? Me personally, I don't have any problem whatsoever with 2.3 being parsed as the biggest available floating point. I'm not sure if it's a problem if you have several such constants, but then again I have just been using "double" for quite a while. (I mean: if the compiler folds constant expressions, things like that)
 Strange example. How does the compiler know that
 0xFFFFFFFF is NOT going to be a long?
Okay, it was somewhat farfetched (as my examples tend to be) But the short answer is the same as with the floating point: since it would be "0xFFFFFFFFL", if it was a long... :-P And I wasn't defending it here, just saying that it crops up with the other types as well - the default type / suffix thing.
 And since there is no good way to tell it picks
 (should pick) the maximum implemented precision
 in case of 2.3 . Don't worry, it will produce
 the proper double value if this is the required
 type.
I'm not worried, it has worked for float for quite some time. (except in Java, but it tends to whine about a lot of things)
 Legacy? Why would we need or want this?
Beyond link compatibility, it beats me... But so it is... All I know is a lot of D features is because "C does it" ? --anders
Apr 04 2005
parent reply "Walter" <newshound digitalmars.com> writes:
"Anders F Björklund" <afb algonet.se> wrote in message
news:d2shm6$e0c$1 digitaldaemon.com...
 All I know is a lot of D features is because "C does it" ?
A big problem with subtly changing C semantics is that many programmers that D appeals to are longtime C programmers, and the semantics of C are burned into their brain. It would cause a lot of grief to change them. It's ok to change things in an obvious way, like how casts are done, but changing the subtle behaviors needs to be approached with a lot of caution.
Apr 04 2005
Anders F Björklund <afb algonet.se> writes:
Walter wrote:

 A big problem with subtly changing C semantics is that many programmers that
 D appeals to are longtime C programmers, and the semantics of C are burned
 into their brain. It would cause a lot of grief to change them. It's ok to
 change things in an obvious way, like how casts are done, but changing the
 subtle behaviors needs to be approached with a lot of caution.
As long as it doesn't stifle innovation, that approach sounds sound to me. But keep in mind that a lot of people have not used C at all, but are starting with Java, or even D, as their first compiled language... ?

After all: (http://www.digitalmars.com/d/overview.html) "Extensions to C that maintain source compatibility have already been done (C++ and Objective-C). Further work in this area is hampered by so much legacy code it is unlikely that significant improvements can be made." The same thing applies to a lot of the C semantics, perhaps now "old" ?

So far I think D has maintained a balance between "same yet different", but it could still have a few remaining rough edges filed off... (IMHO) And just changing floating literals shouldn't be *that* bad, should it ?

--anders
Apr 05 2005
Georg Wrede <georg.wrede nospam.org> writes:
Walter wrote:
 "Anders F Björklund" <afb algonet.se> wrote in message 
 news:d2shm6$e0c$1 digitaldaemon.com...
 
 All I know is a lot of D features is because "C does it" ?
A big problem with subtly changing C semantics is that many programmers that D appeals to are longtime C programmers, and the semantics of C are burned into their brain. It would cause a lot of grief to change them. It's ok to change things in an obvious way, like how casts are done, but changing the subtle behaviors needs to be approached with a lot of caution.
Do you have specific examples of situations where (either parsing decimal literals at full precision before assignment, or parsing them at the precision to be assigned -- your choice) does actually bite the aged C programmer?
Apr 05 2005
parent reply "Walter" <newshound digitalmars.com> writes:
"Georg Wrede" <georg.wrede nospam.org> wrote in message
news:42526EB2.2050304 nospam.org...
 Walter wrote:
 "Anders F Björklund" <afb algonet.se> wrote in message
 news:d2shm6$e0c$1 digitaldaemon.com...

 All I know is a lot of D features is because "C does it" ?
A big problem with subtly changing C semantics is that many programmers that D appeals to are longtime C programmers, and the semantics of C are burned into their brain. It would cause a lot of grief to change them. It's ok to change things in an obvious way, like how casts are done, but changing the subtle behaviors needs to be approached with a lot of caution.
Do you have specific examples of situations where (either parsing decimal literals at full precision before assignment, or parsing them at the precision to be assigned -- your choice) does actually bite the aged C programmer?
I do know of several programs that are designed to "explore" the limits and characteristics of the floating point implementation that will produce incorrect results. I don't think it would be a problem if those programs broke.

(*) C programs that provide a "back end" or VM to languages that require 64 bit floats, no more, no less, could break when ported to D.

Another problem is that the program can produce different results when optimized - because optimization produces more opportunities for constant folding. This can already happen, though, because of the way the FPU handles intermediate results, and the only problem I know of that has caused is (*).

And lastly there's the potential problem of using the D front end with a C optimizer/code generator that would be very difficult to upgrade to this new behavior with floating point constants. I know the DMD back end has this problem. I don't know if GDC does. Requiring this new behavior can retard the development of D compilers.
Apr 05 2005
parent "Bob W" <nospam aol.com> writes:
"Walter" <newshound digitalmars.com> wrote in message 
news:d2uh5j$2fab$1 digitaldaemon.com...
 "Georg Wrede" <georg.wrede nospam.org> wrote in message
 news:42526EB2.2050304 nospam.org...
 Walter wrote:
 "Anders F Björklund" <afb algonet.se> wrote in message
 news:d2shm6$e0c$1 digitaldaemon.com...

 All I know is a lot of D features is because "C does it" ?
A big problem with subtly changing C semantics is that many programmers that D appeals to are longtime C programmers, and the semantics of C are burned into their brain. It would cause a lot of grief to change them. It's ok to change things in an obvious way, like how casts are done, but changing the subtle behaviors needs to be approached with a lot of caution.
Do you have specific examples of situations where (either parsing decimal literals at full precision before assignment, or parsing them at the precision to be assigned -- your choice) does actually bite the aged C programmer?
I do know of several programs that are designed to "explore" the limits and characteristics of the floating point implementation that will produce incorrect results. I don't think it would be a problem if those programs broke.
I also don't think that D has to make sure that these 0.01% of all applications will run properly.
 (*) C programs that provide a "back end" or VM to languages that require
 64 bit floats, no more, no less, could break when ported to D.
If you require "no more, no less" you cannot use the IA32 architecture the comventional way. You'd have to make sure that each and every intermedite result is converted back to 64 bits. This was already done in several portability-paranoic designs by storing intermediates to memory and reloading them.
 Another problem is that the program can produce different results when
 optimized - because optimization produces more opportunities for constant
 folding. This can already happen, though, because of the way the FPU 
 handles
 intermediate results, and the only problem I know of that has caused is 
 (*).
That problem is not just limited to optimisation and constant folding. It can strike one's program even at runtime if internal FPU precision is higher than the target precision. I just fail to understand that this is a problem, because there is just too much ported software running happily on IA32 architecture.

Yes, I am cruel enough not to care at all about these remaining 0.01%, because I think that sophisticated portable programs need sophisticated programmers. That is not exactly the group which needs all the handholding they can get from the compiler.
 And lastly there's the potential problem of using the D front end with a C
 optimizer/code generator that would be very difficult to upgrade to this 
 new
 behavior with floating point constants. I know the DMD back end has this
 problem. I don't know if GDC does. Requiring this new behavior can retard
 the development of D compilers.
Now we are talking! To my own surprise I have not the slightest idea how NOT to agree with that.
Apr 06 2005
prev sibling parent reply =?ISO-8859-1?Q?Anders_F_Bj=F6rklund?= <afb algonet.se> writes:
Bob W wrote:

 I cannot see why not:
 
 float  f = 2.3;  // f now holds the float value of 2.3
 double d = 2.3;  // d now holds D's default precision value
 real   r = 2.3;  // r should now get what it deserves
 otherprecision o = 2.3;  // even this should work (if implemented)
 
 That's how numbers (could) work.
I think Walter mentioned in this thread that he had considered adding such floating point literals, but hadn't had the time yet ?

   "Suppose it was kept internally with full 80 bit precision,
   participated in constant folding as a full 80 bit type, and was only
   converted to 64 bits when a double literal needed to be actually
   inserted into the .obj file?"

This would make the floating point work the same as the various strings do now, without adding more suffixes or changing defaults.

And that I have no problem with, I was just pre-occupied with all that other talk about the non-portable 80-bit float stuff... :-)

Although, one might want to consider keeping L"str" and 1.0f around, simply because it is less typing than cast(wchar[]) or cast(float) ?

It would also be a nice future addition, for when we have 128-bit floating point? Then those literals would still be kept at the max...

Sorry if my lame examples somehow suggested I thought it was *good*, that the compiler truncates all floating point literals into doubles.

--anders
Apr 04 2005
parent reply "Bob W" <nospam aol.com> writes:
"Anders F Björklund" <afb algonet.se> wrote in message 
news:d2sit7$fac$1 digitaldaemon.com...

   "Suppose it was kept internally with full 80 bit precision,
   participated in constant folding as a full 80 bit type, and was only
   converted to 64 bits when a double literal needed to be actually
   inserted into the .obj file?"
A Sun3 compiler was even more extreme if I remember correctly: It used some extended precision for evaluations at compile time without even offering that format to the programmer. As you have already mentioned - they prefer to give us no more than doubles.
 This would make the floating point work the same as the various
 strings do now, without adding more suffixes or changing defaults.
No objections.
 And that I have no problem with, I was just pre-occupied with all
 that other talk about the non-portable 80-bit float stuff... :-)
I cannot imagine too many people complaining if constant folding produces the intended double precision 2.0 instead of 1.999999....
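(A small sketch of the same effect with a different constant - hypothetical code; the printed result depends on the precision at which the product is evaluated or folded:)

  import std.stdio;

  void main()
  {
      double a = 0.1;
      double b = a * 3;

      // Evaluated entirely in 64 bits the product is 0.30000000000000004;
      // evaluated at 80 bits and rounded once, it lands on the closest
      // double to 0.3 and the comparison below succeeds.
      writefln("%.17f", b);
      writefln(b == 0.3 ? "equals 0.3" : "does not equal 0.3");
  }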
 Although, one might want to consider keeping L"str" and 1.0f around,
 simply because it is less typing than cast(wchar[]) or cast(float) ?
Of course I would like to keep all of that too. I just don't like 'stupid' compilers, i.e. I don't want to be required to explicitly tell them what to do if my code is sufficient to imply the action required.
 It would also be a nice future addition, for when we have 128-bit
 floating point? Then those literals would still be kept at the max...
Sure it would. But then we might have to decide between radix 2 and radix 10 formats if IEEE gets it into the new 754 revision. That will be a tough choice, especially if the performance of the latter comes close to the binary version. Ten years ago this would have been unthinkable, but now the silicon could be ready for it.
Apr 04 2005
parent reply =?ISO-8859-1?Q?Anders_F_Bj=F6rklund?= <afb algonet.se> writes:
Bob W wrote:

It would also be a nice future addition, for when we have 128-bit
floating point? Then those literals would still be kept at the max...
Sure it would. But then we might have to decide between radix 2 and radix 10 formats if IEEE gets it into the new 754 revision. That will be a tough choice, especially if the performance of the latter comes close to the binary version. Ten years ago this would have been unthinkable, but now the silicon could be ready for it.
Call me old-fashioned, but I prefer binary... Of course, sometimes BCD and Decimal are useful like when adding up money and things like that. Or when talking to those puny non-hex humans. :-) --anders
Apr 05 2005
parent reply "Bob W" <nospam aol.com> writes:
"Anders F Björklund" <afb algonet.se> wrote in message 
news:d2teag$19gm$1 digitaldaemon.com...
 Bob W wrote:

It would also be a nice future addition, for when we have 128-bit
floating point? Then those literals would still be kept at the max...
Sure it would. But then we might have to decide between radix 2 and radix 10 formats if IEEE gets it into the new 754 revision. That will be a tough choice, especially if the performance of the latter comes close to the binary version. Ten years ago this would have been unthinkable, but now the silicon could be ready for it.
Call me old-fashioned, but I prefer binary... Of course, sometimes BCD and Decimal are useful like when adding up money and things like that. Or when talking to those puny non-hex humans. :-) --anders
It will be application and performance dependent. My optimism about the silicon which could get radix 10 computation close to binary is most likely unfounded. So scientific work and high performance computing will have to be done in binary.

But there is a huge demand for radix 10 computation for casual and financial use. Just imagine: most radix 10 fractions cannot be represented properly in a radix 2 format (except 0.75, 0.5, 0.25 etc.). You'll eliminate a great deal of rounding errors by using radix 10 formats for decimal in - decimal out apps.

BCD is not an issue here because you'll be unable to pack the same amount of data in there as compared to binary. IEEE makes sure that radix 10 and radix 2 formats will be comparable in precision and range. So they will use declets to encode 3 digits each (10 bits holding 000..999), thus sacrificing only a fraction of what BCD is wasting.

In general the formats look like an implementer's nightmare, especially if someone wanted to emulate the DecimalXX's in software. But I can almost smell that there is something in the FPU pipelines of several companies. They just have to wait until the 754 and 854 groups are nearing conclusion.
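(A quick back-of-the-envelope check of the declet figures above - illustrative code only:)

  import std.stdio;

  void main()
  {
      // Three decimal digits need 1000 code points.
      writefln("10-bit declet : %s values for 3 digits", 1 << 10);  // 1024, ~2% unused
      writefln("12-bit BCD    : %s values for 3 digits", 1 << 12);  // 4096, ~75% unused
  }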
Apr 05 2005
parent reply =?ISO-8859-1?Q?Anders_F_Bj=F6rklund?= <afb algonet.se> writes:
Bob W wrote:

 In general the formats look like an implementer's
 nightmare, especially if someone wanted to
 emulate the DecimalXX's in software. But I can
 almost smell that there is something in the
 FPU pipelines of several companies. They just
 have to wait until the 754 and 854 groups are
 nearing conclusion.
Great, I just love a committee designing something... <sniff> Smells like C++ ;-)

Think I'll just continue to use the time-honored workaround to count the money in "cents" instead... (and no, Walter, I don't mean the 128-bit kind :-) )

But 128-bit binary floats and integer would be nice.

---anders
Apr 05 2005
parent "Walter" <newshound digitalmars.com> writes:
"Anders F Björklund" <afb algonet.se> wrote in message
news:d2tubo$1nso$1 digitaldaemon.com...
 Think I'll just continue to use the time-honored
 workaround to count the money in "cents" instead...
I agree. I don't see any advantage BCD has over that.
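(The cents workaround in its simplest form, as an illustrative sketch - the names are made up:)

  import std.stdio;

  void main()
  {
      // Keep money in integer cents; only format as a decimal on output.
      long priceCents = 1999;                  // $19.99
      long totalCents = priceCents * 3;
      writefln("total: $%s.%02d", totalCents / 100, totalCents % 100);
  }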
Apr 05 2005
prev sibling parent "Bob W" <nospam aol.com> writes:
"Georg Wrede" <georg.wrede nospam.org> wrote in message 
news:42500AC4.8000703 nospam.org...
 Anders F Björklund wrote:
 Georg Wrede wrote:
 The default precision is double, f is for single and l is for extended.
 I'm not sure it makes sense to have the default be a non-portable type ?
My dream would be that depending on the FPU, the default would be the "best" -- i.e. 80 on Intel, 64 on, say, Sparc -- and that an undecorated float literal would be of _this_type_ by default. That'd make me euphorious. (Pardon the pun.)
I bet at least 90% of D users are with you.
 So, I'd like a "default floating type", that is automatically aliased to 
 be the smartest choice on the current platform. AND that all internal 
 confusion would be removed.

 As a user, I want to write 2.3 and _know_ that the system understands that 
 it means "whatever precision we happen to use on this particular platform 
 for floating operations anyway". If I had another opinion of my own, I'd 
 damn well tell the compiler.
Fair enough. Your 2.3 will be as close as it can get for doubles and floats alike. But reals will have to be treated differently unless you are prepared to accept an 11 bit precision deficiency. This compiler behaviour is unnecessary and nobody should blame you for complaining about it.
 If I need to work with 64 bit floats on an 80 bit machine, then I'll 
 specify that (pragma, decorated literal, whatever), and trust that 
 "everything" then happens with 64 bits.

 There's too much C legacy clouding this room.  ;-)
I am too frightened to comment.
 I mean, when's the last time anyone did half their float math with 64 bits 
 and half with 80, in the same program?
I am frequently mixing floats and doubles. Since literals are all doubles by default, I'd never even think of suffixing any of the values intended for float variables, because it is simply unnecessary. One can always assign a higher precision FP value to a lower precision one without trouble. If the literals were parsed as reals instead of doubles, my float variables would still be the same and no legacy compatibility paranoia whatsoever would come true.

It just does not work in the other direction, because the D compiler on 80 bit FPU systems is currently instructed to set the 11 precision bits, which are required to form a proper real value, to zero - and it does this without a warning.

So you can handle doubles and floats the usual way, but do not even think about using a real if you are not absolutely sure that you'll always remember that dreaded "L" suffix. Your 2.3 is a double, it is a float and it is a crippled real by design.
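(A minimal sketch of the effect - assuming an 80-bit real and the literal handling described above, the two initializations end up with different values:)

  import std.stdio;

  void main()
  {
      real viaDouble = 2.3;    // literal parsed as a double, then widened
      real viaReal   = 2.3L;   // literal parsed at full real precision

      writefln("%.20f", viaDouble);
      writefln("%.20f", viaReal);
      writefln(viaDouble == viaReal ? "equal" : "not equal");   // "not equal" on x87
  }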
Apr 03 2005
prev sibling parent reply "Ben Hinkle" <ben.hinkle gmail.com> writes:
"Bob W" <nospam aol.com> wrote in message 
news:d2nn80$1jsd$1 digitaldaemon.com...
 "Ben Hinkle" <ben.hinkle gmail.com> wrote in message 
 news:d2m8c1$9jj$1 digitaldaemon.com...
 - In my opinion there is no single reason why literals
  w/o suffix have to be treated as doubles (except maybe
  for C legacy).
 - This is why I'd like to see default (unsuffixed) literals
  to be parsed and evaluated in "the highest precision
  available" (whatever this will be in future, real for now).
and human legacy. Personally I'm used to .3 being a double. If I had three overloaded function func(float), func(double) and func(real) and I wrote func(.3) I'd be surprised it chose the real one just because I'm used to literals being doubles.
Would you even notice in most cases? The FPU will happily accept your real and do with it whatever it is instructed to do. On the other hand, if your .3 defaults to a double, a rounding error should not surprise you. If I was convinced that overloading is more often found than literals in mainstream (=moderately sophisticated) programs, then I'd give it more of a thought.
Double is the standard in many languages. Libraries expect doubles. People expect doubles. No-one was ever fired for choosing Double (to mangle an old IBM saying).
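(Ben's overload scenario, spelled out - under the current double default the middle overload wins for an unsuffixed literal:)

  import std.stdio;

  void func(float x)  { writefln("float"); }
  void func(double x) { writefln("double"); }
  void func(real x)   { writefln("real"); }

  void main()
  {
      func(.3);    // unsuffixed literal is a double, so this resolves to func(double)
      func(.3f);   // float suffix
      func(.3L);   // real suffix
  }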
 But that's my only complaint about your proposal. Since D doesn't have
 to worry about legacy code we can make .3 parse as whatever we want 
 technically.
Exactly. I'd also be concerned how to explain to someone interested in D, supposedly a much more modern language than C, the following: The compiler offers an 80 bit type, the FPU calculates only in 80 bit format, but default literals are parsed for some illogical reason to double precision values. That would not really impress me.
heh - I sense a slight bias creeping in "for some illogical reason". D's
 The 64 bit CPUs are coming and they'll change our way
 of thinking just the way the 32 bit engines have done.
 Internal int format 32 bits? Suffixes for 64 bit int's?
 For now it is maybe still a yes, in the not so distant
 future maybe not. I just hope D can cope and will still
 be "young of age" when this happens.
I'm sure people would get thrown for a loop if, given a choice between func(int) and func(long), the code func(1) called func(long). Even on a 64 bit platform.

If one really didn't care which was chosen then

  import std.stdint;
  ...
  func(cast(int_fast32_t)1);

would be a platform-independent way of choosing the "natural" size for 1 (assuming "fast32" would be 64 bits on 64 bit platforms). And more explicit, too.
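(Ben's std.stdint suggestion, filled out into a compilable sketch - which overload fires depends on what int_fast32_t is aliased to on the platform:)

  import std.stdint;
  import std.stdio;

  void func(int x)  { writefln("func(int)"); }
  void func(long x) { writefln("func(long)"); }

  void main()
  {
      // The cast states "the natural 32-bit-or-wider int" explicitly,
      // instead of relying on the default type of the literal 1.
      func(cast(int_fast32_t) 1);
  }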
Not too long from now we'll be averaging 16GB of main memory, 32 bit computers will be gone and I bet the average programmer will not be bothered using anything else than 64 bits for his integer of choice.
The Itanium was before its time, I guess.
 I doubt that there are many people left who are
 still trying to use 16 bit variables for integer
 calculations, even if they'd fit their requirements.
 The same thing will happen to 32 bit formats in
 PC-like equipment, I'm sure. (I am not talking
 about UTF-32 formats here.)
Could be. I can't see the future that clearly.
 The main reason is that for the first time
 ever the integer range will be big enough for
 almost anything, no overflow at 128 nor 32768
 nor 2+ billion. What a relief!
Apr 03 2005
parent reply "Bob W" <nospam aol.com> writes:
"Ben Hinkle" <ben.hinkle gmail.com> wrote in message 
news:d2ookl$2eqm$1 digitaldaemon.com...

------------------------------
 If I was convinced that overloading is more
 often found than literals in mainstream
 (=moderately sophisticated) programs, then
 I'd give it more of a thought.
Double is the standard in many languages. Libraries expect doubles. People expect doubles. No-one was ever fired for choosing Double (to mangle an old IBM saying).
(I quite like that saying.) If libraries want doubles, no problem - they'll get doubles, freshly produced by the FPU from an internal real. The libraries won't even know that a real was involved and would be happy. People don't expect doubles (C programmers do); they expect results that are as accurate as possible without suffixes, headaches, etc.
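(A sketch of that point - the function name is made up; any routine taking a double sees only the rounded value, since real converts to double implicitly at the call:)

  import std.stdio;

  // Hypothetical stand-in for a library routine that expects a double.
  double half(double x) { return x / 2; }

  void main()
  {
      real r = 2.3L;        // work internally at full precision
      double d = half(r);   // implicit real -> double conversion here
      writefln("%s", d);
  }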
 The compiler offers an 80 bit type,
 the FPU calculates only in 80 bit format,
 but default literals are parsed for some
 illogical reason to double precision values.

 That would not really impress me.
heh - I sense a slight bias creeping in "for some illogical reason". D's
I hereby officially withdraw the "illogical reason" statement. But let's theoretically introduce a new hyperprecision type. Do you really think that they would dare to require us to use a suffix for a simple assignment like "hyperprecision x=1.2" ? I bet not.
 Not too long from now we'll be averaging 16GB of
 main memory, 32 bit computers will be gone and I
 bet the average programmer will not be bothered
 using anything else than 64 bits for his integer
 of choice.
The Itanium was before its time, I guess.
The Itanium never existed. Just ask any mechanic, housewife, lawyer or his secretary. It's either "Pentium inside" or some "..on" from the other company. The other company made a 64 bit to let Chipzilla suffer, so Chipzilla will have "64 bit inside" for the rest of us.
Apr 03 2005
next sibling parent Georg Wrede <georg.wrede nospam.org> writes:
Bob W wrote:




 But let's theoretically introduce a new hyperprecision type.

 
 Do you really think that they would dare to require
 us to use a suffix for a simple assignment like
 "hyperprecision x=1.2" ? I bet not.
THERE! Wish I'd said that myself! :-)
Apr 04 2005
prev sibling parent reply Georg Wrede <georg.wrede nospam.org> writes:
Bob W wrote:
 "Ben Hinkle" <ben.hinkle gmail.com> wrote in message 
The Itanium was before its time, I guess.
 The Itanium never existed. Just ask any
 mechanic, housewife, lawyer or his secretary.
 Its either "Pentium inside" or some "..on"
 from the other company. The other company
 made a 64 bit to let Chipzilla suffer, so
 Chipzilla will have "64 bit inside" for the
 rest of us.
Actually, Chipzilla wanted to kill off x86 entirely. And AMD et al. They were becoming too cheap and ubiquitous, and there was too much competition. And carrying compatibility was an "unnecessary" expense. Skipping all that would give them massively reduced cost per chip, and ease development considerably. And they could charge unreasonable prices for their new chips.

In their delusions of grandeur they thought that having Bill recompile Windows for it, and with a massive campaign targeted to software and computer manufacturers, they'd create a universal belief that x86 is going to disappear Real Soon Now. And in secret, Chipzilla had a bag of patents that other chip makers would have to lease from them, after Conveniently Prolonged negotiations. Which they can now stuff up their chimney.

What they forgot was that everyone else saw through this. Unix vendors, PC vendors, customers, even Bill had nothing to win here. All it would result in would be massive grief thanks to the discontinuity, porting, driver writing, confusion, and obsolescence of good programs.

AMD did what a man had to do: get quickly down to drawing board, and do it right. Right for everybody. And I guess Bill, customers, vendors, and everybody else is happy. Except Chipzilla.
Apr 04 2005
parent reply "Bob W" <nospam aol.com> writes:
"Georg Wrede" <georg.wrede nospam.org> wrote in message 
news:42512B29.7010700 nospam.org...
 Bob W wrote:
 Actually, Chipzilla wanted to kill off x86 entirely. And AMD et al. They 
 were becoming too cheap ...........................................
 AMD did what a man had to do: get quickly down to drawing board, and do it 
 right. Right for everybody. And I guess Bill, customers, vendors, and 
 everybody else is happy. Except Chipzilla.
Good post! (My Outlook Express font shows me that you are running some sort of an AMD 64 engine, right? :)
Apr 04 2005
parent reply Georg Wrede <georg.wrede nospam.org> writes:
Bob W wrote:
 "Georg Wrede" <georg.wrede nospam.org> wrote in message 
 news:42512B29.7010700 nospam.org...
 
Bob W wrote:
Actually, Chipzilla wanted to kill off x86 entirely. And AMD et al. They 
were becoming too cheap ...........................................
AMD did what a man had to do: get quickly down to drawing board, and do it 
right. Right for everybody. And I guess Bill, customers, vendors, and 
everybody else is happy. Except Chipzilla.
Good post!
Thanks!
 (My Outlook Express font shows me that you are
 running some sort of an AMD 64 engine, right?  :)
One day I will, for sure. But currently I have several computers, none of which are more than 800MHz Pentiums, and there's enough horsepower for everything I do. So I'd have to conjure up an excuse first. Maybe "get a playing machine for the kids" would do. :-)

Hmm. Maybe buying one just to support and thank them would be the Right Thing to do.

Oh, and while I'm at it, I write all my posts to these newsgroups using W2k. So, no, I'm not a Microsoft hater. I was, however, an Apple hater. Up until OS X.
Apr 04 2005
parent =?ISO-8859-1?Q?Anders_F_Bj=F6rklund?= <afb algonet.se> writes:
Georg Wrede wrote:

 I was, however, an Apple hater. 
 Up until OS X.
Ah, you mean you hate Apple (Mac OS 9)
But that you like NeXT...    (Mac OS X)

Confusing, these days. :-)

See this link for a great history timeline:
http://www.kernelthread.com/mac/oshistory/

--anders

PS. Me, I'm an Apple guy. And Linux hacker.
    http://www.algonet.se/~afb/think.txt
Apr 04 2005
prev sibling parent =?ISO-8859-1?Q?Anders_F_Bj=F6rklund?= <afb algonet.se> writes:
Bob W wrote:

 The 64 bit CPUs are coming and they'll change our way
 of thinking just the way the 32 bit engines have done.
The 64 bit CPUs are already here, and supported by Linux...
Mainstream OS support, i.e. Win XP and Mac OS X, is now GM:
http://www.theinquirer.net/?article=22246
http://www.appleinsider.com/article.php?id=976
 Internal int format 32 bits? Suffixes for 64 bit int's?
I think the preferred int format is still 32 bits, even if the CPU can now handle 64 bit ints as well (but I only know PPC64). However, all pointers and indexes will *need* to be 64-bit... (meaning: use "size_t" instead of int, and do not cast void[]->long)
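(A minimal example of the size_t habit - illustrative only:)

  import std.stdio;

  int sum(int[] a)
  {
      int total = 0;
      // size_t matches the platform's pointer width, so the index can
      // cover a 64-bit length where a 32-bit int would wrap.
      for (size_t i = 0; i < a.length; i++)
          total += a[i];
      return total;
  }

  void main()
  {
      int[] data = [1, 2, 3];
      writefln("%s", sum(data));
  }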
 I like the slogan "D fully supports 80 bit reals", but
 a marketing guy would probably suggest to change this to
 "D fully supports 64 bit CPUs".
I think the D spec and compilers are more or less 64-bit now ? Phobos, on the other hand, still has a *lot* of 32/64 bugs... See the D.gnu newsgroup for a listing of some of them - in GDC ?

Finally, it's perfectly good to run a 32-bit OS on a 64-bit CPU. I know I do. ;-)

--anders
Apr 02 2005