
digitalmars.D.learn - Floating point minimum values are positive?

reply "Joseph Rushton Wakeling" <joseph.wakeling webdrake.net> writes:
Always amusing to run into those little quirks of parts of the 
language you've never worked with before ...

I just realized that while e.g. int.min gives a negative value, 
the floating point equivalent, e.g. double.min, gives a very 
small positive value -- I guess the smallest possible positive 
value.

I guess this is intentional, so I thought I'd ask why -- it's a 
little unintuitive after the integral type behaviour.

Also, how can I get the truly least value of a floating point 
number? I guess with e.g. -double.max ... ?
Jul 22 2013
next sibling parent "John Colvin" <john.loughran.colvin gmail.com> writes:
On Monday, 22 July 2013 at 21:02:30 UTC, Joseph Rushton Wakeling 
wrote:
 Always amusing to run into those little quirks of parts of the 
 language you've never worked with before ...

 I just realized that while e.g. int.min gives a negative value, 
 the floating point equivalent, e.g. double.min, gives a very 
 small positive value -- I guess the smallest possible positive 
 value.

 I guess this is intentional, so I thought I'd ask why -- it's a 
 little unintuitive after the integral type behaviour.

 Also, how can I get the truly least value of a floating point 
 number? I guess with e.g. -double.max ... ?
Pretty certain bearophile's been campaigning for the removal of these. Or am I confusing it with something else?
Jul 22 2013
prev sibling next sibling parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Mon, Jul 22, 2013 at 11:02:28PM +0200, Joseph Rushton Wakeling wrote:
 Always amusing to run into those little quirks of parts of the
 language you've never worked with before ...
 
 I just realized that while e.g. int.min gives a negative value, the
 floating point equivalent, e.g. double.min, gives a very small
 positive value -- I guess the smallest possible positive value.
 
 I guess this is intentional, so I thought I'd ask why -- it's a
 little unintuitive after the integral type behaviour.
 
 Also, how can I get the truly least value of a floating point
 number? I guess with e.g. -double.max ... ?
I believe double.min has been deprecated. In any case, it is a misnomer: it's supposed to give the smallest representable "normal" float, that is, the non-zero positive number with the smallest possible exponent and smallest possible mantissa. The new name for this, IIRC, is .min_normal.

There are some floats that can go even smaller than this, but they are "denormal" and may incur a large runtime overhead (they are intended to prevent underflow / minimize loss of precision in certain computations involving very small quantities, and aren't supposed to be used in normal calculations).

tl;dr: don't use double.min, use -double.max. :)

T -- Live and learn; you'll still die a fool. [Russian proverb]
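The distinction can be checked directly in D (a minimal sketch; the property names are those in current DMD, where the old double.min is deprecated in favor of double.min_normal):

```d
import std.stdio;

void main()
{
    // The smallest positive *normalized* double -- what the old,
    // misnamed double.min used to report.
    writeln(double.min_normal); // approx 2.2e-308: tiny, but positive

    // The truly least (most negative) finite double:
    writeln(-double.max);       // approx -1.8e+308

    // Contrast with the integral property, which really is negative:
    writeln(int.min);           // -2147483648

    assert(double.min_normal > 0);
    assert(-double.max < 0 && int.min < 0);
}
```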
Jul 22 2013
parent reply David <d dav1d.de> writes:
 There are some floats that can go even smaller than this, but they are
 "denormal" and may incur a large runtime overhead (they are intended to
 prevent underflow / minimize loss of precision in certain computations
 involving very small quantities, and aren't supposed to be used in
 normal calculations).
We were taught something else at university: the overhead is small, since CPUs handle them in hardware (they aren't software-implemented), and they are used automatically when needed; you can't choose whether or not to use them.
Jul 23 2013
parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Tue, Jul 23, 2013 at 09:39:05AM +0200, David wrote:
 There are some floats that can go even smaller than this, but they
 are "denormal" and may incur a large runtime overhead (they are
 intended to prevent underflow / minimize loss of precision in
 certain computations involving very small quantities, and aren't
 supposed to be used in normal calculations).
We were taught something else at university: the overhead is small, since CPUs handle them in hardware (they aren't software-implemented), and they are used automatically when needed; you can't choose whether or not to use them.
Well, I learned this from Wikipedia, so I'm not 100% sure whether or not it's accurate:

http://en.wikipedia.org/wiki/Denormal_number#Performance_issues

T -- Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it. -- Brian W. Kernighan
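For what it's worth, gradual underflow into the denormal range is easy to observe in D (a small sketch using std.math's isSubnormal):

```d
import std.math : isSubnormal;
import std.stdio;

void main()
{
    // Halving the smallest normal double does not snap to zero;
    // it underflows gradually into the subnormal (denormal) range.
    double d = double.min_normal / 2;

    assert(d > 0);           // still representable, still positive
    assert(isSubnormal(d));  // but only as a denormal
    assert(!isSubnormal(double.min_normal)); // the boundary itself is normal

    writeln("subnormal sample: ", d);
}
```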
Jul 23 2013
prev sibling parent reply "bearophile" <bearophileHUGS lycos.com> writes:
Joseph Rushton Wakeling:

 I just realized that while e.g. int.min gives a negative value, 
 the floating point equivalent, e.g. double.min, gives a very 
 small positive value -- I guess the smallest possible positive 
 value.
Please always compile all your D code with the "-wi" switch, because Walter is deaf to my suggestions to have informational warnings active by default in D compilations :-)

void main() {
    immutable x = double.min;
}

==>

test.d(2): Warning: min property is deprecated, use min_normal instead

The float/double/real min property will be removed. You will have to use -max. This is a patch over an historical accident of C++ limits values.

Bye,
bearophile
Jul 22 2013
next sibling parent reply "Joseph Rushton Wakeling" <joseph.wakeling webdrake.net> writes:
On Monday, 22 July 2013 at 21:13:49 UTC, bearophile wrote:
 Please always compile all your D code with the "-wi" switch, 
 because Walter is deaf at my suggestions to have informational 
 warnings active on default in D compilations :-)
It so happens that for the code in question I did, and had got that warning, and made the switch. I just hadn't realized that in either case the value would be positive! I just noticed because of a case where I was initializing a value to real.min_normal and then taking the max of this variable and zero, and I'd anticipated 0 would be the max.
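That pitfall in a nutshell (a minimal sketch using std.algorithm.comparison.max):

```d
import std.algorithm.comparison : max;

void main()
{
    real x = real.min_normal;       // tiny, but positive
    assert(max(x, 0.0L) == x);      // min_normal wins over zero!

    real y = -real.max;             // the actual least finite value
    assert(max(y, 0.0L) == 0.0L);   // now zero is the max, as anticipated
}
```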
 float/double/real min property will be removed. You will have 
 to use -max. This a patch over an historical accident of C++ 
 limits values.
It's no problem to use -max, I'd just never encountered this quirk before. Thanks to all for the advice and insight! :-)
Jul 22 2013
parent "bearophile" <bearophileHUGS lycos.com> writes:
Joseph Rushton Wakeling:

 It so happens that for the code in question I did, and had got 
 that warning, and made the switch. I just hadn't realized that 
 in either case the value would be positive! I just noticed 
 because of a case where I was initializing a value to 
 real.min_normal and then taking the max of this variable and 
 zero, and I'd anticipated 0 would be the max.
I see, and indeed a better warning message should mention both min_normal and -max, to help both people used to C++'s double min and newer programmers who don't care about C++ history... Andrej Mitrovic could improve that message.

Bye,
bearophile
Jul 22 2013
prev sibling parent reply =?UTF-8?B?QWxpIMOHZWhyZWxp?= <acehreli yahoo.com> writes:
On 07/22/2013 02:13 PM, bearophile wrote:

 Please always compile all your D code with the "-wi" switch
Going off topic, why not -w then? If I want to be warned about something, I don't want the program to be compiled anyway; but perhaps others want to look at warning messages. :)

Ali
Jul 22 2013
parent reply "bearophile" <bearophileHUGS lycos.com> writes:
Ali Çehreli:

 Going off topic, why not -w then? If I want to be warned about 
 something, I don't want the program to be compiled anyway but 
 perhaps others want to look at warning messages. :)
There are discussions like this: http://d.puremagic.com/issues/show_bug.cgi?id=10147 Bye, bearophile
Jul 22 2013
parent reply =?UTF-8?B?QWxpIMOHZWhyZWxp?= <acehreli yahoo.com> writes:
On 07/22/2013 03:53 PM, bearophile wrote:

 Ali Çehreli:

 Going off topic, why not -w then? If I want to be warned about
 something, I don't want the program to be compiled anyway but perhaps
 others want to look at warning messages. :)
There are discussions like this: http://d.puremagic.com/issues/show_bug.cgi?id=10147 Bye, bearophile
This is what I understand from that discussion:

1) There shouldn't be warnings at all; what we call warnings should be errors. I agree with that completely.

2) -w changes the compilation semantics because it may change the result of __traits(compiles). I don't understand that at all, because anything that affects the compilation environment can change the behavior of __traits(compiles) (e.g. string imports).

Getting back to -w vs. -wi, I don't understand why one would favor -wi over -w. This is what the documentation says:

  -w    enable warnings
  -wi   enable informational warnings (i.e. compilation still proceeds normally)

http://dlang.org/dmd-linux.html

First of all, -w does not only enable warnings, it actually makes them errors. Great! So everybody should use -w...

Second, following from that bug report, -wi is useless because it just gives informational warnings... and then compilation proceeds normally? It is kind of entertaining, I guess, but it is completely useless from a program-development point of view.

My conclusion: as long as -w exists, keep it on the compilation command line and ignore -wi.

I would like to hear others' points of view. I am curious how -wi is preferred over -w by others.

Ali
Jul 22 2013
next sibling parent "bearophile" <bearophileHUGS lycos.com> writes:
Ali Çehreli:

 1) There shouldn't be warnings at all; what we call warnings 
 should be errors.

 I agree with that completely.
Unfortunately the real world is not made of just black-or-white situations. There are various valid reasons to have some warnings. Sometimes the language changes, and deprecation warnings are not enough; see the warning for double.min in a recent thread.

Keeping the number of warnings low is good. Eventually turning some warnings into errors, or removing them once their purpose has ended, is good. But removing all warnings from D right now isn't a good idea.

Bye,
bearophile
Jul 23 2013
prev sibling parent reply "Dicebot" <public dicebot.lv> writes:
On Tuesday, 23 July 2013 at 03:14:17 UTC, Ali Çehreli wrote:
 1) There shouldn't be warnings at all; what we call warnings 
 should be errors.

 I agree with that completely.
Not really. At least my (and, as far as I understand, Jonathan's) point of view is that warnings should be either errors or the subject of static analysis tools. Simply making all warnings errors makes them unusable, as they may prohibit some pretty legitimate code patterns (though rarely legitimate ones). There should not be a warning that can change language semantics depending on a compiler flag.
Jul 23 2013
parent =?UTF-8?B?QWxpIMOHZWhyZWxp?= <acehreli yahoo.com> writes:
On 07/23/2013 08:05 AM, Dicebot wrote:

 On Tuesday, 23 July 2013 at 03:14:17 UTC, Ali Çehreli wrote:
 1) There shouldn't be warnings at all; what we call warnings should be
 errors.

 I agree with that completely.
Not really. At least my (and, as far as I understand, Jonathan's) point of view is that warnings should be either errors or the subject of static analysis tools. Simply making all warnings errors makes them unusable, as they may prohibit some pretty legitimate code patterns (though rarely legitimate ones).
From my C and C++ experience, I am under the impression that warnings can be eliminated by changing code while maintaining the same behavior.
 There should not be a warning that can change language semantics
 depending on compiler flag.
I see compiler flags as part of the environment, just like the architecture of the CPU, string imports, floating-point precision, and anything else that can be checked at compilation time...

Thank you, I finally understand: the view is that warnings should not make any difference in compilation; programmers can read the compiler output and act accordingly if they so wish.

Ali
Jul 23 2013