
D - How about REAL instead of FLOAT

Mark Evans <Mark_member pathlink.com> writes:
Mathematicians talk in terms of integers, real numbers, and complex numbers.
These are terms everyone understands, including C programmers.  I believe that a
new language should prefer the term "real" over the old C-ish "float," "double,"
and "extended" terms.

Most of us have been C programming so long that we seldom take a step back to
look at these name choices objectively.  They are pretty bad.  They reflect
geek-Unix lingo, not rational semantics.

Some are nouns (int), some are adjectives (short, float, double) and the
adjectives are not even consistent within the same category (float designates a
floating-point characteristic, double describes a specific precision, while
extended describes a non-specific precision which varies across platforms).

It seems to me that something like <noun><modifier> is the right way to name
these types.

I'd really love Real32, Real64, Real80, Real128.  Someone mentioned aliases, but
turn that argument on its head.  Why not use rational semantic choices for the
*native* types, and use aliases for ugly C-isms like float and double?

Think about it! <g>

Mark
Sep 18 2002
"Walter" <walter digitalmars.com> writes:
You're right, of course, but I want to retain a C flavor as far as
reasonable.

Sep 18 2002
Mark Evans <Mark_member pathlink.com> writes:
In article <ambh6t$g3t$1 digitaldaemon.com>, Walter says...
> You're right, of course, but I want to retain a C flavor as far as
> reasonable.

Could you do that by relegating the C names to aliases in Phobos? You might be
surprised how many C programmers would opt for the better names if given the
choice.

M.
Sep 18 2002
"Sandor Hojtsy" <hojtsy index.hu> writes:
"Mark Evans" <Mark_member pathlink.com> wrote in message
news:ambkd0$jkf$1 digitaldaemon.com...
> In article <ambh6t$g3t$1 digitaldaemon.com>, Walter says...
>> You're right, of course, but I want to retain a C flavor as far as
>> reasonable.
> Could you do that by relegating the C names to aliases in Phobos? You might be
> surprised how many C programmers would opt for the better names if given the
> choice.

I agree. I am teaching C to beginners. Explaining the difference between
float, double and long double to them is a difficult task, considering the
unfortunate names. Especially when they have already learned that "long int"
is spelled shorter as "long". I think it would be much easier if some logical
convention were followed, such as the one mentioned here.
Sep 19 2002
"anderson" <anderson firestar.com.au> writes:
I disagree. C's language base is already so large; why should we add another
dialect using aliases? It simply makes the language bigger and harder to read.
People will be more inclined to learn only a sub-set of the language. That
is why the addition of any keyword must be considered with care. I know
it's only one word, but I'm talking in a general sense: where do you stop? I
don't want to have to know 4 different dialects of D to understand it.

Aliasing is generally not the answer.

Sep 19 2002
"Sandor Hojtsy" <hojtsy index.hu> writes:
"anderson" <anderson firestar.com.au> wrote in message
news:amc2o2$15hd$1 digitaldaemon.com...
> I disagree. C's language base is already so large, why should we add another
> dialect using aliases. Simply makes the language bigger and harder to read.
> People will be more inclined to only learn a sub-set of the language. That
> is why the addition of any keyword must be considered with care. I know
> it's only one word, but I'm talking in a general sense, where do you stop. I
> don't want to have to know 4 different dialects of D to understand it.
>
> Aliasing is generally not the answer.
Yes. We don't need the old type names! :-) (short, long, float and double)

Actually I don't consider backward-compatibility so important here. D should
make use of the advantage of "starting with an empty page". But nothing
can/will stop a third party from creating a module with the other set of
typenames as aliases, and distributing it. So whatever we choose, both
dialects will still be present. The choice here is: which dialect should be
the preferred one?
Sep 19 2002
"Sean L. Palmer" <seanpalmer earthlink.net> writes:
I totally agree.  C compatibility is a neat bonus but shouldn't be the
single largest driving force behind language design.

Sean

Sep 19 2002
"Walter" <walter digitalmars.com> writes:
"Mark Evans" <Mark_member pathlink.com> wrote in message
news:ambkd0$jkf$1 digitaldaemon.com...
> In article <ambh6t$g3t$1 digitaldaemon.com>, Walter says...
>> You're right, of course, but I want to retain a C flavor as far as
>> reasonable.
> Could you do that by relegating the C names to aliases in Phobos? You might be
> surprised how many C programmers would opt for the better names if given the
> choice.

They might be right, but in order to get C programmers to spend some initial
time with D, it needs to look like C as far as reasonable. If it looks too
different, they won't give D a chance.
Sep 19 2002
Jeff Grills <jgrills soe.sony.com> writes:
If you're unwilling to rename float to real because you want to retain
a "C flavor", then why rename bool to bit?  You should be consistent,
one way or the other.

In article <ambh6t$g3t$1 digitaldaemon.com>, Walter says...
> You're right, of course, but I want to retain a C flavor as far as
> reasonable.

jeff grills
jgrills soe.sony.com
Sep 19 2002
"anderson" <anderson firestar.com.au> writes:
But a C bool is not a D bit, whereas a D float is a C float. (You get that?)
Bits need to have a different name so C programmers don't mix them up with
booleans. Whereas with floats, if we called it real then we'd confuse a lot
of programmers.

New programmers using D, with a background in C, instantly know the size of
float (although it may change in different versions) and what to do with it.
If they see the word Real, well, that's another thing they'd have to look up
(only to find real = float). However, we'd want them to look up bit,
because it has some subtle differences from booleans.


Sep 19 2002
Mark Evans <Mark_member pathlink.com> writes:
Quick responses to all:

- Aliases in Phobos could keep the old names available for C die-hards

- The language could instead support built-in aliases
(duplicate keywords) -- if so, it should have a compiler switch to turn
off the old C names as valid keywords

- Real numbers are the true mathematical intent, even though, in
a strict technical sense, there is *no* digital representation
covering the entire set of reals (the whole purpose of
floating-point notation is to approximate real numbers, so
calling these entities "reals" still makes semantic sense)

- "Integer" is a noun, "integral" the corresponding adjective

- There is such a thing as fixed-point, call it RealFixed32,
RealFixed64, if and when it becomes part of D (unlikely!)

- There is also such a thing as rational, many libraries exist
to implement rational arithmetic, call it Rational32 or
Rational64 if and when it becomes part of D (likely?);
a rational is an integral numerator over an integral denominator

- Clear-headed naming of numeric types makes later numeric library
objects easy to name, as indicated in previous two bullets;
just follow the standard convention

- I disagree categorically with the idea that all C programmers
"know the size of things"; I've had many fruitless sessions talking
about the size of ints and to a lesser extent floats; one very
experienced C/Windows programmer was shocked, shocked when he
learned I was correct in saying that a short is 16 bits (he thought
32); HWND under Win16 was 16 bits but became 32 under Win32

- No C programmer has to look up anything if float/double/extended
are properly aliased in D, they will be happy campers

- Thanks to everyone for their opinions

Mark
Sep 19 2002
"anderson" <anderson firestar.com.au> writes:
> - No C programmer has to look up anything if float/double/extended
> are properly aliased in D, they will be happy campers

Unless they're reading someone else's code.
Sep 19 2002
Chris <cl nopressedmeat.tinfoilhat.ca> writes:
anderson wrote:
>> - No C programmer has to look up anything if float/double/extended
>> are properly aliased in D, they will be happy campers
> Unless they're reading someone else's code.

But these people are programmers. How hard is it going to be to remember
that real is float? There's a pretty good chance that you already know,
provided that you took math in high school, or are using floating point in
the first place.

Chris
Sep 26 2002
"anderson" <anderson firestar.com.au> writes:
I'm looking at the general picture. Eventually every term in D could be
changed to an alias. I know changing one thing won't make much of a
difference, but then where do you stop? Pretty soon you've changed every term
in the language, or doubled the language size. I say there's nothing wrong
with the term float.

Sep 27 2002
Mark Evans <Mark_member pathlink.com> writes:
The logical fallacy in this argument is that because we give ourselves
permission to change one thing, we give ourselves permission to change
everything, and therefore we should change nothing.  On that argument we should
all revert to C.

Int16, Int32, Real32, Real64 terminology actually clarifies types that are
intentionally ambiguous in the C standard (ints, extendeds).  These terms also
clear up semantic inconsistencies (see earlier posts about nouns and
adjectives).  This combination is a great help to students, and utterly trivial
for experienced C folks.

I have a hard time picturing any C programmer scratching his head over Real32 or
even complaining about it.  FORTRAN folks would be very happy with it.
Cross-platform programmers would be ecstatic.

Mark


Sep 27 2002
"Sean L. Palmer" <seanpalmer earthlink.net> writes:
Keeping potential developers from having to look up info on a basic language
feature the first time they see it is not a strong argument against using
the name real.

I look up stuff everyday.  You can't remember everything.

Anyway, the guy might just try "float" and presto: there's a builtin alias
for real called float, and by george, it works the same as you expect float
to behave, and you're content. You don't have to look anything up, and don't
even have to know anything about the existence of the alias float to the
basic type real, since it behaves exactly the same and in fact generates the
same code.

So that's a good argument for having the alias from real to float if it were
called real.

You may as well use the term real since the idea is to make float as real as
possible to machine limits.  One of these days it'll be so close you can't
comprehend the difference mentally.

What if you had a machine with 4096-bit floating point registers? 65536-bit?
At what point do you give up trying to approximate the set of real numbers?

Maybe call the next thing after extended realxxx, where xxx is the number of
bits. Or just real, which is the highest-precision register on the machine.
real128 would be a size optimization if real were 256 bits. You could have
any power of 2 there, and the compiler could just add more and more mantissa
and exponent bits. The protocols for the arithmetic are pretty well
established.

We could eventually see machines that specialize in computing polynomials
(a built-in polynomial T^n series parallel unit) and have builtin derivative
etc. functions.  Don't they already have stuff like this?  I've never seen
one.

If you had fast hardware division and good rounding, would you even need
integer registers?  Think about it.  Just use the mantissa of the float with
exponent set to the proper range for the int, and set a hardware mask.
Presto, it looks like an int, it acts like an int.  You've all seen the fast
software float to int code trick.

Sean

Sep 19 2002
"Sandor Hojtsy" <hojtsy index.hu> writes:
D already has fundamental differences (from C) that make it necessary for a
decent C coder to read the whole doc two times (take my example). I think
changing the built-in typenames would have less impact than some other
changes already done. Usually it is not the keyword set which makes a
language hard to move to. More important is the programming logic, the
"correct way of thinking" in that language.

BTW: Note that "bit has subtle differences from bool". We still need the
C++-style bool! And yeah, not the keyword (the chosen keyword is secondary
by importance) but anything with the same semantics.

Sandor
Sep 23 2002
Alix Pexton <Alix seven-point-star.co.uk> writes:
I don't think that "real" should be used as an alternate type name to
"float", because "float"s are not the same as "real"s. "Float" represents
the subset of "real" numbers that can be represented exactly using a base-2
mantissa and exponent (1/3 is a real number, but it isn't a float).

I do agree that D's type names need not be inherited from C/C++ or any
other language, and that a consistent and extensible type system would
be a big improvement, but I have no idea what the names should be, or what
set should be the default...

Alix Pexton.
Webmaster "The D journal"...

Sep 19 2002
"Roberto Mariottini" <rmariottini lycosmail.com> writes:
"Mark Evans" <Mark_member pathlink.com> wrote in message
news:amb0n4$30p7$1 digitaldaemon.com...
> Mathematicians talk in terms of integers, real numbers, and complex numbers.
> These are terms everyone understands, including C programmers.  I believe that a
> new language should prefer the term "real" over the old C-ish "float," "double,"
> and "extended" terms.
>
> Most of us have been C programming so long that we seldom take a step back to
> look at these name choices objectively.  They are pretty bad.  They reflect
> geek-Unix lingo, not rational semantics.
>
> Some are nouns (int), some are adjectives (short, float, double) and the
> adjectives are not even consistent within the same category (float designates a
> floating-point characteristic, double describes a specific precision, while
> extended describes a non-specific precision which varies across platforms).

Int(eger) too is an adjective. All of them assume "number" as the noun they
refer to. Short, long and double are also associated with float(ing point)
or int(eger). So: long double -> long double floating point number.

> It seems to me that something like <noun><modifier> is the right way to name
> these types.

I agree.

> I'd really love Real32, Real64, Real80, Real128.  Someone mentioned aliases, but
> turn that argument on its head.  Why not use rational semantic choices for the
> *native* types, and use aliases for ugly C-isms like float and double?

I prefer 'float' instead of 'real'. Real "real" numbers are not always
representable in a floating point representation. Another term could be
"rational", but rational numbers can't be fully represented with such
notation either. And note that there are infinitely many integer numbers
that can't be represented, given one integer format of choice.

An honest language should use: int(XX), fixed(XX,YY), float(XX,YY), but I
suspect it's too complex for newbies to learn. Having this possibility
would be good, indeed.

Ciao
Sep 19 2002
Jonathan Andrew <jon ece.arizona.edu> writes:
I think unfortunately it is too hard to simply break tradition here;
most people (myself included) are just too used to using int, float,
long, etc. to get used to a new system. The problem with aliases is
that when you want to specify a mathematical number,
you are still bound to the constraints of the base type. So even if
real is just an alias for float, it wouldn't truly be a real number,
because for example it couldn't be over a certain number of digits,
couldn't represent true irrational numbers, etc.

I do agree that it would be handy to have a language feature supporting
all types. I think the best way to do this is through some kind of
standard number library in phobos, with different classes representing
each number type. So for example you could have a base "number" class,
and then you could have reals inherit from that, and from there you
could have rational, irrational, floating point, integers, etc.

Size wouldn't be an issue, relatively speaking, because a class could
allocate more memory as needed for huge numbers, and could use more
specific algorithms for calculations.

I could even foresee a symbolic number class that stores the number as
a string and can parse and calculate with it symbolically (like a TI-89).

For true "math" stuff, having number classes would be pretty convenient,
I think, and you are right when you say that the names are pretty bad;
unfortunately, I think D will have to live with them as the successor
to C/C++.

IMHO always,
     Jon

Sep 19 2002
Mark Evans <Mark_member pathlink.com> writes:
It is easy to add new names and not break C tradition; that was the whole
point about aliases.

FORTRAN has used REAL for a long, long time. I am not persuaded by arguments
that machine precision limitations somehow render this term inappropriate. In
fact you make a strong case for the naming concept of Real32, Real64, Real80,
etc., which explicitly states the bit count in the approximation.

Mark

In article <3D8A78DA.4040003 ece.arizona.edu>, Jonathan Andrew says...
> I think unfortunately it is too hard to simply break tradition here,
> most people (myself included) are just too used to using int, float,
Sep 20 2002
"Sean L. Palmer" <seanpalmer earthlink.net> writes:
Floating point, fixed point, rational... they all are approximations to real
arithmetic.

I would like to think of float and fixed and rational as being subclasses of
real.  I suppose they all fall into the larger concept of numeric or scalar.
The idea there is that inheritance can be in interface only; that you don't
necessarily have to inherit a class's data if you just want to inherit the
interface.  Or you can inherit the data and hide the interface (private
inheritance).  Or inherit both (public inheritance).  I suppose I want
interface inheritance.  In the basic type case this doesn't involve any
virtual functions.

I'd like to have lots of builtin types.  They can even vary per platform.
Appropriate typedefs should be supplied for the common ones if not natively
present.

I'd even go so far as to say that I think the precision of a type should be
specifiable. However, it should always be OK for the compiler to use a
larger type instead if it needs to, or to use the largest type it can
provide, even if it's not quite big enough (probably with a warning in this
case, or an option to emulate in software).

Sean

Sep 21 2002