
D - 'Type' suggestions

reply "Andreas Nicoletti" <andreasn videotron.ca> writes:
After reading the 'D' language site I have the following suggestions.

Use the following naming scheme.

byte --> int8
short --> int16
int --> int32
long --> int64
cent --> int128

bit --> uint1
ubyte --> uint8
ushort --> uint16
uint --> uint32
ulong --> uint64
ucent --> uint128

float --> float32
double --> float64
extended --> float80

I think this is more meaningful:

  a. This improves portability. It avoids confusion over the size
difference between the C/C++ 'long' and the D 'long', or between ints on
16-bit platforms and ints on 32-bit platforms (see the short sketch after
this list).
  b. Future types fit in seamlessly and clearly (e.g. int128, uint128,
float128, or whatever64).
  c. It avoids monstrosities like 'unsigned long long int' and 'long
double'.
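
As a rough illustration of point (a), and assuming a D compiler of this
era: D already fixes the sizes of its basic types, so a sketch like the one
below prints the same values on every platform, whereas a C 'long' may be
32 or 64 bits depending on the compiler and target.

// Illustration only: D pins down the sizes of its basic types, so these
// values do not change from platform to platform.
extern (C) int printf(char* format, ...);

int main()
{
    printf("short : %d bytes\n", cast(int) short.sizeof);   // always 2
    printf("int   : %d bytes\n", cast(int) int.sizeof);     // always 4
    printf("long  : %d bytes\n", cast(int) long.sizeof);    // always 8
    printf("float : %d bytes\n", cast(int) float.sizeof);   // always 4
    printf("double: %d bytes\n", cast(int) double.sizeof);  // always 8
    return 0;
}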


Anyone porting software can use a typedef to define their own 'short' or
'long' types.
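
For what it's worth, a minimal sketch of how such a porting header could
read in the D of this thread, using alias so the new names are plain
synonyms (the typedef mentioned above would give distinct types instead);
the intNN/floatNN names are the poster's suggestions, not existing D
identifiers.

// Hypothetical porting declarations: the fixed-width names suggested
// above, defined as plain synonyms for D's existing basic types.
// (bit, cent and ucent are omitted here.)
alias byte     int8;
alias short    int16;
alias int      int32;
alias long     int64;

alias ubyte    uint8;
alias ushort   uint16;
alias uint     uint32;
alias ulong    uint64;

alias float    float32;
alias double   float64;
alias extended float80;

int main()
{
    int32   count = 0;      // always 32 bits, on any platform
    float64 total = 0.0;    // always a 64-bit IEEE double
    return count;
}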



Andreas
Jan 14 2003
parent reply "Andreas Nicoletti" <andreasn videotron.ca> writes:
Hi,

After reading the 'D' language site I have the following suggestions.

Use the following naming scheme.

byte --> int8
short --> int16
int --> int32
long --> int64
cent --> int128

bit --> uint1
ubyte --> uint8
ushort --> uint16
uint --> uint32
ulong --> uint64
ucent --> uint128

float --> float32
double --> float64
extended --> float80


I think this is more meaningful:

* This improves portability. It avoids confusion over the size difference
between the C/C++ 'long' and the D 'long', or between ints on 16-bit
platforms and ints on 32-bit platforms.

* Future types fit in seamlessly and clearly (e.g. int128, uint128, or
float128).

* It avoids monstrosities like 'unsigned long long int' and 'long double'.

Anyone porting software can use a typedef to define their own 'short' or
'long' types.
Jan 14 2003
parent reply Evan McClanahan <evan dontSPAMaltarinteractive.com> writes:
Okish ideas (other than uint1 for bit), but it's been discussed to death
and isn't likely to pass. Various reforms have been proposed, and all have
been shot down.

Evan



Andreas Nicoletti wrote:
 Hi,
 
 After reading the 'D' language site I have the following suggestions.
 
 Use the following naming scheme.
 
 byte --> int8
 short --> int16
 int --> int32
 long --> int64
 cent --> int128
 
 bit --> uint1
 ubyte --> uint8
 ushort --> uint16
 uint --> uint32
 ulong --> uint64
 ucent --> uint128
 
 float --> float32
 double --> float64
 extended --> float80
 
 
 I think this is more meaningful:
 
 * This improves portability. It avoids confusion over the size difference
 between the C/C++ 'long' and the D 'long', or between ints on 16-bit
 platforms and ints on 32-bit platforms.
 
 * Future types fit in seamlessly and clearly (e.g. int128, uint128, or
 float128).
 
 * It avoids monstrosities like 'unsigned long long int' and 'long double'.
 
 Anyone porting software can use a typedef to define their own 'short' or
 'long' types.
 
 
 
Jan 14 2003
parent Steve <Steve_member pathlink.com> writes:
The uintnn naming convention is fine, but it can already be achieved in C
and its descendants with a typedef. An evolutionary improvement would be to
specify scalar types as follows:

int(nn)
unsigned int(nn)
float(nn)           

where nn is the width in bits of the int, unsigned int, or floating-point value.

You could then do something like:

void MyFunc(bool high_precision)
{
    float(high_precision ? 64 : 32) value;

    // Perform some calculations ...
    // ... etc.
}

... which is quite nice.
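
For comparison, most of that effect is already reachable with templates,
assuming a D compiler with static if (newer than the ones in this thread):
a template can map a compile-time bit width onto one of the existing
floating-point types. The width has to be a compile-time constant, though,
so a runtime flag ends up selecting between instantiations rather than
parameterising a single declaration.

// Hypothetical compile-time analogue of float(nn): map a bit width,
// known at compile time, onto one of D's existing floating-point types.
template Float(int bits)
{
    static if (bits == 32)
        alias float Float;
    else static if (bits == 64)
        alias double Float;
    else static if (bits == 80)
        alias real Float;       // 'extended' in the early compilers
    else
        static assert(0);       // unsupported width
}

void MyFunc(bool high_precision)
{
    // The width must be constant at compile time, so the runtime flag
    // chooses between two separately typed branches.
    if (high_precision)
    {
        Float!(64) value = 0.0;
        // ... perform the calculations at 64-bit precision ...
    }
    else
    {
        Float!(32) value = 0.0;
        // ... perform the calculations at 32-bit precision ...
    }
}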



In article <b01n69$1ke9$1 digitaldaemon.com>, Evan McClanahan says...
Okish ideas (other than uint1 for bit), but it's been discussed to death
and isn't likely to pass. Various reforms have been proposed, and all have
been shot down.

Evan



Andreas Nicoletti wrote:
 Hi,
 
 After reading the 'D' language site I have the following suggestions.
 
 Use the following naming scheme.
 
 byte --> int8
 short --> int16
 int --> int32
 long --> int64
 cent --> int128
 
 bit --> uint1
 ubyte --> uint8
 ushort --> uint16
 uint --> uint32
 ulong --> uint64
 ucent --> uint128
 
 float --> float32
 double --> float64
 extended --> float80
 
 
 I think this is more meaningful:
 
 * This improves portability. It avoids confusion over the size difference
 between the C/C++ 'long' and the D 'long', or between ints on 16-bit
 platforms and ints on 32-bit platforms.
 
 * Future types fit in seamlessly and clearly (e.g. int128, uint128, or
 float128).
 
 * It avoids monstrosities like 'unsigned long long int' and 'long double'.
 
 Anyone porting software can use a typedef to define their own 'short' or
 'long' types.
 
 
 
Jan 15 2003