
D - int<bits> and arbitrary size ints

reply Russ Lewis <russ deming-os.org> writes:
First of all, great language!  I've been playing around with a simple
language spec of my own...that I also called D...with many of the same
features.  Guess I'll just join on with you...

My idea here is similar to that in the "Types and sizes" thread.  I
think that it is *very important* that coders can specify a specific bit
size for their integers.  I would use a slightly different syntax than
the previous post:

unsigned int8 myByte;
unsigned int32 my4Bytes;
int128 my16Bytes;

The bit size specified *must* be a multiple of 8; you could also
require, if it made the compiler easier, that they be powers of 2.  If
the bit size was smaller than the architecture naturally supported, then
the compiler would have to adjust for that; if it was larger, then the
compiler would be required to implement emulation code to handle it.

If the compiler supports large-integer emulation, then there is no
reason not to include support for integers of arbitrary size:

intX myUnlimitedInt;

Thoughts?
Aug 17 2001
next sibling parent reply "Sheldon Simms" <sheldon semanticedge.com> writes:
In article <3B7D3416.BA743773 deming-os.org>, "Russ Lewis"
<russ deming-os.org> wrote:

 First of all, great language!  I've been playing around with a simple
 language spec of my own...that I also called D...with many of the same
 features.  Guess I'll just join on with you...
 
 My idea here is similar to that in the "Types and sizes" thread.  I
 think that it is *very important* that coders can specify a specific bit
 size for their integers.
Why?

--
Sheldon Simms / sheldon semanticedge.com
Aug 17 2001
parent reply Russ Lewis <russ deming-os.org> writes:
Sheldon Simms wrote:

 In article <3B7D3416.BA743773 deming-os.org>, "Russ Lewis"
 <russ deming-os.org> wrote:

 First of all, great language!  I've been playing around with a simple
 language spec of my own...that I also called D...with many of the same
 features.  Guess I'll just join on with you...

 My idea here is similar to that in the "Types and sizes" thread.  I
 think that it is *very important* that coders can specify a specific bit
 size for their integers.
Why?
Different architectures (and different compilers) use different standards for int, long, and short sizes. If you code something using an int where an int is 32 bits then port it to something where an int is 16 bits, you have automatic trouble. It's better to be able to specify that this is an "int32" and let the compiler deal with it. Many APIs implement just this to increase source code portability.
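A minimal sketch of the idea, with the proposed names defined as aliases purely for illustration (int32 and int16 are the poster's suggested names, not built-in types; D's int and short are already fixed at 32 and 16 bits):

    alias int   int32;   // always 32 bits, on every target
    alias short int16;   // always 16 bits

    int32 fileLength;    // the width is spelled out, so a port can't silently change it
    int16 recordFlags;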
Aug 17 2001
next sibling parent reply scott tara.mvdomain (Scott Robinson) writes:
Which still comes back to the problem of portability. A very simple, but
popular, example is the UNIX time scenario.

UNIX time is the number of seconds past the UNIX epoch. It is stored in a
32-bit number.

If we were to go with your definitions, there would be hard definitions of
"int32" for time storing variables. The trick, of course, is that if we ever
compile on an alternative bit size environment we're stuck with older
definitions. D doesn't have a preprocessor, probably because, due to its
strongly typed and parsable form, you can use perl or any other text
processing tool for better effect. However, because of a lack of a
preprocessor, our bit management is all going to have to be done in-source.

Sorry, I don't buy that. Controlling bit sizes to such a level seems much
better tuned for embedded/micro-managing code - two things which D,
according to the spec, is not designed for.

I suppose an argument can be made for communication and external protocol
support, but wasn't that addressed in the spec in reference to a structure's
memory representation? (or lack of such)

Scott.

In article <3B7D35BB.25B70FDB deming-os.org>, Russ Lewis wrote:
Sheldon Simms wrote:

 In article <3B7D3416.BA743773 deming-os.org>, "Russ Lewis"
 <russ deming-os.org> wrote:

 First of all, great language!  I've been playing around with a simple
 language spec of my own...that I also called D...with many of the same
 features.  Guess I'll just join on with you...

 My idea here is similar to that in the "Types and sizes" thread.  I
 think that it is *very important* that coders can specify a specific bit
 size for their integers.
Why?
Different architectures (and different compilers) use different standards for int, long, and short sizes. If you code something using an int where an int is 32 bits then port it to something where an int is 16 bits, you have automatic trouble. It's better to be able to specify that this is an "int32" and let the compiler deal with it. Many APIs implement just this to increase source code portability.
Aug 17 2001
next sibling parent "Ben Cohen" <bc skygate.co.uk> writes:
In article <slrn9nqfb6.igf.scott tara.mvdomain>, "Scott Robinson"
<scott tara.mvdomain> wrote:

 If we were to go with your definitions, there would be hard definitions
 of "int32" for time storing variables. 
What's wrong with:

    typedef int32 time_t;
Aug 17 2001
prev sibling parent reply Russ Lewis <russ deming-os.org> writes:
Scott Robinson wrote:

 Which still comes back to the problem of portability. A very simple, but
 popular, example is the UNIX time scenario.

 UNIX time is the number of seconds past the UNIX epoch. It is stored in a
 32-bit number.

 If we were to go with your definitions, there would be hard definitions of
 "int32" for time storing variables. The trick, of course, is that if we ever
 compile on an alternative bit size environment we're stuck with older
 definitions. D doesn't have a preprocessor, probably because, due to its
 strongly typed and parsable form, you can use perl or any other text
 processing tool for better effect. However, because of a lack of a
 preprocessor, our bit management is all going to have to be done in-source.

 Sorry, I don't buy that. Controlling bit sizes to such a level seems much
 better tuned for embedded/micro-managing code - two things which D,
 according to the spec, is not designed for.

 I suppose an argument can be made for communication and external protocol
 support, but wasn't that addressed in the spec in reference to a structure's
 memory representation? (or lack of such)
Actually, my first idea was to say that the bit size was only a minimum....that a compiler could substitute an int64 for an int32 if it wanted.  Not sure which is the best balance.

IMHO, things like those time variables should be defined with typedef's anyway. Then you can redefine the type in the header files you have installed on the new architecture.

One of the great things about the strongly typed typedef's used in D is that when you define that times are to be given as "Time" rather than "int32", then the user of the API is *strongly* encouraged to use your (forward compatible) typedef rather than the underlying type.
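A minimal sketch of that pattern, assuming the strong-typedef behaviour described in the D spec of the time (Time is just the example name from this post):

    typedef int Time;       // a distinct type, not a weak alias as in C

    Time timestamp;         // API users are steered toward Time
    int  raw = timestamp;   // converting to the underlying int still works
    // timestamp = raw;     // going the other way would need an explicit cast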
Aug 17 2001
parent reply "Sean L. Palmer" <spalmer iname.com> writes:
If we used ranges the whole issue could be moot... just define your own
scalar type with the range you require, and the compiler has to find a
CPU-supported type that can contain it, or generate an error.  But problem
is that it may pick one that's too large.

On the other hand, if you *know* a platform supports a type of a given size,
it'd be nice to just be able to ask for it explicitly, so you don't have to
worry about the compiler maybe using something larger (breaks struct
compatibility across platforms).  If you ask for int32 and one doesn't
exist, your program is not portable to that platform.  So only ask for int32
if you really need exactly 32 bits.  Compiler would be free to emulate so
long as it could do so precisely (using masking etc).
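A rough sketch of the masking idea, using a hypothetical helper rather than anything the compiler actually provides:

    // keep an unsigned value within an exact width of n bits (1 <= n <= 31 here)
    uint wrapToBits(uint value, uint n)
    {
        return value & ((1u << n) - 1);
    }

    // e.g. emulating a 24-bit counter on a 32-bit machine:
    // counter = wrapToBits(counter + 1, 24);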

Sean

"Russ Lewis" <russ deming-os.org> wrote in message
news:3B7D3F0F.C3945078 deming-os.org...
 Scott Robinson wrote:

 Which still comes back to the problem of portability. A very simple, but
 popular, example is the UNIX time scenario.

 UNIX time is the number of seconds past the UNIX epoch. It is stored in a
 32-bit number.

 If we were to go with your definitions, there would be hard definitions of
 "int32" for time storing variables. The trick, of course, is that if we ever
 compile on an alternative bit size environment we're stuck with older
 definitions. D doesn't have a preprocessor, probably because, due to its
 strongly typed and parsable form, you can use perl or any other text
 processing tool for better effect. However, because of a lack of a
 preprocessor, our bit management is all going to have to be done in-source.

 Sorry, I don't buy that. Controlling bit sizes to such a level seems much
 better tuned for embedded/micro-managing code - two things which D,
 according to the spec, is not designed for.

 I suppose an argument can be made for communication and external protocol
 support, but wasn't that addressed in the spec in reference to a structure's
 memory representation? (or lack of such)

 Actually, my first idea was to say that the bit size was only a
 minimum....that a compiler could substitute an int64 for an int32 if it
 wanted.  Not sure which is the best balance.

 IMHO, things like those time variables should be defined with typedef's
 anyway. Then you can redefine the type in the header files you have
 installed on the new architecture.

 One of the great things about the strongly typed typedef's used in D is
 that when you define that times are to be given as "Time" rather than
 "int32", then the user of the API is *strongly* encouraged to use your
 (forward compatible) typedef rather than the underlying type.
Oct 25 2001
parent reply "Walter" <walter digitalmars.com> writes:
In my experience, a maximum size is rarely wished for, just a minimum size.

"Sean L. Palmer" <spalmer iname.com> wrote in message
news:9r9ujl$2ac3$1 digitaldaemon.com...
 If we used ranges the whole issue could be moot... just define your own
 scalar type with the range you require, and the compiler has to find a
 CPU-supported type that can contain it, or generate an error.  But problem
 is that it may pick one that's too large.

 On the other hand, if you *know* a platform supports a type of a given size,
 it'd be nice to just be able to ask for it explicitly, so you don't have to
 worry about the compiler maybe using something larger (breaks struct
 compatibility across platforms).  If you ask for int32 and one doesn't
 exist, your program is not portable to that platform.  So only ask for int32
 if you really need exactly 32 bits.  Compiler would be free to emulate so
 long as it could do so precisely (using masking etc).

 Sean
Dec 19 2001
parent reply "Sean L. Palmer" <spalmer iname.com> writes:
Oftentimes an exact size is, however, wished for.

What I'm after is a standardized way of using a type of a known size, as
opposed to a type of some size at least as large.

Sure, there aren't many machines these days where a byte is not 8 bits, or a
pointer isn't either 32 or 64 bits.  You've defined the sizes of the first 4
types, byte, short, int, and long.  What about long long?  How long is a
long long?

If you don't standardize the naming of exact-sized types, then compilers
will provide them anyway but with incompatible conventions, such as MS's
__int64 vs. GNU's long long.  I don't want to see that happen to D.

If there are 2 kinds of machines out there that both have 10 bit bytes, I
want to be able to utilize those types in my D program designed to run on
only those kinds of machines, without conditional compilation if possible;
they both should provide int10 type aliases (and probably the machine's char
and byte types would also be 10 bits)

Sean

"Walter" <walter digitalmars.com> wrote in message
news:9vqgve$1v9t$1 digitaldaemon.com...
 In my experience, a maximum size is rarely wished for, just a minimum size.
 "Sean L. Palmer" <spalmer iname.com> wrote in message
 news:9r9ujl$2ac3$1 digitaldaemon.com...
 If we used ranges the whole issue could be moot... just define your own
 scalar type with the range you require, and the compiler has to find a
 CPU-supported type that can contain it, or generate an error.  But problem
 is that it may pick one that's too large.

 On the other hand, if you *know* a platform supports a type of a given size,
 it'd be nice to just be able to ask for it explicitly, so you don't have to
 worry about the compiler maybe using something larger (breaks struct
 compatibility across platforms).  If you ask for int32 and one doesn't
 exist, your program is not portable to that platform.  So only ask for int32
 if you really need exactly 32 bits.  Compiler would be free to emulate so
 long as it could do so precisely (using masking etc).

 Sean
Dec 20 2001
parent reply "Pavel Minayev" <evilone omen.ru> writes:
"Sean L. Palmer" <spalmer iname.com> wrote in message
news:9vse28$l2k$1 digitaldaemon.com...
 Oftentimes an exact size is, however, wished for.

 What I'm after is a standardized way of using a type of a known size, as
 opposed to a type of some size at least as large.

 Sure, there aren't many machines these days where a byte is not 8 bits, or a
 pointer isn't either 32 or 64 bits.  You've defined the sizes of the first 4
 types, byte, short, int, and long.  What about long long?  How long is a
 long long?
There is no long long in D, AFAIK. And long is an integer of the largest size this architecture can handle. It's not fixed.
 If you don't standardize the naming of exact-sized types, then compilers
 will provide them anyway but with incompatible conventions, such as MS's
 __int64 vs. GNU's long long.  I don't want to see that happen to D.
Personally, I'd also like to see 64-bit integers strictly defined in D. Especially since they are used in WinAPI headers. Maybe just call it "int64" or steal it from Pascal - "comp".
Dec 20 2001
parent reply "Walter" <walter digitalmars.com> writes:
"Pavel Minayev" <evilone omen.ru> wrote in message
news:9vsnnp$tm9$1 digitaldaemon.com...
 "Sean L. Palmer" <spalmer iname.com> wrote in message
 news:9vse28$l2k$1 digitaldaemon.com...
 Oftentimes an exact size is, however, wished for.

 What I'm after is a standardized way of using a type of a known size, as
 opposed to a type of some size at least as large.

 Sure, there aren't many machines these days where a byte is not 8 bits, or a
 pointer isn't either 32 or 64 bits.  You've defined the sizes of the first 4
 types, byte, short, int, and long.  What about long long?  How long is a
 long long?
There is no long long in D, AFAIK. And long is an integer of the largest size this architecture can handle. It's not fixed.
 If you don't standardize the naming of exact-sized types, then compilers
 will provide them anyway but with incompatible conventions, such as MS's
 __int64 vs. GNU's long long.  I don't want to see that happen to D.
Personally, I'd also like to see 64-bit integers strictly defined in D. Especially since they are used in WinAPI headers. Maybe just call it "int64" or steal it from Pascal - "comp".
If D is ported to a platform where longer than 64 bit ints make sense, I see no problem with defining a new basic type for it. I don't like the C usage of multiple keywords for a type. It'd probably be called "longlong". For those who want exact type sizes, perhaps a standard module can be defined with aliases like int8, int16, etc.
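A sketch of what such a module could look like (the module and alias names are illustrative, not an existing library):

    module stdint;          // hypothetical module name

    alias byte   int8;
    alias short  int16;
    alias int    int32;
    alias long   int64;

    alias ubyte  uint8;
    alias ushort uint16;
    alias uint   uint32;
    alias ulong  uint64;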
Dec 23 2001
parent reply "Pavel Minayev" <evilone omen.ru> writes:
"Walter" <walter digitalmars.com> wrote in message
news:a0432a$1es4$1 digitaldaemon.com...

 If D is ported to a platform where longer than 64 bit ints make sense, I see
 no problem with defining a new basic type for it. I don't like the C usage
 of multiple keywords for a type. It'd probably be called "longlong".
Hmmm... longint?
 For those who want exact type sizes, perhaps a standard module can be
 defined with aliases like int8, int16, etc.
Great idea!
Dec 23 2001
parent reply "Sean L. Palmer" <spalmer iname.com> writes:
I vote for "huge"  ;)

byte
short
int
long
huge
gargantuan
reallyreallyreallybigint

Sean


"Pavel Minayev" <evilone omen.ru> wrote in message
news:a04779$1i7j$1 digitaldaemon.com...
 "Walter" <walter digitalmars.com> wrote in message
 news:a0432a$1es4$1 digitaldaemon.com...

 If D is ported to a platform where longer than 64 bit ints make sense, I see
 no problem with defining a new basic type for it. I don't like the C usage
 of multiple keywords for a type. It'd probably be called "longlong".
Hmmm... longint?
 For those who want exact type sizes, perhaps a standard module can be
 defined with aliases like int8, int16, etc.
Great idea!
Dec 28 2001
parent "Walter" <walter digitalmars.com> writes:
"Sean L. Palmer" <spalmer iname.com> wrote in message
news:a0jeoa$11dv$1 digitaldaemon.com...
 I vote for "huge"  ;)

 byte
 short
 int
 long
 huge
 gargantuan
 reallyreallyreallybigint

 Sean
Ack! (You forgot "titanic", by the way!)
Dec 29 2001
prev sibling parent reply "Sheldon Simms" <sheldon semanticedge.com> writes:
In article <3B7D35BB.25B70FDB deming-os.org>, "Russ Lewis"
<russ deming-os.org> wrote:

 Sheldon Simms wrote:
 
 In article <3B7D3416.BA743773 deming-os.org>, "Russ Lewis"
 <russ deming-os.org> wrote:

 First of all, great language!  I've been playing around with a simple
 language spec of my own...that I also called D...with many of the
 same features.  Guess I'll just join on with you...

 My idea here is similar to that in the "Types and sizes" thread.  I
 think that it is *very important* that coders can specify a specific
 bit size for their integers.
Why?
Different architectures (and different compilers) use different standards for int, long, and short sizes. If you code something using an int where an int is 32 bits then port it to something where an int is 16 bits, you have automatic trouble.
If you're talking about C, then you're the one to blame for assuming that int always has at least 32 bits... But the reason I asked is because it seemed to me that you were offering a solution without specifying the problem. Specifying specific sizes for integers might be a great solution for something, but I can't judge whether or not it is without knowing what the problem is in the first place.

As for the problem that (I think) you're talking about, perhaps it would be better to talk about ranges instead of the number of bits. Given that the language already offers integral types with the fixed sizes of 8, 16, 32, and 64 bits (you have read the D doc, haven't you?), I don't see the point of adding more programmer-defined sizes. But being able to specify particular ranges like in pascal might be useful:

    int{-10..10} a;   // -10 to 10 inclusive allowed
    int{0..} b;       // unsigned "infinite precision"
    int{..} c;        // signed "infinite precision"

--
Sheldon Simms / sheldon semanticedge.com
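For comparison, a bounded range can also be approximated in library code rather than as new syntax; a rough sketch in later D template syntax, purely illustrative:

    // a ranged integer whose bounds are checked by the struct invariant
    // (only when contracts are compiled in)
    struct Ranged(int min, int max)
    {
        int value;

        invariant()
        {
            assert(value >= min && value <= max);
        }
    }

    // usage: Ranged!(-10, 10) a;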
Aug 17 2001
parent reply Russ Lewis <russ deming-os.org> writes:
Sheldon Simms wrote:

 As for the problem that (I think) you're talking about, perhaps
 it would be better to talk about ranges instead of the number of
 bits. Given that the language already offers integral types with
 the fixed sizes of 8,16,32, and 64 bits (you have read the D doc,
 haven't you?), I don't see the point of adding more programmer-
 defined sizes. But being able to specify particular ranges like
 in pascal might be useful:
Ack, sorry. So D does fix the sizes, unlike C. Well done, Walter, my mistake.

However, I do think that int32 makes what it represents more obvious than "int" does, particularly for us old C programmers. And I still think that int1024 and intX are good things that the compiler could emulate.
Aug 17 2001
parent reply "Walter" <walter digitalmars.com> writes:
Remember, in D you can create strong typedefs (rather than weak type aliases
in C). So, if you really want an int32 type, you can typedef it, overload
based on it, get strong type checking on it, etc.

Russ Lewis wrote in message <3B7D3FC5.2F9E9AD5 deming-os.org>...
Sheldon Simms wrote:

 As for the problem that (I think) you're talking about, perhaps
 it would be better to talk about ranges instead of the number of
 bits. Given that the language already offers integral types with
 the fixed sizes of 8,16,32, and 64 bits (you have read the D doc,
 haven't you?), I don't see the point of adding more programmer-
 defined sizes. But being able to specify particular ranges like
 in pascal might be useful:
Ack, sorry. So D does fix the sizes, unlike C. Well done, Walter, my mistake.

However, I do think that int32 makes what it represents more obvious than "int" does, particularly for us old C programmers. And I still think that int1024 and intX are good things that the compiler could emulate.
Sep 18 2001
parent reply "Ben Cohen" <bc skygate.co.uk> writes:
In article <9o97o5$9bt$1 digitaldaemon.com>, "Walter"
<walter digitalmars.com> wrote:

 Remember, in D you can create strong typedefs (rather than weak type
 aliases in C). So, if you really want an int32 type, you can typedef it,
 overload based on it, get strong type checking on it, etc.
The disadvantage of this is that the compiler can't make use of the knowledge that the type is always (say) a 32-bit unsigned integer. If this were a base type, then the compiler could give specific warnings, such as when you try to compare a variable of this type with 2^32; if you use a typedef, I don't think it can catch this problem.
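The kind of diagnostic being asked for, sketched as a hypothetical example (no particular compiler is claimed to emit it):

    void example()
    {
        uint n;                          // 32-bit unsigned: maximum value 4_294_967_295
        bool b = (n == 4_294_967_296);   // 2^32 can never fit in n, so this is always
                                         // false -- the comparison a width-aware
                                         // compiler could warn about
    }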
Sep 19 2001
parent "Walter" <walter digitalmars.com> writes:
Ben Cohen wrote in message <9o9ort$l9t$1 digitaldaemon.com>...
In article <9o97o5$9bt$1 digitaldaemon.com>, "Walter"
<walter digitalmars.com> wrote:

 Remember, in D you can create strong typedefs (rather than weak type
 aliases in C). So, if you really want an int32 type, you can typedef it,
 overload based on it, get strong type checking on it, etc.
The disadvantage of this is that the compiler can't make use of the knowledge that the type is always (say) a 32-bit unsigned integer. If this were a base type, then the compiler could give specific warnings, such as when you try to compare a variable of this type with 2^32; if you use a typedef, I don't think it can catch this problem.
You can do things like:

    assert(my32int.size == 4);

to signal if the typedef went wrong.
Sep 19 2001
prev sibling parent reply Jeff Frohwein <jeff devrsSPAMLESS.com> writes:
 I guess the biggest problem I have about defining an 'int'
as 32 bits is why 32 bits? If there is no specific reason for it then
if this language was designed 10 years ago 'int' might be 16 bits or
if it was designed 3 years from now 'int' might be 64 bits. Looking to
the future, will it be a regret to look back and see 'int' defined as
32 bits?

 From the D Overview...

       "D is a general purpose systems and applications programming
        language. It is a higher level language than C++, but retains
        the ability to write high performance code and interface
        directly with the operating system APIs and with hardware."

 With this in mind, I agree that we do need some fixed sizes for
hardware interfacing, even if for no other reason. This would also
allow D to be used for embedded systems. After all, if D does become
the next popular language, people will want to port an embedded version
as well. So let's not put any more limits on them than are easily
justifiable. As well, if we wish to interface to hardware registers
(as stated in the overview above) we need types that are fixed in size.

 Here is one possible solution:

 Hardware (non-abstract) Types

        u1              unsigned 1 bit
        u8              unsigned 8 bits
        s8              signed 8 bits
        u16             unsigned 16 bits
        s16             signed 16 bits
        u32             unsigned 32 bits
        s32             signed 32 bits
        s64             ...etc...
        ...
        f32             32 bit floating point
        f64             64 bit floating point

 Software (semi-abstract) Types

        bit             single bit or larger
        byte            signed 8 bits or larger
        ubyte           unsigned 8 bits or larger
        short           signed 16 bits or larger
        ushort          unsigned 16 bits or larger
        int             signed 32 bits or larger
        uint            unsigned 32 bits or larger
        long            signed 64 bits or larger
        ulong           unsigned 64 bits or larger
        float           32 bit floating point or larger
        double          64 bit floating point or larger

 Why use the short u8 / s8 types?

      This is a simple and clear format where there is no
      guess work. To follow the format of the "Software Types"
      you would probably need something like "int8" or "uint8".
      Since the "Hardware Types" are designed to be extremely
      specific & extremely clear, I think this might be one
      possibly valid reason for the departure of format.
      Either format would work though.

 Why do the "Software Types" specify X bits or larger?

      This allows you to promote an 'int' to 64,128, or whatever
      bits at any time in the future with little or no regrets.
      Once 256 bit systems come out that have 10 times better
      performance when dealing with 256 bits versus 32 bits,
      wouldn't we all want the 'int' to be 256 bits at that time?
      Then we can compile our code that is filled with 'int' to
      run on the new system at max performance. Assuming our
      hardware drivers for our older plug in expansion cards use
      the u8 / s8 types, as they should, they should keep working
      even though byte, short, and int may have gotten promoted.

 Why offer two different types; "hardware" and "software" ?

      Hardware types allows you to meet the D spec goal of,
         "and interface directly with the operating system
          APIs and with hardware."

      It is true that the "hardware types" can be abused by
      people using them when they should have more appropriately
      used "software types" but such is life. Do you ban all
      scissors just because they can harm someone when misused?
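To make the hardware-interfacing argument concrete, here is a small sketch of the kind of use the fixed-size names are meant for; the u16/u32 aliases and the register layout are invented for the example:

    alias ushort u16;       // exact 16-bit unsigned (D's ushort)
    alias uint   u32;       // exact 32-bit unsigned (D's uint)

    // overlay for a memory-mapped peripheral: field widths must not
    // change when the code is recompiled for a new target
    struct TimerRegs
    {
        u32 control;
        u32 count;
        u16 prescale;
        u16 reserved;
    }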

*EOF*
Aug 26 2001
next sibling parent Dan Hursh <hursh infonet.isl.net> writes:
	Wow!  That looks like a moment of clarity.  One thing.  You may want to
specify a specific format.  Aside from endian-ness, I think integral
types are pretty standard in most hardware, but has the world pretty much
agreed on a common floating point format for the various bit sizes?  I
know there are standards, but I don't know how many or which are in
common use.  This may not be a problem.

Dan

Jeff Frohwein wrote:
 
  I guess the biggest problem I have about defining an 'int'
 as 32 bits is why 32 bits? If there is no specific reason for it then
 if this language was designed 10 years ago 'int' might be 16 bits or
 if it was designed 3 years from now 'int' might be 64 bits. Looking to
 the future, will it be a regret to look back and see 'int' defined as
 32 bits?
 
  From the D Overview...
 
        "D is a general purpose systems and applications programming
         language. It is a higher level language than C++, but retains
         the ability to write high performance code and interface
         directly with the operating system APIs and with hardware."
 
  With this in mind, I agree that we do need some fixed sizes for
 hardware interfacing, even if for no other reason. This would also
 allow D to be used for embedded systems. After all, if D does become
 the next popular language, people will want to port an embedded version
 as well. So let's not put any more limits on them than are easily
 justifiable. As well, if we wish to interface to hardware registers
 (as stated in the overview above) we need types that are fixed in size.
 
  Here is one possible solution:
 
  Hardware (non-abstract) Types
 
         u1              unsigned 1 bit
         u8              unsigned 8 bits
         s8              signed 8 bits
         u16             unsigned 16 bits
         s16             signed 16 bits
         u32             unsigned 32 bits
         s32             signed 32 bits
         s64             ...etc...
         ...
         f32             32 bit floating point
         f64             64 bit floating point
 
  Software (semi-abstract) Types
 
         bit             single bit or larger
         byte            signed 8 bits or larger
         ubyte           unsigned 8 bits or larger
         short           signed 16 bits or larger
         ushort          unsigned 16 bits or larger
         int             signed 32 bits or larger
         uint            unsigned 32 bits or larger
         long            signed 64 bits or larger
         ulong           unsigned 64 bits or larger
         float           32 bit floating point or larger
         double          64 bit floating point or larger
 
  Why use the short u8 / s8 types?
 
       This is a simple and clear format where there is no
       guess work. To follow the format of the "Software Types"
       you would probably need something like "int8" or "uint8".
       Since the "Hardware Types" are designed to be extremely
       specific & extremely clear, I think this might be one
       possibly valid reason for the departure of format.
       Either format would work though.
 
  Why do the "Software Types" specify X bits or larger?
 
       This allows you to promote an 'int' to 64,128, or whatever
       bits at any time in the future with little or no regrets.
       Once 256 bit systems come out that have 10 times better
       performance when dealing with 256 bits versus 32 bits,
       wouldn't we all want the 'int' to be 256 bits at that time?
       Then we can compile our code that is filled with 'int' to
       run on the new system at max performance. Assuming our
       hardware drivers for our older plug in expansion cards use
       the u8 / s8 types, as they should, they should keep working
       even though byte, short, and int may have gotten promoted.
 
  Why offer two different types; "hardware" and "software" ?
 
       Hardware types allows you to meet the D spec goal of,
          "and interface directly with the operating system
           APIs and with hardware."
 
       It is true that the "hardware types" can be abused by
       people using them when they should have more appropriately
       used "software types" but such is life. Do you ban all
       scissors just because they can harm someone when misused?
 
 *EOF*
Aug 26 2001
prev sibling parent Charles Hixson <charleshixsn earthlink.net> writes:
Jeff Frohwein wrote:
  ...
  Here is one possible solution:
 
  Hardware (non-abstract) Types
 
         u1              unsigned 1 bit
         u8              unsigned 8 bits
         s8              signed 8 bits
         u16             unsigned 16 bits
         s16             signed 16 bits
         u32             unsigned 32 bits
         s32             signed 32 bits
         s64             ...etc...
         ...
I like this scheme, but would propose that it be done a bit differently, in the following way:

        u{i}            unsigned i bits, where i in [1..64] (optionally, i in [1..128] or larger)
        s{i}            signed i bits, where i in [1..64] (optionally, i in [1..128] or larger);
                        the length i includes the sign bit

These types could only be used in structures, not classes. Their secondary purpose would be to allow the packing of structures to match externally specified data. (Structures can be packed, though classes cannot.)
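Something close to packed sub-byte fields can be had from library code in later D; the following sketch uses std.bitmanip.bitfields (modern D, offered only as an analogy to the u{i}/s{i} idea above, with an invented 16-bit layout):

    import std.bitmanip : bitfields;

    // a packed structure matching an externally specified 16-bit layout
    struct PacketHeader
    {
        mixin(bitfields!(
            uint, "version_", 3,    // like u3
            uint, "kind",     5,    // like u5
            int,  "delta",    8));  // like s8; bit counts must total 8, 16, 32 or 64
    }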
         f32             32 bit floating point
         f64             64 bit floating point
 
  Software (semi-abstract) Types
 
         bit             single bit or larger
         byte            signed 8 bits or larger
         ubyte           unsigned 8 bits or larger
         short           signed 16 bits or larger
         ushort          unsigned 16 bits or larger
         int             signed 32 bits or larger
         uint            unsigned 32 bits or larger
         long            signed 64 bits or larger
         ulong           unsigned 64 bits or larger
         float           32 bit floating point or larger
         double          64 bit floating point or larger
 
These are defined via typedefs on the underlying sized variables. Only types defined via typedefs are allowed in classes.
  Why use the short u8 / s8 types?
 
   ...
 *EOF*
 
Aug 27 2001