
digitalmars.D - Var Types

reply John Smith <John_member pathlink.com> writes:
Why not also include these variable types in D?
int1 - 1 byte
int2 - 2 bytes
int4 - 4 bytes
intN - N bytes (experimental)

It must also be guaranteed that these types will always, on every machine, have
the same size.
Nov 21 2005
next sibling parent reply "Shawn Liu" <shawn666.liu gmail.com> writes:
I think int8, int16, int32, int64 is more comfortable.

"John Smith" <John_member pathlink.com> 
wrote in message news:dlsq7f$30hv$1 digitaldaemon.com...
 Why not also include these variable types in D?
 int1 - 1 byte
 int2 - 2 bytes
 int4 - 4 bytes
 intN - N bytes (experimental)

 It must also be guaranteed that these types will always, on every machine,
 have the same size.

 
Nov 21 2005
next sibling parent reply pragma <pragma_member pathlink.com> writes:
In article <dlss62$13b$1 digitaldaemon.com>, Shawn Liu says...
I think int8, int16, int32, int64 is more comfortable.
In the interest of hearing this idea out, I'll play the devil's advocate on this one. :)

What is wrong with the documented conventions laid out for the byte sizes of the current values? Would it be enough to incorporate those definitions into the (eventual) D ABI, to ensure that all D compiler vendors adhere to the same sizes?

While I think there's value in having a standard that is easily grasped, I don't think it's necessary to clutter things up with more keywords for already well-defined types.

- EricAnderton at yahoo
Nov 21 2005
parent reply "Jarrett Billingsley" <kb3ctd2 yahoo.com> writes:
"pragma" <pragma_member pathlink.com> wrote in message 
news:dlstrd$2i4$1 digitaldaemon.com...
 What is wrong with the documented conventions laid out for the byte sizes 
 of the
 current values?
Because although they're documented and strictly defined, they don't make much sense. For example, long makes sense on a 32-bit machine, but on 64-bit machines (to which everything is moving relatively soon), 64 bits is the default size. So "long" would be the "normal" size. Then there's short, which I suppose makes sense on both platforms, and int, but neither gives any indication of the size. The only type that does is "byte."

I'd personally like int8, int16, int32, etc. This also makes it easy to add new, larger types. What comes after int64? int128, of course. But what comes after "long?" Why, "cent." What?! Huh?

But of course, none of this will ever happen / even be considered, so it's kind of an exercise in futility.
Nov 21 2005
next sibling parent reply Tomás Rossi <Tomás_member pathlink.com> writes:
In article <dlsuq9$3d9$1 digitaldaemon.com>, Jarrett Billingsley says...
"pragma" <pragma_member pathlink.com> wrote in message 
news:dlstrd$2i4$1 digitaldaemon.com...
 What is wrong with the documented conventions laid out for the byte sizes 
 of the
 current values?
Because although they're documented and strictly defined, they don't make much sense. For example, long makes sense on a 32-bit machine, but on 64-bit machines (to which everything is moving relatively soon), 64 bits is the default size. So "long" would be the "normal" size.
Maybe if D bit-length specifications were relative (don't know the downsides of this approach but I'm all ears). For example:

____________________________________________________________________________,
 TYPE    | SIZE           | LEN IN 32-BIT MACHINES | LEN IN 64-BIT MACHINES |
         | (relative to 1 | (in bits)              | (in bits)              |
         | CPU word)      |                        |                        |
         | (register size)|                        |                        |
---------+----------------+------------------------+------------------------+
(u)short | 1/2            | 16                     | 32                     |
(u)int   | 1              | 32                     | 64                     |
(u)long  | 2              | 64 (as VC++s __int64)  | 128                    |

After all, isn't it ugly and less abstract to code assuming a certain sizeof for integral types (and also maybe with other types)? (sizeof brings that information to the programmer, and the programmer should code relative to the 'sizeof' of a type and not assume that size with premeditation.)
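As a rough sketch of that "code relative to .sizeof" idea (the struct name and field below are invented just for illustration, so treat it as an assumption, not a recommendation), something along these lines should do:

import std.stdio;

struct Header                 // invented example type
{
    int count;                // D guarantees int is 32 bits, so Header.sizeof is stable
}

void main()
{
    // check the assumption at compile time instead of silently baking in "4"
    static assert(int.sizeof == 4);

    writefln("int is %d bytes; a Header is %d bytes", int.sizeof, Header.sizeof);
}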
Then there's 
short, which I suppose makes sense on both platforms, and int, but neither 
gives any indication of the size.  The only type that does is "byte."
Don't know if a type should be THAT explicit with its size.
I'd personally like int8, int16, int32, etc.  This also makes it easy to add 
new, larger types.  What comes after int64?  int128, of course.  But what 
comes after "long?"  Why, "cent."  What?!  Huh?

But of course, none of this will ever happen / even be considered, so it's 
kind of an exercise in futility. 
Hehe, I agree. Tom
Nov 21 2005
next sibling parent reply Oskar Linde <oskar.lindeREM OVEgmail.com> writes:
In article <dlt3c9$87f$1 digitaldaemon.com>, Tomás Rossi says...
In article <dlsuq9$3d9$1 digitaldaemon.com>, Jarrett Billingsley says...
"pragma" <pragma_member pathlink.com> wrote in message 
news:dlstrd$2i4$1 digitaldaemon.com...
 What is wrong with the documented convetions laid out for the byte sizes 
 of the
 current values?
Because although they're documented and strictly defined, they don't make much sense. For example, long makes sense on a 32-bit machine, but on 64-bit machines (to which everything is moving relatively soon), 64 bits is the default size. So "long" would be the "normal" size.
Maybe if D bit-length specifications were relative (don't know the downsides of this approach but I'm all ears). For example:

____________________________________________________________________________,
 TYPE    | SIZE           | LEN IN 32-BIT MACHINES | LEN IN 64-BIT MACHINES |
         | (relative to 1 | (in bits)              | (in bits)              |
         | CPU word)      |                        |                        |
         | (register size)|                        |                        |
---------+----------------+------------------------+------------------------+
(u)short | 1/2            | 16                     | 32                     |
(u)int   | 1              | 32                     | 64                     |
(u)long  | 2              | 64 (as VC++s __int64)  | 128                    |
This is exactly one of the things D was designed to avoid. But it would be nice to have an official alias for the system native register sized type.

/ Oskar
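A minimal sketch of what such an alias might look like (the name nativeword is invented, and only the usual X86/X86_64 version identifiers are handled, so treat it as an assumption rather than a concrete proposal):

// Hypothetical "native word" alias; the name nativeword is invented.
version (X86)
{
    alias int nativeword;    // 32-bit general-purpose registers
}
else version (X86_64)
{
    alias long nativeword;   // 64-bit general-purpose registers
}
else
{
    static assert(0);        // no native word defined for this platform yet
}

nativeword counter;          // width follows the target's register size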
Nov 21 2005
next sibling parent reply Tomás Rossi <Tomás_member pathlink.com> writes:
In article <dlt4tj$9va$1 digitaldaemon.com>, Oskar Linde says...
In article <dlt3c9$87f$1 digitaldaemon.com>, Tomás Rossi says...
In article <dlsuq9$3d9$1 digitaldaemon.com>, Jarrett Billingsley says...
"pragma" <pragma_member pathlink.com> wrote in message 
news:dlstrd$2i4$1 digitaldaemon.com...
 What is wrong with the documented conventions laid out for the byte sizes 
 of the
 current values?
Because although they're documented and strictly defined, they don't make much sense. For example, long makes sense on a 32-bit machine, but on 64-bit machines (to which everything is moving relatively soon), 64 bits is the default size. So "long" would be the "normal" size.
What's your opinion on the above?
Maybe if D bit-length specifications were relative (don't know the downsides of
this approach but I'm all ears).
For example:
____________________________________________________________________________,
 TYPE    | SIZE           | LEN IN 32-BIT MACHINES | LEN IN 64-BIT MACHINES |  
         | (relative to 1 | (in bits)              | (in bits)              |
         | CPU word)      |                        |                        |
         | (register size)|                        |                        | 
---------+----------------+------------------------+------------------------+
(u)short | 1/2            | 16                     | 32                     |
(u)int   | 1              | 32                     | 64                     |
(u)long  | 2              | 64 (as VC++s __int64)  | 128                    |
This is exactly one of the things D was designed to avoid.
And why is that? (don't really know, is it in D presentation or docs?)
But it would be nice to have an official alias for the system native register
sized type.
Yap. Tom
Nov 21 2005
parent reply MK <MK_member pathlink.com> writes:
In article <dlt5jv$aj6$1 digitaldaemon.com>, Tomás Rossi says...
Maybe if D bit-length specifications were relative (don't know the downsides of
this approach but I'm all ears).
For example:
____________________________________________________________________________,
 TYPE    | SIZE           | LEN IN 32-BIT MACHINES | LEN IN 64-BIT MACHINES |  
         | (relative to 1 | (in bits)              | (in bits)              |
         | CPU word)      |                        |                        |
         | (register size)|                        |                        | 
---------+----------------+------------------------+------------------------+
(u)short | 1/2            | 16                     | 32                     |
(u)int   | 1              | 32                     | 64                     |
(u)long  | 2              | 64 (as VC++s __int64)  | 128                    |
This is exactly one of the things D was designed to avoid.
And why is that? (don't really know, is it in D presentation or docs?)
From experience. It's best that Mr. Bright stay away from implementation-specific types. I believe it's better to know absolutely what a given type is. You are programming in D, not x86.
Nov 21 2005
parent reply Tomás Rossi <Tomás_member pathlink.com> writes:
In article <dlti74$k8i$1 digitaldaemon.com>, MK says...
In article <dlt5jv$aj6$1 digitaldaemon.com>, Tomás Rossi says...
Maybe if D bit-length specifications were relative (don't know the downsides of
this approach but I'm all ears).
For example:
____________________________________________________________________________,
 TYPE    | SIZE           | LEN IN 32-BIT MACHINES | LEN IN 64-BIT MACHINES |  
         | (relative to 1 | (in bits)              | (in bits)              |
         | CPU word)      |                        |                        |
         | (register size)|                        |                        | 
---------+----------------+------------------------+------------------------+
(u)short | 1/2            | 16                     | 32                     |
(u)int   | 1              | 32                     | 64                     |
(u)long  | 2              | 64 (as VC++s __int64)  | 128                    |
This is exactly one of the things D was designed to avoid.
And why is that? (don't really know, is it in D presentation or docs?)
From experience. It's best that Mr. Bright stay away from implementation-specific types.
Could you be more specific with the "from experience" part? That didn't really convince me; I mean, you haven't answered my question yet. What's the experience you're referring to?!
I believe it's better to know absolutely what a given type is. You are
programming in D, not x86.
That's exactly the suggestion that started the discussion, and I agree with it in essence. There should be standard aliases (if there aren't already) for intXX types, just to be sure of the exact precision of the integer you need in a platform-independent manner.

But with the current D approach, say I have a very efficient D app (in which performance depends on the most efficient integer manipulation for the current CPU), written originally with the int data type (because it was conceived for a 32-bit system). When I port it to a 64-bit system, I'll have to make changes (say replacing int with long) to take advantage of the more powerful CPU.

Tom
Nov 21 2005
next sibling parent "Kris" <fu bar.com> writes:
"Tomás Rossi" <Tomás_member pathlink.com> wrote.
 In article <dlti74$k8i$1 digitaldaemon.com>, MK says...
[snip]
 But with the current D approach, say I have a very efficient D app (in which
 performance depends on the most efficient integer manipulation for the current
 CPU), written originally with the int data type (because it was conceived for a
 32-bit system). When I port it to a 64-bit system, I'll have to make changes
 (say replacing int with long) to take advantage of the more powerful CPU.
Can't you provide your own alias in such cases, and change it when you port? Or are you asking for a "fastint" (with a minimal width of 32-bits) to be defined within std.stdint?
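For reference, Phobos ships a std.stdint module modelled on C99's <stdint.h>; assuming it follows the C99 spellings (int32_t, int_fast32_t and so on - treat the exact names here as an assumption), using it would look something like:

import std.stdint;

// At least 32 bits, but whatever is fastest on the target.
int_fast32_t total = 0;

// Exactly 32 bits everywhere, for on-disk or on-wire layouts.
int32_t recordLength;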
Nov 21 2005
prev sibling parent reply Derek Parnell <derek psych.ward> writes:
On Tue, 22 Nov 2005 02:24:19 +0000 (UTC), Tomás Rossi wrote:

[snip]

 But with the current D approach, say I have a very efficient D app (in which
 performance depends on the most efficient integer manipulation for the current
 CPU), written originally with the int data type (because it was conceived for a
 32-bit system). When I port it to a 64-bit system, I'll have to make changes (say
 replacing int with long) to take advantage of the more powerful CPU.
Yes, you are right. In D the 'int' always means 32-bits regardless of the architecture running the application. So if you port it to a different architecture *and* you want to take advantage of the longer integer then you will have to change 'int' to 'long'. Otherwise use aliases of your own making ...

version(X86)
{
   alias int stdint;
   alias long longint;
}
version(X86_64)
{
   alias long stdint;
   alias cent longint;
}

longint foo(stdint A)
{
   return cast(longint)A * cast(longint)A + cast(longint)1;
}

-- 
Derek
(skype: derek.j.parnell)
Melbourne, Australia
22/11/2005 1:35:33 PM
Nov 21 2005
parent reply Tomás Rossi <Tomás_member pathlink.com> writes:
In article <dcr6iol0nzuz.ovrmy3qsc18d.dlg 40tude.net>, Derek Parnell says...
On Tue, 22 Nov 2005 02:24:19 +0000 (UTC), Tomás Rossi wrote:

[snip]

 But with the current D approach, say I have a very efficient D app (in which
 performance depends on the most efficient integer manipulation for the current
 CPU), written originally with the int data type (because it was conceived for a
 32-bit system). When I port it to a 64-bit system, I'll have to make changes (say
 replacing int with long) to take advantage of the more powerful CPU.
Yes, you are right. In D the 'int' always means 32-bits regardless of the architecture running the application. So if you port it to a different architecture *and* you want to take advantage of the longer integer then you will have to change 'int' to 'long'. Otherwise use aliases of your own making ...

version(X86)
{
   alias int stdint;
   alias long longint;
}
version(X86_64)
{
   alias long stdint;
   alias cent longint;
}

longint foo(stdint A)
{
   return cast(longint)A * cast(longint)A + cast(longint)1;
}
So, what are the downsides of platform-dependent integer types? Currently, applying your above workaround (which is almost a MUST from now on), the downsides are very clear: developers will have to do this in most projects, because 64-bit systems are a reality these days and 32-bit ones are rapidly falling behind. Plus the ugliness of having to use stdint everywhere you would use int, and the obvious type obfuscation that comes with the alias.

Tom
Nov 21 2005
parent David Medlock <noone nowhere.com> writes:
Tomás Rossi wrote:
 In article <dcr6iol0nzuz.ovrmy3qsc18d.dlg 40tude.net>, Derek Parnell says...
 
On Tue, 22 Nov 2005 02:24:19 +0000 (UTC), Tomás Rossi wrote:

[snip]


But with the current D approach, say I have a very efficient D app (in which
performance depends on the most efficient integer manipulation for the current
CPU), written originally with the int data type (because it was conceived for a
32-bit system). When I port it to a 64-bit system, I'll have to make changes (say
replacing int with long) to take advantage of the more powerful CPU.
Yes, you are right. In D the 'int' always means 32-bits regardless of the architecture running the application. So if you port it to a different architecture *and* you want to take advantage of the longer integer then you will have to change 'int' to 'long'. Otherwise use aliases of your own making ...

version(X86)
{
   alias int stdint;
   alias long longint;
}
version(X86_64)
{
   alias long stdint;
   alias cent longint;
}

longint foo(stdint A)
{
   return cast(longint)A * cast(longint)A + cast(longint)1;
}
So, what are the downsides of platform-dependent integer types? Currently, applying your above workaround (which is almost a MUST from now on), the downsides are very clear: developers will have to do this in most projects, because 64-bit systems are a reality these days and 32-bit ones are rapidly falling behind. Plus the ugliness of having to use stdint everywhere you would use int, and the obvious type obfuscation that comes with the alias.

Tom
The downside is that most programming is tailored to a task and not to a machine. Engineering of all types is based on knowing the capabilities of the raw materials we are working with. As developers we *have* to know the bounds of the data types we are working with.

There are very few places in most software where 'the fastest integer possible' is needed. And in those cases you would be better off with an alias anyways.

So far everything you have asked for is possible within the language, but you are asking for a change which requires hackery for *all the other cases*.

-DavidM
Nov 23 2005
prev sibling parent reply Jari-Matti Mäkelä <jmjmak invalid_utu.fi> writes:
Oskar Linde wrote:
 In article <dlt3c9$87f$1 digitaldaemon.com>, Tomás Rossi says...
 
In article <dlsuq9$3d9$1 digitaldaemon.com>, Jarrett Billingsley says...

"pragma" <pragma_member pathlink.com> wrote in message 
news:dlstrd$2i4$1 digitaldaemon.com...

What is wrong with the documented conventions laid out for the byte sizes 
of the
current values?
Because although they're documented and strictly defined, they don't make much sense. For example, long makes sense on a 32-bit machine, but on 64-bit machines (to which everything is moving relatively soon), 64 bits is the default size. So "long" would be the "normal" size.
Maybe if D bit-length specifications were relative (don't know the downsides of this approach but I'm all ears). For example:

____________________________________________________________________________,
 TYPE    | SIZE           | LEN IN 32-BIT MACHINES | LEN IN 64-BIT MACHINES |
         | (relative to 1 | (in bits)              | (in bits)              |
         | CPU word)      |                        |                        |
         | (register size)|                        |                        |
---------+----------------+------------------------+------------------------+
(u)short | 1/2            | 16                     | 32                     |
(u)int   | 1              | 32                     | 64                     |
(u)long  | 2              | 64 (as VC++s __int64)  | 128                    |
This is exactly one of the things D was designed to avoid. But it would be nice to have an official alias for the system native register sized type.
I don't believe it would be nice. The language already has the most commonly needed data types. It's really a pain in the ass to test these variable-length types using different architectures. Maybe they would result in better C interoperability, but still it would make porting D programs harder. Of course you might say that you don't have to use this type, but I have a feeling that not all people ever get it right.
Nov 21 2005
next sibling parent reply Munchgreeble bigfoot.com writes:
In article <dlt9n6$dr6$1 digitaldaemon.com>,
Jari-Matti Mäkelä says...
It's really a pain in the ass to test these
variable-length types using different architectures. Maybe they would
result in better C interoperability, but still it would make porting D
programs harder.
Actually, the whole point of such types is to increase ease of portability. It's so that you can say "int32" and know that your code is going to work on any platform, instead of discovering that "int" on your new platform is a different length, and oh my gosh, that data I'm sending across the network is no longer the same format that it used to be, so the different versions of the code running on different machines can no longer talk to each other... and that interface spec I just sent out is now completely worthless - nobody else can talk to my application anymore either.

It's not impossible to write your own types - but I've lost count of the number of times I've written a types.h file. You lose the "builtin type" highlighting in your favourite editor, and everybody has slightly different naming conventions (uint8, Uint8, UInt8, uint_8, u_int8 etc.), which annoys you when you come in half way through a project where they do it differently to you. It would be nice (and I'm assuming, easy) to have this in D. It won't really matter if it's missing, but it's polish - another fix for another niggle that's been there since the year dot. And of course you don't have to use those types if you only ever write single-platform pure software applications with no networking capability.

Just my tuppence

Munch
Nov 21 2005
next sibling parent reply Jari-Matti Mäkelä <jmjmak invalid_utu.fi> writes:
Munchgreeble bigfoot.com wrote:
 In article <dlt9n6$dr6$1 digitaldaemon.com>,
Jari-Matti Mäkelä says...
 
It's really a pain in the ass to test these
variable-length types using different architectures. Maybe they would
result in better C interoperability, but still it would make porting D
programs harder.
Actually, the whole point of such types is to increase ease of portability. It's so that you can say "int32" and know that your code is going to work on any platform, instead of discovering that "int" on your new platform is a different length, and oh my gosh, that data I'm sending across the network is no longer the same format that it used to be, so the different versions of the code running on different machines can no longer talk to each other... and that interface spec I just sent out is now completely worthless - nobody else can talk to my application anymore either.
I can't see your point here. You see, "int" is always 32 bits in D. You can alias it to int32 if you want. Please read http://www.digitalmars.com/d/type.html. As you can see, "real" is the only implementation-dependent type.
 
 It's not impossible to write your own types - but I've lost count of the number
 of times I've written a types.h file. You lose the "builtin type" highlighting in
 your favourite editor, and everybody has slightly different naming conventions
 (uint8, Uint8, UInt8, uint_8, u_int8 etc.), which annoys you when you come in
 half way through a project where they do it differently to you. It would be nice
 (and I'm assuming, easy) to have this in D. It won't really matter if it's
 missing, but it's polish - another fix for another niggle that's been there
 since the year dot. And of course you don't have to use those types if you only
 ever write single platform pure software applications with no networking
 capability.
True, but I'm still saying that D already has these types. If you don't like the current naming convention, you can always

   alias byte int8;
   alias short int16;
   alias int int32;
   alias long int64;

and so on...

In C you would have to use implementation-specific sizeof-logic.

I think Walter has chosen these keywords because they are widely used in other languages. They're also closer to the natural language.
Nov 21 2005
parent Derek Parnell <derek psych.ward> writes:
On Tue, 22 Nov 2005 00:23:27 +0200, Jari-Matti Mäkelä wrote:


[snip]

 True, but I'm still saying that D already has these types. If you don't 
 like the current naming convention, you can always
 
    alias byte int8;
    alias short int16;
    alias int int32;
    alias long int64;
 
 and so on...
 
 In C you would have to use implementation-specific sizeof-logic.
 
 I think Walter has chosen these keywords because they are widely used in 
 other languages. They're also closer to the natural language.
Another small point would be the increased number of tests when using typeof, typeid, etc...

-- 
Derek
(skype: derek.j.parnell)
Melbourne, Australia
22/11/2005 10:37:05 AM
Nov 21 2005
prev sibling parent reply MK <MK_member pathlink.com> writes:
In article <dltfj7$i8l$1 digitaldaemon.com>, Munchgreeble bigfoot.com says...
In article <dlt9n6$dr6$1 digitaldaemon.com>,
Jari-Matti Mäkelä says...
It's really a pain in the ass to test these
variable-length types using different architectures. Maybe they would
result in better C interoperability, but still it would make porting D
programs harder.
Actually, the whole point of such types is to increase ease of portability. It's so that you can say "int32" and know that your code is going to work on any platform, instead of discovering that "int" on your new platform is a different length, and oh my gosh, that data I'm sending across the network is no longer the same format that it used to be, so the different versions of the code running on different machines can no longer talk to each other... and that interface spec I just sent out is now completely worthless - nobody else can talk to my application anymore either.

It's not impossible to write your own types - but I've lost count of the number of times I've written a types.h file. You lose the "builtin type" highlighting in your favourite editor, and everybody has slightly different naming conventions (uint8, Uint8, UInt8, uint_8, u_int8 etc.), which annoys you when you come in half way through a project where they do it differently to you. It would be nice (and I'm assuming, easy) to have this in D. It won't really matter if it's missing, but it's polish - another fix for another niggle that's been there since the year dot. And of course you don't have to use those types if you only ever write single-platform pure software applications with no networking capability.

Just my tuppence

Munch
But that's the whole point with D. All the integral types are completely unambiguous. I'm not sure what you are asking for. You want the unambiguous integral types renamed, and you want the regular ones to become ambiguous? That sounds a lot like what we are trying to fix in the first place.
Nov 21 2005
parent Munchgreeble <Munchgreeble_member pathlink.com> writes:
In article <dltije$kju$1 digitaldaemon.com>, MK says...
But that's the whole point with D. All the integral types are completely
unambiguous. I'm not sure what you are asking for. You want the unambiguous
integral types renamed, and you want the regular ones to become ambiguous? That
sounds a lot like what we are trying to fix in the first place.
OK - my mistake. Sorry. I should have known better really - most everything else in this language has already been fixed, what made me think this might not have been? D'oh ;-)

Thanks for correcting me

Munch
Nov 21 2005
prev sibling parent reply Jari-Matti Mäkelä <jmjmak invalid_utu.fi> writes:
Jari-Matti Mäkelä wrote:
 Oskar Linde wrote:
 
 In article <dlt3c9$87f$1 digitaldaemon.com>, Tomás Rossi says...

 In article <dlsuq9$3d9$1 digitaldaemon.com>, Jarrett Billingsley says...

 "pragma" <pragma_member pathlink.com> wrote in message 
 news:dlstrd$2i4$1 digitaldaemon.com...

 What is wrong with the documented conventions laid out for the byte 
 sizes of the
 current values?
Because although they're documented and strictly defined, they don't make much sense. For example, long makes sense on a 32-bit machine, but on 64-bit machines (to which everything is moving relatively soon), 64 bits is the default size. So "long" would be the "normal" size.
Maybe if D bit-length specifications were relative (don't know the downsides of this approach but I'm all ears). For example:

____________________________________________________________________________,
 TYPE    | SIZE           | LEN IN 32-BIT MACHINES | LEN IN 64-BIT MACHINES |
         | (relative to 1 | (in bits)              | (in bits)              |
         | CPU word)      |                        |                        |
         | (register size)|                        |                        |
---------+----------------+------------------------+------------------------+
(u)short | 1/2            | 16                     | 32                     |
(u)int   | 1              | 32                     | 64                     |
(u)long  | 2              | 64 (as VC++s __int64)  | 128                    |
This is exactly one of the things D was designed to avoid. But it would be nice to have an official alias for the system native register sized type.
I don't believe it would be nice. The language already has the most commonly needed data types. It's really a pain in the ass to test these variable-length types using different architectures. Maybe they would result in better C interoperability, but still it would make porting D programs harder. Of course you might say that you don't have to use this type, but I have a feeling that not all people ever get it right.
Sorry, I was wrong. This is a big performance issue. Currently the "int" is maybe the fastest integer type for most x86 users. But now one must use some compile-time logic (version/static if/etc.) to ensure that the data type used will be fast enough in other environments too.
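A small sketch of the kind of compile-time selection meant here, keyed off the pointer width rather than a named platform (the alias name fastint is borrowed from Kris's post above and is an assumption, not an official type):

// The alias name fastint is an assumption, not a Phobos or language name.
static if (size_t.sizeof == 8)
{
    alias long fastint;   // 64-bit target: the native word
}
else
{
    alias int fastint;    // 32-bit target (or anything with smaller pointers)
}

fastint accumulator;      // always at least 32 bits, fast on either target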
Nov 21 2005
parent James Dunne <james.jdunne gmail.com> writes:
Jari-Matti Mäkelä wrote:
 Jari-Matti Mäkelä wrote:
 
 Oskar Linde wrote:

 In article <dlt3c9$87f$1 digitaldaemon.com>, Tomás Rossi says...

 In article <dlsuq9$3d9$1 digitaldaemon.com>, Jarrett Billingsley 
 says...

 "pragma" <pragma_member pathlink.com> wrote in message 
 news:dlstrd$2i4$1 digitaldaemon.com...

 What is wrong with the documented conventions laid out for the byte 
 sizes of the
 current values?
Because although they're documented and strictly defined, they don't make much sense. For example, long makes sense on a 32-bit machine, but on 64-bit machines (to which everything is moving relatively soon), 64 bits is the default size. So "long" would be the "normal" size.
Maybe if D bit-length specifications were relative (don't know the downsides of this approach but I'm all ears). For example:

____________________________________________________________________________,
 TYPE    | SIZE           | LEN IN 32-BIT MACHINES | LEN IN 64-BIT MACHINES |
         | (relative to 1 | (in bits)              | (in bits)              |
         | CPU word)      |                        |                        |
         | (register size)|                        |                        |
---------+----------------+------------------------+------------------------+
(u)short | 1/2            | 16                     | 32                     |
(u)int   | 1              | 32                     | 64                     |
(u)long  | 2              | 64 (as VC++s __int64)  | 128                    |
This is exactly one of the things D was designed to avoid. But it would be nice to have an official alias for the system native register sized type.
I don't believe it would be nice. The language already has the most commonly needed data types. It's really a pain in the ass to test these variable-length types using different architectures. Maybe they would result in better C interoperability, but still it would make porting D programs harder. Of course you might say that you don't have to use this type, but I have a feeling that not all people ever get it right.
Sorry, I was wrong. This is a big performance issue. Currently the "int" is maybe the fastest integer type for most x86 users. But now one must use some compile-time logic (version/static if/etc.) to ensure that the data type used will be fast enough in other environments too.
Exactly. Fixing type sizes becomes a problem when you're switching platforms and want to retain efficiency. You want your ints to be fast. However, varying type sizes become an even bigger problem when you're trying to send out network data or store to binary files.

The only real solution is to use two (or more?) different sets of types which guarantee different things. We need a set of fast types and we need a set of fixed-size types. Do you not agree?

I've already posted the gist of this idea in a lower thread.
Nov 21 2005
prev sibling parent James Dunne <james.jdunne gmail.com> writes:
Tomás Rossi wrote:
 In article <dlsuq9$3d9$1 digitaldaemon.com>, Jarrett Billingsley says...
 
"pragma" <pragma_member pathlink.com> wrote in message 
news:dlstrd$2i4$1 digitaldaemon.com...

What is wrong with the documented conventions laid out for the byte sizes 
of the
current values?
Because although they're documented and strictly defined, they don't make much sense. For example, long makes sense on a 32-bit machine, but on 64-bit machines (to which everything is moving relatively soon), 64 bits is the default size. So "long" would be the "normal" size.
Maybe if D bit-length specifications were relative (don't know the downsides of this approach but I'm all ears). For example:

____________________________________________________________________________,
 TYPE    | SIZE           | LEN IN 32-BIT MACHINES | LEN IN 64-BIT MACHINES |
         | (relative to 1 | (in bits)              | (in bits)              |
         | CPU word)      |                        |                        |
         | (register size)|                        |                        |
---------+----------------+------------------------+------------------------+
(u)short | 1/2            | 16                     | 32                     |
(u)int   | 1              | 32                     | 64                     |
(u)long  | 2              | 64 (as VC++s __int64)  | 128                    |

After all, isn't it ugly and less abstract to code assuming a certain sizeof for integral types (and also maybe with other types)? (sizeof brings that information to the programmer, and the programmer should code relative to the 'sizeof' of a type and not assume that size with premeditation.)
The problem is this: people need different guarantees about their types' sizes for different purposes. In one instance, you may need a set of types that are absolutely fixed-size for use in reading/writing out binary data to files or streams, etc. In another instance, you may need a set of types that match the processor's supported native word sizes for fast processing.
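To make that concrete, here is an invented example of the fixed-size half of the picture (RecordHeader and its fields are made up, not from any real format); the fast half would just be an alias along the lines already sketched elsewhere in the thread, so the two concerns never have to share one type name:

// Fixed layout: every field has an exact width, so the struct's layout
// doesn't change between a 32-bit and a 64-bit build of the same program.
struct RecordHeader          // invented example, not from any real format
{
    int   magic;             // 32 bits
    short version_;          // 16 bits ("version" itself is a D keyword)
    short flags;             // 16 bits
    long  offset;            // 64 bits
}

static assert(RecordHeader.sizeof == 16);   // holds on any D target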
 
Then there's 
short, which I suppose makes sense on both platforms, and int, but neither 
gives any indication of the size.  The only type that does is "byte."
Don't know if a type should be THAT explicit with its size.
I'd personally like int8, int16, int32, etc.  This also makes it easy to add 
new, larger types.  What comes after int64?  int128, of course.  But what 
comes after "long?"  Why, "cent."  What?!  Huh?

But of course, none of this will ever happen / even be considered, so it's 
kind of an exercise in futility. 
...unless certain ones designing new languages happen to be listening... I do like the int8, int16, int32, int64 names. It makes sense. Very easy to scale up the language for 128-bit processing and 256-bit processing.
 
 Hehe, I agree.
 
 Tom
Nov 21 2005
prev sibling parent reply Derek Parnell <derek psych.ward> writes:
On Mon, 21 Nov 2005 12:06:27 -0500, Jarrett Billingsley wrote:

 "pragma" <pragma_member pathlink.com> wrote in message 
 news:dlstrd$2i4$1 digitaldaemon.com...
 What is wrong with the documented conventions laid out for the byte sizes 
 of the
 current values?
Because although they're documented and strictly defined, they don't make much sense. For example, long makes sense on a 32-bit machine, but on 64-bit machines (to which everything is moving relatively soon), 64 bits is the default size. So "long" would be the "normal" size. Then there's short, which I suppose makes sense on both platforms, and int, but neither gives any indication of the size. The only type that does is "byte." I'd personally like int8, int16, int32, etc. This also makes it easy to add new, larger types. What comes after int64? int128, of course. But what comes after "long?" Why, "cent." What?! Huh? But of course, none of this will ever happen / even be considered, so it's kind of an exercise in futility.
Yes it is. However, my comments are that identifiers that are a mixture of alphas and digits reduce legibility. Also, why use the number of bits? Is it likely we would use a number that is not a power of 2? Or could we have an int24? Or an int30? Using a number of bytes seems more useful, because I'm sure that all such integers would be on byte boundaries.

-- 
Derek
(skype: derek.j.parnell)
Melbourne, Australia
22/11/2005 10:24:41 AM
Nov 21 2005
parent reply James Dunne <james.jdunne gmail.com> writes:
Derek Parnell wrote:
 On Mon, 21 Nov 2005 12:06:27 -0500, Jarrett Billingsley wrote:
 
 
"pragma" <pragma_member pathlink.com> wrote in message 
news:dlstrd$2i4$1 digitaldaemon.com...

What is wrong with the documented conventions laid out for the byte sizes 
of the
current values?
Because although they're documented and strictly defined, they don't make much sense. For example, long makes sense on a 32-bit machine, but on 64-bit machines (to which everything is moving relatively soon), 64 bits is the default size. So "long" would be the "normal" size. Then there's short, which I suppose makes sense on both platforms, and int, but neither gives any indication of the size. The only type that does is "byte." I'd personally like int8, int16, int32, etc. This also makes it easy to add new, larger types. What comes after int64? int128, of course. But what comes after "long?" Why, "cent." What?! Huh? But of course, none of this will ever happen / even be considered, so it's kind of an exercise in futility.
Yes it is. However, my comments are that identifiers that are a mixture of alphas and digits reduce legibility. Also, why use the number of bits? Is it likely we would use a number that is not a power of 2? Or could we have an int24? Or an int30? Using a number of bytes seems more useful, because I'm sure that all such integers would be on byte boundaries.
What would you suggest? Not saying that you're a proponent of it, but... What happens to our short, int, long language types when 256-bit processors come along? We'd find it hard to address a 16-bit integer in that system limited to only three type names.
Nov 21 2005
parent reply Tomás Rossi <Tomás_member pathlink.com> writes:
In article <dlucev$16t5$1 digitaldaemon.com>, James Dunne says...
Derek Parnell wrote:
 On Mon, 21 Nov 2005 12:06:27 -0500, Jarrett Billingsley wrote:
 
 
"pragma" <pragma_member pathlink.com> wrote in message 
news:dlstrd$2i4$1 digitaldaemon.com...

What is wrong with the documented conventions laid out for the byte sizes 
of the
current values?
Because although they're documented and strictly defined, they don't make much sense. For example, long makes sense on a 32-bit machine, but on 64-bit machines (to which everything is moving relatively soon), 64 bits is the default size. So "long" would be the "normal" size. Then there's short, which I suppose makes sense on both platforms, and int, but neither gives any indication of the size. The only type that does is "byte." I'd personally like int8, int16, int32, etc. This also makes it easy to add new, larger types. What comes after int64? int128, of course. But what comes after "long?" Why, "cent." What?! Huh? But of course, none of this will ever happen / even be considered, so it's kind of an exercise in futility.
Yes it is. However, my comments are that identifiers that are a mixture of alphas and digits reduce legibility. Also, why use the number of bits? Is it likely we would use a number that is not a power of 2? Or could we have an int24? Or an int30? Using a number of bytes seems more useful, because I'm sure that all such integers would be on byte boundaries.
What would you suggest? Not saying that you're a proponent of it, but... What happens to our short, int, long language types when 256-bit processors come along? We'd find it hard to address a 16-bit integer in that system limited to only three type names.
Exactly, what would happen? Would "we" have to engineer another language - would it be D v2 :P? Certainly platform-dependent integral types are THE choice. Aliases of the intXXX kind would always be necessary.

Tom
Nov 22 2005
parent reply xs0 <xs0 xs0.com> writes:
What happens to our short, int, long language types when 256-bit 
processors come along?  We'd find it hard to address a 16-bit integer in 
that system limited to only three type names.
Exactly, what would happen? Would "we" have to engineer another language, would it be D v2 :P? Certainly platform-dependent integral types are THE choice. Aliases of the type intXXX would be necessary always.
I think you guys are exaggerating the problem. Even 64-bit CPUs were developed (afaik) mainly because of the need to cleanly address more than 4GB of RAM, not because there's some overwhelming need for 64-bit calculations. Considering how much RAM/disk/whatever 2^64 is, I don't think anyone will need a CPU that is 128-bit, let alone 256-bit, any time soon (and even if one is developed for marketing purposes, I see no reason to use 32-byte variables to have loop counters from 0 to 99).

Now, if you have a working app on a 32-bit platform and you move it to a 64-bit platform, is it any help if int becomes 64 bit? No, because if it was big enough before, it's big enough now (with the notable exception of memory locations and sizes, which are taken care of with size_t and ptrdiff_t). Does it hurt? It sure can, as sizes of objects all over the place will change, breaking any interface to outside-the-app.


xs0
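As a small illustration of that size_t/ptrdiff_t point (the function, array and variable names are invented): the index type grows with the pointer width, while the int elements keep their 32-bit meaning, so the same code recompiles cleanly on a 64-bit target:

void fill()
{
    int[] data = new int[1000];

    // size_t matches the pointer width, so the index is right on both
    // 32-bit and 64-bit targets; the elements themselves stay 32-bit ints.
    for (size_t i = 0; i < data.length; i++)
        data[i] = cast(int) i;

    // ptrdiff_t is the signed, pointer-width type for differences.
    ptrdiff_t gap = &data[10] - &data[3];
}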
Nov 22 2005
parent reply Tomás Rossi <Tomás_member pathlink.com> writes:
In article <dlv33k$1o7u$1 digitaldaemon.com>, xs0 says...
What happens to our short, int, long language types when 256-bit 
processors come along?  We'd find it hard to address a 16-bit integer in 
that system limited to only three type names.
Exactly, what would happen? Would "we" have to engineer another language, would it be D v2 :P? Certainly platform-dependent integral types are THE choice. Aliases of the type intXXX would be necessary always.
I think you guys are exaggerating the problem. Even 64-bit CPUs were developed (afaik) mainly because of the need to cleanly address more than 4GB of RAM, not because there's some overwhelming need for 64-bit calculations. Considering how much RAM/disk/whatever 2^64 is, I don't think anyone will need a CPU that is 128-bit, let alone 256-bit any time soon (and even if developed because of marketing purposes, I see no reason to use 32-byte variables to have loop counters from 0 to 99).
Some people said the same thing about 32-bit machines before those were developed, and now we have 64-bit CPUs. Plus, I'm sure 128/256-bit CPUs already exist nowadays, maybe not for home PCs, but who says D only has to run on home computers? For example, the PlayStation2 platform is built upon a 128-bit CPU!
Now, if you have a working app on a 32-bit platform and you move it to a 
64-bit platform, is it any help if int becomes 64 bit? No, because if it 
was big enough before, it's big enough now (with the notable exception 
of memory locations and sizes, which are taken care of with size_t and 
ptrdiff_t). Does it hurt? It sure can, as sizes of objects all over the 
place will change, breaking any interface to outside-the-app.
I can't agree with this. You port an app to 64-bit and rebuild it as a 64-bit edition, not necessarily expecting it to interface with a 32-bit version. Besides, why are you so sure that moving to 64 bits won't be much of a gain? "If it was big enough before, it's big enough now"???? Be careful, your ported app will still work, but it'll take no benefit of the upgraded processor! If your app focus on std int performance to work better, this is much of a problem.

Tom
Nov 22 2005
next sibling parent reply xs0 <xs0 xs0.com> writes:
Tomás Rossi wrote:

I think you guys are exaggerating the problem. Even 64-bit CPUs were 
developed (afaik) mainly because of the need to cleanly address more 
than 4GB of RAM, not because there's some overwhelming need for 64-bit 
calculations. Considering how much RAM/disk/whatever 2^64 is, I don't 
think anyone will need a CPU that is 128-bit, let alone 256-bit any time 
soon (and even if developed because of marketing purposes, I see no 
reason to use 32-byte variables to have loop counters from 0 to 99).
Some people said the same thing about 32-bit machines before those were developed, and now we have 64-bit CPUs.
Well, sure, people always make mistakes, but can you think of any application anyone will develop in the next 30 years that will need more than 17,179,869,184 GB of RAM (or 512x that of disk)? Older limits, like the 1MB of the 8086 or the 4GB of the 80386, were somewhat easier to reach, I think :) I mean, even if both needs and technology double each year (and I think it's safe to say that they increase more slowly), it will take over 30 years to reach that...

 Plus, I'm sure 128/256-bit CPUs already exist nowadays, maybe not for home PCs,
 but who says D only has to run on home computers? For example, the PlayStation2
 platform is built upon a 128-bit CPU!
Well, from what I can gather from http://arstechnica.com/reviews/hardware/ee.ars/ the PS2 is actually 64-bit; what's 128-bit are the SIMD instructions (which actually work on at most 32-bit values) and some of the internal buses.

My point was that there's not much need for operating with values over 64 bits, so I don't see the transition to 128 bits happening soon (again, I'm referring to single data items; bus widths, vectorized instructions' widths etc. are a different story, but one that is not relevant to our discussion).
Now, if you have a working app on a 32-bit platform and you move it to a 
64-bit platform, is it any help if int becomes 64 bit? No, because if it 
was big enough before, it's big enough now (with the notable exception 
of memory locations and sizes, which are taken care of with size_t and 
ptrdiff_t). Does it hurt? It sure can, as sizes of objects all over the 
place will change, breaking any interface to outside-the-app.
I can't agree with this. You port an app to 64-bit and rebuild it as a 64-bit edition, not necessarily expecting to work interfacing against a 32-bit version. Besides, why are you so sure that moving to 64-bits won't be much of a gain?
The biggest gain I see in 64 bits is, like I said, the ability to handle more memory, which naturally improves performance for some types of applications, like databases. I don't see much performance gain in general, because there aren't many quantities that require a 64-bit representation in the first place. Even if 64-bit ops are 50x faster on a 64-bit CPU than on a 32-bit CPU, they are very rare (at least in my experience), so the gain is small. Also note that you don't gain any speed by simply making your variables bigger, if that's all you do.
 "If it was big enough before, it's big enough now"???? Be careful, your ported
 app will still work, but'll take no benefit of the upgraded processor! 
Why not? It will be able to use more RAM, and operations involving longs will be faster. Are there any other benefits a 64-bit architecture provides?

 If your app focus on std int performance to work better, this is much of a problem.

I don't understand that sentence, sorry :)


xs0
Nov 22 2005
next sibling parent Munchgreeble <"a" b.com \"munchgreeble xATx bigfoot xDOTx com\"> writes:
xs0 wrote:

 Tomás Rossi wrote:
 
 xs0
I think you're mixing up address bus and data bus widths. When people talk about a 64-bit machine, they're talking about the size of the address bus. That's what affects how much RAM you can address. The data bus is completely independent of this. For example, IIRC the now somewhat dated N64 had a 64-bit address bus and a 256-bit data bus.

It's the data bus that affects the size of the ints you need, not the address bus. The address bus affects how wide your pointer types are.

Caveat: I'm no hardware expert ;-) I hope this helps =)

Munch
Nov 22 2005
prev sibling parent Tomás Rossi <Tomás_member pathlink.com> writes:
In article <dlvfpq$2410$1 digitaldaemon.com>, xs0 says...
Tomás Rossi wrote:

I think you guys are exaggerating the problem. Even 64-bit CPUs were 
developed (afaik) mainly because of the need to cleanly address more 
than 4GB of RAM, not because there's some overwhelming need for 64-bit 
calculations. Considering how much RAM/disk/whatever 2^64 is, I don't 
think anyone will need a CPU that is 128-bit, let alone 256-bit any time 
soon (and even if developed because of marketing purposes, I see no 
reason to use 32-byte variables to have loop counters from 0 to 99).
Some people said the same thing about 32-bit machines before those were developed, and now we have 64-bit CPUs.
Well, sure, people always make mistakes, but can you think of any application anyone will develop in the next 30 years that will need more than 17,179,869,184 GB of RAM (or 512x that of disk)? Older limits, like the 1MB of the 8086 or the 4GB of the 80386, were somewhat easier to reach, I think :) I mean, even if both needs and technology double each year (and I think it's safe to say that they increase more slowly), it will take over 30 years to reach that...

 Plus, I'm sure 128/256-bit CPUs already exist nowadays, maybe not for home PCs,
 but who says D only has to run on home computers? For example, the PlayStation2
 platform is built upon a 128-bit CPU!
Well, from what I can gather from http://arstechnica.com/reviews/hardware/ee.ars/ the PS2 is actually 64-bit, what is 128-bit are SIMD instructions (which actually work on at most 32-bit vals) and some of the internal buses..
Ok, you're right, didn't know.
My point was that there's not much need for operating with values over 
64 bits, so I don't see the transition to 128 bits happening soon 
(again, I'm referring to single data items; bus widths, vectorized 
instructions' widths etc. are a different story, but one that is not 
relevant to our discussion)
Maybe not too soon, but someday it'll happen, that's the thing. When writing a program, you code with the future in mind, but not the too-distant future (you know it'll ideally work on 64 bits with minor changes, but you don't care about the 128-bit possibility). But when designing a language that could be implemented on so many CPU architectures, you should take that fact into account. Again, maybe not soon, but someday it'll happen, and what would be the solution then?
Now, if you have a working app on a 32-bit platform and you move it to a 
64-bit platform, is it any help if int becomes 64 bit? No, because if it 
was big enough before, it's big enough now (with the notable exception 
of memory locations and sizes, which are taken care of with size_t and 
ptrdiff_t). Does it hurt? It sure can, as sizes of objects all over the 
place will change, breaking any interface to outside-the-app.
I can't agree with this. You port an app to 64-bit and rebuild it as a 64-bit edition, not necessarily expecting to work interfacing against a 32-bit version. Besides, why are you so sure that moving to 64-bits won't be much of a gain?
The biggest gain I see in 64 bits is, like I said, the ability to handle more memory, which naturally improves performance for some types of applications, like databases. I don't see much performance gain in general, because there aren't many quantities that require a 64-bit representation in the first place. Even if 64-bit ops are 50x faster on a 64-bit cpu than on a 32-bit cpu, they are very rare (at least in my experience), so the gain is small. Also note that you don't gain any speed by simply making your variables bigger, it that's all you do..
Not sure, but I think there are some integral operations that could be done in half the time (doing some magic at least). It's been a long time since I coded in assembly, so I can't recall an example right now.
 "If it was big enough before, it's big enough now"???? Be careful, your ported
 app will still work, but'll take no benefit of the upgraded processor! 
Why not? It will be able to use more ram, and operations involving longs will be faster. Are there any other benefits a 64-bit architecture provides?
I haven't time now, have to go; later I'll continue with these ones. :)
If your
 app focus on std int performance to work better, this is much of a problem.
I don't understand that sentence, sorry :)
Again Tom
Nov 22 2005
prev sibling next sibling parent reply Jari-Matti Mäkelä <jmjmak invalid_utu.fi> writes:
Tomás Rossi wrote:
 In article <dlv33k$1o7u$1 digitaldaemon.com>, xs0 says...
 
What happens to our short, int, long language types when 256-bit 
processors come along?  We'd find it hard to address a 16-bit integer in 
that system limited to only three type names.
Exactly, what would happen? Would "we" have to engineer another language, would it be D v2 :P? Certainly platform-dependent integral types are THE choice. Aliases of the type intXXX would be necessary always.
I think you guys are exaggerating the problem. Even 64-bit CPUs were developed (afaik) mainly because of the need to cleanly address more than 4GB of RAM, not because there's some overwhelming need for 64-bit calculations. Considering how much RAM/disk/whatever 2^64 is, I don't think anyone will need a CPU that is 128-bit, let alone 256-bit any time soon (and even if developed because of marketing purposes, I see no reason to use 32-byte variables to have loop counters from 0 to 99).
Some people said the same thing about 32-bit machines before those were developed, and now we have 64-bit CPUs. Plus, I'm sure 128/256-bit CPUs already exist nowadays, maybe not for home PCs, but who says D only has to run on home computers? For example, the PlayStation2 platform is built upon a 128-bit CPU!
Oh, come on. The reason for 64-bit registers is purely practical: computer programs need more memory (4GB limit -> roughly 17_000_000_000GB). Another reason is that most systems have Y2038 problems: http://en.wikipedia.org/wiki/Year_2038_problem

Current technology road maps say that memory capacity will follow Moore's law for another 20-30 years. Currently the D specification even supports 128-bit registers. It means that D programs will work for at least another 60 years without any performance issues! The 64-bit time fields won't wrap around until the world explodes!

I don't know about PS2, but I believe it needs 128-bit registers to achieve bigger bandwidth. I don't think this concerns you unless you're writing a fast memcpy(). Usually this code is done in assembly => you don't need high-level 128-bit ints.
 
 
Now, if you have a working app on a 32-bit platform and you move it to a 
64-bit platform, is it any help if int becomes 64 bit? No, because if it 
was big enough before, it's big enough now (with the notable exception 
of memory locations and sizes, which are taken care of with size_t and 
ptrdiff_t). Does it hurt? It sure can, as sizes of objects all over the 
place will change, breaking any interface to outside-the-app.
I can't agree with this. You port an app to 64-bit and rebuild it as a 64-bit edition, not necessarily expecting to work interfacing against a 32-bit version. Besides, why are you so sure that moving to 64-bits won't be much of a gain? "If it was big enough before, it's big enough now"???? Be careful, your ported app will still work, but'll take no benefit of the upgraded processor! If your app focus on std int performance to work better, this is much of a problem.
He means that you can now port your applications without any modifications (it may not give the highest performance, but at least it works - actually, smart compilers will optimize your code for 64 bits anyway). If you explicitly want to write high-performance 64-bit code, use compile-time logic structures and aliases. Besides, with machine-dependent ints you would need to check your code in either case.
Nov 22 2005
parent reply Tomás Rossi <Tomás_member pathlink.com> writes:
In article <dlvi6c$278u$1 digitaldaemon.com>,
Jari-Matti Mäkelä says...
Tomás Rossi wrote:
 In article <dlv33k$1o7u$1 digitaldaemon.com>, xs0 says...
 
What happens to our short, int, long language types when 256-bit 
processors come along?  We'd find it hard to address a 16-bit integer in 
that system limited to only three type names.
Exactly, what would happen? Would "we" have to engineer another language, would it be D v2 :P? Certainly platform-dependent integral types are THE choice. Aliases of the type intXXX would be necessary always.
I think you guys are exaggerating the problem. Even 64-bit CPUs were developed (afaik) mainly because of the need to cleanly address more than 4GB of RAM, not because there's some overwhelming need for 64-bit calculations. Considering how much RAM/disk/whatever 2^64 is, I don't think anyone will need a CPU that is 128-bit, let alone 256-bit any time soon (and even if developed because of marketing purposes, I see no reason to use 32-byte variables to have loop counters from 0 to 99).
Some people said the same thing about 32-bit machines before those were developed, and now we have 64-bit CPUs. Plus, I'm sure 128/256-bit CPUs already exist nowadays, maybe not for home PCs, but who says D only has to run on home computers? For example, the PlayStation2 platform is built upon a 128-bit CPU!
Oh, come on. The reason for 64-bit registers is purely practical: computer programs need more memory (4GB limit -> roughly 17_000_000_000GB). Another reason is that most systems have Y2038 problems: http://en.wikipedia.org/wiki/Year_2038_problem

Current technology road maps say that memory capacity will follow Moore's law for another 20-30 years. Currently the D specification even supports 128-bit registers. It means that D programs will work for at least another 60 years without any performance issues! The 64-bit time fields won't wrap around until the world explodes!
I don't know about PS2, but I believe it needs 128-bit registers to 
achieve bigger bandwidth. I don't think this concerns you unless you're 
writing a fast memcpy(). Usually this code is done in assembly => you 
don't need high-level 128-bit ints.

 
 
Now, if you have a working app on a 32-bit platform and you move it to a 
64-bit platform, is it any help if int becomes 64 bit? No, because if it 
was big enough before, it's big enough now (with the notable exception 
of memory locations and sizes, which are taken care of with size_t and 
ptrdiff_t). Does it hurt? It sure can, as sizes of objects all over the 
place will change, breaking any interface to outside-the-app.
I can't agree with this. You port an app to 64-bit and rebuild it as a 64-bit edition, not necessarily expecting it to interface with a 32-bit version. Besides, why are you so sure that moving to 64 bits won't be much of a gain? "If it was big enough before, it's big enough now"?! Be careful: your ported app will still work, but it will take no benefit from the upgraded processor! If your app depends on standard-int performance, that is a real problem.
He means that now you can port your applications without any modifications (may not be highest performance, but works at least - actually smart compilers will optimize your code to 64 bits anyway). If you explicitly want to make high-performance 64-bit code, use compile-time logic structures and aliases. Besides, with machine-dependent ints you would need to check your code in either case.
I know what he meant, and I know you can do all the alias stuff (you should read the other posts). Still, even granting your point (and xs0's), I like the platform-dependent approach because IMO it makes the language more platform independent and less tied to the current computation model. I understand it's impossible to make a language that suits everybody's taste perfectly. Regards Tom
Nov 22 2005
parent Sean Kelly <sean f4.ca> writes:
Tomás Rossi wrote:
 He means that now you can port your applications without any 
 modifications (may not be highest performance, but works at least - 
 actually smart compilers will optimize your code to 64 bits anyway). If 
 you explicitly want to make high-performance 64-bit code, use 
 compile-time logic structures and aliases. Besides, with 
 machine-dependent ints you would need to check your code in either case.
I know what he meant, and I know you can do all the alias stuff (you should read the other posts). Still, even granting your point (and xs0's), I like the platform-dependent approach because IMO it makes the language more platform independent and less tied to the current computation model.
For what it's worth, the C99 stdint header is available here: http://svn.dsource.org/projects/ares/trunk/src/ares/std/c/stdint.d It might serve as a good starting point for someone looking to experiment with platform-dependent types and such. Sean
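Roughly, it boils down to a page of aliases over D's already fixed-size built-ins, along these lines (the names follow the C99 convention; the actual module may differ in detail):

// C99-style fixed-width names as plain aliases.
// D's built-in integer types already have fixed sizes, so no magic is needed.
alias byte   int8_t;
alias ubyte  uint8_t;
alias short  int16_t;
alias ushort uint16_t;
alias int    int32_t;
alias uint   uint32_t;
alias long   int64_t;
alias ulong  uint64_t;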
Nov 22 2005
prev sibling next sibling parent reply Don Clugston <dac nospam.com.au> writes:
Tomás Rossi wrote:
 In article <dlv33k$1o7u$1 digitaldaemon.com>, xs0 says...
 
What happens to our short, int, long language types when 256-bit 
processors come along?  We'd find it hard to address a 16-bit integer in 
that system limited to only three type names.
Exactly, what would happen? Would "we" have to engineer another language, would it be D v2 :P? Certainly platform-dependent integral types are THE choice. Aliases of the type intXXX would be necessary always.
I think you guys are exaggerating the problem. Even 64-bit CPUs were developed (afaik) mainly because of the need to cleanly address more than 4GB of RAM, not because there's some overwhelming need for 64-bit calculations. Considering how much RAM/disk/whatever 2^64 is, I don't think anyone will need a CPU that is 128-bit, let alone 256-bit any time soon (and even if developed because of marketing purposes, I see no reason to use 32-byte variables to have loop counters from 0 to 99).
The same thing said some people about 32-bit machines before those were developed and now we have 64-bit CPUs. Plus, I´m sure that already exists 128/256-bit CPUs nowadays, maybe not for home PCs, but who say D only has to run on home computers? For example, the PlayStation2 platform is builded upon a 128-bit CPU!
No, it's a fundamentally different situation. We're running up against the laws of physics. 2^64 is a fantastically large number. (a) Address buses. If you could store one bit of RAM per silicon atom, a memory chip big enough to require 65 bit addressing would be one cubic centimetre in size. Consider that existing memory chips are only 2D, and you need wiring to connect to each bit. Even if the cooling issues are overcome, it's really hard to imagine that happening. A memory chip big enough to require 129 bit addressing would be larger than the planet. The point is that we're approaching Star Trek territory. The Enterprise computer probably only has a 128 bit address bus. Many people think that doubling the number of bits is exponential growth. It's not; adding one more bit is exponential growth! Doubling the bits is exp(exp(x)), which is a frighteningly fast function. Faster than a factorial! (b) Data buses. I began programming with 8 bit data registers. That was limiting. Almost all useful numbers are greater than 256. 16 bits was better, but still many quantities are > 65536. But almost everything useful fits into a 32 bit register. 32 bits really is a natural size for an integer. The % of applications where each of these is inadequate is decreasing exponentially. Very few applications need 128 bit integers. But almost everything can use increased parallelism, hence 128 bit SIMD instructions. But they're still only using 32 bit integers. I think that even the 60 year timeframe for 128 bit address buses is a bit optimistic. (But I think we will see 1024-core processors long before then. Maybe even by 2015. And we'll see 64K core systems).
Nov 23 2005
parent reply Georg Wrede <georg.wrede nospam.org> writes:
Don Clugston wrote:
 Tomás Rossi wrote:
 In article <dlv33k$1o7u$1 digitaldaemon.com>, xs0 says...
 
 What happens to our short, int, long language types when
 256-bit processors come along?  We'd find it hard to address
 a 16-bit integer in that system limited to only three type
 names.
Exactly, what would happen? Would "we" have to engineer another language, would it be D v2 :P? Certainly platform-dependent integral types are THE choice. Aliases of the type intXXX would be necessary always.
I think you guys are exaggerating the problem. Even 64-bit CPUs were developed (afaik) mainly because of the need to cleanly address more than 4GB of RAM, not because there's some overwhelming need for 64-bit calculations. Considering how much RAM/disk/whatever 2^64 is, I don't think anyone will need a CPU that is 128-bit, let alone 256-bit any time soon (and even if developed because of marketing purposes, I see no reason to use 32-byte variables to have loop counters from 0 to 99).
(Hmm, so for(byte b=0; b<99; b++) is what one writes today?) ;-) One thing that comes to mind is cryptography. Doing serious encrypting on the fly would benefit from having, say, 1024 bit processors. Oh yes, and the NSA and other spooks really need double the width that everyone else has. This is a law of nature. :-) I remember reading about a graphics card that had a 256 bit cpu. This was so long ago that I think it's on the market already.
 The same thing said some people about 32-bit machines before those
 were developed and now we have 64-bit CPUs. Plus, I´m sure that
 already exists 128/256-bit CPUs nowadays, maybe not for home PCs,
 but who say D only has to run on home computers? For example, the
 PlayStation2 platform is builded upon a 128-bit CPU!
Anybody remember the IBM boss? Or the Intel boss? Or Bill? They all said "xxx ought to be plenty enough forever".
 No, it's a fundamentally different situation. We're running up
 against the laws of physics. 2^64 is a fantastically large number.
 
 (a) Address buses. If you could store one bit of RAM per silicon
 atom, a memory chip big enough to require 65 bit addressing would be
 one cubic centimetre in size. Consider that existing memory chips are
 only 2D, and you need wiring to connect to each bit. Even if the
 cooling issues are overcome, it's really hard to imagine that
 happening. A memory chip big enough to require 129 bit addressing
 would be larger than the planet.
 
 The point is, that we're approaching Star Trek territory. The
 Enterprise computer probably only has a 128 bit address bus.
 
 Many people think that doubling the number of bits is exponential 
 growth. It's not. Adding one more bit is exponential growth! It's
 exp(exp(x)) which is frighteningly fast function. Faster than a 
 factorial!
 
 (b) Data buses
 
 I began programming with 8 bit data registers. That was limiting.
 Almost all useful numbers are greater than 256. 16 bits was better,
 but still many quanties are > 65536. But almost everything useful
 fits into a 32 bit register. 32 bits really is a natural size for an
 integer. The % of applications where each of these is inadequate is
 decreasing exponentially. Very few applications need 128 bit
 integers.
 
 But almost everything can use increased parallellism, hence 128 bit
 SIMD instructions. But they're still only using 32 bit integers.
 
 I think that even the 60 year timeframe for 128 bit address buses is
 a bit optimistic. (But I think we will see 1024-core processors long
  before then. Maybe even by 2015. And we'll see 64K core systems).
Depends. I started doing programming on an HP handheld. It had a 4 bit cpu. (Yes, four bits.) Its address bus was wider, though. Programming it was done in assembly, although they never said it in the manual, probably so as not to frighten folks away. My next assembly I wrote on the 6502, which was an 8 bit cpu. The address bus was 16 bits. Then I went on to the PC, which was touted as a 16 bit machine. True, the 8086 was 16 bits, but because that needed an expensive motherboard and memory hardware, a cheaper version was built for the masses, the 8088, so we had 16 bit PCs with an 8 bit data bus. Slow yes, but cheaper. But still 16 bit. (The software never knew.) Currently (I believe) none of the 64 bit cpus actually have address buses that are 64 bits wide. Nobody is the wiser, but when you go to the pc vendor and ask how much memory one can put on this or that 64 bit PC, the usual answer is like "16GB". It is also conceivable (somebody here know the fact?) that most of those 64 bit modern PCs actually use a 32 bit data bus. So, historically, the data bus, the address buss, and the accumulator (where integer math is done, and the width of which is often taken to be the "width of the cpu") have usually not all had the same width -- although folks seem to believe so. --- What we however do need, is a virtual address space that is large enough to accommodate the most demanding applications and data. This makes writing software (and especially operating systems and compilers) a lot easier, because we then don't have to start constructing kludges for the situations where we bang into the end of the memory range. (This is a separate issue from running out of memory.) --- The one thing that guarantees us a 256 bit cpu on everybody's table top eventually, has nothing to do with computers per se. It's to do with the structure of the Western society, and also with our inborn genes. (What???) First, the society thing: the world (within the foreseeable future) is based on capitalism. (I'm no commie, so it's fine with me.) This in itself makes vendors compete. And that's good, otherwise we'd still all be driving T-Fords. But this leads to bigger, faster, fancier, cooler, ... ad absurdum. Laws of physics, bah. In the eighties, it was common knowledge that we wouldn't have hard disks by the time a hard disk goes beyond gigabyte size. It was supposed to be physically impossible. It would have to be some kind of solid state tech instead. And now I read about Nokia phones having internal hard disks with multi gig capacity. Second, it's in our genes. There's a revealing commercial on my TV: day care kids on a break. "My mom makes better food than yours." "My mom makes better food than both of your mothers." A teenager walks by and says "My mother makes better food than any of yours." And then this 2-year old from the other kindergarten says over the fence: "My mother makes the food all your mothers serve." (Think Nestle, Kraft, whatever.) Suits and non-nerds live to brag. "A trillion bit cpu? Naaa, get out of here!" That day is nearer than Doomsday. I don't even bother to bet on it, would be like stealing the money. --- Ever heard "no matter how big your garage, it fills up with crap, and soon your car stays outside"? Ever heard "it makes no difference how big your hard disk is, it takes the same amount of time before it gets full"? Ever heard "it makes no difference how fast the PC is, the next Windows puts it on its knees anyhow"? --- Better computer games, anybody? 
Who wouldn't like to have a 6-month accurate weather forecast in the pocket the day Boss asks when you'd like to have this year's vacation? We need a good computer model of the human body, so we don't have to kill mice, pigs and apes whenever a new drug is developed. Earthquakes, anybody? Speech recognition that actually works? How about a computer that the law makers can ask every time they've invented a new law? They could get an answer to how the new law _really_ would impact citizens, tax revenue, and other laws!
Nov 23 2005
next sibling parent reply Jari-Matti Mäkelä <jmjmak invalid_utu.fi> writes:
Georg Wrede wrote:
 One thing that comes to mind is cryptography. Doing serious encrypting 
 on the fly would benefit from having, say, 1024 bit processors.
I'm no CPU-engineer, but I think there must be a trade-off between extremely huge hardware registers & other cpu optimizations like parallelism. Still current apm-libraries run significantly faster on 64-bit CPUs: http://www.swox.com/gmp/32vs64.html
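The gain is easy to see from the shape of the inner loop: the limb is one machine word, so 64-bit limbs mean half the iterations and half the carries. A rough sketch (the Limb alias and the function are only an illustration, not GMP's actual code):

alias uint Limb;   // 32-bit limbs; on a 64-bit target this would be ulong

// dst = a + b for equal-length little-endian limb arrays; returns the final carry.
// With 64-bit limbs the same numbers take half as many iterations.
Limb addLimbs(Limb[] dst, Limb[] a, Limb[] b)
{
    Limb carry = 0;
    for (size_t i = 0; i < a.length; i++)
    {
        Limb s = a[i] + carry;
        carry = (s < carry) ? 1 : 0;   // wrapped while adding the carry?
        s += b[i];
        if (s < b[i])
            carry = 1;                 // wrapped while adding b[i]?
        dst[i] = s;
    }
    return carry;
}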
 Anybody remember the IBM boss? Or the Intel boss? Or Bill? They all said 
 "xxx ought to be plenty enough forever".
 
So you would like to have bigger address space than it is possible to physically implement using all the material we have on Earth?!
 Currently (I believe) none of the 64 bit cpus actually have address 
 buses that are 64 bits wide. Nobody is the wiser, but when you go to the 
 pc vendor and ask how much memory one can put on this or that 64 bit PC, 
 the usual answer is like "16GB".
These 64-bit machines cannot use traditional page tables, otherwise you would end up filling all your available memory with page tables. I think there may be other addressing problems as well, but at least you need to use inverted page tables.
 The one thing that guarantees us a 256 bit cpu on everybody's table top 
 eventually, has nothing to do with computers per se. It's to do with the 
 structure of the Western society, and also with our inborn genes.
 
 Better computer games, anybody?
The best computer games are usually small enough to fit on a single floppy disk. :)
 
 Speech recognition that actually works?
AFAIK modern speech synthesis and recognition are based on linguistics & highly sophisticated neural networks. In fact you need less space than those cut'n'paste things a decade ago. Jari-Matti
Nov 23 2005
parent Georg Wrede <georg.wrede nospam.org> writes:
Jari-Matti Mäkelä wrote:
 Georg Wrede wrote:
 
 One thing that comes to mind is cryptography. Doing serious
 encrypting on the fly would benefit from having, say, 1024 bit
 processors.
I'm no CPU-engineer, but I think there must be a trade-off between extremely huge hardware registers & other cpu optimizations like parallelism. Still current apm-libraries run significantly faster on 64-bit CPUs: http://www.swox.com/gmp/32vs64.html
Excellent example!
 Anybody remember the IBM boss? Or the Intel boss? Or Bill? They all
 said "xxx ought to be plenty enough forever".
So you would like to have bigger address space than it is possible to physically implement using all the material we have on Earth?!
Weren't we supposed to colonize other planets too? But seriously, the day a machine with "too much" address space gets brought into the software office, the Pointy Haired boss decrees every developer his own address space. And thereafter it gets all kinds of uses we just don't have the time to invent now. Not that it's what I want -- it's what's gonna happen.
 Better computer games, anybody?
The best computer games are usually small enough to fit on a single floppy disk. :)
For us here, yes!
 Speech recognition that actually works?
AFAIK modern speech synthesis and recognition are based on linguistics & highly sophisticated neural networks. In fact you need less space than those cut'n'paste things a decade ago.
That actually works, was the phrase. :-)
Nov 23 2005
prev sibling next sibling parent reply Tom <Tom_member pathlink.com> writes:
In article <43848FF5.10909 nospam.org>, Georg Wrede says...
Don Clugston wrote:
 Tomás Rossi wrote:
 In article <dlv33k$1o7u$1 digitaldaemon.com>, xs0 says...
 
 What happens to our short, int, long language types when
 256-bit processors come along?  We'd find it hard to address
 a 16-bit integer in that system limited to only three type
 names.
Exactly, what would happen? Would "we" have to engineer another language, would it be D v2 :P? Certainly platform-dependent integral types are THE choice. Aliases of the type intXXX would be necessary always.
I think you guys are exaggerating the problem. Even 64-bit CPUs were developed (afaik) mainly because of the need to cleanly address more than 4GB of RAM, not because there's some overwhelming need for 64-bit calculations. Considering how much RAM/disk/whatever 2^64 is, I don't think anyone will need a CPU that is 128-bit, let alone 256-bit any time soon (and even if developed because of marketing purposes, I see no reason to use 32-byte variables to have loop counters from 0 to 99).
(Hmm, so for(byte b=0; b<99; b++) is what one writes today?) ;-) One thing that comes to mind is cryptography. Doing serious encrypting on the fly would benefit from having, say, 1024 bit processors. Oh yes, and the NSA and other spooks really need double the width that everyone else has. This is a law of nature. :-) I remember reading about a graphics card that had a 256 bit cpu. This was so long ago that I think it's on the market already.
 The same thing said some people about 32-bit machines before those
 were developed and now we have 64-bit CPUs. Plus, I´m sure that
 already exists 128/256-bit CPUs nowadays, maybe not for home PCs,
 but who say D only has to run on home computers? For example, the
 PlayStation2 platform is builded upon a 128-bit CPU!
Anybody remember the IBM boss? Or the Intel boss? Or Bill? They all said "xxx ought to be plenty enough forever".
 No, it's a fundamentally different situation. We're running up
 against the laws of physics. 2^64 is a fantastically large number.
 
 (a) Address buses. If you could store one bit of RAM per silicon
 atom, a memory chip big enough to require 65 bit addressing would be
 one cubic centimetre in size. Consider that existing memory chips are
 only 2D, and you need wiring to connect to each bit. Even if the
 cooling issues are overcome, it's really hard to imagine that
 happening. A memory chip big enough to require 129 bit addressing
 would be larger than the planet.
 
 The point is, that we're approaching Star Trek territory. The
 Enterprise computer probably only has a 128 bit address bus.
 
 Many people think that doubling the number of bits is exponential 
 growth. It's not. Adding one more bit is exponential growth! It's
 exp(exp(x)) which is frighteningly fast function. Faster than a 
 factorial!
 
 (b) Data buses
 
 I began programming with 8 bit data registers. That was limiting.
 Almost all useful numbers are greater than 256. 16 bits was better,
 but still many quanties are > 65536. But almost everything useful
 fits into a 32 bit register. 32 bits really is a natural size for an
 integer. The % of applications where each of these is inadequate is
 decreasing exponentially. Very few applications need 128 bit
 integers.
 
 But almost everything can use increased parallellism, hence 128 bit
 SIMD instructions. But they're still only using 32 bit integers.
 
 I think that even the 60 year timeframe for 128 bit address buses is
 a bit optimistic. (But I think we will see 1024-core processors long
  before then. Maybe even by 2015. And we'll see 64K core systems).
Depends. I started doing programming on an HP handheld. It had a 4 bit cpu. (Yes, four bits.) Its address bus was wider, though. Programming it was done in assembly, although they never said it in the manual, probably so as not to frighten folks away. My next assembly I wrote on the 6502, which was an 8 bit cpu. The address bus was 16 bits. Then I went on to the PC, which was touted as a 16 bit machine. True, the 8086 was 16 bits, but because that needed an expensive motherboard and memory hardware, a cheaper version was built for the masses, the 8088, so we had 16 bit PCs with an 8 bit data bus. Slow yes, but cheaper. But still 16 bit. (The software never knew.) Currently (I believe) none of the 64 bit cpus actually have address buses that are 64 bits wide. Nobody is the wiser, but when you go to the pc vendor and ask how much memory one can put on this or that 64 bit PC, the usual answer is like "16GB". It is also conceivable (somebody here know the fact?) that most of those 64 bit modern PCs actually use a 32 bit data bus. So, historically, the data bus, the address buss, and the accumulator (where integer math is done, and the width of which is often taken to be the "width of the cpu") have usually not all had the same width -- although folks seem to believe so. --- What we however do need, is a virtual address space that is large enough to accommodate the most demanding applications and data. This makes writing software (and especially operating systems and compilers) a lot easier, because we then don't have to start constructing kludges for the situations where we bang into the end of the memory range. (This is a separate issue from running out of memory.) --- The one thing that guarantees us a 256 bit cpu on everybody's table top eventually, has nothing to do with computers per se. It's to do with the structure of the Western society, and also with our inborn genes. (What???) First, the society thing: the world (within the foreseeable future) is based on capitalism. (I'm no commie, so it's fine with me.) This in itself makes vendors compete. And that's good, otherwise we'd still all be driving T-Fords. But this leads to bigger, faster, fancier, cooler, ... ad absurdum. Laws of physics, bah. In the eighties, it was common knowledge that we wouldn't have hard disks by the time a hard disk goes beyond gigabyte size. It was supposed to be physically impossible. It would have to be some kind of solid state tech instead. And now I read about Nokia phones having internal hard disks with multi gig capacity. Second, it's in our genes. There's a revealing commercial on my TV: day care kids on a break. "My mom makes better food than yours." "My mom makes better food than both of your mothers." A teenager walks by and says "My mother makes better food than any of yours." And then this 2-year old from the other kindergarten says over the fence: "My mother makes the food all your mothers serve." (Think Nestle, Kraft, whatever.) Suits and non-nerds live to brag. "A trillion bit cpu? Naaa, get out of here!" That day is nearer than Doomsday. I don't even bother to bet on it, would be like stealing the money. --- Ever heard "no matter how big your garage, it fills up with crap, and soon your car stays outside"? Ever heard "it makes no difference how big your hard disk is, it takes the same amount of time before it gets full"? Ever heard "it makes no difference how fast the PC is, the next Windows puts it on its knees anyhow"? --- Better computer games, anybody? 
Who wouldn't like to have a 6-month accurate weather forecast in the pocket the day Boss asks when you'd like to have this year's vacation? We need a good computer model of the human body, so we don't have to kill mice, pigs and apes whenever a new drug is developed. Earthquakes, anybody? Speech recognition that actually works? How about a computer that the law makers can ask every time they've invented a new law? They could get an answer to how the new law _really_ would impact citizens, tax revenue, and other laws!
You're out of your mind man but I like it :P I don't like binding short/int/long to any specific size because we don't know for sure what will happen in the coming years... maybe with future quantum computers 32-bit integers will turn out to be a ridiculously small precision, or may not even exist anymore. Fixing integer widths instead of making them platform-specific makes D a lot more "unscalable", and it'd be nice if D could be used on distant-future platforms as well, without changing its spec. Tom
Nov 23 2005
next sibling parent reply xs0 <xs0 xs0.com> writes:
Tom wrote:

 I don't like to bound short/int/long to any specific size because we don't know
 for sure what will happen in the forecoming years... maybe with future quantum
 computers 32-bit integers would end to be a ridiculous small precision to use
or
 may even just not exist anymore. Not making integers width platform-specific
 makes D a lot more "unscalable" and it'll be nice that D could be used in
 distant future platforms as well, without changing it's spec.   
Well, if I understand the article http://www.iiap.res.in/outreach/blackhole5.html correctly, any device can only process 10^44 bits a second (where any device means _any_ device, even the entire universe), so even in a trillion years, you can only get about 10^63 bits processed, which is about 2^210. Considering how much smaller part of the time-space we are, and how the universe is not trying hard to produce information useful to humans, I think it's safe to say we'll _never_ need more than 128-bit addressing, at least in this universe :) As for data itself, can you think of any single quantity one would want to commonly represent in a computer using more than 128 bits? If not, D has it all covered (btw, also note that you can't measure things with infinite precision (again regardless of technology), so something like a 2048-bit double for that extra precision is not a good answer, at least if you're not into marketing ;) xs0
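Quick sanity check on the 2^210 figure: a trillion years is about 3.2e19 seconds, so 10^44 bits/s gives roughly 3e63 bits, and log2 of that is around 211. A throwaway check in D:

import std.math;
import std.stdio;

void main()
{
    double bits = 1e44 * 3.2e19;   // bits per second times seconds in a trillion years
    writefln("%g bits, about 2^%g", bits, log2(bits));   // ~3.2e63, ~211
}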
Nov 23 2005
parent reply Georg Wrede <georg.wrede nospam.org> writes:
xs0 wrote:
 
 Well, if I understand the article
 
 http://www.iiap.res.in/outreach/blackhole5.html
That article didn't account for the fact that because of space-time curvature under high gravitation, the actual volume of the black hole is larger than what appears when one only looks at the diameter.
 correctly, any device can only process 10^44 bits a second (where any 
 device means _any_ device, even the entire universe), so even in a 
 trillion years, you can only get about 10^63 bits processed, which is 
 about 2^210. Considering how much smaller part of the time-space we are, 
 and how the universe is not trying hard to produce information useful to 
 humans, I think it's safe to say we'll _never_ need more than 128-bit 
 addressing, at least in this universe :)
So, your figures are off by a factor of ln(2^(V/v)*c^5), where V and v are the real and the apparent volume, respectively, and c is the speed of light in vacuum.
 As for data itself, can you think of any single quantity one would want 
 to commonly represent in a computer using more than 128 bits? If not, D 
 has it all covered (btw, also note that you can't measure things with 
 infinite precision (again regardless of technology), so something like a 
 2048-bit double for that extra precision is not a good answer, at least 
 if you're not into marketing ;)
Na, just kidding.
Nov 23 2005
parent reply Tom <Tom_member pathlink.com> writes:
In article <4384DE09.7060301 nospam.org>, Georg Wrede says...
xs0 wrote:
 
 Well, if I understand the article
 
 http://www.iiap.res.in/outreach/blackhole5.html
That article didn't account for the fact that because of space-time curvature under high gravitation, the actual volume of the black hole is larger than what appears when one only looks at the diameter.
BUAAAAHHHAHAHAHAH!
 correctly, any device can only process 10^44 bits a second (where any 
 device means _any_ device, even the entire universe), so even in a 
 trillion years, you can only get about 10^63 bits processed, which is 
 about 2^210. Considering how much smaller part of the time-space we are, 
 and how the universe is not trying hard to produce information useful to 
 humans, I think it's safe to say we'll _never_ need more than 128-bit 
 addressing, at least in this universe :)
So, your figures are off by a factor of ln(2^(V/v)*c^5), where V and v are the real and the apparent volume, respectively, and c is the speed of light in vacuum.
BUAHAHAHAHAHAHAHHAAHHAHAHAHAHAHAHA! (I can't describe with onomatopoeias how I laughed after reading this, you definitively brought JOY to my day :D !!)
 As for data itself, can you think of any single quantity one would want 
 to commonly represent in a computer using more than 128 bits? If not, D 
 has it all covered (btw, also note that you can't measure things with 
 infinite precision (again regardless of technology), so something like a 
 2048-bit double for that extra precision is not a good answer, at least 
 if you're not into marketing ;)
Na, just kidding.
Ok, enough, I can't bear it anymore! I promise not to post about this issue ever again, no matter what I really think about the subject! I'll be happy with the current D way and I'll pray every night for it to stay just as it is. God bless Walter and his magnificent omnipotent language... Just stop with all that PHYSICS crap!! :P PS: I can prove that God really exists, it exists because of: http://www.iiap.res.in/outreach/blackhole5.html :P Tom
Nov 23 2005
parent Georg Wrede <georg.wrede nospam.org> writes:
Tom wrote:
 In article <4384DE09.7060301 nospam.org>, Georg Wrede says...
 
Na, just kidding.
Ok, enough, I can't bear it anymore!
Thanks! Your comments made me laugh so hard I had tears in my eyes! georg
Nov 23 2005
prev sibling parent Georg Wrede <georg.wrede nospam.org> writes:
Tom wrote:
 In article <43848FF5.10909 nospam.org>, Georg Wrede says...
 
 Better computer games, anybody?
 
 Who wouldn't like to have a 6-month accurate weather forecast in
 the pocket the day Boss asks when you'd like to have this year's
 vacation?
 
 We need a good computer model of the human body, so we don't have
 to kill mice, pigs and apes whenever a new drug is developed.
 
 Earthquakes, anybody?
 
 Speech recognition that actually works?
 
 How about a computer that the law makers can ask every time they've
  invented a new law? They could get an answer to how the new law
 _really_ would impact citizens, tax revenue, and other laws!
You're out of your mind man but I like it :P
Gotta be, DMD is getting to be the absolutely most expensive compiler I've ever used!
 I don't like to bound short/int/long to any specific size because we
 don't know for sure what will happen in the forecoming years... maybe
 with future quantum computers 32-bit integers would end to be a
 ridiculous small precision to use or may even just not exist anymore.
 Not making integers width platform-specific makes D a lot more
 "unscalable" and it'll be nice that D could be used in distant future
 platforms as well, without changing it's spec.
The current crop of int size definitions (damn, I just forgot where they were in the D documentation) is adequate, for the time being. No code breaks if we later add a few wider integral types, so that is not a problem. And anywhere it does make a difference to the programmer, he will choose a specific width for his integers anyhow. (Within his own imagination and foresight, of course.) But anywhere it does not make a difference (like in for-loop indexes, etc.), he'll use whatever is the fastest anyhow. I see no problem with this. And if you're a heavy duty mathematician/programmer writing a new GMP library, then -- compared to the overall effort -- it is a tiny thing to write a couple of conditional typedefs right at the beginning, so your code works ok on several CPU widths or endiannesses.
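For instance, the endianness half of those conditional typedefs is only a few lines in D (the toNet32 name is made up for the example; BigEndian/LittleEndian are the predefined version identifiers):

// Decide once how a 32-bit value gets into network (big-endian) byte order.
version (BigEndian)
{
    uint toNet32(uint x) { return x; }            // already in network order
}
else
{
    uint toNet32(uint x)                          // little-endian host: swap bytes
    {
        return (x >>> 24)
             | ((x >> 8) & 0x0000FF00)
             | ((x << 8) & 0x00FF0000)
             | (x << 24);
    }
}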
Nov 23 2005
prev sibling parent reply Don Clugston <dac nospam.com.au> writes:
Georg Wrede wrote:
 
 The same thing said some people about 32-bit machines before those
 were developed and now we have 64-bit CPUs. Plus, I´m sure that
 already exists 128/256-bit CPUs nowadays, maybe not for home PCs,
 but who say D only has to run on home computers? For example, the
 PlayStation2 platform is builded upon a 128-bit CPU!
Anybody remember the IBM boss? Or the Intel boss? Or Bill? They all said "xxx ought to be plenty enough forever".
With respect, Bill was an idiot, and it was obvious at the time. The infamous "64K will be enough for everyone" was made at a time when mainframes already had far more RAM than that, and were already increasing. From memory, some of the machines with ferrite core memories had 16K of RAM.
 The point is, that we're approaching Star Trek territory. The
 Enterprise computer probably only has a 128 bit address bus.
 But almost everything can use increased parallellism, hence 128 bit
 SIMD instructions. But they're still only using 32 bit integers.

 I think that even the 60 year timeframe for 128 bit address buses is
 a bit optimistic. (But I think we will see 1024-core processors long
  before then. Maybe even by 2015. And we'll see 64K core systems).
Depends.
 It is also conceivable (somebody here know the fact?) that most of those 
 64 bit modern PCs actually use a 32 bit data bus.
Actually many of the 32 bit Pentiums use a 64 bit or 128 bit data bus!
 So, historically, the data bus, the address buss, and the accumulator 
 (where integer math is done, and the width of which is often taken to be 
 the "width of the cpu") have usually not all had the same width -- 
 although folks seem to believe so.
All this is true. There's no intrinsic restriction on the width of registers. But there are just not many uses for really large fixed-precision numbers. (For arbitrary precision, there are; and the main use of wide arithmetic is to speed up the arbitrary precision case).
 
 What we however do need, is a virtual address space that is large enough 
 to accommodate the most demanding applications and data. This makes 
 writing software (and especially operating systems and compilers) a lot 
 easier, because we then don't have to start constructing kludges for the 
 situations where we bang into the end of the memory range. (This is a 
 separate issue from running out of memory.)
 ---
 
 The one thing that guarantees us a 256 bit cpu on everybody's table top 
 eventually, has nothing to do with computers per se. It's to do with the 
 structure of the Western society, and also with our inborn genes.
 
 (What???) First, the society thing: the world (within the foreseeable 
 future) is based on capitalism. (I'm no commie, so it's fine with me.) 
 This in itself makes vendors compete. And that's good, otherwise we'd 
 still all be driving T-Fords.
<cynic> And the US would still be using imperial measurement units.</cynic>
 But this leads to bigger, faster, fancier, cooler, ... ad absurdum. Laws 
 of physics, bah. In the eighties, it was common knowledge that we 
 wouldn't have hard disks by the time a hard disk goes beyond gigabyte 
 size. It  was supposed to be physically impossible. It would have to be 
 some kind of solid state tech instead. And now I read about Nokia phones 
 having internal hard disks with multi gig capacity.
Those arguments were not based on physics. They were based on assumptions about manufacturing technology. The technological changes required for a 128 bit address bus are so huge, the change in size_t would be irrelevant. To have 2^128 bytes of addressable RAM, you need to store more than 1 bit per atom. This means quantum computers would already be well developed. Here's how I see it: High probability: Multi-CPU systems with a huge number of cores. Cures for most cancers. Medium probability: Quantum computers. Cures for all cancers. Colonisation of other planets. Low probability: 128 bit physical address buses. :-) Sure, maybe we'll reach the end of that list. But the ones in the middle will have more impact (even on programmers!) than the last one. This is not something D needs to worry about. But thousand-core CPUs? Definitely.
Nov 24 2005
parent Georg Wrede <georg.wrede nospam.org> writes:
Don Clugston wrote:
 Georg Wrede wrote:
 
 Anybody remember the IBM boss? Or the Intel boss? Or Bill? They all
  said "xxx ought to be plenty enough forever".
With respect, Bill was an idiot, and it was obvious at the time.
<joke mode="I just couldn't resist. Don't answer these!"> With respect to me or to Bill? ;-) "was obvious at the time": Bill being an idiot, or xxx being plenty? </joke>
 It is also conceivable (somebody here know the fact?) that most of
  those 64 bit modern PCs actually use a 32 bit data bus.
Actually many of the 32 bit Pentiums use a 64 bit or 128 bit data bus!
Ah, thanks!
 In the eighties, it was common knowledge
 that we wouldn't have hard disks by the time a hard disk goes
 beyond gigabyte size. It  was supposed to be physically impossible.
Those arguments were not based on physics. They were based on assumptions about manufacturing technology.
True. They were advertised as being based on physics, though. And more to the point, on the inability to envision the advances in theoretical physics that are needed in today's magnetic storage technology.
 The technological changes required for a 128 bit address bus are so 
 huge, the change in size_t would be irrelevant.
I wouldn't skip size_t on that assumption. :-) Besides, there'll be smaller computers in the future too, like in gadgets. -- Aahh, and size_t is needed for the virtual address space, not the physical.
 To have 128 bits of addressable RAM, you need to store more than 1
 bit per atom. This means quantum computers would be already well
 developed.
True. Probably the address bus gets wider in the future at about the same rate the total memory of computers has grown historically. (What's that? Without googling around, I'd guess about 1 bit per year. Effectively doubling the RAM size each year.) The data bus might grow faster, though. Imagine being able to fetch a kilobyte at a time! (Hmmm, this issue gets totally f***ed up with level this or level that cache being on-chip, so forget the whole thing!)
 Here's how I see it:
 
 High probability: Multi-CPU systems with huge number of cores. Cures
 for most cancers.
Agreed.
 Medium probability: Quantum computers. Cures for all cancers. 
Disagreed. IMHO they'll go the way bubble memories went. And expert systems with AI. And Prolog.
 Colonisation of other planets.
:-) Yeah, I guess it's inevitable, knowing our genes! After all, we are incurable Imperialists.
 Low probability: 128 bit physical address buses.
 
 :-)
I agree!
 Sure, maybe we'll reach the end of that list. But the ones in the
 middle will have more impact (even on programmers!) than the last
 one.
 
 This is not something D needs to worry about. But thousand-core CPUs?
 Definitely.
Seriously, I think there's one category we forgot. A lot of buzz has lately been heard about using graphics processors for math and data processing. They do a decent job especially when the operations are simple and can use multiple data. (Hey, game graphics isn't much else!) I'd venture a guess that very soon we'll see expansion cards where an ATI or Nvidia chip exists just for coprocessing, i.e. without a monitor plug. Then it won't be long before they're right on the motherboard. Damn, I forgot the link, I recently visited a site dedicated to this. And already one can use the existing GPU, using their library drivers. That's parallel processing for the masses. And I honestly think a D library for such would give some nice brag value to D.
Nov 24 2005
prev sibling parent reply Mark T <Mark_member pathlink.com> writes:
As a long time C programmer I still think D made a mistake when "fixing" int
size to 32 bits for a compiled language that is supposed to be compatible with
C. We are losing the abstraction that C was designed with. 

On the other hand, you absolutely need uint8, int32, etc types for communicating
with other programs, machines, etc but these types should only be used at the
point of interface.
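For example, the interface point might look like this: the program keeps plain ints, and only the routine that touches the wire commits to a width and byte order (the function and record layout below are just an illustration):

// Wire format: a record id as exactly 4 bytes, big-endian.
// Everything above this function just passes around an ordinary int.
void putRecordId(ubyte[] wire, size_t offset, int id)
{
    uint v = cast(uint) id;            // commit to a width here, at the boundary
    wire[offset + 0] = cast(ubyte)(v >> 24);
    wire[offset + 1] = cast(ubyte)(v >> 16);
    wire[offset + 2] = cast(ubyte)(v >> 8);
    wire[offset + 3] = cast(ubyte) v;
}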
====================================================
I will quote Paul Hsieh:
"
Misconception: Using smaller data types is faster than larger ones 

The original reason int was put into the C language was so that the fastest data
type on each platform remained abstracted away from the programmer himself. On
modern 32 and 64 bit platforms, small data types like chars and shorts actually
incur extra overhead when converting to and from the default machine word sized
data type. 

On the other hand, one must be wary of cache usage. Using packed data (and in
this vein, small structure fields) for large data objects may pay larger
dividends in global cache coherence, than local algorithmic optimization issues.
"
Nov 24 2005
next sibling parent MWolf <MWolf_member pathlink.com> writes:
In article <dm4eja$9hi$1 digitaldaemon.com>, Mark T says...
As a long time C programmer I still think D made a mistake when "fixing" int
size to 32 bits for a compiled language that is supposed to be compatible with
C. We are losing the abstraction that C was designed with. 

On the other hand, you absolutely need uint8, int32, etc types for communicating
with other programs, machines, etc but these types should only be used at the
point of interface.
====================================================
I will quote Paul Hsieh:
"
Misconception: Using smaller data types is faster than larger ones 

The original reason int was put into the C language was so that the fastest data
type on each platform remained abstracted away from the programmer himself. On
modern 32 and 64 bit platforms, small data types like chars and shorts actually
incur extra overhead when converting to and from the default machine word sized
data type. 

On the other hand, one must be wary of cache usage. Using packed data (and in
this vein, small structure fields) for large data objects may pay larger
dividends in global cache coherence, than local algorithmic optimization issues.
"
As a long time C programmer I don't agree with that. I prefer having fixed types that I can alias myself if I need to. I have found that porting C code and having my basic types change size is a real headache. I prefer knowing and specifying these instances myself. If you want a generic platform-dependent integer, alias one. It's that simple.
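"Alias one" really is a one-liner, e.g. (pick whatever name your project likes):

// Project-local "natural integer for this target"; re-point it when porting.
alias int nativeInt;

void example()
{
    for (nativeInt i = 0; i < 99; i++)
    {
        // loop body never needs to know the width
    }
}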
Nov 24 2005
prev sibling next sibling parent reply Munchgreeble <"a" b.com \"munchgreeble xATx bigfoot xDOTx com\"> writes:
But surely as a long time C programmer you've also realised that 
portability is usually a far bigger issue than speed. You know what I 
mean, usually there are a couple of sections of code that need to be 
done up real tight. These often end up with assembly language in them 
etc. and are always unportable. That's OK, you just fix those 
non-portable bits when you move platform. What is a pain though, is 
having your *main* integer type in non-portable form, so that you risk 
randomly breaking stuff all over your code when you e.g. move your code 
onto a 16-bit microcontroller.

I can understand C being the way round that it is, CPU cycles were 
somewhat scarcer 30 years ago, and portability was little thought of. 
Nowadays though portability is a big issue. You can still use "alias" to 
define a "fastint" type if you need to, but it's the exception rather 
than the rule. Would I like "fastint" (or some similar) to be built in 
to D, so that I didn't have to make an alias? Sure. If I had to choose 
between the D way and the C way though, I'd choose the D way every time. 
I rarely have to worry about the speed impacts of using the wrong sized 
integer type, but portability, networking/serialisation issues come up 
on virtually everything I've worked on recently.

Times have changed.

Just my tuppence

Munch

Mark T wrote:
 As a long time C programmer I still think D made a mistake when "fixing" int
 size to 32 bits for a compiled language that is supposed to be compatible with
 C. We are losing the abstraction that C was designed with. 
 
 On the other hand, you absolutely need uint8, int32, etc types for
communicating
 with other programs, machines, etc but these types should only be used at the
 point of interface.
 ====================================================
 I will quote Paul Hsieh:
 "
 Misconception: Using smaller data types is faster than larger ones 
 
 The original reason int was put into the C language was so that the fastest
data
 type on each platform remained abstracted away from the programmer himself. On
 modern 32 and 64 bit platforms, small data types like chars and shorts actually
 incur extra overhead when converting to and from the default machine word sized
 data type. 
 
 On the other hand, one must be wary of cache usage. Using packed data (and in
 this vein, small structure fields) for large data objects may pay larger
 dividends in global cache coherence, than local algorithmic optimization
issues.
 "
Nov 24 2005
parent reply Georg Wrede <georg.wrede nospam.org> writes:
Munchgreeble wrote:
 Nowadays though portability is a big issue. You can still use "alias"
 to define a "fastint" type if you need to, but it's the exception
 rather than the rule. Would I like "fastint" (or some similar) to be
 built in to D, so that I didn't have to make an alias? Sure. If I had
 to choose between the D way and the C way though, I'd choose the D
 way every time. I rarely have to worry about the speed impacts of
 using the wrong sized integer type, but portability,
 networking/serialisation issues come up on virtually everything I've
 worked on recently.
Interestingly, the different types for different cpus could (and IMHO should) be aliased in a library. No need to either define this in the language or have Walter spend time coding it in DMD. Or it could live in the <your_company_here> import files.
Nov 24 2005
parent MWolf <MWolf_member pathlink.com> writes:
In article <4385DCF1.3060100 nospam.org>, Georg Wrede says...
Munchgreeble wrote:
 Nowadays though portability is a big issue. You can still use "alias"
 to define a "fastint" type if you need to, but it's the exception
 rather than the rule. Would I like "fastint" (or some similar) to be
 built in to D, so that I didn't have to make an alias? Sure. If I had
 to choose between the D way and the C way though, I'd choose the D
 way every time. I rarely have to worry about the speed impacts of
 using the wrong sized integer type, but portability,
 networking/serialisation issues come up on virtually everything I've
 worked on recently.
Interestingly, the different types for different cpus could (and IMHO should) be aliased in a library. No need to either define this in the language or have Walter spend time coding it in DMD. Or it could live in the <your_company_here> import files.
EXACTLY! And the real bottom line is that any professional who seriously needs this functionality will already have their own type aliases worked out. It's actually a benefit to start from a language that is not ambiguous about its types. I don't want my language changing on a particular platform - I will handle that myself, thank you.
Nov 24 2005
prev sibling next sibling parent Tom <Tom_member pathlink.com> writes:
In article <dm4eja$9hi$1 digitaldaemon.com>, Mark T says...
As a long time C programmer I still think D made a mistake when "fixing" int
size to 32 bits for a compiled language that is supposed to be compatible with
C. We are losing the abstraction that C was designed with. 

On the other hand, you absolutely need uint8, int32, etc types for communicating
with other programs, machines, etc but these types should only be used at the
point of interface.
====================================================
I will quote Paul Hsieh:
"
Misconception: Using smaller data types is faster than larger ones 

The original reason int was put into the C language was so that the fastest data
type on each platform remained abstracted away from the programmer himself. On
modern 32 and 64 bit platforms, small data types like chars and shorts actually
incur extra overhead when converting to and from the default machine word sized
data type. 

On the other hand, one must be wary of cache usage. Using packed data (and in
this vein, small structure fields) for large data objects may pay larger
dividends in global cache coherence, than local algorithmic optimization issues.
"
mmmmhhh mmhhm mmmmmhhhhhhhh (muzzled) wish I could speak about this :D Tom
Nov 24 2005
prev sibling parent reply Don Clugston <dac nospam.com.au> writes:
 ====================================================
 I will quote Paul Hsieh:
 "
 Misconception: Using smaller data types is faster than larger ones 
 
 The original reason int was put into the C language was so that the fastest
data
 type on each platform remained abstracted away from the programmer himself. On
 modern 32 and 64 bit platforms, small data types like chars and shorts actually
 incur extra overhead when converting to and from the default machine word sized
 data type.
Yes, this was true when we went from 16-> 32 bits, but I don't think this holds for the 32 -> 64 bit transition. AMD realised that almost all useful integers fit into small sizes. AFAIK, 32 bit operations are still equal fastest on AMD64. They have the important benefit of requiring less storage, and they require less bus bandwidth. Consequently, most C++ compilers for AMD64 still keep int=32 bits. (Most of them even keep long=32 bits!). I've noticed that in the 16 bit days, my code was full of longs. 65535 is just too small, it was really annoying. But now that I have 32 bit ints, there's not much need for anything bigger. In the mid-80's I used a Pascal compiler that had 24 bit ints. They were always big enough, too. But even back then, 16 bits was not enough. 32 bits is a really useful size. I think D's got it right.
Nov 24 2005
parent Mark T <Mark_member pathlink.com> writes:
In article <dm4n1l$ikm$1 digitaldaemon.com>, Don Clugston says...
 ====================================================
 I will quote Paul Hsieh:
 "
 Misconception: Using smaller data types is faster than larger ones 
 
 The original reason int was put into the C language was so that the fastest
data
 type on each platform remained abstracted away from the programmer himself. On
 modern 32 and 64 bit platforms, small data types like chars and shorts actually
 incur extra overhead when converting to and from the default machine word sized
 data type.
Yes, this was true when we went from 16-> 32 bits, but I don't think this holds for the 32 -> 64 bit transition. AMD realised that almost all useful integers fit into small sizes. AFAIK, 32 bit operations are still equal fastest on AMD64. They have the important benefit of requiring less storage, and they require less bus bandwidth. Consequently, most C++ compilers for AMD64 still keep int=32 bits. (Most of them even keep long=32 bits!). I've noticed that in the 16 bit days, my code was full of longs. 65535 is just too small, it was really annoying. But now that I have 32 bit ints, there's not much need for anything bigger. In the mid-80's I used a Pascal compiler that had 24 bit ints. They were always big enough, too. But even back then, 16 bits was not enough. 32 bits is a really useful size. I think D's got it right.
Will this be true on 128 bit CPUs? AMD64 (x86-64) is a compromise design because the pure 64 bit CPUs such as Alpha and Itanium could not run x86-32 bit code efficiently. The game boxes, other dedicated embedded systems, etc. don't have to make this compromise and have used other CPUs (I agree that for PCs the x86-64 is the proper migration path). I think many of the posters on this thread bring only a PC perspective to their arguments. C has survived a long time on many CPUs from its PDP-11 beginnings; I wonder what K&R would have to say on this topic.
Nov 24 2005
prev sibling parent Munchgreeble bigfoot.com writes:
In article <dlss62$13b$1 digitaldaemon.com>, Shawn Liu says...
I think int8, int16, int32, int64 is more comfortable.

"John Smith" <John_member pathlink.com> 
wrote:dlsq7f$30hv$1 digitaldaemon.com...
 Why not also include these variable types in D?
 int1 - 1 byte
 int2 - 2 bytes
 int4 - 4 bytes
 intN - N bytes (experimental)

 It must be also guaranteed that these types will always, on every machine, 
 have
 the same size.
I think this is a nice idea. Most projects have network data formats to define and/or hardware to interface with, both of which require you to once again write a "prim_types.h" file (in C/C++) to yet again define what a Uint8/Int8/Uint16/Int16 etc. are on the platform that you're using this time round. It doesn't take that long to do, and it's an obvious thing to use "alias" for... but it's another thing that it would be really nice to have built into the language instead of having to do it by hand. A key word for me in the suggestion is "also". We definitely need to be able to specify "int" as the platform-native (i.e. fastest) integer type. Just my tuppence! Munch
Nov 21 2005
prev sibling next sibling parent reply Carlos Santander <csantander619 gmail.com> writes:
John Smith escribió:
 Why not also include these variable types in D?
 int1 - 1 byte
 int2 - 2 bytes
 int4 - 4 bytes
 intN - N bytes (experimental)
 
 It must be also guaranteed that these types will always, on every machine, have
 the same size.
 
 
std.stdint contains aliases for those -- Carlos Santander Bernal
Nov 21 2005
parent reply "Lionello Lunesu" <lio remove.lunesu.com> writes:
 std.stdint contains aliases for those
That's the wrong way around if you'd ask me. I would also like decorated types, eventually aliased to the C int/short/long (with 'int' being the platform's default). L.
Nov 22 2005
parent reply Carlos Santander <csantander619 gmail.com> writes:
Lionello Lunesu escribió:
std.stdint contains aliases for those
That's the wrong way around if you'd ask me. I would also like decorated types, eventually aliased to the C int/short/long (with 'int' being the platform's default). L.
I'm not saying it's the right or wrong way. It's just that every once in a while people come and ask for exactly the same things: int8, fast_int, etc., and while D doesn't have them as proper types, at least they exist as aliases in the standard library, so they're guaranteed to exist. And even with sizes, as others have mentioned, D's typesystem is all about fixed sizes, so maybe you don't get the names you want, but you get the functionality. -- Carlos Santander Bernal
Nov 22 2005
parent reply "Lionello Lunesu" <lio remove.lunesu.com> writes:
 And even with sizes, as other have mentioned, D's typesystem is all about 
 fixed sizes, so maybe you don't get the names you want, but you get the 
 functionality.
Good point. But there's one reason this doesn't quite add up. Most of the time I don't care what 'int' I use. I use 'int' for any number. I use it in for-loops, in simple structs, in interfaces. Most of the time the size of the int doesn't matter and I just want a number. Only when serializing to disk or network do I want fixed-size types. It's nice that D fixes the size of an int, but it's not really helping me. On another platform the registers might not be 32 bits, yet all the numbers I don't care about will generate excessive code, since D forces them to 32 bits. It's "the wrong way around" because D makes me create and use an alias for the cases I didn't care about. Now I have to care about those cases for portability. L.
Nov 22 2005
parent Carlos Santander <csantander619 gmail.com> writes:
Lionello Lunesu escribió:
 
 But there's one reason this doesn't quite go up. Most of the time I don't 
 care what 'int' I use. I use 'int' for any number. I use it in for-loops, in 
 simple structs, in interfaces. Most of the time the size of the int doesn't 
 matter and I just want a number. Only when serializing to disk or network I 
 want fixed size types.
 
 It's nice that D fixes the size of an int, but it's not really helping me. 
 On another platform the registers might not have 32-bits but now all my 
 numbers I don't care about will generate excessive code since D forces them 
 to 32-bits.
 
 It's "the wrong way around" because D makes me create and use an alias for 
 the cases I didn't care about. Now I have to care about those cases for 
 portability.
 
 L. 
 
 
So use "auto". It won't work for all situations (like your "simple struct" example), but I think it'll work. -- Carlos Santander Bernal
Nov 23 2005
prev sibling parent "Lionello Lunesu" <lio remove.lunesu.com> writes:
A thought occurs! (to cite Zapp Brannigan)

How about a type called "register" which is guaranteed to be the platform's 
register size? Notice the similarity with C's "register" keyword (though that 
one is a storage class rather than a type):

for(register t=0; t<99; ++t) { }

What guarantees can be made with respect to a register's size?

L. 
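Short of a new keyword, the nearest thing today is probably size_t, which follows the pointer width and, on the usual platforms, the register width too (though nothing formally guarantees the latter). A sketch of the same loop with an alias:

// Stand-in for the proposed "register" type: 32 bits on a 32-bit target,
// 64 bits on a 64-bit one, because size_t follows the pointer size.
alias size_t reg;

void countTo99()
{
    for (reg t = 0; t < 99; ++t)
    {
        // ...
    }
}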
Nov 24 2005