
D - integer sizes on 64bit machines

imr1984 <imr1984_member pathlink.com> writes:
I'm curious: when a D compiler is made for 64-bit processors (in the near future,
let's hope :) what will the size of an int be? I assume it will be 8 bytes, and long
will be 16. So then what will a 2-byte integer be? It can't be a short, because
that would be a 4-byte integer.

I assume that floating point names will stay the same, as they are defined by
the IEEE.
Apr 23 2004
"Matthew" <matthew.hat stlsoft.dot.org> writes:
The sizes will be the same as they are now, for all the obvious benefits.

It does seem to me that we should have an additional integral type, say pint,
that is an integer of the natural size of the architecture, for maximal
efficiency.


"imr1984" <imr1984_member pathlink.com> wrote in message
news:c6b398$f01$1 digitaldaemon.com...
 I'm curious: when a D compiler is made for 64-bit processors (in the near future,
 let's hope :) what will the size of an int be? I assume it will be 8 bytes, and long
 will be 16. So then what will a 2-byte integer be? It can't be a short, because
 that would be a 4-byte integer.

 I assume that floating point names will stay the same, as they are defined by
 the IEEE.
Apr 23 2004
Mark T <Mark_member pathlink.com> writes:
In article <c6b4ik$h62$1 digitaldaemon.com>, Matthew says...
The sizes will be the same as they are now, for all the obvious benefits.
Will D code be inefficient on 128 bit processors in the future or will all future CPUs just be x86 in disguise?
It does seem to me that we should have an additional integral type, say pint,
that is an integer of the natural size of the architecture, for maximal
efficiency.
I made a similar argument quite a while back that D should use C's int (size changes with CPU), citing the 16-bit to 32-bit PC migration as an example, along with personal experience porting code from 32-bit UNIX to 64-bit DEC Alpha UNIX. Properly coded algorithms moved quite easily. Most of the issues were with interfaces and with programmers misusing integer types to hold pointers. Because of the popularity of the x86, most people these days don't use any other CPU architectures (exception: the embedded world).
Apr 23 2004
Ilya Minkov <minkov cs.tum.edu> writes:
Matthew wrote:

 It does seem to me that we should have an additional integral type, say pint,
 that is an integer of the natural size of the architecture, for maximal
 efficiency.
There is one, size_t -- but its name is ugly as hell!

-eye
Apr 23 2004
"Matthew" <matthew.hat stlsoft.dot.org> writes:
and it's unsigned

"Ilya Minkov" <minkov cs.tum.edu> wrote in message
news:c6bki6$1auk$1 digitaldaemon.com...
 Matthew wrote:

 It does seem to me that we should have an additional integral type, say pint,
 that is an integer of the natural size of the architecture, for maximal
 efficiency.
There is one, size_t -- but its name is ugly as hell! -eye
Apr 23 2004
Kevin Bealer <Kevin_member pathlink.com> writes:
In article <c6c6kv$2afq$1 digitaldaemon.com>, Matthew says...
and it's unsigned

"Ilya Minkov" <minkov cs.tum.edu> wrote in message
news:c6bki6$1auk$1 digitaldaemon.com...
 Matthew wrote:

 It does seem to me that we should have an additional integral type, say pint,
 that is an integer of the natural size of the architecture, for maximal
 efficiency.
There is one, size_t -- but its name is ugly as hell! -eye
The docs claim that there is a second, equally ugly-named alias, ptrdiff_t, which is signed and has the natural size. More specifically, it says that these "span the address space", i.e. they are the size of a pointer. Usually this is equivalent to "register size"; is it always? It seems like only a complete <unflattering> would design an architecture where it takes extra work to copy a pointer because it doesn't fit in a register, but...

Kevin
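A quick sanity check that both aliases really are pointer-sized might look like this (a sketch, assuming the two aliases are declared in object.d as the docs describe):

    void main()
    {
        // Both aliases come from object.d, so no import is needed.
        assert(size_t.sizeof == (void*).sizeof);    // unsigned, spans the address space
        assert(ptrdiff_t.sizeof == (void*).sizeof); // its signed counterpart
    }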
Apr 28 2004
J Anderson <REMOVEanderson badmama.com.au> writes:
imr1984 wrote:

I'm curious: when a D compiler is made for 64-bit processors (in the near future,
let's hope :) what will the size of an int be? I assume it will be 8 bytes, and long
will be 16.
I think this is the C++ way, but not the D way. Sizes should stay fixed (it makes ports easier). There is already a 64-bit long and a 128-bit reserved cent. See http://www.digitalmars.com/d/type.html. If you're worried, simply create an alias.

--
-Anderson: http://badmama.com.au/~anderson/
Apr 23 2004
"Matthew" <matthew.hat stlsoft.dot.org> writes:
"J Anderson" <REMOVEanderson badmama.com.au> wrote in message
news:c6b4nj$hdq$1 digitaldaemon.com...
 imr1984 wrote:

I'm curious: when a D compiler is made for 64-bit processors (in the near future,
let's hope :) what will the size of an int be? I assume it will be 8 bytes, and long
will be 16.
I think this is the C++ way, but not the D way. Sizes should stay fixed (it makes ports easier). There is already a 64-bit long and a 128-bit reserved cent. See http://www.digitalmars.com/d/type.html. If you're worried, simply create an alias.
Well, the point is that using an inappropriately sized integer for a given architecture will have a performance cost. Therefore, anyone using an integer for "normal" counting and such will be at a disadvantage when porting between different-sized architectures. To avoid this, *every* programmer who is aware of the issue will end up creating their own versioned alias. Therefore, I think it should be part of the language, or at least part of Phobos. Does this not seem sensible?
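For illustration, the versioned alias in question might look like this (a sketch; the name "nint" and the version identifiers are invented for the example):

    // Hypothetical hand-rolled "natural int": pick the register-sized
    // integer for the target at compile time.
    version (AMD64)
    {
        alias long nint;   // 64-bit target: 64-bit natural integer
    }
    else
    {
        alias int nint;    // 32-bit target: 32-bit natural integer
    }

    nint counter;          // used like any other integer type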
Apr 23 2004
J Anderson <REMOVEanderson badmama.com.au> writes:
Matthew wrote:

Well, the point is that using an inappropriately sized integer for a given
architecture will have a performance cost. Therefore, anyone using an integer
for "normal" counting and such will be at a disadvantage when porting between
different-sized architectures. To avoid this, *every* programmer who is aware of
the issue will end up creating their own versioned alias. Therefore, I think it
should be part of the language, or at least part of Phobos. Does this not seem
sensible?
It does. However, on 64-bit machines won't 32-bit integers still be faster, because they can be sent two at a time (under certain conditions)? The same can be said for 16-bit at the moment.

--
-Anderson: http://badmama.com.au/~anderson/
Apr 23 2004
"Matthew" <matthew.hat stlsoft.dot.org> writes:
I'm not a hardware Johnny, but it's my understanding that using 32-bit integers
on 64-bit architectures will be less efficient than using 64-bit integers.

"J Anderson" <REMOVEanderson badmama.com.au> wrote in message
news:c6b8uq$okp$1 digitaldaemon.com...
 Matthew wrote:

Well, the point is that using an inappropriately sized integer for a given
architecture will have a performance cost. Therefore, anyone using an integer
for "normal" counting and such will be at a disadvantage when porting between
different-sized architectures. To avoid this, *every* programmer who is aware of
the issue will end up creating their own versioned alias. Therefore, I think it
should be part of the language, or at least part of Phobos. Does this not seem
sensible?
It does. However, on 64-bit machines won't 32-bit integers still be faster, because they can be sent two at a time (under certain conditions)? The same can be said for 16-bit at the moment.

--
-Anderson: http://badmama.com.au/~anderson/
Apr 23 2004
J Anderson <REMOVEanderson badmama.com.au> writes:
Matthew wrote:

I'm not a hardware Johnny, but it's my understanding that using 32-bit integers
on 64-bit architectures will be less efficient than using 64-bit integers.
With 64-bit machines you have a range of performance issues. One of the biggest is memory. If you use a 64-bit integer, array sizes double, and now the slowest part of your architecture (other than the hard drive, which is memory anyway) has effectively been halved. Now, considering that things like locals could be sent to the CPU as 64 bits, at the very least there shouldn't be any slowdown. I'm not arguing that 64-bit machines are a bad thing (64-bit calculations are now *almost* as fast as 32-bit).

Then there's portability. Try loading an int from a file. If the int has changed to 64 bits, then your program will most likely crash.

PS - I just read that apparently C++ is keeping ints as 32 bits. What is changing is the pointer size, which isn't such a big issue if you avoid casting. http://www.microsoft.com/whdc/winhec/partners/64bitAMD.mspx

--
-Anderson: http://badmama.com.au/~anderson/
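The file trap might look like this (a minimal sketch using C stdio via std.c.stdio; the one-int record format is invented for the example):

    import std.c.stdio;

    // Reads one int from an open C stream. The record on disk is
    // int.sizeof bytes wide; if int silently grew from 4 to 8 bytes
    // on a new target, every following record would misalign.
    int loadInt(FILE* fp)
    {
        int x;
        fread(&x, x.sizeof, 1, fp);
        return x;
    }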
Apr 23 2004
"Matthew" <matthew.hat stlsoft.dot.org> writes:
"J Anderson" <REMOVEanderson badmama.com.au> wrote in message
news:c6bbih$ssb$1 digitaldaemon.com...
 Matthew wrote:

I'm not a hardware Johnny, but it's my understanding that using 32-bit integers
on 64-bit architectures will be less efficient than using 64-bit integers.
With 64-bit machines you have a range of performance issues. One of the biggest is memory. If you use a 64-bit integer, array sizes double, and now the slowest part of your architecture (other than the hard drive, which is memory anyway) has effectively been halved. Now, considering that things like locals could be sent to the CPU as 64 bits, at the very least there shouldn't be any slowdown.
We're not talking about arrays, but about indexers and other local variables.
 I'm not arguing that 64-bit machines are a bad thing (64-bit
 calculations are now *almost* as fast as 32-bit).

 Then there's portability. Try loading an int from a file. If the int has
 changed to 64 bits, then your program will most likely crash.
Another advantage of native is that serialisation APIs would be written to specifically *not* accept "native" variables, which is actually a massive improvement on the situation we experience in C and C++. (I spend a fair amount of time on this hideously vexing issue in "Imperfect C++", due out Sept. <G>)
Apr 23 2004
Norbert Nemec <Norbert.Nemec gmx.de> writes:
Matthew wrote:

 I'm not a hardware Johnny, but it's my understanding that using 32-bit
 integers on 64-bit architectures will be less efficient than using 64-bit
 integers.
Most certainly not. Doing one 32-bit operation will never be more expensive than doing one 64-bit operation. It would, though, most certainly be more efficient to do one 64-bit op instead of two 32-bit ops.

In general, I would think the question of performance between 32 and 64 bits is far too complex to just say: on this machine, 64-bit is more efficient, so it should be the default. Especially, you have to consider that for many applications the bottleneck is not the processor, but the cache and the speed of the RAM. If you have to shuffle twice as much data as necessary, it will definitely slow the system down.
Apr 23 2004
"Matthew" <matthew.hat stlsoft.dot.org> writes:
"Norbert Nemec" <Norbert.Nemec gmx.de> wrote in message
news:c6bcjs$uje$1 digitaldaemon.com...
 Matthew wrote:

 I'm not a hardware Johnny, but it's my understanding that using 32-bit
 integers on 64-bit architectures will be less efficient than using 64-bit
 integers.
Most certainly not. Doing one 32-bit operation will never be more expensive than doing one 64-bit operation. It would, though, most certainly be more efficient to do one 64-bit op instead of two 32-bit ops.
As I said, I'm no expert on this, but it's my understanding that it can be more expensive. "Most certainly not" sounds far too absolute for my tastes. 16-bit costs more than 32 on 32-bit machines, so why not 32 on 64? Maybe we need some multi-architecture experts to weigh in.
 In general, I would think the question of performance between 32 and 64 bits
 is far too complex to just say: on this machine, 64-bit is more efficient,
 so it should be the default.
What should be the default?
 Especially, you have to consider that for many applications the bottleneck
 is not the processor, but the cache and the speed of the RAM. If you have
 to shuffle twice as much data as necessary, it will definitely slow the
 system down.
No one's talking about shuffling twice as much data. The issue is whether a single indexer variable is more efficient at 64 bits on a 64-bit machine than at 32 bits. It "most certainly" won't be the case that a 32-bit get on a 64-bit bus will be cheaper than a 64-bit get, surely?
Apr 23 2004
J Anderson <REMOVEanderson badmama.com.au> writes:
Matthew wrote:

As I said, I'm no expert on this, but it's my understanding that it can be more
expensive. "Most certainly not" sounds far too absolute for my tastes. 16-bit
costs more than 32 on 32-bit machines, so why not 32 on 64? Maybe we need some
multi-architecture experts to weigh in.
I think this is to do with alignment. It's cheaper to process one 32-bit variable than two 16-bit variables individually.

--
-Anderson: http://badmama.com.au/~anderson/
Apr 23 2004
"Unknown W. Brackets" <unknown at.simplemachines.dot.org> writes:
J Anderson wrote:

 
 I think this is to do with alignment. It's cheaper to process one 32-bit 
 variable than two 16-bit variables individually.
 
Unless I'm entirely mistaken, in assembly - and thus compiled code - you lose something when you switch. For example, if your ENTIRE program is 16-bit you don't lose much... but every time you use 32-bit in that program you have to basically say, "I'm about to use 32-bit... get ready!" beforehand. It reverses for 32-bit code using 16-bit... But I may be remembering wrong. It has been about a year since I last worked in assembly...

I would assume 64-bit works the same way... The problem comes in that if pointers are 64-bit, which they are, does that put the program initially in 64-bit or 32-bit mode? I'd assume 64-bit, in which case you'd get the penalties.

However, I may just be remembering from a little too long ago - I can't remember how bad the performance penalty is, just that it adds to the bytes needed for each instruction.

-[Unknown]
Apr 23 2004
Ilya Minkov <minkov cs.tum.edu> writes:
Unknown W. Brackets wrote:

 Unless I'm entirely mistaken, in assembly - and thus compiled code - you 
 lose something when you switch. For example, if your ENTIRE program is 
 16-bit you don't lose much... but every time you use 32-bit in that 
 program you have to basically say, "I'm about to use 32-bit... get 
 ready!" beforehand. It reverses for 32-bit code using 16-bit...
 
 I would assume 64-bit works the same way...
I would say this is usual, but not inherent. The x86 is an evil CISC (well, less evil than VAX, but anyway), which means the instruction sizes may vary. But x86 is an archaically old architecture; no one develops new CISC architectures these days. It has been recognized that architectures with a uniform instruction size are more efficient, especially because the decoding phase ceases to dominate the execution. The new CPUs will be either RISC, where I would guess smaller types are not bound to be slower if there are special instructions to load and store them, or VLIW, of which I know too little to say anything sane.
 The problem comes in that if 
 pointers are 64-bit, which they are, does that put the program initially 
 in 64-bit or 32-bit mode? I'd assume 64-bit, in which case you'd get the 
 penalties.
I find it unlikely that penalties would come up on AMD64. Gotta read more about it though.
 However, I may just be remembering from a little too long ago - I can't 
 remember how bad the performance penalty is, just that it adds to the 
 bytes needed for each instruction.
It might not be of a fundamental nature. Before the Pentium Pro, the performance of accessing 16-bit values was very decent. The Pentium Pro was the one to implement a 64-bit (or was it more?) memory bus, and also "optimized" loading routines, which were optimized for everything from 32 bits onwards. That 8-bit is still fast is only due to its tiny size and vast space savings, but 16-bit fell into a "hole" no one really cared about. If the performance of accessing 32-bit values should diminish someday, it would be an indication that the world has changed and we don't care any longer.

-eye
Apr 23 2004
Ilya Minkov <minkov cs.tum.edu> writes:
Matthew wrote:

 As I said, I'm no expert on this, but it's my understanding that it can be more
 expensive. "Most certainly not" sounds far too absolute for my tastes. 16-bit
 costs more than 32 on 32-bit machines, so why not 32 on 64? Maybe we need some
 multi-architecture experts to weigh in.
Though you are most certainly right, I would think that, as long as memory sizes are not so huge yet, 64-bit CPUs will be approximately as fast for 32-bit values as for 64-bit values. If you remember, the 386, 486 and Pentium were quite fast with 16-bit data; the slowdown was introduced with the Pentium Pro. We might have another three CPU generations until a similar change happens.

And anyway: I wonder why the user should bother at all. If some data type is "packed", then he will have the minimal memory usage for the desired value range, and if a type is "aligned", well, it should be done so that the highest possible performance is reached. Then the user need not specify the actual width directly.

-eye
Apr 23 2004
Bill Cox <Bill_member pathlink.com> writes:
Hi, Matthew.

In article <c6b9vv$qdr$1 digitaldaemon.com>, Matthew says...
I'm not a hardware Johnny, but it's my understanding that using 32-bit integers
on 64-bit architectures will be less efficient than using 64-bit integers.

"J Anderson" <REMOVEanderson badmama.com.au> wrote in message
news:c6b8uq$okp$1 digitaldaemon.com...
 Matthew wrote:

Well, the point is that using an inappropriately sized integer for a given
architecture will have a performance cost. Therefore, anyone using an integer
for "normal" counting and such will be at a disadvantage when porting between
different-sized architectures. To avoid this, *every* programmer who is aware
of the issue will end up creating their own versioned alias. Therefore, I think
it should be part of the language, or at least part of Phobos. Does this not seem
sensible?
It does. However, on 64-bit machines won't 32-bit integers still be faster, because they can be sent two at a time (under certain conditions)? The same can be said for 16-bit at the moment.

--
-Anderson: http://badmama.com.au/~anderson/
As with most things, there's the way the world is, and then the way it should be.

IMO, in a perfect world, integer sizes would be a minimum size, not an exact size. Any condition that would be affected by the upper bits would cause an exception in debug mode, and not in optimized mode. There would be similar behavior for array indexing and other similar checks. This would allow the compiler to size up integers to fit its register size. Then there's no need for a native integer type.

There are other problems that reality is throwing at 64-bit computing. As Walter pointed out, all pointers will double in size. Most programs I know of that use enough memory to justify the need for 64-bit pointers fill up that memory mostly with pointers. In other words, if you switch to 64 bits, your applications may need close to 2x the memory just to run. The cache also gets less efficient, so your program may also run slower. So you pay more, and get less.

IMO, in a perfect world, our compilers would be able to use integers as object references. This allows us to use up to 4 billion objects of any given class before making its reference type 64 bits. Also, applications would not use more memory just because they're running on a 64-bit machine. This may sound far-fetched, but I've got over 500K lines of C code running this way. So far as I can tell, there are no down-sides. However, compatibility with original C examples of this.

Then there's that annoying fact that we can't get away from the x86 architecture. Intel made a real try with Itanium, but Opteron is the architecture that has won. Now that Intel is on board, the whole world will soon be buying primarily x86 64-bit machines. However, due to the historical limitations in our software tools, few applications will use the 64-bit mode for many years to come.

IMO, in a perfect world, we'd distribute all our programs in a platform-independent way. In the open-source community, we do this. For example, I just download the vim source tarball and do the standard make install stuff. The same exact method of installing vim works on many different CPU platforms. If the world had nothing but open-source programs, we would have left x86 where it belongs: back in the '70s. As it is, the monster just keeps getting fatter.

It's all a matter of history...

Bill
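The index-as-reference scheme might look roughly like this in D (a sketch; the Node type, the sentinel convention and the allocation scheme are invented for the example):

    // Objects live in one array; a 32-bit index stands in for what
    // would otherwise be a pointer (64 bits on a 64-bit machine).
    struct Node
    {
        uint next;   // "reference" to another Node: an array index
        int  value;
    }

    Node[] nodes;    // backing store for every Node in the program

    uint newNode(int value)
    {
        Node n;
        n.value = value;
        n.next = uint.max;   // sentinel meaning "no successor"
        nodes ~= n;          // append to the backing store
        return cast(uint)(nodes.length - 1);
    }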
Apr 24 2004
Ilya Minkov <minkov cs.tum.edu> writes:
The *Word* we have been waiting for!

Nice to see you here!


-eye


Bill Cox wrote:

 As with most things, there's the way the world is, and then the way it should
 be.
 
 IMO, in a perfect world, integer sizes would be a minimum size, not an exact
 size. Any condition that would be affected by the upper bits would cause an
 exception in debug mode, and not in optimized mode. There would be similar
 behavior for array indexing and other similar checks. This would allow the
 compiler to size up integers to fit its register size. Then there's no need
 for a native integer type.
 
 There are other problems that reality is throwing at 64-bit computing. As
 Walter pointed out, all pointers will double in size. Most programs I know of
 that use enough memory to justify the need for 64-bit pointers fill up that
 memory mostly with pointers. In other words, if you switch to 64 bits, your
 applications may need close to 2x the memory just to run. The cache also gets
 less efficient, so your program may also run slower. So you pay more, and get
 less.
 
 IMO, in a perfect world, our compilers would be able to use integers as object
 references. This allows us to use up to 4 billion objects of any given class
 before making its reference type 64 bits. Also, applications would not use
 more memory just because they're running on a 64-bit machine. This may sound
 far-fetched, but I've got over 500K lines of C code running this way. So far as
 I can tell, there are no down-sides. However, compatibility with original C
 examples of this.
 
 Then there's that annoying fact that we can't get away from the x86
 architecture. Intel made a real try with Itanium, but Opteron is the
 architecture that has won. Now that Intel is on board, the whole world will
 soon be buying primarily x86 64-bit machines. However, due to the historical
 limitations in our software tools, few applications will use the 64-bit mode
 for many years to come.
 
 IMO, in a perfect world, we'd distribute all our programs in a
 platform-independent way. In the open-source community, we do this. For
 example, I just download the vim source tarball and do the standard make
 install stuff. The same exact method of installing vim works on many different
 CPU platforms. If the world had nothing but open-source programs, we would
 have left x86 where it belongs: back in the '70s. As it is, the monster just
 keeps getting fatter.
 
 It's all a matter of history...
 
 Bill
Apr 24 2004
"Matthew" <matthew.hat stlsoft.dot.org> writes:
 The *Word* we have been waiting for!

 Nice to see you here!
Yeah, where've you been, Bill? It's been ages.
Apr 24 2004
Uwe Jesgarz <u.jesgarz kithara.de> writes:
Processing costs are internally the same, but data storage would be bigger
at 64 bits. Additionally, when using the SSE3 multimedia instruction set,
you can process four 32-bit integers at the same time instead of only two
64-bit ones. Also, almost all 64-bit instructions need an extra byte
to specify that, so the instruction cache would have to transfer more
code. So, use 32 bits where appropriate!

(I know, this forum is dead, but this is a test.)

Matthew wrote:
 I'm not a hardware Johnny, but it's my understanding that using 32-bit integers
 on 64-bit architectures will be less efficient than using 64-bit integers.
 
 "J Anderson" <REMOVEanderson badmama.com.au> wrote in message
 news:c6b8uq$okp$1 digitaldaemon.com...
 
Matthew wrote:


Well, the point is that using an inappropriately sized integer for a given
architecture will have a performance cost. Therefore, anyone using an integer
for "normal" counting and such will be at a disadvantage when porting between
different-sized architectures. To avoid this, *every* programmer who is aware
of the issue will end up creating their own versioned alias. Therefore, I think
it should be part of the language, or at least part of Phobos. Does this not
seem sensible?
It does. However, on 64-bit machines won't 32-bit integers still be faster, because they can be sent two at a time (under certain conditions)? The same can be said for 16-bit at the moment.

--
-Anderson: http://badmama.com.au/~anderson/
Sep 29 2005
"Ben Hinkle" <bhinkle4 juno.com> writes:
"Matthew" <matthew.hat stlsoft.dot.org> wrote in message
news:c6b5ta$j67$1 digitaldaemon.com...
 "J Anderson" <REMOVEanderson badmama.com.au> wrote in message
 news:c6b4nj$hdq$1 digitaldaemon.com...
 imr1984 wrote:

I'm curious: when a D compiler is made for 64-bit processors (in the near
future, let's hope :) what will the size of an int be? I assume it will be
8 bytes, and long will be 16.
I think this is the C++ way, but not the D way. Sizes should stay fixed (it makes ports easier). There is already a 64-bit long and a 128-bit reserved cent. See http://www.digitalmars.com/d/type.html. If you're worried, simply create an alias.
Well, the point is that using an inappropriately sized integer for a given
architecture will have a performance cost. Therefore, anyone using an integer
for "normal" counting and such will be at a disadvantage when porting between
different-sized architectures. To avoid this, *every* programmer who is aware
of the issue will end up creating their own versioned alias. Therefore, I think
it should be part of the language, or at least part of Phobos. Does this not
seem sensible?
I vote for putting them in the architecture-specific modules in Phobos: aliases for native int, long, etc. I have a few C library wrappers where I've made up my own aliases, but having standard names would help.
Apr 23 2004
"Matthew" <matthew.hat stlsoft.dot.org> writes:
"Ben Hinkle" <bhinkle4 juno.com> wrote in message
news:c6b91g$oo3$1 digitaldaemon.com...
 "Matthew" <matthew.hat stlsoft.dot.org> wrote in message
 news:c6b5ta$j67$1 digitaldaemon.com...
 "J Anderson" <REMOVEanderson badmama.com.au> wrote in message
 news:c6b4nj$hdq$1 digitaldaemon.com...
 imr1984 wrote:

I'm curious: when a D compiler is made for 64-bit processors (in the near
future, let's hope :) what will the size of an int be? I assume it will be
8 bytes, and long will be 16.
I think this is the C++ way, but not the D way. Sizes should stay fixed (it makes ports easier). There is already a 64-bit long and a 128-bit reserved cent. See http://www.digitalmars.com/d/type.html. If you're worried, simply create an alias.
Well, the point is that using an inappropriately sized integer for a given
architecture will have a performance cost. Therefore, anyone using an integer
for "normal" counting and such will be at a disadvantage when porting between
different-sized architectures. To avoid this, *every* programmer who is aware
of the issue will end up creating their own versioned alias. Therefore, I think
it should be part of the language, or at least part of Phobos. Does this not
seem sensible?
I vote for putting them in the architecture-specific modules in Phobos: aliases for native int, long, etc. I have a few C library wrappers where I've made up my own aliases, but having standard names would help.
The only downside to this is that it's less visible/obvious, and many people could write much code before becoming aware of the issue, and be left with similar porting nasties that we currently have in C/C++, and which D is intended to avoid/obviate.

Therefore, my preference is that we add a new type, "native", which is an integer of the ambient architecture size. If "native" is listed up there with the other integer types, it will be something that people will learn very early in their use of D, and will therefore not be forgotten or overlooked, as is likely with the library approach.
Apr 23 2004
J Anderson <REMOVEanderson badmama.com.au> writes:
Matthew wrote:

 The only downside to this is that it's less visible/obvious, and many people
 could write much code before becoming aware of the issue, and be left with
 similar porting nasties that we currently have in C/C++, and which D is intended
 to avoid/obviate.

 Therefore, my preference is that we add a new type, "native", which is an integer
 of the ambient architecture size. If "native" is listed up there with the other
 integer types, it will be something that people will learn very early in their
 use of D, and will therefore not be forgotten or overlooked, as is likely with
 the library approach.
I think that people who are unaware of the issue are more concerned about their code running, rather than running fast. People who are concerned with speed would learn this kind of thing pretty soon.

--
-Anderson: http://badmama.com.au/~anderson/
Apr 23 2004
"Matthew" <matthew.hat stlsoft.dot.org> writes:
But what about people who become concerned with speed? They're left in the
position of having to backtrack through all their code, trying to judge which
"int" is size-oriented and which is speed-oriented. Aren't we effectively back
in the (pre-C99) C world?

"J Anderson" <REMOVEanderson badmama.com.au> wrote in message
news:c6bam7$qti$1 digitaldaemon.com...
 Matthew wrote:

 The only downside to this is that it's less visible/obvious, and many people
 could write much code before becoming aware of the issue, and be left with
 similar porting nasties that we currently have in C/C++, and which D is intended
 to avoid/obviate.

 Therefore, my preference is that we add a new type, "native", which is an integer
 of the ambient architecture size. If "native" is listed up there with the other
 integer types, it will be something that people will learn very early in their
 use of D, and will therefore not be forgotten or overlooked, as is likely with
 the library approach.
I think that people who are unaware of the issue are more concerned about their code running, rather than running fast. People who are concerned with speed would learn this kind of thing pretty soon.

--
-Anderson: http://badmama.com.au/~anderson/
Apr 23 2004
Norbert Nemec <Norbert.Nemec gmx.de> writes:
Matthew wrote:
 But what about people who become concerned with speed? They're left in the
 position of having to backtrack through all their code, trying to judge
 which "int" is size-oriented and which is speed-oriented. Aren't we
 effectively back in the (pre-C99) C world?
If you want to squeeze out performance, you will have to go through all kinds of pain. D should encourage people to write code that runs reasonably fast on any processor. People who want to go beyond that and optimize their code for their personal machine get all the tools to do so, but should not expect that it will be especially simple and comfortable.
Apr 23 2004
"Matthew" <matthew.hat stlsoft.dot.org> writes:
"Norbert Nemec" <Norbert.Nemec gmx.de> wrote in message
news:c6bcv2$uje$2 digitaldaemon.com...
 Matthew wrote:
 But what about people who become concerned with speed? They're left in the
 position of having to backtrack through all their code, trying to judge
 which "int" is size-oriented and which is speed-oriented. Aren't we
 effectively back in the (pre-C99) C world?
If you want to squeeze out performance, you will have to go through all kinds of pain. D should encourage people to write code that runs reasonably fast on any processor. People who want to go beyond that and optimize their code for their personal machine get all the tools to do so, but should not expect that it will be especially simple and comfortable.
Why?
Apr 23 2004
Norbert Nemec <Norbert.Nemec gmx.de> writes:
Matthew wrote:
 "Norbert Nemec" <Norbert.Nemec gmx.de> wrote in message
 news:c6bcv2$uje$2 digitaldaemon.com...
 If you want to squeeze out performance, you will have to go through all
 kinds of pain. D should encourage people to write code that runs
 reasonably fast on any processor. People who want to go beyond that and
 optimize their code for their personal machine get all the tools to do
 so, but should not expect that it will be especially simple and
 comfortable.
Why?
Because usually, the time you spend optimizing code for one special machine could just as well be spent waiting and buying a new machine half a year later. The compiler should have the means to optimize for a certain architecture, but the programmer should not think about the exact architecture too much. Of course, there are exceptions to that, but then, people optimizing for a certain architecture will have to go through all kinds of pains. Distinguishing between 32-bit and 64-bit integers and deciding which one to use when is just one fraction of the problem.
Apr 23 2004
"Matthew" <matthew.hat stlsoft.dot.org> writes:
"Norbert Nemec" <Norbert.Nemec gmx.de> wrote in message
news:c6behs$11ou$1 digitaldaemon.com...
 Matthew wrote:
 "Norbert Nemec" <Norbert.Nemec gmx.de> wrote in message
 news:c6bcv2$uje$2 digitaldaemon.com...
 If you want to squeeze out performance, you will have to go through all
 kinds of pain. D should encourage people to write code that runs
 reasonably fast on any processor. People who want to go beyond that and
 optimize their code for their personal machine get all the tools to do
 so, but should not expect that it will be especially simple and
 comfortable.
Why?
Because usually, the time you spend optimizing code for one special machine could just as well be spent waiting and buying a new machine half a year later. The compiler should have the means to optimize for a certain architecture, but the programmer should not think about the exact architecture too much. Of course, there are exceptions to that, but then, people optimizing for a certain architecture will have to go through all kinds of pains. Distinguishing between 32-bit and 64-bit integers and deciding which one to use when is just one fraction of the problem.
I still fail to see why we should not address it. How do you solve a whole problem, composed of multiple parts, other than by addressing the parts?
Apr 23 2004
Norbert Nemec <Norbert.Nemec gmx.de> writes:
Matthew wrote:
 I still fail to see why we should not address it. How do you solve a whole
 problem, composed of multiple parts, other than by addressing the parts?
Back to citing your comment:
 But what about people who become concerned with speed? They're left in
 the position of having to backtrack through all their code, trying to
 judge which "int" is size-oriented and which is speed-oriented. Aren't we
 effectively back in the (pre-C99) C world?
This is what I reacted to by saying: well, yes, bad luck! There is no simple rule telling you where int64 might be faster than int32. On a 32-bit processor, obviously int32 is faster in almost any case. On 64-bit machines, we obviously do not know in general, but I can say for sure that int32 is faster at least in memory-intensive code.

So, on 32-bit machines, you can just stick with int32 and be pretty sure you get good performance. On a 64-bit machine, you have a choice:

a) pick int32 in general
b) pick int64 in general
c) sort through the code by hand like back in the good ol' days

On average, b) is unlikely to give better performance than a), so if you don't want to spend much time examining the code in question, a) is a good way to go. Picking c) will probably improve the performance, but this is just what I said: if you want to get optimum performance on your personal machine beyond what the compiler can do on portable code, be prepared to go through pains.
Apr 23 2004
J Anderson <REMOVEanderson badmama.com.au> writes:
Matthew wrote:

 But what about people who become concerned with speed?
  
What if they suddenly have a porting issue? Since 64-bit machines are bound to be faster, most 32-bit apps should run faster than on their original target machine, so they should meet the required efficiency. Now, if someone wants to make a dynamic program that adapts to the speed of the processor, then there are a lot of other issues they will have to consider in regard to the variable size. They might as well use version statements.

An alias for the most efficiently sized variable could be useful, but I certainly wouldn't encourage its use unless people know what they are doing.
They're left in the
position of having to backtrack through all their code, trying to judge which
"int" is size-oriented and which is speed-oriented. Aren't we effectively back
in the (pre-C99) C world?
Your p idea will just make this even harder. Do you want the compiler to work out when to use a 32-bit and when to use a 64-bit integer? How is that possible? The compiler has no idea how much a particular variable will be reused and need to be kept in cache, etc. I suspect there would be very few cases where the compiler could improve performance in a way the programmer doesn't know about. And then the programmer would have to make use of the extra 32 bits.

I've long come to the conclusion to use the best variable for the job at hand. If you need 64 bits, then use 64 bits. I think the biggest advantage of 64-bit is that double will become not much slower than float. Then you'll be able to have really accurate calculations (e.g. in 3D graphics). Double could definitely make use of this p idea, but not so much int. What about pDouble?

If you can use a smaller variable, then use it; it will save cache space. (I read somewhere that 64-bit processors are expected to use 30% more cache space.)

--
-Anderson: http://badmama.com.au/~anderson/
Apr 23 2004
"Matthew" <matthew.hat stlsoft.dot.org> writes:
"J Anderson" <REMOVEanderson badmama.com.au> wrote in message
news:c6beim$11h1$1 digitaldaemon.com...
 Matthew wrote:

But what about people who become concerned with speed?
What if they suddenly have a porting issue? Since 64-bit machines are bound to be faster, most 32-bit apps should run faster than on their original target machine, so they should meet the required efficiency. Now, if someone wants to make a dynamic program that adapts to the speed of the processor, then there are a lot of other issues they will have to consider in regard to the variable size. They might as well use version statements.
I don't understand what you're saying here.
 An alias for the most efficiently sized variable could be useful, but I
 certainly wouldn't encourage its use unless people know what they are
 doing.

They're left in the
position of having to backtrack through all their code, trying to judge which
"int" is size-oriented and which is speed-oriented. Aren't we effectively back
in the (pre-C99) C world?
Your p idea will just make this even harder. Do you want the compiler to work out when to use a 32-bit and when to use a 64-bit integer? How is that possible? The compiler has no idea how much a particular variable will be reused and need to be kept in cache, etc.
This is all wrong. The compiler would know exactly how large to make the "native" type, because D is a compile-to-host language. On a 64-bit architecture it would be 64-bits. On a 32-bit architecture it would be 32-bits.
 I suspect there would be very few cases where the compiler could improve
 performance in a way the programmer doesn't know about.
The programmer does know about them, that's the point. And the compiler handles the details, that's the point.
  And then the programmer would have to make use of
 the extra 32 bits.
What are you talking about?
 I've long come to the conclusion to use the best variable for the job at
 hand. If you need 64 bits, then use 64 bits. I think the biggest
 advantage of 64-bit is that double will become not much slower than
 float. Then you'll be able to have really accurate calculations (e.g.
 in 3D graphics).
This is nonsense. "If you need 64 bits, then use 64 bits" totally misses the point. Of course, if you have a quantity that requires a specific size, then you use that size. I'm not talking about that. I'm talking about the times when you use an integer as an indexer, or another kind of size-agnostic variable. In such cases, you want the code to perform optimally on whatever platform it happens to be compiled for. Since the compiler knows what architecture it is being compiled for, why not let it make the decision in such cases, informed as it would be by one's using "native" (a variable-sized int reflecting the optimal integral size for a given architecture) rather than int (32 bits) or long (64 bits)?
Apr 23 2004
J Anderson <REMOVEanderson badmama.com.au> writes:
I'm talking about the times when
you use an integer as an indexer, or another kind of size-agnostic variable. In
such cases, you want the code to perform optimally on whatever platform it
happens to be compiled for. Since the compiler knows what architecture it is
being compiled for, why not let it make the decision in such cases, informed as
it would be by one's using "native" (a variable-sized int reflecting the optimal
integral size for a given architecture) rather than int (32 bits) or long
(64 bits)?
Perhaps it could be called indexer? That way it would be used correctly.

--
-Anderson: http://badmama.com.au/~anderson/
Apr 23 2004
J Anderson <REMOVEanderson badmama.com.au> writes:
 How is that
possible? The compiler has no idea how much a particular variable will be
reused and need to be kept in cache, etc.
This is all wrong. The compiler would know exactly how large to make the "native" type, because D is a compile-to-host language. On a 64-bit architecture it would be 64-bits. On a 32-bit architecture it would be 32-bits.
I wouldn't say all wrong. The compiler cannot predict how long a particular variable will be kept in cache.
-Anderson: http://badmama.com.au/~anderson/
Apr 23 2004
J Anderson <REMOVEanderson badmama.com.au> writes:
J Anderson wrote:

 Double could definitely make use of this p idea, but not so much int.
 What about pDouble?
Sorry, I meant: what about pFloat and pDouble? Of course, p would mean that it gets at least that size.

--
-Anderson: http://badmama.com.au/~anderson/
Apr 23 2004
"Carlos Santander B." <carlos8294 msn.com> writes:
"Matthew" <matthew.hat stlsoft.dot.org> wrote in message
news:c6b5ta$j67$1 digitaldaemon.com
| Well, the point is that using an inappropriately sized integer for a given
| architecture will have a performance cost. Therefore, anyone using an integer for
| "normal" counting and such will be at a disadvantage when porting between
| different-sized architectures. To avoid this, *every* programmer who is aware of
| the issue will end up creating their own versioned alias. Therefore, I think it
| should be part of the language, or at least part of Phobos. Does this not seem
| sensible?

There is std.stdint
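For example (a sketch, assuming std.stdint follows the usual C99-style naming):

    import std.stdint;

    int16_t  a;   // exactly 16 bits
    int32_t  b;   // exactly 32 bits
    int64_t  c;   // exactly 64 bits
    intptr_t d;   // signed and pointer-sized: the "natural" width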

-----------------------
Carlos Santander Bernal
Apr 24 2004
Juan C <Juan_member pathlink.com> writes:
I realize I'm always in a minority of one, but...

It has always been my opinion that int should be whatever the largest int size
is. And then have int16 etc. for each of the specific sizes. So if it matters
what size the value is, you use the specific one. If not, you use int, and as
things progress, you don't need to keep modifying the code for the new largest
size (only a recompile would be required). 

A while back I was programming in Compaq C on OpenVMS and the largest int size
was a 64-bit "long long int", and I wanted some of my code to work on that and
in DOS too, so I had to typedef BIGGEST_INT to mean different things on the
different platforms. (And is ANSI going to use "long long int" to mean a 64-bit
int?)

I realize that you all are going to invoke "backward compatibility" and "easy
porting of C code" as reasons for continuing the "C way". But I heartily
disagree: D _is not_ and _should not_ be C. If D is to be better than C, this is
one area I feel needs improvement.

Upon further reflection I would suggest defining only the specific-sized types,
and allow the user (of the language) to typedef or alias the generic names as
desired.

In article <c6b398$f01$1 digitaldaemon.com>, imr1984 says...
I'm curious: when a D compiler is made for 64-bit processors (in the near future,
let's hope :) what will the size of an int be? I assume it will be 8 bytes, and long
will be 16. So then what will a 2-byte integer be? It can't be a short, because
that would be a 4-byte integer.

I assume that floating point names will stay the same, as they are defined by
the IEEE.
Apr 23 2004
"Kris" <someidiot earthlink.dot.dot.dot.net> writes:
You might like to read these articles:

http://arstechnica.com/cpu/03q1/x86-64/x86-64-1.html

http://www.anandtech.com/guides/viewfaq.html?i=112

- Kris


"imr1984" <imr1984_member pathlink.com> wrote in message
news:c6b398$f01$1 digitaldaemon.com...
 I'm curious: when a D compiler is made for 64-bit processors (in the near
 future, let's hope :) what will the size of an int be? I assume it will be
 8 bytes, and long will be 16. So then what will a 2-byte integer be? It
 can't be a short, because that would be a 4-byte integer.

 I assume that floating point names will stay the same, as they are defined
 by the IEEE.
Apr 23 2004
"Walter" <walter digitalmars.com> writes:
"imr1984" <imr1984_member pathlink.com> wrote in message
news:c6b398$f01$1 digitaldaemon.com...
 I'm curious: when a D compiler is made for 64-bit processors (in the near
 future, let's hope :) what will the size of an int be? I assume it will be
 8 bytes, and long will be 16. So then what will a 2-byte integer be? It
 can't be a short, because that would be a 4-byte integer.

 I assume that floating point names will stay the same, as they are defined
 by the IEEE.
All sizes will stay the same when moving to 64 bits, with the following exceptions:

1) pointers will be 64 bits
2) object references will be 64 bits
3) dynamic array references will be 128 bits
4) pointer differences will be 64 bits
5) pointer offsets will be 64 bits
6) sizeof will be 64 bits
7) Whether real.size will stay 10 or be forced to 8 is still up in the air.

To this end, and to ensure portability of D source code to 64 bits, follow these rules:

1) Use the .sizeof property whenever depending on the size of a type.
2) Use ptrdiff_t (an alias in object.d) for signed pointer differences.
3) Use size_t (an alias in object.d) for type sizes, unsigned pointer offsets and array indices.

Note that 1, 2, and 3 correspond to C's portable uses of sizeof, ptrdiff_t, and size_t. In particular, ints and longs will remain the same size as for 32-bit computing.
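By way of illustration, here are rules 1-3 applied to ordinary code (a sketch; the function names are invented):

    // Rule 3: use size_t for array indices and sizes.
    void zero(int[] arr)
    {
        for (size_t i = 0; i < arr.length; i++)
            arr[i] = 0;
    }

    // Rule 2: use ptrdiff_t for signed pointer differences.
    ptrdiff_t distance(int* p, int* q)
    {
        return q - p;   // element count between the two pointers
    }

    // Rule 1: use .sizeof instead of a hard-coded byte count.
    size_t intBytes = int.sizeof;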
Apr 23 2004
imr1984 <imr1984_member pathlink.com> writes:
Well, if all sizes will stay the same, I'd like to know why D actually calls its
integers by non-exact names (int, long, short, etc.). Why aren't they called
int32, int64, int16, etc.?

In article <c6boe1$1ilt$2 digitaldaemon.com>, Walter says...
"imr1984" <imr1984_member pathlink.com> wrote in message
news:c6b398$f01$1 digitaldaemon.com...
 I'm curious: when a D compiler is made for 64-bit processors (in the near
 future, let's hope :) what will the size of an int be? I assume it will be
 8 bytes, and long will be 16. So then what will a 2-byte integer be? It
 can't be a short, because that would be a 4-byte integer.

 I assume that floating point names will stay the same, as they are defined
 by the IEEE.
All sizes will stay the same when moving to 64 bits, with the following exceptions:

1) pointers will be 64 bits
2) object references will be 64 bits
3) dynamic array references will be 128 bits
4) pointer differences will be 64 bits
5) pointer offsets will be 64 bits
6) sizeof will be 64 bits
7) Whether real.size will stay 10 or be forced to 8 is still up in the air.

To this end, and to ensure portability of D source code to 64 bits, follow these rules:

1) Use the .sizeof property whenever depending on the size of a type.
2) Use ptrdiff_t (an alias in object.d) for signed pointer differences.
3) Use size_t (an alias in object.d) for type sizes, unsigned pointer offsets and array indices.

Note that 1, 2, and 3 correspond to C's portable uses of sizeof, ptrdiff_t, and size_t. In particular, ints and longs will remain the same size as for 32-bit computing.
Apr 24 2004
Ilya Minkov <minkov cs.tum.edu> writes:
imr1984 wrote:

 Well, if all sizes will stay the same, I'd like to know why D actually calls its
 integers by non-exact names (int, long, short, etc.). Why aren't they called
 int32, int64, int16, etc.?
According to the specification, all bit-widths are to be understood as minimal. The types *might* be upscaled in the (probably far) future. For the sake of portability of algorithms, one should keep in mind that the types might be larger someday, and such bit-width names would become very unfortunate then.

BTW, this explains why there are no bit rotation intrinsics in D.

-eye
Apr 24 2004
Dave Sieber <dsieber spamnot.sbcglobal.net> writes:
Ilya Minkov <minkov cs.tum.edu> wrote:

 According to the specification, all bit-widths are to be understood as 
 minimal. The types *might* be upscaled in the (probably far) future.
 For the sake of portability of algorithms, one should keep in mind
 that the types might be larger someday, and such bit-width names would
 become very unfortunate then.
Oh no, I certainly hope that's not true. Where is this mentioned in the spec?

--
dave
Apr 24 2004
Ilya Minkov <minkov cs.tum.edu> writes:
Dave Sieber wrote:

 Ilya Minkov <minkov cs.tum.edu> wrote:
 
According to the specification, all bit-widths are to be understood as 
minimal. The types *might* be upscaled in the (probably far) future.
For the sake of portability of algorithms, one should keep in mind
that the types might be larger someday, and such bit-width names would
become very unfortunate then.
Oh no, I certainly hope that's not true. Where is this mentioned in the spec?
http://www.digitalmars.com/d/portability.html

Right above your nose.

-eye
Apr 24 2004
"Matthew" <matthew.hat stlsoft.dot.org> writes:
That's a good point.

The answer, I suspect, is cosmetic appeal to C-family programmers.

"imr1984" <imr1984_member pathlink.com> wrote in message
news:c6ds7c$la7$1 digitaldaemon.com...
 Well, if all sizes will stay the same, I'd like to know why D actually calls its
 integers by non-exact names (int, long, short, etc.). Why aren't they called
 int32, int64, int16, etc.?

 In article <c6boe1$1ilt$2 digitaldaemon.com>, Walter says...
"imr1984" <imr1984_member pathlink.com> wrote in message
news:c6b398$f01$1 digitaldaemon.com...
 I'm curious: when a D compiler is made for 64-bit processors (in the near
 future, let's hope :) what will the size of an int be? I assume it will be
 8 bytes, and long will be 16. So then what will a 2-byte integer be? It
 can't be a short, because that would be a 4-byte integer.

 I assume that floating point names will stay the same, as they are defined
 by the IEEE.
All sizes will stay the same when moving to 64 bits, with the following exceptions:

1) pointers will be 64 bits
2) object references will be 64 bits
3) dynamic array references will be 128 bits
4) pointer differences will be 64 bits
5) pointer offsets will be 64 bits
6) sizeof will be 64 bits
7) Whether real.size will stay 10 or be forced to 8 is still up in the air.

To this end, and to ensure portability of D source code to 64 bits, follow these rules:

1) Use the .sizeof property whenever depending on the size of a type.
2) Use ptrdiff_t (an alias in object.d) for signed pointer differences.
3) Use size_t (an alias in object.d) for type sizes, unsigned pointer offsets and array indices.

Note that 1, 2, and 3 correspond to C's portable uses of sizeof, ptrdiff_t, and size_t. In particular, ints and longs will remain the same size as for 32-bit computing.
Apr 24 2004
"Carlos Santander B." <carlos8294 msn.com> writes:
"imr1984" <imr1984_member pathlink.com> wrote in message
news:c6ds7c$la7$1 digitaldaemon.com
| Well, if all sizes will stay the same, I'd like to know why D actually calls its
| integers by non-exact names (int, long, short, etc.). Why aren't they called
| int32, int64, int16, etc.?
|

You can use the aliases from std.stdint

-----------------------
Carlos Santander Bernal
Apr 24 2004
"Walter" <walter digitalmars.com> writes:
"imr1984" <imr1984_member pathlink.com> wrote in message
news:c6ds7c$la7$1 digitaldaemon.com...
 Well, if all sizes will stay the same, I'd like to know why D actually calls its
 integers by non-exact names (int, long, short, etc.). Why aren't they called
 int32, int64, int16, etc.?
It's just aesthetically unappealing. But if you prefer them, there are aliases for them in std.stdint.
Apr 24 2004
Juan C <Juan_member pathlink.com> writes:
<snip>
 integers by non-exact names (int, long, short, etc.). Why aren't they called
 int32, int64, int16, etc.?
It's just aesthetically unappealing. But if you prefer them, there are aliases for them in std.stdint.
</snip>

Ah, but they aid in making the code self-documenting -- isn't that one of your goals?
Apr 24 2004
"Walter" <walter digitalmars.com> writes:
"Juan C" <Juan_member pathlink.com> wrote in message
news:c6evnk$2kv2$1 digitaldaemon.com...
 <snip>
  integers by non-exact names (int, long, short, etc.). Why aren't they called
  int32, int64, int16, etc.?
It's just aesthetically unappealing. But if you prefer them, there are aliases for them in std.stdint.
</snip>

Ah, but they aid in making the code self-documenting -- isn't that one of your goals?
Yes, it is. I feel that goal is achieved, however, by specifying what the sizes of the types are in the spec, rather than leaving them unspecified as in C.
Apr 24 2004