
digitalmars.D - cent and ucent?

reply Alex Rønne Petersen <xtzgzorex gmail.com> writes:
Hi,

Are there any current plans to implement cent and ucent? I realize no 
current processors support 128-bit integers natively, but I figure they 
could be implemented the same way 64-bit integers are on 32-bit machines.

I know I could use std.bigint, but there's no good way to declare a 
bigint as fixed-size...
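A minimal sketch of that emulation in present-day D, with a made-up UCent struct standing in for the missing built-in: each value is two 64-bit words, and addition propagates the carry between them, just as 32-bit targets lower 64-bit additions to add/adc pairs.

struct UCent
{
    ulong lo, hi;

    // 128-bit addition from two 64-bit additions plus an explicit carry.
    UCent opBinary(string op : "+")(UCent rhs) const
    {
        UCent r;
        r.lo = lo + rhs.lo;
        // unsigned wrap-around on the low word means a carry occurred
        r.hi = hi + rhs.hi + (r.lo < lo ? 1 : 0);
        return r;
    }
}

unittest
{
    auto c = UCent(ulong.max, 0) + UCent(1, 0); // (2^64 - 1) + 1
    assert(c.lo == 0 && c.hi == 1);             // carry reached the high word
}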

-- 
- Alex
Jan 28 2012
next sibling parent reply "Daniel Murphy" <yebblies nospamgmail.com> writes:
"Alex Rønne Petersen" <xtzgzorex gmail.com> wrote in message 
news:jg26nr$29bh$1 digitalmars.com...
 Hi,

 Are there any current plans to implement cent and ucent? I realize no 
 current processors support 128-bit integers natively, but I figure they 
 could be implemented the same way 64-bit integers are on 32-bit machines.

 I know I could use std.bigint, but there's no good way to declare a bigint 
 as fixed-size...

 -- 
 - Alex
There are no current plans that I'm aware of. Implementing cent/ucent would probably require adding support for the type to the backend, and there are a limited number of people that can do that. It's much more likely that phobos will get something like Fixed!128 in addition to BigInt.
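A sketch of what such a library type could look like; Fixed!128 is hypothetical here, not an actual Phobos proposal, so the name and interface are assumptions.

// Hypothetical fixed-width integer: inline storage, so unlike BigInt
// it has plain value semantics and never allocates.
struct Fixed(uint bits) if (bits > 0 && bits % 64 == 0)
{
    ulong[bits / 64] words; // little-endian limbs
    // arithmetic would proceed word by word with carry/borrow propagation
}

static assert(Fixed!128.sizeof == 16); // fixed size, known at compile time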
Jan 28 2012
parent reply bearophile <bearophileHUGS lycos.com> writes:
Daniel Murphy:

 It's much more likely that phobos will get something like Fixed!128 in 
 addition to BigInt. 
Integer numbers have some properties that compilers use with built-in fixed-size numbers to optimize code. I think such optimizations are not performed on library-defined numbers like a Fixed!128 or BigInt. This means there are advantages to having cent/ucent/BigInt as built-ins.

Alternatively in theory special annotations are able to tell the compiler that a user-defined type shares some of the characteristics of integer numbers, allowing the compiler to optimize better at compile-time. But I think not even the Scala compiler is so powerful.

Bye,
bearophile
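A small example of the kind of strength reduction bearophile is describing, with a made-up MyInt wrapper for contrast:

// Built-in: the compiler knows x / 8 on a uint is just a right shift.
uint div8(uint x) { return x / 8; }

// Wrapper type: the same expression is now a call to opBinary!"/"; the
// optimizer only recovers the shift if it inlines and sees through it.
struct MyInt
{
    uint v;
    MyInt opBinary(string op : "/")(uint rhs) const { return MyInt(v / rhs); }
}

MyInt div8w(MyInt x) { return x / 8; }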
Jan 28 2012
parent reply "Daniel Murphy" <yebblies nospamgmail.com> writes:
"bearophile" <bearophileHUGS lycos.com> wrote in message 
news:jg2cku$2ljk$1 digitalmars.com...
 Integer numbers have some properties that compilers use with built-in 
 fixed-size numbers to optimize code. I think such optimizations are not 
 performed on library-defined numbers like a Fixed!128 or BigInt. This 
 means there are advantages to having cent/ucent/BigInt as built-ins.
Yes, but the advantages in implementation ease and portability currently favour a library solution. Do the gcc or llvm backends support 128 bit integers?
 Alternatively in theory special annotations are able to tell the compiler 
 that a user-defined type shares some of the characteristics of integer 
 numbers, allowing the compiler to optimize better at compile-time. But I 
 think not even the Scala compiler is so powerful.
This would still require backend support for many things.
Jan 28 2012
next sibling parent reply Alex Rønne Petersen <xtzgzorex gmail.com> writes:
On 29-01-2012 04:38, Daniel Murphy wrote:
 "bearophile"<bearophileHUGS lycos.com>  wrote in message
 news:jg2cku$2ljk$1 digitalmars.com...
 Integer numbers have some properties that compilers use with built-in
 fixed-size numbers to optimize code. I think such optimizations are not
 performed on library-defined numbers like a Fixed!128 or BigInt. This
 means there are advantages to having cent/ucent/BigInt as built-ins.
Yes, but the advantages in implementation ease and portability currently favour a library solution. Do the gcc or llvm backends support 128 bit integers?
Can't speak for GCC, but LLVM allows arbitrary-size integers. SDC maps cent/ucent to i128.
 Alternatively in theory special annotations are able to tell the compiler
 that a user-defined type shares some of the characteristics of integer
 numbers, allowing the compiler to optimize better at compile-time. But I
 think not even the Scala compiler is so powerful.
This would still require backend support for many things.
Most of LLVM's optimizers work on arbitrary-size ints.

--
- Alex
Jan 28 2012
parent reply "Daniel Murphy" <yebblies nospamgmail.com> writes:
 gcc does on 64-bit systems. long long is 128-bit on 64-bit Linux. I don't 
 know about llvm, but it's supposed to be gcc-compatible, so I assume that 
 it's the same.

 - Jonathan M Davis
 Can't speak for GCC, but LLVM allows arbitrary-size integers. SDC maps 
 cent/ucent to i128.

 - Alex
That's good news. I can't find any information about int128_t in 32-bit gcc, but if the support is already there then it's just the dmd backend that needs to be upgraded.
Jan 28 2012
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 1/28/2012 8:24 PM, Daniel Murphy wrote:
 That's good news.  I can't find any information about int128_t in 32-bit
 gcc, but if the support is already there then it's just the dmd backend that
 needs to be upgraded.
There is some support for 128 bit ints already in the backend, but it is incomplete. It's a bit low on the priority list.
Jan 28 2012
next sibling parent Jonathan M Davis <jmdavisProg gmx.com> writes:
On Saturday, January 28, 2012 20:41:38 Walter Bright wrote:
 There is some support for 128 bit ints already in the backend, but it is
 incomplete. It's a bit low on the priority list.
Gotta love the pun there, intended or otherwise... :) - Jonathan M Davis
Jan 28 2012
prev sibling parent reply "Daniel Murphy" <yebblies nospamgmail.com> writes:
"Walter Bright" <newshound2 digitalmars.com> wrote in message 
news:jg2im4$30qi$1 digitalmars.com...
 There is some support for 128 bit ints already in the backend, but it is 
 incomplete. It's a bit low on the priority list.
No rush. The backend is still a mystery to me.
Jan 28 2012
parent Walter Bright <newshound2 digitalmars.com> writes:
On 1/28/2012 9:20 PM, Daniel Murphy wrote:
 The backend is still a mystery
Better call Nancy Drew!
Jan 28 2012
prev sibling parent reply Jonathan M Davis <jmdavisProg gmx.com> writes:
On Sunday, January 29, 2012 14:38:41 Daniel Murphy wrote:
 "bearophile" <bearophileHUGS lycos.com> wrote in message
 news:jg2cku$2ljk$1 digitalmars.com...
 
 Integer numbers have some properties that compilers use with built-in
 fixed-size numbers to optimize code. I think such optimizations are not
 performed on library-defined numbers like a Fixed!128 or BigInt. This
 means there are advantages to having cent/ucent/BigInt as built-ins.
Yes, but the advantages in implementation ease and portability currently favour a library solution. Do the gcc or llvm backends support 128 bit integers?
gcc does on 64-bit systems. long long is 128-bit on 64-bit Linux. I don't know about llvm, but it's supposed to be gcc-compatible, so I assume that it's the same. - Jonathan M Davis
Jan 28 2012
parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 01/29/2012 04:56 AM, Jonathan M Davis wrote:
 On Sunday, January 29, 2012 14:38:41 Daniel Murphy wrote:
 "bearophile"<bearophileHUGS lycos.com>  wrote in message
 news:jg2cku$2ljk$1 digitalmars.com...

 Integer numbers have some properties that compilers use with built-in
 fixed-size numbers to optimize code. I think such optimizations are not
 performed on library-defined numbers like a Fixed!128 or BigInt. This
 means there are advantages to having cent/ucent/BigInt as built-ins.
Yes, but the advantages in implementation ease and portability currently favour a library solution. Do the gcc or llvm backends support 128 bit integers?
gcc does on 64-bit systems. long long is 128-bit on 64-bit Linux. I don't know about llvm, but it's supposed to be gcc-compatible, so I assume that it's the same. - Jonathan M Davis
long long is 64-bit on 64-bit linux.
Jan 29 2012
next sibling parent reply Jonathan M Davis <jmdavisProg gmx.com> writes:
On Sunday, January 29, 2012 16:26:02 Timon Gehr wrote:
 long long is 64-bit on 64-bit linux.
Are you sure? I'm _certain_ that we looked at this at work when we were sorting out issues with moving some of our products to 64-bit and found that long long was 128 bits. Checking... Well, you're right. Now I'm seriously confused. Hmmm... long double is 128-bit. Maybe that's what threw me off. Well, thanks for correcting me in either case. I thought that I'd had all of that figured out.

This is one of the many reasons why I think that any language which defined integers according to their relative size instead of their _absolute_ size (with the possible exception of some types which vary based on the machine so that you're using the most efficient integer for that machine or are able to index the full memory space) made a huge mistake. C's type scheme is nothing but trouble as far as integral sizes go IMHO. printf in particular is one of the more annoying things to worry about with cross-platform development thanks to varying integer size. Bleh. Enough of my whining.

In any case, gcc _does_ define __int128 ( http://gcc.gnu.org/onlinedocs/gcc/_005f_005fint128.html ), so as far as the question goes, gcc _does_ have 128-bit integers, even if long long isn't 128 bits on 64-bit systems.

- Jonathan M Davis
Jan 29 2012
next sibling parent Alex Rønne Petersen <xtzgzorex gmail.com> writes:
On 29-01-2012 23:26, Jonathan M Davis wrote:
 On Sunday, January 29, 2012 16:26:02 Timon Gehr wrote:
 long long is 64-bit on 64-bit linux.
 <snip>

 In any case, gcc _does_ define __int128 ( http://gcc.gnu.org/onlinedocs/gcc/_005f_005fint128.html ), so as far as the question goes, gcc _does_ have 128-bit integers, even if long long isn't 128 bits on 64-bit systems.

 - Jonathan M Davis
Well, with LLVM and GCC supporting it, there shouldn't be any problems with implementing it today, I guess.

--
- Alex
Jan 29 2012
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 1/29/2012 2:26 PM, Jonathan M Davis wrote:
 long double is 128-bit.
Sort of. It's 80 bits of useful data with 48 bits of unused padding.
Jan 29 2012
parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Sun, Jan 29, 2012 at 05:48:40PM -0800, Walter Bright wrote:
 On 1/29/2012 2:26 PM, Jonathan M Davis wrote:
long double is 128-bit.
Sort of. It's 80 bits of useful data with 48 bits of unused padding.
Really?! Ugh. Hopefully D handles it better?

T

--
One disk to rule them all, One disk to find them. One disk to bring them all and in the darkness grind them. In the Land of Redmond where the shadows lie. -- The Silicon Valley Tarot
Jan 29 2012
next sibling parent "Daniel Murphy" <yebblies nospamgmail.com> writes:
"H. S. Teoh" <hsteoh quickfur.ath.cx> wrote in message 
news:mailman.172.1327892267.25230.digitalmars-d puremagic.com...
 Sort of. It's 80 bits of useful data with 48 bits of unused padding.
Really?! Ugh. Hopefully D handles it better?
No. D has to be ABI compatible.
Jan 29 2012
prev sibling next sibling parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 01/30/2012 03:59 AM, H. S. Teoh wrote:
 On Sun, Jan 29, 2012 at 05:48:40PM -0800, Walter Bright wrote:
 On 1/29/2012 2:26 PM, Jonathan M Davis wrote:
 long double is 128-bit.
Sort of. It's 80 bits of useful data with 48 bits of unused padding.
Really?! Ugh. Hopefully D handles it better? T
It is what the x86 hardware supports.
Jan 30 2012
next sibling parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Mon, Jan 30, 2012 at 05:00:22PM +0100, Timon Gehr wrote:
 On 01/30/2012 03:59 AM, H. S. Teoh wrote:
On Sun, Jan 29, 2012 at 05:48:40PM -0800, Walter Bright wrote:
On 1/29/2012 2:26 PM, Jonathan M Davis wrote:
long double is 128-bit.
Sort of. It's 80 bits of useful data with 48 bits of unused padding.
Really?! Ugh. Hopefully D handles it better? T
It is what the x86 hardware supports.
I know, I was referring to the 48 bits of padding. Seems like such a waste.

T

--
What do you mean the Internet isn't filled with subliminal messages? What about all those buttons marked "submit"??
Jan 30 2012
prev sibling parent reply Stewart Gordon <smjg_1998 yahoo.com> writes:
On 30/01/2012 16:00, Timon Gehr wrote:
 On 01/30/2012 03:59 AM, H. S. Teoh wrote:
 On Sun, Jan 29, 2012 at 05:48:40PM -0800, Walter Bright wrote:
 On 1/29/2012 2:26 PM, Jonathan M Davis wrote:
 long double is 128-bit.
Sort of. It's 80 bits of useful data with 48 bits of unused padding.
Really?! Ugh. Hopefully D handles it better? T
It is what the x86 hardware supports.
As I try it, real.sizeof == 10. And according to DMC 8.42n (where is 8.52?), sizeof(long double) == 10 as well.

Stewart.
Jan 31 2012
parent reply "Marco Leise" <Marco.Leise gmx.de> writes:
Am 31.01.2012, 16:07 Uhr, schrieb Stewart Gordon <smjg_1998 yahoo.com>:

 On 30/01/2012 16:00, Timon Gehr wrote:
 On 01/30/2012 03:59 AM, H. S. Teoh wrote:
 On Sun, Jan 29, 2012 at 05:48:40PM -0800, Walter Bright wrote:
 On 1/29/2012 2:26 PM, Jonathan M Davis wrote:
 long double is 128-bit.
Sort of. It's 80 bits of useful data with 48 bits of unused padding.
Really?! Ugh. Hopefully D handles it better? T
It is what the x86 hardware supports.
 As I try it, real.sizeof == 10. And according to DMC 8.42n (where is 8.52?), sizeof(long double) == 10 as well.

 Stewart.
pragma(msg, real.sizeof);

Prints the expected platform alignment for me:

DMD64 / GDC64: 16LU
DMD32: 12LU
Jan 31 2012
next sibling parent reply Stewart Gordon <smjg_1998 yahoo.com> writes:
On 31/01/2012 18:47, Marco Leise wrote:
<snip>
 pragma(msg, real.sizeof);
Prints 10u for me (2.057, Win32).
 Prints the expected platform alignment for me:

 DMD64 / GDC64: 16LU
 DMD32: 12LU
That isn't alignment, that's padding built into the type. I assume you're testing on Linux. I've heard before that long double/real is 12 bytes under Linux because it includes 2 bytes of padding. I don't know why Linux does it that way, but there you go. Stewart.
Jan 31 2012
parent Walter Bright <newshound2 digitalmars.com> writes:
On 1/31/2012 4:28 PM, Stewart Gordon wrote:
 That isn't alignment, that's padding built into the type. I assume you're
 testing on Linux. I've heard before that long double/real is 12 bytes under
 Linux because it includes 2 bytes of padding. I don't know why Linux does it
 that way, but there you go.
Both the alignment and padding of reals changes from platform to platform.
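Both quantities can be checked directly; the output varies by target, as discussed above.

pragma(msg, real.sizeof);  // 10 on Win32, 12 on 32-bit Linux, 16 on x86-64
pragma(msg, real.alignof); // the alignment likewise differs per platform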
Jan 31 2012
prev sibling parent Iain Buclaw <ibuclaw ubuntu.com> writes:
On 31 January 2012 18:47, Marco Leise <Marco.Leise gmx.de> wrote:
 Am 31.01.2012, 16:07 Uhr, schrieb Stewart Gordon <smjg_1998 yahoo.com>:


 On 30/01/2012 16:00, Timon Gehr wrote:
 On 01/30/2012 03:59 AM, H. S. Teoh wrote:
 On Sun, Jan 29, 2012 at 05:48:40PM -0800, Walter Bright wrote:
 On 1/29/2012 2:26 PM, Jonathan M Davis wrote:
 long double is 128-bit.
Sort of. It's 80 bits of useful data with 48 bits of unused padding.
Really?! Ugh. Hopefully D handles it better? T
It is what the x86 hardware supports.
 As I try it, real.sizeof == 10. And according to DMC 8.42n (where is 8.52?), sizeof(long double) == 10 as well.

 Stewart.

 pragma(msg, real.sizeof);

 Prints the expected platform alignment for me:

 DMD64 / GDC64: 16LU
It varies from platform to platform, and depending on what target flags you pass to GDC.

--
Iain Buclaw
*(p < e ? p++ : p) = (c & 0x0f) + '0';
Feb 01 2012
prev sibling parent reply "Marco Leise" <Marco.Leise gmx.de> writes:
Am 30.01.2012, 03:59 Uhr, schrieb H. S. Teoh <hsteoh quickfur.ath.cx>:

 On Sun, Jan 29, 2012 at 05:48:40PM -0800, Walter Bright wrote:
 On 1/29/2012 2:26 PM, Jonathan M Davis wrote:
long double is 128-bit.
Sort of. It's 80 bits of useful data with 48 bits of unused padding.
Really?! Ugh. Hopefully D handles it better? T
From Wikipedia: "On the x86 architecture, most compilers implement long double as the 80-bit extended precision type supported by that hardware (sometimes stored as 12 or 16 bytes to maintain data structure alignment)." That's all there is to know I think.
Jan 30 2012
next sibling parent Don Clugston <dac nospam.com> writes:
On 30/01/12 18:06, Marco Leise wrote:
 Am 30.01.2012, 03:59 Uhr, schrieb H. S. Teoh <hsteoh quickfur.ath.cx>:

 On Sun, Jan 29, 2012 at 05:48:40PM -0800, Walter Bright wrote:
 On 1/29/2012 2:26 PM, Jonathan M Davis wrote:
long double is 128-bit.
Sort of. It's 80 bits of useful data with 48 bits of unused padding.
Really?! Ugh. Hopefully D handles it better? T
From Wikipedia: "On the x86 architecture, most compilers implement long double as the 80-bit extended precision type supported by that hardware (sometimes stored as 12 or 16 bytes to maintain data structure alignment)." That's all there is to know I think.
Not quite all. An 80-bit double, padded with zeros to 128 bits, is binary compatible with a quadruple real. (Not much use in practice, as far as I know).
Jan 30 2012
prev sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 1/30/2012 9:06 AM, Marco Leise wrote:
 "On the x86 architecture, most compilers implement long double as the 80-bit
 extended precision type supported by that hardware (sometimes stored as 12 or
16
 bytes to maintain data structure alignment)."

 That's all there is to know I think.
10 bytes on Windows. Anyhow, as far as the C ABI goes (which is what this is), "Ours is not to Reason Why, Ours is to Implement or Fail."
Jan 31 2012
prev sibling next sibling parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Sun, Jan 29, 2012 at 02:26:55PM -0800, Jonathan M Davis wrote:
[...]
 This is one of the many reasons why I think that any language which
 defined integers according to their relative size instead of their
 _absolute_ size (with the possible exception of some types which vary
 based on the machine so that you're using the most efficient integer
 for that machine or are able to index the full memory space) made a
 huge mistake.
IMNSHO, you need both, and I can't say I'm 100% satisfied with how D uses 'int' to mean 32-bit integer no matter what. The problem with C is that there's no built-in type for guaranteeing 32-bits (stdint.h came a bit too late into the picture--by then, people had already formed too many bad habits). There's a time when code needs to be able to say "please give me the default fastest int type on the machine", and a time for code to say "I want the int type with exactly n bits 'cos I'm assuming specific properties of n-bit numbers".
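For comparison, D's split in practice: the built-ins are fixed everywhere, and the machine-dependent case gets a dedicated name.

static assert(int.sizeof == 4);  // guaranteed on every D platform
static assert(long.sizeof == 8); // likewise fixed
pragma(msg, size_t.sizeof);      // 4 or 8, tracking the address space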
 C's type scheme is nothing but trouble as far as integral sizes go
 IMHO. printf in particular is one of the more annoying things to worry
 about with cross-platform development thanks to varying integer size.
 Bleh. Enough of my whining.
Yeah, size_t especially drives me up the wall. Is it %u, %lu, or %llu? I think either gcc or C99 actually has a dedicated printf format for size_t, except that C++ doesn't include parts of C99, so you end up with a format string #ifdef nightmare no matter what you do. I'm so glad that %s takes care of it all in D. Yet another thing D has done right.

T

--
MSDOS = MicroSoft's Denial Of Service
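For instance, a minimal illustration of the point:

import std.stdio;

void main()
{
    size_t n = 42;
    writefln("%s", n);        // no %u / %lu / %llu / %zu guessing
    writefln("%s", long.max); // the same specifier at any width
}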
Jan 29 2012
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 1/29/2012 3:31 PM, H. S. Teoh wrote:
 Yeah, size_t especially drives me up the wall. Is it %u, %lu, or %llu?
 I think either gcc or C99 actually has a dedicated printf format for
 size_t, except that C++ doesn't include parts of C99, so you end up with
 format string #ifdef nightmare no matter what you do. I'm so glad that
 %s takes care of it all in D. Yet another thing D has done right.
size_t does have a C99 Standard official format, %z. The trouble is: 1. many compilers *still* don't implement it; 2. that doesn't do you any good for any other typedefs that change size.

printf is the single biggest nuisance in porting code between 32 and 64 bits.
Jan 29 2012
next sibling parent Jonathan M Davis <jmdavisProg gmx.com> writes:
On Sunday, January 29, 2012 17:57:39 Walter Bright wrote:
 On 1/29/2012 3:31 PM, H. S. Teoh wrote:
 Yeah, size_t especially drives me up the wall. Is it %u, %lu, or %llu?
 I think either gcc or C99 actually has a dedicated printf format for
 size_t, except that C++ doesn't include parts of C99, so you end up with
 format string #ifdef nightmare no matter what you do. I'm so glad that
 %s takes care of it all in D. Yet another thing D has done right.
size_t does have a C99 Standard official format, %z. The trouble is: 1. many compilers *still* don't implement it; 2. that doesn't do you any good for any other typedefs that change size. printf is the single biggest nuisance in porting code between 32 and 64 bits.
It's even worse with code which you're trying to have be cross-platform between 32-bit and 64-bit. Microsoft added I32 and I64, which helps, but then you still need to add a wrapper to printf for Posix to handle them unless you want to ifdef all of your printf calls. About the only positive thing that I can say about that whole mess is that it's because of that that I learned that string literals are unaffected by macros in C/C++.

The fact that I can just do %s with writefln in D and not worry about it is so fantastic it's not even funny.

- Jonathan M Davis
Jan 29 2012
prev sibling parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Sun, Jan 29, 2012 at 05:57:39PM -0800, Walter Bright wrote:
 On 1/29/2012 3:31 PM, H. S. Teoh wrote:
Yeah, size_t especially drives me up the wall. Is it %u, %lu, or %llu?
I think either gcc or C99 actually has a dedicated printf format for
size_t, except that C++ doesn't include parts of C99, so you end up with
format string #ifdef nightmare no matter what you do. I'm so glad that
%s takes care of it all in D. Yet another thing D has done right.
size_t does have a C99 Standard official format %z. The trouble is, 1. many compilers *still* don't implement it.
And C++ doesn't officially support C99. Prior to C++11 anyway, but I don't foresee myself doing any major projects in C++11 now that I have something better, i.e., D. I just can't see myself doing any more personal projects in C++, and at my day job we actually migrated from C++ to C a few years ago, and we're still happy we did so. (Don't ask, you don't want to know. When a single function call requires 6 layers of needless abstraction including a layer involving fwrite, fork, and exec, and when dtors do useful work other than cleanup, it's time to call it quits.)
 2. that doesn't do you any good for any other typedef's that change
 size.
 
 printf is the single biggest nuisance in porting code between 32 and
 64 bits.
[...] It could've been worse, though. We're lucky (most) compiler vendors decided not to make int 64 bits. That alone would've broken 90% of existing C code out there, some in obvious ways and others in subtle ways that you only find out after it's deployed on your client's production system.

T

--
Two wrongs don't make a right; but three rights do make a left...
Jan 29 2012
prev sibling next sibling parent Iain Buclaw <ibuclaw ubuntu.com> writes:
On 29 January 2012 22:26, Jonathan M Davis <jmdavisProg gmx.com> wrote:
 On Sunday, January 29, 2012 16:26:02 Timon Gehr wrote:
 long long is 64-bit on 64-bit linux.
 <snip>

 In any case, gcc _does_ define __int128 ( http://gcc.gnu.org/onlinedocs/gcc/_005f_005fint128.html ), so as far as the question goes, gcc _does_ have 128-bit integers, even if long long isn't 128 bits on 64-bit systems.

 - Jonathan M Davis
Can be turned on via compiler switch: -m128bit-long-double

or set at the configure stage: --with-long-double-128

Regards

--
Iain Buclaw
*(p < e ? p++ : p) = (c & 0x0f) + '0';
Can be turned on via compiler switch: -m128bit-long-double or set at the configure stage: --with-long-double-128 Regards -- Iain Buclaw *(p < e ? p++ : p) = (c & 0x0f) + '0';
Jan 29 2012
prev sibling parent Iain Buclaw <ibuclaw ubuntu.com> writes:
On 30 January 2012 03:17, Iain Buclaw <ibuclaw ubuntu.com> wrote:
 On 29 January 2012 22:26, Jonathan M Davis <jmdavisProg gmx.com> wrote:
 On Sunday, January 29, 2012 16:26:02 Timon Gehr wrote:
 long long is 64-bit on 64-bit linux.
 <snip>

 In any case, gcc _does_ define __int128 ( http://gcc.gnu.org/onlinedocs/gcc/_005f_005fint128.html ), so as far as the question goes, gcc _does_ have 128-bit integers, even if long long isn't 128 bits on 64-bit systems.

 - Jonathan M Davis
Can be turned on via compiler switch: -m128bit-long-double or set at the configure stage: --with-long-double-128
Oh wait... I've just re-read that and realised it's to do with reals (must be 3am in the morning here).

--
Iain Buclaw
*(p < e ? p++ : p) = (c & 0x0f) + '0';
Jan 29 2012
prev sibling next sibling parent reply Jonathan M Davis <jmdavisProg gmx.com> writes:
On Sunday, January 29, 2012 15:31:57 H. S. Teoh wrote:
 On Sun, Jan 29, 2012 at 02:26:55PM -0800, Jonathan M Davis wrote:
 [...]
 
 This is one of the many reasons why I think that any language which
 defined integers according to their relative size instead of their
 _absolute_ size (with the possible exception of some types which vary
 based on the machine so that you're using the most efficient integer
 for that machine or are able to index the full memory space) made a
 huge mistake.
IMNSHO, you need both, and I can't say I'm 100% satisfied with how D uses 'int' to mean 32-bit integer no matter what. The problem with C is that there's no built-in type for guaranteeing 32-bits (stdint.h came a bit too late into the picture--by then, people had already formed too many bad habits). There's a time when code needs to be able to say "please give me the default fastest int type on the machine", and a time for code to say "I want the int type with exactly n bits 'cos I'm assuming specific properties of n-bit numbers".
In an ideal language, I'd probably go with an integer type with an unspecified number of bits which is used when you don't care about the size of the integer. It'll be whatever is fastest for the particular architecture that it's compiled on, and it'll probably be guaranteed to be _at least_ a particular size (probably 32 bits at this point) so that you don't have to worry about average-sized numbers not fitting. Also, you should probably have a type like size_t that deals with the differing sizes of address spaces. But _all_ other types have a fixed size.

So, you don't get this nonsense of int is this on that machine, and long is that, and long long is something else, etc. You use them when you need a variable to be a particular size or when you need a guarantee that a larger value will fit in it. The way that C did it with _everything_ varying is horrific IMHO; D's scheme, where the built-in types are fixed in size, is _far_ better. But there are definitely arguments for having an integral type which is the most efficient for whatever machine that it's compiled on, and D doesn't really have that. You'd probably have to use something like c_long if you really wanted that.

- Jonathan M Davis
Jan 29 2012
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 1/29/2012 4:30 PM, Jonathan M Davis wrote:
 But there are
 definitely arguments for having an integral type which is the most efficient
for
 whatever machine that it's compiled on, and D doesn't really have that. You'd
 probably have to use something like c_long if you really wanted that.
I believe the notion of "most efficient integer type" was obsolete 10 years ago. In any case, D is hardly deficient even if the notion is valid. Just use an alias.

C has varying sizes for builtin types and fixed sizes for aliases. D is just the reverse - fixed builtin sizes and varying alias sizes. My experience with both languages is that D's approach is far superior. C's varying sizes make it clumsy to write portable numeric code, and the varying size of wchar_t is such a disaster that it is completely useless - C++11 had to come up with completely new basic types to support UTF.
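A sketch of the alias approach; nativeInt is a made-up name, and the version logic is only illustrative.

// Pick the "efficient" machine word once, in one place.
version (X86_64)
    alias nativeInt = long; // 64-bit general-purpose registers
else
    alias nativeInt = int;  // assume a 32-bit target otherwise

nativeInt counter; // use sites never mention the width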
Jan 29 2012
parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Sun, Jan 29, 2012 at 06:23:33PM -0800, Walter Bright wrote:
[...]
 C has varying size for builtin types and fixed size for aliases. D is
 just the reverse - fixed builtin sizes and varying alias sizes.  My
 experience with both languages is that D's approach is far superior.
I agree. It's not perfect, but it definitely beats the C system.
 C's varying sizes makes it clumsy to write portable numeric code, and
 the varying size of wchar_t is such a disaster that it is completely
 useless - the C++11 had to come up with completely new basic types to
 support UTF.
Not to mention the totally non-committal way the specs were written about wchar_t: it *could* be UTF-16, or it *could* be UTF-32, or it *could* be a non-Unicode encoding, we don't guarantee anything. Oh, you want Unicode, right? Well, for that you need to consult your OS-specific documentation on how to set up 15 different environment variables, all of which have non-committal descriptions, and any of which may or may not switch the system into/out of Unicode mode. Oh, you want a function to guarantee Unicode mode? We're sorry, that's not our department.

Yeah. Useless is just about right. It's almost as bad as certain parts of the IPMI spec, which I had the misfortune to be given a project to code for at my day job once.

T

--
Amateurs built the Ark; professionals built the Titanic.
Jan 29 2012
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 1/29/2012 6:46 PM, H. S. Teoh wrote:
 Not to mention the totally non-committal way the specs were written about
 wchar_t: it *could* be UTF-16, or it *could* be UTF-32, or it *could* be
 a non-Unicode encoding, we don't guarantee anything. Oh, you want
 Unicode, right? Well, for that you need to consult your OS-specific
 documentation on how to set up 15 different environment variables, all
 of which have non-committal descriptions, and any of which may or may not
 switch the system into/out of Unicode mode. Oh, you want a function to
 guarantee Unicode mode? We're sorry, that's not our department.
I've had people tell me this was an advantage because there are some chips where chars, shorts, ints, and wchars are all 32 bits. Isn't it awesome that the C standard supports that? The only problem with that is that while the C standard supports it, I can't think of a single C program that would work on such a system without a major, and I mean major, rewrite. It's a useless facet of the standard.
Jan 29 2012
parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Sun, Jan 29, 2012 at 07:47:26PM -0800, Walter Bright wrote:
 On 1/29/2012 6:46 PM, H. S. Teoh wrote:
Not to mention the totally non-committal way the specs were written
about wchar_t: it *could* be UTF-16, or it *could* be UTF-32, or it
*could* be a non-Unicode encoding, we don't guarantee anything. Oh,
you want Unicode, right? Well, for that you need to consult your
OS-specific documentation on how to set up 15 different environment
variables, all of which have non-committal descriptions, and any of
which may or may not switch the system into/out of Unicode mode. Oh,
you want a function to guarantee Unicode mode? We're sorry, that's
not our department.
I've had people tell me this was an advantage because there are some chips where chars, shorts, ints, and wchars are all 32 bits. Isn't it awesome that the C standard supports that? The only problem with that is that while the C standard supports it, I can't think of a single C program that would work on such a system without a major, and I mean major, rewrite. It's a useless facet of the standard.
I can just see all those string malloc()'s screaming in pain as buffer overflows trample them to their miserable deaths:

void f(int length)
{
    char *p = (char *)malloc(length);  /* yikes! */
    int i;
    for (i = 0; i < length; i++) {
        /* do something with p[i] ... */
    }
    ...
}

Is there an actual, real, working C compiler that has char sized as anything but 8 bits?? This one thing alone would kill, oh, 99% of all C code?

T

--
Klein bottle for rent ... inquire within. -- Stephen Mulraney
Jan 29 2012
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 1/29/2012 8:21 PM, H. S. Teoh wrote:
 On Sun, Jan 29, 2012 at 07:47:26PM -0800, Walter Bright wrote:
 I've had people tell me this was an advantage because there are some
 chips where chars, shorts, ints, and wchars are all 32 bits. Isn't it
 awesome that the C standard supports that?
Is there an actual, real, working C compiler that has char sized as anything but 8 bits?? This one thing alone would kill, oh, 99% of all C code?
Yes. Those chips exist, and there are Standard C compilers for them. But every bit of C code compiled for them has to be custom rewritten for it.
Jan 29 2012
parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Sun, Jan 29, 2012 at 09:24:48PM -0800, Walter Bright wrote:
 On 1/29/2012 8:21 PM, H. S. Teoh wrote:
On Sun, Jan 29, 2012 at 07:47:26PM -0800, Walter Bright wrote:
I've had people tell me this was an advantage because there are some
chips where chars, shorts, ints, and wchars are all 32 bits. Isn't it
awesome that the C standard supports that?
Is there an actual, real, working C compiler that has char sized as anything but 8 bits?? This one thing alone would kill, oh, 99% of all C code?
Yes. Those chips exist, and there are Standard C compilers for them. But every bit of C code compiled for them has to be custom rewritten for it.
Interesting. How would D fare in that kind of environment, I wonder? I suppose it shouldn't be a big deal, since you have to custom rewrite everything anyways -- just use int32 throughout.

T

--
Lawyer: (n.) An innocence-vending machine, the effectiveness of which depends on how much money is inserted.
Jan 29 2012
parent Walter Bright <newshound2 digitalmars.com> writes:
On 1/29/2012 10:39 PM, H. S. Teoh wrote:
 Interesting. How would D fare in that kind of environment, I wonder? I
 suppose it shouldn't be a big deal, since you have to custom rewrite
 everything anyways -- just use int32 throughout.
You could write a custom D compiler for it.
Jan 30 2012
prev sibling next sibling parent Stewart Gordon <smjg_1998 yahoo.com> writes:
On 29/01/2012 01:17, Alex Rønne Petersen wrote:
 Hi,

 Are there any current plans to implement cent and ucent?
<snip>

Whether it's implemented any time soon or not, it's high time the _syntax_ allowed their use as basic types, for forward/backward compatibility's sake.

http://d.puremagic.com/issues/show_bug.cgi?id=785

Stewart.
Jan 31 2012
prev sibling parent ponce <spam spam.org> writes:
On 29/01/2012 02:17, Alex Rønne Petersen wrote:
 Are there any current plans to implement cent and ucent?
I implemented cent and ucent as a library, using the division algorithm from Ian Kaplan.

https://github.com/p0nce/gfm/blob/master/math/softcent.d

Suggestions welcome.
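Hypothetical usage of the library type; the import path, constructor, and operators shown are assumptions based on the repository layout, so check the linked source.

import gfm.math.softcent; // import path assumed from the repository layout

void main()
{
    softcent a = 1;  // assuming construction from an integer
    a = a << 100;    // values far past ulong's range
    auto q = a / 3;  // division via Ian Kaplan's algorithm, per the post
}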
Mar 29 2012