
digitalmars.D - ^^ limitation

reply "Tyro[17]" <nospam home.com> writes:
I believe the following two lines of code should produce the same 
output. Is there a specific reason why D doesn't allow this? Of course the 
only way to store the result would be to put it into a BigInt variable 
or convert it to a string, but I don't think that should prevent the compiler 
from producing the correct value.

(101^^1000).to!string.writeln;
(BigInt(101)^^1000).writeln;
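For scale, the exact result can be checked in a language with built-in bigints; a quick Python sketch (an illustration, not D) shows how far it exceeds any fixed-size integer type:

```python
import math

# 101 ** 1000 computed exactly with Python's arbitrary-precision ints.
value = 101 ** 1000
digits = len(str(value))
print(digits)  # 2005 decimal digits -- nowhere near fitting in 32 or 64 bits

# The same digit count, predicted analytically: floor(1000 * log10(101)) + 1
print(math.floor(1000 * math.log10(101)) + 1)
```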

Regards,
Andrew
Apr 24 2012
next sibling parent reply Marco Leise <Marco.Leise gmx.de> writes:
On Wed, 25 Apr 2012 06:00:31 +0900,
"Tyro[17]" <nospam home.com> wrote:

 I believe the following two lines of code should produce the same 
 output. Is there a specific reason why D doesn't allow this? Of course the 
 only way to store the result would be to put it into a BigInt variable 
 or convert it to a string, but I don't think that should prevent the compiler 
 from producing the correct value.
 
 (101^^1000).to!string.writeln;
 (BigInt(101)^^1000).writeln;
 
 Regards,
 Andrew

Well... what do you want to hear? I like to know that the result of mathematical operations doesn't change its type depending on the ability to compile-time evaluate it and the magnitude of the result. Imagine the mess when the numbers are replaced by constants that are defined elsewhere. This may work in languages that are not strongly typed, but we rely on the exact data type of an expression.

You are calling a function called to!string with the overload that takes an int. A BigInt or a string may be handled entirely differently by to!string. The compiler doesn't know what BigInt is or what to!string is supposed to do. It cannot make the assumption that passing a string to it will work the same way as passing an int. What you would need is for int and BigInt to have the same semantics everywhere. But once you leave the language, by calling a C function for example, you need an explicit 32-bit int again.

If you need this functionality, use a programming language that has type classes and seamlessly switches between int/BigInt types, but drops the systems-language attribute. You'll find languages that support unlimited integers and floats without friction. Or you use BigInt everywhere. Maybe Python or Mathematica.

-- Marco
Apr 24 2012
parent Timon Gehr <timon.gehr gmx.ch> writes:
On 04/27/2012 03:55 AM, James Miller wrote:
 On Friday, 27 April 2012 at 00:56:13 UTC, Tryo[17] wrote:
 D provides an auto type facility that determines the type
 that can best accommodate a particular value. What prevents
 it from determining that the only type that can accommodate
 that value is a BigInt? The same way it decides between int,
 long, ulong, etc.

Because BigInt is part of the library, not the language.
 Why couldn't to!string be overloaded to take a BigInt?

 The point is this: currently 2^^31 will produce a negative long
 value on my system. Not that the value is wrong; the variable
 simply cannot support the magnitude of the result for this
 calculation, so it wraps around and produces a negative value.
 However, 2^^n for n>=32 produces a value of 0. Why not
 produce the value and let the user choose what to put it into?
 Why not make the language BigInt aware? What is the
 negative effect of taking BigInt out of the library and making it
 an official part of the language?

 Because this is a native language. The idea is to be close to the
 hardware, and that means fixed-sized integers, fixed-sized floats, and
 having to live with that.
 
 Making BigInt part of the language opens up the door for a whole host of
 other things to become "part of the language". While we're at it, why
 don't we make matrices part of the language, and regexes, and we might
 as well move all that datetime stuff into the language too. Oh, and I
 would love to see all the signals stuff in there too.
 
 The reason we don't put everything in the language is that the more you
 put into the language, the harder it is to move. There are more than
 enough bugs in D

s/in D/in the DMD frontend/
 right now, and adding more features into the language
 means a higher burden for core development. There is a trend of trying
 to move away from tight integration into the compiler, and by extension
 the language. Associative arrays are being worked on to make most of the
 work be done in object.d, with the end result being the compiler only
 has to convert T[U] into AA(T, U) and do a similar conversion for aa
 literals. This means that there is no extra fancy work for the compiler
 to do to support AAs.

 Also, D is designed for efficiency. If I don't want a BigInt, and all of
 the extra memory that comes with it, then I would rather have an error. I
 don't want what /should/ be a fast system to slow down because I
 accidentally type 1 << 33 instead of 1 << 23; I want an error of some sort.

 The real solution here isn't to just blindly allow arbitrary features to
 be "in the language" as it were, but to make it easier to integrate
 library solutions so they feel like part of the language.

 --
 James Miller

Apr 27 2012
prev sibling next sibling parent "bearophile" <bearophileHUGS lycos.com> writes:
Marco Leise:

 If you need this functionality use a programming language that 
 has type classes and seamlessly switches between int/BigInt 
 types, but drops the systems language attribute.

I think Lisp (which, besides allowing you to use fixnums that can't grow, is often used with tagged integers that switch to multi-precision when the number grows) was used as a system language too (Symbolics?).

Bye,
bearophile
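Python behaves like the tagged-integer Lisps described above: small and huge values share one language-level integer type, with promotion to multi-precision handled transparently underneath (a CPython sketch, not D):

```python
import sys

a = 2 ** 10   # fits in a machine word
b = 2 ** 100  # silently promoted to multi-precision storage

print(type(a) is type(b))                   # one integer type at the language level
print(sys.getsizeof(a) < sys.getsizeof(b))  # but different storage underneath (CPython detail)
```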
Apr 24 2012
prev sibling next sibling parent Don Clugston <dac nospam.com> writes:
On 24/04/12 23:00, Tyro[17] wrote:
 I believe the following two lines of code should produce the same
 output. Is there a specific reason why D doesn't allow this? Of course the
 only way to store the result would be to put it into a BigInt variable
 or convert it to a string, but I don't think that should prevent the compiler
 from producing the correct value.

 (101^^1000).to!string.writeln;
 (BigInt(101)^^1000).writeln;

 Regards,
 Andrew

Because BigInt is part of the library, not part of the compiler, so the compiler doesn't know it exists. What would be the type of 3^^5? Would it be a BigInt as well? This kind of thing doesn't work well in C-family languages.
Apr 25 2012
prev sibling next sibling parent "Tryo[17]" <nospam home.com> writes:
On Tuesday, 24 April 2012 at 22:45:37 UTC, Marco Leise wrote:
 On Wed, 25 Apr 2012 06:00:31 +0900,
 "Tyro[17]" <nospam home.com> wrote:

 I believe the following two lines of code should produce the 
 same output. Is there a specific reason why D doesn't allow 
 this? Of course the only way to store the result would be to 
 put it into a BigInt variable or convert it to a string, but I 
 don't think that should prevent the compiler from producing the 
 correct value.
 
 (101^^1000).to!string.writeln;
 (BigInt(101)^^1000).writeln;
 
 Regards,
 Andrew

Well... what do you want to hear? I like to know that the

Honestly, I just want to hear the rationale for why things are the way they are. I see things possible in other languages that I know are not as powerful as D, and I wonder why... If I don't understand enough to make a determination on my own, I ask.
 result of mathematical operations doesn't change its type 
 depending on the ability to  compile-time evaluate it and the 
 magnitude of the result. Imagine the mess when the numbers are 
 replaced by constants that are defined else where. This may

D provides an auto type facility that determines the type that can best accommodate a particular value. What prevents it from determining that the only type that can accommodate that value is a BigInt? The same way it decides between int, long, ulong, etc.
 work in languages that are not strongly typed, but we rely on 
 the exact data type of an expression. You are calling a 
 function called to!string with the overload that takes an int.

Why couldn't to!string be overloaded to take a BigInt?
 A BigInt or a string may be handled entirely differently by 
 to!string. The compiler doesn't know what either BigInt is or 
 what to!string is supposed to do. It cannot make the assumption

The point is this: currently 2^^31 will produce a negative long value on my system. Not that the value is wrong; the variable simply cannot support the magnitude of the result for this calculation, so it wraps around and produces a negative value. However, 2^^n for n>=32 produces a value of 0. Why not produce the value and let the user choose what to put it into? Why not make the language BigInt aware? What is the negative effect of taking BigInt out of the library and making it an official part of the language?
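The wraparound in question can be sketched outside D; a Python illustration (hypothetical helper, since Python ints never wrap on their own):

```python
# Simulate a 32-bit int's wraparound: keep the low 32 bits, then
# reinterpret them as a signed two's-complement value.
def wrap_int32(v):
    v &= 0xFFFFFFFF  # reduction modulo 2^32, like a fixed 32-bit register
    return v - 0x100000000 if v >= 0x80000000 else v

print(wrap_int32(2 ** 31))  # sign bit set: -2147483648, hence the negative value
print(wrap_int32(2 ** 32))  # every set bit lies above bit 31: 0
print(wrap_int32(2 ** 33))  # likewise 0
```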
 that passing a string to it will work the same way as passing 
 an int. What you would need is that int and BigInt have the 
 same semantics everywhere. But once you leave the language by 
 calling a C function for example you need an explicit 32-bit 
 int again.
 If you need this functionality use a programming language that 
 has type classes and seamlessly switches between int/BigInt 
 types, but drops the systems language attribute. You'll find 
 languages that support unlimited integers and floats without 
 friction. Or you use BigInt everywhere. Maybe Python or 
 Mathematica.

I am not interested in another language (maybe in the future), simply an understanding of why things are the way they are.

Andrew
Apr 26 2012
prev sibling next sibling parent "James Miller" <james aatch.net> writes:
On Friday, 27 April 2012 at 00:56:13 UTC, Tryo[17] wrote:
 D provides an auto type facility that determines the type
 that can best accommodate a particular value. What prevents
 it from determining that the only type that can accommodate
 that value is a BigInt? The same way it decides between int,
 long, ulong, etc.

Because BigInt is part of the library, not the language.
 Why couldn't to!string be overloaded to take a BigInt?

 The point is this: currently 2^^31 will produce a negative long
 value on my system. Not that the value is wrong; the variable
 simply cannot support the magnitude of the result for this
 calculation, so it wraps around and produces a negative value.
 However, 2^^n for n>=32 produces a value of 0. Why not
 produce the value and let the user choose what to put it into?
 Why not make the language BigInt aware? What is the
 negative effect of taking BigInt out of the library and making it
 an official part of the language?

Because this is a native language. The idea is to be close to the hardware, and that means fixed-sized integers, fixed-sized floats, and having to live with that.

Making BigInt part of the language opens up the door for a whole host of other things to become "part of the language". While we're at it, why don't we make matrices part of the language, and regexes, and we might as well move all that datetime stuff into the language too. Oh, and I would love to see all the signals stuff in there too.

The reason we don't put everything in the language is that the more you put into the language, the harder it is to move. There are more than enough bugs in D right now, and adding more features into the language means a higher burden for core development. There is a trend of trying to move away from tight integration into the compiler, and by extension the language. Associative arrays are being worked on to make most of the work be done in object.d, with the end result being that the compiler only has to convert T[U] into AA(T, U) and do a similar conversion for AA literals. This means that there is no extra fancy work for the compiler to do to support AAs.

Also, D is designed for efficiency. If I don't want a BigInt, and all of the extra memory that comes with it, then I would rather have an error. I don't want what /should/ be a fast system to slow down because I accidentally type 1 << 33 instead of 1 << 23; I want an error of some sort.

The real solution here isn't to just blindly allow arbitrary features to be "in the language" as it were, but to make it easier to integrate library solutions so they feel like part of the language.

--
James Miller
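The 1 << 33 slip can be made concrete by masking to 32 bits (a Python sketch of the fixed-width behavior, not D itself):

```python
MASK32 = 0xFFFFFFFF  # the low 32 bits, i.e. reduction modulo 2^32

print((1 << 23) & MASK32)  # 8388608: fits comfortably in 32 bits
print((1 << 33) & MASK32)  # 0: bit 33 lies entirely outside a 32-bit int
```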
Apr 26 2012
prev sibling parent Marco Leise <Marco.Leise gmx.de> writes:
On Fri, 27 Apr 2012 02:56:11 +0200,
"Tryo[17]" <nospam home.com> wrote:

 On Tuesday, 24 April 2012 at 22:45:37 UTC, Marco Leise wrote:
 Well... what do you want to hear? I like to know that the

Honestly, I just want to hear the rationale for why things are the way they are. I see things possible in other languages that I know are not as powerful as D, and I wonder why... If I don't understand enough to make a determination on my own, I simply ask.

In the first moment I wasn't sure if you were trolling. It seems so obvious and clear to me that the result of a calculation cannot change its type depending on the exact magnitudes of the operands that I interpreted ^^ as *g* or :p. "Ha! Ha! Limitation!"

Considering that you probably have more experience with higher-level languages, where the actual data type can be more or less hidden and dynamically changed, I can understand the confusion. The word powerful can mean different things to different people. Powerful can mean that you have a high-level foreach loop, but it can also mean that you are able to implement a foreach loop in low-level assembly.

A warning could be useful. I don't know about (3 ^^ 99) & 0xFFFFFFFF though, i.e. cases where you may be aware of the overflow, but want the modulo 2^32 anyway for some kind of hash function.

-- Marco
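The (3 ^^ 99) & 0xFFFFFFFF case can be sketched in Python (an illustration, not D): masking the exact result to 32 bits is the same reduction a wrapping int performs, and for a non-negative base it coincides with modular exponentiation.

```python
# A deliberate modulo-2^32 reduction after exponentiation, as in a hash mix.
full = 3 ** 99               # exact value, far wider than 32 bits
wrapped = full & 0xFFFFFFFF  # keep only the low 32 bits

# For a non-negative base this equals three-argument pow (modular exponentiation):
assert wrapped == pow(3, 99, 2 ** 32)
print(wrapped)
```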
Apr 27 2012