
digitalmars.D - bigfloat

reply Paul D. Anderson <paul.d.removethis.anderson comcast.andthis.net> writes:
Is there an active project to develop arbitrary-precision floating point
numbers for D?

I've got a little extra time at the moment and would like to contribute if I
can. I've done some work in floating point arithmetic and would be willing to
start/complete/add to/test/design/etc. such a project. What I hope NOT to do is
to re-implement someone else's perfectly adequate code.

If no such project exists I'd like to start one. If there are a bunch of
half-finished attempts (I have one of those), let's pool our efforts.

I know several contributors here have a strong interest and/or background in
numerics. I'd like to hear inputs regarding:

a) the merits (or lack) of having an arbitrary-precision floating point type

b) the features and functions that should be included.

Just to be clear -- I'm talking about a library addition here, not a change in
the language.

Paul
Apr 08 2009
next sibling parent reply Frank Torte <frankt123978 gmail.com> writes:
Paul D. Anderson Wrote:

 Is there an active project to develop arbitrary-precision floating point
numbers for D?
 
 I've got a little extra time at the moment and would like to contribute if I
can. I've done some work in floating point arithmetic and would be willing to
start/complete/add to/test/design/etc. such a project. What I hope NOT to do is
to re-implement someone else's perfectly adequate code.
 
 If no such project exists I'd like to start one. If there are a bunch of
half-finished attempts (I have one of those), let's pool our efforts.
 
 I know several contributors here have a strong interest and/or background in
numerics. I'd like to hear inputs regarding:
 
 a) the merits (or lack) of having an arbitrary-precision floating point type
 
 b) the features and functions that should be included.
 
 Just to be clear -- I'm talking about a library addition here, not a change in
the language.
 
 Paul
 
 

When you can use a number in D that is more than the number of atoms in the known universe why would you want a bigger number?
Apr 08 2009
next sibling parent Bill Baxter <wbaxter gmail.com> writes:
On Thu, Apr 9, 2009 at 2:54 AM, Frank Torte <frankt123978 gmail.com> wrote:
 Paul D. Anderson Wrote:

 Is there an active project to develop arbitrary-precision floating point numbers for D?

 When you can use a number in D that is more than the number of atoms in the known universe why would you want a bigger number?

Size isn't everything. Arbitrary _precision_ is the goal, not arbitrary bigness. Try this experiment:

float i = 1;
float j = 0;
do {
    j = i;
    i *= 2.0;
} while (j != j + 1.0f);  // stops once adding 1 no longer changes a float
writefln("Loop terminated at j=%s", j);

--bb
Apr 08 2009
prev sibling next sibling parent reply "Denis Koroskin" <2korden gmail.com> writes:
On Wed, 08 Apr 2009 21:54:02 +0400, Frank Torte <frankt123978 gmail.com> wrote:

 Paul D. Anderson Wrote:

 Is there an active project to develop arbitrary-precision floating  
 point numbers for D?

 I've got a little extra time at the moment and would like to contribute  
 if I can. I've done some work in floating point arithmetic and would be  
 willing to start/complete/add to/test/design/etc. such a project. What  
 I hope NOT to do is to re-implement someone else's perfectly adequate  
 code.

 If no such project exists I'd like to start one. If there are a bunch  
 of half-finished attempts (I have one of those), let's pool our efforts.

 I know several contributors here have a strong interest and/or  
 background in numerics. I'd like to hear inputs regarding:

 a) the merits (or lack) of having an arbitrary-precision floating point  
 type

 b) the features and functions that should be included.

 Just to be clear -- I'm talking about a library addition here, not a  
 change in the language.

 Paul

When you can use a number in D that is more than the number of atoms in the known universe why would you want a bigger number?

I'd like to calculate pi with up to 20000 valid digits. Or a square root of 2 with the same precision. How do I do that?
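This is exactly what an arbitrary-precision decimal type makes routine; a minimal sketch using Python's decimal module (the 50-digit precision is illustrative and nothing here is D-specific; 20000 digits works the same way, just slower):

```python
from decimal import Decimal, getcontext

# Ask the context for 50 significant digits of precision.
getcontext().prec = 50

# Square root of 2, correct to the context precision.
root2 = Decimal(2).sqrt()
print(root2)
```

Pi has no built-in function, but the same context-precision mechanism drives it: the decimal module's documentation includes a recipe that computes pi with an ordinary series.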
Apr 08 2009
parent Paul D. Anderson <paul.d.removethis.anderson comcast.andthis.net> writes:
Denis Koroskin Wrote:

 On Wed, 08 Apr 2009 21:54:02 +0400, Frank Torte <frankt123978 gmail.com> wrote:
 
 Paul D. Anderson Wrote:

 Is there an active project to develop arbitrary-precision floating  
 point numbers for D?

 I've got a little extra time at the moment and would like to contribute  
 if I can. I've done some work in floating point arithmetic and would be  
 willing to start/complete/add to/test/design/etc. such a project. What  
 I hope NOT to do is to re-implement someone else's perfectly adequate  
 code.

 If no such project exists I'd like to start one. If there are a bunch  
 of half-finished attempts (I have one of those), let's pool our efforts.

 I know several contributors here have a strong interest and/or  
 background in numerics. I'd like to hear inputs regarding:

 a) the merits (or lack) of having an arbitrary-precision floating point  
 type

 b) the features and functions that should be included.

 Just to be clear -- I'm talking about a library addition here, not a  
 change in the language.

 Paul

When you can use a number in D that is more than the number of atoms in the known universe why would you want a bigger number?

I'd like to calculate pi with up to 20000 valid digits. Or a square root of 2 with the same precision. How do I do that?

I've got some Java code that will do that -- not here with me at work. Of course, it uses Java's BigDecimal class -- that's what D doesn't seem to have.

Paul
Apr 08 2009
prev sibling next sibling parent reply superdan <super dan.org> writes:
Frank Torte Wrote:

 Paul D. Anderson Wrote:
 
 Is there an active project to develop arbitrary-precision floating point
numbers for D?
 
 I've got a little extra time at the moment and would like to contribute if I
can. I've done some work in floating point arithmetic and would be willing to
start/complete/add to/test/design/etc. such a project. What I hope NOT to do is
to re-implement someone else's perfectly adequate code.
 
 If no such project exists I'd like to start one. If there are a bunch of
half-finished attempts (I have one of those), let's pool our efforts.
 
 I know several contributors here have a strong interest and/or background in
numerics. I'd like to hear inputs regarding:
 
 a) the merits (or lack) of having an arbitrary-precision floating point type
 
 b) the features and functions that should be included.
 
 Just to be clear -- I'm talking about a library addition here, not a change in
the language.
 
 Paul
 
 

When you can use a number in D that is more than the number of atoms in the known universe why would you want a bigger number?

the fuckin' gov't debt.
Apr 08 2009
next sibling parent Piotrek <starpit tlen.pl> writes:
superdan wrote:
 Frank Torte Wrote:
 When you can use a number in D that is more than the number of atoms in the
known universe why would you want a bigger number?

the [/censorship/]* gov't debt.

Hehe. Nice one. They can put an arbitrarily big number into the financial system. It seems no one writes software in D for the government.

* Yes, I dare - why can't we stay on the right side ;P.

Cheers
Apr 08 2009
prev sibling parent reply Miles <_______ _______.____> writes:
superdan wrote:
 the fuckin' gov't debt.

Funny, but as a side note, currency calculation shouldn't be done with floats, but with *integers* (or fixed-precision numbers, which are ultimately equivalent to integers that represent some minimal fraction of the currency unit, usually cents).
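The cents trick is easy to demonstrate; a minimal Python sketch (the dime amounts are arbitrary, and nothing here is D- or library-specific):

```python
# Binary floats cannot represent 0.10 exactly, so repeated addition drifts:
total = 0.0
for _ in range(10):
    total += 0.10          # ten dimes
print(total == 1.0)        # False on IEEE doubles

# Integer cents stay exact:
total_cents = 0
for _ in range(10):
    total_cents += 10      # ten dimes, counted in cents
print(total_cents == 100)  # True
```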
Apr 08 2009
next sibling parent bearophile <bearophileHUGS lycos.com> writes:
Miles Wrote:
 Funny, but as a side note, currency calculation shouldn't be done with
 floats, but with *integers* (or fixed-precision numbers, that is
 ultimately equivalent to integers that represent some minimal fraction
 of the currency unit, usually cents).

Wide (or multi-precision) floating point numbers represented in base 10 are a good starting point. You want maximum safety for such operations. An example: http://docs.python.org/library/decimal.html

Bye,
bearophile
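The linked module shows the base-10 safety argument directly; a tiny sketch (the digits are arbitrary):

```python
from decimal import Decimal

# 0.1 and 0.2 have no exact binary representation, so their float sum is off:
print(0.1 + 0.2 == 0.3)                                   # False
# Base-10 arithmetic represents them exactly:
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True
```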
Apr 08 2009
prev sibling parent reply dsimcha <dsimcha yahoo.com> writes:
== Quote from Miles (_______ _______.____)'s article
 superdan wrote:
 the fuckin' gov't debt.

 Funny, but as a side note, currency calculation shouldn't be done with floats, but with *integers* (or fixed-precision numbers, that is ultimately equivalent to integers that represent some minimal fraction of the currency unit, usually cents).

Yes, but at the rate we're going, the only reasonable way to represent the government debt might soon be in log space.
Apr 08 2009
parent BCS <none anon.com> writes:
Hello dsimcha,

 == Quote from Miles (_______ _______.____)'s article
 superdan wrote:
 the fuckin' gov't debt.

 Funny, but as a side note, currency calculation shouldn't be done with floats, but with *integers* (or fixed-precision numbers, that is ultimately equivalent to integers that represent some minimal fraction of the currency unit, usually cents).

 Yes, but at the rate we're going, the only reasonable way to represent the government debt might soon be in log space.

A CS prof around here has a plot on his door. It's some African nation's exchange rate vs. the USD. At first glance it looks exponential. At second glance it looks super-exponential. At third glance you notice the Y-axis is log!
Apr 08 2009
prev sibling next sibling parent "Denis Koroskin" <2korden gmail.com> writes:
On Wed, 08 Apr 2009 22:54:13 +0400, Paul D. Anderson
<paul.d.removethis.anderson comcast.andthis.net> wrote:

 Denis Koroskin Wrote:

 I'd like to calculate pi with up to 20000 valid digits. Or a square root of 2 with the same precision. How do I do that?

 I've got some Java code that will do that -- not here with me at work. Of course, it uses Java's BigDecimal class -- that's what D doesn't seem to have.

 Paul

That was exactly my point - we need some kind of a Java BigDecimal class for such arithmetic in D. So my verdict: go for it!
Apr 08 2009
prev sibling parent Walter Bright <newshound1 digitalmars.com> writes:
Frank Torte wrote:
 When you can use a number in D that is more than the number of atoms
 in the known universe why would you want a bigger number?

There are a couple of reasons:

1. Roundoff error in an iterative calculation can easily and quickly overwhelm the answer. Keeping more bits in the intermediate results is an easy way to alleviate this problem.

2. When two floating point numbers are added, they are first scaled (i.e. shifted) until the exponents match. This means you lose 1 bit of precision for every bit the exponents don't match. Adding more bits of precision can compensate.
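Both effects are easy to see; a quick sketch in Python, whose float is an IEEE double with a 53-bit significand (that width is the only assumption here):

```python
# Reason 2: aligning exponents shifts the smaller operand's bits away.
big = 2.0 ** 53
print(big + 1.0 == big)   # True: the 1.0 is shifted entirely out of the significand

# Reason 1: repeated, that lost bit overwhelms an iterative calculation.
acc = 2.0 ** 53
for _ in range(1000):
    acc += 1.0            # each addition is silently a no-op
print(acc == 2.0 ** 53)   # True: a thousand additions changed nothing
```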
Apr 08 2009
prev sibling next sibling parent dsimcha <dsimcha yahoo.com> writes:
== Quote from Paul D. Anderson
(paul.d.removethis.anderson comcast.andthis.net)'s
article
 Is there an active project to develop arbitrary-precision floating point numbers for D?

 a) the merits (or lack) of having an arbitrary-precision floating point type
 b) the features and functions that should be included.

 Just to be clear -- I'm talking about a library addition here, not a change in the language.

 Paul

Absolutely, I would love having a BigFloat in D, especially if it were in Phobos and thus worked straight out of the box and had a good API (should be relatively easy to make a good API with all the new language features geared toward lib writers that have been added lately).

In addition to the obvious uses for BigFloat, here's a not so obvious one: You're writing some kind of quick and dirty numerics simulation that only has to run a few times. You know of a really simple, elegant algorithm for your problem, except that it's numerically unstable. You do not want to spend the time to implement a more complicated algorithm because it's just not worth it given the computer time-programmer time tradeoff in question. Solution: Use a BigFloat and be done with it.

(Flame guard up: No, I don't recommend this for any production numerics algorithms, but who the heck doesn't sometimes write bad code focused on ease of implementation if it's just a one-off thing?)
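A classic instance of this scenario is Muller's recurrence, which converges to 6 in exact arithmetic but is violently unstable in fixed precision. A Python sketch, with Python's decimal module standing in for a hypothetical D BigFloat (the 100-digit precision and 40 iterations are arbitrary choices):

```python
from decimal import Decimal, getcontext

def muller(n, num):
    """Iterate u[k+1] = 111 - 1130/u[k] + 3000/(u[k]*u[k-1]) from u0=2, u1=-4."""
    a, b = num(2), num(-4)
    for _ in range(n):
        a, b = b, num(111) - num(1130) / b + num(3000) / (a * b)
    return b

getcontext().prec = 100     # the "big float" stand-in
print(muller(40, float))    # doubles get dragged to the wrong fixed point, 100
print(muller(40, Decimal))  # high precision stays near the true limit, 6
```

The simple, elegant recurrence stays simple; the extra digits absorb the instability for as many iterations as you care to pay for.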
Apr 08 2009
prev sibling next sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
Paul D. Anderson wrote:
 b) the features and functions that should be included.

I'd say NaNs and unordered comparisons. In other words, it should support the same semantics as float, double and real do. If you've got the time and interest, adding all the functions in std.math would be great!
Apr 08 2009
next sibling parent reply Jarrett Billingsley <jarrett.billingsley gmail.com> writes:
On Wed, Apr 8, 2009 at 3:39 PM, Walter Bright
<newshound1 digitalmars.com> wrote:
 Paul D. Anderson wrote:
 b) the features and functions that should be included.

I'd say NaNs and unordered comparisons. In other words, it should support the same semantics as float, double and real do.

opUnorderedCmp?
Apr 08 2009
next sibling parent reply dsimcha <dsimcha yahoo.com> writes:
== Quote from Jarrett Billingsley (jarrett.billingsley gmail.com)'s article
 On Wed, Apr 8, 2009 at 3:39 PM, Walter Bright
 <newshound1 digitalmars.com> wrote:
 Paul D. Anderson wrote:
 b) the features and functions that should be included.

I'd say NaNs and unordered comparisons. In other words, it should support the same semantics as float, double and real do.


What's wrong with just returning some sentinel from opCmp? For example, define int.max as the sentinel for comparisons where NaNs are involved, etc. For opEquals, we don't have a problem: just return false.
Apr 08 2009
parent reply Frits van Bommel <fvbommel REMwOVExCAPSs.nl> writes:
dsimcha wrote:
 == Quote from Jarrett Billingsley (jarrett.billingsley gmail.com)'s article
 On Wed, Apr 8, 2009 at 3:39 PM, Walter Bright <newshound1 digitalmars.com> wrote:
 Paul D. Anderson wrote:
 b) the features and functions that should be included.

 I'd say NaNs and unordered comparisons. In other words, it should support the same semantics as float, double and real do.

 What's wrong with just returning some sentinel from opCmp? For example, define int.max as the sentinel for when comparing with nans involved, etc. For opEquals, we don't have a problem, just return false.

IIRC having an opCmp returning floats works, so you could return float.nan. (I've never used this, but I think it was mentioned in these groups)
Apr 08 2009
next sibling parent reply Frits van Bommel <fvbommel REMwOVExCAPSs.nl> writes:
Steven Schveighoffer wrote:
 On Wed, 08 Apr 2009 16:41:35 -0400, Frits van Bommel <fvbommel remwovexcapss.nl> wrote:

 IIRC having an opCmp returning floats works, so you could return float.nan. (I've never used this, but I think it was mentioned in these groups)

 It works if you want to just do x < y. However, try sorting an array of structs that return float for opCmp, and you'll get an error. This is because the compiler has special meaning for opCmp of a certain signature, which goes into the TypeInfo. I submitted a bug for those functions to be documented: http://d.puremagic.com/issues/show_bug.cgi?id=2482

Yet another reason to get rid of built-in .sort; a templated function would have no problem with this :).
Apr 08 2009
parent reply dsimcha <dsimcha yahoo.com> writes:
== Quote from Frits van Bommel (fvbommel REMwOVExCAPSs.nl)'s article
 Steven Schveighoffer wrote:

 Yet another reason to get rid of built-in .sort; a templated function would have no problem with this :).

Yes, and with my proposal (not exclusively mine, it's been suggested by plenty of other people) of importing some very basic, universally used functionality automatically in Object so it can "feel" builtin, getting rid of builtin sort wouldn't even make code *look* any different.
Apr 08 2009
next sibling parent bearophile <bearophileHUGS lycos.com> writes:
dsimcha:
 Yes, and with my proposal (not exclusively mine, it's been suggested by plenty of other people) of importing some very basic, universally used functionality automatically in Object so it can "feel" builtin, getting rid of builtin sort wouldn't even make code *look* any different.

Let's see: max, min, sort, abs, pow (but ** is better), and maybe a few more?

Bye,
bearophile
Apr 08 2009
prev sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
dsimcha wrote:
 == Quote from Frits van Bommel (fvbommel REMwOVExCAPSs.nl)'s article
 Steven Schveighoffer wrote:

 Yet another reason to get rid of built-in .sort; a templated function would have no problem with this :).

 Yes, and with my proposal (not exclusively mine, it's been suggested by plenty of other people) of importing some very basic, universally used functionality automatically in Object so it can "feel" builtin, getting rid of builtin sort wouldn't even make code *look* any different.

Great point. My hope is that one day I'll manage to convince Walter and Sean to simply replace V[K] types with AssocArray!(K, V) and then make AssocArray a regular template inside object.d. The current implementation of associative arrays looks... brutal.

Andrei
Apr 08 2009
next sibling parent Walter Bright <newshound1 digitalmars.com> writes:
Andrei Alexandrescu wrote:
 The current implementation of associative arrays looks... brutal.

Brutal is cool.
Apr 08 2009
prev sibling next sibling parent bearophile <bearophileHUGS lycos.com> writes:
Andrei Alexandrescu:
The current implementation of associative arrays looks... brutal.<

Do you mean they are currently badly (= in a not precise enough way) managed by the GC?

Bye,
bearophile
Apr 08 2009
prev sibling next sibling parent reply dsimcha <dsimcha yahoo.com> writes:
== Quote from Andrei Alexandrescu (SeeWebsiteForEmail erdani.org)'s article
 Great point. My hope is that one day I'll manage to convince Walter and Sean to simply replace V[K] types with AssocArray!(K, V) and then make AssocArray a regular template inside object.d. The current implementation of associative arrays looks... brutal.

Yes, but then we lose niceties like AA literals and good declaration syntax. What would be gained by moving stuff into Object compared to improving the implementation within the existing paradigm? On the other hand, the current implementation *could* use some improvement. (In a few minutes when it's written, see post on AA implementation. I've been meaning to post this for a while.)
Apr 08 2009
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
dsimcha wrote:
 Yes, but then we lose niceties like AA literals and good declaration syntax.

Sorry, I meant to include literals too. What I'm saying is that built-in AAs should essentially be a very thin wrapper over a genuine D type. That means only the syntax of the type and the syntax of literals should be built-in - everything else should use the exact same amenities as any user-defined type.

 What would be gained by moving stuff into Object compared to improving the implementation within the existing paradigm? On the other hand, the current implementation *could* use some improvement. (In a few minutes when it's written, see post on AA implementation. I've been meaning to post this for a while.)

Current AAs look awful. They're all casts and bear claws and cave paintings. I tried two times to get into them, and abandoned them for lack of time. (I wanted to add an iterator for keys. It's virtually impossible.) Also, they don't use the normal operator syntax etc. The compiler elaborately transforms expressions into calls to AA functions. Also, AAs use dynamic type info, which makes them inherently slower. Oh, and iteration uses opApplyImSlowLikeMolassesUphillOnAColdDay.

To me it is painfully obvious that there should be as little magic as possible for elaborate types. Literals and simple type syntax are useful. Keep those, but stop there and let actual code take off from there. It's just the right way; again, to me that's so obvious I don't know how to explain.

Andrei
Apr 08 2009
parent reply dsimcha <dsimcha yahoo.com> writes:
== Quote from Andrei Alexandrescu (SeeWebsiteForEmail erdani.org)'s article
 Sorry, I meant to include literals too. What I'm saying is that built-in AAs should essentially be a very thin wrapper over a genuine D type. That means only the syntax of the type and the syntax of literals should be built-in - everything else should use the exact same amenities as any user-defined type.

Well, now that I understand your proposal a little better, it makes sense. I had wondered why the current AA implementation uses RTTI instead of templates. Even better would be if only the default implementation were in Object, and a user could somehow override which implementation of AA is given the blessing of pretty syntax by some pragma or export alias or something, as long as the implementation conforms to some specified compile-time interface.
Apr 08 2009
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
dsimcha wrote:
 Well, now that I understand your proposal a little better, it makes sense. I had wondered why the current AA implementation uses RTTI instead of templates. Even better would be if only the default implementation were in Object, and a user could somehow override which implementation of AA is given the blessing of pretty syntax by some pragma or export alias or something, as long as the implementation conforms to some specified compile-time interface.

Great! For now, I'd be happy if at least the user could hack their import path to include their own object.d before the stock object.d. Then people can use straight D to implement the AssocArray they prefer. Further improvements of the scheme will then become within reach!

Andrei
Apr 08 2009
parent reply Daniel Keep <daniel.keep.lists gmail.com> writes:
Andrei Alexandrescu wrote:
 dsimcha wrote:
 Well, now that I understand your proposal a little better, it makes
 sense.  I had
 wondered why the current AA implementation uses RTTI instead of
 templates.  Even
 better would be if only the default implementation were in Object, and
 a user
 could somehow override which implementation of AA is given the
 blessing of pretty
 syntax by some pragma or export alias or something, as long as the
 implementation
 conforms to some specified compile-time interface.

Great! For now, I'd be happy if at least the user could hack their import path to include their own object.d before the stock object.d. Then people can use straight D to implement the AssocArray they prefer. Further improvements of the scheme will then become within reach! Andrei

dmd -object=myobject.d stuff.d

That would require the user to duplicate everything in object, which is a little messy. Maybe it would be a good idea to break object itself into a bunch of public imports to core.internal.* modules, then allow this:

dmd -sub=core.internal.aa=myaa stuff.d

Of course, it's probably simpler still to have this:

dmd -aatype=myaa.AAType stuff.d

-- Daniel
Apr 08 2009
parent reply Benji Smith <dlanguage benjismith.net> writes:
Daniel Keep wrote:
 dmd -object=myobject.d stuff.d

 That would require the user to duplicate everything in object, which is a little messy. Maybe it would be a good idea to break object itself into a bunch of public imports to core.internal.* modules, then allow this:

 dmd -sub=core.internal.aa=myaa stuff.d

 Of course, it's probably simpler still to have this:

 dmd -aatype=myaa.AAType stuff.d

Instead, what if the literal syntax was amended to take an optional type name, like this:

    // Defaults to using built-in associative array type
    auto assocArray = [
       "hello" : "world"
    ];

    // Uses my own custom type.
    auto hashtable = MyHashTableType!(string, string) [
       "hello" : "world"
    ];

You could accomplish that pretty easily, as long as the custom type had a no-arg constructor and a function with the signature:

    void add(K key, V val)

--benji
Apr 12 2009
next sibling parent reply bearophile <bearophileHUGS lycos.com> writes:
Benji Smith:
     // Defaults to using built-in associative array type
     auto assocArray = [
        "hello" : "world"
     ];

     // Uses my own custom type.
     auto hashtable = MyHashTableType!(string, string) [
        "hello" : "world"
     ];

In the second case the type inference of the compiler may find the types from the AA literal itself:

    auto hashtable = MyHashTableType [ "hello" : "world" ];

Bye,
bearophile
Apr 12 2009
parent Benji Smith <dlanguage benjismith.net> writes:
bearophile wrote:
 Benji Smith:
     // Defaults to using built-in associative array type
     auto assocArray = [
        "hello" : "world"
     ];

     // Uses my own custom type.
     auto hashtable = MyHashTableType!(string, string) [
        "hello" : "world"
     ];

 In the second case the type inference of the compiler may find the types from the AA literal itself:

    auto hashtable = MyHashTableType [ "hello" : "world" ];

If that were the case, I'd want the compiler to scan *all* the key/value pairs for instances of derived types (rather than just being based on the first K/V pair, like is currently the case with other array literals). For example (using Tango classes, where HttpGet and HttpPost are both subclasses of HttpClient):

    // Type is: MyHashTableType!(string, HttpClient)
    auto hashtable = MyHashTableType [
        "get" : new HttpGet(),
        "post" : new HttpPost()
    ];
Apr 12 2009
prev sibling parent grauzone <none example.net> writes:
 Instead, what if the literal syntax was amended to take an optional type name, like this:

    // Defaults to using built-in associative array type
    auto assocArray = [
       "hello" : "world"
    ];

    // Uses my own custom type.
    auto hashtable = MyHashTableType!(string, string) [
       "hello" : "world"
    ];

 You could accomplish that pretty easily, as long as the custom type had a no-arg constructor and a function with the signature:

    void add(K key, V val)

What about this: an associative array literal would have the type (Key, Value)[] (an array of Key-Value tuples), and you'd use opAssign (or the new implicit casting operators from D2.0, opImplicitCastFrom or whatever it was) to convert it to your hash table type.

    MyHashTableType hashtable = ["hello" : "world"];

expands to

    (char[], char[])[] tmp = [("hello", "world")];
    MyHashTableType hashtable = tmp;

which expands to

    (char[], char[])[] tmp = [("hello", "world")];
    MyHashTableType!(char[], char[]) hashtable; // magical type inference
    hashtable.opAssign([("hello", "world")]);

Anyway, I'm looking forward to the day the D compiler is merely a CTFE interpreter, and the actual code generation is implemented as a D library and executed as normal user code during compile time.
Apr 12 2009
prev sibling parent reply bearophile <bearophileHUGS lycos.com> writes:
Andrei Alexandrescu:
 Great point. My hope is that one day I'll manage to convince Walter and 
 Sean to simply replace V[K] types with AssocArray!(K, V) and then make 
 AssocArray a regular template inside object.d. The current 
 implementation of associative arrays looks... brutal.

Maybe some compromise can be found. I think the current syntax is good enough. A possible idea is to translate the current AA code to D and create a D module (that object imports, if you want) that contains something like that AssocArray!(K, V).

Some very useful things are missing in the current AAs:
- opEquals among AAs, very useful in unit tests, to assert that functions return a correct AA.
- Empty AA literal (or expression; I currently use AA!(T, S)).
- Empty AAs can be false.
- A way to clear an AA, like aa.clear or aa.clear();
- A way to perform a shallow copy, like aa.dup
- Possibly lazy views of keys, values and key-value pairs, as in Java and Python 3.
- More precise management by the GC.

Then the compiler can map the current syntax to the AssocArray!(K, V) template struct/class and its functionality. I don't know if this can be done. It's also a way to reduce the C code and translate some of it to D.

Bye,
bearophile
Apr 08 2009
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
bearophile wrote:
 Andrei Alexandrescu:
 Great point. My hope is that one day I'll manage to convince Walter and 
 Sean to simply replace V[K] types with AssocArray!(K, V) and then make 
 AssocArray a regular template inside object.d. The current 
 implementation of associative arrays looks... brutal.

Maybe some compromise can be found. I think the current syntax is good enough.

Yah. I see I created confusion - the surface syntax wouldn't change at all. Only the way the compiler translates it. In essence, aside from literals and type names, all expressions involving hashes should be handled like regular expressions.
 A possible idea is to translate the current AA code to D and create a D module
(that object imports, if you want) that contains something like that
AssocArray!(K, V).
 Some very useful things are missing in the current AAs:
 - OpEquals among AAs, very useful in unit tests, to assert that functions
return a correct AA.
 - Empty AA literal (or expression, I currently use AA!(T, S)).
 - Empty AAs can be false.
 - A way to clear an AA, like aa.clear or aa.clear();
 - A way to perform a shallow copy, like aa.dup
 - Possibly lazy view of keys, values and key-value pairs, as in Java and
Python3.
 - A more precise management by the GC.

These are great ideas. I'd be glad to implement them but currently my hands are tied by the way things are handled today.
 Then the compiler can map the current syntax to the AssocArray!(K, V) template
struct/class and its functionality. I don't know if this can be done.
 It's also a way to reduce the C code and translate some of it to D.

Nonono. What I'm saying is very simple: translate

   V[K]

into

   AssocArray!(K, V)

and

   [ k1:v1, k2:v2, ..., kn:vn ]

into

   AssocArray!(typeof(k1), typeof(v1))(k1, v1, k2, v2, ..., kn, vn)

and do exactly nothing else in particular about hashes.

Andrei
Apr 08 2009
next sibling parent reply bearophile <bearophileHUGS lycos.com> writes:
Andrei Alexandrescu:

To me it is painfully obvious that there should be as little magic as possible
for elaborate types. Literals and simple type syntax are useful. Keep those,
but stop there and let actual code take off from there.<

So far I have expressed similar ideas three times, so I agree :-)
These are great ideas. I'd be glad to implement them but currently my hands are
tied by the way things are handled today.<

We are talking about a D module here, so you can write it and add it to the std lib. It can be used even without compiler support, and compared to the built-in ones it will probably be an improvement anyway, even without the syntax support. So go for it :-)

Regarding the name of such a data structure, HashMap sounds good; it's a very standard name. Once that module is done and works well enough, Walter can (eventually) change his mind, remove the current AAs and add the regex thing you have explained to me, for the nice syntax support.

Later a similar strategy may even be used for a HashSet data structure and its syntax. Implementing a set with a hashmap is easy, but a true set data structure supports other operations, like intersection, union, difference, etc., that a HashMap usually doesn't need to implement. My experience (with Python) shows me that such set operations are useful to write short and readable high-level code.

-------------------

dsimcha:
Even better would be if only the default implementation were in Object, and a
user could somehow override which implementation of AA is given the blessing of
pretty syntax by some pragma or export alias or something, as long as the
implementation conforms to some specified compile-time interface.<

This is another step. I don't know how easy or difficult it is to implement. As you may guess, this is also a step toward a more modern pluggable compiler/language. (There are experimental Scheme compilers designed like this.)
Also, does anyone besides me have a use for an AA implementation that is
designed to be used with a second stack-based allocator (TempAlloc/SuperStack
as discussed here previously)?<

The default AA has to be flexible, even if it's not always top performance. Your implementation may be useful in a std lib, and not as built-in. From the code comments:
Allocate a StackHash with an array size of nums.length / 2. This is the size of
the array used internally, and is fixed.<

Do you use the alloca() function for this?

----------------

Andrei Alexandrescu:

dsimcha:
 Even better would be if only the default implementation were in Object, and a
user could somehow override which implementation of AA is given the blessing of
pretty syntax by some pragma or export alias or something, as long as the
implementation conforms to some specified compile-time interface.<<

 Further improvements of the scheme will then become within reach!<

At the moment I don't know what syntax/semantics could be used to implement dsimcha's idea.

Thank you, bye,
bearophile
Apr 08 2009
parent reply dsimcha <dsimcha yahoo.com> writes:
== Quote from bearophile (bearophileHUGS lycos.com)'s article
 Andrei Alexandrescu:
 dsimcha:
 Also, does anyone besides me have a use for an AA implementation that is
 designed to be used with a second stack-based allocator (TempAlloc/SuperStack as
 discussed here previously)?<
 The default AA has to be flexible, even if it's not always top performance. Your
 implementation may be useful in a std lib, and not as built-in.
Of course. I had not meant to suggest that this be the standard implementation. It's a performance hack, but a very useful one IMHO given how often an algorithm needs a really fast AA implementation that does not escape the function's scope.
 From the code comments:
 Allocate a StackHash with an array size of nums.length / 2. This is the size of
 the array used internally, and is fixed.<


 Do you use the alloca() function for this?

No. It wouldn't work well w/ alloca(): see stack overflows, and the fact that alloca-allocated memory can't escape a function scope, meaning that if the array needed to allocate more memory on a call to opIndexAssign(), there would be no way to do so in the caller's stack frame.

It uses TempAlloc, which was an idea proposed by Andrei under the name SuperStack, and later implemented by me. TempAlloc basically grabs big chunks of memory from the GC and manages them in last-in, first-out order like a stack. Also, like a stack, it is thread-local, and therefore the only time there is a possibility for contention is when a new chunk needs to be allocated. On the other hand, if you use too much memory, it allocates another chunk from the heap instead of overflowing and crashing your program. Freeing memory is explicit, though with mixin(newFrame) you can tell TempAlloc to free all TempAlloc memory allocated after that point at the end of the scope.
Apr 08 2009
parent reply superdan <super dan.org> writes:
then lets go all the way. make slices normal types in object.d too. compiler
translate t[] to slice!(t) & [ x, y, z ] to slice!(typeof(x))(x, y, z). then u
write slice in normal d & put it in object.d. fuck the middleman.

i grok why ints and floats must be in the language. optimization shit n binary
comp n stuff. but slice n hash work user defined just fine.
Apr 08 2009
parent dsimcha <dsimcha yahoo.com> writes:
== Quote from superdan (super dan.org)'s article
 then lets go all the way. make slices normal types in object.d too. compiler
 translate t[] to slice!(t) & [ x, y, z ] to slice!(typeof(x))(x, y, z). then u
 write slice in normal d & put it in object.d. fuck the middleman.

 i grok why ints and floats must be in the language. optimization shit n binary
 comp n stuff. but slice n hash work user defined just fine.

But then you would lose some of the benefits of the builtins with respect to templates and CTFE. Speaking of which, does anyone actually use AAs in templates and CTFE? I have tried once or twice, and it actually works. If we put AAs in object.d, what would be done about using them in CTFE, given that the implementation would likely not be CTFE-compatible?
Apr 08 2009
prev sibling parent reply bearophile <bearophileHUGS lycos.com> writes:
Andrei Alexandrescu:
 These are great ideas. I'd be glad to implement them but currently my hands are
 tied by the way things are handled today.

Immutable associative arrays may even define a toHash (computed only once, the first time it's needed), so they can be used as keys for other AAs/sets too. Bye, bearophile
Apr 08 2009
parent reply Rainer Deyke <rainerd eldwood.com> writes:
bearophile wrote:
 Immutable associative arrays may even define a toHash (computed only
 once, the first time it's needed), so they can be used as keys for
 other AAs/sets too.

How would this work?

Hash value calculated on conversion to immutable? Messy special case, plus the hash value may never be needed.

Hash value calculated on first access and stored in the AA? Can't do, the AA is immutable.

Hash value calculated on first access and stored in a global table? The global table would prevent the AA from being garbage collected.

I would like to see this happen, but I don't think D allows it.

--
Rainer Deyke - rainerd eldwood.com
Apr 08 2009
parent reply bearophile <bearophileHUGS lycos.com> writes:
Rainer Deyke:
 How would this work?<
 Hash value calculated on first access and stored in the AA?  Can't do,
 AA is immutable.

You are right, there's a problem here, even once you have added an ".idup" to AAs.

A hash value isn't data, it's metadata, so you may have lazily computed mutable metadata of immutable data. Once computed, the hash value essentially becomes immutable. You can think about a situation where two threads want to use such an immutable AA (iAA). There's no need to copy such data, because it's immutable. Both may try to write the mutable hash value, but it's the same value, so no controls are necessary.

If you have a pure function you may want to give it an array of such iAAs, and then the pure function may put such iAAs into a set/AA inside to compute something. Are pure functions allowed to take as arguments (beside the data of the iAAs) the immutable future result of a deterministic pure computation performed on immutable data? I think such things are named "immutable futures". Essentially it's a form of lazy & pure computation, and it's done often, for example, in Haskell and Scheme.

It's a small extension of the framework of immutability, and it may lead to many uses. For example, in Haskell all data is immutable, but not everything is computed up-front. The compiler is allowed to reason about immutable data that isn't computed yet. This for example allows managing an infinite stream of (immutable, but lazily computed) prime numbers. So in Haskell even a generator function like xprimes() of my dlibs can be thought of as immutable, despite not computing all the prime numbers at the start.

Such lazily computed immutable values become very useful in a language/compiler that has native support for deep immutable data and pure functions. I don't know if in D2.x you can already have a function with a lazy immutable input argument:

pure int foo(immutable lazy x) {...}

The hash value can be thought of as the result of one such pure immutable lazy function :-)

Bye,
bearophile
Apr 09 2009
parent Rainer Deyke <rainerd eldwood.com> writes:
bearophile wrote:
 You are right, there's a problem here, even once you have added an
 ".idup" to AAs. A hash value isn't data, it's metadata, so you may
 have a lazily computed mutable metadata of immutable data. Once
 computed the hash value essentially becomes an immutable.

This sounds like the difference between "logical const" and "physical const". I use the "logical const" features of C++ (along with the 'mutable' keyword) in C++ all the time for just this purpose. For better or worse, D has gone the "physical const" route. -- Rainer Deyke - rainerd eldwood.com
Apr 09 2009
prev sibling parent Jarrett Billingsley <jarrett.billingsley gmail.com> writes:
On Wed, Apr 8, 2009 at 11:19 PM, Rainer Deyke <rainerd eldwood.com> wrote:

 Hash value calculated on first access and stored in a global table? The
 global table would prevent the AA from being garbage collected.

Weak references.
Apr 08 2009
prev sibling parent Walter Bright <newshound1 digitalmars.com> writes:
Jarrett Billingsley wrote:
 opUnorderedCmp?

Yes, that needs to be added.
Apr 08 2009
prev sibling next sibling parent Jarrett Billingsley <jarrett.billingsley gmail.com> writes:
On Wed, Apr 8, 2009 at 3:51 PM, dsimcha <dsimcha yahoo.com> wrote:
 == Quote from Jarrett Billingsley (jarrett.billingsley gmail.com)'s article
 On Wed, Apr 8, 2009 at 3:39 PM, Walter Bright
 <newshound1 digitalmars.com> wrote:
 Paul D. Anderson wrote:
 b) the features and functions that should be included.

 I'd say NaNs and unordered comparisons. In other words, it should support
 the same semantics as float, double and real do.

 What's wrong with just returning some sentinel from opCmp? For example, define
 int.max as the sentinel for when comparing with nans involved, etc. For opEquals,
 we don't have a problem, just return false.

Oh, hm, I wasn't aware that the NCEG operators actually called opCmp.
Apr 08 2009
prev sibling next sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Walter Bright wrote:
 Paul D. Anderson wrote:
 b) the features and functions that should be included.

I'd say NaNs and unordered comparisons. In other words, it should support the same semantics as float, double and real do. If you've got the time and interest, adding all the functions in std.math would be great!

(n + 1)thed. Andrei
Apr 08 2009
prev sibling next sibling parent reply Paul D. Anderson <paul.d.removethis.anderson comcast.andthis.net> writes:
Walter Bright Wrote:

 Paul D. Anderson wrote:
 b) the features and functions that should be included.

I'd say NaNs and unordered comparisons. In other words, it should support the same semantics as float, double and real do. If you've got the time and interest, adding all the functions in std.math would be great!

I'm not sure I can sign up for ALL of std.math. I'm sure I'll need help. I can do roots, powers and transcendental functions, though. Maybe not very efficiently (power series). (If very high precision numbers are questionable, how valuable are high precision sine and cosine??) Paul
Apr 08 2009
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Paul D. Anderson wrote:
 Walter Bright Wrote:
 
 Paul D. Anderson wrote:
 b) the features and functions that should be included.

I'd say NaNs and unordered comparisons. In other words, it should support the same semantics as float, double and real do. If you've got the time and interest, adding all the functions in std.math would be great!

I'm not sure I can sign up for ALL of std.math. I'm sure I'll need help. I can do roots, powers and transcendental functions, though. Maybe not very efficiently (power series). (If very high precision numbers are questionable, how valuable are high precision sine and cosine??) Paul

Would be great if we could enlist Don's help. Don? :o) In only slightly related news, the "new, new" Phobos2 offers custom floating-point numbers, see http://erdani.dreamhosters.com/d/web/phobos/std_numeric.html They aren't infinite precision (which makes their utility orthogonal on bigfloat's), but they allow fine tweaking of floating point storage. Want to cram floats in 16 or 24 bits? Care about numbers in [0, 1) at maximum precision? Give CustomFloat a shot. Andrei
Apr 08 2009
next sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Bill Baxter wrote:
 On Thu, Apr 9, 2009 at 5:46 AM, Andrei Alexandrescu
 <SeeWebsiteForEmail erdani.org> wrote:
 Paul D. Anderson wrote:
 Walter Bright Wrote:

 Paul D. Anderson wrote:
 b) the features and functions that should be included.

I'd say NaNs and unordered comparisons. In other words, it should support the same semantics as float, double and real do. If you've got the time and interest, adding all the functions in std.math would be great!

I'm not sure I can sign up for ALL of std.math. I'm sure I'll need help. I can do roots, powers and transcendental functions, though. Maybe not very efficiently (power series). (If very high precision numbers are questionable, how valuable are high precision sine and cosine??) Paul

In only slightly related news, the "new, new" Phobos2 offers custom floating-point numbers, see http://erdani.dreamhosters.com/d/web/phobos/std_numeric.html They aren't infinite precision (which makes their utility orthogonal on bigfloat's), but they allow fine tweaking of floating point storage. Want to cram floats in 16 or 24 bits?

Awesome. So we can use it to create the IEEE Halfs that are used by graphics cards?

It was a big motivator. The example in the dox does exactly that: alias CustomFloat!(1, 5, 10) HalfFloat;
 Care about numbers in [0, 1) at maximum
 precision? Give CustomFloat a shot.

By this do you mean you can get a fixed point format? (i'm guessing so, just by setting exp bits to zero.) If so, then that's very cool too.

Interesting. I haven't tested exp bits to zero, but that should be definitely workable. What I meant was still floating point, but with only negative (or zero) powers. You can do that because you have control over the bias. One thing - alias this should greatly simplify using custom floats. It's not in there yet. Andrei
Apr 08 2009
prev sibling parent reply grauzone <none example.net> writes:
Andrei Alexandrescu wrote:
 Paul D. Anderson wrote:
 Walter Bright Wrote:

 Paul D. Anderson wrote:
 b) the features and functions that should be included.

support the same semantics as float, double and real do. If you've got the time and interest, adding all the functions in std.math would be great!

I'm not sure I can sign up for ALL of std.math. I'm sure I'll need help. I can do roots, powers and transcendental functions, though. Maybe not very efficiently (power series). (If very high precision numbers are questionable, how valuable are high precision sine and cosine??) Paul

Would be great if we could enlist Don's help. Don? :o) In only slightly related news, the "new, new" Phobos2 offers custom floating-point numbers, see http://erdani.dreamhosters.com/d/web/phobos/std_numeric.html They aren't infinite precision (which makes their utility orthogonal on bigfloat's), but they allow fine tweaking of floating point storage. Want to cram floats in 16 or 24 bits? Care about numbers in [0, 1) at maximum precision? Give CustomFloat a shot.

Sorry for the uninformed question, but do these types with with std.math?
 
 Andrei

Apr 08 2009
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
grauzone wrote:
 Andrei Alexandrescu wrote:
 Paul D. Anderson wrote:
 Walter Bright Wrote:

 Paul D. Anderson wrote:
 b) the features and functions that should be included.

I'd say NaNs and unordered comparisons. In other words, it should support the same semantics as float, double and real do. If you've got the time and interest, adding all the functions in std.math would be great!

I'm not sure I can sign up for ALL of std.math. I'm sure I'll need help. I can do roots, powers and transcendental functions, though. Maybe not very efficiently (power series). (If very high precision numbers are questionable, how valuable are high precision sine and cosine??) Paul

Would be great if we could enlist Don's help. Don? :o) In only slightly related news, the "new, new" Phobos2 offers custom floating-point numbers, see http://erdani.dreamhosters.com/d/web/phobos/std_numeric.html They aren't infinite precision (which makes their utility orthogonal on bigfloat's), but they allow fine tweaking of floating point storage. Want to cram floats in 16 or 24 bits? Care about numbers in [0, 1) at maximum precision? Give CustomFloat a shot.

Sorry for the uninformed question, but do these types with with std.math?

If you meant to ask whether they work with std.math, yes, but only in the sense that they are convertible from and to the built-in floating point types. I've been coquetting with the idea of implementing some operations natively, but there's so much hardware dedicated to IEEE formats, it's faster to convert -> use -> convert back, than to emulate in software. Andrei
Apr 08 2009
parent Frits van Bommel <fvbommel REMwOVExCAPSs.nl> writes:
Andrei Alexandrescu wrote:
 grauzone wrote:
 Sorry for the uninformed question, but do these types with with std.math?

If you meant to ask whether they work with std.math, yes, but only in the sense that they are convertible from and to the built-in floating point types. I've been coquetting with the idea of implementing some operations natively, but there's so much hardware dedicated to IEEE formats, it's faster to convert -> use -> convert back, than to emulate in software.

That won't give correct results if you want *more* precision than native types allow. For example, imagine a cross-compiler from $PLATFORM to X86 implemented in D; it would want to do constant-folding on 80-bit floats but $PLATFORM likely doesn't support anything but float & double. I could imagine a similar reason for wanting appropriate rounding behavior, even for smaller types not natively supported.
Apr 08 2009
prev sibling parent Walter Bright <newshound1 digitalmars.com> writes:
Paul D. Anderson wrote:
 I'm not sure I can sign up for ALL of std.math. I'm sure I'll need
 help.  I can do roots, powers and transcendental functions, though.
 Maybe not very efficiently (power series).

It's not necessary to come out of the starting gate with them all implemented to arbitrary precision. A workable first version can just call the std.math real versions, and note in the documentation as a bug that the precision is limited to real precision.

The various constants, like std.math.E and PI, should also be there. They can be lazily evaluated.

I also suggest that the type be a template parameterized with the exponent bits and mantissa bits. float, double and real would then be specializations of them.
 (If very high precision numbers are questionable, how valuable are
 high precision sine and cosine??)

If one accepts the utility of high precision numbers, then one must also accept the utility of high precision math functions!
Apr 08 2009
prev sibling next sibling parent Bill Baxter <wbaxter gmail.com> writes:
On Thu, Apr 9, 2009 at 5:46 AM, Andrei Alexandrescu
<SeeWebsiteForEmail erdani.org> wrote:
 Paul D. Anderson wrote:
 Walter Bright Wrote:

 Paul D. Anderson wrote:
 b) the features and functions that should be included.

 I'd say NaNs and unordered comparisons. In other words, it should support
 the same semantics as float, double and real do.

 If you've got the time and interest, adding all the functions in std.math
 would be great!

 I'm not sure I can sign up for ALL of std.math. I'm sure I'll need help.
 I can do roots, powers and transcendental functions, though. Maybe not
 very efficiently (power series).

 (If very high precision numbers are questionable, how valuable are high
 precision sine and cosine??)

 Paul

 Would be great if we could enlist Don's help. Don? :o)

 In only slightly related news, the "new, new" Phobos2 offers custom
 floating-point numbers, see
 http://erdani.dreamhosters.com/d/web/phobos/std_numeric.html
 They aren't infinite precision (which makes their utility orthogonal on
 bigfloat's), but they allow fine tweaking of floating point storage. Want to
 cram floats in 16 or 24 bits?

Awesome. So we can use it to create the IEEE Halfs that are used by graphics cards?
 Care about numbers in [0, 1) at maximum
 precision? Give CustomFloat a shot.

By this do you mean you can get a fixed point format? (i'm guessing so, just by setting exp bits to zero.) If so, then that's very cool too. --bb
Apr 08 2009
prev sibling next sibling parent "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Wed, 08 Apr 2009 16:41:35 -0400, Frits van Bommel  
<fvbommel remwovexcapss.nl> wrote:

 dsimcha wrote:
 == Quote from Jarrett Billingsley (jarrett.billingsley gmail.com)'s  
 article
 On Wed, Apr 8, 2009 at 3:39 PM, Walter Bright
 <newshound1 digitalmars.com> wrote:
 Paul D. Anderson wrote:
 b) the features and functions that should be included.

support the same semantics as float, double and real do.


What's wrong with just returning some sentinel from opCmp? For
example, define int.max as the sentinel for when comparing with nans
involved, etc. For opEquals, we don't have a problem, just return false.

IIRC having an opCmp returning floats works, so you could return float.nan. (I've never used this, but I think it was mentioned in these groups)

It works if you want to just do x < y. However, try sorting an array of structs that return float for opCmp, and you'll get an error. This is because the compiler has special meaning for opCmp of a certain signature, which goes into the TypeInfo. I submitted a bug for those functions to be documented: http://d.puremagic.com/issues/show_bug.cgi?id=2482 -Steve
Apr 08 2009
prev sibling parent "Denis Koroskin" <2korden gmail.com> writes:
On Fri, 10 Apr 2009 03:44:06 +0400, Rainer Deyke <rainerd eldwood.com> wrote:

 bearophile wrote:
 You are right, there's a problem here, even once you have added an
 ".idup" to AAs. A hash value isn't data, it's metadata, so you may
 have a lazily computed mutable metadata of immutable data. Once
 computed the hash value essentially becomes an immutable.

This sounds like the difference between "logical const" and "physical const". I use the "logical const" features of C++ (along with the 'mutable' keyword) in C++ all the time for just this purpose. For better or worse, D has gone the "physical const" route.

One can still write a Mutable!(T) template and get logical const :)
Apr 10 2009
prev sibling parent reply Don <nospam nospam.com> writes:
Paul D. Anderson wrote:
 Is there an active project to develop arbitrary-precision floating point
numbers for D?

 
 I've got a little extra time at the moment and would like to contribute if I
can. I've done some work in floating point arithmetic and would be willing to
start/complete/add to/test/design/etc. such a project. What I hope NOT to do is
to re-implement someone else's perfectly adequate code.

That would be fantastic.
 If no such project exists I'd like to start one. If there are a bunch of
half-finished attempts (I have one of those), let's pool our efforts.

I began the BigInt project in Tango in order to be able to create BigFloat. So far, I've done very little on BigFloat itself -- I've got side-tracked on other things. It would be awesome if you could do some floating-point work.

Probably, you'll need some more primitive operations than are currently provided. (Key BigInt primitives which are currently missing are sqrt, pow, and gcd; you probably also need more access to the internals.)

The Tango BigInt will become part of Phobos2 sooner or later -- actually it's almost entirely a stand-alone project; the only thing directly linking it to Tango is the module names, so it doesn't really matter if you develop in Tango or Phobos.

Note that my BigInt asm primitives are in most cases slightly faster than the ones provided by GMP <g>.
 I know several contributors here have a strong interest and/or background in
numerics. I'd like to hear inputs regarding:
 
 a) the merits (or lack) of having an arbitrary-precision floating point type
 
 b) the features and functions that should be included.

Just begin with basic arithmetic.
 
 Just to be clear -- I'm talking about a library addition here, not a change in
the language.
 
 Paul
 
 

Apr 09 2009
parent reply dennis luehring <dl.soluz gmx.net> writes:
On 09.04.2009 09:18, Don wrote:
 Note that my Bigint asm primitives are in most cases slightly faster
 than the ones provided by GMP <g>.

do you think that a BLADE-like method here would increase the speed even more?...
Apr 09 2009
parent reply Don <nospam nospam.com> writes:
dennis luehring wrote:
 On 09.04.2009 09:18, Don wrote:
 Note that my Bigint asm primitives are in most cases slightly faster
 than the ones provided by GMP <g>.

do you think that a BLADE-like method here would increase the speed even more?...

No. I've got it very close to the machine limits.

On Intel machines, in which adc and sbc are ridiculously slow and have an undocumented stall with conditional jumps, you could get add and subtract faster for small lengths, if you know the length at compile-time. But that would only be relevant for small-size floating point types such as Andrei was talking about; it wouldn't help BigInt. And the benefit's negligible for AMD machines.

BTW, discovering that stall is one of the reasons I'm faster than GMP.
Apr 09 2009
parent bearophile <bearophileHUGS lycos.com> writes:
Don:
 On Intel machines, in which adc and sbc are ridiculously slow and have 
 an undocumented stall with conditional jumps,

I don't know if an i7 CPU has this problem too. It may sound silly, but why don't you write to Intel about that (giving some demo asm code too, if necessary), asking if they can remove such a bug from the next generation of CPUs? From several things I have read and seen in the past, it seems they are willing to listen to people that know what they are talking about. For example, years ago they listened to the authors of one Doom version, or to the authors of a famous open source video decoder (I think H.264). A good amount of time ago I discussed with one of them a faster integer division of small integers (similar to what good C compilers do to divide by small constants).

Bye,
bearophile
Apr 09 2009