
digitalmars.D - SSE, AVX, and beyond

Eljay <eljay adobe.com> writes:
As with cent/ucent, should a keyword be reserved for 256?  512?  1024?

- - - - - - - - - - - - - - -

Most programming languages are loath to add new keywords, because doing so
risks breaking existing code.

So the time to add keywords for D 2.0 (alpha) is now, while the language is
still in alpha.

For 128-bit signed/unsigned int, D has reserved cent and ucent.  Perfect for
working with UUIDs.  But not implemented yet.

The next generation of SSE, called AVX, will support 256-bit data.  Now,
granted, those 256-bit registers and the data path are more like (u)byte[32],
(u)short[16], (u)int[8], or float[8] arrays with specially optimized SIMD
instructions.

The current generation of GPUs supports 256-bit data, and some of them may be
used for GPGPU work.  (The D Programming Language does not target those GPUs
yet.  But with a little luck...)

It is within the realm of reason that there may be larger-than-64-bit integers
in the near future.  128 is ready, with cent/ucent.  Should larger sizes have
reserved keywords?  Or are those larger sizes just too ludicrously large to
worry about?

Or maybe a language extension like int!256, int!512, int!1024 (and retrofitted
to int!128, int!64, int!32, int!16, int!8 ... much as MSVC++ has __int64,
__int32, __int16, __int8, or FORTRAN has integer*8, integer*4, integer*2,
integer*1).
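
In present-day D something along these lines can already be sketched as a
library template, though only the bitwise part comes for free; the names below
(FixedInt, Int256) are made up for illustration, not an existing type:

  // A library-level stand-in for the proposed int!N notation: a fixed-width
  // value parameterized by its bit count, stored as 32-bit limbs.
  struct FixedInt(uint bits)
  {
      static assert(bits > 0 && bits % 32 == 0, "width must be a multiple of 32");
      uint[bits / 32] limbs;

      // Bitwise ops work limb-by-limb; real arithmetic (carry propagation)
      // would still have to come from the library or the compiler.
      FixedInt opBinary(string op)(FixedInt rhs) const
          if (op == "&" || op == "|" || op == "^")
      {
          FixedInt r;
          mixin("r.limbs[] = limbs[] " ~ op ~ " rhs.limbs[];");
          return r;
      }
  }

  alias Int256 = FixedInt!256;

  unittest
  {
      Int256 a, b;
      a.limbs[0] = 0xFF;
      b.limbs[0] = 0x0F;
      assert((a & b).limbs[0] == 0x0F);
  }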

Thoughts?

[My FORTRAN is very, very rusty, so my apologies if I misremembered.]
Aug 09 2009
bearophile <bearophileHUGS lycos.com> writes:
Eljay:
> Should larger sizes have reserved keywords?  Or are those larger sizes just
> too ludicrously large to worry about?
Today a CPU can perform an operation like ucent & ucent with the single SSE2
instruction PAND (_mm_and_si128 in GCC).  A 128-bit unsigned number may look of
little use if you see it as an integer value, but it can be very useful if you
see it as an array of bits that can be modified with operations that work on
many bits in parallel.  Such very wide bitwise operations are useful in many
situations: to compute possible moves in a program that plays chess, to find
approximate genomic substrings using a fast finite state machine, to quickly
perform set operations on bit vectors and Bloom filters, and so on.

Adding more type names for values wider than cent/ucent doesn't look useful.
In my opinion it's much better if the D compiler is able to translate
operations like:

  float[4] v1, v2, v3;
  uint[4] a1, a2, a3;
  v3[] = v1[] + v2[];
  a3[] = a1[] & a2[];

into single inlined SSE1/2/3+ instructions, and:

  float[8] v1, v2, v3;
  uint[8] a1, a2, a3;
  v3[] = v1[] + v2[];
  a3[] = a1[] & a2[];

into inlined pairs of instructions, and so on.  LDC is supposed to be able to
do this, but currently it can't (and today those v1, v2, etc. on the stack
also have to be aligned to 16 bytes).

Bye,
bearophile
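
For reference, present-day D exposes the 128-bit case directly through
core.simd (which postdates this post); a minimal sketch, assuming an SSE2
target whose integer vector types support the bitwise operators:

  import core.simd;

  version (D_SIMD)
  {
      // 128-bit bitwise AND: with SSE2 this lowers to a single PAND,
      // i.e. the "ucent & ucent" operation described above.
      ubyte16 and128(ubyte16 a, ubyte16 b)
      {
          return a & b;
      }
  }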
Aug 09 2009
Robert Fraser <fraserofthenight gmail.com> writes:
Eljay wrote:
> As with cent/ucent, should a keyword be reserved for 256?  512?  1024?
> [...]
> For 128-bit signed/unsigned int, D has reserved cent and ucent.  Perfect for
> working with UUIDs.  But not implemented yet.
Is there ANY use case where you'd need a 256-bit integer instead of a
BigInteger?  Even 128 is a bit dodgy.  UUIDs and the like are identifiers, not
numbers, so they have no problem being stored in a struct wrapping a ubyte[].

I agree compilers should support 256+ bit _data_... but doing so with an
entirely new numeric data type is probably a bad idea.  Special treatment for
certain constructs, plus library support, is a much better idea.
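
As an illustration of that kind of wrapper (the name and the fixed ubyte[16]
are just one way to do it, not an existing library type):

  // A UUID treated as an opaque 128-bit identifier rather than a number:
  // value equality is all it needs, no arithmetic.
  struct UUID
  {
      ubyte[16] data;   // exactly 128 bits, no heap allocation
  }

  unittest
  {
      UUID a, b;
      a.data[0] = 1;
      assert(a == a);
      assert(a != b);   // default struct equality compares the bytes
  }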
Aug 10 2009
Mattias Holm <mattias.holm openorbit.REMOVE.THIS.org> writes:
Robert Fraser wrote:
> Is there ANY use case where you'd need a 256-bit integer instead of a
> BigInteger?  Even 128 is a bit dodgy.  UUIDs and the like are identifiers,
> not numbers, so they have no problem being stored in a struct wrapping a
> ubyte[].
Fixed-point arithmetic!!!  Seriously: 256-bit fixed-point numbers (roughly an
integer, but with renormalization after multiplication and division) can
represent the position of anything in the visible universe at a resolution
finer than the Planck length (below which finer precision makes no physical
sense).  I would find that pretty nice to work with, actually; it would be a
lot nicer than doubles.
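
(For scale: the observable universe is about 9e26 m across and the Planck
length about 1.6e-35 m, a ratio of roughly 2^205, so 256 bits cover it with
dozens of bits to spare.)  A minimal sketch of the renormalization step, shown
at Q16.16 width since D has no native 256-bit integer yet; the names and the
16/16 split are only illustrative, and a 256-bit version would split its word
the same way:

  // Q16.16 fixed point: 16 integer bits, 16 fractional bits.
  // Multiplication widens, then shifts back down ("renormalizes").
  alias Fixed = int;
  enum fracBits = 16;

  Fixed fxMul(Fixed a, Fixed b)
  {
      return cast(Fixed)((cast(long) a * b) >> fracBits);
  }

  Fixed fxDiv(Fixed a, Fixed b)
  {
      return cast(Fixed)((cast(long) a << fracBits) / b);
  }

  Fixed fromDouble(double x) { return cast(Fixed)(x * (1 << fracBits)); }
  double toDouble(Fixed x)   { return cast(double) x / (1 << fracBits); }

  unittest
  {
      assert(toDouble(fxMul(fromDouble(3.0), fromDouble(0.5))) == 1.5);
  }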

> I agree compilers should support 256+ bit _data_... but doing so with an
> entirely new numeric data type is probably a bad idea.  Special treatment
> for certain constructs, plus library support, is a much better idea.
I don't get this.  I am doing research in compiler optimizations, and I can
say that library-supported value types are just painful to reason about: since
they are not part of the language they end up not being properly formalized,
and the compiler loses optimization opportunities.

It makes more sense to declare some growth opportunity for the future, and
then add new types (with new keywords) in future language revisions, which
could be enabled by flags.  This does complicate the parser a little, but it
does not have to be overly complicated in a handwritten parser.

/ Mattias
Aug 20 2009