## digitalmars.D - 3D Math Data structures/SIMD

• Lukas Pinkowski (25/25) Dec 21 2007 I'm wondering why the 2D/3D/4D-vector and -matrix data types don't find
• Janice Caron (12/18) Dec 21 2007 That's an awful lot of reserved words - especially when you take into
• Bill Baxter (7/13) Dec 21 2007 I'd complain. :-) Everyone knows that dot(a,b) and cross(a,b) are the
• Jascha Wetzel (3/6) Dec 21 2007 i agree. i think D should stick to the syntax established in shader
• Lukas Pinkowski (24/46) Dec 21 2007 Well, let's make a std.vectormath with:
• Janice Caron (15/20) Dec 21 2007 Now that's just nonsense! Matrix multiplication should be matrix
• Lukas Pinkowski (23/46) Dec 21 2007 Because those templates would have to use inline assembler to make use o...
• Janice Caron (6/7) Dec 21 2007 I call ad hominem on that one!
• Lukas Pinkowski (9/18) Dec 22 2007 Well, but why do you call the component-wise multiplication nonsense? As...
• Janice Caron (28/34) Dec 22 2007 Well, yes and no. Obviously you are correct in that one can define any
• Saaa (9/16) Dec 22 2007 I won't call it a argument at all, just an observation(of which he isn't...
• Bill Baxter (29/54) Dec 21 2007 As pointed out, there is also the outer product that creates an NxN
• "Jérôme M. Berger" (25/31) Dec 22 2007 -----BEGIN PGP SIGNED MESSAGE-----
• 0ffh (4/8) Dec 22 2007 Is it not? Tell Jacques Hadamard!
• Janice Caron (4/6) Dec 22 2007 You're going to have to be a bit more specific, I'm afraid. I googled
• 0ffh (5/8) Dec 22 2007 The element-wise product of matrices (which you called "just not
• Janice Caron (25/32) Dec 22 2007 Cool. I like learning new things. So, elementwise multiplication is
• Lukas Pinkowski (25/57) Dec 22 2007 Floating point numbers do not obey the rules which we normally associate
• Jascha Wetzel (2/9) Dec 22 2007 http://en.wikipedia.org/wiki/Matrix_multiplication#Hadamard_product
• Janice Caron (7/8) Dec 22 2007 Interesting that the Hadamard product is listed under /Matrix/
• 0ffh (8/18) Dec 22 2007 I just wanted to demonstrate to you your unfortunate predisposition to
• Jascha Wetzel (5/36) Dec 21 2007 this has been proposed before and there has been discussion about the
• Lukas Pinkowski (10/14) Dec 21 2007 I think GDC and LLVMDC would be nice testbeds for such an extension. One...
• Don Clugston (16/33) Dec 22 2007 Do you think you could come up with some concrete examples?
• Mikola Lysenko (4/4) Dec 21 2007 I tried proposing a similar idea some time ago. There was a lot of good...
• Rioshin an'Harthen (24/52) Dec 22 2007 I've found myself wanting the vector and matrix types to be built into t...
• Janice Caron (27/39) Dec 22 2007 With this I completely agree. However, there's more than one way to
• Bill Baxter (10/59) Dec 22 2007 Can be done already (and has been done in the OpenMesh matrix and vector...
• Janice Caron (8/14) Dec 22 2007 I withdraw that last remark. It was uncalled for. I was trying to
• Bill Baxter (28/103) Dec 22 2007 Yeh, and what about octonians too! And we better distinguish
• Janice Caron (15/15) Dec 22 2007 If you wanted to go even more general, you could go beyond std.matrix
• Sascha Katzner (11/13) Dec 22 2007 One reason could be that, it is a performance penalty for the OS to save...
• Jascha Wetzel (38/52) Dec 22 2007 interesting! since SSE is an integral part of x86-64, i wonder whether
• Lukas Pinkowski (18/34) Dec 22 2007 Hi, that's because the OS needs to backup both SSE-registers and the
• Pablo Ripollés (5/26) Dec 22 2007 be careful with that! a matrix is an algebraic structure very different ...
• Knud Soerensen (12/31) Dec 22 2007 Take a look at the vectorization suggestion on
• Tomas Lindquist Olsen (7/38) Dec 23 2007 I could definitely be interested in experimenting with this in LLVMDC. A...
Lukas Pinkowski <Lukas.Pinkowski web.de> writes:
```I'm wondering why the 2D/3D/4D vector and matrix data types don't find
their way into mainstream programming languages as builtin types?
The only ones I know of with builtin support are the shader languages
(HLSL, GLSL, Cg, ...) and, I suppose, the VectorC/C++ compiler. Instead, the
vector and matrix classes are coded over and over again, with different
3D libraries using their own implementation/interface.
SIMD instructions are pretty 'old' now, but compilers support them only
through non-portable extensions or handwritten assembly.

I think the programming language of the future should have those types built in.

It would be nice if one of the Open Source D-compilers (GDC, LLVMDC) would
implement such an extension to D in an experimental branch; don't know if
it's easy to generate SIMD-code with the GCC backend, but LLVM is supposed
to make it easy, right?
Hopefully this extension could propagate after some time into the official D
spec. Even if Walter won't touch the backend again, DMD could at least
provide a software implementation (like for 64bit integer operations).

Seeing that D seems to be quite popular for game programming and numerics,
this would be a nice addition.

Well, as for the typenames, I guess something along the lines of:

v2f, v3f, v4f, m2f, m3f, m4f: vectors and matrices based on float
v2d, v3d, v4d, m2d, m3d, m4d: vectors and matrices based on double
v2r, v3r, v4r, m2r, m3r, m4r: vectors and matrices based on real

Or vec2f instead of v2f, mat2f instead of m2f, and so on. Complex versions would
probably be needed, too?
```
Dec 21 2007
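A sketch of what the proposal might look like in practice (hypothetical syntax; none of these types or functions exist in any D compiler, they only illustrate the idea):

```d
// Hypothetical usage of the proposed built-in vector/matrix types.
// vec3f, vec4f, mat4f, dot and cross are the *proposed* names, not real D.
vec3f position  = vec3f(1.0f, 2.0f, 3.0f);
vec3f direction = vec3f(0.0f, 1.0f, 0.0f);

vec3f sum = position + direction;        // component-wise, SIMD-friendly
float d   = dot(position, direction);    // inner product
vec3f c   = cross(position, direction);  // 3D cross product

mat4f world = mat4f.identity;            // 4x4 transform
vec4f p     = world * vec4f(1, 2, 3, 1); // matrix * vector
```

A compiler could lower such operations directly to SSE or AltiVec instructions instead of relying on inline assembler.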
```On 12/21/07, Lukas Pinkowski <Lukas.Pinkowski web.de> wrote:
Well, as for the typenames, I guess something along the lines of:

v2f, v3f, v4f, m2f, m3f, m4f: vectors and matrices based on float
v2d, v3d, v4d, m2d, m3d, m4d: vectors and matrices based on double
v2r, v3r, v4r, m2r, m3r, m4r: vectors and matrices based on real

Or vec2f instead of v2f, mat2f instead of m2f, and so on. Complex versions would
probably be needed, too?

That's an awful lot of reserved words - especially when you take into
account the distinction between complex float, complex double, and ...
er ... the other one. :-) Also, I couldn't help but notice that all
your matrices seem to be square, and in general, they're not.

What's wrong with Vector!(3,float), Matrix!(4,4,real),
Matrix!(3,4,cdouble), etc.?

And as for API, well, the operator overloads should just do the
obvious thing (although I admit we lack dot-product and cross-product
operators, but if you used u*v for dot product and u.cross(v) for
cross product, I don't see anyone complaining).

In other words, it /should/ be a library feature.
```
Dec 21 2007
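A minimal sketch of the library route suggested here, with free dot/cross functions in the style argued for below (all names are assumptions for illustration, using D1-era operator overloads):

```d
// Library-only fixed-size vector; no compiler support assumed.
struct Vector(int N, T)
{
    T[N] data;

    Vector opAdd(Vector rhs)            // u + v, component-wise
    {
        Vector r;
        foreach (i, x; data)
            r.data[i] = x + rhs.data[i];
        return r;
    }
}

// dot(a, b): defined for any dimension N.
T dot(int N, T)(Vector!(N, T) a, Vector!(N, T) b)
{
    T s = 0;
    foreach (i, x; a.data)
        s += x * b.data[i];
    return s;
}

// cross(a, b): only meaningful for N == 3.
Vector!(3, T) cross(T)(Vector!(3, T) a, Vector!(3, T) b)
{
    Vector!(3, T) r;
    r.data[0] = a.data[1] * b.data[2] - a.data[2] * b.data[1];
    r.data[1] = a.data[2] * b.data[0] - a.data[0] * b.data[2];
    r.data[2] = a.data[0] * b.data[1] - a.data[1] * b.data[0];
    return r;
}
```

The sticking point raised later in the thread is that such templates cannot easily be lowered to SIMD instructions without compiler support.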
Bill Baxter <dnewsgroup billbaxter.com> writes:
```Janice Caron wrote:
On 12/21/07, Lukas Pinkowski <Lukas.Pinkowski web.de> wrote:
Well, as for the typenames, I guess something along the lines of:

And as for API, well, the operator overloads should just do the
obvious thing (although I admit we lack dot-product and cross-product
operators, but if you used u*v for dot product and u.cross(v) for
cross product, I don't see anyone complaining).

I'd complain. :-)  Everyone knows that  dot(a,b) and cross(a,b) are the
way God intended C-derived languages to implement the dot and cross
products. :-)  [*]

--bb

[*] Unless you're downs, of course, and then it's a /dot/ b and  a
/cross/ b.
```
Dec 21 2007
Jascha Wetzel <firstname mainia.de> writes:
```Bill Baxter wrote:
I'd complain. :-)  Everyone knows that  dot(a,b) and cross(a,b) are the
way God intended C-derived languages to implement the dot and cross
products. :-)  [*]

i agree. i think D should stick to the syntax established in shader
languages. everything else is counterintuitive.
```
Dec 21 2007
Lukas Pinkowski <Lukas.Pinkowski web.de> writes:
```Janice Caron wrote:

On 12/21/07, Lukas Pinkowski <Lukas.Pinkowski web.de> wrote:
Well, as for the typenames, I guess something along the lines of:

v2f, v3f, v4f, m2f, m3f, m4f: vectors and matrices based on float
v2d, v3d, v4d, m2d, m3d, m4d: vectors and matrices based on double
v2r, v3r, v4r, m2r, m3r, m4r: vectors and matrices based on real

Or vec2f instead of v2f, mat2f instead of m2f, and so on. Complex versions
would probably be needed, too?

That's an awful lot of reserved words - especially when you take into
account the distinction between complex float, complex double, and ...
er ... the other one. :-)

Well, let's make a std.vectormath with:

module std.vectormath;

alias __v2f vec2f;
alias __v3f vec3f;

...

Double underscores are reserved anyway, so where exactly is the problem?
People have accepted having hundreds of keywords and reserved identifiers
in D.
Also I forgot the integer and bool versions like v2i, v2ui, and so on.

Also, I couldn't help but notice that all
your matrices seem to be square, and in general, they're not.

In general matrices aren't limited to 4x4, right? But those are the sizes used
predominantly in 3D math; if you want higher dimensions, you can build them on
top of the built-in ones.

What's wrong with Vector!(3,float), Matrix!(4,4,real),
Matrix!(3,4,cdouble), etc.?

They are not builtin types. You know, we have those nice SIMD instructions
in our processors, but using them requires inline assembler, or
non-portable/non-standard language extensions (see the GCC SIMD extension).

And as for API, well, the operator overloads should just do the
obvious thing (although I admit we lack dot-product and cross-product
operators, but if you used u*v for dot product and u.cross(v) for
cross product, I don't see anyone complaining).

If we didn't go for identifiers, the cross product could use #, but I don't
know what to use for the dot product. Multiplication should be component-wise

In other words, it /should/ be a library feature.

Well, that's the C++ way of thinking.
I'm pretty sure that if C++ had existed back when floating-point
(co-)processors became widespread, people would have argued that
floating-point support should be a library feature.
```
Dec 21 2007
```On 12/21/07, Lukas Pinkowski <Lukas.Pinkowski web.de> wrote:
What's wrong with Vector!(3,float), Matrix!(4,4,real),
Matrix!(3,4,cdouble), etc.?

They are not builtin types.

And this is a problem because...?

Multiplication should be component-wise

Now that's just nonsense! Matrix multiplication should be matrix
multiplication, and nothing else. For example, multiplying a (square)
matrix by the identity matrix (of the same size) should leave it
unchanged, not zero every element not on the main diagonal!

Likewise, vector multiplication must mean vector multiplication, and
nothing else. (Arguably, there are two forms of vector multiplication
- dot product and cross product - however, cross product only has
meaning in three-dimensions, whereas dot product has meaning in any
number of dimensions, so the dot product is more general).

Componentwise multiplication... Pah! That's just not mathematical.
(Imagine doing that for complex numbers instead of proper complex
multiplication!) No thanks! I'd want my multiplications to actually
```
Dec 21 2007
Lukas Pinkowski <Lukas.Pinkowski web.de> writes:
```Janice Caron wrote:

On 12/21/07, Lukas Pinkowski <Lukas.Pinkowski web.de> wrote:
What's wrong with Vector!(3,float), Matrix!(4,4,real),
Matrix!(3,4,cdouble), etc.?

They are not builtin types.

And this is a problem because...?

Because those templates would have to use inline assembler to make use of
SIMD-hardware. Well, that's bad, because it's hard to inline a function that
uses inline assembler, isn't it?
So, we have all that shiny hardware with those funky instructions, but still
it's hard to utilize it...

Multiplication should be component-wise

Now that's just nonsense! Matrix multiplication should be matrix
multiplication, and nothing else. For example, multiplying a (square)
matrix by the identity matrix (of the same size) should leave it
unchanged, not zero every element not on the main diagonal!

Err, we were talking about vector multiplication, in which we have three
cases. Inner product, outer product and component-wise multiplication.
Matrix multiplication is the matrix multiplication as you are used to. Don't
quote out of context.

Likewise, vector multiplication must mean vector multiplication, and
nothing else. (Arguably, there are two forms of vector multiplication
- dot product and cross product - however, cross product only has
meaning in three-dimensions, whereas dot product has meaning in any
number of dimensions, so the dot product is more general).

Componentwise multiplication... Pah! That's just not mathematical.

So you probably don't know much about maths? It is mathematical as long as
you define it correctly. But I'll do it just for you:

F := Set of floating point numbers
V := F^n (set of n-tuples of floating point numbers)

We define component wise multiplication as a function

m: V x V -> V

with: m(a, b) := (a1*b1, a2*b2, ... , an*bn) =: a * b

That's pretty mathematical, isn't it?

(Imagine doing that for complex numbers instead of proper complex
multiplication!) No thanks! I'd want my multiplications to actually

Multiplication of complex numbers is defined quite clearly, as is for
vectors.
I already mentioned the shader languages, I guess you should look GLSL up
and see how "vec3 * vec3" is handled. Then come back here and tell me about
nonsense.
```
Dec 21 2007
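The definition above translates directly into code; a sketch over plain slices (illustrative only, using D1-era contract syntax):

```d
// Component-wise product m : V x V -> V,
// m(a, b) = (a1*b1, a2*b2, ..., an*bn).
void hadamard(float[] a, float[] b, float[] r)
in { assert(a.length == b.length && b.length == r.length); }
body
{
    foreach (i, x; a)
        r[i] = x * b[i];
}
```

This is exactly what GLSL computes for `vec3 * vec3`.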
```On 12/21/07, Lukas Pinkowski <Lukas.Pinkowski web.de> wrote:
So you probably don't know much about maths?

I call ad hominem on that one!

For the record, I have a degree in pure mathematics. Now, before this
line of enquiry proceeds any further, I request that in future you
attack the argument, not the person! Attacking the person is
unnecessary, irrelevant, and causes flame wars.
```
Dec 21 2007
Lukas Pinkowski <Lukas.Pinkowski web.de> writes:
```Janice Caron wrote:

On 12/21/07, Lukas Pinkowski <Lukas.Pinkowski web.de> wrote:
So you probably don't know much about maths?

I call ad hominem on that one!

I apologize.

For the record, I have a degree in pure mathematics.

Well, but why do you call the component-wise multiplication nonsense? As a
mathematician you should know that an operator/function is exactly that,
what you define it to be. If I define vector/vector-multiplication to be
component-wise multiplication, then it is component-wise multiplication.

It's just that 3D-programmers agreed to define vector/vector-multiplication
(with the * operator) like this.

Now, before this
line of enquiry proceeds any further, I request that in future you
attack the argument, not the person! Attacking the person is
unnecessary, irrelevant, and causes flame wars.

Right, I'll be cool :-)
```
Dec 22 2007
```On 12/22/07, Lukas Pinkowski <Lukas.Pinkowski web.de> wrote:
Well, but why do you call the component-wise multiplication nonsense? As a
mathematician you should know that an operator/function is exactly that,
what you define it to be. If I define vector/vector-multiplication to be
component-wise multiplication, then it is component-wise multiplication.

Well, yes and no. Obviously you are correct in that one can define any
function to do anything (...just as you can in any programming
language). However, whether or not it is meaningful to call such a
function "multiplication" is another matter. Elementwise
multiplication is not normally considered to be "multiplication" in
vector algebra.

Googling "vector multiplication" mostly yields the expected results of
dot product and cross product, although Wolfram Mathworld also lists
the "vector direct product" which yields a tensor. I couldn't find
anything, anywhere, which considers elementwise multiplication to be
valid vector multiplication. If such a usage exists, I must assume it
to be rare, or limited to some particular field of expertise (e.g. 3D
graphic programming, which you touch on next).

It's just that 3D-programmers agreed to define vector/vector-multiplication
(with the * operator) like this.

Ah - that would be my problem. I'm not a 3D programmer. (I /am/ three
dimensional, and I /am/ a programmer, but ... well, you get the drift!).
To me, there's really nothing special about three dimensions. Vector
arithmetic must work, regardless of the number of elements, be that 3,
4, 5, or 87403461.

Perhaps there is merit in such a function, of which I am unaware,
which has benefit to programmers of 3D graphics. That's cool! Not a
problem. But I still wouldn't call it multiplication. To me,
multiplication of a vector by a vector is undefined. (Multiplication
of a vector by a scalar is defined).

So sure - why not have an elementwise multiplication function? If it's
useful, it should be implemented. I just don't think it's a good idea
to overload opMul() and opMulAssign() with that function. Maybe just
call it something else?
```
Dec 22 2007
Lukas Pinkowski <Lukas.Pinkowski web.de> writes:
```Janice Caron wrote:

On 12/22/07, Lukas Pinkowski <Lukas.Pinkowski web.de> wrote:
Well, but why do you call the component-wise multiplication nonsense? As
a mathematician you should know that an operator/function is exactly
that, what you define it to be. If I define vector/vector-multiplication
to be component-wise multiplication, then it is component-wise
multiplication.

Well, yes and no. Obviously you are correct in that one can define any
function to do anything (...just as you can in any programming
language). However, whether or not it is meaningful to call such a
function "multiplication" is another matter. Elementwise
multiplication is not normally considered to be "multiplication" in
vector algebra.

Well, "imaginary real" isn't normally considered to be enumerable and
imaginary and real.
Everyone's smart enough to learn such a thing, even that 'adding' two
strings is concatenating them (in C++).

Googling "vector multiplication" mostly yields the expected results of
dot product and cross product, although Wolfram Mathworld also lists
the "vector direct product" which yields a tensor. I couldn't find
anything, anywhere, which considers elementwise multiplication to be
valid vector multiplication.

If such a usage exists, I must assume it
to be rare, or limited to some particular field of expertise (e.g. 3D
graphic programming, which you touch on next).

No, it's not rare; the shading languages aren't limited to computer graphics,
but are used for programming highly parallel hardware (= GPU) also. Please
google for GPGPU. It would be a surprise to many people if '*' didn't mean
elementwise multiplication.

It's just that 3D-programmers agreed to define
vector/vector-multiplication (with the * operator) like this.

Ah - that would be my problem. I'm not a 3D programmer. (I /am/ three
dimensional, and I /am/ a programmer, but ... well, you get the drift!).
To me, there's really nothing special about three dimensions. Vector
arithmetic must work, regardless of the number of elements, be that 3,
4, 5, or 87403461.

Well, to INTEL, AMD and IBM (PowerPC) there is something special about three
dimensions, as they have built special instruction sets that can be used
for efficient 3D-maths. And I want to be able to use the available hardware
resources easily and in a portable way.

If you want vectors of higher dimension than 4D, you can either implement
them traditionally, or on top of the optimized builtin vectors.
I don't know how hard it would be to implement hardware optimized arbitrary
sized vectors and matrices in the compiler. But who would say no to such a
thing?

Perhaps there is merit in such a function, of which I am unaware,
which has benefit to programmers of 3D graphics. That's cool! Not a
problem. But I still wouldn't call it multiplication. To me,
multiplication of a vector by a vector is undefined. (Multiplication
of a vector by a scalar is defined).

It's irrelevant if it's undefined to you; if Walter or someone else defines
vector-vector multiplication to be elementwise multiplication, then it is
elementwise multiplication.
After all: Walter has brainwashed us to believe 'imaginary real' to be
imaginary 80bit IEEE floating point numbers on x87. ;-)
```
Dec 22 2007
```On 12/22/07, Lukas Pinkowski <Lukas.Pinkowski web.de> wrote:

No it's not rare; the shading languages aren't limited to computer graphics,
but are used for programming highly parallel hardware (= GPU) also. Please
google for GPGPU. It would be a surprise to many people if '*' didn't mean
elementwise multiplication.

Done. Having read up on it, I now withdraw all of my objections ... except one.

If there's hardware support of three-element arrays and four-element
arrays, then I see no reason why they can't be considered primitive
types. You've convinced me, and you've got my vote.

My one objection (which of course was not one of your proposals in the
first place, merely my misunderstanding), is that I don't want these
things to be called "vectors". I'd like to see that term reserved for
the true mathematical entities. Call them whatever you want - I don't
care - just not "vector", and my complaints will disappear.

After all: Walter has brainwashed us to believe 'imaginary real' to be
imaginary 80bit IEEE floating point numbers on x87. ;-)

I'm sure you know as well as I that many of us, myself included, are
not happy with that nomenclature. We may unfortunately be stuck with
it, but one example of bad naming does not justify another.
```
Dec 22 2007
Bill Baxter <dnewsgroup billbaxter.com> writes:
```Janice Caron wrote:
On 12/22/07, Lukas Pinkowski <Lukas.Pinkowski web.de> wrote:

No it's not rare; the shading languages aren't limited to computer graphics,
but are used for programming highly parallel hardware (= GPU) also. Please
google for GPGPU. It would be a surprise to many people if '*' didn't mean
elementwise multiplication.

Done. Having read up on it, I now withdraw all of my objections ... except one.

If there's hardware support of three-element arrays and four-element
arrays, then I see no reason why they can't be considered primitive
types. You've convinced me, and you've got my vote.

My one objection (which of course was not one of your proposals in the
first place, merely my misunderstanding), is that I don't want these
things to be called "vectors". I'd like to see that term reserved for
the true mathematical entities. Call them whatever you want - I don't
care - just not "vector", and my complaints will disappear.

I believe the previous thread on providing support for these hardware
entities suggested names like float3 and float4.  That seems good to me
in that it doesn't explicitly promise to support any particular
mathematical convention.

--bb
```
Dec 22 2007
"Jérôme M. Berger" <jeberger free.fr> writes:
```
Janice Caron wrote:
On 12/22/07, Lukas Pinkowski <Lukas.Pinkowski web.de> wrote:
Well, but why do you call the component-wise multiplication nonsense? As a
mathematician you should know that an operator/function is exactly that,
what you define it to be. If I define vector/vector-multiplication to be
component-wise multiplication, then it is component-wise multiplication.

Well, yes and no. Obviously you are correct in that one can define any
function to do anything (...just as you can in any programming
language). However, whether or not it is meaningful to call such a
function "multiplication" is another matter. Elementwise
multiplication is not normally considered to be "multiplication" in
vector algebra.

Actually, it is (more or less, the explanations on this page are a
bit more general):
http://en.wikipedia.org/wiki/Algebra_over_a_field

Jerome
- --
+------------------------- Jerome M. BERGER ---------------------+
|    mailto:jeberger free.fr      | ICQ:    238062172            |
|    http://jeberger.free.fr/     | Jabber: jeberger jabber.fr   |
+---------------------------------+------------------------------+
```
Dec 22 2007
"Saaa" <empty needmail.com> writes:
```So you probably don't know much about maths? It is mathematical as long as
you define it correctly. But I'll do it just for you:

I call ad hominem on that one!

For the record, I have a degree in pure mathematics. Now, before this
line of enquiry proceeds any further, I request that in future you
attack the argument, not the person! Attacking the person is
unnecessary, irrelevant, and causes flame wars.

I wouldn't call it an argument at all, just an observation (of which he isn't
quite sure, hence the question mark).
The argument comes right after it. Making an observation about the other
party isn't that bad; it helps in
understanding the other's line of thought.
An ad hominem replacement would be:
You are wrong because you don't know much about math!

But I do have to say that the last sentence is a bit condescending, which is
unnecessary.
```
Dec 22 2007
Bill Baxter <dnewsgroup billbaxter.com> writes:
```Janice Caron wrote:
On 12/21/07, Lukas Pinkowski <Lukas.Pinkowski web.de> wrote:
What's wrong with Vector!(3,float), Matrix!(4,4,real),
Matrix!(3,4,cdouble), etc.?

They are not builtin types.

And this is a problem because...?

Multiplication should be component-wise

Now that's just nonsense! Matrix multiplication should be matrix
multiplication, and nothing else. For example, multiplying a (square)
matrix by the identity matrix (of the same size) should leave it
unchanged, not zero every element not on the main diagonal!

Likewise, vector multiplication must mean vector multiplication, and
nothing else. (Arguably, there are two forms of vector multiplication
- dot product and cross product - however, cross product only has
meaning in three-dimensions, whereas dot product has meaning in any
number of dimensions, so the dot product is more general).

As pointed out, there is also the outer product that creates an NxN
matrix.  Also defined for any N.  And I believe analogues of the cross
product exist for all odd-dimensioned vectors.  Can't remember exactly
on that one -- heard it listening to a Geometric Algebra talk too long ago.

Componentwise multiplication... Pah! That's just not mathematical.
(Imagine doing that for complex numbers instead of proper complex
multiplication!) No thanks! I'd want my multiplications to actually

The analogy is bad for a number of reasons.

1) there's little practical value in component-wise multiplication of
complex numbers.  Whereas component-wise multiplication of vectors is
very often useful in practice.

2) In math the product of two complex numbers a and b is written just
like the product of two scalars: ab.  Writing two vectors next to each
other is a linear algebra "syntax error".  It's an invalid operation
unless you transpose one of the vectors.  So if anything, in a
programming language * on vectors should just not be allowed.  But
making it do nothing is not very useful.

3) In numerical applications it's useful to define all kinds of
non-linear algebra operators too.  For instance shading languages
usually define a < b to be a componentwise comparison yielding a vector
of booleans.  In terms of linear algebra this is meaningless but it's
darn useful, and kind of goes along with the idea that + and - work
component-wise.  And if you allow that, then why not just be consistent
all the way and say that all the binary operators that yield a scalar
result are defined componentwise?  And use things like dot() cross() and
outer() for the various specialized vector products.

4) Componentwise multiplication of vectors is not really "nonsense" even
in terms of linear algebra.  You just have to think of a*b as being
defined to mean diag(a)*b.  That is, one of the operands is first
implicitly converted to a diagonal matrix.

--bb
```
Dec 21 2007
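Point 4 above can be made concrete; a sketch showing that diag(a)*b collapses to the component-wise product (illustrative helper, not from the thread):

```d
// Multiply diag(a) by b: row i of diag(a) is zero everywhere except
// for a[i] at column i, so the matrix-vector product reduces to a[i]*b[i].
void diagTimes(float[] a, float[] b, float[] r)
{
    assert(a.length == b.length && b.length == r.length);
    foreach (i, x; a)
    {
        r[i] = 0;
        foreach (j, y; b)
            r[i] += (i == j ? x : 0.0f) * y; // diag(a)[i][j] * b[j]
    }
    // Net effect: r[i] == a[i] * b[i], the component-wise product.
}
```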
"Jérôme M. Berger" <jeberger free.fr> writes:
```
Janice Caron wrote:
Likewise, vector multiplication must mean vector multiplication, and
nothing else. (Arguably, there are two forms of vector multiplication
- dot product and cross product - however, cross product only has
meaning in three-dimensions, whereas dot product has meaning in any
number of dimensions, so the dot product is more general).

Actually, there are *four* forms of vector multiplication:
- dot product;
- cross product (which btw is defined for all finite
dimensionalities greater than 1);
- outer product;
- component-wise.

Of the four, component-wise is the only one that makes sense for a
multiplication *operator* because it's the only one that is defined
as taking exactly two input operands in vector space and returning a
value in the same vector space.

Jerome
```
Dec 22 2007
```On 12/22/07, "Jérôme M. Berger" <jeberger free.fr> wrote:
Of the four, component-wise is the only one that makes sense for a
multiplication *operator* because it's the only one that is defined
as taking exactly two input operands in vector space and returning a
value in the same vector space.

By which you mean, you'd like the product of a Vector!(N,T) and a
Vector!(N,T) to be a Vector!(N,T)?

That seems an artificial restriction to me. After all, the product of
a Matrix!(N,M,T) and a Matrix!(N,M,T) is not a Matrix!(N,M,T), unless
N==M. (In general, it's undefined). But you wouldn't want to say "Aha
- in general, it's undefined, so let's define it" (especially not with
componentwise multiplication, because that would conflict with regular
matrix multiplication when N==M).

As I'm sure you know, the product of a Matrix!(N,M,T) and
Matrix!(M,L,T) is a Matrix!(N,L,T). So there is no requirement that
the product be of the same type as either of the originals.

Anyway, for reasons of all the arguments listed in this thread, I am
now convinced that opMul() and opMulAssign() should not be overloaded
at all for the type Vector!. It seems far better to be explicit about
what kind of multiply you actually want.
```
Dec 22 2007
"Jérôme M. Berger" <jeberger free.fr> writes:
```
Janice Caron wrote:
On 12/22/07, "Jérôme M. Berger" <jeberger free.fr> wrote:
Of the four, component-wise is the only one that makes sense for a
multiplication *operator* because it's the only one that is defined
as taking exactly two input operands in vector space and returning a
value in the same vector space.

By which you mean, you'd like the product of a Vector!(N,T) and a
Vector!(N,T) to be a Vector!(N,T)?

That seems an artificial restriction to me.

However, it is the mathematically accepted definition for a binary
operator:
http://en.wikipedia.org/wiki/Binary_operation

After all, the product of
a Matrix!(N,M,T) and a Matrix!(N,M,T) is not a Matrix!(N,M,T), unless
N==M. (In general, it's undefined). But you wouldn't want to say "Aha
- in general, it's undefined, so let's define it" (especially not with
componentwise multiplication, because that would conflict with regular
matrix multiplication when N==M).

As I'm sure you know, the product of a Matrix!(N,M,T) and
Matrix!(M,L,T) is a Matrix!(N,L,T). So there is no requirement that
the product be of the same type as either of the originals.

a "multiplication *operator*" which, being an *operator* should

Anyway, for reasons of all the arguments listed in this thread, I am
now convinced that opMul() and opMulAssign() should not be overloaded
at all for the type Vector!. It seems far better to be explicit about
what kind of multiply you actually want.

I'd tend to agree on that one. Except that now, we need to find a
reasonably short and meaningful name for "element-wise
multiplication" ("dot", "cross" and "outer" work fine for the other
types of multiplication).

Jerome
- --
+------------------------- Jerome M. BERGER ---------------------+
|    mailto:jeberger free.fr      | ICQ:    238062172            |
|    http://jeberger.free.fr/     | Jabber: jeberger jabber.fr   |
+---------------------------------+------------------------------+
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.7 (GNU/Linux)

iD8DBQFHbhM/d0kWM4JG3k8RAn0sAKC49tOu4Bi+9Q2GqNfBRSWinY1ySwCdFCkP
xIAEMgJG2zsBDBxxwacd7Y0=
=9NTw
-----END PGP SIGNATURE-----
```
Dec 22 2007
```On 12/23/07, "Jérôme M. Berger" <jeberger free.fr> wrote:
However, it is the mathematically accepted definition for a binary operator.

You got me on that one. I completely stand corrected. Guess I've been
doing computing for so long that I fell into the trap of mixing
computer language jargon with math jargon. In D, and other C-like
languages, an "operator" is anything that uses infix notation. In
maths, as you pointed out, it has another meaning entirely. But even
so, this is D we're talking about, and I don't see anyone arguing that
matrix multiplication shouldn't use opMul.

I'd tend to agree on that one. Except that now, we need to find a
reasonably short and meaningful name for "element-wise
multiplication" ("dot", "cross" and "outer" work fine for the other
types of multiplication).

Once upon a time, we were promised elementwise operations across the
board. For example

a[] = b[]; //elementwise assignment
a[] = b[] + c[]; //elementwise addition
a[] = b[] * c[]; //elementwise multiplication

Maybe there was some reason why Walter couldn't get it to work, but it
would be nice to see it back on the drawing board. Even if it couldn't
be made to work for generic arrays, maybe it could be made to work for
new primitive types like float4?
```
Dec 23 2007
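The array syntax recalled above amounts to a plain loop; a sketch of what `a[] = b[] * c[]` would lower to (the compiler would be free to vectorise it):

```d
// Elementwise multiply of two slices into a third; one independent
// multiply per slot, which is exactly the SIMD-friendly pattern.
void mulElementwise(float[] a, const(float)[] b, const(float)[] c)
{
    assert(a.length == b.length && a.length == c.length);
    foreach (i; 0 .. a.length)
        a[i] = b[i] * c[i];
}
```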
```On 12/23/07, Janice Caron <caron800 googlemail.com> wrote:
Even if it couldn't
be made to work for generic arrays, maybe it could be made to work for
new primitive types like float4?

Sorry - I need to correct myself there (before anyone else does!)

If we had new primitive types, float3, float4, etc., then for those
types, multiplication would be elementwise /by default/, so no special
syntax or functions would be needed. e.g.

float4 a, b, c;
a = b * c; // elementwise multiplication

For vectors and matrices in general, we're back to the foreach problem
again! (Foreach was a bad design decision because you can't step
through more than one array in lockstep). If you /could/ then you
could do

Matrix!(4,4,float) a, b, c;
foreach(ref x;a)(y:b)(z:c) x = b * c;

(although that wouldn't be parallelised). So it seems to me that
something really needs to be done to improve (or replace) foreach so
that we can do that, and preferably with parallelisation thrown in for
good measure.
```
Dec 23 2007
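Lacking a lockstep foreach, the elementwise matrix operation above can still be written today with a shared index over the flat storage; a sketch, assuming a Matrix template that exposes its elements as a static array:

```d
struct Matrix(size_t N, size_t M, T)
{
    T[N * M] data;
}

// Lockstep traversal of three matrices via a shared index,
// since foreach can only walk one aggregate at a time.
void hadamardInto(size_t N, size_t M, T)(ref Matrix!(N, M, T) a,
                                         Matrix!(N, M, T) b,
                                         Matrix!(N, M, T) c)
{
    foreach (i; 0 .. N * M)
        a.data[i] = b.data[i] * c.data[i];
}
```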
```On 12/23/07, Janice Caron <caron800 googlemail.com> wrote:
Matrix!(4,4,float) a, b, c;
foreach(ref x;a)(y:b)(z:c) x = b * c;

Matrix!(4,4,float) a, b, c;
foreach(ref x;a)(y:b)(z:c) x = y * z;

To add to that, Walter has always argued that foreach expresses what
the programmer /wants/, not how it's implemented, and that it's up to
the compiler to figure out the most efficient way to implement it,
which might be different on each platform, and might indeed include
parallelisation, if the compiler thinks it's worth it. I tend to agree
with this, and I see no problem with extending foreach for builtin
arrays. The big problem with it is user-defined types, because opApply
just isn't the right way to implement foreach in these cases. Perhaps
the long term solution is iterators - then foreach for user-defined
types could be implemented by the compiler using iterators instead of
opApply?
```
Dec 23 2007
0ffh <frank frankhirsch.youknow.what.todo.net> writes:
```Janice Caron wrote:
Componentwise multiplication... Pah! That's just not mathemathical.
(Imagine doing that for complex numbers instead of proper complex
multiplication!) No thanks! I'd want my multiplications to actually

Is it not? Tell Jacques Hadamard!
You should be a bit more careful what you write and, especially, how.

regards, frank
```
Dec 22 2007
```On 12/22/07, 0ffh <frank frankhirsch.youknow.what.todo.net> wrote:
Is it not? Tell Jacques Hadamard!
You should be a bit more careful what you write and, especially, how.

You're going to have to be a bit more specific, I'm afraid. I googled
Jacques Hadamard and got that he was a mathematician, but beyond that,
I'm lost. What are you getting at?
```
Dec 22 2007
0ffh <frank frankhirsch.youknow.what.todo.net> writes:
```Janice Caron wrote:
You're going to have to be a bit more specific, I'm afraid. I googled
Jacques Hadamard and got that he was a mathematician, but beyond that,
I'm lost. What are you getting at?

The element-wise product of matrices (which you called "just not
mathemathical") bears his name.

regards, frank

p.s. You are right, I could (and should) have been clearer.
```
Dec 22 2007
```On 12/22/07, 0ffh <frank frankhirsch.youknow.what.todo.net> wrote:
Is it not? Tell Jacques Hadamard!
You should be a bit more careful what you write and, especially, how.

Jacques Hadamard and got that he was a mathematician, but beyond that,
I'm lost. What are you getting at?

The element-wise product of matrices (which you called "just not
mathemathical") bears his name.

Cool. I like learning new things. So, elementwise multiplication is
more properly called Hadamard multiplication is it? That's certainly
interesting.

I'm not quite sure how I'm supposed to "tell Jacques Hadamard"
anything, though, given that he's been dead for forty-four years. I
still don't completely understand what you were getting at, but I'll
try to be clearer about what /I/ meant. By "not mathematical", I meant
that this multiplication doesn't obey the rules which we normally
associate with multiplication.

Consider, for example, the simple equation two times two equals four.
(They don't get much easier than that). You could represent that in 2D
vectors using Hadamard multiplication as [2,0] * [2,0] = [4,0]. So
far, so good. But we also expect four divided by two to yield two.
How's that done here? Elementwise division of [4,0] by [2,0] would
involve zero divided by zero for the second element. More bizarrely,
a*b can equal zero, even when neither a nor b is zero. So while it
certainly is reasonable to call it a function, I continue to question
whether or not it is reasonable to call it multiplication. That's what
I meant.

If you're suggesting that I intended some slur on poor Mr Hadamard, I
assure you that's false. (Indeed - I hadn't even heard of him until
you mentioned his name). Rest assured, if he were alive today I would
be /more/ than happy to discuss mathematics with him. :-)
```
Dec 22 2007
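The objection in the post above is easy to check numerically: under the Hadamard product two nonzero vectors can multiply to zero, so no matching division can exist. A minimal sketch:

```d
struct Vec2
{
    double x, y;
}

// Elementwise (Hadamard) product of two 2D vectors.
Vec2 hadamard(Vec2 a, Vec2 b)
{
    return Vec2(a.x * b.x, a.y * b.y);
}
```

`hadamard(Vec2(2, 0), Vec2(2, 0))` gives `Vec2(4, 0)` as in the post, while `hadamard(Vec2(2, 0), Vec2(0, 3))` gives `Vec2(0, 0)` even though neither factor is zero.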
Lukas Pinkowski <Lukas.Pinkowski web.de> writes:
```Janice Caron wrote:

On 12/22/07, 0ffh <frank frankhirsch.youknow.what.todo.net> wrote:
Is it not? Tell Jacques Hadamard!
You should be a bit more careful what you write and, especially, how.

Jacques Hadamard and got that he was a mathematician, but beyond that,
I'm lost. What are you getting at?

The element-wise product of matrices (which you called "just not
mathemathical") bears his name.

Cool. I like learning new things. So, elementwise multiplication is
more properly called Hadamard multiplication is it? That's certainly
interesting.

I'm not quite sure how I'm supposed to "tell Jacques Hadamard"
anything, though, given that he's been dead for forty-four years. I
still don't completely understand what you were getting at, but I'll
try to be clearer about what /I/ meant. By "not mathematical", I meant
that this multiplication doesn't obey the rules which we normally
associate with multiplication.

Floating point numbers do not obey the rules which we normally associate
with any elementary operation like addition, subtraction, multiplication
and division.

For floating point numbers this is possible:

a + b == a

Even when b is not 0! We aren't in 'pure mathematical land' anymore, but
in 'binary numerics land', where the operations we know often behave
differently from what we are used to.

For a mathematically correct language we would have to rename every operator
into something different. What do you prefer to write?

This:
float a, b;
float c = floatingPointAddition(a, b);

Or this:
float a, b;
float c = a + b;

Consider, for example, the simple equation two times two equals four.
(They don't get much easier than that). You could represent that in 2D
vectors using Hadamard multiplication as [2,0] * [2,0] = [4,0]. So
far, so good. But we also expect four divided by two to yield two.
How's that done here? Elementwise division of [4,0] by [2,0] would
involve zero divided by zero for the second element. More bizarrely,
a*b can equal zero, even when neither a nor b is zero. So while it
certainly is reasonable to call it a function, I continue to question
whether or not it is reasonable to call it multiplication. That's what
I meant.

It's a question of consistency: either you question all usage of
mathematical operators in programming languages, or you accept that it's
only a matter of definition and documentation.
I think that it is reasonable to call something multiplication if it is
well-defined and has similarities to other uses of multiplication.
As you know, the multiplication dot is used in maths for all kinds of things
that clearly are not multiplications of two scalars; if you don't know what
the multiplication dot means, you look the definition up, don't you?
```
Dec 22 2007
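The floating point claim made above (a + b == a with b nonzero) can be demonstrated directly:

```d
// Adding a small nonzero b to a large float a can leave a unchanged,
// because b falls entirely below a's precision.
bool additionCanAbsorb()
{
    float a = 1.0e20f;
    float b = 1.0f;
    float sum = a + b;   // stored, so the result is rounded to float
    return b != 0.0f && sum == a;
}
```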
Jascha Wetzel <firstname mainia.de> writes:
```Janice Caron wrote:
On 12/22/07, 0ffh <frank frankhirsch.youknow.what.todo.net> wrote:
Is it not? Tell Jacques Hadamard!
You should be a bit more careful what you write and, especially, how.

You're going to have to be a bit more specific, I'm afraid. I googled
Jacques Hadamard and got that he was a mathematician, but beyond that,
I'm lost. What are you getting at?

```
Dec 22 2007
```On 12/22/07, Jascha Wetzel <firstname mainia.de> wrote:

Interesting that the Hadamard product is listed under /Matrix/
multiplication, not vector multiplication. :-)

This is all very interesting, but does not in any way lead me to
conclude that there is justification for elevating the Hadamard
product to /the/ default function with which to overload opMul for
vectors.
```
Dec 22 2007
0ffh <frank frankhirsch.youknow.what.todo.net> writes:
```Janice Caron wrote:
On 12/22/07, Jascha Wetzel <firstname mainia.de> wrote:

Interesting that the Hadamard product is listed under /Matrix/
multiplication, not vector multiplication. :-)

This is all very interesting, but does not in any way lead me to
conclude that there is justification for elevating the Hadamard
product to /the/ default function with which to overload opMul for
vectors.

I just wanted to demonstrate to you your unfortunate predisposition to
belittle the things you do not know or like (be it the "amateurish"
Tango or the "nonsense" and "just not mathemathical" Hadamard Product).

The result is that, despite the fact that you can be quite insightful,
people are put off by the way you denigrate (as they must perceive it)
their ideas. If they respond in like manner, you should not be surprised.

regards, frank
```
Dec 22 2007
Jascha Wetzel <firstname mainia.de> writes:
```Lukas Pinkowski wrote:
I'm wondering why the 2D/3D/4D-vector and -matrix data types don't find
their way into the mainstream programming languages as builtin types?
The only that I know of that have builtin-support are the shader languages
(HLSL, GLSL, Cg, ...) and I suppose the VectorC/C++-compiler. Instead the
vector- and matrix-class is coded over and over again, with different
3D-libraries using their own implementation/interface.
SIMD instructions are pretty 'old' now, but the compilers support them only
through non-portable extensions, or handwritten assembly.

I think the programming language of the future should have those builtin

It would be nice if one of the Open Source D-compilers (GDC, LLVMDC) would
implement such an extension to D in an experimental branch; don't know if
it's easy to generate SIMD-code with the GCC backend, but LLVM is supposed
to make it easy, right?
Hopefully this extension could propagate after some time into the official D
spec. Even if Walter won't touch the backend again, DMD could at least
provide a software implementation (like for 64bit integer operations).

Seeing that D seems to be quite popular for game programming and numerics,
this would be a nice addition.

Well, as for the typenames, I guess something along

v2f, v3f, v4f, m2f, m3f, m4f: vectors and matrices based on float
v2d, v3d, v4d, m2d, m3d, m4d: vectors and matrices based on double
v2r, v3r, v4r, m2r, m3r, m4r: vectors and matrices based on real

Or vec2f instead of v2f, mat2f instead of m2f, a.s.o. Complex versions would
be probably needed, too?

this has been proposed before and there has been discussion about the
naming, too. i'd like to see that rather sooner than later, as well.

you might want to check out Don Clugston's work on Blade, which is a
significant step towards what you're looking for.
```
Dec 21 2007
Lukas Pinkowski <Lukas.Pinkowski web.de> writes:
```Jascha Wetzel wrote:
this has been proposed before and there has been discussion about the
naming, too. i'd like to see that rather sooner than later, as well.

I think GDC and LLVMDC would be nice testbeds for such an extension. One of
these could implement those into the compiler along with a software
implementation for compatibility with the other compilers. Hopefully Walter
would either include this experimental extension into the D spec, or
propose a standard interface himself.
I looked over the LLVM tutorial and it seems to be quite easy, but I
don't know whether I'll find the time soon to do myself what I demand
from others ;-)

you might want to check out Don Clugston's work on Blade, which is a
significant step towards what you're looking for.

I know about it, and it's really awesome what you can do in D.
```
Dec 21 2007
Don Clugston <dac nospam.com.au> writes:
```Lukas Pinkowski wrote:
Jascha Wetzel wrote:
this has been proposed before and there has been discussion about the
naming, too. i'd like to see that rather sooner than later, as well.

I think GDC and LLVMDC would be nice testbeds for such an extension. One of
these could implement those into the compiler along with a software
implementation for compatibility with the other compilers. Hopefully Walter
would either include this experimental extension into the D spec, or
propose a standard interface himself.
I looked over the LLVM tutorial and it seems to be quite easy, but I
don't know whether I'll find the time soon to do myself what I demand
from others ;-)

you might want to check out Don Clugston's work on Blade, which is a
significant step towards what you're looking for.

I know about it, and it's really awesome what you can do in D.

Do you think you could come up with some concrete examples?
I could imagine a version specialised for 2-D and 3-D vectors and quaternions.
Something like:
---

float[4][] f, g;
const float K = 1.234;

mixin(ginsu(q{
f[5..60].x = g[0..55].y + g[2..57].z;
// whatever else
}));

---
I'm not a game programmer, so I don't really have much idea of which operations
are important.
It would be really helpful to have some example inner loops.
```
Dec 22 2007
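In the spirit of the request above, one inner loop that turns up constantly in graphics code is transforming an array of homogeneous points by a 4x4 matrix. This is an illustrative sketch, not code from Blade or any particular game:

```d
// m is a row-major 4x4 matrix; each point is transformed in place.
void transformPoints(const float[16] m, float[4][] pts)
{
    foreach (ref p; pts)
    {
        float[4] r;
        foreach (row; 0 .. 4)
            r[row] = m[row*4 + 0] * p[0] + m[row*4 + 1] * p[1]
                   + m[row*4 + 2] * p[2] + m[row*4 + 3] * p[3];
        p = r;
    }
}
```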
Mikola Lysenko <mikolalysenko gmail.com> writes:
```I tried proposing a similar idea some time ago.  There was a lot of good
discussion, but I don't think that any consensus was reached at the end.

http://www.digitalmars.com/d/archives/digitalmars/D/Small_Vectors_Proposal_47634.html#N47634

-Mik
```
Dec 21 2007
"Rioshin an'Harthen" <rharth75 hotmail.com> writes:
```"Lukas Pinkowski" <Lukas.Pinkowski web.de> kirjoitti viestissä
news:fkg4qg\$mm\$1 digitalmars.com...
I'm wondering why the 2D/3D/4D-vector and -matrix data types don't find
their way into the mainstream programming languages as builtin types?
The only that I know of that have builtin-support are the shader languages
(HLSL, GLSL, Cg, ...) and I suppose the VectorC/C++-compiler. Instead the
vector- and matrix-class is coded over and over again, with different
3D-libraries using their own implementation/interface.
SIMD instructions are pretty 'old' now, but the compilers support them
only
through non-portable extensions, or handwritten assembly.

I think the programming language of the future should have those built in.

It would be nice if one of the Open Source D-compilers (GDC, LLVMDC) would
implement such an extension to D in an experimental branch; don't know if
it's easy to generate SIMD-code with the GCC backend, but LLVM is supposed
to make it easy, right?
Hopefully this extension could propagate after some time into the official
D
spec. Even if Walter won't touch the backend again, DMD could at least
provide a software implementation (like for 64bit integer operations).

Seeing that D seems to be quite popular for game programming and numerics,
this would be a nice addition.

Well, as for the typenames, I guess something along

v2f, v3f, v4f, m2f, m3f, m4f: vectors and matrices based on float
v2d, v3d, v4d, m2d, m3d, m4d: vectors and matrices based on double
v2r, v3r, v4r, m2r, m3r, m4r: vectors and matrices based on real

Or vec2f instead of v2f, mat2f instead of m2f, a.s.o. Complex versions
would
be probably needed, too?

I've found myself wanting the vector and matrix types to be built into the
compiler too many times, as almost all the different libraries have had to
implement them. And usually, they're incompatible with each other, so it
would be much better if they were "standardized" by the programming language
in question, be it C, C++, D...

However, a gross of new keywords isn't a good idea. I'd prefer to only use
*two* new keywords: vector and matrix, as follows:

vector(uint[4]) vec;

with members accessible by vec.x, vec.y, vec.z and vec.w, and

matrix(cdouble[4]) mat;

with members accessible by mat.11, mat.12, mat.13, mat.14, mat.21 ... mat.43
and mat.44.

If, e.g. a vector of three vectors of four doubles were required, it could
be defined with

vector( vector( double[4] )[3] )

(with the spacing just for clarity)

Actually, come to think of it, should we separate quaternions from vectors,
since they're actually quite different than your standard vector? This would
likely make it easier to support mathematics between quaternions, a
quaternion and a matrix, and a quaternion and a vector. Something like

quaternion(double) quat;

with the members accessible by quat.r, quat.i, quat.j and quat.k.
```
Dec 22 2007
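A quaternion with the proposed r/i/j/k members needs nothing from the compiler; it works as a plain library type. A sketch (using D2-style operator overloading, with names chosen for illustration):

```d
struct Quaternion
{
    double r = 1, i = 0, j = 0, k = 0;

    // Hamilton product: i*i = j*j = k*k = i*j*k = -1.
    Quaternion opBinary(string op : "*")(Quaternion q) const
    {
        return Quaternion(r*q.r - i*q.i - j*q.j - k*q.k,
                          r*q.i + i*q.r + j*q.k - k*q.j,
                          r*q.j - i*q.k + j*q.r + k*q.i,
                          r*q.k + i*q.j - j*q.i + k*q.r);
    }
}
```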
```On 12/22/07, Rioshin an'Harthen <rharth75 hotmail.com> wrote:
it
would be much better if they were "standardized" by the programming language
in question, be it C, C++, D...

With this I completely agree. However, there's more than one way to
standardise. Having a module in Phobos called std.matrix would be
standardisation, and I think that would be perfectly good enough.

However, a gross of new keywords isn't a good idea. I'd prefer to only use
*two* new keywords: vector and matrix

D seems to adopt the general principle that if it can be done with the
language as-is, then there is no need to implement as a new feature.
Walter is particularly cautious about introducing new reserved words,
as clearly that would break existing code which used those words as
identifiers.

vector(uint[4]) vec;

Doesn't seem much different to my earlier suggestion of

Vector!(4,uint) vec;

although I guess

Vector!(uint[4]) vec;

might work just as well. So long as template code can deduce the
element type and size, it probably wouldn't make much difference.

with members accessible by vec.x, vec.y, vec.z and vec.w

It's not obvious to me why the elements should be x, y, z and w. How
does this generalize? What's the rule? Is it "Start at 'x', proceed up
the English alphabet till you get to 'z', then after that work
backwards from 'w' down to 'a'? I don't get it. Seems like an odd and
arbitrary rule, and also totally English-centric. (Well, we wouldn't
want to use the Cyrillic alphabet would we? That's foreign!)

Plus, what would the elements be named for a 100-element vector?

matrix(cdouble[4]) mat;
with members accessible by mat.11, mat.12, mat.13, mat.14, mat.21 ... mat.43
and mat.44.

I'd think that elements should be zero-based, not one-based. Plus,
your syntax makes no provision for matrices with more than nine
elements in either dimension. It's just not general enough.

Actually, come to think of it, should we separate quaternions from vectors,
since they're actually quite different than your standard vector?

Of course! We should always separate apples from oranges, as they're
quite different things. I don't think that's even an issue.
```
Dec 22 2007
Bill Baxter <dnewsgroup billbaxter.com> writes:
```Janice Caron wrote:
On 12/22/07, Rioshin an'Harthen <rharth75 hotmail.com> wrote:
it
would be much better if they were "standardized" by the programming language
in question, be it C, C++, D...

With this I completely agree. However, there's more than one way to
standardise. Having a module in Phobos called std.matrix would be
standardisation, and I think that would be perfectly good enough.

However, a gross of new keywords isn't a good idea. I'd prefer to only use
*two* new keywords: vector and matrix

D seems to adopt the general principle that if it can be done with the
language as-is, then there is no need to implement as a new feature.
Walter is particularly cautious about introducing new reserved words,
as clearly that would break existing code which used those words as
identifiers.

vector(uint[4]) vec;

Doesn't seem much different to my earlier suggestion of

Vector!(4,uint) vec;

although I guess

Vector!(uint[4]) vec;

might work just as well. So long as template code can deduce the
element type and size, it probably wouldn't make much difference.

with members accessible by vec.x, vec.y, vec.z and vec.w

It's not obvious to me why the elements should be x, y, z and w. How
does this generalize? What's the rule? Is it "Start at 'x', proceed up
the English alphabet till you get to 'z', then after that work
backwards from 'w' down to 'a'? I don't get it. Seems like an odd and
arbitrary rule, and also totally English-centric. (Well, we wouldn't
want to use the Cyrillic alphabet would we? That's foreign!)

Plus, what would the elements be named for a 100-element vector?

matrix(cdouble[4]) mat;
with members accessible by mat.11, mat.12, mat.13, mat.14, mat.21 ... mat.43
and mat.44.

Can be done already (and has been done in the OpenMesh matrix and vector
classes).

MatrixT!(double, M,N) A;

A is an MxN matrix, and if M and N are both less than 10 then you can
access elements using the notation A.m00, A.m01, A.m30, etc.

All thanks to the miracles of anonymous unions, static if, string mixins
and compile-time code generation.

http://www.dsource.org/projects/openmeshd/browser/trunk/OpenMeshD/OpenMesh/Core/Geometry/MatrixT.d

--bb
```
Dec 22 2007
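The technique Bill describes can be sketched compactly; the names here (MatrixT, genAccessors) are assumptions for illustration, not the actual OpenMeshD code:

```d
import std.conv : to;

struct MatrixT(T, size_t M, size_t N)
if (M < 10 && N < 10)   // mRC names only make sense for single digits
{
    T[M * N] data;

    // Build "ref T m00() { return data[0]; } ..." at compile time.
    private static string genAccessors()
    {
        string s;
        foreach (r; 0 .. M)
            foreach (c; 0 .. N)
                s ~= "ref T m" ~ to!string(r) ~ to!string(c)
                   ~ "() { return data[" ~ to!string(r * N + c) ~ "]; }\n";
        return s;
    }

    mixin(genAccessors());
}
```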
```On 12/22/07, Janice Caron <caron800 googlemail.com> wrote:
It's not obvious to me why the elements should be x, y, z and w. How
does this generalize? What's the rule? Is it "Start at 'x', proceed up
the English alphabet till you get to 'z', then after that work
backwards from 'w' down to 'a'? I don't get it. Seems like an odd and
arbitrary rule, and also totally English-centric. (Well, we wouldn't
want to use the Cyrillic alphabet would we? That's foreign!)

I withdraw that last remark. It was uncalled for. I was trying to
posit that there is nothing special about the English alphabet, and
that all Unicode letters are acceptable as identifier names, but I
didn't express that very well, so if I caused any offense, I
apologise.

I still don't see how the rule generalises to N elements though, and
so my question about the rule is still open.
```
Dec 22 2007
=?ISO-8859-1?Q?=22J=E9r=F4me_M=2E_Berger=22?= <jeberger free.fr> writes:
```-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

Janice Caron wrote:
On 12/22/07, Janice Caron <caron800 googlemail.com> wrote:
It's not obvious to me why the elements should be x, y, z and w. How
does this generalize? What's the rule? Is it "Start at 'x', proceed up
the English alphabet till you get to 'z', then after that work
backwards from 'w' down to 'a'? I don't get it. Seems like an odd and
arbitrary rule, and also totally English-centric. (Well, we wouldn't
want to use the Cyrillic alphabet would we? That's foreign!)

I withdraw that last remark. It was uncalled for. I was trying to
posit that there is nothing special about the English alphabet, and
that all Unicode letters are acceptable as identifier names, but I
didn't express that very well, so if I caused any offense, I
apologise.

I still don't see how the rule generalises to N elements though, and
so my question about the rule is still open.

I guess he meant to say that .x, .y, .z and .w could be used as an
alternative (and in addition) to [] for small vectors. However, the
problem is that the choice of letters is application-dependent:
- 2D vectors often use (u, v) as well as (x, y);
- 4D vectors often use "t" for the 4th component instead of "w".

Something that could be nice:

Vector!(double[3], "abc") vec;

The "abc" string would be optional, but if given it would need to
be the same size as the vector and it would tell the compiler that
we want to be able to access the elements with vec.a, vec.b and
vec.c in addition to vec[i]. This would allow us to specify what
letters we want to be able to use for any given application.

Jerome
- --
+------------------------- Jerome M. BERGER ---------------------+
|    mailto:jeberger free.fr      | ICQ:    238062172            |
|    http://jeberger.free.fr/     | Jabber: jeberger jabber.fr   |
+---------------------------------+------------------------------+
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.7 (GNU/Linux)

iD8DBQFHbOIId0kWM4JG3k8RAvcKAKC0r3Xu90Ttie9zxQdZEnQz6uJByQCfZVw3
KpaUV5eVt6cUEdJRB/kZcX4=
=jGHS
-----END PGP SIGNATURE-----
```
Dec 22 2007
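Jérôme's naming-string idea needs no compiler support; it can be a pure library template. A sketch (the Vector name and parameter order here are assumptions):

```d
import std.conv : to;

struct Vector(T, size_t N, string names = "")
if (names.length == 0 || names.length == N)
{
    T[N] data;

    // For each letter in `names`, generate a ref accessor for the
    // matching slot, e.g. "abc" yields .a, .b and .c.
    static if (names.length == N)
    {
        private static string genNamed()
        {
            string s;
            foreach (idx, ch; names)
                s ~= "ref T " ~ ch ~ "() { return data["
                   ~ to!string(idx) ~ "]; }\n";
            return s;
        }

        mixin(genNamed());
    }
}
```

As Bill notes, `Vector!(float, 3, "rgb")` and `Vector!(float, 3, "xyz")` then come out as distinct types for free.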
Bill Baxter <dnewsgroup billbaxter.com> writes:
```Jérôme M. Berger wrote:
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

Janice Caron wrote:
On 12/22/07, Janice Caron <caron800 googlemail.com> wrote:
It's not obvious to me why the elements should be x, y, z and w. How
does this generalize? What's the rule? Is it "Start at 'x', proceed up
the English alphabet till you get to 'z', then after that work
backwards from 'w' down to 'a'? I don't get it. Seems like an odd and
arbitrary rule, and also totally English-centric. (Well, we wouldn't
want to use the Cyrillic alphabet would we? That's foreign!)

I withdraw that last remark. It was uncalled for. I was trying to
posit that there is nothing special about the English alphabet, and
that all Unicode letters are acceptable as identifier names, but I
didn't express that very well, so if I caused any offense, I
apologise.

I still don't see how the rule generalises to N elements though, and
so my question about the rule is still open.

I guess he meant to say that .x, .y, .z and .w could be used as an
alternative (and in addition) to [] for small vectors. However, the
problem is that the choice of letters is application-dependent:
- 2D vectors often use (u, v) as well as (x, y);
- 4D vectors often use "t" for the 4th component instead of "w".

Something that could be nice:

Vector!(double[3], "abc") vec;

The "abc" string would be optional, but if given it would need to
be the same size as the vector and it would tell the compiler that
we want to be able to access the elements with vec.a, vec.b and
vec.c in addition to vec[i]. This would allow us to specify what
letters we want to be able to use for any given application.

That's a nice idea.  It would also have the side effect of making things
like colors ("rgb") and vectors ("xyz") distinct types automatically.

--bb
```
Dec 22 2007
Bill Baxter <dnewsgroup billbaxter.com> writes:
```Rioshin an'Harthen wrote:
"Lukas Pinkowski" <Lukas.Pinkowski web.de> kirjoitti viestissä
news:fkg4qg\$mm\$1 digitalmars.com...
I'm wondering why the 2D/3D/4D-vector and -matrix data types don't find
their way into the mainstream programming languages as builtin types?
The only that I know of that have builtin-support are the shader
languages
(HLSL, GLSL, Cg, ...) and I suppose the VectorC/C++-compiler. Instead the
vector- and matrix-class is coded over and over again, with different
3D-libraries using their own implementation/interface.
SIMD instructions are pretty 'old' now, but the compilers support them
only
through non-portable extensions, or handwritten assembly.

I think the programming language of the future should have those built in.

It would be nice if one of the Open Source D-compilers (GDC, LLVMDC)
would
implement such an extension to D in an experimental branch; don't know if
it's easy to generate SIMD-code with the GCC backend, but LLVM is
supposed
to make it easy, right?
Hopefully this extension could propagate after some time into the
official D
spec. Even if Walter won't touch the backend again, DMD could at least
provide a software implementation (like for 64bit integer operations).

Seeing that D seems to be quite popular for game programming and
numerics,
this would be a nice addition.

Well, as for the typenames, I guess something along

v2f, v3f, v4f, m2f, m3f, m4f: vectors and matrices based on float
v2d, v3d, v4d, m2d, m3d, m4d: vectors and matrices based on double
v2r, v3r, v4r, m2r, m3r, m4r: vectors and matrices based on real

Or vec2f instead of v2f, mat2f instead of m2f, a.s.o. Complex versions
would
be probably needed, too?

I've found myself wanting the vector and matrix types to be built into
the compiler too many times, as almost all the different libraries have
had to implement them. And usually, they're incompatible with each
other, so it would be much better if they were "standardized" by the
programming language in question, be it C, C++, D...

However, a gross of new keywords isn't a good idea. I'd prefer to only
use *two* new keywords: vector and matrix, as follows:

vector(uint[4]) vec;

with members accessible by vec.x, vec.y, vec.z and vec.w, and

matrix(cdouble[4]) mat;

with members accessible by mat.11, mat.12, mat.13, mat.14, mat.21 ...
mat.43 and mat.44.

If, e.g. a vector of three vectors of four doubles were required, it
could be defined with

vector( vector( double[4] )[3] )

(with the spacing just for clarity)

Actually, come to think of it, should we separate quaternions from
vectors, since they're actually quite different than your standard
vector? This would likely make it easier to support mathematics between
quaternions, a quaternion and a matrix, and a quaternion and a vector.
Something like

quaternion(double) quat;

with the members accessible by quat.r, quat.i, quat.j and quat.k.

Yeah, and what about octonions too!  And we'd better distinguish
homogeneous vectors and matrices from regular vectors and matrices.  And
really, points and vectors are different things, so they should have
distinct types (unless you're using the homogenous varieties, since
those can express both).  And really it's all expressed so much more
elegantly using geometric algebra so we better have multivectors and
wedge products in the language too.

The problem is there are a lot of interesting and useful mathematical
constructs out there.  Where do you stop?

It seems to me like primitive types in the language should reflect what
the silicon is actually capable of [*].  Anything else can be a library.
There is no quaternion multiplication instruction on any hardware I'm
aware of, so that's a construct that doesn't belong in the language in
my opinion.  I do see there being some value in basic primitives that
can use these SSE type instructions efficiently, and then those can be
used to build the uber-efficient quaternions and matrices etc as library
types.

--bb

[*] not sure how I feel about complex numbers on this score.  I think a
standard library implementation would have been sufficient.
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2004/n1612.pdf
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2007/n2142.html
which skimming basically looks like it says the problems with
std::complex can be fixed without making complex a built-in type.

--bb
```
Dec 22 2007
```If you wanted to go even more general, you could go beyond std.matrix
and head into the murky waters of std.tensor.

Tensors are a generalisation of the progression: scalar, vector, matrix, ....

Think of a scalar as a zero-dimensional array, a vector as a
one-dimensional array, and a matrix as a two-dimensional array. A scalar
is a tensor with rank zero; a vector is a tensor with rank one; a
matrix is a tensor with rank two. This completely generalises for
tensors of arbitrary (non-negative integer) rank.

(There is a complication though, in that you have to distinguish
between contravariant and covariant indices)

If tensor mathematics were implemented, vectors and matrices could be
trivially implemented in terms of tensors.

See http://mathworld.wolfram.com/Tensor.html

(That might be going a bit further than people are ever going to need
though! :-) )
```
Dec 22 2007
Sascha Katzner <sorry.no spam.invalid> writes:
```Lukas Pinkowski wrote:
SIMD instructions are pretty 'old' now, but the compilers support them only
through non-portable extensions, or handwritten assembly.

One reason could be that it is a performance penalty for the OS to save
the SIMD registers (XMM1, XMM2, etc.). You can verify that with the
test program attached to this posting. If you uncomment lines 27-30, the
program is ~50% slower (9.8s vs 6.7s on a Core 2 Duo E6750 on Vista).

SSE is great if you do a lot of heavy computation in your program, but
if you only do a dot product here and a cross product there, you are
better off not using SSE, because your whole program runs a lot slower
once you use SSE instructions.

LLAP,
Sascha Katzner
```
Dec 22 2007
Jascha Wetzel <firstname mainia.de> writes:
```Sascha Katzner wrote:
Lukas Pinkowski wrote:
SIMD instructions are pretty 'old' now, but the compilers support them
only through non-portable extensions, or handwritten assembly.

One reason could be that it is a performance penalty for the OS to save
the SIMD registers (XMM1, XMM2, etc.). You can verify that with the
test program attached to this posting. If you uncomment lines 27-30, the
program is ~50% slower (9.8s vs 6.7s on a Core 2 Duo E6750 on Vista).

SSE is great if you do a lot of heavy computations in your program, but
if you only do a dot product here and a cross product there you better
not use SSE, because your whole program runs a lot slower if you use SSE
instructions.

interesting! since SSE is an integral part of x86-64, i wonder whether
this is an issue there as well...
Using the slightly modified code below, i tried that using GDC on 64bit
linux and the timing was identical. That doesn't mean too much, but it
is a hint. Further testing pending...

import tango.io.Stdout;
import tango.util.time.StopWatch;

struct Vector3f {
    float x, y, z;

    static Vector3f opCall(float x, float y, float z) {
        Vector3f v;
        v.x = x; v.y = y; v.z = z;
        return v;
    }

    void opAddAssign(Vector3f v) {
        x += v.x;
        y += v.y;
        z += v.z;
    }

    Vector3f opMul(float s) {
        return Vector3f(x * s, y * s, z * s);
    }
}

int main(char[][] args) {
    StopWatch elapsed;

    Vector3f v1 = {1.0f, 2.0f, 3.0f};
    Vector3f v2 = {4.0f, 5.0f, 6.0f};

    // touch an SSE register so the OS has to save/restore XMM state
    float t = 0.0f;
    asm {
        movss	XMM1, t;
    }

    elapsed.start;
    for (int i = 0; i < 0x40FFFFFF; i++) {
        // do something nontrivial...
        v2 += v1 * 3.0f;
    }
    auto duration = elapsed.stop;
    Stdout.formatln("{:6}", duration);

    // ensure the compiler doesn't eliminate/optimize away the inner loop
    Stdout("(", v1.x, v1.y, v1.z, ") (", v2.x, v2.y, v2.z, ")").newline;
    return 0;
}
```
Dec 22 2007
Lukas Pinkowski <Lukas.Pinkowski web.de> writes:
```Sascha Katzner wrote:

Lukas Pinkowski wrote:
SIMD instructions are pretty 'old' now, but the compilers support them
only through non-portable extensions, or handwritten assembly.

One reason could be that it is a performance penalty for the OS to save
the SIMD registers (XMM1, XMM2, etc.). You can verify that with the
test program attached to this posting. If you uncomment lines 27-30, the
program is ~50% slower (9.8s vs 6.7s on a Core 2 Duo E6750 on Vista).

SSE is great if you do a lot of heavy computations in your program, but
if you only do a dot product here and a cross product there you better
not use SSE, because your whole program runs a lot slower if you use SSE
instructions.

LLAP,
Sascha Katzner

Hi, that's because the OS needs to back up both the SSE registers and the
x87 stack.
On my Athlon64 3800+ on openSUSE 10.2 (GCC 4.1.2), when I compile the
equivalent C++-code once with SSE and once with x87, they are equally fast
(see attached code). And that's even without using GCC's SIMD extension.
I'll try to make an example with SIMD operations (though I don't think
there will be a significant performance gain for this easy code, if at
all).
(Note: Using double in test.cc because float yields different results for
SSE and x87)

As I don't have GDC installed (it fails to build for me), I can't test it
with GDC. But I've read often enough that DMD doesn't generate fast FP
code.

Of course, by disabling the x87, we lose D's real type, but that's OK for
applications where 80bit precision is not required. It would be a _compiler
option_ anyway, and if you're into numerics you really should know what
you're doing!
```
Dec 22 2007
=?ISO-8859-1?Q?Pablo_Ripoll=e9s?= <in-call gmx.net> writes:
```Janice Caron Wrote:

If you wanted to go even more general, you could go beyond std.matrix
and head into the murky waters of std.tensor.

Tensors are a generalisation of the progression: scalar, vector, matrix, ....

be careful with that! a matrix is an algebraic structure very different from
vectors and tensors.  you could say that a tensor is a generalization of a
vector (a vector is a rank-1 tensor) or that a tensor is a generalization of a
scalar (a scalar is a rank-0 tensor), however a matrix is a different thing.
think of a matrix as a two-dimensional array with several algebraic rules and
operations.  matrices are a way to represent and operate on a bunch of
numbers.  matrices serve to represent scalars, vectors and rank-2 tensors.
mathematically speaking, matrices are higher level than arrays but lower than
tensors.  Tensors are "geometric" entities that are independent of the
coordinate system.  Tensors are far more abstract than plain algebraic
matrices, which are just one particular tool to represent them.  if you were
to expand a high rank tensor product, representing the corresponding slices
IMHO when implementing mathematical concepts into the language, the A&D phase
should be that given by the mathematics, its primitive conceptual design should
be retained.

Think of a scalar as a zero-dimensional array; a vector as a one
dimentional array, and a matrix as a two dimentional array. A scalar
is a tensor with rank zero; a vector is a tensor with rank one; a
matrix is a tensor with rank two. This completely generalises for
tensors of arbitrary (non negative integer) rank.

(There is a complication though, in that you have to distinguish
between contravariant and covariant indeces)

If tensor mathematics were implemented, vectors and matrices could be
trivially implemented in terms of tensors.

See http://mathworld.wolfram.com/Tensor.html

(That might be going a bit further than people are ever going to need
though! :-) )

if it happens to be well implemented, i don't think so.

cheers!
```
Dec 22 2007
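Pablo's point that "a matrix is not a tensor" can be made concrete with the transformation law (standard index notation, not from this thread): the rank-2 tensor is the geometric object, while its matrix of components depends on the chosen basis.

```latex
% Under a change of basis with matrix R, the components of a
% rank-2 tensor T transform as
T'^{ij} = R^i{}_k \, R^j{}_l \, T^{kl}
% or, in matrix notation, T' = R T R^{\mathsf{T}}.
% Two different component matrices T and T' thus represent
% the same underlying tensor.
```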
Knud Soerensen <4tuu4k002 sneakemail.com> writes:
```Hi Lukas

Lukas Pinkowski wrote:
I'm wondering why the 2D/3D/4D-vector and -matrix data types don't find
their way into the mainstream programming languages as builtin types?
The only that I know of that have builtin-support are the shader languages
(HLSL, GLSL, Cg, ...) and I suppose the VectorC/C++-compiler. Instead the
vector- and matrix-class is coded over and over again, with different
3D-libraries using their own implementation/interface.
SIMD instructions are pretty 'old' now, but the compilers support them only
through non-portable extensions, or handwritten assembly.

Take a look at the vectorization suggestion on
http://all-technology.com/eigenpolls/dwishlist/index.php?it=10
on the wishlist
http://all-technology.com/eigenpolls/dwishlist/
this would give a standard way to write array expression.

Years ago Walter expressed that something like this would be included in 2.0!

Walter, is this still your opinion?

I think the programming language of the future should have those builtin

It would be nice if one of the Open Source D-compilers (GDC, LLVMDC) would
implement such an extension to D in an experimental branch; don't know if
it's easy to generate SIMD-code with the GCC backend, but LLVM is supposed
to make it easy, right?
Hopefully this extension could propagate after some time into the official D
spec. Even if Walter won't touch the backend again, DMD could at least
provide a software implementation (like for 64bit integer operations).

Yes, an experimental compiler where the d community could experiment
with new features is also a good idea.

I think all that it needs is for someone to do it.
```
Dec 22 2007
Tomas Lindquist Olsen <tomas famolsen.dk> writes:
```Lukas Pinkowski wrote:
I'm wondering why the 2D/3D/4D-vector and -matrix data types don't find
their way into the mainstream programming languages as builtin types?
The only that I know of that have builtin-support are the shader languages
(HLSL, GLSL, Cg, ...) and I suppose the VectorC/C++-compiler. Instead the
vector- and matrix-class is coded over and over again, with different
3D-libraries using their own implementation/interface.
SIMD instructions are pretty 'old' now, but the compilers support them only
through non-portable extensions, or handwritten assembly.

I think the programming language of the future should have those builtin

It would be nice if one of the Open Source D-compilers (GDC, LLVMDC) would
implement such an extension to D in an experimental branch; don't know if
it's easy to generate SIMD-code with the GCC backend, but LLVM is supposed
to make it easy, right?
Hopefully this extension could propagate after some time into the official D
spec. Even if Walter won't touch the backend again, DMD could at least
provide a software implementation (like for 64bit integer operations).

Seeing that D seems to be quite popular for game programming and numerics,
this would be a nice addition.

Well, as for the typenames, I guess something along

v2f, v3f, v4f, m2f, m3f, m4f: vectors and matrices based on float
v2d, v3d, v4d, m2d, m3d, m4d: vectors and matrices based on double
v2r, v3r, v4r, m2r, m3r, m4r: vectors and matrices based on real

Or vec2f instead of v2f, mat2f instead of m2f, a.s.o. Complex versions would
be probably needed, too?

I could definitely be interested in experimenting with this in LLVMDC. As LLVM