## digitalmars.D.announce - Small vector and matrix proposed for phobos2 now on github

- Gareth Charnock <gareth.tpc gmail.com> Apr 25 2010
- "Robert Jacques" <sandford jhu.edu> Apr 25 2010
- Gareth Charnock <gareth.tpc gmail.com> Apr 25 2010
- bearophile <bearophileHUGS lycos.com> Apr 26 2010
- bearophile <bearophileHUGS lycos.com> Apr 26 2010
- Gareth Charnock <gareth.tpc gmail.com> Apr 26 2010
- Jérôme M. Berger <jeberger free.fr> Apr 27 2010
- Gareth Charnock <gareth.tpc gmail.com> Apr 29 2010
- #ponce <spam spam.spam> Apr 27 2010
- "Robert Jacques" <sandford jhu.edu> Apr 26 2010
- "Robert Jacques" <sandford jhu.edu> Apr 26 2010
- "Robert Jacques" <sandford jhu.edu> Apr 26 2010
- "Robert Jacques" <sandford jhu.edu> Apr 29 2010
- "Robert Jacques" <sandford jhu.edu> Apr 30 2010
- bearophile <bearophileHUGS lycos.com> Apr 26 2010
- bearophile <bearophileHUGS lycos.com> Apr 26 2010
- #ponce <spam spam.spam> Apr 26 2010
- Gareth Charnock <gareth.tpc gmail.com> Apr 26 2010
- #ponce <spam spam.spam> Apr 26 2010
- bearophile <bearophileHUGS lycos.com> Apr 26 2010
- Gareth Charnock <gareth oerc.ox.ac.uk> Apr 26 2010
- bearophile <bearophileHUGS lycos.com> Apr 26 2010

I've put the beginnings of my matrix-vector library up on GitHub (between swizzling, generalising to N dimensions and making use of static foreach, this project quickly turned from a cleanup into a rewrite). You can find it here: http://github.com/gcharnock/phoboslinalgebra

Some highlights so far:

```d
// Dimension and field are template parameters. There is no hard limit
// on N, but obviously everything is set up for N being small.
alias Vector!(float,3) vector_t;
alias Matrix!(float,3) matrix_t;

// Very natural initialisation
auto v = vector_t(1,2,3);
auto m = matrix_t(1,2,0,0,1,0,0,3,1);

// Single-element access does not require opDispatch
writeln(v.x);

// Swizzling supported via opDispatch
writeln(v.yzx);    // 2 3 1
writeln(v.xxyyzx); // 1 1 2 2 3 1

// No matter how big N, elements always have a canonical name (probably
// more important for matrices, where the letters of the alphabet run out
// faster, although I've only done this for vectors so far)
writeln(v.x2);     // 3
writeln(v.x0x1x2); // 1 2 3, zero-based indexing because D inherits from C
```

Some math operations are implemented; static foreach is used to unroll all loops. Unfortunately I ran into some problems putting the operators outside the struct. I'm not sure if this was me or the compiler, but I'll need another hacking session to work out what's going on.

```d
float s;
v*s  v/s  v*=s  v/=s  v+v  v-v  v*v
m+m  m-m  m*m  m*v
```

A question I'd like input on: what should the semantics of the Transpose() and Transposed() functions be? I can think of some alternatives.

Transpose:

1. Naive solution: copies N^^2-N scalars.
2. Set a bool and switch between two element-iteration schemes for nearly every operation (with suitable use of templates this might not be as painful as it sounds).

Transposed:

1. Makes a new matrix.
2. Creates a transposed image which is forever linked to the original matrix: A=B.Transposed() => changes to A affect B.
3. Creates a transposed image which is copy-on-write linked to the original matrix: A=B.Transposed() => changes to A do not affect B.
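The swizzling described above could be sketched with opDispatch roughly as follows. This is an illustrative reimplementation, not the library's actual code; `Vec` and `idx` are made-up names:

```d
import std.stdio;

// Illustrative sketch of opDispatch-based swizzling (hypothetical names).
struct Vec(T, size_t N)
{
    T[N] data;

    // Map a component letter to its index.
    static size_t idx(char c)
    {
        return c == 'x' ? 0 : c == 'y' ? 1 : c == 'z' ? 2 : 3;
    }

    // Any unknown member like v.yzx falls through to opDispatch; the
    // member name arrives as a compile-time string.
    auto opDispatch(string s)() const
    {
        static if (s.length == 1)
            return data[idx(s[0])];
        else
        {
            Vec!(T, s.length) r;
            foreach (i, c; s)
                r.data[i] = data[idx(c)];
            return r;
        }
    }
}

void main()
{
    Vec!(float, 3) v;
    v.data = [1, 2, 3];
    writeln(v.yzx.data);    // [2, 3, 1]
}
```

Because the member name is a compile-time string, the returned vector's dimension (`s.length`) is also known at compile time, which is what makes `v.xxyyzx` with six components possible.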

Apr 25 2010

On Sun, 25 Apr 2010 23:17:10 -0300, Gareth Charnock <gareth.tpc gmail.com> wrote: [original announcement quoted in full; snipped]

Some quick comments:

- The version I'm seeing on github doesn't seem to have all the features you're referencing (i.e. v*v). Why are some ops limited to */ and others to +-?
- Static foreach isn't a good way to do loop unrolling (at least historically).
- In terms of naming, I prefer float3 instead of vector3f.
- Return types and argument types need to be dynamically deduced: i.e. int3 * float3 should work.
- length should mean the number of vector elements. I'd recommend norm or L2 instead.
- For the matrix, `Field[D][D] raw;` will let you get out of manual indexing issues.
- I've never seen matrix swizzling and don't really know what it would be used for.
- You should be able to set the matrix layout to either C or Fortran style.
- Re Transpose: this should be an in-place swap. Given the value-type semantics of small matrices, make the copies.
- Re Transposed: this function should be named T (or have an equivalent alias) and return a reference to the current matrix cast to its C or Fortran layout counterpart.

In general, this feels like a decent start, although it's still very alpha and feature-incomplete. Keep up the good work.

Apr 25 2010

Thanks. To quickly answer this:

> The version I'm seeing on github doesn't seem to have all the features you're referencing (i.e. v*v). Why are some ops limited to */ and others to +-?

It was quite late (3am) when I typed up that email. I'm sorry if I got the ops wrong. v*v was actually rejected in favour of dot(v,v). Unless I've made another mistake, the implemented operators are:

```d
v*s  v/s  v*=s  v/=s  v+v  v-v  v+=v  v-=v
m+m  m-m  m*m  m*v
```

(where v is a vector, m a matrix, s a scalar). The reason the feature set is very limited is that I wanted to get feedback before implementing a large set of features and then having to change things. The idea was to implement just enough that it would be clear where I was going. For example, I wasn't aware that compile-time foreach was a bad way to unroll loops (perhaps I should build a mixin string?).
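The mixin-string idea could look something like this: build the unrolled loop body as a string at compile time and mix it in. A hypothetical sketch, not the library's code:

```d
import std.conv : to;

// Build the unrolled statements as a string; this runs under CTFE
// when used inside mixin().
string unrolledAdd(size_t N)
{
    string s;
    foreach (i; 0 .. N)
    {
        immutable n = to!string(i);
        s ~= "r[" ~ n ~ "] = a[" ~ n ~ "] + b[" ~ n ~ "];\n";
    }
    return s;
}

// Mix the string in, so the addition is fully unrolled with no loop.
T[N] add(T, size_t N)(const T[N] a, const T[N] b)
{
    T[N] r;
    mixin(unrolledAdd(N));
    return r;
}

void main()
{
    float[3] u = [1, 2, 3], v = [4, 5, 6];
    assert(add(u, v) == [5f, 7f, 9f]);
}
```

Whether this beats a compile-time foreach in generated code is exactly the kind of thing the benchmarks Robert suggests would settle.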

Apr 25 2010

Robert Jacques:

> Element-wise *, etc. are important operators that you need to cleanly support. Design-wise, since we can't do Matlab-style operators, I prefer */+- to be element-wise,

But keep in mind that in D the built-in array ops always require the []. Bye, bearophile
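For reference, the built-in array ops bearophile mentions need the slice brackets on both sides:

```d
void main()
{
    float[3] a = [1, 2, 3], b = [4, 5, 6], c;

    c[] = a[] + b[];   // element-wise add: [5, 7, 9]
    c[] = a[] * 2f;    // scale every element: [2, 4, 6]
    c[] += b[];        // in-place element-wise add

    // Without the brackets there is no operation:
    // c = a + b;      // error: '+' is not defined for arrays
}
```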

Apr 26 2010

Robert Jacques:

> Yes, but emulating that behavior is difficult. And array ops are still buggy and broken to the degree of not being usable. In theory you shouldn't need to define a small vector or matrix in D: fixed-size arrays are value types, and there is array-op support and free-function-as-member-function support. In practice, though, we still need it.

You are right: in practice we need it. But practice is not enough; we have to keep in mind what's right too. And the right thing is to improve the built-in array ops :-) And array ops don't replace everything you can do with a vector lib: you can define dot and cross products, small matrices, matrix operations, sparse arrays, truly rectangular arrays made of a single block of memory, etc. Bye, bearophile

Apr 26 2010

> - Element-wise *, etc. are important operators that you need to cleanly support. Design-wise, since we can't do Matlab-style operators, I prefer */+- to be element-wise, and then use dot/cross/solve for matrix ops. This seems to avoid the confusion of having some ops be matrix-style and some ops be array-style. Since D is unicode, you could also add the real inner-product operator as an alias. Also, don't forget the outer product, aka broadcasting, i.e. v.dot(v.T).
> - static foreach seems to add some overhead, although I don't know why. What you should really do is write some benchmarks and test static foreach, array ops and hand-unrolling for several vector sizes.

I'm afraid you've just set me off on a rant. I apologise, but you just told me that your web browser was Google. I think we may be heading for a culture clash.

I approach vectors and matrices from a math/physics background. Here a vector's primary and only purpose is to model a directed arrow in some space (3D space, function space — lots and lots of things can be shoehorned into the idea of a space — but always some space). Consequently, the only things that matter about a vector are physical quantities. What are physical quantities? Anything that is invariant under a coordinate transform. For example, if you have a unit stick pointing due north and you measure it with one ruler pointing due north and one due east, you conclude this:

    (1, 0)

But if you measure with two rulers pointing due NW and NE you conclude:

    (0.707, 0.707)

These are the same physical vector. You can see this by using the rotation matrix between the coordinate systems:

    (0.707 -0.707)(1)   (0.707)
    (0.707  0.707)(0) = (0.707)

So the vector coordinates (a coincidence of the placement of the rulers) are not physical, but the vector is. I don't think I have ever seen a physical equation requiring an element-by-element multiplication, but I have seen lots of dot products. There is a reason for this. The dot product of a vector with another vector gives you a scalar. A scalar is not just a number (vector components are numbers but not scalars); a scalar is a number that does not change under a coordinate transform. The height of a hill does not change when you draw a map in different ways, so that is a scalar. The element-by-element product does not yield anything other than N numbers which are an accident of the coordinate system.

Matrices are either a system of linear equations or a representation of a tensor. Again, the most natural way to combine them is the matrix product (although there are other natural ways: the direct product, the Kronecker product and index contraction). The best way to think about a matrix is not to think about its components but as an indivisible whole. Element-by-element multiplication seems equally aberrant. These things are so deeply ingrained that I didn't even consider an element-by-element product in the design, and I had to do a double take before I got what was being asked for. I'd consider any use of element-by-element multiplication as probably a sign of a hack, and possibly a sign that what you are trying to use a vector for is actually an array of numbers. Hacks are fine, but I don't think they're worth an operator. I'd want a big, explicit elementWiseProduct() to tell me something was going on.

Good user interfaces are modelled on the principle of least surprise, and I think there are two different models of least surprise here. I want to transcribe my matrix-vector equations directly into D, and

    u = Ap + Bq    (p, q, u vectors; A, B matrices)

looks better as

    u = A*p + B*q;

than as

    u = matrix_vector_product(A, p) + matrix_vector_product(B, q);

But I think programmers are more used to using vector and array interchangeably. Possibly this is a reason why a vector struct is needed rather than defining free operators over just any old array: you want the operators to have their vector meanings for vectors while retaining their field meanings when dealing with fields like float, double, real etc.

Here's another example from the C++ standard library. std::complex is a pair of numbers. Why isn't operator* an element-wise multiplication with a separate complex_product(.,.)? Because if you use std::complex you probably want to model complex numbers.

tl;dr: Google is not really your web browser, even though I know what you mean. An array is not really a vector, especially in 3D (or any other D) space.

PS: Okay, so I just had a look at the matrix and vector classes in Ogre3D and Irrlicht. Looks like they both define v*v as element-wise multiplication but m*m as matrix multiplication. That just seems even more inconsistent.
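Gareth's invariance point is easy to check numerically: the dot product of the unit stick with itself is 1 in both coordinate systems, while the components themselves differ. A small sketch with a free `dot` function over fixed-size arrays (purely illustrative):

```d
import std.math : abs, sqrt;

// Dot product as a free function over small fixed-size arrays.
T dot(T, size_t N)(const T[N] a, const T[N] b)
{
    T s = 0;
    foreach (i; 0 .. N)
        s += a[i] * b[i];
    return s;
}

void main()
{
    // The same unit stick measured in two coordinate systems:
    double[2] north = [1.0, 0.0];
    double r = 1.0 / sqrt(2.0);
    double[2] rotated = [r, r];

    // The dot product (a scalar) is coordinate-independent...
    assert(abs(dot(north, north) - 1.0) < 1e-12);
    assert(abs(dot(rotated, rotated) - 1.0) < 1e-12);

    // ...while the components are an accident of the ruler placement.
    assert(north != rotated);
}
```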

Apr 26 2010

Gareth Charnock wrote:

> PS: Okay so I just had a look at the matrix and vector classes in Ogre3D and Irrlicht. Looks like they both define v*v as element wise multiplication but m*m is matrix multiplication. That just seems even more inconsistent.

Eigen (http://eigen.tuxfamily.org/ ) uses '*' for matrix multiplication. v*v is an error (incompatible shapes). Element-wise operations can be done like this: v.cwise()*v

Jerome

Apr 27 2010

Jérôme M. Berger wrote:

> Eigen (http://eigen.tuxfamily.org/ ) uses '*' for matrix multiplication. v*v is an error (incompatible shapes). Element-wise operations can be done like this: v.cwise()*v

[...] claim is true. I can't believe I've missed this for so long (I found the Boost matrix library, Blitz++, the Matrix Template Library, FLENS, something called Armadillo, but not Eigen). I do believe all my C++ linear algebra woes are over. Eigen seems to treat vectors as 1-by-n matrices, and if you do this you get the matrix-vector product and the dot product for free, as these are all the same operation. Probably v^T*v would be the dot product. Expression templates should make these operations efficient.

Apr 29 2010

Robert Jacques wrote (earlier quoted discussion snipped):

> Actually, expression templates, though they give nice syntax, result in a lot of temporaries, which hurt performance. Check out Don's BLADE library and the associated talks and posts.

According to the documentation, Eigen expression templates do not result in temporaries, except for multiplication. And even then, you can remove the temporaries by using the .lazy() function in Eigen2 or the .noalias() function in the upcoming Eigen3: http://eigen.tuxfamily.org/dox-devel/Eigen2ToEigen3.html#LazyVsNoalias

And yes, it is very impressive :) and very comfortable to use. I had seen about the same selection as you before discovering Eigen and moving to it. The only drawback is that it currently doesn't multithread.

Jerome

Apr 30 2010

> As for the principle of least surprise, lots of libraries opt for using array-wise * and v.dot(v), including (IIRC) all of the various GPU shading/compute languages.

In GLSL, vector * vector is component-wise, but matrix * vector and matrix * matrix are not, and it feels very natural. In HLSL, * is always component-wise and you have to use mul(a, b) for the matrix product... I don't buy it. Gareth is right: element-wise multiplication for matrices is nearly useless, whereas it's handy for vectors.

Apr 27 2010

On Mon, 26 Apr 2010 03:46:47 -0300, Gareth Charnock <gareth.tpc gmail.com> wrote: [reply quoted in full; snipped]

- Element-wise *, etc. are important operators that you need to cleanly support. Design-wise, since we can't do Matlab-style operators, I prefer */+- to be element-wise, and then use dot/cross/solve for matrix ops. This seems to avoid the confusion of having some ops be matrix-style and some ops be array-style. Since D is unicode, you could also add the real inner-product operator as an alias. Also, don't forget the outer product, aka broadcasting, i.e. v.dot(v.T).
- static foreach seems to add some overhead, although I don't know why. What you should really do is write some benchmarks and test static foreach, array ops and hand-unrolling for several vector sizes.

Apr 26 2010

On Mon, 26 Apr 2010 13:16:02 -0300, bearophile <bearophileHUGS lycos.com> wrote:

> But keep in mind in D the built-in array ops always require the [].

Yes, but emulating that behavior is difficult. And array ops are still buggy and broken to the degree of not being usable. In theory you shouldn't need to define a small vector or matrix in D: fixed-size arrays are value types, and there is array-op support and free-function-as-member-function support. In practice, though, we still need it.

Apr 26 2010

On Mon, 26 Apr 2010 18:02:53 -0300, Gareth Charnock <gareth.tpc gmail.com> wrote: [the math/physics post quoted in full; snipped]

Personally, I'd expect a pure mathematician to argue that the * operator isn't equivalent to the • operator, and might even argue for the more formal ⟨,⟩ operator. :) Remember, unlike complex numbers, * isn't defined between vector types, so the library needs to decide what operation it maps to. As for the principle of least surprise, lots of libraries opt for using array-wise * and v.dot(v), including (IIRC) all of the various GPU shading/compute languages. P.S. I use Opera for my web browsing.

Apr 26 2010

On Thu, 29 Apr 2010 19:16:49 -0400, Gareth Charnock <gareth.tpc gmail.com> wrote: [the Eigen post quoted in full; snipped]

Actually, expression templates, though they give nice syntax, result in a lot of temporaries, which hurt performance. Check out Don's BLADE library and the associated talks and posts.

Apr 29 2010

On Fri, 30 Apr 2010 17:23:33 -0400, Jérôme M. Berger <jeberger free.fr> wrote (quote chain snipped):

> According to the documentation, Eigen expression templates do not result in temporaries, except for multiplication. And even then, you can remove the temporaries by using the .lazy function in Eigen2 or the .noalias() function in the upcoming Eigen3.

No: according to the documentation, Eigen expression templates do not result in any temporary vectors. They do, however, result in a lot of stack temporaries, and stack temporaries of stack temporaries. The overhead of using expression templates has generally been ignored in CS, which is why Don's BLADE library and talk were so refreshing when I first saw them. The weaknesses of expression templates were also a major reason for the inclusion of array ops as a language feature.

Apr 30 2010

Gareth Charnock:

> http://github.com/gcharnock/phoboslinalgebra

I think that a single module is better. If it seems fitting, some functionality can be moved inside other modules already present in Phobos. The module will need a good amount of unit tests.

In the code I see no point in calling functions like CTFENextToken; just call them nextToken. Note: functions, template functions, and CTFE functions start with a lower-case letter (and templates generally with an upper case).

Don't call this module and its contents matrix or vector or Vector, etc. Call them SmallVector, or SmallVec, SmallMat, etc., because this lib is not designed to be a general-purpose matrix or vector lib; it's designed for a small number of items. You can call this module std.tinyarray :-)

> alias Matrix!(float,3) matrix_t;

To define the sizes of a matrix you need two numbers:

    alias Matrix!(float, 3, 5) Matrix35;

> //Very natural initialisation
> auto m=matrix_t(1,2,0,0,1,0,0,3,1);

A 1D initialization of a matrix is not so natural :-)

> //No matter how big N, elements always have a canonical name (probably more important for matrices where the letters of the alphabet run out faster although I've only done this for vectors so far)

What's the point of giving names to matrix elements?

> 2) Set a bool. Switch between two element iteration schemes for nearly every operation (with suitable use of templates this might not be as painful as it sounds.)

The point of a transposition is sometimes to actually have the data in a different position. With cache effects the position of data can be quite important (a matrix multiplication can be 2 or 3 times faster if you transpose; this is true for larger matrices).

> Transposed: 1) makes a new matrix 2) creates a transposed image which is forever linked to the original matrix: A=B.Transpose() => Changes to A affect B

The semantics of transpose/transposed can be like in Python: transpose works in place, transposed creates a new matrix. Methods and properties like transpose have to start with a lowercase letter:

    auto A = B.transposed;

Bye, bearophile
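The Python-style pair bearophile suggests could be sketched like this for a square matrix (hypothetical code, not the library's):

```d
// transpose() mutates in place; transposed returns a new value.
struct Mat(T, size_t N)
{
    T[N][N] m;

    // In-place: swap the off-diagonal pairs (N^^2 - N scalars moved).
    void transpose()
    {
        foreach (i; 0 .. N)
            foreach (j; i + 1 .. N)
            {
                auto t = m[i][j];
                m[i][j] = m[j][i];
                m[j][i] = t;
            }
    }

    // Value copy, leaving the original untouched.
    Mat transposed() const
    {
        Mat r = this;
        r.transpose();
        return r;
    }
}

void main()
{
    Mat!(int, 2) b;
    b.m = [[1, 2], [3, 4]];
    auto a = b.transposed;   // a is a copy; changes to a don't affect b
    assert(a.m == [[1, 3], [2, 4]]);
    assert(b.m == [[1, 2], [3, 4]]);
}
```

With value semantics throughout, alternative 1) for Transposed (make a new matrix) falls out naturally.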

Apr 26 2010

You can do performance benchmarks compared to the tinyvector module here, for D1: http://www.fantascienza.net/leonardo/so/libs_d.zip It comes from a performance tuning for DMD. Bye, bearophile

Apr 26 2010

> The semantics of transpose/transposed can be like in Python: the transpose works in-place, transposed creates a new matrix.

That's what I do, with normalize/normalized too.

Apr 26 2010

bearophile wrote:

> I think that a single module is better. If it seems fitting, some functionality can be moved inside other modules already present in Phobos. The module will need a good amount of unit tests. In the code I see no point in calling functions like CTFENextToken; just call them nextToken. Note: functions, template functions, and CTFE functions start with a lower-case letter (and templates generally with an upper case).

[...] useful.

> Don't call this module and its contents matrix or vector or Vector, etc. Call them SmallVector, or SmallVec, SmallMat, etc., because this lib is not designed to be a general-purpose matrix or vector lib; it's designed for a small number of items. You can call this module std.tinyarray :-)

[...] look there for vectors and matrices. std.tinylinalg comes off as a mess of letters.

> To define the sizes of a matrix you need two numbers: alias Matrix!(float, 3, 5) Matrix35;

> A 1D initialization of a matrix is not so natural :-)

It is in emacs, which will align the statement like this:

    auto m=matrix_t(1,2,0,
                    0,1,0,
                    0,3,1);

But point taken, there need to be other ways of initialising.

> What's the point of giving names to matrix elements?

Ease of access, just in case you need it. Also stuff like:

    vector3 translate = m.a30a31a32;

Most of the metaprogramming is already done, so it might as well be reused. (Actually that's so common it could do with its own function.)

> The point of a transposition is sometimes to actually have data in a different position. With cache effects the position of data can be quite important (a matrix multiplication can be 2 or 3 times faster if you transpose; this is true for larger matrices).

Which is why using this design would increase the complexity of the code a fair bit. I'm not a fan of 2) because I'd prefer a value type. However, I want to check I'm not alone (as I appear to be on the definition of * for vectors).

> Transposed: 1) makes a new matrix 2) creates a transposed image which is forever linked to the original matrix: A=B.Transpose() => Changes to A affect B

Sorry, that should have read: A=B.Transposed() => A becomes linked to B, changes to A affect B.

> Methods and properties like transpose have to start with a lowercase: auto A = B.transposed;

I'll try to be more consistent.

Apr 26 2010

I'm not sure how your static foreach is actually static. I think static foreach is a very good idea to minimize the speed difference between debug and release versions (I know one shouldn't ship a debug version, but sometimes you have to)...

> //Single element access does not require opDispatch
> writeln(v.x);
> //Swizzling supported via opDispatch
> writeln(v.yzx); //2 3 1
> writeln(v.xxyyzx); //1 1 2 2 3 1

Can we get assignment through a swizzle? Like:

    v.xy = vector2f(1.f, 2.f);

(but one should disallow: v.xx = vector2f(1.f, 2.f);)

Also, I like the naming, even though I would probably alias it anyway. Existing solutions: vector3f, vec3f, smallVector3f, float3...
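A setter overload of opDispatch could support this, with the duplicate-component case (`v.xx = ...`) rejected by a template constraint. A hypothetical sketch (the names `Vec`, `idx` and `unique` are made up, not the library's API):

```d
struct Vec(T, size_t N)
{
    T[N] data;

    // Map a component letter to its index.
    static size_t idx(char c)
    {
        return c == 'x' ? 0 : c == 'y' ? 1 : c == 'z' ? 2 : 3;
    }

    // True when no component letter repeats; evaluated at compile time
    // inside the template constraint.
    static bool unique(string s)
    {
        foreach (i; 0 .. s.length)
            foreach (j; i + 1 .. s.length)
                if (s[i] == s[j])
                    return false;
        return true;
    }

    // v.xy = rhs; lowers to v.opDispatch!"xy"(rhs). The constraint
    // makes v.xx = rhs fail to compile.
    void opDispatch(string s, V)(V rhs)
        if (unique(s))
    {
        foreach (i, c; s)
            data[idx(c)] = rhs.data[i];
    }
}

void main()
{
    Vec!(float, 3) v;
    v.data = [0, 0, 0];
    Vec!(float, 2) w;
    w.data = [1, 2];
    v.xy = w;                        // ok
    assert(v.data == [1f, 2f, 0f]);
    // v.xx = w;                     // would not compile: duplicate 'x'
}
```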

Apr 26 2010

#ponce:

> I'm not sure how your static foreach is actually static.

It's a foreach on:

```d
template TypeNuple(T, size_t n) {
    static if (n == 0) {
        alias TypeTuple!() TypeNuple;
    } else {
        alias TypeTuple!(T, TypeNuple!(T, n-1)) TypeNuple;
    }
}
```

So with the current D it's a static foreach. Bye, bearophile
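To make the "static" part concrete: TypeNuple!(float, 3) is a compile-time sequence of three float types, so a foreach over a value tuple typed with it is expanded by the compiler rather than looped at runtime. A small sketch using the modern std.meta name (TypeTuple was later renamed AliasSeq):

```d
import std.meta : AliasSeq;

// bearophile's template, written with the modern name AliasSeq
// (TypeTuple in 2010-era Phobos).
template TypeNuple(T, size_t n)
{
    static if (n == 0)
        alias TypeNuple = AliasSeq!();
    else
        alias TypeNuple = AliasSeq!(T, TypeNuple!(T, n - 1));
}

struct V3
{
    // Three float fields declared in one go.
    TypeNuple!(float, 3) fields;

    // foreach over the parameter tuple is expanded at compile time:
    // the compiler emits one assignment per field, no runtime loop.
    this(TypeNuple!(float, 3) args)
    {
        foreach (i, a; args)
            fields[i] = a;
    }
}

void main()
{
    static assert(TypeNuple!(float, 3).length == 3);
    auto v = V3(1, 2, 3);
    assert(v.fields[0] == 1f && v.fields[2] == 3f);
}
```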

Apr 26 2010

I need a better name for ArgList. It originally made sense because I assumed I'd be using the tuple like this:

    this(ArgList argList)

But then I ended up using it in a load of static foreaches. I think it would be useful to actually be able to say "static foreach" so that readers (and writers) of the code know for sure what's going on. I seem to remember you starting a thread on that very matter.

bearophile wrote: [TypeNuple explanation quoted in full; snipped]

Apr 26 2010

Gareth Charnock:

> I think it would be useful to actually be able to say "static foreach" so that readers (and writers) of the code know for sure what's going on. I seem to remember you starting a thread on that very matter.

http://d.puremagic.com/issues/show_bug.cgi?id=4085

Bye, bearophile

Apr 26 2010