
digitalmars.D - DIP80: phobos additions

reply "Robert burner Schadek" <rburners gmail.com> writes:
Phobos is awesome; the libs of Go, Python and Rust only have 
better marketing.
As discussed at DConf, Phobos needs to become big and blow the 
rest out of the sky.

http://wiki.dlang.org/DIP80

let's get OT, please discuss
Jun 07 2015
next sibling parent "Dennis Ritchie" <dennis.ritchie mail.ru> writes:
On Sunday, 7 June 2015 at 18:27:16 UTC, Robert burner Schadek 
wrote:
 Phobos is awesome, the libs of go, python and rust only have 
 better marketing.
 As discussed on dconf, phobos needs to become big and blow the 
 rest out of the sky.

 http://wiki.dlang.org/DIP80

 lets get OT, please discuss
Yes, it's a great DIP to discuss. I'm all for the expansion of Phobos! We should also consider Hana and copy some of its very useful elements into Phobos: http://ldionne.com/hana/
Jun 07 2015
prev sibling next sibling parent reply "weaselcat" <weaselcat gmail.com> writes:
On Sunday, 7 June 2015 at 18:27:16 UTC, Robert burner Schadek 
wrote:
 Phobos is awesome, the libs of go, python and rust only have 
 better marketing.
 As discussed on dconf, phobos needs to become big and blow the 
 rest out of the sky.

 http://wiki.dlang.org/DIP80

 lets get OT, please discuss
can we discuss the downside of making phobos huge? I actively avoid adding phobos libs to my projects because it bloats my binaries and increases compile times by massive amounts.
Jun 07 2015
next sibling parent "Adam D. Ruppe" <destructionator gmail.com> writes:
On Monday, 8 June 2015 at 00:05:58 UTC, weaselcat wrote:
 I actively avoid adding phobos libs to my projects because it 
 bloats my binaries and increases compile times by massive 
 amounts.
Me too... but that's not actually a problem of a huge library. It is more a problem of an interconnected library - if you write independent modules, an import should pull in only that module with little from the others. Classes are different because of Object.factory - all of them get pulled in - but modules with functions, structs, and templates are cool; they shouldn't be a problem.
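For instance (a minimal sketch, nothing Phobos-specific): selective imports keep a module's dependency surface explicit and small, so an import pulls in only what is actually used:

```d
// Selective imports: pull in only the named symbols, keeping the
// module's dependency surface explicit and easy to audit.
import std.algorithm.iteration : map;
import std.array : array;

void main()
{
    auto doubled = [1, 2, 3].map!(x => x * 2).array;
    assert(doubled == [2, 4, 6]);
}
```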
Jun 07 2015
prev sibling parent reply "Jonathan M Davis" <jmdavisProg gmx.com> writes:
On Monday, 8 June 2015 at 00:05:58 UTC, weaselcat wrote:
 can we discuss the downside of making phobos huge?

 I actively avoid adding phobos libs to my projects because it 
 bloats my binaries and increases compile times by massive 
 amounts.
Andrei has already stated that we are definitely going to make Phobos large. We are _not_ going for the minimalistic approach, and pretty much no other language is at this point either. So, Phobos _will_ continue to grow in size. Now, as Adam points out, we can and should do a better job of making it so that different pieces of Phobos don't depend on each other if they don't need to, but it's a given at this point that Phobos is only going to get larger. And if unnecessary dependencies are kept to a minimum, then it really shouldn't hurt your compilation times (and I'm sure that we'll have further compiler improvements in that area anyway). - Jonathan M Davis
Jun 07 2015
parent "weaselcat" <weaselcat gmail.com> writes:
On Monday, 8 June 2015 at 01:39:33 UTC, Jonathan M Davis wrote:
 On Monday, 8 June 2015 at 00:05:58 UTC, weaselcat wrote:
 can we discuss the downside of making phobos huge?

 I actively avoid adding phobos libs to my projects because it 
 bloats my binaries and increases compile times by massive 
 amounts.
Andrei has already stated that we are definitely going to make Phobos large. We are _not_ going for the minimalistic approach, and pretty much no other language is at this point either. So, Phobos _will_ continue to grow in size. Now, as Adam points out, we can and should do a better job of making it so that different pieces of Phobos don't depend on each other if they don't need to, but it's a given at this point that Phobos is only going to get larger. And if unnecessary dependencies are kept to a minimum, then it really shouldn't hurt your compilation times (and I'm sure that we'll have further compiler improvements in that area anyway). - Jonathan M Davis
I wasn't arguing against a large library (in fact, I prefer it). I just think the effort should be put toward making phobos more modular before adding more stuff on top of it and making the problem worse. bye,
Jun 07 2015
prev sibling next sibling parent reply "Tofu Ninja" <emmons0 purdue.edu> writes:
On Sunday, 7 June 2015 at 18:27:16 UTC, Robert burner Schadek 
wrote:
 Phobos is awesome, the libs of go, python and rust only have 
 better marketing.
 As discussed on dconf, phobos needs to become big and blow the 
 rest out of the sky.

 http://wiki.dlang.org/DIP80

 lets get OT, please discuss
Would love some kind of color implementation in Phobos, simple linear algebra(vectors, matrices), image manipulation.
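For illustration, a minimal sketch of what such a vector type might look like (the name `Vec` and its API are hypothetical, not an actual proposal):

```d
// A hypothetical fixed-size vector with a compile-time length --
// just the shape such a type might take, not a concrete proposal.
struct Vec(T, size_t N)
{
    T[N] data;

    // Element-wise addition via operator overloading.
    Vec opBinary(string op : "+")(Vec rhs) const
    {
        Vec result;
        foreach (i; 0 .. N)
            result.data[i] = data[i] + rhs.data[i];
        return result;
    }

    // Dot product.
    T dot(Vec rhs) const
    {
        T sum = 0;
        foreach (i; 0 .. N)
            sum += data[i] * rhs.data[i];
        return sum;
    }
}

void main()
{
    auto a = Vec!(float, 3)([1f, 2f, 3f]);
    auto b = Vec!(float, 3)([4f, 5f, 6f]);
    assert((a + b).data == [5f, 7f, 9f]);
    assert(a.dot(b) == 32);
}
```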
Jun 07 2015
parent reply Rikki Cattermole <alphaglosined gmail.com> writes:
On 8/06/2015 2:50 p.m., Tofu Ninja wrote:
 On Sunday, 7 June 2015 at 18:27:16 UTC, Robert burner Schadek wrote:
 Phobos is awesome, the libs of go, python and rust only have better
 marketing.
 As discussed on dconf, phobos needs to become big and blow the rest
 out of the sky.

 http://wiki.dlang.org/DIP80

 lets get OT, please discuss
Would love some kind of color implementation in Phobos, simple linear algebra(vectors, matrices), image manipulation.
https://github.com/D-Programming-Language/phobos/pull/2845 Heyyyy Manu, how's it going?

Gl3n should be a candidate, as it is old code and good code at that: https://github.com/Dav1dde/gl3n But it seems it is no longer maintained. Can anyone contact the author about relicensing it to Boost?

Image manipulation is blocked by color.
Jun 07 2015
next sibling parent reply Manu via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 8 June 2015 at 13:08, Rikki Cattermole via Digitalmars-d
<digitalmars-d puremagic.com> wrote:
 On 8/06/2015 2:50 p.m., Tofu Ninja wrote:
 On Sunday, 7 June 2015 at 18:27:16 UTC, Robert burner Schadek wrote:
 Phobos is awesome, the libs of go, python and rust only have better
 marketing.
 As discussed on dconf, phobos needs to become big and blow the rest
 out of the sky.

 http://wiki.dlang.org/DIP80

 lets get OT, please discuss
Would love some kind of color implementation in Phobos, simple linear algebra(vectors, matrices), image manipulation.
https://github.com/D-Programming-Language/phobos/pull/2845 Heyyyy Manu, hows it going?
I've kinda just been working on it on the side for my own use. I wasn't happy with the layout, and restructured it a lot. If there's an active demand for it, I'll give it top priority...?
Jun 07 2015
next sibling parent reply Rikki Cattermole <alphaglosined gmail.com> writes:
On 8/06/2015 3:48 p.m., Manu via Digitalmars-d wrote:
 On 8 June 2015 at 13:08, Rikki Cattermole via Digitalmars-d
 <digitalmars-d puremagic.com> wrote:
 On 8/06/2015 2:50 p.m., Tofu Ninja wrote:
 On Sunday, 7 June 2015 at 18:27:16 UTC, Robert burner Schadek wrote:
 Phobos is awesome, the libs of go, python and rust only have better
 marketing.
 As discussed on dconf, phobos needs to become big and blow the rest
 out of the sky.

 http://wiki.dlang.org/DIP80

 lets get OT, please discuss
Would love some kind of color implementation in Phobos, simple linear algebra(vectors, matrices), image manipulation.
https://github.com/D-Programming-Language/phobos/pull/2845 Heyyyy Manu, hows it going?
I've kinda just been working on it on the side for my own use. I wasn't happy with the layout, and restructured it a lot. If there's an active demand for it, I'll give it top priority...?
Like I said, it's a blocker for an image library. There's no point implementing an image library for phobos with a half-baked color definition. The long-term issue is that we cannot really move anything related to GUI or game development into phobos without it. So preferably we can get it into phobos by the end of the year :)
Jun 07 2015
parent reply Manu via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 8 June 2015 at 13:54, Rikki Cattermole via Digitalmars-d
<digitalmars-d puremagic.com> wrote:
 On 8/06/2015 3:48 p.m., Manu via Digitalmars-d wrote:
 On 8 June 2015 at 13:08, Rikki Cattermole via Digitalmars-d
 <digitalmars-d puremagic.com> wrote:
 On 8/06/2015 2:50 p.m., Tofu Ninja wrote:
 On Sunday, 7 June 2015 at 18:27:16 UTC, Robert burner Schadek wrote:
 Phobos is awesome, the libs of go, python and rust only have better
 marketing.
 As discussed on dconf, phobos needs to become big and blow the rest
 out of the sky.

 http://wiki.dlang.org/DIP80

 lets get OT, please discuss
Would love some kind of color implementation in Phobos, simple linear algebra(vectors, matrices), image manipulation.
https://github.com/D-Programming-Language/phobos/pull/2845 Heyyyy Manu, hows it going?
I've kinda just been working on it on the side for my own use. I wasn't happy with the layout, and restructured it a lot. If there's an active demand for it, I'll give it top priority...?
Like I said its a blocker for an image library. There's no point implementing an image library with a half baked color definition meant for phobos.
Yeah, that's fine. Is there an initiative for a phobos image library? I have said before that I'm dubious about its worth; the trouble with an image library is that it will be almost impossible to decide on an API, whereas a colour is fairly unambiguous in terms of design merits.
 The long term issue is that we cannot really move forward with anything
 related to GUI or game development into phobos without it.
 So preferably we can get it into phobos by the end of the year :)
Yeah, I agree it's a sore missing point, which is why I started working on it ;) ... I'll make it high priority. I recently finished up various work on premake5, so I can work on this now.
Jun 07 2015
next sibling parent Rikki Cattermole <alphaglosined gmail.com> writes:
On 8/06/2015 4:05 p.m., Manu via Digitalmars-d wrote:
 On 8 June 2015 at 13:54, Rikki Cattermole via Digitalmars-d
 <digitalmars-d puremagic.com> wrote:
 On 8/06/2015 3:48 p.m., Manu via Digitalmars-d wrote:
 On 8 June 2015 at 13:08, Rikki Cattermole via Digitalmars-d
 <digitalmars-d puremagic.com> wrote:
 On 8/06/2015 2:50 p.m., Tofu Ninja wrote:
 On Sunday, 7 June 2015 at 18:27:16 UTC, Robert burner Schadek wrote:
 Phobos is awesome, the libs of go, python and rust only have better
 marketing.
 As discussed on dconf, phobos needs to become big and blow the rest
 out of the sky.

 http://wiki.dlang.org/DIP80

 lets get OT, please discuss
Would love some kind of color implementation in Phobos, simple linear algebra(vectors, matrices), image manipulation.
https://github.com/D-Programming-Language/phobos/pull/2845 Heyyyy Manu, hows it going?
I've kinda just been working on it on the side for my own use. I wasn't happy with the layout, and restructured it a lot. If there's an active demand for it, I'll give it top priority...?
Like I said its a blocker for an image library. There's no point implementing an image library with a half baked color definition meant for phobos.
Yeah, that's fine. Is there an initiative for a phobos image library? I have said before that I'm dubious about it's worth; the trouble with an image library is that it will be almost impossible to decide on API, whereas a colour is fairly unambiguous in terms of design merits.
I agree that it is. But we will need to move past this for the betterment of our ecosystem. Without it we will suffer too much. As it is, Devisualization.Image will have a new interface once std.image.color is pulled. So it'll be a contender for std.image.
 The long term issue is that we cannot really move forward with anything
 related to GUI or game development into phobos without it.
 So preferably we can get it into phobos by the end of the year :)
Yeah, I agree it's a sore missing point, which is why I started working on it ;) ... I'll make it high priority. I recently finished up various work on premake5, so I can work on this now.
Sounds good, I was getting worried that you had stopped altogether.
Jun 07 2015
prev sibling parent reply "Tofu Ninja" <emmons0 purdue.edu> writes:
On Monday, 8 June 2015 at 04:05:23 UTC, Manu wrote:
 Yeah, that's fine. Is there an initiative for a phobos image 
 library?
 I have said before that I'm dubious about it's worth; the 
 trouble with
 an image library is that it will be almost impossible to decide 
 on
 API, whereas a colour is fairly unambiguous in terms of design 
 merits.
Personally I would just be happy with a D wrapper for something like FreeImage being included.
Jun 07 2015
parent reply "Mike" <none none.com> writes:
On Monday, 8 June 2015 at 04:21:45 UTC, Tofu Ninja wrote:
 Personally I would just be happy with a d wrapper for something 
 like freeimage being included.
That's what Deimos is for (https://github.com/D-Programming-Deimos/FreeImage). Mike
Jun 07 2015
next sibling parent reply "Tofu Ninja" <emmons0 purdue.edu> writes:
On Monday, 8 June 2015 at 04:22:56 UTC, Mike wrote:
 On Monday, 8 June 2015 at 04:21:45 UTC, Tofu Ninja wrote:
 Personally I would just be happy with a d wrapper for 
 something like freeimage being included.
That's what Deimos is for (https://github.com/D-Programming-Deimos/FreeImage). Mike
I guess I meant to use it as a base for image loading and storing, and to build some kind of D image lib on top of it. I see no point in us trying to implement all the various image formats ourselves if we make an image lib for phobos.
Jun 07 2015
parent Rikki Cattermole <alphaglosined gmail.com> writes:
On 8/06/2015 4:34 p.m., Tofu Ninja wrote:
 On Monday, 8 June 2015 at 04:22:56 UTC, Mike wrote:
 On Monday, 8 June 2015 at 04:21:45 UTC, Tofu Ninja wrote:
 Personally I would just be happy with a d wrapper for something like
 freeimage being included.
That's what Deimos is for (https://github.com/D-Programming-Deimos/FreeImage). Mike
I guess I meant to use it as a base for image loading and storing and to build some kind of d image lib on top of it. I see no point in us trying to implement all the various image formats if we try to make a image lib for phobos.
At least, my idea behind Devisualization.Image was mostly this: the implementation can easily be swapped out for another, but the actual interface is well made. So while a Phobos image library might have a few formats such as PNG, it probably wouldn't include a vast array of them. It's then just a matter of allowing 3rd-party libraries to add them transparently.
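A hedged sketch of that registration idea in D (`ImageDecoder` and `registerDecoder` are illustrative names, not Devisualization.Image's actual API): the interface stays fixed, formats register themselves, and implementations can be swapped freely:

```d
// Illustrative pluggable-decoder sketch; names are hypothetical.
interface ImageDecoder
{
    bool canDecode(const(ubyte)[] header);
    // A real decoder would also return a decoded image type; elided here.
}

ImageDecoder[] decoders;

// A 3rd-party library could call this to add its format transparently.
void registerDecoder(ImageDecoder d)
{
    decoders ~= d;
}

class PngDecoder : ImageDecoder
{
    bool canDecode(const(ubyte)[] header)
    {
        // PNG files start with a fixed 8-byte signature.
        immutable ubyte[8] sig =
            [0x89, 0x50, 0x4E, 0x47, 0x0D, 0x0A, 0x1A, 0x0A];
        return header.length >= 8 && header[0 .. 8] == sig;
    }
}

void main()
{
    registerDecoder(new PngDecoder);
    immutable ubyte[8] pngHeader =
        [0x89, 0x50, 0x4E, 0x47, 0x0D, 0x0A, 0x1A, 0x0A];
    assert(decoders[0].canDecode(pngHeader));
}
```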
Jun 07 2015
prev sibling parent reply "Jonathan M Davis" <jmdavisProg gmx.com> writes:
On Monday, 8 June 2015 at 04:22:56 UTC, Mike wrote:
 On Monday, 8 June 2015 at 04:21:45 UTC, Tofu Ninja wrote:
 Personally I would just be happy with a d wrapper for 
 something like freeimage being included.
That's what Deimos is for (https://github.com/D-Programming-Deimos/FreeImage).
Yeah. After the problems with linking in curl, I think that we more or less decided that including stuff in Phobos which has to link against 3rd party libraries isn't a great idea. Maybe we'll end up doing it again, but in general, it just makes more sense for those to be done as 3rd party libraries and put in code.dlang.org. - Jonathan M Davis
Jun 07 2015
parent "Tofu Ninja" <emmons0 purdue.edu> writes:
On Monday, 8 June 2015 at 04:34:56 UTC, Jonathan M Davis wrote:
 On Monday, 8 June 2015 at 04:22:56 UTC, Mike wrote:
 On Monday, 8 June 2015 at 04:21:45 UTC, Tofu Ninja wrote:
 Personally I would just be happy with a d wrapper for 
 something like freeimage being included.
That's what Deimos is for (https://github.com/D-Programming-Deimos/FreeImage).
Yeah. After the problems with linking in curl, I think that we more or less decided that including stuff in Phobos which has to link against 3rd party libraries isn't a great idea. Maybe we'll end up doing it again, but in general, it just makes more sense for those to be done as 3rd party libraries and put in code.dlang.org. - Jonathan M Davis
I think that is pretty sad if that is actually the current stance. There are so many great libraries that Phobos could benefit from but doesn't, because of a packaging issue.
Jun 08 2015
prev sibling parent "Mike" <none none.com> writes:
On Monday, 8 June 2015 at 03:48:14 UTC, Manu wrote:

 I've kinda just been working on it on the side for my own use.
 I wasn't happy with the layout, and restructured it a lot.
 If there's an active demand for it, I'll give it top 
 priority...?
I'm interested in this library as well. Mike
Jun 07 2015
prev sibling parent "Mike" <none none.com> writes:
On Monday, 8 June 2015 at 03:08:46 UTC, Rikki Cattermole wrote:

 Gl3n should be a candidate as it is old code and good one at 
 that.
 https://github.com/Dav1dde/gl3n
 But it seems like it is no longer maintained.
Looks like it's been getting a couple commits monthly, so I think it's being maintained.
Jun 08 2015
prev sibling next sibling parent reply "weaselcat" <weaselcat gmail.com> writes:
On Sunday, 7 June 2015 at 18:27:16 UTC, Robert burner Schadek 
wrote:
 Phobos is awesome, the libs of go, python and rust only have 
 better marketing.
 As discussed on dconf, phobos needs to become big and blow the 
 rest out of the sky.

 http://wiki.dlang.org/DIP80

 lets get OT, please discuss
I think a std.bindings or something similar for ubiquitous C libraries would go a long way - _quality_ (not just a wrapper) SDL, OpenGL, etc. bindings. D is very attractive to game developers; I think with a little push it would get a lot of traction from this.
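As an illustrative sketch of the raw-binding-vs-quality-binding difference (using `strlen` from core.stdc.string, which D already ships, as a stand-in for an SDL/OpenGL symbol; `cStringLength` is a made-up wrapper name): a quality binding layers a safe, idiomatic D API on top of the unsafe C signature:

```d
// Raw binding: the unsafe C signature, as shipped in druntime.
import core.stdc.string : strlen;

// Quality wrapper: accepts a D string, handles the NUL terminator,
// and never exposes a raw pointer to the caller.
size_t cStringLength(string s) @trusted
{
    import std.string : toStringz;
    return strlen(s.toStringz);
}

void main()
{
    assert(cStringLength("hello") == 5);
    assert(cStringLength("") == 0);
}
```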
Jun 07 2015
parent reply Manu via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 8 June 2015 at 13:15, weaselcat via Digitalmars-d
<digitalmars-d puremagic.com> wrote:
 On Sunday, 7 June 2015 at 18:27:16 UTC, Robert burner Schadek wrote:
 Phobos is awesome, the libs of go, python and rust only have better
 marketing.
 As discussed on dconf, phobos needs to become big and blow the rest out of
 the sky.

 http://wiki.dlang.org/DIP80

 lets get OT, please discuss
I think a std.bindings or something similar for ubiquitous C libraries would go a long way - _quality_(not just a wrapper) SDL, OpenGL, etc bindings. D is very attractive to game developers, I think with a little push it would get a lot of traction from this.
I've been humoring the idea of porting my engine to D. It's about 15 years of development, better/cleaner than most proprietary engines I've used at game studios. I wonder if there would be interest in this? Problem is, I need all the cross compilers to exist before I pull the plug on the C code... a game engine is no good if it's not portable to all the consoles under the sun. That said, I think it would be a good case-study to get the cross compilers working against.
Jun 07 2015
next sibling parent reply Rikki Cattermole <alphaglosined gmail.com> writes:
On 8/06/2015 3:53 p.m., Manu via Digitalmars-d wrote:
 On 8 June 2015 at 13:15, weaselcat via Digitalmars-d
 <digitalmars-d puremagic.com> wrote:
 On Sunday, 7 June 2015 at 18:27:16 UTC, Robert burner Schadek wrote:
 Phobos is awesome, the libs of go, python and rust only have better
 marketing.
 As discussed on dconf, phobos needs to become big and blow the rest out of
 the sky.

 http://wiki.dlang.org/DIP80

 lets get OT, please discuss
I think a std.bindings or something similar for ubiquitous C libraries would go a long way - _quality_(not just a wrapper) SDL, OpenGL, etc bindings. D is very attractive to game developers, I think with a little push it would get a lot of traction from this.
I've been humoring the idea of porting my engine to D. It's about 15 years of development, better/cleaner than most proprietary engines I've used at game studios. I wonder if there would be interest in this? Problem is, I need all the cross compilers to exist before I pull the plug on the C code... a game engine is no good if it's not portable to all the consoles under the sun. That said, I think it would be a good case-study to get the cross compilers working against.
I'm definitely interested. Imagine getting something like that into phobos! Would be utterly amazing for us. Or at least parts of it, once D-ified. Although it might be worth doing tests using e.g. ldc to see how many platforms you can actually get working. Then perhaps an acceptance criterion before you port it?
Jun 07 2015
parent reply Manu via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 8 June 2015 at 13:59, Rikki Cattermole via Digitalmars-d
<digitalmars-d puremagic.com> wrote:
 On 8/06/2015 3:53 p.m., Manu via Digitalmars-d wrote:
 On 8 June 2015 at 13:15, weaselcat via Digitalmars-d
 <digitalmars-d puremagic.com> wrote:
 On Sunday, 7 June 2015 at 18:27:16 UTC, Robert burner Schadek wrote:
 Phobos is awesome, the libs of go, python and rust only have better
 marketing.
 As discussed on dconf, phobos needs to become big and blow the rest out
 of
 the sky.

 http://wiki.dlang.org/DIP80

 lets get OT, please discuss
I think a std.bindings or something similar for ubiquitous C libraries would go a long way - _quality_(not just a wrapper) SDL, OpenGL, etc bindings. D is very attractive to game developers, I think with a little push it would get a lot of traction from this.
I've been humoring the idea of porting my engine to D. It's about 15 years of development, better/cleaner than most proprietary engines I've used at game studios. I wonder if there would be interest in this? Problem is, I need all the cross compilers to exist before I pull the plug on the C code... a game engine is no good if it's not portable to all the consoles under the sun. That said, I think it would be a good case-study to get the cross compilers working against.
I'm definitely interested. Imagine getting something like that into phobos! Would be utterly amazing for us. Or atleast parts of it, once D-ified.
I can't really see a place for many parts in phobos... large parts of it are hardware/platform abstraction; it would depend on many system library bindings being present in phobos.
 Although might be worth doing tests using e.g. ldc to see how many platforms
 you can actually get working.
 Then perhaps an acceptance criteria before you port it?
Yeah, it's a lot of work to do unit tests for parallel runtime systems that depend almost exclusively on user input or large bodies of external data... and where many of the outputs don't naturally feed back for analysis (render output, audio output). I can see a unit test framework being more code than most parts of the engine ;) .. not that it would be bad (it would be awesome!), I just can't imagine a simple/acceptable design. The thing I'm most happy about with Fuji is how relatively minimal it is (considering its scope and capability).
Jun 07 2015
parent Rikki Cattermole <alphaglosined gmail.com> writes:
On 8/06/2015 4:12 p.m., Manu via Digitalmars-d wrote:
 On 8 June 2015 at 13:59, Rikki Cattermole via Digitalmars-d
 <digitalmars-d puremagic.com> wrote:
 On 8/06/2015 3:53 p.m., Manu via Digitalmars-d wrote:
 On 8 June 2015 at 13:15, weaselcat via Digitalmars-d
 <digitalmars-d puremagic.com> wrote:
 On Sunday, 7 June 2015 at 18:27:16 UTC, Robert burner Schadek wrote:
 Phobos is awesome, the libs of go, python and rust only have better
 marketing.
 As discussed on dconf, phobos needs to become big and blow the rest out
 of
 the sky.

 http://wiki.dlang.org/DIP80

 lets get OT, please discuss
I think a std.bindings or something similar for ubiquitous C libraries would go a long way - _quality_(not just a wrapper) SDL, OpenGL, etc bindings. D is very attractive to game developers, I think with a little push it would get a lot of traction from this.
I've been humoring the idea of porting my engine to D. It's about 15 years of development, better/cleaner than most proprietary engines I've used at game studios. I wonder if there would be interest in this? Problem is, I need all the cross compilers to exist before I pull the plug on the C code... a game engine is no good if it's not portable to all the consoles under the sun. That said, I think it would be a good case-study to get the cross compilers working against.
I'm definitely interested. Imagine getting something like that into phobos! Would be utterly amazing for us. Or atleast parts of it, once D-ified.
I can't really see a place for many parts in phobos... large parts of it are hardware/platform abstraction; would depend on many system library bindings present in phobos.
 Although might be worth doing tests using e.g. ldc to see how many platforms
 you can actually get working.
 Then perhaps an acceptance criteria before you port it?
Yeah, it's a lot of work to do unit tests for parallel runtime systems that depend almost exclusively on user input or large bodies of external data... and where many of the outputs don't naturally feedback for analysis (render output, audio output). I can see a unit test framework being more code than most parts of the engine ;) .. not that it would be bad (it would be awesome!), I just can't imagine a simple/acceptable design. The thing I'm most happy about with Fuji is how relatively minimal it is (considering its scope and capability).
They would have to be manual tests - e.g. "throws exceptions happily and uses threads" kind of thing, but where you load it up and run it. It could help the ldc and gdc guys know what is still missing for this use case.
Jun 07 2015
prev sibling next sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 6/7/2015 8:53 PM, Manu via Digitalmars-d wrote:
 I've been humoring the idea of porting my engine to D. It's about 15
 years of development, better/cleaner than most proprietary engines
 I've used at game studios.
 I wonder if there would be interest in this? Problem is, I need all
 the cross compilers to exist before I pull the plug on the C code... a
 game engine is no good if it's not portable to all the consoles under
 the sun. That said, I think it would be a good case-study to get the
 cross compilers working against.
It's a chicken-and-egg thing. Somebody's got to start and not wait for the others.
Jun 08 2015
prev sibling next sibling parent Jacob Carlborg <doob me.com> writes:
On 2015-06-08 05:53, Manu via Digitalmars-d wrote:

 I've been humoring the idea of porting my engine to D. It's about 15
 years of development, better/cleaner than most proprietary engines
 I've used at game studios.
Perhaps you could try using magicport. -- /Jacob Carlborg
Jun 08 2015
prev sibling parent reply "Joakim" <dlang joakim.fea.st> writes:
On Monday, 8 June 2015 at 03:53:52 UTC, Manu wrote:
 I've been humoring the idea of porting my engine to D. It's 
 about 15
 years of development, better/cleaner than most proprietary 
 engines
 I've used at game studios.
 I wonder if there would be interest in this? Problem is, I need 
 all
 the cross compilers to exist before I pull the plug on the C 
 code... a
 game engine is no good if it's not portable to all the consoles 
 under
 the sun. That said, I think it would be a good case-study to 
 get the
 cross compilers working against.
What cross-compilers are you waiting for? Nobody is working on XBone or PS4 as far as I know, but Dan's work on iOS seems pretty far along, if you want to try that out.
Jun 09 2015
parent Manu via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 9 June 2015 at 17:32, Joakim via Digitalmars-d
<digitalmars-d puremagic.com> wrote:
 On Monday, 8 June 2015 at 03:53:52 UTC, Manu wrote:
 I've been humoring the idea of porting my engine to D. It's about 15
 years of development, better/cleaner than most proprietary engines
 I've used at game studios.
 I wonder if there would be interest in this? Problem is, I need all
 the cross compilers to exist before I pull the plug on the C code... a
 game engine is no good if it's not portable to all the consoles under
 the sun. That said, I think it would be a good case-study to get the
 cross compilers working against.
What cross-compilers are you waiting for? Nobody is working on XBone or PS4 as far as I know, but Dan's work on iOS seems pretty far along, if you want to try that out.
XBone works. PS4 is probably easy or already working. Android, iOS are critical. Nintendo platforms also exist. I would hope we'll see Emscripten and NaCl at some point; I could use them at work right now. The phones do appear to be moving recently, which is really encouraging.
Jun 09 2015
prev sibling next sibling parent reply "ezneh" <petitv.isat gmail.com> writes:
On Sunday, 7 June 2015 at 18:27:16 UTC, Robert burner Schadek 
wrote:
 Phobos is awesome, the libs of go, python and rust only have 
 better marketing.
 As discussed on dconf, phobos needs to become big and blow the 
 rest out of the sky.

 http://wiki.dlang.org/DIP80

 lets get OT, please discuss
IMHO, Phobos could include things like this as standard:

- OAuth (1 & 2); at least it would be useful for projects like vibe.d
- Create / read QR codes, maybe? It seems we see more and more QR codes here and there, so it could potentially be worth it
- Better / full OS bindings (winapi, x11, etc.), but it would (sadly) require a very large amount of work
- Plus what has been said already.

I guess we can try to think about integrating a large number of things that are widely used and are considered "standards".
Jun 08 2015
parent reply Nick Sabalausky <SeeWebsiteToContactMe semitwist.com> writes:
On 06/08/2015 03:55 AM, ezneh wrote:
 - Create / read QR codes, maybe ? It seems we see more and more QR Codes
 here and there, so it could potentially be worth it
I see them everywhere, but does anyone ever actually use them? Usually it's just an obvious link to some company's marketing/advertising. It's basically just like the old CueCat, if anyone remembers it: <https://en.wikipedia.org/wiki/CueCat>

The only time I've ever seen *anyone* actually using a QR code is when *I* use a "display QR link for this page" FF plugin to send the webpage I'm looking at to my phone. Maybe I'm just not seeing it, but I suspect QR is more something that companies *want* people to care about, rather than something anyone actually uses.
Jun 13 2015
next sibling parent ketmar <ketmar ketmar.no-ip.org> writes:
On Sat, 13 Jun 2015 11:46:41 -0400, Nick Sabalausky wrote:

 Maybe I'm just not seeing it, but I suspect QR is more someone that
 companies *want* people to care about, rather than something anyone
 actually uses.
same for me.
Jun 13 2015
prev sibling parent reply Steven Schveighoffer <schveiguy yahoo.com> writes:
On 6/13/15 11:46 AM, Nick Sabalausky wrote:
 On 06/08/2015 03:55 AM, ezneh wrote:
 - Create / read QR codes, maybe ? It seems we see more and more QR Codes
 here and there, so it could potentially be worth it
I see them everywhere, but does anyone ever actually use them? Usually it's just an obvious link to some company's marketing/advertising. It's basically just like the old CueCat, if anyone remembers it: <https://en.wikipedia.org/wiki/CueCat> Only time I've ever seen *anyone* actually using a QR code is when *I* use a "display QR link for this page" FF plugin to send the webpage I'm looking at to my phone. Maybe I'm just not seeing it, but I suspect QR is more someone that companies *want* people to care about, rather than something anyone actually uses.
A rather cool usage of QR code I saw was a sticker on a device that was a link to the PDF of the manual. -Steve
Jun 13 2015
next sibling parent ketmar <ketmar ketmar.no-ip.org> writes:
On Sat, 13 Jun 2015 21:57:42 -0400, Steven Schveighoffer wrote:

 A rather cool usage of QR code I saw was a sticker on a device that was
 a link to the PDF of the manual.
it's k001, but i'll take a printed URL for it any time. the good old URL that i can read with my eyes.
Jun 13 2015
prev sibling parent reply "Joakim" <dlang joakim.fea.st> writes:
On Sunday, 14 June 2015 at 01:57:37 UTC, Steven Schveighoffer 
wrote:
 On 6/13/15 11:46 AM, Nick Sabalausky wrote:
 On 06/08/2015 03:55 AM, ezneh wrote:
 - Create / read QR codes, maybe ? It seems we see more and 
 more QR Codes
 here and there, so it could potentially be worth it
I see them everywhere, but does anyone ever actually use them? Usually it's just an obvious link to some company's marketing/advertising. It's basically just like the old CueCat, if anyone remembers it: <https://en.wikipedia.org/wiki/CueCat> The only time I've ever seen *anyone* actually using a QR code is when *I* use a "display QR link for this page" FF plugin to send the webpage I'm looking at to my phone. Maybe I'm just not seeing it, but I suspect QR is more something that companies *want* people to care about, rather than something anyone actually uses.
A rather cool usage of QR code I saw was a sticker on a device that was a link to the PDF of the manual.
Then there's always this: http://www.theverge.com/2015/6/19/8811425/heinz-ketchup-qr-code-porn-site-fundorado Not the fault of the QR code of course, just an expired domain name, but still funny. :)
Jun 19 2015
parent Steven Schveighoffer <schveiguy yahoo.com> writes:
On 6/19/15 9:50 PM, Joakim wrote:

 Then there's always this:

 http://www.theverge.com/2015/6/19/8811425/heinz-ketchup-qr-code-porn-site-fundorado


 Not the fault of the QR code of course, just an expired domain name, but
 still funny. :)
Oh man. Note to marketing department -- all QR codes must point to ourcompany.com, you can redirect from there!!! -Steve
Jun 21 2015
prev sibling next sibling parent "ponce" <contact gam3sfrommars.fr> writes:
On Sunday, 7 June 2015 at 18:27:16 UTC, Robert burner Schadek 
wrote:
 Phobos is awesome, the libs of go, python and rust only have 
 better marketing.
 As discussed on dconf, phobos needs to become big and blow the 
 rest out of the sky.

 http://wiki.dlang.org/DIP80

 lets get OT, please discuss
What I'd like in phobos:
- OS bindings (more complete win32, Cocoa etc)
- DerelictUtil
- allocators

That's about it.
Jun 08 2015
prev sibling next sibling parent "Per Nordlöw" <per.nordlow gmail.com> writes:
On Sunday, 7 June 2015 at 18:27:16 UTC, Robert burner Schadek 
wrote:
 http://wiki.dlang.org/DIP80

 lets get OT, please discuss
Automatic randomizer for builtins, ranges, etc. Used to generate data for tests. Here's a start: https://github.com/nordlow/justd/blob/master/random_ex.d
Jun 08 2015
prev sibling next sibling parent reply "Ilya Yaroshenko" <ilyayaroshenko gmail.com> writes:
On Sunday, 7 June 2015 at 18:27:16 UTC, Robert burner Schadek 
wrote:
 Phobos is awesome, the libs of go, python and rust only have 
 better marketing.
 As discussed on dconf, phobos needs to become big and blow the 
 rest out of the sky.

 http://wiki.dlang.org/DIP80

 lets get OT, please discuss
There are https://github.com/9il/simple_matrix and https://github.com/9il/cblas . I will try to rework them for Phobos. Any ideas and suggestions?

Some notes about portability:
1. OS X has the Accelerate framework built in.
2. Linux has BLAS by default, or it can easily be installed. However, the default BLAS is very slow; OpenBLAS is preferred.
3. It looks like there is no simple way to get BLAS support on Windows. Should we ship a BLAS library with DMD for Windows, and maybe Linux?

Regards,
Ilya
Jun 08 2015
next sibling parent "Ilya Yaroshenko" <ilyayaroshenko gmail.com> writes:
On Tuesday, 9 June 2015 at 03:26:25 UTC, Ilya Yaroshenko wrote:
 On Sunday, 7 June 2015 at 18:27:16 UTC, Robert burner Schadek 
 wrote:
 Phobos is awesome, the libs of go, python and rust only have 
 better marketing.
 As discussed on dconf, phobos needs to become big and blow the 
 rest out of the sky.

 http://wiki.dlang.org/DIP80

 lets get OT, please discuss
There are https://github.com/9il/simple_matrix and https://github.com/9il/cblas . I will try to rework them for Phobos. Any ideas and suggestions? Some notes about portability: 1. OS X has the Accelerate framework built in. 2. Linux has BLAS by default, or it can easily be installed. However, the default BLAS is very slow; OpenBLAS is preferred. 3. It looks like there is no simple way to get BLAS support on Windows. Should we ship a BLAS library with DMD for Windows, and maybe Linux? Regards, Ilya
... probably std.container.Array is a good template to start from.
Jun 08 2015
prev sibling next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 6/8/15 8:26 PM, Ilya Yaroshenko wrote:
 On Sunday, 7 June 2015 at 18:27:16 UTC, Robert burner Schadek wrote:
 Phobos is awesome, the libs of go, python and rust only have better
 marketing.
 As discussed on dconf, phobos needs to become big and blow the rest
 out of the sky.

 http://wiki.dlang.org/DIP80

 lets get OT, please discuss
There are https://github.com/9il/simple_matrix and https://github.com/9il/cblas . I will try to rework them for Phobos. Any ideas and suggestions? Some notes about portability: 1. OS X has the Accelerate framework built in. 2. Linux has BLAS by default, or it can easily be installed. However, the default BLAS is very slow; OpenBLAS is preferred. 3. It looks like there is no simple way to get BLAS support on Windows. Should we ship a BLAS library with DMD for Windows, and maybe Linux?
I think licensing matters would make this difficult. What I do think we can do is:

(a) Provide standard data layouts in std.array for the typical shapes supported by linear algebra libs: row major and column major, along with striding primitives.

(b) Provide signatures for C and Fortran libraries so people who have them can use them easily with D.

(c) Provide high-level wrappers on top of those functions.


Andrei
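Point (a) is mostly about strides. A minimal sketch of what such layout primitives reduce to, written here in C for illustration (the function name and the explicit stride parameters are assumptions for this sketch, not a proposed Phobos API):

```c
#include <assert.h>
#include <stddef.h>

/* One flat buffer can be viewed row major or column major purely by
   choosing strides.  For an M x N matrix:
     row major:    element (r, c) lives at offset r * N + c
     column major: element (r, c) lives at offset r + c * M
   Both are the same formula with different stride pairs. */
static size_t offset2d(size_t row, size_t col,
                       size_t row_stride, size_t col_stride)
{
    return row * row_stride + col * col_stride;
}
```

With a 3x4 matrix, `offset2d(1, 2, 4, 1)` gives the row-major offset 6 and `offset2d(1, 2, 1, 3)` the column-major offset 7, so one generic view type can cover both layouts.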
Jun 08 2015
parent reply "John Colvin" <john.loughran.colvin gmail.com> writes:
On Tuesday, 9 June 2015 at 06:59:07 UTC, Andrei Alexandrescu 
wrote:
 On 6/8/15 8:26 PM, Ilya Yaroshenko wrote:
 On Sunday, 7 June 2015 at 18:27:16 UTC, Robert burner Schadek 
 wrote:
 Phobos is awesome, the libs of go, python and rust only have 
 better
 marketing.
 As discussed on dconf, phobos needs to become big and blow 
 the rest
 out of the sky.

 http://wiki.dlang.org/DIP80

 lets get OT, please discuss
There are https://github.com/9il/simple_matrix and https://github.com/9il/cblas . I will try to rework them for Phobos. Any ideas and suggestions? Some notes about portability: 1. OS X has the Accelerate framework built in. 2. Linux has BLAS by default, or it can easily be installed. However, the default BLAS is very slow; OpenBLAS is preferred. 3. It looks like there is no simple way to get BLAS support on Windows. Should we ship a BLAS library with DMD for Windows, and maybe Linux?
I think licensing matters would make this difficult. What I do think we can do is: (a) Provide standard data layouts in std.array for the typical shapes supported by linear algebra libs: row major and column major, along with striding primitives.
I don't think this is quite the right approach. Multidimensional arrays and matrices are about accessing and iterating over data, not about the data structures themselves. The standard layouts are common special cases.
 (b) Provide signatures for C and Fortran libraries so people 
 who have them can use them easily with D.

 (c) Provide high-level wrappers on top of those functions.


 Andrei
That is how e.g. numpy works and it's OK, but D can do better. Ilya, I'm very interested in discussing this further with you. I have a reasonable idea and implementation of how I would want the generic n-dimensional types in D to work, but you seem to have more experience with BLAS and LAPACK than me* and of course interfacing with them is critical. *I rarely interact with them directly.
Jun 09 2015
next sibling parent reply "Ilya Yaroshenko" <ilyayaroshenko gmail.com> writes:
On Tuesday, 9 June 2015 at 08:50:16 UTC, John Colvin wrote:
 On Tuesday, 9 June 2015 at 06:59:07 UTC, Andrei Alexandrescu 
 wrote:
 On 6/8/15 8:26 PM, Ilya Yaroshenko wrote:
 On Sunday, 7 June 2015 at 18:27:16 UTC, Robert burner Schadek 
 wrote:
 Phobos is awesome, the libs of go, python and rust only have 
 better
 marketing.
 As discussed on dconf, phobos needs to become big and blow 
 the rest
 out of the sky.

 http://wiki.dlang.org/DIP80

 lets get OT, please discuss
There are https://github.com/9il/simple_matrix and https://github.com/9il/cblas . I will try to rework them for Phobos. Any ideas and suggestions? Some notes about portability: 1. OS X has the Accelerate framework built in. 2. Linux has BLAS by default, or it can easily be installed. However, the default BLAS is very slow; OpenBLAS is preferred. 3. It looks like there is no simple way to get BLAS support on Windows. Should we ship a BLAS library with DMD for Windows, and maybe Linux?
I think licensing matters would make this difficult. What I do think we can do is: (a) Provide standard data layouts in std.array for the typical shapes supported by linear algebra libs: row major and column major, along with striding primitives.
I don't think this is quite the right approach. Multidimensional arrays and matrices are about accessing and iterating over data, not about the data structures themselves. The standard layouts are common special cases.
Probably we need both approaches:

[1]. Multidimensional random access slices (ranges, not only arrays). We can do it easily:

size_t anyNumber;
auto ar = new int[3 * 8 * 9 + anyNumber];
auto slice = Slice[0..3, 4..8, 1..9];
assert(ar.canBeSlicedWith(slice)); // checks that ar.length >= 3 * 8 * 9

auto tensor = ar.sliced(slice);
tensor[0, 1, 2] = 4;

auto matrix = tensor[0..$, 1, 0..$];
assert(matrix[0, 2] == 4);

[2]. BLAS Transposed.no/yes and Major.row/column (naming can be changed) flags for plain 2D matrices, based on
2.1 D arrays (both GC and manual memory management)
2.2 std.container.Array (RefCounted)

RowMajor and ColumnMajor are not needed if Transposed is already defined, but this stuff helps engineers implement software in terms of the corresponding mathematical documentation. I hope to create nogc versions for 2.1 and 2.2 (because the GC is not needed for slices). Furthermore, [2] can be based on [1].
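The aliasing in the sketch above can be checked concretely. Below is an illustrative C model of the same idea (the dense 3x8x9 row-major layout and all names here are assumptions for illustration, not the proposed D API): a view is just an offset formula over shared memory, so fixing one index of the tensor yields a matrix over the very same cells.

```c
#include <assert.h>
#include <stddef.h>

enum { D0 = 3, D1 = 8, D2 = 9 };            /* assumed dense 3x8x9 block */
static int buf[D0 * D1 * D2];

/* Row-major offset of tensor[i, j, k]. */
static size_t off3(size_t i, size_t j, size_t k)
{
    return (i * D1 + j) * D2 + k;
}

/* matrix = tensor[0..$, 1, 0..$] just bakes j = 1 into the offset, so
   matrix[i, k] aliases tensor[i, 1, k] without copying anything. */
static size_t matrix_off(size_t i, size_t k)
{
    return off3(i, 1, k);
}

static int demo(void)
{
    buf[off3(0, 1, 2)] = 4;                 /* tensor[0, 1, 2] = 4; */
    return buf[matrix_off(0, 2)];           /* matrix[0, 2] sees it */
}
```

Because both views share one buffer, writing through the tensor is visible through the matrix, which is exactly the `assert(matrix[0, 2] == 4)` above.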
 (b) Provide signatures for C and Fortran libraries so people 
 who have them can use them easily with D.

 (c) Provide high-level wrappers on top of those functions.


 Andrei
That is how e.g. numpy works and it's OK, but D can do better. Ilya, I'm very interested in discussing this further with you. I have a reasonable idea and implementation of how I would want the generic n-dimensional types in D to work, but you seem to have more experience with BLAS and LAPACK than me* and of course interfacing with them is critical. *I rarely interact with them directly.
John, please describe your ideas and use cases. I think GitHub issues are a more convenient place. You have opened https://github.com/kyllingstad/scid/issues/24 , so I think we can paste our code examples in that SciD issue.
Jun 09 2015
parent "Ilya Yaroshenko" <ilyayaroshenko gmail.com> writes:
 size_t anyNumber;
 auto ar = new int[3 * 8 * 9 + anyNumber];
 auto slice = Slice[0..3, 4..8, 1..9];
 assert(ar.canBeSlicedWith(slice)); //checks that ar.length >= 3 
 * 8 * 9

 auto tensor = ar.sliced(slice);
 tensor[0, 1, 2] = 4;

 auto matrix = tensor[0..$, 1, 0..$];
 assert(matrix[0, 2] == 4);
assert(&matrix[0, 2] is &tensor[0, 1, 2]);
Jun 09 2015
prev sibling next sibling parent "Ilya Yaroshenko" <ilyayaroshenko gmail.com> writes:
 Ilya, I'm very interested in discussing this further with you. 
 I have a reasonable idea and implementation of how I would want 
 the generic n-dimensional types in D to work, but you seem to 
 have more experience with BLAS and LAPACK than me* and of 
 course interfacing with them is critical.

 *I rarely interact with them directly.
I have created a Phobos PR. Now we can discuss it on GitHub. https://github.com/D-Programming-Language/phobos/pull/3397
Jun 09 2015
prev sibling next sibling parent reply "Dennis Ritchie" <dennis.ritchie mail.ru> writes:
On Tuesday, 9 June 2015 at 08:50:16 UTC, John Colvin wrote:
 I don't think this is quite the right approach. 
 Multidimensional arrays and matrices are about accessing and 
 iteration over data, not data structures themselves. The 
 standard layouts are common special cases.
Yes, I really want D to support multidimensional arrays, matrices, rational numbers and quaternions. I believe that Phobos must support some common methods of linear algebra and general mathematics. I have no desire to join D with Fortran libraries :)

Rational numbers and quaternions have already been implemented here:
https://github.com/k3kaimu/carbon/blob/master/source/carbon/rational.d
https://github.com/k3kaimu/carbon/blob/master/source/carbon/quaternion.d

There is also satisfactory work on matrices here:
https://github.com/k3kaimu/carbon/blob/master/source/carbon/linear.d
Jun 09 2015
parent reply "Ilya Yaroshenko" <ilyayaroshenko gmail.com> writes:
 I believe that Phobos must support some common methods of 
 linear algebra and general mathematics. I have no desire to 
 join D with Fortran libraries :)
D definitely needs BLAS API support for matrix multiplication. The best BLAS libraries, like OpenBLAS, are written in assembler. Otherwise D will take last place in the corresponding math benchmarks.
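For context, the semantics of the BLAS matrix-multiply routines are nothing more than the triple loop below, sketched in C with illustrative names. Tuned libraries like OpenBLAS replace this loop with blocked, hand-vectorized assembly kernels, which is exactly why binding to them beats reimplementing:

```c
#include <assert.h>

/* Naive C = A * B for row-major A (m x k) and B (k x n).  This is only
   the semantics of BLAS xgemm, not a competitive implementation. */
static void naive_gemm(int m, int n, int k,
                       const double *a, const double *b, double *c)
{
    for (int i = 0; i < m; ++i)
        for (int j = 0; j < n; ++j) {
            double s = 0.0;
            for (int p = 0; p < k; ++p)
                s += a[i * k + p] * b[p * n + j];
            c[i * n + j] = s;
        }
}

/* Returns entry (i, j) of [[1,2],[3,4]] * [[5,6],[7,8]]. */
static double demo_entry(int i, int j)
{
    const double a[] = {1, 2, 3, 4};
    const double b[] = {5, 6, 7, 8};
    double c[4];
    naive_gemm(2, 2, 2, a, b, c);
    return c[i * 2 + j];
}
```

For example, `demo_entry` computes [[1,2],[3,4]] times [[5,6],[7,8]], whose product is [[19,22],[43,50]]; these small integer values are exact in double precision.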
Jun 09 2015
next sibling parent reply "Dennis Ritchie" <dennis.ritchie mail.ru> writes:
On Tuesday, 9 June 2015 at 15:26:43 UTC, Ilya Yaroshenko wrote:
 D definitely needs BLAS API support for matrix multiplication. 
 The best BLAS libraries, like OpenBLAS, are written in assembler. 
 Otherwise D will take last place in the corresponding math 
 benchmarks.
Yes, the programs in D are clearly lagging behind those of the Wolfram Mathematica programmers :)

https://projecteuler.net/language=D
https://projecteuler.net/language=Mathematica

To solve these problems you need something like BLAS. Perhaps BLAS is the more practical way to enrich D's techniques for working with matrices.
Jun 09 2015
next sibling parent reply "Dennis Ritchie" <dennis.ritchie mail.ru> writes:
On Tuesday, 9 June 2015 at 16:14:24 UTC, Dennis Ritchie wrote:
 To solve these problems you need something like BLAS. Perhaps 
 BLAS is the more practical way to enrich D's techniques for 
 working with matrices.
Actually, that's what needs to be implemented in D: http://www.boost.org/doc/libs/1_58_0/libs/numeric/ublas/doc/index.html
Jun 09 2015
next sibling parent "Ilya Yaroshenko" <ilyayaroshenko gmail.com> writes:
On Tuesday, 9 June 2015 at 16:16:39 UTC, Dennis Ritchie wrote:
 On Tuesday, 9 June 2015 at 16:14:24 UTC, Dennis Ritchie wrote:
 To solve these problems you need something like BLAS. Perhaps 
 BLAS is the more practical way to enrich D's techniques for 
 working with matrices.
Actually, that's what needs to be implemented in D: http://www.boost.org/doc/libs/1_58_0/libs/numeric/ublas/doc/index.html
This is very good stuff. However, I want to create something simpler:
[1]. n-dimensional slices (without matrix multiplication, "RowMajor/..." and other math features)
[2]. a netlib-like standard CBLAS API at `etc.blas.cblas`
[3]. high-level bindings to connect [1] with the 1-2D subset of [2]
Jun 09 2015
prev sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 6/9/15 9:16 AM, Dennis Ritchie wrote:
 On Tuesday, 9 June 2015 at 16:14:24 UTC, Dennis Ritchie wrote:
 To solve these problems you need something like BLAS. Perhaps BLAS
 is the more practical way to enrich D's techniques for working with matrices.
Actually, that's what needs to be implemented in D: http://www.boost.org/doc/libs/1_58_0/libs/numeric/ublas/doc/index.html
"And finally uBLAS offers good (but not outstanding) performance." -- Andrei
Jun 09 2015
parent reply "Dennis Ritchie" <dennis.ritchie mail.ru> writes:
On Tuesday, 9 June 2015 at 17:19:28 UTC, Andrei Alexandrescu 
wrote:
 On 6/9/15 9:16 AM, Dennis Ritchie wrote:
 On Tuesday, 9 June 2015 at 16:14:24 UTC, Dennis Ritchie wrote:
 To solve these problems you need something like BLAS. Perhaps BLAS 
 is the more practical way to enrich D's techniques for working 
 with matrices.
Actually, that's what needs to be implemented in D: http://www.boost.org/doc/libs/1_58_0/libs/numeric/ublas/doc/index.html
"And finally uBLAS offers good (but not outstanding) performance." -- Andrei
OK, but the same thing can be said about BigInt in Phobos: "And finally `std.bigint` offers good (but not outstanding) performance."

I solved 17 math problems, and for most of them I needed `BigInt`:
http://i.imgur.com/CmOSm7V.png
https://projecteuler.net/language=D

If D did not have `BigInt`, I probably would have used Boost.Multiprecision in C++:
http://www.boost.org/doc/libs/1_58_0/libs/multiprecision/doc/html/index.html
Or written some slow Python.

Maybe all this does not give huge performance, but it helps with a wide range of mathematical problems. Thus, it is better to have something than nothing :) And BLAS is more than something...
Jun 09 2015
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 6/9/15 11:42 AM, Dennis Ritchie wrote:
 "And finally `std.bigint` offers good (but not outstanding) performance."
BigInt should use reference counting. Its current approach to allocating new memory for everything is a liability. Could someone file a report for this please. -- Andrei
Jun 09 2015
next sibling parent reply "Dennis Ritchie" <dennis.ritchie mail.ru> writes:
On Tuesday, 9 June 2015 at 18:58:56 UTC, Andrei Alexandrescu 
wrote:
 On 6/9/15 11:42 AM, Dennis Ritchie wrote:
 "And finally `std.bigint` offers good (but not outstanding) 
 performance."
BigInt should use reference counting. Its current approach to allocating new memory for everything is a liability. Could someone file a report for this please. -- Andrei
Done: https://issues.dlang.org/show_bug.cgi?id=14673
Jun 09 2015
parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 6/9/15 12:21 PM, Dennis Ritchie wrote:
 On Tuesday, 9 June 2015 at 18:58:56 UTC, Andrei Alexandrescu wrote:
 On 6/9/15 11:42 AM, Dennis Ritchie wrote:
 "And finally `std.bigint` offers good (but not outstanding)
 performance."
BigInt should use reference counting. Its current approach to allocating new memory for everything is a liability. Could someone file a report for this please. -- Andrei
Done: https://issues.dlang.org/show_bug.cgi?id=14673
Thanks! -- Andrei
Jun 09 2015
prev sibling parent reply Steven Schveighoffer <schveiguy yahoo.com> writes:
On 6/9/15 2:59 PM, Andrei Alexandrescu wrote:
 On 6/9/15 11:42 AM, Dennis Ritchie wrote:
 "And finally `std.bigint` offers good (but not outstanding) performance."
BigInt should use reference counting. Its current approach to allocating new memory for everything is a liability. Could someone file a report for this please. -- Andrei
Slightly OT, but this reminds me. RefCounted is not viable when using the GC, because any references on the heap may race against stack-based references. Can we make RefCounted use atomicInc and atomicDec? It will hurt performance a bit, but the current state is not good. I spoke with Erik about this, as he was planning on using RefCounted, but didn't know about the hairy issues with the GC. If we get to a point where we can have a thread-local GC, we can remove the implementation detail of using atomic operations when possible. -Steve
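The interlocked counting being proposed can be sketched like this, using C11 atomics for illustration (the names are hypothetical, not the Phobos RefCounted API): release reports whether it dropped the last reference, so whichever thread ends up decrementing to zero, including a GC finalizer thread, frees the payload exactly once.

```c
#include <assert.h>
#include <stdatomic.h>

/* Hypothetical atomic reference count.  fetch_add/fetch_sub return the
   previous value, so the thread that sees 1 -> 0 owns the cleanup. */
typedef struct { atomic_int refs; } RC;

static void rc_retain(RC *rc)
{
    atomic_fetch_add(&rc->refs, 1);
}

/* Returns 1 when the last reference is gone and the payload may be freed. */
static int rc_release(RC *rc)
{
    return atomic_fetch_sub(&rc->refs, 1) == 1;
}

static int rc_demo(void)
{
    RC rc = { 1 };                           /* one initial reference */
    rc_retain(&rc);
    int died_early = rc_release(&rc);        /* 2 -> 1: still alive */
    int died_last  = rc_release(&rc);        /* 1 -> 0: free payload */
    return !died_early && died_last;
}
```

The demo is single-threaded, but because the increments and decrements are atomic read-modify-write operations, the same protocol stays correct when a heap-reachable copy is released from the GC's thread.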
Jun 09 2015
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 6/9/15 1:53 PM, Steven Schveighoffer wrote:
 On 6/9/15 2:59 PM, Andrei Alexandrescu wrote:
 On 6/9/15 11:42 AM, Dennis Ritchie wrote:
 "And finally `std.bigint` offers good (but not outstanding)
 performance."
BigInt should use reference counting. Its current approach to allocating new memory for everything is a liability. Could someone file a report for this please. -- Andrei
Slightly OT, but this reminds me. RefCounted is not viable when using the GC, because any references on the heap may race against stack-based references.
How do you mean that?
 Can we make RefCounted use atomicInc and atomicDec? It will hurt
 performance a bit, but the current state is not good.

 I spoke with Erik about this, as he was planning on using RefCounted,
 but didn't know about the hairy issues with the GC.

 If we get to a point where we can have a thread-local GC, we can remove
 the implementation detail of using atomic operations when possible.
The obvious solution that comes to mind is adding a Flag!"interlocked". -- Andrei
Jun 09 2015
parent reply Steven Schveighoffer <schveiguy yahoo.com> writes:
On 6/9/15 5:46 PM, Andrei Alexandrescu wrote:
 On 6/9/15 1:53 PM, Steven Schveighoffer wrote:
 On 6/9/15 2:59 PM, Andrei Alexandrescu wrote:
 On 6/9/15 11:42 AM, Dennis Ritchie wrote:
 "And finally `std.bigint` offers good (but not outstanding)
 performance."
BigInt should use reference counting. Its current approach to allocating new memory for everything is a liability. Could someone file a report for this please. -- Andrei
Slightly OT, but this reminds me. RefCounted is not viable when using the GC, because any references on the heap may race against stack-based references.
How do you mean that?
If you add an instance of RefCounted to a GC-destructed type (either in an array, or as a member of a class), there is the potential that the GC will run the dtor of the RefCounted item in a different thread, opening up the possibility of races.
 Can we make RefCounted use atomicInc and atomicDec? It will hurt
 performance a bit, but the current state is not good.

 I spoke with Erik about this, as he was planning on using RefCounted,
 but didn't know about the hairy issues with the GC.

 If we get to a point where we can have a thread-local GC, we can remove
 the implementation detail of using atomic operations when possible.
The obvious solution that comes to mind is adding a Flag!"interlocked".
Can you explain it further? It's not obvious to me. -Steve
Jun 10 2015
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 6/10/15 3:52 AM, Steven Schveighoffer wrote:
 On 6/9/15 5:46 PM, Andrei Alexandrescu wrote:
 On 6/9/15 1:53 PM, Steven Schveighoffer wrote:
 On 6/9/15 2:59 PM, Andrei Alexandrescu wrote:
 On 6/9/15 11:42 AM, Dennis Ritchie wrote:
 "And finally `std.bigint` offers good (but not outstanding)
 performance."
BigInt should use reference counting. Its current approach to allocating new memory for everything is a liability. Could someone file a report for this please. -- Andrei
Slightly OT, but this reminds me. RefCounted is not viable when using the GC, because any references on the heap may race against stack-based references.
How do you mean that?
If you add an instance of RefCounted to a GC-destructed type (either in an array, or as a member of a class), there is the potential that the GC will run the dtor of the RefCounted item in a different thread, opening up the possibility of races.
That's a problem with the GC. Collected memory must be deallocated in the thread that allocated it. It's not really that complicated to implement, either - the collection process puts the memory to deallocate in a per-thread freelist; then when each thread wakes up and tries to allocate things, it first allocates from the freelist.
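A single-threaded C sketch of that per-thread freelist scheme (all names illustrative; a real implementation would also need synchronization on the push side): the collector defers the free onto the owning thread's list, and the owner drains the list on its next allocation, which is also where the destructor could safely run.

```c
#include <assert.h>
#include <stddef.h>

typedef struct Block { struct Block *next; } Block;

typedef struct { Block *freelist; } ThreadHeap;

/* Called by the collector instead of freeing another thread's block. */
static void defer_free(ThreadHeap *h, Block *b)
{
    b->next = h->freelist;
    h->freelist = b;
}

/* Called by the owning thread: recycle deferred blocks first; the
   deferred block's destructor would run here, in the right thread. */
static Block *alloc_block(ThreadHeap *h)
{
    if (h->freelist) {
        Block *b = h->freelist;
        h->freelist = b->next;
        return b;
    }
    return NULL;                             /* fall through to a fresh allocation */
}

static int freelist_demo(void)
{
    Block b1, b2;
    ThreadHeap heap = { NULL };
    defer_free(&heap, &b1);                  /* collector sweeps two blocks */
    defer_free(&heap, &b2);
    return alloc_block(&heap) == &b2         /* owner reuses them LIFO */
        && alloc_block(&heap) == &b1
        && alloc_block(&heap) == NULL;       /* list drained */
}
```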
 Can we make RefCounted use atomicInc and atomicDec? It will hurt
 performance a bit, but the current state is not good.

 I spoke with Erik about this, as he was planning on using RefCounted,
 but didn't know about the hairy issues with the GC.

 If we get to a point where we can have a thread-local GC, we can remove
 the implementation detail of using atomic operations when possible.
The obvious solution that comes to mind is adding a Flag!"interlocked".
Can you explain it further? It's not obvious to me.
The RefCounted type could have a flag as a template parameter. Andrei
Jun 10 2015
parent reply Steven Schveighoffer <schveiguy yahoo.com> writes:
On 6/10/15 11:49 AM, Andrei Alexandrescu wrote:
 On 6/10/15 3:52 AM, Steven Schveighoffer wrote:
 On 6/9/15 5:46 PM, Andrei Alexandrescu wrote:
 On 6/9/15 1:53 PM, Steven Schveighoffer wrote:
 On 6/9/15 2:59 PM, Andrei Alexandrescu wrote:
 On 6/9/15 11:42 AM, Dennis Ritchie wrote:
 "And finally `std.bigint` offers good (but not outstanding)
 performance."
BigInt should use reference counting. Its current approach to allocating new memory for everything is a liability. Could someone file a report for this please. -- Andrei
Slightly OT, but this reminds me. RefCounted is not viable when using the GC, because any references on the heap may race against stack-based references.
How do you mean that?
If you add an instance of RefCounted to a GC-destructed type (either in an array, or as a member of a class), there is the potential that the GC will run the dtor of the RefCounted item in a different thread, opening up the possibility of races.
That's a problem with the GC. Collected memory must be deallocated in the thread that allocated it. It's not really that complicated to implement, either - the collection process puts the memory to deallocate in a per-thread freelist; then when each thread wakes up and tries to allocate things, it first allocates from the freelist.
I agree it's a problem with the GC, but not that it's a simple fix. It's not just a freelist -- the dtor needs to be run in the thread also. But the amount of affected code (i.e. any code that uses GC) makes this a very high risk change, whereas changing RefCounted is a 2-line change that is easy to prove/review. I will make the RefCounted atomic PR if you can accept that.
 Can we make RefCounted use atomicInc and atomicDec? It will hurt
 performance a bit, but the current state is not good.

 I spoke with Erik about this, as he was planning on using RefCounted,
 but didn't know about the hairy issues with the GC.

 If we get to a point where we can have a thread-local GC, we can remove
 the implementation detail of using atomic operations when possible.
The obvious solution that comes to mind is adding a Flag!"interlocked".
Can you explain it further? It's not obvious to me.
The RefCounted type could have a flag as a template parameter.
OK, thanks for the explanation. I'd do it the other way around: Flag!"threadlocal", since we should be safe by default. -Steve
Jun 10 2015
parent reply "Marc =?UTF-8?B?U2Now7x0eiI=?= <schuetzm gmx.net> writes:
On Wednesday, 10 June 2015 at 20:31:52 UTC, Steven Schveighoffer 
wrote:
 OK, thanks for the explanation. I'd do it the other way around: 
 Flag!"threadlocal", since we should be safe by default.
`RefCounted!T` is also thread-local by default, only `shared(RefCounted!T)` needs to use atomic operations.
Jun 11 2015
parent reply Steven Schveighoffer <schveiguy yahoo.com> writes:
On 6/11/15 4:15 AM, "Marc =?UTF-8?B?U2Now7x0eiI=?= <schuetzm gmx.net>" 
wrote:
 On Wednesday, 10 June 2015 at 20:31:52 UTC, Steven Schveighoffer wrote:
 OK, thanks for the explanation. I'd do it the other way around:
 Flag!"threadlocal", since we should be safe by default.
`RefCounted!T` is also thread-local by default, only `shared(RefCounted!T)` needs to use atomic operations.
I may have misunderstood Andrei. We can't just use a flag to fix this problem, all allocations are in danger of races (even thread-local ones). But maybe he meant *after* we fix the GC we could add a flag? I'm not sure. A flag at this point would be a band-aid fix, allowing one to optimize if one knows that his code never puts RefCounted instances on the heap. Hard to prove... -Steve
Jun 11 2015
parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 6/11/15 5:17 AM, Steven Schveighoffer wrote:
 On 6/11/15 4:15 AM, "Marc =?UTF-8?B?U2Now7x0eiI=?= <schuetzm gmx.net>"
 wrote:
 On Wednesday, 10 June 2015 at 20:31:52 UTC, Steven Schveighoffer wrote:
 OK, thanks for the explanation. I'd do it the other way around:
 Flag!"threadlocal", since we should be safe by default.
`RefCounted!T` is also thread-local by default, only `shared(RefCounted!T)` needs to use atomic operations.
I may have misunderstood Andrei. We can't just use a flag to fix this problem, all allocations are in danger of races (even thread-local ones). But maybe he meant *after* we fix the GC we could add a flag? I'm not sure.
Yes, we definitely need to fix the GC. -- Andrei
Jun 11 2015
prev sibling parent reply "ixid" <adamsibson hotmail.com> writes:
On Tuesday, 9 June 2015 at 16:14:24 UTC, Dennis Ritchie wrote:
 On Tuesday, 9 June 2015 at 15:26:43 UTC, Ilya Yaroshenko wrote:
 D definitely needs BLAS API support for matrix multiplication. 
 The best BLAS libraries, like OpenBLAS, are written in assembler. 
 Otherwise D will take last place in the corresponding math 
 benchmarks.
Yes, those programs on D, is clearly lagging behind the programmers Wolfram Mathematica :) https://projecteuler.net/language=D https://projecteuler.net/language=Mathematica To solve these problems you need something like Blas. Perhaps BLAS - it's more practical way to enrich D techniques for working with matrices.
I suspect this is more about who the Mathematica and D users are, as Project Euler is mostly about mathematics rather than code optimization. More of the Mathematica users would have strong maths backgrounds. I haven't felt held back by D at all; it's only been my own lack of ability. I'm in 2nd place atm among D users.
Jun 10 2015
next sibling parent "Dennis Ritchie" <dennis.ritchie mail.ru> writes:
On Wednesday, 10 June 2015 at 08:39:12 UTC, ixid wrote:
 I suspect this is more about who the Mathematica and D users 
 are as Project Euler is mostly mathematical rather than code 
 optimization. More of the Mathematica users would have strong 
 maths backgrounds. I haven't felt held back by D at all, it's 
 only been my own lack of ability. I'm in 2nd place atm for D 
 users.
OK, if D at least gets BLAS, I will try to overtake you :)
Jun 10 2015
prev sibling parent reply "Dennis Ritchie" <dennis.ritchie mail.ru> writes:
On Wednesday, 10 June 2015 at 08:39:12 UTC, ixid wrote:
 I suspect this is more about who the Mathematica and D users 
 are as Project Euler is mostly mathematical rather than code 
 optimization.
That's my point: even though BigInt in D is not optimized very well, it helps me solve a wide range of tasks that do not require high performance, which is why I want BLAS or something similar in D. Something is better than nothing!
Jun 10 2015
parent reply "ixid" <adamsibson hotmail.com> writes:
On Wednesday, 10 June 2015 at 08:50:31 UTC, Dennis Ritchie wrote:
 On Wednesday, 10 June 2015 at 08:39:12 UTC, ixid wrote:
 I suspect this is more about who the Mathematica and D users 
 are as Project Euler is mostly mathematical rather than code 
 optimization.
That's my point: even though BigInt in D is not optimized very well, it helps me solve a wide range of tasks that do not require high performance, which is why I want BLAS or something similar in D. Something is better than nothing!
You rarely need to use BigInt for heavy lifting though, often it's just summing, not that I would argue against optimization. I think speed is absolutely vital and one of the most powerful things we could do to promote D would be to run the best benchmarks site for all language comers and make sure D does very well. Every time there's a benchmark contest it seems to unearth D performance issues that can be greatly improved upon. I'm sure you will beat me pretty quickly, as I said my maths isn't very good but it might motivate me to solve some more! =)
Jun 10 2015
parent "Dennis Ritchie" <dennis.ritchie mail.ru> writes:
On Wednesday, 10 June 2015 at 09:43:47 UTC, ixid wrote:
 You rarely need to use BigInt for heavy lifting though, often 
 it's just summing, not that I would argue against optimization. 
 I think speed is absolutely vital and one of the most powerful 
 things we could do to promote D would be to run the best 
 benchmarks site for all language comers and make sure D does 
 very well. Every time there's a benchmark contest it seems to 
 unearth D performance issues that can be greatly improved upon.
Yes, it is. Many people are trying to find performance problems in D, and sometimes they succeed.
 I'm sure you will beat me pretty quickly, as I said my maths 
 isn't very good but it might motivate me to solve some more! =)
No, I won't start beating you until next year because, unfortunately, I won't have access to a computer for a full year. You could say it's something like a long vacation :)
Jun 10 2015
prev sibling next sibling parent reply Manu via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 10 June 2015 at 01:26, Ilya Yaroshenko via Digitalmars-d
<digitalmars-d puremagic.com> wrote:
 I believe that Phobos must support some common methods of linear algebra
 and general mathematics. I have no desire to join D with Fortran libraries
 :)
D definitely needs BLAS API support for matrix multiplication. The best BLAS libraries, such as OpenBLAS, are written in assembler. Otherwise D will take last place in the corresponding math benchmarks.
A complication for linear algebra (or other mathsy things in general) is the inability to detect and implement compound operations. We don't declare mathematical operators to be algebraic operations, which I think is a lost opportunity. If we defined the properties along with their properties (commutativity, transitivity, invertibility, etc), then the compiler could potentially do an algebraic simplification on expressions before performing codegen and optimisation. There are a lot of situations where the optimiser can't simplify expressions because it runs into an arbitrary function call, and I've never seen an optimiser that understands exp/log/roots, etc, to the point where it can reduce those expressions properly. To compete with maths benchmarks, we need some means to simplify expressions properly.
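As a concrete illustration (editor's sketch, not from the original post) of a rewrite no optimiser will make across opaque calls: exp(a)*exp(b) is analytically exp(a+b), halving the transcendental work, but it must be simplified by hand.

----
import std.math : approxEqual, exp;

// Two transcendental calls...
double slow(double a, double b) { return exp(a) * exp(b); }
// ...versus one, after simplifying by hand: exp(a)*exp(b) == exp(a+b)
double fast(double a, double b) { return exp(a + b); }

void main()
{
    assert(approxEqual(slow(1.5, 2.5), fast(1.5, 2.5)));
}
----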
Jun 09 2015
next sibling parent reply "John Colvin" <john.loughran.colvin gmail.com> writes:
On Tuesday, 9 June 2015 at 16:18:06 UTC, Manu wrote:
 On 10 June 2015 at 01:26, Ilya Yaroshenko via Digitalmars-d 
 <digitalmars-d puremagic.com> wrote:
 I believe that Phobos must support some common methods of 
 linear algebra and general mathematics. I have no desire to 
 join D with Fortran libraries :)
D definitely needs BLAS API support for matrix multiplication. Best BLAS libraries are written in assembler like openBLAS. Otherwise D will have last position in corresponding math benchmarks.
A complication for linear algebra (or other mathsy things in general) is the inability to detect and implement compound operations. We don't declare mathematical operators to be algebraic operations, which I think is a lost opportunity. If we defined the properties along with their properties (commutativity, transitivity, invertibility, etc), then the compiler could potentially do an algebraic simplification on expressions before performing codegen and optimisation. There are a lot of situations where the optimiser can't simplify expressions because it runs into an arbitrary function call, and I've never seen an optimiser that understands exp/log/roots, etc, to the point where it can reduce those expressions properly. To compete with maths benchmarks, we need some means to simplify expressions properly.
Optimising floating point is a massive pain because of precision concerns and IEEE-754 conformance. Just because something is analytically the same doesn't mean you want the optimiser to go ahead and make the switch for you. Of the things that can be done, lazy operations should make it easier/possible for the optimiser to spot.
Jun 09 2015
parent reply Manu via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 10 June 2015 at 02:32, John Colvin via Digitalmars-d
<digitalmars-d puremagic.com> wrote:
 On Tuesday, 9 June 2015 at 16:18:06 UTC, Manu wrote:
 On 10 June 2015 at 01:26, Ilya Yaroshenko via Digitalmars-d
 <digitalmars-d puremagic.com> wrote:
 I believe that Phobos must support some common methods of linear algebra
 and general mathematics. I have no desire to join D with Fortran libraries
 :)
D definitely needs BLAS API support for matrix multiplication. Best BLAS libraries are written in assembler like openBLAS. Otherwise D will have last position in corresponding math benchmarks.
A complication for linear algebra (or other mathsy things in general) is the inability to detect and implement compound operations. We don't declare mathematical operators to be algebraic operations, which I think is a lost opportunity. If we defined the properties along with their properties (commutativity, transitivity, invertibility, etc), then the compiler could potentially do an algebraic simplification on expressions before performing codegen and optimisation. There are a lot of situations where the optimiser can't simplify expressions because it runs into an arbitrary function call, and I've never seen an optimiser that understands exp/log/roots, etc, to the point where it can reduce those expressions properly. To compete with maths benchmarks, we need some means to simplify expressions properly.
Optimising floating point is a massive pain because of precision concerns and IEEE-754 conformance. Just because something is analytically the same doesn't mean you want the optimiser to go ahead and make the switch for you.
We have flags to control this sort of thing (fast-math, strict ieee, etc). I will worry about my precision, I just want the optimiser to do its job and do the very best it possibly can. In the case of linear algebra, the optimiser generally fails and I must manually simplify expressions as much as possible. In the event the expressions emerge as a result of a series of inlines, or generic code (the sort that appears frequently as a result of stream/range based programming), then there's nothing you can do except to flatten and unroll your work loops yourself.
 Of the things that can be done, lazy operations should make it
 easier/possible for the optimiser to spot.
My experience is that they possibly make it harder, although I don't know why. I find the compiler becomes very unpredictable optimising deep lazy expressions. The backend inline heuristics may not be tuned for typical D expressions of this type? I often wish I could address common compound operations myself, by implementing something like a compound operator which I can special case with an optimised path for particular expressions. But I can't think of any reasonable ways to approach that.
Jun 09 2015
parent reply "John Colvin" <john.loughran.colvin gmail.com> writes:
On Tuesday, 9 June 2015 at 16:45:33 UTC, Manu wrote:
 On 10 June 2015 at 02:32, John Colvin via Digitalmars-d 
 <digitalmars-d puremagic.com> wrote:
 On Tuesday, 9 June 2015 at 16:18:06 UTC, Manu wrote:
 On 10 June 2015 at 01:26, Ilya Yaroshenko via Digitalmars-d 
 <digitalmars-d puremagic.com> wrote:
 [...]
A complication for linear algebra (or other mathsy things in general) is the inability to detect and implement compound operations. We don't declare mathematical operators to be algebraic operations, which I think is a lost opportunity. If we defined the properties along with their properties (commutativity, transitivity, invertibility, etc), then the compiler could potentially do an algebraic simplification on expressions before performing codegen and optimisation. There are a lot of situations where the optimiser can't simplify expressions because it runs into an arbitrary function call, and I've never seen an optimiser that understands exp/log/roots, etc, to the point where it can reduce those expressions properly. To compete with maths benchmarks, we need some means to simplify expressions properly.
Optimising floating point is a massive pain because of precision concerns and IEEE-754 conformance. Just because something is analytically the same doesn't mean you want the optimiser to go ahead and make the switch for you.
We have flags to control this sort of thing (fast-math, strict ieee, etc). I will worry about my precision, I just want the optimiser to do its job and do the very best it possibly can. In the case of linear algebra, the optimiser generally fails and I must manually simplify expressions as much as possible.
If the compiler is free to rewrite by analytical rules, then "I will worry about my precision" is equivalent to either "I don't care about my precision" or "I have checked the codegen". A simple rearrangement of an expression can easily turn a perfectly good result into complete garbage. It would be great if compilers were even better at fast-math mode, but an awful lot of applications can't use it.
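A minimal sketch of the standard example (editor's addition; it assumes strict IEEE double arithmetic, e.g. SSE2, since x87 real-precision intermediates can mask it):

----
void main()
{
    double big = 1.0e16;
    double small = 1.0;
    double sum = big + small;  // the 1.0 is absorbed: rounds back to 1.0e16
    assert(sum - big == 0.0);            // analytically this is 1.0
    assert((big - big) + small == 1.0);  // "equivalent" reordering: exact
}
----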
Jun 09 2015
parent Manu via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 10 June 2015 at 03:04, John Colvin via Digitalmars-d
<digitalmars-d puremagic.com> wrote:
 On Tuesday, 9 June 2015 at 16:45:33 UTC, Manu wrote:
 On 10 June 2015 at 02:32, John Colvin via Digitalmars-d
 <digitalmars-d puremagic.com> wrote:
 On Tuesday, 9 June 2015 at 16:18:06 UTC, Manu wrote:
 On 10 June 2015 at 01:26, Ilya Yaroshenko via Digitalmars-d
 <digitalmars-d puremagic.com> wrote:
 [...]
A complication for linear algebra (or other mathsy things in general) is the inability to detect and implement compound operations. We don't declare mathematical operators to be algebraic operations, which I think is a lost opportunity. If we defined the properties along with their properties (commutativity, transitivity, invertibility, etc), then the compiler could potentially do an algebraic simplification on expressions before performing codegen and optimisation. There are a lot of situations where the optimiser can't simplify expressions because it runs into an arbitrary function call, and I've never seen an optimiser that understands exp/log/roots, etc, to the point where it can reduce those expressions properly. To compete with maths benchmarks, we need some means to simplify expressions properly.
Optimising floating point is a massive pain because of precision concerns and IEEE-754 conformance. Just because something is analytically the same doesn't mean you want the optimiser to go ahead and make the switch for you.
We have flags to control this sort of thing (fast-math, strict ieee, etc). I will worry about my precision, I just want the optimiser to do its job and do the very best it possibly can. In the case of linear algebra, the optimiser generally fails and I must manually simplify expressions as much as possible.
If the compiler is free to rewrite by analytical rules then "I will worry about my precision" is equivalent to either "I don't care about my precision" or "I have checked the codegen". A simple rearrangement of an expression can easily turn a perfectly good result in to complete garbage. It would be great if compilers were even better at fast-math mode, but an awful lot of applications can't use it.
This is fine, those applications would continue not to use it. Personally, I've never written code in 20 years where I didn't want fast-math.
Jun 11 2015
prev sibling parent reply "Ilya Yaroshenko" <ilyayaroshenko gmail.com> writes:
On Tuesday, 9 June 2015 at 16:18:06 UTC, Manu wrote:
 On 10 June 2015 at 01:26, Ilya Yaroshenko via Digitalmars-d
 <digitalmars-d puremagic.com> wrote:
 I believe that Phobos must support some common methods of 
 linear algebra
 and general mathematics. I have no desire to join D with 
 Fortran libraries
 :)
D definitely needs BLAS API support for matrix multiplication. Best BLAS libraries are written in assembler like openBLAS. Otherwise D will have last position in corresponding math benchmarks.
A complication for linear algebra (or other mathsy things in general) is the inability to detect and implement compound operations. We don't declare mathematical operators to be algebraic operations, which I think is a lost opportunity. If we defined the properties along with their properties (commutativity, transitivity, invertibility, etc), then the compiler could potentially do an algebraic simplification on expressions before performing codegen and optimisation. There are a lot of situations where the optimiser can't simplify expressions because it runs into an arbitrary function call, and I've never seen an optimiser that understands exp/log/roots, etc, to the point where it can reduce those expressions properly. To compete with maths benchmarks, we need some means to simplify expressions properly.
Simplified expressions would help because:
1. At the matrix (high) level, optimisation can be done very well by the programmer (matrix algorithms, in terms of the count of matrix multiplications, are small).
2. Low-level optimisation requires specific CPU/cache optimisation. Modern implementations are optimised for all cache levels. See the work by Kazushige Goto: http://www.cs.utexas.edu/users/pingali/CS378/2008sp/papers/gotoPaper.pdf
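To illustrate point 2, here is an editor's toy sketch of the loop tiling at the heart of such implementations (nowhere near OpenBLAS, which adds packing, vectorisation and per-CPU kernels; assumes square row-major matrices with n divisible by blk):

----
// Naive triple loop, tiled so each blk x blk working set stays in cache.
void gemmBlocked(const double[] a, const double[] b, double[] c,
                 size_t n, size_t blk)
{
    c[] = 0.0;
    for (size_t ii = 0; ii < n; ii += blk)
    for (size_t kk = 0; kk < n; kk += blk)
    for (size_t jj = 0; jj < n; jj += blk)
        foreach (i; ii .. ii + blk)
        foreach (k; kk .. kk + blk)
        {
            const aik = a[i * n + k];
            foreach (j; jj .. jj + blk)
                c[i * n + j] += aik * b[k * n + j];
        }
}

void main()
{
    // 2x2 check with 1-element blocks
    double[] a = [1, 2, 3, 4], b = [5, 6, 7, 8];
    auto c = new double[4];
    gemmBlocked(a, b, c, 2, 1);
    assert(c == [19.0, 22, 43, 50]);
}
----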
Jun 09 2015
next sibling parent "Ilya Yaroshenko" <ilyayaroshenko gmail.com> writes:
On Tuesday, 9 June 2015 at 16:40:56 UTC, Ilya Yaroshenko wrote:
 On Tuesday, 9 June 2015 at 16:18:06 UTC, Manu wrote:
 On 10 June 2015 at 01:26, Ilya Yaroshenko via Digitalmars-d
 <digitalmars-d puremagic.com> wrote:
 I believe that Phobos must support some common methods of 
 linear algebra
 and general mathematics. I have no desire to join D with 
 Fortran libraries
 :)
D definitely needs BLAS API support for matrix multiplication. Best BLAS libraries are written in assembler like openBLAS. Otherwise D will have last position in corresponding math benchmarks.
A complication for linear algebra (or other mathsy things in general) is the inability to detect and implement compound operations. We don't declare mathematical operators to be algebraic operations, which I think is a lost opportunity. If we defined the properties along with their properties (commutativity, transitivity, invertibility, etc), then the compiler could potentially do an algebraic simplification on expressions before performing codegen and optimisation. There are a lot of situations where the optimiser can't simplify expressions because it runs into an arbitrary function call, and I've never seen an optimiser that understands exp/log/roots, etc, to the point where it can reduce those expressions properly. To compete with maths benchmarks, we need some means to simplify expressions properly.
Simplified expressions would help because 1. On matrix (hight) level optimisation can be done very well by programer (algorithms with matrixes in terms of count of matrix multiplications are small). 2. Low level optimisation requires specific CPU/Cache optimisation. Modern implementations are optimised for all cache levels. See work by KAZUSHIGE GOTO http://www.cs.utexas.edu/users/pingali/CS378/2008sp/papers/gotoPaper.pdf
EDIT: would NOT help
Jun 09 2015
prev sibling parent reply Manu via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 10 June 2015 at 02:40, Ilya Yaroshenko via Digitalmars-d
<digitalmars-d puremagic.com> wrote:
 On Tuesday, 9 June 2015 at 16:18:06 UTC, Manu wrote:
 On 10 June 2015 at 01:26, Ilya Yaroshenko via Digitalmars-d
 <digitalmars-d puremagic.com> wrote:
 I believe that Phobos must support some common methods of linear algebra
 and general mathematics. I have no desire to join D with Fortran
 libraries
 :)
D definitely needs BLAS API support for matrix multiplication. Best BLAS libraries are written in assembler like openBLAS. Otherwise D will have last position in corresponding math benchmarks.
A complication for linear algebra (or other mathsy things in general) is the inability to detect and implement compound operations. We don't declare mathematical operators to be algebraic operations, which I think is a lost opportunity. If we defined the properties along with their properties (commutativity, transitivity, invertibility, etc), then the compiler could potentially do an algebraic simplification on expressions before performing codegen and optimisation. There are a lot of situations where the optimiser can't simplify expressions because it runs into an arbitrary function call, and I've never seen an optimiser that understands exp/log/roots, etc, to the point where it can reduce those expressions properly. To compete with maths benchmarks, we need some means to simplify expressions properly.
Simplified expressions would [NOT] help because 1. On matrix (hight) level optimisation can be done very well by programer (algorithms with matrixes in terms of count of matrix multiplications are small).
Perhaps you've never worked with incompetent programmers (in my experience, >50% of the professional workforce). Programmers, on average, don't know maths. They literally have no idea how to simplify an algebraic expression. I think there are about 3-4 (being generous!) people in my office (of 30-40) that could do it properly, and without spending heaps of time on it.
 2. Low level optimisation requires specific CPU/Cache optimisation. Modern
 implementations are optimised for all cache levels. See work by KAZUSHIGE
 GOTO
 http://www.cs.utexas.edu/users/pingali/CS378/2008sp/papers/gotoPaper.pdf
Low-level optimisation is a sliding scale, not a binary position. Reaching 'optimal' state definitely requires careful consideration of all the details you refer to, but there are a lot of improvements that can be gained from quickly written code without full low-level optimisation. A lot of basic low-level optimisations (like just using appropriate opcodes, or eliding redundant operations; ie, squares followed by sqrt) can't be applied without first simplifying expressions.
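A typical instance of that last point (editor's sketch): sqrt is monotonic, so distance comparisons can drop the sqrt entirely, but no optimiser will do it through the std.math.sqrt call.

----
import std.math : sqrt;

struct Vec2 { double x, y; }

double distSq(Vec2 a, Vec2 b)
{
    const dx = a.x - b.x, dy = a.y - b.y;
    return dx * dx + dy * dy;
}

// Naive: sqrt on both sides of the comparison.
bool closerNaive(Vec2 p, Vec2 a, Vec2 b)
{
    return sqrt(distSq(p, a)) < sqrt(distSq(p, b));
}

// Simplified by hand: compare the squares, elide both sqrt calls.
bool closerFast(Vec2 p, Vec2 a, Vec2 b)
{
    return distSq(p, a) < distSq(p, b);
}

void main()
{
    auto p = Vec2(0, 0), a = Vec2(1, 1), b = Vec2(3, 4);
    assert(closerNaive(p, a, b) == closerFast(p, a, b));
    assert(closerFast(p, a, b));
}
----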
Jun 11 2015
next sibling parent "Dennis Ritchie" <dennis.ritchie mail.ru> writes:
On Friday, 12 June 2015 at 00:51:04 UTC, Manu wrote:
 Perhaps you've never worked with incompetent programmers (in my
 experience, >50% of the professional workforce).
 Programmers, on average, don't know maths. They literally have 
 no idea
 how to simplify an algebraic expression.
 I think there are about 3-4 (being generous!) people in my 
 office (of
 30-40) that could do it properly, and without spending heaps of 
 time
 on it.
But surely you don't think we should take our cue from programmers who aren't able to quickly simplify an algebraic expression? :) For example, I'm a little addicted to competitive programming, and I could really use matrices and other maths in the standard library.
Jun 11 2015
prev sibling parent reply "Ilya Yaroshenko" <ilyayaroshenko gmail.com> writes:
On Friday, 12 June 2015 at 00:51:04 UTC, Manu wrote:
 On 10 June 2015 at 02:40, Ilya Yaroshenko via Digitalmars-d
 <digitalmars-d puremagic.com> wrote:
 On Tuesday, 9 June 2015 at 16:18:06 UTC, Manu wrote:
 On 10 June 2015 at 01:26, Ilya Yaroshenko via Digitalmars-d
 <digitalmars-d puremagic.com> wrote:
 I believe that Phobos must support some common methods of 
 linear algebra
 and general mathematics. I have no desire to join D with 
 Fortran
 libraries
 :)
D definitely needs BLAS API support for matrix multiplication. Best BLAS libraries are written in assembler like openBLAS. Otherwise D will have last position in corresponding math benchmarks.
A complication for linear algebra (or other mathsy things in general) is the inability to detect and implement compound operations. We don't declare mathematical operators to be algebraic operations, which I think is a lost opportunity. If we defined the properties along with their properties (commutativity, transitivity, invertibility, etc), then the compiler could potentially do an algebraic simplification on expressions before performing codegen and optimisation. There are a lot of situations where the optimiser can't simplify expressions because it runs into an arbitrary function call, and I've never seen an optimiser that understands exp/log/roots, etc, to the point where it can reduce those expressions properly. To compete with maths benchmarks, we need some means to simplify expressions properly.
Simplified expressions would [NOT] help because 1. On matrix (hight) level optimisation can be done very well by programer (algorithms with matrixes in terms of count of matrix multiplications are small).
Perhaps you've never worked with incompetent programmers (in my experience, >50% of the professional workforce). Programmers, on average, don't know maths. They literally have no idea how to simplify an algebraic expression. I think there are about 3-4 (being generous!) people in my office (of 30-40) that could do it properly, and without spending heaps of time on it.
 2. Low level optimisation requires specific CPU/Cache 
 optimisation. Modern
 implementations are optimised for all cache levels. See work 
 by KAZUSHIGE
 GOTO
 http://www.cs.utexas.edu/users/pingali/CS378/2008sp/papers/gotoPaper.pdf
Low-level optimisation is a sliding scale, not a binary position. Reaching 'optimal' state definitely requires careful consideration of all the details you refer to, but there are a lot of improvements that can be gained from quickly written code without full low-level optimisation. A lot of basic low-level optimisations (like just using appropriate opcodes, or eliding redundant operations; ie, squares followed by sqrt) can't be applied without first simplifying expressions.
OK, so generally you are talking about something we could call MathD. I understand the reasons. However, I am strictly against algebraic transformations (or eliding redundant operations on floating point) for the basic routines in a systems programming language. Even the internal float/double conversion to real in math expressions is a huge headache when math algorithms are implemented (see the first two comments at https://github.com/D-Programming-Language/phobos/pull/2991 ). In a systems PL, sqrt(x)^2 should compile as is. Such optimisations can be implemented on top of the basic routines (pow, sqrt, gemv, gemm, etc.). We can use an approach similar to D's compile-time regex. Best, Ilya
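To show how thin that base layer can be, here is an editor's sketch of calling an existing optimised gemm from D via a hand-written extern(C) declaration (it assumes an OpenBLAS/CBLAS is linked in, e.g. `dmd app.d -L-lopenblas`; the enum values follow the standard cblas.h):

----
enum CBLAS_ORDER : int { RowMajor = 101, ColMajor = 102 }
enum CBLAS_TRANSPOSE : int { NoTrans = 111, Trans = 112 }

// Hand-written binding for one CBLAS routine.
extern(C) void cblas_dgemm(
    CBLAS_ORDER order, CBLAS_TRANSPOSE transA, CBLAS_TRANSPOSE transB,
    int m, int n, int k,
    double alpha, const(double)* a, int lda,
    const(double)* b, int ldb,
    double beta, double* c, int ldc);

void main()
{
    // C = A * B for 2x2 row-major matrices
    double[4] a = [1, 2, 3, 4];
    double[4] b = [5, 6, 7, 8];
    double[4] c = 0;
    cblas_dgemm(CBLAS_ORDER.RowMajor,
        CBLAS_TRANSPOSE.NoTrans, CBLAS_TRANSPOSE.NoTrans,
        2, 2, 2, 1.0, a.ptr, 2, b.ptr, 2, 0.0, c.ptr, 2);
    assert(c == [19.0, 22, 43, 50]);
}
----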
Jun 11 2015
parent reply Manu via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 12 June 2015 at 15:22, Ilya Yaroshenko via Digitalmars-d
<digitalmars-d puremagic.com> wrote:
 On Friday, 12 June 2015 at 00:51:04 UTC, Manu wrote:
 On 10 June 2015 at 02:40, Ilya Yaroshenko via Digitalmars-d
 <digitalmars-d puremagic.com> wrote:
 On Tuesday, 9 June 2015 at 16:18:06 UTC, Manu wrote:
 On 10 June 2015 at 01:26, Ilya Yaroshenko via Digitalmars-d
 <digitalmars-d puremagic.com> wrote:
 I believe that Phobos must support some common methods of linear
 algebra
 and general mathematics. I have no desire to join D with Fortran
 libraries
 :)
D definitely needs BLAS API support for matrix multiplication. Best BLAS libraries are written in assembler like openBLAS. Otherwise D will have last position in corresponding math benchmarks.
A complication for linear algebra (or other mathsy things in general) is the inability to detect and implement compound operations. We don't declare mathematical operators to be algebraic operations, which I think is a lost opportunity. If we defined the properties along with their properties (commutativity, transitivity, invertibility, etc), then the compiler could potentially do an algebraic simplification on expressions before performing codegen and optimisation. There are a lot of situations where the optimiser can't simplify expressions because it runs into an arbitrary function call, and I've never seen an optimiser that understands exp/log/roots, etc, to the point where it can reduce those expressions properly. To compete with maths benchmarks, we need some means to simplify expressions properly.
Simplified expressions would [NOT] help because 1. On matrix (hight) level optimisation can be done very well by programer (algorithms with matrixes in terms of count of matrix multiplications are small).
Perhaps you've never worked with incompetent programmers (in my experience, >50% of the professional workforce). Programmers, on average, don't know maths. They literally have no idea how to simplify an algebraic expression. I think there are about 3-4 (being generous!) people in my office (of 30-40) that could do it properly, and without spending heaps of time on it.
 2. Low level optimisation requires specific CPU/Cache optimisation.
 Modern
 implementations are optimised for all cache levels. See work by KAZUSHIGE
 GOTO
 http://www.cs.utexas.edu/users/pingali/CS378/2008sp/papers/gotoPaper.pdf
Low-level optimisation is a sliding scale, not a binary position. Reaching 'optimal' state definitely requires careful consideration of all the details you refer to, but there are a lot of improvements that can be gained from quickly written code without full low-level optimisation. A lot of basic low-level optimisations (like just using appropriate opcodes, or eliding redundant operations; ie, squares followed by sqrt) can't be applied without first simplifying expressions.
OK, generally you are talking about something we can name MathD. I understand the reasons. However I am strictly against algebraic operations (or eliding redundant operations for floating points) for basic routines in system programming language.
That's nice... I'm all for it :) Perhaps if there were some distinction between a base type and an algebraic type? I wonder if it would be possible to express an algebraic expression like a lazy range, and then capture the expression at the end and simplify it with some fancy template... I'd call that an abomination, but it might be possible. Hopefully nobody in their right mind would ever use that ;)
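The lazy capture can indeed be sketched with expression templates; a toy (and, as predicted, fairly abominable) editor's example with one algebraic rewrite, assuming x >= 0:

----
import std.math : sqrt;

// The expression is encoded in the type; nothing is computed
// until eval() walks the tree.
struct Var { double value; }
struct Mul(L, R) { L l; R r; }
struct Root(T) { T t; }

auto mul(L, R)(L l, R r) { return Mul!(L, R)(l, r); }
auto root(T)(T t) { return Root!T(t); }

double eval(T)(T e)
{
    static if (is(T == Var))
        return e.value;
    else static if (is(T == Root!(Mul!(Var, Var))))
        // The one rewrite: sqrt(x*x) -> x (assumes x >= 0). The type
        // alone can't tell x*x from x*y, hence the runtime check.
        return e.t.l.value == e.t.r.value
            ? e.t.l.value : sqrt(eval(e.t));
    else static if (is(T : Root!U, U))
        return sqrt(eval(e.t));
    else static if (is(T : Mul!(L, R), L, R))
        return eval(e.l) * eval(e.r);
    else
        static assert(0, "unhandled node");
}

void main()
{
    auto x = Var(3.0);
    assert(eval(root(mul(x, x))) == 3.0); // rewritten, no sqrt taken
    assert(eval(mul(x, Var(2.0))) == 6.0);
}
----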
 Even float/double internal conversion to real
 in math expressions is a huge headache when math algorithms are implemented
 (see first two comments at
 https://github.com/D-Programming-Language/phobos/pull/2991 ). In system PL
 sqrt(x)^2  should compiles as is.
Yeah... unless you -fast-math, in which case I want the compiler to do whatever it can. Incidentally, I don't think I've ever run into a case in practice where precision was lost by doing _fewer_ operations.
 Such optimisations can be implemented over the basic routines (pow, sqrt,
 gemv, gemm, etc). We can use approach similar to D compile time regexp.
Not really. The main trouble is that many of these patterns only emerge when inlining is performed. It would be particularly awkward to express such expressions in some DSL that spanned across conventional API boundaries.
Jun 12 2015
parent "Ilya Yaroshenko" <ilyayaroshenko gmail.com> writes:
On Friday, 12 June 2015 at 11:00:20 UTC, Manu wrote:
 Low-level optimisation is a sliding scale, not a binary 
 position.
 Reaching 'optimal' state definitely requires careful 
 consideration of
 all the details you refer to, but there are a lot of 
 improvements that
 can be gained from quickly written code without full low-level
 optimisation. A lot of basic low-level optimisations (like 
 just using
 appropriate opcodes, or eliding redundant operations; ie, 
 squares
 followed by sqrt) can't be applied without first simplifying
 expressions.
OK, generally you are talking about something we can name MathD. I understand the reasons. However I am strictly against algebraic operations (or eliding redundant operations for floating points) for basic routines in system programming language.
That's nice... I'm all for it :) Perhaps if there were some distinction between a base type and an algebraic type? I wonder if it would be possible to express an algebraic expression like a lazy range, and then capture the expression at the end and simplify it with some fancy template... I'd call that an abomination, but it might be possible. Hopefully nobody in their right mind would ever use that ;)
... for example, we can optimise matrix chain multiplication
https://en.wikipedia.org/wiki/Matrix_chain_multiplication
----
// calls `this(MatrixExp!double chain)`
Matrix!double m = m1 * m2 * m3 * m4;
----
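The decision such a MatrixExp constructor would make is the classic dynamic-programming problem; an editor's sketch of the cost computation (hypothetical helper, not part of any existing package; dims holds [rows0, cols0 == rows1, cols1 == rows2, ...]):

----
// Minimal scalar-multiplication cost over all parenthesisations.
size_t chainCost(const size_t[] dims)
{
    immutable n = dims.length - 1;     // number of matrices
    auto cost = new size_t[][](n, n);  // cost[i][j]: multiply i..j
    foreach (len; 2 .. n + 1)
        foreach (i; 0 .. n - len + 1)
        {
            immutable j = i + len - 1;
            cost[i][j] = size_t.max;
            foreach (k; i .. j)
            {
                immutable c = cost[i][k] + cost[k + 1][j]
                    + dims[i] * dims[k + 1] * dims[j + 1];
                if (c < cost[i][j]) cost[i][j] = c;
            }
        }
    return cost[0][n - 1];
}

void main()
{
    // (10x30)(30x5)(5x60): (A*B)*C costs 4500, A*(B*C) costs 27000
    assert(chainCost([10, 30, 5, 60]) == 4500);
}
----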
 Even float/double internal conversion to real
 in math expressions is a huge headache when math algorithms 
 are implemented
 (see first two comments at
 https://github.com/D-Programming-Language/phobos/pull/2991 ). 
 In system PL
 sqrt(x)^2  should compiles as is.
Yeah... unless you -fast-math, in which case I want the compiler to do whatever it can. Incidentally, I don't think I've ever run into a case in practise where precision was lost by doing _less_ operations.
Mathematical functions require a concrete order of operations; see http://www.netlib.org/cephes/ (std.mathspecial and parts of std.math/std.numeric are based on Cephes).
 Such optimisations can be implemented over the basic routines 
 (pow, sqrt,
 gemv, gemm, etc). We can use approach similar to D compile 
 time regexp.
Not really. The main trouble is that many of these patterns only emerge when inlining is performed. It would be particularly awkward to express such expressions in some DSL that spanned across conventional API boundaries.
If I am not mistaken, in both LLVM and GCC a `fast-math` attribute can be applied per function. This feature could be implemented in D.
Jun 12 2015
prev sibling parent Manu via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 10 June 2015 at 02:17, Manu <turkeyman gmail.com> wrote:
 ... If we defined the properties along with their properties ...
*operators* along with their properties
Jun 09 2015
prev sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 6/9/15 1:50 AM, John Colvin wrote:
 On Tuesday, 9 June 2015 at 06:59:07 UTC, Andrei Alexandrescu wrote:
 (a) Provide standard data layouts in std.array for the typical shapes
 supported by linear algebra libs: row major, column major, alongside
 with striding primitives.
I don't think this is quite the right approach. Multidimensional arrays and matrices are about accessing and iteration over data, not data structures themselves. The standard layouts are common special cases.
I see. So what would be the primitives necessary? Strides (in the form of e.g. special ranges)? What are the things that would make a library vendor or user go, "OK, now I know what steps to take to use my code with D"?
 (b) Provide signatures for C and Fortran libraries so people who have
 them can use them easily with D.

 (c) Provide high-level wrappers on top of those functions.


 Andrei
That is how e.g. numpy works and it's OK, but D can do better. Ilya, I'm very interested in discussing this further with you. I have a reasonable idea and implementation of how I would want the generic n-dimensional types in D to work, but you seem to have more experience with BLAS and LAPACK than me* and of course interfacing with them is critical. *I rarely interact with them directly.
Color me interested. This is another of those domains that hold great promise for D, but sadly a strong champion has been missing. Or two :o). Andrei
Jun 09 2015
parent "Ilya Yaroshenko" <ilyayaroshenko gmail.com> writes:
On Tuesday, 9 June 2015 at 16:08:40 UTC, Andrei Alexandrescu 
wrote:
 On 6/9/15 1:50 AM, John Colvin wrote:
 On Tuesday, 9 June 2015 at 06:59:07 UTC, Andrei Alexandrescu 
 wrote:
 (a) Provide standard data layouts in std.array for the 
 typical shapes
 supported by linear algebra libs: row major, column major, 
 alongside
 with striding primitives.
I don't think this is quite the right approach. Multidimensional arrays and matrices are about accessing and iteration over data, not data structures themselves. The standard layouts are common special cases.
I see. So what would be the primitives necessary? Strides (in the form of e.g. special ranges)? What are the things that would make a library vendor or user go, "OK, now I know what steps to take to use my code with D"?
N-dimensional slices can be expressed as N slices and N shifts, where a shift equals the count of elements in the source range between the front elements of neighbouring sub-slices at the corresponding slice level.
----
private struct Slice(size_t N, Range)
{
    size_t[2][N] slices;
    size_t[N] shifts;
    Range range;
}
----
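To make the shift idea concrete, here is a 2-D editor's toy (hypothetical names, not the proposed API) showing how a shift turns two indices into one offset into the source:

----
// A 2-D strided view over a flat array: the shift is the distance in
// the source between the front elements of neighbouring rows.
struct Slice2(T)
{
    T[] data;
    size_t rows, cols;
    size_t shift; // == cols for a dense row-major matrix

    ref T opIndex(size_t i, size_t j)
    {
        return data[i * shift + j];
    }
}

void main()
{
    // View a 2x3 matrix over [0, 1, 2, 3, 4, 5]
    auto a = [0.0, 1, 2, 3, 4, 5];
    auto m = Slice2!double(a, 2, 3, 3);
    assert(m[1, 2] == 5);
    // Same data, 2x2 sub-matrix: a wider shift skips elements per row.
    auto sub = Slice2!double(a, 2, 2, 3);
    assert(sub[1, 1] == 4);
}
----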
Jun 09 2015
prev sibling parent reply "jmh530" <john.michael.hall gmail.com> writes:
On Tuesday, 9 June 2015 at 03:26:25 UTC, Ilya Yaroshenko wrote:

 There are
 https://github.com/9il/simple_matrix and
 https://github.com/9il/cblas .
 I will try to rework them for Phobos.

 Any ideas and suggestions?
A well-supported matrix math library would definitely lead to me using D more. I would applaud any work being done on this subject, but I still feel there are some enhancements (most seemingly minor) that would really make a matrix math library easy/fun to use. Most of what I discuss below is just syntactic sugar for things that could be accomplished with loops or std.algorithm, but having it built in would make practical use of a matrix math library much easier. I think Armadillo implements some of these as member functions, whereas languages like R and Matlab have them more built in. Disclaimer: I don't consider myself a D expert, so I could be horribly wrong on some of this.

1) There is no support for assignment to arrays based on the values of another array:

int[] A = [-1, 1, 5];
int[] B = [1, 2];
int[] C = A[B];

You would have to use int[] C = A[1..2]; instead. In this simple example it's not really a big deal, but if I have a function that returns B, then I can't just throw B in there; I would have to loop through B and assign its elements to C. So this type of assignment is possible, but if you're frequently doing this kind of array manipulation, the number of loops you need starts increasing.

2) Along the same lines, there is no support for replacing the B above with an array of bools, like

bool[] B = [false, true, true];

or

auto B = A.map!(a => a < 0);

Again, it is doable with a loop, but this form of logical indexing is a pretty common idiom for people who use Matlab or R quite a bit.

3) In addition to being able to index by a range of values or bools, you would want to be able to make assignments based on this, so something like

A[B] = c;

This is a very common operation in R or Matlab.

4) There is no support for array comparison operators. Something like

int[3] B;
B[] = A[] + 5;

works, but

bool[3] B;
B[] = A[] > 0;

doesn't (I'm also not sure why I can't just write auto B[] = A[] + 5;, but that's neither here nor there). Moreover, it seems like only the mathematical operators work in this way. Mathematical functions from std.math, like exp, don't seem to work; you have to use map (or a loop) with exp to get the result. I don't have an issue with map per se, but it seems inconsistent when some things work and others don't.

5) You can only assign scalars to slices of arrays. There doesn't seem to be an ability to assign an array to a slice. For instance, I couldn't write A[0..1] = B; or A[0, 1] = B; instead of what I had written for C.

6) std.range and std.algorithm seem to have much better support for one-dimensional containers than for treating a container as two-dimensional. If you have a two-dimensional array and want to use map on every element, then there's no issue. However, if you want to apply a function to each column or row, then you'd have to use a for loop (not even foreach). This seems to be a more difficult problem to solve than the others. I'm not sure what the best approach is, but it makes sense to look at other languages/libraries. In R, you have apply, which can operate on arrays of any dimension. Matlab has arrayfun. Numpy has apply_along_axis. Armadillo has .each_col and .each_row (one other thing about Armadillo is that you can switch the underlying matrix math library being used, like OpenBLAS vs. Intel MKL).
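For what it's worth, (1) and (2) can already be approximated without explicit loops using std.range.indexed and std.algorithm; a minimal sketch of today's workarounds (not the wished-for syntax):

```d
import std.algorithm : filter, map;
import std.array : array;
import std.range : indexed, zip;
import std.stdio : writeln;

void main()
{
    int[] A = [-1, 1, 5];

    // (1) the wished-for A[B]: index by an array of indices
    size_t[] B = [1, 2];
    auto C = A.indexed(B).array;                     // [1, 5]

    // (2) the wished-for logical indexing A[A < 0]
    auto mask = A.map!(a => a < 0).array;            // [true, false, false]
    auto neg = zip(A, mask).filter!(t => t[1])
                           .map!(t => t[0]).array;   // [-1]

    writeln(C, " ", neg);
}
```

It is still a far cry from the one-liner syntax of R or Matlab, which is the point being made above.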
Jun 11 2015
next sibling parent reply "Wyatt" <wyatt.epp gmail.com> writes:
On Thursday, 11 June 2015 at 21:30:22 UTC, jmh530 wrote:
 Most of what I discuss below is just syntactical sugar for some 
 stuff that could be accomplished with loops or std.algorithm,
Your post reminds me of two things I've considered attempting in the past: 1) a set of operators that have no meaning unless an overload is specifically provided (for dot product, dyadic transpose, etc.) and 2) a library implementing features of array-oriented languages to the extent it's possible (APL functions, rank awareness, trivial reshaping, aggregate lifting, et al). Syntax sugar can be important. -Wyatt
Jun 11 2015
parent reply "jmh530" <john.michael.hall gmail.com> writes:
On Thursday, 11 June 2015 at 22:36:28 UTC, Wyatt wrote:

 1) a set of operators that have no meaning unless an overload 
 is specifically provided (for dot product, dyadic transpose, 
 etc.) and
I see your point, but I think it might be a bit risky if you allow too much freedom for overloading operators. For instance, what if two people implement separate packages for matrix multiplication, one adopts the syntax of R (%*%) and one adopts the new Python syntax (@)? It may lead to some confusion.
Jun 11 2015
parent reply "Wyatt" <wyatt.epp gmail.com> writes:
On Friday, 12 June 2015 at 00:11:16 UTC, jmh530 wrote:
 On Thursday, 11 June 2015 at 22:36:28 UTC, Wyatt wrote:

 1) a set of operators that have no meaning unless an overload 
 is specifically provided (for dot product, dyadic transpose, 
 etc.) and
I see your point, but I think it might be a bit risky if you allow too much freedom for overloading operators. For instance, what if two people implement separate packages for matrix multiplication, one adopts the syntax of R (%*%) and one adopts the new Python syntax (@)? It may lead to some confusion.
From the outset, my thought was to strictly define the set of (eight or so?) symbols for this. If memory serves, it was right around the time Walter rejected wholesale user-defined operators because of exactly the problem you mention. (Compounded by Unicode-- what the hell is "2 🐵 8" supposed to be!?) I strongly suspect you don't need many simultaneous extra operators on a type to cover most cases.

-Wyatt
Jun 11 2015
next sibling parent reply "Tofu Ninja" <emmons0 purdue.edu> writes:
On Friday, 12 June 2015 at 01:55:15 UTC, Wyatt wrote:
 From the outset, my thought was to strictly define the set of 
 (eight or so?) symbols for this.  If memory serves, it was 
 right around the time Walter's rejected wholesale user-defined 
 operators because of exactly the problem you mention. 
 (Compounded by Unicode-- what the hell is "2 🐵 8" supposed to 
 be!?)  I strongly suspect you don't need many simultaneous 
 extra operators on a type to cover most cases.

 -Wyatt
What would the new order of operations be for these new operators?
Jun 11 2015
parent "Wyatt" <wyatt.epp gmail.com> writes:
On Friday, 12 June 2015 at 03:18:31 UTC, Tofu Ninja wrote:
 What would the new order of operations be for these new 
 operators?
Hadn't honestly thought that far. Like I said, it was more of a nascent idea than a coherent proposal (probably with a DIP and many more words). It's an interesting question, though. One note from my earlier thinking: precedence and fixity would be determined by the base operator. In my head, extra operators would be represented in code by some annotation or affix on a built-in operator... say, braces around it or something (e.g. [*] or {+}, though this is just an example that sets a baseline for visibility).

-Wyatt
Jun 12 2015
prev sibling parent reply "Tofu Ninja" <emmons0 purdue.edu> writes:
On Friday, 12 June 2015 at 01:55:15 UTC, Wyatt wrote:
 From the outset, my thought was to strictly define the set of 
 (eight or so?) symbols for this.  If memory serves, it was 
 right around the time Walter's rejected wholesale user-defined 
 operators because of exactly the problem you mention. 
 (Compounded by Unicode-- what the hell is "2 🐵 8" supposed to 
 be!?)  I strongly suspect you don't need many simultaneous 
 extra operators on a type to cover most cases.

 -Wyatt
I actually thought about it more, and D does have a bunch of binary operators that no one uses. You can make all sorts of weird operators like +*, *~, +++, ---, *--, /++, ~~, ~-, -~, --, &++, ^^+, in++, |-, %~, etc...
import std.stdio;

void main(string[] args){
    test a;
    test b;
    a +* b;   // parsed as a + (*b): opUnary!"*" then opBinary!"+"
}

struct test{
    private struct testAlpha{
        test payload;
    }

    testAlpha opUnary(string s : "*")(){
        return testAlpha(this);
    }

    void opBinary(string op : "+")(test rhs){
        writeln("+");
    }

    void opBinary(string op : "+")(testAlpha rhs){
        writeln("+*");
    }
}
Jun 17 2015
next sibling parent reply "Dominikus Dittes Scherkl" writes:
On Wednesday, 17 June 2015 at 09:28:00 UTC, Tofu Ninja wrote:
 I actually thought about it more, and D does have a bunch of 
 binary operators that no one uses. You can make all sorts of 
 weird operators like +*, *~, +++, ---, *--, /++, ~~, ~-, -~,
--, &++, ^^+, in++, |-, %~, etc...
+* is an especially bad idea, as I would read that as "a + (*b)", which is quite common in C. But in general very cool. I love ~~ and |- the most :-)
Jun 23 2015
parent "Tofu Ninja" <emmons0 purdue.edu> writes:
On Tuesday, 23 June 2015 at 16:33:29 UTC, Dominikus Dittes 
Scherkl wrote:
 On Wednesday, 17 June 2015 at 09:28:00 UTC, Tofu Ninja wrote:
 I actually thought about it more, and D does have a bunch of 
 binary operators that no one uses. You can make all sorts of 
 weird operators like +*, *~, +++, ---, *--, /++, ~~, ~-, -~,
--, &++, ^^+, in++, |-, %~, etc...
+* is an especially bad idea, as I would read that as "a + (*b)", which is quite common in C. But in general very cool. I love ~~ and |- the most :-)
Yeah, |- does seem like an interesting one; not sure what it would mean, though. I get the impression it's a wall or something. Also, you can basically combine any binOp with any number of unaryOps to create an arbitrary number of custom binOps. ~+*+*+*+ could be valid! You could probably implement something like Brainfuck in D's unary operators.
Jun 23 2015
prev sibling parent reply "Wyatt" <wyatt.epp gmail.com> writes:
On Wednesday, 17 June 2015 at 09:28:00 UTC, Tofu Ninja wrote:
 I actually thought about it more, and D does have a bunch of 
 binary operators that no one uses. You can make all sorts of 
 weird operators like +*, *~, +++, ---, *--, /++, ~~, ~-, -~,
--, &++, ^^+, in++, |-, %~, etc...
void main(string[] args){
    test a;
    test b;
    a +* b;
}

struct test{
    private struct testAlpha{
        test payload;
    }

    testAlpha opUnary(string s : "*")(){
        return testAlpha(this);
    }

    void opBinary(string op : "+")(test rhs){
        writeln("+");
    }

    void opBinary(string op : "+")(testAlpha rhs){
        writeln("+*");
    }
}
Oh right, meant to respond to this. I'll admit it took me a few to really get why that works-- it's fairly clever and moderately terrifying. (I showed it to a friend and he opined it may violate the grammar.) But playing with it a bit...well, it's very cumbersome having to do these overload gymnastics. It eats away at your opUnary space because of the need for private proxy types, and each one needs an opBinary defined to support it explicitly. It also means you can't make overloads for mismatched types or builtin types (at least, I couldn't figure out how in the few minutes I spent poking it over lunch). -Wyatt
Jun 24 2015
parent reply "Tofu Ninja" <emmons0 purdue.edu> writes:
On Wednesday, 24 June 2015 at 19:04:38 UTC, Wyatt wrote:
 On Wednesday, 17 June 2015 at 09:28:00 UTC, Tofu Ninja wrote:
 I actually thought about it more, and D does have a bunch of 
 binary operators that no one uses. You can make all sorts of 
 weird operators like +*, *~, +++, ---, *--, /++, ~~, ~-, -~,
--, &++, ^^+, in++, |-, %~, etc...
 [...]
Oh right, meant to respond to this. I'll admit it took me a few to really get why that works-- it's fairly clever and moderately terrifying. (I showed it to a friend and he opined it may violate the grammar.) But playing with it a bit...well, it's very cumbersome having to do these overload gymnastics. It eats away at your opUnary space because of the need for private proxy types, and each one needs an opBinary defined to support it explicitly. It also means you can't make overloads for mismatched types or builtin types (at least, I couldn't figure out how in the few minutes I spent poking it over lunch). -Wyatt
I am thinking of writing a mixin that will set up the proxy for you, so that you can just write:

struct test
{
    mixin binOpProxy("*");

    void opBinary(string op : "+*", T)(T rhs)
    {
        writeln("+*");
    }
}

The hard part will be to get it to work with arbitrarily long unary proxies. E.g.:

mixin binOpProxy("~-~");

void opBinary(string op : "+~-~", T)(T rhs)
{
    writeln("+~-~");
}
Jun 24 2015
parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 06/24/2015 11:41 PM, Tofu Ninja wrote:
 On Wednesday, 24 June 2015 at 19:04:38 UTC, Wyatt wrote:
 On Wednesday, 17 June 2015 at 09:28:00 UTC, Tofu Ninja wrote:
 I actually thought about it more, and D does have a bunch of binary
 operators that no one uses. You can make all sorts of weird
 operators like +*, *~, +++, ---, *--, /++, ~~, ~-, -~,
--, &++, ^^+, in++, |-, %~, etc...
 [...]
Oh right, meant to respond to this. I'll admit it took me a few to really get why that works-- it's fairly clever and moderately terrifying. (I showed it to a friend and he opined it may violate the grammar.) But playing with it a bit...well, it's very cumbersome having to do these overload gymnastics. It eats away at your opUnary space because of the need for private proxy types, and each one needs an opBinary defined to support it explicitly. It also means you can't make overloads for mismatched types or builtin types (at least, I couldn't figure out how in the few minutes I spent poking it over lunch). -Wyatt
I am thinking of writing a mixin that will set up the proxy for you so that you can just write. struct test { mixin binOpProxy("*"); void opBinary(string op : "+*", T)(T rhs){ writeln("+*"); } } The hard part will be to get it to work with arbitrarily long unary proxies. Eg: mixin binOpProxy("~-~"); void opBinary(string op : "+~-~", T)(T rhs){ writeln("+~-~"); }
Obviously you will run into issues with precedence soon, but this should do it:

import std.stdio;

struct Test{
    mixin(binOpProxy("+~+-~*--+++----*"));
    void opBinary(string op : "+~+-~*--+++----*", T)(T rhs){
        writeln("+~+-~*--+++----*");
    }
}

void main(){
    Test a,b;
    a +~+-~*--+++----* b;
}

import std.string, std.algorithm, std.range;

int operatorSuffixLength(string s){
    int count(dchar c){ return 2-s.retro.countUntil!(d=>c!=d)%2; }
    if(s.endsWith("++")) return count('+');
    if(s.endsWith("--")) return count('-');
    return 1;
}

struct TheProxy(T,string s){
    T unwrap;
    this(T unwrap){ this.unwrap=unwrap; }
    static if(s.length){
        alias NextType=TheProxy!(T,s[0..$-operatorSuffixLength(s)]);
        alias FullType=NextType.FullType;
        mixin(`
        auto opUnary(string op : "`~s[$-operatorSuffixLength(s)..$]~`")(){
            return NextType(unwrap);
        }`);
    }else{
        alias FullType=typeof(this);
    }
}

string binOpProxy(string s)in{
    assert(s.length>=1+operatorSuffixLength(s));
    assert(!s.startsWith("++"));
    assert(!s.startsWith("--"));
    foreach(dchar c;s) assert("+-*~".canFind(c));
}body{
    int len=operatorSuffixLength(s);
    return `
    auto opUnary(string op:"`~s[$-len..$]~`")(){
        return TheProxy!(typeof(this),"`~s[1..$-len]~`")(this);
    }
    auto opBinary(string op:"`~s[0]~`")(TheProxy!(typeof(this),"`~s[1..$-1]~`").FullType t){
        return opBinary!"`~s~`"(t.unwrap);
    }
    `;
}
Jun 24 2015
parent "Tofu Ninja" <emmons0 purdue.edu> writes:
On Thursday, 25 June 2015 at 01:32:22 UTC, Timon Gehr wrote:
 [...]
Here's what I came up with... I love D so much <3

module util.binOpProxy;

import std.algorithm : joiner, map;
import std.array : array;

struct __typeproxy(T, string s)
{
    enum op = s;
    T payload;

    auto opUnary(string newop)()
    {
        return __typeproxy!(T, newop ~ op)(payload);
    }
}

/**
 * Example:
 * struct test
 * {
 *     mixin(binOpProxy!("~", "*"));
 *
 *     void opBinary(string op : "+~~", T)(T rhs)
 *     {
 *         writeln("hello!");
 *     }
 *
 *     void opBinary(string op : "+~+-~*--+++----*", T)(T rhs)
 *     {
 *         writeln("world");
 *     }
 *
 *     void opBinary(string op, T)(T rhs)
 *     {
 *         writeln("default");
 *     }
 * }
 */
enum binOpProxy(proxies ...) = `
    import ` ~ __MODULE__ ~ ` : __typeproxy;

    auto opBinary(string op, D : __typeproxy!(T, T_op), T, string T_op)(D rhs)
    {
        return opBinary!(op ~ D.op)(rhs.payload);
    }
` ~ [proxies].map!((string a) => `
    auto opUnary(string op : "` ~ a ~ `")()
    {
        return __typeproxy!(typeof(this), op)(this);
    }
`).joiner.array;
Jun 25 2015
prev sibling parent reply Rikki Cattermole <alphaglosined gmail.com> writes:
On 12/06/2015 9:30 a.m., jmh530 wrote:
 On Tuesday, 9 June 2015 at 03:26:25 UTC, Ilya Yaroshenko wrote:

 There are
 https://github.com/9il/simple_matrix and
 https://github.com/9il/cblas .
 I will try to rework them for Phobos.

 Any ideas and suggestions?
 [...]
Humm, work on getting gl3n into phobos or work on my ODBC driver manager. Tough choice.
Jun 11 2015
parent reply "jmh530" <john.michael.hall gmail.com> writes:
On Friday, 12 June 2015 at 03:35:31 UTC, Rikki Cattermole wrote:

 Humm, work on getting gl3n into phobos or work on my ODBC 
 driver manager. Tough choice.
I can only speak for myself. I'm sure there's a lot of value in solid ODBC support. I use SQL some, but I use matrix math more. I'm not that familiar with gl3n, but it looks like it's meant for the math used in OpenGL. My knowledge of OpenGL is limited. I had some cursory interest in the developments of Vulkan earlier in March, but without much of a background in OpenGL I didn't follow everything they were talking about. I don't think many other languages include OpenGL support in their standard libraries (though I imagine game developers would welcome it).
Jun 12 2015
parent reply "Tofu Ninja" <emmons0 purdue.edu> writes:
On Friday, 12 June 2015 at 17:10:08 UTC, jmh530 wrote:
 On Friday, 12 June 2015 at 03:35:31 UTC, Rikki Cattermole wrote:

 Humm, work on getting gl3n into phobos or work on my ODBC 
 driver manager. Tough choice.
I can only speak for myself. I'm sure there's a lot of value in solid ODBC support. I use SQL some, but I use matrix math more. I'm not that familiar with gl3n, but it looks like it's meant for the math used in OpenGL. My knowledge of OpenGL is limited. I had some cursory interest in the developments of Vulkan earlier in March, but without much of a background in OpenGL I didn't follow everything they were talking about. I don't think many other languages include OpenGL support in their standard libraries (though I imagine game developers would welcome it).
Matrix math is matrix math; it being for OpenGL makes no real difference. Also, if you are waiting to learn Vulkan but have not done any other graphics, don't: learn OpenGL now, as Vulkan will be harder.
Jun 12 2015
next sibling parent reply "jmh530" <john.michael.hall gmail.com> writes:
On Friday, 12 June 2015 at 17:56:53 UTC, Tofu Ninja wrote:

 Matrix math is matrix math, it being for ogl makes no real 
 difference.
I think it’s a little more complicated than that. BLAS and LAPACK (or variants on them) are low-level matrix math libraries that many higher-level libraries call. Few people actually use BLAS directly. So, clearly, not every matrix math library is the same. What differentiates BLAS from Armadillo is that you can be far more productive in Armadillo because the syntax is friendly (and quite similar to Matlab and others). There’s a reason why people use glm in C++. It’s probably the most productive way to do matrix math with OpenGL. However, it may not be the most productive way to do more general matrix math. That’s why I hear about people using Armadillo, Eigen, and Blaze, but I’ve never heard anyone recommend using glm. Syntax matters.
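To make the "syntax matters" point concrete, compare a raw CBLAS call with what a friendly wrapper allows. This is only an illustrative sketch: the extern(C) prototype below follows the standard CBLAS interface, and linking against an actual BLAS implementation is assumed.

```d
// Standard CBLAS prototype (row-major order = 101, no transpose = 111).
extern (C) void cblas_dgemm(int order, int transA, int transB,
                            int m, int n, int k, double alpha,
                            const(double)* a, int lda,
                            const(double)* b, int ldb,
                            double beta, double* c, int ldc);

// C = A * B for n*n row-major matrices, the "productive" BLAS way:
void matmul(const double[] a, const double[] b, double[] c, int n)
{
    cblas_dgemm(101, 111, 111, n, n, n,
                1.0, a.ptr, n, b.ptr, n,
                0.0, c.ptr, n);
}

// ...versus what an Armadillo-style wrapper would let you write:
//     auto C = A * B;
```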
Jun 12 2015
parent Rikki Cattermole <alphaglosined gmail.com> writes:
On 13/06/2015 7:45 a.m., jmh530 wrote:
 On Friday, 12 June 2015 at 17:56:53 UTC, Tofu Ninja wrote:

 Matrix math is matrix math, it being for ogl makes no real difference.
I think it’s a little more complicated than that. BLAS and LAPACK (or variants on them) are low-level matrix math libraries that many higher-level libraries call. Few people actually use BLAS directly. So, clearly, not every matrix math library is the same. What differentiates BLAS from Armadillo is that you can be far more productive in Armadillo because the syntax is friendly (and quite similar to Matlab and others). There’s a reason why people use glm in C++. It’s probably the most productive way to do matrix math with OpenGL. However, it may not be the most productive way to do more general matrix math. That’s why I hear about people using Armadillo, Eigen, and Blaze, but I’ve never heard anyone recommend using glm. Syntax matters.
The reason I am considering gl3n is because it is old solid code. It's proven itself. It'll make the review process relatively easy. But hey, if we want to do it right, we'll never get any implementation in.
Jun 12 2015
prev sibling parent reply "John Colvin" <john.loughran.colvin gmail.com> writes:
On Friday, 12 June 2015 at 17:56:53 UTC, Tofu Ninja wrote:
 On Friday, 12 June 2015 at 17:10:08 UTC, jmh530 wrote:
 On Friday, 12 June 2015 at 03:35:31 UTC, Rikki Cattermole 
 wrote:

 Humm, work on getting gl3n into phobos or work on my ODBC 
 driver manager. Tough choice.
I can only speak for myself. I'm sure there's a lot of value in solid ODBC support. I use SQL some, but I use matrix math more. I'm not that familiar with gl3n, but it looks like it's meant for the math used in OpenGL. My knowledge of OpenGL is limited. I had some cursory interest in the developments of Vulkan earlier in March, but without much of a background in OpenGL I didn't follow everything they were talking about. I don't think many other languages include OpenGL support in their standard libraries (though I imagine game developers would welcome it).
Matrix math is matrix math, it being for ogl makes no real difference.
The tiny subset of numerical linear algebra that is relevant for graphics (mostly very basic operations, 2,3 or 4 dimensions) is not at all representative of the whole. The algorithms are different and the APIs are often necessarily different. Even just considering scale, no one sane calls in to BLAS to multiply a 3*3 matrix by a 3 element vector, simultaneously no one sane *doesn't* call in to BLAS or an equivalent to multiply two 500*500 matrices.
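To make the contrast concrete, here is a sketch (not taken from any particular library) of the small-dimension case: a 3x3-matrix-by-vector product is a fully unrolled, allocation-free operation, exactly the kind of thing nobody routes through BLAS, while a 500*500 product is exactly the kind of thing nobody writes by hand.

```d
import std.stdio : writeln;

alias Mat3 = float[3][3];
alias Vec3 = float[3];

// Small case: stack-only, trivially inlined/unrolled by the compiler.
Vec3 mul(const ref Mat3 m, const ref Vec3 v)
{
    Vec3 r;
    foreach (i; 0 .. 3)
        r[i] = m[i][0] * v[0] + m[i][1] * v[1] + m[i][2] * v[2];
    return r;
}

void main()
{
    Mat3 m = [[1, 0, 0], [0, 2, 0], [0, 0, 3]];
    Vec3 v = [1, 1, 1];
    writeln(mul(m, v));
}
```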
Jun 13 2015
parent reply "Tofu Ninja" <emmons0 purdue.edu> writes:
On Saturday, 13 June 2015 at 08:45:20 UTC, John Colvin wrote:
 The tiny subset of numerical linear algebra that is relevant 
 for graphics (mostly very basic operations, 2,3 or 4 
 dimensions) is not at all representative of the whole. The 
 algorithms are different and the APIs are often necessarily 
 different.

 Even just considering scale, no one sane calls in to BLAS to 
 multiply a 3*3 matrix by a 3 element vector, simultaneously no 
 one sane *doesn't* call in to BLAS or an equivalent to multiply 
 two 500*500 matrices.
I think there is a conflict of interest with what people want. There seem to be people like me who only want or need simple matrices like glm to do basic geometric/graphics related stuff. Then there is the group of people who want large 500x500 matrices to do weird crazy maths stuff. Maybe they should be kept separate? In which case then we are really talking about adding two different things. Maybe have a std.math.matrix and a std.blas?
Jun 13 2015
next sibling parent reply Rikki Cattermole <alphaglosined gmail.com> writes:
On 13/06/2015 10:35 p.m., Tofu Ninja wrote:
 On Saturday, 13 June 2015 at 08:45:20 UTC, John Colvin wrote:
 The tiny subset of numerical linear algebra that is relevant for
 graphics (mostly very basic operations, 2,3 or 4 dimensions) is not at
 all representative of the whole. The algorithms are different and the
 APIs are often necessarily different.

 Even just considering scale, no one sane calls in to BLAS to multiply
 a 3*3 matrix by a 3 element vector, simultaneously no one sane
 *doesn't* call in to BLAS or an equivalent to multiply two 500*500
 matrices.
I think there is a conflict of interest with what people want. There seem to be people like me who only want or need simple matrices like glm to do basic geometric/graphics related stuff. Then there is the group of people who want large 500x500 matrices to do weird crazy maths stuff. Maybe they should be kept separate? In which case then we are really talking about adding two different things. Maybe have a std.math.matrix and a std.blas?
IMO a simple matrix type is fine for a standard library. A more complex, highly specialized math library? Yeah, no: not enough gain for such complex code. Whereas matrix/vector support for e.g. OpenGL, now that would have high visibility to game devs.
Jun 13 2015
parent reply "John Colvin" <john.loughran.colvin gmail.com> writes:
On Saturday, 13 June 2015 at 10:37:39 UTC, Rikki Cattermole wrote:
 On 13/06/2015 10:35 p.m., Tofu Ninja wrote:
 On Saturday, 13 June 2015 at 08:45:20 UTC, John Colvin wrote:
 [...]
I think there is a conflict of interest with what people want. There seem to be people like me who only want or need simple matrices like glm to do basic geometric/graphics related stuff. Then there is the group of people who want large 500x500 matrices to do weird crazy maths stuff. Maybe they should be kept separate? In which case then we are really talking about adding two different things. Maybe have a std.math.matrix and a std.blas?
IMO simple matrix is fine for a standard library. More complex highly specialized math library yeah no. Not enough gain for such a complex code. Where as matrix/vector support for e.g. OpenGL now that will have a high visibility to game devs.
Linear algebra for graphics is the specialised case, not the other way around. As a possible name for something like gl3n in phobos, I like std.math.geometry
Jun 13 2015
parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Saturday, 13 June 2015 at 11:05:19 UTC, John Colvin wrote:
 Linear algebra for graphics is the specialised case, not the 
 other way around. As a possible name for something like gl3n in 
 phobos, I like std.math.geometry
A geometry library is different: it should be type safe when it comes to units, lengths, distances, areas... I think linear algebra should have the same syntax for small and large matrices and switch representation behind the scenes. The challenge is to figure out what kinds of memory layouts you need to support in order to interact with existing frameworks/hardware with no conversion.
Jun 13 2015
parent reply "jmh530" <john.michael.hall gmail.com> writes:
On Saturday, 13 June 2015 at 11:18:54 UTC, Ola Fosheim Grøstad 
wrote:
 I think linear algebra should have the same syntax for small 
 and large matrices and switch representation behind the scenes.
Switching representations behind the scenes? Sounds complicated. I would think that if you were designing it from the ground up, you would have one general matrix math library. Then a graphics library could be built on top of that functionality. That way, as improvements are made to the matrix math functionality, the graphics library would benefit too. However, given that there already is a well developed math graphics library, I'm not sure what's optimal. I can see the argument for implementing gl3n in the standard library (as a specialized math graphics option) on its own if there is demand for it.
Jun 13 2015
parent "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Sunday, 14 June 2015 at 02:56:04 UTC, jmh530 wrote:
 On Saturday, 13 June 2015 at 11:18:54 UTC, Ola Fosheim Grøstad 
 wrote:
 I think linear algebra should have the same syntax for small 
 and large matrices and switch representation behind the scenes.
Switching representations behind the scenes? Sounds complicated.
You don't have much of a choice if you want it to perform. You have to take into consideration:

1. hardware factors such as SIMD and alignment
2. what is known at compile time and what is only known at runtime
3. common usage patterns (which elements are usually 0, 1 or a value)
4. when it pays off to encode the matrix modifications and layout as meta information (like transpose and scalar multiplication or addition)

And sometimes you might want to compute the inverse matrix while doing the transforms, rather than as a separate step, for performance reasons.
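A minimal sketch of what "same syntax, different representation" could look like (all names here are hypothetical, not a proposed API): dimensions known at compile time get fixed-size stack storage that the compiler can align and unroll, while runtime-sized matrices fall back to a dynamic layout that a BLAS backend could consume.

```d
struct Matrix(T, size_t rows = 0, size_t cols = 0)
{
    static if (rows > 0 && cols > 0)
    {
        // Compile-time dimensions: stack storage, SIMD/alignment friendly.
        T[rows * cols] data;
        enum nRows = rows, nCols = cols;
    }
    else
    {
        // Runtime dimensions: heap storage a BLAS routine could consume.
        T[] data;
        size_t nRows, nCols;
    }

    // One indexing syntax for both representations.
    ref T opIndex(size_t i, size_t j) { return data[i * nCols + j]; }
}

void main()
{
    Matrix!(float, 4, 4) small;   // fixed-size, no allocation
    Matrix!float big;             // runtime-sized
    big.nRows = 500;
    big.nCols = 500;
    big.data = new float[500 * 500];
    small[0, 0] = 1;
    big[0, 0] = 1;
}
```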
 I would think that if you were designing it from the ground up, 
 you would have one general matrix math library. Then a graphics 
 library could be built on top of that functionality. That way, 
 as improvements are made to the matrix math functionality, the 
 graphics library would benefit too.
Yes, but nobody wants to use a matrix library that does not perform close to the hardware limitations, so the representation should be special cased to fit the hardware for common matrix layouts.
Jun 14 2015
prev sibling next sibling parent "John Colvin" <john.loughran.colvin gmail.com> writes:
On Saturday, 13 June 2015 at 10:35:55 UTC, Tofu Ninja wrote:
 On Saturday, 13 June 2015 at 08:45:20 UTC, John Colvin wrote:
 The tiny subset of numerical linear algebra that is relevant 
 for graphics (mostly very basic operations, 2,3 or 4 
 dimensions) is not at all representative of the whole. The 
 algorithms are different and the APIs are often necessarily 
 different.

 Even just considering scale, no one sane calls in to BLAS to 
 multiply a 3*3 matrix by a 3 element vector, simultaneously no 
 one sane *doesn't* call in to BLAS or an equivalent to 
 multiply two 500*500 matrices.
I think there is a conflict of interest with what people want. There seem to be people like me who only want or need simple matrices like glm to do basic geometric/graphics related stuff. Then there is the group of people who want large 500x500 matrices to do weird crazy maths stuff. Maybe they should be kept separate? In which case then we are really talking about adding two different things. Maybe have a std.math.matrix and a std.blas?
Yes, that's what I was trying to point out. Anyway, gl3n or similar would be great to have in phobos, I've used it quite a bit and think it's great, but it should be very clear that it's not a general purpose matrix/linear algebra toolkit. It's a specialised set of types and operations specifically for low-dimensional geometry, with an emphasis on common graphics idioms.
Jun 13 2015
prev sibling next sibling parent Timon Gehr <timon.gehr gmx.ch> writes:
On 06/13/2015 12:35 PM, Tofu Ninja wrote:
 On Saturday, 13 June 2015 at 08:45:20 UTC, John Colvin wrote:
 The tiny subset of numerical linear algebra that is relevant for
 graphics (mostly very basic operations, 2,3 or 4 dimensions) is not at
 all representative of the whole. The algorithms are different and the
 APIs are often necessarily different.

 Even just considering scale, no one sane calls in to BLAS to multiply
 a 3*3 matrix by a 3 element vector, simultaneously no one sane
 *doesn't* call in to BLAS or an equivalent to multiply two 500*500
 matrices.
I think there is a conflict of interest with what people want. There seem to be people like me who only want or need simple matrices like glm to do basic geometric/graphics related stuff. Then there is the group of people who want large 500x500 matrices to do weird crazy maths stuff.
(It's neither weird nor crazy.)
 Maybe they should be kept separate?
I think there's no point to that. Just have dynamically sized and fixed sized versions. Why should they be incompatible? It's the same concept.
Jun 13 2015
prev sibling parent reply "weaselcat" <weaselcat gmail.com> writes:
On Saturday, 13 June 2015 at 10:35:55 UTC, Tofu Ninja wrote:
 On Saturday, 13 June 2015 at 08:45:20 UTC, John Colvin wrote:
 The tiny subset of numerical linear algebra that is relevant 
 for graphics (mostly very basic operations, 2,3 or 4 
 dimensions) is not at all representative of the whole. The 
 algorithms are different and the APIs are often necessarily 
 different.

 Even just considering scale, no one sane calls in to BLAS to 
 multiply a 3*3 matrix by a 3 element vector, simultaneously no 
 one sane *doesn't* call in to BLAS or an equivalent to 
 multiply two 500*500 matrices.
I think there is a conflict of interest with what people want. There seem to be people like me who only want or need simple matrices like glm to do basic geometric/graphics related stuff. Then there is the group of people who want large 500x500 matrices to do weird crazy maths stuff. Maybe they should be kept separate? In which case then we are really talking about adding two different things. Maybe have a std.math.matrix and a std.blas?
+1. Nobody uses general purpose linear algebra libraries for games/graphics for a reason: game math libraries take shortcuts everywhere and are extensively optimized (e.g. for cache lines) for the common vec3/mat4 types. Many optimizations that benefit massive matrices are performance detriments for tiny graphics-oriented matrices. This is just shoehorning, plain and simple.
Jun 14 2015
parent reply "Ola Fosheim Grøstad" writes:
On Sunday, 14 June 2015 at 08:14:21 UTC, weaselcat wrote:
 nobody uses general purpose linear matrix libraries for 
 games/graphics for a reason,
The reason is that C++ didn't provide anything. As a result each framework provides its own and you get N different libraries that are incompatible. There is no good reason for making small-matrix libraries incompatible with the rest of the eco-system given the templating system you have in D. What you need is a library that supports multiple representations and can do the conversions. Of course, you'll do better if you also have term-rewriting/AST-macros.
Jun 14 2015
parent reply "Ilya Yaroshenko" <ilyayaroshenko gmail.com> writes:
On Sunday, 14 June 2015 at 09:07:19 UTC, Ola Fosheim Grøstad 
wrote:
 On Sunday, 14 June 2015 at 08:14:21 UTC, weaselcat wrote:
 nobody uses general purpose linear matrix libraries for 
 games/graphics for a reason,
The reason is that C++ didn't provide anything. As a result each framework provide their own and you get N different libraries that are incompatible. There is no good reason for making small-matrix libraries incompatible with the rest of eco-system given the templating system you have in D. What you need is a library that supports multiple representations and can do the conversions. Of course, you'll do better if you also have term-rewriting/AST-macros.
The reason is that general purpose matrices are allocated on the heap, while small graphics matrices are plain structs. `opCast(T)` should be enough.
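A rough sketch of what that conversion could look like (type names invented for illustration, not a proposed API):

```d
// Hypothetical sketch: converting a stack-allocated fixed-size
// matrix into a heap-backed dynamic one via opCast.
struct Mat(T, size_t R, size_t C)
{
    T[R * C] data;   // plain struct, no heap allocation

    // Convert to a dynamic matrix when a general-purpose
    // routine needs one.
    DynMat!T opCast(U : DynMat!T)() const
    {
        return DynMat!T(R, C, data[].dup);
    }
}

struct DynMat(T)
{
    size_t rows, cols;
    T[] data;        // heap-allocated storage
}
```

Usage would then be an explicit `cast(DynMat!float) m` at the boundary between the graphics and general-purpose worlds.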
Jun 14 2015
parent reply "Ola Fosheim Grøstad" writes:
On Sunday, 14 June 2015 at 09:19:19 UTC, Ilya Yaroshenko wrote:
 The reason is general purpose matrixes allocated at heap, but 
 small graphic matrices are plain structs.
No, the reason is that LA-libraries are C-libraries that also deal with variable sized matrices. A good generic API can support both. You cannot create a good generic API in C. You can in D.
Jun 14 2015
parent reply "Ilya Yaroshenko" <ilyayaroshenko gmail.com> writes:
On Sunday, 14 June 2015 at 09:25:25 UTC, Ola Fosheim Grøstad 
wrote:
 On Sunday, 14 June 2015 at 09:19:19 UTC, Ilya Yaroshenko wrote:
 The reason is general purpose matrixes allocated at heap, but 
 small graphic matrices are plain structs.
No, the reason is that LA-libraries are C-libraries that also deal with variable sized matrices. A good generic API can support both. You cannot create a good generic API in C. You can in D.
We need D's own BLAS implementation to do it. Sigh, DBLAS would be the largest part of std.
Jun 14 2015
parent reply "Ola Fosheim Grøstad" writes:
On Sunday, 14 June 2015 at 09:59:22 UTC, Ilya Yaroshenko wrote:
 We need D own BLAS implementation to do it.
Why can't you use "version" for those that want to use a BLAS library for the implementation? Those who want replications of LAPACK/LINPACK APIs can use separate bindings? And those who want to use BLAS directly would not use phobos anyway, but a direct binding so they can switch implementation? I think a good generic higher level linear algebra library for D should aim to be equally useful for 2D graphics, 3D/4D GPU graphics, CAD solid modelling, robotics, 3D raytracing, higher dimensional fractals, physics sims, image processing, signal processing, scientific computing (which is pretty wide) and more. The Phobos API should be user-level, not library-level like BLAS. IMO. Do you really want an API that looks like this in Phobos? http://www.netlib.org/blas/ BLAS/LAPACK/LINPACK all originate in Fortran with a particular scientific tradition in mind, so I think one should rethink how D goes about this. Fortran has very primitive abstraction mechanisms. This stuff is stuck in the 80s…
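In outline, the `version` approach might look like this (a sketch only; `UseCBLAS` and the shape of the `cblas` binding are assumptions, not an existing Phobos interface):

```d
// Hypothetical sketch: dispatch matrix multiplication either to an
// external CBLAS implementation or to a pure-D fallback, selected
// at compile time with -version=UseCBLAS.
void matmul(double[] c, const(double)[] a, const(double)[] b,
            size_t n)  // square n*n matrices, row-major
{
    assert(a.length == n * n && b.length == n * n && c.length == n * n);
    version (UseCBLAS)
    {
        import cblas;  // assumed binding to a vendor BLAS
        gemm(Order.RowMajor, Transpose.NoTrans, Transpose.NoTrans,
             cast(int) n, cast(int) n, cast(int) n,
             1.0, a.ptr, cast(int) n, b.ptr, cast(int) n,
             0.0, c.ptr, cast(int) n);
    }
    else
    {
        // Portable fallback: naive triple loop.
        foreach (i; 0 .. n)
            foreach (j; 0 .. n)
            {
                double s = 0;
                foreach (k; 0 .. n)
                    s += a[i * n + k] * b[k * n + j];
                c[i * n + j] = s;
            }
    }
}
```

The user-facing signature stays the same either way; only the build configuration decides whether a vendor library is pulled in.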
Jun 14 2015
next sibling parent reply "Ola Fosheim Grøstad" writes:
I think there might be a disconnection in this thread. D only, or 
D frontend?

There are hardware vendor and commercial libraries that are 
heavily optimized for particular hardware configurations. There 
is no way a D-only solution can beat those. As an example Apple 
provides various implementations for their own machines, so an 
old program on a new machine can run faster than a static D-only 
library solution.

What D can provide is a unifying abstraction, but to get there 
one need to analyze what exists. Like Apple's Accelerate 
framework:

https://developer.apple.com/library/prerelease/ios/documentation/Accelerate/Reference/AccelerateFWRef/index.html#//apple_ref/doc/uid/TP40009465

That goes beyond BLAS. We also need to look at vDSP etc. You'll 
find similar things for Microsoft/Intel/AMD/ARM etc…
Jun 14 2015
parent "Ilya Yaroshenko" <ilyayaroshenko gmail.com> writes:
On Sunday, 14 June 2015 at 10:43:24 UTC, Ola Fosheim Grøstad 
wrote:
 I think there might be a disconnection in this thread. D only, 
 or D frontend?

 There are hardware vendor and commercial libraries that are 
 heavily optimized for particular hardware configurations. There 
 is no way a D-only solution can beat those. As an example Apple 
 provides various implementations for their own machines, so an 
 old program on a new machine can run faster than a static 
 D-only library solution.

 What D can provide is a unifying abstraction, but to get there 
 one need to analyze what exists. Like Apple's Accelerate 
 framework:

 https://developer.apple.com/library/prerelease/ios/documentation/Accelerate/Reference/AccelerateFWRef/index.html#//apple_ref/doc/uid/TP40009465

 That goes beyond BLAS. We also need to look at vDSP etc. You'll 
 find similar things for Microsoft/Intel/AMD/ARM etc…
+1
Jun 14 2015
prev sibling parent reply "Ilya Yaroshenko" <ilyayaroshenko gmail.com> writes:
On Sunday, 14 June 2015 at 10:15:08 UTC, Ola Fosheim Grøstad 
wrote:
 On Sunday, 14 June 2015 at 09:59:22 UTC, Ilya Yaroshenko wrote:
 We need D own BLAS implementation to do it.
Why can't you use "version" for those that want to use a BLAS library for the implementation? Those who want replications of LAPACK/LINPACK APIs can use separate bindings? And those who want to use BLAS directly would not use phobos anyway, but a direct binding so they can switch implementation? I think a good generic higher level linear algebra library for D should aim to be equally useful for 2D Graphics, 3D/4D GPU graphics, CAD solid modelling, robotics, 3D raytracing, higher dimensional fractals, physics sims, image processing, signal processing, scientific computing (which is pretty wide) and more. The Phobos API should be user-level, not library-level like BLAS. IMO. You really want an API that look like this in Phobos? http://www.netlib.org/blas/ BLAS/LAPACK/LINPACK all originate in Fortran with a particular scientific tradition in mind, so I think one should rethink how D goes about this. Fortran has very primitive abstraction mechanisms. This stuff is stuck in the 80s…
I really don't understand what you mean by the "generic" keyword. Do you want one matrix type that includes all cases??? I hope you don't. If not, then yes, it should be generic like the rest of Phobos. But we would have one module for 3D/4D geometry and 3D/4D matrix/vector multiplications, another module for a general matrix (std.container.matrix) and another module with a generic BLAS (std.numeric.blas) for general purpose matrices. After all of that we can think about scripting-like "m0 = m1*v*m2" features. I think LAPACK will not be implemented in Phobos, but we can use SciD instead.
Jun 14 2015
parent reply "Ola Fosheim Grøstad" writes:
On Sunday, 14 June 2015 at 11:43:46 UTC, Ilya Yaroshenko wrote:
 I am really don't understand what you mean with "generic" 
 keyword.

 Do you want one matrix type that includes all cases???
 I hope you does not.
Yes, that is what generic programming is about. The type should signify the semantics, not the exact representation. Then you alias common types: "float4x4" etc. It does take a lot of abstraction design work. I've done some of it in C++ for sliced views over memory and arrays, and I'd say you need many iterations to get it right.
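The aliasing idea might look like this (a hypothetical sketch): one generic type carries the semantics, and aliases name the common concrete cases so application code never sees the type parameters:

```d
// Hypothetical sketch: one generic type carrying the semantics,
// with aliases naming the common concrete cases.
struct Matrix(T, size_t Rows, size_t Cols)
{
    T[Rows * Cols] data;   // row-major, fixed-size storage

    // 2D indexing returning a reference, so m[r, c] = x works.
    ref T opIndex(size_t r, size_t c)
    {
        return data[r * Cols + c];
    }
}

// Application code only sees the friendly names.
alias float4x4 = Matrix!(float, 4, 4);
alias double3x3 = Matrix!(double, 3, 3);
```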
 If not, yes it should be generic like all other Phobos. But we 
 will have one module for 3D/4D geometric and 3D/4D 
 matrix/vector multiplications, another module for general 
 matrix (std.container.matrix) and another module with generic 
 BLAS (std.numeric.blas) for general purpose matrixes. After all 
 of that we can think about scripting like "m0 = m1*v*m2" 
 features.
All I can say is that I have a strong incentive to avoid using Phobos features if D does not automatically utilize the best OS/CPU vendor provided libraries in a portable manner and with easy-to-read high level abstractions. D's strength compared to C++/Rust is that D can evolve to be easier to use than those languages. C++/Rust are hard to use by nature. But usability takes a lot of API design effort, so it won't come easy. D's strength compared to Go is that it can better take advantage of hardware and provide better library abstractions; Go appears to deliberately avoid it. They probably want to stay nimble with very limited hardware-interfacing so that you can easily move it around in the cloud.
Jun 14 2015
parent reply "Ilya Yaroshenko" <ilyayaroshenko gmail.com> writes:
On Sunday, 14 June 2015 at 12:01:47 UTC, Ola Fosheim Grøstad 
wrote:
 On Sunday, 14 June 2015 at 11:43:46 UTC, Ilya Yaroshenko wrote:
 I am really don't understand what you mean with "generic" 
 keyword.

 Do you want one matrix type that includes all cases???
 I hope you does not.
Yes, that is what generic programming is about. The type should signify the semantics, not exact representation. Then you alias common types "float4x4" etc.
std.range has a lot of types + D arrays. The power is in the unified API (a structural type system). For matrices this API is very simple: operations like m1[] += m2, transposed, etc. Ilya
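A sketch of that structural approach, by analogy with isInputRange (the trait name here is invented): any type exposing `rows`, `cols` and 2D indexing qualifies, so one generic algorithm works on fixed-size and dynamic matrices alike:

```d
// Hypothetical structural constraint, analogous to isInputRange:
// anything with rows, cols and 2D indexing counts as a matrix.
enum isMatrixLike(M) = is(typeof((M m) {
    size_t r = m.rows;
    size_t c = m.cols;
    auto x = m[0, 0];
}));

// Generic algorithm works on any conforming type.
auto trace(M)(M m) if (isMatrixLike!M)
{
    typeof(m[0, 0]) s = 0;
    foreach (i; 0 .. m.rows)
        s += m[i, i];
    return s;
}

// A fixed-size type conforming to the structural interface.
struct Fixed2x2
{
    double[4] data;
    enum size_t rows = 2, cols = 2;
    double opIndex(size_t r, size_t c) const { return data[r * 2 + c]; }
}
```

A heap-backed dynamic matrix with the same members would satisfy the same constraint, which is the whole appeal of the structural style.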
Jun 14 2015
parent reply "Ola Fosheim Grøstad" writes:
On Sunday, 14 June 2015 at 12:18:39 UTC, Ilya Yaroshenko wrote:
 std.range has a lot of types + D arrays.
 The power in unified API (structural type system).
Yeah, I agree that templates in C++/D more or less make those type systems structural-like, even though C++ is using nominal typing. I've also found that although the combinatorial explosion is a possibility, most applications I write have a "types.h" file that defines the subset I want to use for that application. So the combinatorial explosion is not such a big deal after all. But one needs to be patient and add lots of static_asserts… since the template type system is weak.
 For matrixes this API is very simple: operations like m1[] += 
 m2, transposed, etc.
I think it is a bit more complicated than that. You also need to think about alignment, padding, strides, convolutions, identity matrices, invertible matrices, windows on a stream, higher order matrices etc…
Jun 14 2015
parent reply "Ilya Yaroshenko" <ilyayaroshenko gmail.com> writes:
On Sunday, 14 June 2015 at 12:52:52 UTC, Ola Fosheim Grøstad 
wrote:
 On Sunday, 14 June 2015 at 12:18:39 UTC, Ilya Yaroshenko wrote:
 std.range has a lot of types + D arrays.
 The power in unified API (structural type system).
Yeah, I agree that templates in C++/D more or less makes those type systems structural-like, even though C is using nominal typing. I've also found that although the combinatorial explosion is a possibility, most applications I write have a "types.h" file that define the subset I want to use for that application. So the combinatorial explosion is not such a big deal after all. But one need to be patient and add lots of static_asserts… since the template type system is weak.
 For matrixes this API is very simple: operations like m1[] += 
 m2, transposed, etc.
I think it is a bit more complicated than that. You also need to think about alignment, padding, strides, convolutions, identiy matrices, invertible matrices, windows on a stream, higher order matrices etc…
Alignment and strides (windows on a stream - I understand that as sliding windows) are not a problem. Convolutions, identity matrices, and invertible matrices are stuff I don't want to see in Phobos. They are about a "MathD", not about a (big) standard library. For higher order slices see https://github.com/D-Programming-Language/phobos/pull/3397
Jun 14 2015
parent reply "Ola Fosheim Grøstad" writes:
On Sunday, 14 June 2015 at 13:48:23 UTC, Ilya Yaroshenko wrote:
 Alignment, strides (windows on a stream - I understand it like 
 Sliding Windows) are not a problem.
It isn't a problem if you use the best possible abstraction from the start. It is a problem if you don't focus on it from the start.
 Convolutions, identiy matrices, invertible matrices are stuff I 
 don't want to see in Phobos. They are about "MathD" not about 
 (big) standard library.
I don't see how you can get good performance without special casing identity matrices, transposed matrices and so on. You surely need to support matrix inversion, Gauss-Jordan elimination (or the equivalent) etc?
Jun 14 2015
parent reply "Ilya Yaroshenko" <ilyayaroshenko gmail.com> writes:
On Sunday, 14 June 2015 at 14:02:59 UTC, Ola Fosheim Grøstad 
wrote:
 On Sunday, 14 June 2015 at 13:48:23 UTC, Ilya Yaroshenko wrote:
 Alignment, strides (windows on a stream - I understand it like 
 Sliding Windows) are not a problem.
It isn't a problem if you use the best possible abstraction from the start. It is a problem if you don't focus on it from the start.
I am sorry for this trolling: Lisp is the best abstraction, though. Sometimes I find very cool abstract libraries with a relatively small number of users. For example, many programmers don't want to use Boost only because its abstractions make them crazy.
 Convolutions, identiy matrices, invertible matrices are stuff 
 I don't want to see in Phobos. They are about "MathD" not 
 about (big) standard library.
I don't see how you can get good performance without special casing identity matrices, transposed matrices and so on. You surely need to support matrix inversion, Gauss-Jordan elimination (or the equivalent) etc?
For daily scientific purposes - yes. For an R/Matlab-like mathematical library - yes. For real-world applications - no. An engineer can achieve the best performance without special cases by lowering the "abstraction" level. Simplicity and transparency ("how it works") are more important in this case.
Jun 14 2015
parent reply "Ola Fosheim Grøstad" writes:
On Sunday, 14 June 2015 at 14:25:11 UTC, Ilya Yaroshenko wrote:
 I am sorry for this trolling:
 Lisp is the best abstraction, thought.
Even if it was, it does not provide the meta info and alignment type constraints that make it possible to hardware/SIMD optimize it behind the scenes.
 For example many programmers don't want to use Boost only 
 because it's abstractions makes them crazy.
Yes, C++ templates are a hard nut to crack; if D added excellent pattern matching to its metaprogramming repertoire, then I think this would be enough to put D in a different league. Application programmers should not have to deal with lots of type parameters; they can use the simplified version (aliases). That's what I do in my C++ libs, using templated aliasing to make a complicated type composition easy to use while still getting the benefits of generic pattern matching and generic programming.
 Convolutions, identiy matrices, invertible matrices are stuff
For daily scientific purposes - yes. For R/Matlab like mathematical library - yes. For real world application - no. Engineer can achieve best performance without special cases by lowering "abstraction" down. Simplicity and transparency ("how it works") is more important in this case.
Getting platform optimized versions of frequently used heavy operations is the primary reason why I would use a builtin library over rolling my own. Especially if the compiler has builtin high-level optimizations for the algebra. A naive basic matrix library is simple to write; I don't need standard library support for that, and I get it to work the way I want by using SIMD registers directly... => I probably would not use it if I could implement it in less than 10 hours.
Jun 14 2015
next sibling parent "weaselcat" <weaselcat gmail.com> writes:
On Sunday, 14 June 2015 at 14:46:36 UTC, Ola Fosheim Grøstad 
wrote:
 Yes, C++ templates are a hard nut to crack, if D had added 
 excellent pattern matching to its meta programming repertoire 
 the I think this would be enough to put D in a different league.
https://github.com/solodon4/Mach7
Jun 14 2015
prev sibling parent reply "Ilya Yaroshenko" <ilyayaroshenko gmail.com> writes:
 A naive basic matrix library is simple to write, I don't need 
 standard library support for that + I get it to work the way I 
 want by using SIMD registers directly... => I probably would 
 not use it if I could implement it in less than 10 hours.
A naive std.algorithm and std.range is easy to write too.
Jun 14 2015
parent reply "Ola Fosheim Grøstad" writes:
On Sunday, 14 June 2015 at 15:15:38 UTC, Ilya Yaroshenko wrote:
 A naive basic matrix library is simple to write, I don't need 
 standard library support for that + I get it to work the way I 
 want by using SIMD registers directly... => I probably would 
 not use it if I could implement it in less than 10 hours.
A naive std.algorithm and std.range is easy to write too.
I wouldn't know. People have different needs. Builtin for-each loops, threads and SIMD support are more important to me than iterators (ranges). But the problem with linear algebra is that you might want to do SIMD optimized versions where you calculate 4 equations at a time, do reshuffling etc. So a library solution has to provide substantial benefits.
Jun 14 2015
parent reply "Ilya Yaroshenko" <ilyayaroshenko gmail.com> writes:
On Sunday, 14 June 2015 at 18:05:33 UTC, Ola Fosheim Grøstad 
wrote:
 On Sunday, 14 June 2015 at 15:15:38 UTC, Ilya Yaroshenko wrote:
 A naive basic matrix library is simple to write, I don't need 
 standard library support for that + I get it to work the way 
 I want by using SIMD registers directly... => I probably 
 would not use it if I could implement it in less than 10 
 hours.
A naive std.algorithm and std.range is easy to write too.
I wouldn't know. People have different needs. Builtin for-each-loops, threads and SIMD support are more important to me than iterators (ranges). But the problem with linear algebra is that you might want to do SIMD optimized versions where you calculate 4 equations at the time, do reshuffling etc. So a library solution has to provide substantial benefits.
Yes, but it would be hard to create a SIMD optimised version. What do you think about this chain of steps?

1. Create generalised (only a type template and maybe flags) BLAS algorithms (probably slow) with a CBLAS-like API.
2. Allow users to use existing CBLAS libraries inside the generalised BLAS.
3. Start to improve the generalised BLAS with SIMD instructions.
4. And then continue the discussion about the types of matrices we want...
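Step 1 might start out as something like the following (a hedged sketch with invented names; a probably-slow reference kernel, nothing like a tuned BLAS, and without transpose flags yet):

```d
// Hypothetical sketch of step 1: a generalised gemm-like kernel
// templated on the element type, with a CBLAS-flavoured contract.
// Computes C = alpha * A * B + beta * C, row-major,
// where A is m*k, B is k*n and C is m*n.
void gemm(T)(size_t m, size_t n, size_t k,
             T alpha, const(T)[] a, const(T)[] b,
             T beta, T[] c)
{
    assert(a.length == m * k && b.length == k * n && c.length == m * n);
    foreach (i; 0 .. m)
        foreach (j; 0 .. n)
        {
            T s = 0;
            foreach (l; 0 .. k)
                s += a[i * k + l] * b[l * n + j];
            c[i * n + j] = alpha * s + beta * c[i * n + j];
        }
}
```

Step 2 would then swap this body for a call into an existing CBLAS when one is available, keeping the same signature.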
Jun 14 2015
parent reply "Ola Fosheim Grøstad" writes:
On Sunday, 14 June 2015 at 18:49:21 UTC, Ilya Yaroshenko wrote:
 Yes, but it would be hard to create SIMD optimised version.
Then again clang is getting better at this stuff.
 What do you think about this chain of steps?

 1. Create generalised (only type template and my be flags) BLAS 
 algorithms (probably  slow) with CBLAS like API.
 2. Allow users to use existing CBLAS libraries inside 
 generalised BLAS.
 3. Start to improve generalised BLAS with SIMD instructions.
 4. And then continue discussion about type of matrixes we 
 want...
Hmm… I don't know. In general I think the best thing to do is to develop libraries with a project and then turn them into something more abstract. If I had more time I think I would have made the assumption that we could make LDC produce whatever the next version of clang can do with pragmas/GCC extensions, and used that assumption for building some prototypes. So I would:

1. Prototype typical constructs in C, compile them with the next version of llvm/clang (with e.g. 4x loop-unrolling, trying different optimization/vectorizing options), then look at the output in LLVM IR and assembly mnemonics.

2. Then write similar code with a hardware optimized BLAS and benchmark where the overhead between pure C/LLVM and BLAS calls balances out to even.

Then you have a rough idea of what the limitations of the current infrastructure look like, and can start modelling the template types in D? I'm not sure that you should use SIMD directly, but align the memory for it. Like, on iOS you end up using LLVM subsets because of the new bitcode requirements. Ditto for PNaCl. Just a thought, but that's what I would do.
Jun 14 2015
next sibling parent "Ola Fosheim Grøstad" writes:
Another thing worth noting is that I believe Intel has put some 
effort into next gen (?) LLVM/Clang for autovectorizing into 
AVX2. It might be worth looking into as it uses a mask that 
allows the CPU to skip computations that would lead to no change, 
but I think it is only available on last gen Intel CPUs.

Also worth keeping in mind is that future versions of LLVM will 
have to deal with GCC extensions and perhaps also Clang pragmas. 
So maybe take a look at:

http://clang.llvm.org/docs/LanguageExtensions.html#vectors-and-extended-vectors

and

http://clang.llvm.org/docs/LanguageExtensions.html#extensions-for-loop-hint-optimizations

?
Jun 14 2015
prev sibling parent reply "anonymous" <a b.cd> writes:
 1. Create generalised (only type template and my be flags) 
 BLAS algorithms (probably  slow) with CBLAS like API.
See [1] (the Matmul benchmark). Julia Native is probably backed by Intel MKL or OpenBLAS. The D version was optimized by Martin Nowak [2] and is still _much_ slower.
 2. Allow users to use existing CBLAS libraries inside 
 generalised BLAS.
I think a good interface is more important than the speed of the default implementation (at least for e.g. large matrix multiplication). Just use existing code for speed... Goto's papers about his BLAS: [3][4]. Having something competitive in D would be great but probably a lot of work. Without a good D interface, dstep + the openBLAS/Atlas headers will not look that bad. Note I am not talking about small matrices/graphics.
 3. Start to improve generalised BLAS with SIMD instructions.
nice, but not really important. A good interface to an existing high quality BLAS seems more important to me than a fast D linear algebra implementation + a CBLAS-like interface.
 4. And then continue discussion about type of matrixes we 
 want...
+1
 2. Then write similar code with hardware optimized BLAS and 
 benchmark where the overhead between pure C/LLVM and BLAS calls 
 balance out to even.
maybe there are more important/beneficial things to work on - assuming the total time of contributors is fixed and used for other D stuff :) [1] https://github.com/kostya/benchmarks [2] https://github.com/kostya/benchmarks/pull/6 [3] http://www.cs.utexas.edu/users/flame/pubs/GotoTOMS2.pdf [4] http://www.cs.utexas.edu/users/pingali/CS378/2008sp/papers/gotoPaper.pdf
Jun 14 2015
parent reply "Ola Fosheim Grøstad" writes:
On Sunday, 14 June 2015 at 21:31:53 UTC, anonymous wrote:
 2. Then write similar code with hardware optimized BLAS and 
 benchmark where the overhead between pure C/LLVM and BLAS 
 calls balance out to even.
may there are more important / beneficial things to work on - assuming total time of contributors is fix and used for other D stuff:)
Sure, but that is what I'd do if I had the time. Get a baseline for what kind of NxN sizes D can reasonably be expected to deal with in a "naive brute force" manner. Then consider pushing anything beyond that over to something more specialized. *shrugs*
Jun 14 2015
next sibling parent "Ola Fosheim Grøstad" writes:
On Sunday, 14 June 2015 at 21:50:02 UTC, Ola Fosheim Grøstad 
wrote:
 Sure, but that is what I'd do if I had the time. Get a baseline 
 for what kind of NxN sizes D can reasonably be expected to deal 
 with in a "naive brute force" manner.
In case it isn't obvious: a potential advantage of a simple algorithm that does "naive brute force" is that the backend might stand a better chance of optimizing it, at least when you have a matrix that is known at compile time.
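A sketch of that point (hypothetical, not benchmarked): with the dimensions as template parameters, the bounds of the naive triple loop are compile-time constants, which gives the backend its best chance to unroll and vectorize:

```d
// Hypothetical sketch: dimensions are template parameters, so the
// triple loop below has compile-time bounds the optimizer can
// unroll and vectorize. Row-major storage in static arrays.
T[M * P] mulNaive(T, size_t M, size_t N, size_t P)(
    ref const T[M * N] a, ref const T[N * P] b)
{
    T[M * P] c;
    foreach (i; 0 .. M)
        foreach (j; 0 .. P)
        {
            T s = 0;
            foreach (k; 0 .. N)
                s += a[i * N + k] * b[k * P + j];
            c[i * P + j] = s;
        }
    return c;
}
```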
Jun 15 2015
prev sibling parent reply "anonymous" <a b.de> writes:
On Sunday, 14 June 2015 at 21:50:02 UTC, Ola Fosheim Grøstad 
wrote:
 On Sunday, 14 June 2015 at 21:31:53 UTC, anonymous wrote:
 2. Then write similar code with hardware optimized BLAS and 
 benchmark where the overhead between pure C/LLVM and BLAS 
 calls balance out to even.
may there are more important / beneficial things to work on - assuming total time of contributors is fix and used for other D stuff:)
Sure, but that is what I'd do if I had the time. Get a baseline for what kind of NxN sizes D can reasonably be expected to deal with in a "naive brute force" manner. Then consider pushing anything beyond that over to something more specialized. *shrugs*
sorry, I should read more carefully. I understood 'optimize the default implementation to the speed of a high quality BLAS for _any_/large matrix size'. Great if it is done, but imo there is no real pressure to do it, and it probably needs a lot of expert time. Benchmarking when an existing BLAS is actually faster than 'naive brute force' sounds very good and reasonable.
Jun 15 2015
next sibling parent "Ilya Yaroshenko" <ilyayaroshenko gmail.com> writes:
On Monday, 15 June 2015 at 08:12:17 UTC, anonymous wrote:
 I understand 'optimize default implementation to the speed of 
 high quality BLAS for _any_/large matrix size'. Great if it is 
 done but imo there is no real pressure to do it and probably 
 needs lot of time of experts.
+1
Jun 15 2015
prev sibling parent "Ola Fosheim Grøstad" writes:
On Monday, 15 June 2015 at 08:12:17 UTC, anonymous wrote:
 sorry, I should read more careful. I understand 'optimize 
 default implementation to the speed of high quality BLAS for 
 _any_/large matrix size'. Great if it is done but imo there is 
 no real pressure to do it and probably needs lot of time of 
 experts.

 To benchmark when existing BLAS is actually faster is than 
 'naive brute force' sounds very good and reasonable.
Yes. Well, I think there are some different expectations about what a standard library should include. In my view BLAS is primarily an API that matters because people have existing code bases, therefore it is common to have good implementations for it. I don't really see any reason why new programs should target it. I think it is a good idea to stay higher level. Provide simple implementations that the optimizer can deal with. Then have a benchmarking program that runs on different configurations (os+hardware) to measure when the non-D libraries perform better, and use those when they are faster. So I don't think phobos should provide BLAS as such. That's what I would do, anyway.
Jun 15 2015
prev sibling next sibling parent reply "John Chapman" <johnch_atms hotmail.com> writes:
It's a shame ucent/cent never got implemented. But couldn't they 
be added to Phobos? I often need a 128-bit type with better 
precision than float and double.
Jun 10 2015
next sibling parent reply "ponce" <contact gam3sfrommars.fr> writes:
On Wednesday, 10 June 2015 at 07:56:46 UTC, John Chapman wrote:
 It's a shame ucent/cent never got implemented. But couldn't 
 they be added to Phobos? I often need a 128-bit type with 
 better precision than float and double.
FWIW: https://github.com/d-gamedev-team/gfm/blob/master/math/gfm/math/wideint.d
Jun 10 2015
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 6/10/15 1:53 AM, ponce wrote:
 On Wednesday, 10 June 2015 at 07:56:46 UTC, John Chapman wrote:
 It's a shame ucent/cent never got implemented. But couldn't they be
 added to Phobos? I often need a 128-bit type with better precision
 than float and double.
FWIW: https://github.com/d-gamedev-team/gfm/blob/master/math/gfm/math/wideint.d
Yes, arbitrary fixed-size integrals would be good to have in Phobos. Who's the author of that code? Can we get something going here? -- Andrei
Jun 10 2015
parent "ponce" <contact gam3sfrommars.fr> writes:
On Wednesday, 10 June 2015 at 15:44:40 UTC, Andrei Alexandrescu 
wrote:
 On 6/10/15 1:53 AM, ponce wrote:
 On Wednesday, 10 June 2015 at 07:56:46 UTC, John Chapman wrote:
 It's a shame ucent/cent never got implemented. But couldn't 
 they be
 added to Phobos? I often need a 128-bit type with better 
 precision
 than float and double.
FWIW: https://github.com/d-gamedev-team/gfm/blob/master/math/gfm/math/wideint.d
Yes, arbitrary fixed-size integrals would be good to have in Phobos. Who's the author of that code? Can we get something going here? -- Andrei
Sorry for the delay. I wrote this code a while ago. I will relicense it any way that is needed (if needed). I currently lack the time to polish it more (adding custom literals would be the one thing to do).
Jun 23 2015
prev sibling next sibling parent reply "John Chapman" <johnch_atms hotmail.com> writes:
On Wednesday, 10 June 2015 at 07:56:46 UTC, John Chapman wrote:
 It's a shame ucent/cent never got implemented. But couldn't 
 they be added to Phobos? I often need a 128-bit type with 
 better precision than float and double.
Other things I often have a need for:

Weak references
Queues, stacks, sets
Logging
Custom date/time formatting
Locale-aware number/currency formatting
HMAC (for OAuth)
URI parsing
Sending email (SMTP)
Continuations for std.parallelism.Task
Database connectivity (sounds like this is on the cards)
HTTP listener
Jun 10 2015
next sibling parent ketmar <ketmar ketmar.no-ip.org> writes:
On Wed, 10 Jun 2015 09:12:15 +0000, John Chapman wrote:

 On Wednesday, 10 June 2015 at 07:56:46 UTC, John Chapman wrote:
 It's a shame ucent/cent never got implemented. But couldn't they be
 added to Phobos? I often need a 128-bit type with better precision than
 float and double.
Other things I often have a need for:

Weak references
+inf for including that in Phobos. Current implementations are hacks that may stop working when the internals change, but if it's in Phobos, it will always be kept up to date.
Jun 10 2015
prev sibling next sibling parent reply "Robert burner Schadek" <rburners gmail.com> writes:
On Wednesday, 10 June 2015 at 09:12:17 UTC, John Chapman wrote:

 Logging
std.experimental.logger!?
Jun 10 2015
parent "John Chapman" <johnch_atms hotmail.com> writes:
On Wednesday, 10 June 2015 at 09:30:37 UTC, Robert burner Schadek 
wrote:
 On Wednesday, 10 June 2015 at 09:12:17 UTC, John Chapman wrote:

 Logging
std.experimental.logger!?
Perfect, he said sheepishly.
Jun 10 2015
prev sibling parent "Marc Schütz" <schuetzm gmx.net> writes:
On Wednesday, 10 June 2015 at 09:12:17 UTC, John Chapman wrote:
 HMAC (for OAuth)
https://github.com/D-Programming-Language/phobos/pull/3233 Unfortunately it triggers a module cycle bug on FreeBSD that I can't figure out, so it hasn't been merged yet.
Jun 10 2015
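For readers unfamiliar with the construction in that pull request: HMAC (RFC 2104) is a keyed hash, and HMAC-SHA1 is the primitive OAuth 1.0 request signing is built on. A quick illustration with Python's stdlib, for comparison only (the proposed Phobos API will of course look different; the key and message here are just the classic test strings, not a real OAuth base string):

```python
import hmac
import hashlib

# HMAC-SHA1 of a message under a secret key.
digest = hmac.new(b"key",
                  b"The quick brown fox jumps over the lazy dog",
                  hashlib.sha1).hexdigest()
print(digest)  # de7c9b85b8b78aa6bc8a7a36f70a90701c9db4d9
```

The point of HMAC over a plain hash of key+message is resistance to length-extension attacks, which matters for exactly the kind of URL signing OAuth does.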
prev sibling parent "Marc Schütz" <schuetzm gmx.net> writes:
On Wednesday, 10 June 2015 at 07:56:46 UTC, John Chapman wrote:
 It's a shame ucent/cent never got implemented. But couldn't 
 they be added to Phobos? I often need a 128-bit type with 
 better precision than float and double.
I think the next release of LDC will support it, at least on some platforms...
Jun 10 2015
prev sibling next sibling parent "rsw0x" <anonymous anonymous.com> writes:
On Sunday, 7 June 2015 at 18:27:16 UTC, Robert burner Schadek 
wrote:
 Phobos is awesome, the libs of go, python and rust only have 
 better marketing.
 As discussed on dconf, phobos needs to become big and blow the 
 rest out of the sky.

 http://wiki.dlang.org/DIP80

 lets get OT, please discuss
std.container.concurrent.*
Jun 13 2015
prev sibling next sibling parent reply Nick Sabalausky <SeeWebsiteToContactMe semitwist.com> writes:
On 06/07/2015 02:27 PM, Robert burner Schadek wrote:
 Phobos is awesome, the libs of go, python and rust only have better
 marketing.
 As discussed on dconf, phobos needs to become big and blow the rest out
 of the sky.

 http://wiki.dlang.org/DIP80

 lets get OT, please discuss
What are the problems with std.json?
Jun 13 2015
parent reply "weaselcat" <weaselcat gmail.com> writes:
On Saturday, 13 June 2015 at 16:53:22 UTC, Nick Sabalausky wrote:
 On 06/07/2015 02:27 PM, Robert burner Schadek wrote:
 Phobos is awesome, the libs of go, python and rust only have 
 better
 marketing.
 As discussed on dconf, phobos needs to become big and blow the 
 rest out
 of the sky.

 http://wiki.dlang.org/DIP80

 lets get OT, please discuss
What are the problems with std.json?
slow
Jun 13 2015
parent "Dennis Ritchie" <dennis.ritchie mail.ru> writes:
Good start:
http://code.dlang.org/packages/dip80-ndslice
https://github.com/9il/dip80-ndslice/blob/master/source/std/experimental/range/ndslice.d

I miss the function `sliced` in Phobos.
Jun 13 2015
prev sibling parent reply "Ilya Yaroshenko" <ilyayaroshenko gmail.com> writes:
On Sunday, 7 June 2015 at 18:27:16 UTC, Robert burner Schadek 
wrote:
 Phobos is awesome, the libs of go, python and rust only have 
 better marketing.
 As discussed on dconf, phobos needs to become big and blow the 
 rest out of the sky.

 http://wiki.dlang.org/DIP80

 lets get OT, please discuss
N-dimensional slices is ready for comments!

Announce: http://forum.dlang.org/thread/rilfmeaqkailgpxoziuo forum.dlang.org

Ilya
Jun 15 2015
parent reply "Dennis Ritchie" <dennis.ritchie mail.ru> writes:
On Monday, 15 June 2015 at 10:00:43 UTC, Ilya Yaroshenko wrote:
 N-dimensional slices is ready for comments!
It seems to me that the properties of the matrix require `row` and `col` like this:

import std.stdio, std.experimental.range.ndslice, std.range : iota;

void main()
{
    auto matrix = 100.iota.sliced(3, 4, 5);
    writeln(matrix[0]); // [[0, 1, 2, 3, 4], [5, 6, 7, 8, 9], [10, 11, 12, 13, 14], [15, 16, 17, 18, 19]]
    // writeln(matrix[0].row); // 4
    // writeln(matrix[0].col); // 5
}

P.S. I'm not exactly sure that these properties should work exactly as in my code :)
Jun 15 2015
next sibling parent reply "John Colvin" <john.loughran.colvin gmail.com> writes:
On Monday, 15 June 2015 at 13:44:53 UTC, Dennis Ritchie wrote:
 On Monday, 15 June 2015 at 10:00:43 UTC, Ilya Yaroshenko wrote:
 N-dimensional slices is ready for comments!
It seems to me that the properties of the matrix require `row` and `col` like this:

import std.stdio, std.experimental.range.ndslice, std.range : iota;

void main()
{
    auto matrix = 100.iota.sliced(3, 4, 5);
    writeln(matrix[0]); // [[0, 1, 2, 3, 4], [5, 6, 7, 8, 9], [10, 11, 12, 13, 14], [15, 16, 17, 18, 19]]
    // writeln(matrix[0].row); // 4
    // writeln(matrix[0].col); // 5
}

P.S. I'm not exactly sure that these properties should work exactly as in my code :)
try .length!0 and .length!1 or .shape[0] and .shape[1]
Jun 15 2015
parent "Ilya Yaroshenko" <ilyayaroshenko gmail.com> writes:
On Monday, 15 June 2015 at 13:55:16 UTC, John Colvin wrote:
 On Monday, 15 June 2015 at 13:44:53 UTC, Dennis Ritchie wrote:
 On Monday, 15 June 2015 at 10:00:43 UTC, Ilya Yaroshenko wrote:
 N-dimensional slices is ready for comments!
It seems to me that the properties of the matrix require `row` and `col` like this:

import std.stdio, std.experimental.range.ndslice, std.range : iota;

void main()
{
    auto matrix = 100.iota.sliced(3, 4, 5);
    writeln(matrix[0]); // [[0, 1, 2, 3, 4], [5, 6, 7, 8, 9], [10, 11, 12, 13, 14], [15, 16, 17, 18, 19]]
    // writeln(matrix[0].row); // 4
    // writeln(matrix[0].col); // 5
}

P.S. I'm not exactly sure that these properties should work exactly as in my code :)
try .length!0 and .length!1 or .shape[0] and .shape[1]
Nitpick: shape contains lengths and strides: .shape.lengths[0] and .shape.lengths[1]
Jun 15 2015
prev sibling parent reply "Ilya Yaroshenko" <ilyayaroshenko gmail.com> writes:
On Monday, 15 June 2015 at 13:44:53 UTC, Dennis Ritchie wrote:
 On Monday, 15 June 2015 at 10:00:43 UTC, Ilya Yaroshenko wrote:
 N-dimensional slices is ready for comments!
It seems to me that the properties of the matrix require `row` and `col` like this:

import std.stdio, std.experimental.range.ndslice, std.range : iota;

void main()
{
    auto matrix = 100.iota.sliced(3, 4, 5);
    writeln(matrix[0]); // [[0, 1, 2, 3, 4], [5, 6, 7, 8, 9], [10, 11, 12, 13, 14], [15, 16, 17, 18, 19]]
    // writeln(matrix[0].row); // 4
    // writeln(matrix[0].col); // 5
}

P.S. I'm not exactly sure that these properties should work exactly as in my code :)
This works:

unittest
{
    import std.stdio, std.experimental.range.ndslice;
    import std.range : iota;
    auto matrix = 100.iota.sliced(3, 4, 5);
    writeln(matrix[0]);
    writeln(matrix[0].length);   // 4
    writeln(matrix[0].length!0); // 4
    writeln(matrix[0].length!1); // 5
    writeln(matrix.length!2);    // 5
}

Prints:

//[[0, 1, 2, 3, 4], [5, 6, 7, 8, 9], [10, 11, 12, 13, 14], [15, 16, 17, 18, 19]]
//4
//4
//5

I am not sure that we need something like `height`/row and `width`/col for nd-slices. These kinds of names can be used after casting to the future `std.container.matrix`.
Jun 15 2015
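For context on what `sliced` and `length!n` are doing under the hood: an N-dimensional slice over a flat range is essentially a tuple of lengths plus row-major strides, and indexing is a dot product of indices with the strides. A rough Python sketch of that arithmetic (illustrative only, not the ndslice implementation):

```python
def make_strides(lengths):
    """Row-major strides for the given lengths (innermost dimension contiguous)."""
    strides = [1] * len(lengths)
    for i in reversed(range(len(lengths) - 1)):
        strides[i] = strides[i + 1] * lengths[i + 1]
    return strides

def flat_index(strides, indices):
    """Map N-dimensional indices to an offset into the underlying flat range."""
    return sum(s * i for s, i in zip(strides, indices))

strides = make_strides([3, 4, 5])  # like 100.iota.sliced(3, 4, 5)
print(strides)                     # [20, 5, 1]
data = list(range(100))            # the flat iota range
# matrix[0][1][2] in the example above is element 7 of the flat range:
print(data[flat_index(strides, (0, 1, 2))])  # 7
```

Storing strides separately from lengths is also what makes cheap transposes and sub-slices possible: both are just rearrangements of the two lists, with no copying of the underlying range.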
parent "Dennis Ritchie" <dennis.ritchie mail.ru> writes:
On Monday, 15 June 2015 at 14:32:20 UTC, Ilya Yaroshenko wrote:
 I am note sure that we need something like `height`/row and 
 `width`/col for nd-slices. This kind of names can be used after 
 casting to the future `std.container.matrix`.
Here something similar is implemented: https://github.com/k3kaimu/carbon/blob/master/source/carbon/linear.d#L52-L56 In the future I'd like something like `rows` and `cols`: https://github.com/k3kaimu/carbon/blob/master/source/carbon/linear.d#L156-L157 Waiting for `static foreach`. This design really helps a lot to implement multidimensional slices.
Jun 15 2015