
digitalmars.D.announce - DConf 2014 Day 2 Talk 3: Designing an Aurora: A Glimpse at the Graphical

reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
https://news.ycombinator.com/newest (please find and vote quickly)

https://twitter.com/D_Programming/status/486540487080554496

https://www.facebook.com/dlang.org/posts/881134858566863

http://www.reddit.com/r/programming/comments/2a5ia9/dconf_2014_day_2_talk_3_designing_an_aurora_a/


Andrei
Jul 08 2014
next sibling parent "Nordlöw" <per.nordlow gmail.com> writes:
On Tuesday, 8 July 2014 at 16:03:36 UTC, Andrei Alexandrescu 
wrote:
http://www.reddit.com/r/programming/comments/2a5ia9/dconf_2014_day_2_talk_3_designing_an_aurora_a/

Very intriguing.

First question for Adam Wilson, I reckon :)

Is the Immutable Scene Object (ISO) supposed to be an exact copy 
(same type and same contents) of the User Scene Object (USO), 
especially with regard to the Model-View-Controller pattern?

https://en.wikipedia.org/wiki/Model-View-Controller

I'm asking because I first thought that

- the USO typically maps to the Model (data), and
- the ISO typically maps to the View (visual representation).
Jul 08 2014
prev sibling next sibling parent reply "Tofu Ninja" <emmons0 purdue.edu> writes:
On Tuesday, 8 July 2014 at 16:03:36 UTC, Andrei Alexandrescu
wrote:
http://www.reddit.com/r/programming/comments/2a5ia9/dconf_2014_day_2_talk_3_designing_an_aurora_a/
Great talk, but I have some reservations about the design. What I am most concerned about is the design of the immediate mode layer. I was one of the few who initially pushed for the immediate mode, but I think you missed the point. There are several points that I want to address, so I will go through them one at a time. Also, I apologize for the wall of text.

*Scene Graph

Personally I find it odd that the immediate mode knows anything about a scene graph at all. Scene graphs are not an end-all-be-all; they do not make everything simpler to deal with. They are one way to solve the problem, but not always the best. D is supposed to be multi-paradigm; locking users into a scene graph design works against that goal. I personally think that the immediate mode should be designed for performance, and the less performant but 'simpler' modes should be built on top of it.

*Performance vs Simplicity

I know that you have stated quite clearly that you do not believe performance should be a main goal of Aurora, and that simplicity is a more important goal. I propose that there is NO reason at all that Aurora can't have both, in the same way that D itself has both. I think it is just a matter of properly defining the goals of each layer. The retained mode should be designed with simplicity in mind while still trying to be performant where possible. On the other hand, the immediate mode should be designed with performance in mind while still trying to be simple where possible. The simple mode(s?) should be built on top of the single performance mode.

*Design

Modern graphics hardware has a very well defined interface, and all modern graphics APIs are converging on matching the hardware as closely as possible. Modern graphics is done by sending buffers of data to the card and having programmable shaders operate on the data, period. I believe that the immediate mode layer should match this as closely as possible. If that involves having some DSL for shaders that gets translated into all the other various shader languages, then so be it; the differences between them are minimal. If the DSL were a subset of D, then all the better.

*2D vs 3D

I think the difference you are making between 2D and 3D is largely artificial. In modern graphics APIs the difference between 2D and 3D is merely a matrix multiply. If the immediate mode were designed as I suggest above, then 2D vs 3D is a non-issue.

*Games

D is already appealing to game developers, and with the work on @nogc and Andrei's work on allocators, it is becoming even more appealing. The outcome of Aurora could land D a VERY strong spot in games that would secure it a very good future, but only if it is done right. I think there is a certain level of responsibility in the way Aurora gets designed that needs to be taken into account.

I know that most of my points are not in line with what you said Aurora would and wouldn't be. I just don't think there is any reason Aurora couldn't achieve the above points while still maintaining its goal of simplicity. Also, I am willing to help; I just don't really know what needs working on. I have a lot of experience with OpenGL on Windows writing high-performance graphics.
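(To make the "2D is just a matrix multiply" point concrete, here is a tiny self-contained D sketch; no real graphics API is involved, and the canvas size and point are made up for illustration. An orthographic projection is all it takes to turn a "3D" transform pipeline into a "2D" one.)

```d
import std.stdio;

// 4x4 matrix (row-major) times a 4-component point.
float[4] transform(const float[16] m, const float[4] v)
{
    float[4] r = 0;
    foreach (row; 0 .. 4)
        foreach (col; 0 .. 4)
            r[row] += m[row * 4 + col] * v[col];
    return r;
}

// An orthographic projection makes the pipeline "2D":
// x and y map to clip space, z passes straight through.
float[16] ortho(float l, float r, float b, float t)
{
    return [2 / (r - l), 0, 0, -(r + l) / (r - l),
            0, 2 / (t - b), 0, -(t + b) / (t - b),
            0, 0, 1, 0,
            0, 0, 0, 1];
}

void main()
{
    auto m = ortho(0, 512, 0, 512);              // a 512x512 "2D" canvas
    auto p = transform(m, [256f, 256f, 0f, 1f]); // canvas centre
    writeln(p); // centre maps to the clip-space origin: [0, 0, 0, 1]
}
```

Swap `ortho` for a perspective matrix and the exact same code path is "3D" again; nothing else changes.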
Jul 08 2014
parent reply "Ola Fosheim Grøstad" writes:
On Wednesday, 9 July 2014 at 04:26:55 UTC, Tofu Ninja wrote:
 Modern graphics hardware has a very well defined interface and
 all modern graphics api's are all converging on matching the
 hardware as close as possible. Modern graphics is done by 
 sending
 buffers of data to the card and having programmable shaders to
 operate on the data, period.
That's true, but OpenGL is being left behind now that there is a push to match the low level of how GPU drivers work. Apple's Metal is oriented towards the tiled PowerVR and scene graphs, probably also with some expectation of supporting the upcoming ray-tracing accelerators. AMD is in talks with Intel (rumour) with the intent of cooperating on Mantle. DirectX is going lower level… So there is really no stability in the API at the lower level. But yes, OpenGL is not particularly suitable for rendering a scene graph without an optimizing engine to reduce context switches.
 largely artificial. In modern graphics api's the difference
 between 2D and 3D is merely a matrix multiply. If the immediate
 mode was designed how I suggest above, then 2D vs 3D is a non
 issue.
Actually, modern 2D APIs like Apple's Quartz are backend-"independent" and render to PDF. Native PDF support is important if you want to have an advantage in the web space and in the application space in general. There is almost no chance anyone wanting to do 3D would use something like Aurora… If you can handle 3D math, you can also do OpenGL, Mantle, Metal? But then again, the official status of Aurora is kind of unclear.
Jul 08 2014
parent reply "Tofu Ninja" <emmons0 purdue.edu> writes:
On Wednesday, 9 July 2014 at 05:30:21 UTC, Ola Fosheim Grøstad 
wrote:
 That's true, but OpenGL is being left behind now that there is 
 a push to match the low level of how GPU drivers work.
As I said, ALL APIs are converging on low-level access, and this includes OpenGL. This means that all major APIs are moving to a buffer+shader model, because this is what the hardware likes (there are also some more interesting things happening with command buffers).
 Apple's Metal is oriented towards the tiled PowerVR and 
 scenegraphs,
I am not exactly sure where you got that idea; Metal is the same, buffers+shaders. The major difference is the command buffer being explicitly exposed, and this is actually what is meant when they say that the API is getting closer to the hardware. In current APIs (DX/OGL) the command buffers are hidden from the user and constructed behind the scenes; in DX it is done by Microsoft, and in OGL it is done by the driver (NVIDIA/AMD/Intel). There has been a push recently for this to be exposed to the user in some form; this is what Metal does, and I believe Mantle does something similar, but I can't be sure because they have not released any documentation.
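(A toy sketch of that difference in D. `Cmd`, `CommandBuffer`, and `Queue` are made-up names, not any real API; the point is only that in the explicit model the application records the very buffer the driver used to build behind the scenes.)

```d
import std.stdio;

// A toy "command" as it might sit in a GPU command buffer.
struct Cmd { string op; int arg; }

// Explicit model (Metal/Mantle-style): the app records commands into a
// buffer it owns, then submits the whole buffer at once.
struct CommandBuffer
{
    Cmd[] cmds;
    void bindBuffer(int id)  { cmds ~= Cmd("bind", id); }
    void draw(int vertices)  { cmds ~= Cmd("draw", vertices); }
}

struct Queue
{
    void submit(const CommandBuffer cb)
    {
        // In the implicit model (classic GL/DX), the driver builds an
        // equivalent buffer behind each glDraw*/Draw* call; here the
        // application did that work up front.
        writefln("submitting %s recorded commands", cb.cmds.length);
    }
}

void main()
{
    CommandBuffer cb;
    cb.bindBuffer(1);
    cb.draw(3);      // one triangle
    cb.draw(3);      // another
    Queue q;
    q.submit(cb);    // prints: submitting 3 recorded commands
}
```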
 probably also with some expectations of supporting the upcoming 
 raytracing accelerators.
I doubt it.
 AMD is in talks with Intel (rumour) with the intent of 
 cooperating on Mantle.
I don't know anything about that but I also doubt it.
 Direct-X is going lower level… So, there is really no stability 
 in the API at the lower level.
On the contrary, all this movement towards low-level APIs is actually causing the APIs to all look very similar.
 But yes, OpenGL is not particularly suitable for rendering a 
 scene graph without an optimizing engine to reduce context 
 switches.
I was not talking explicitly about OGL; I am just talking about video cards in general.
 Actually, modern 2D APIs like Apple's Quartz are backend 
 "independent" and render to PDF. Native PDF support is 
 important if you want to have an advantage in the web space and 
 in the application space in general.
This does not really have anything to do with what I am talking about. I am talking about hardware-accelerated graphics; once it gets into the hardware (GPU), there is no real difference between 2D and 3D.
 There is almost no chance anyone wanting to do 3D would use 
 something like Aurora… If you can handle 3D math you also can 
 do OpenGL, Mantle, Metal?
As it stands now, that may be the case, but I honestly don't see a reason it must be so.
 But then again, the official status for Aurora is kind of 
 unclear.
This is true.
Jul 09 2014
next sibling parent reply "Tofu Ninja" <emmons0 purdue.edu> writes:
On Wednesday, 9 July 2014 at 15:03:13 UTC, Tofu Ninja wrote:

Also I should note, dx and ogl are both also moving towards 
exposing the command buffer.
Jul 09 2014
parent reply "Tofu Ninja" <emmons0 purdue.edu> writes:
On Wednesday, 9 July 2014 at 15:22:35 UTC, Tofu Ninja wrote:
 On Wednesday, 9 July 2014 at 15:03:13 UTC, Tofu Ninja wrote:

 Also I should note, dx and ogl are both also moving towards 
 exposing the command buffer.
I should say that it looks like they are moving in that direction: both OpenGL and DirectX support indirect draws, which is nearly all the way to a command buffer. It is only a matter of time before it becomes a reality (explicitly so with Metal and Mantle).
Jul 09 2014
parent "Ola Fosheim Grøstad" writes:
On Wednesday, 9 July 2014 at 16:25:14 UTC, Tofu Ninja wrote:
 is almost nearly all the way to a command buffer, it is only a 
 matter of time before it become a reality(explicitly with metal 
 and mantel).
Yes, of course, but it does not belong in a stable high level graphics API. It's not gonna work ten years down the road…
Jul 09 2014
prev sibling parent reply "Ola Fosheim Grøstad" writes:
On Wednesday, 9 July 2014 at 15:03:13 UTC, Tofu Ninja wrote:
 I am not exactly sure where you are get that idea, Metal is the 
 same, buffers+shaders. The major difference is the command 
 buffer that is being explicitly exposed, this is actually what 
 is meant when they say that the the api is getting closer to 
 the hardware.
Yes, but 3D APIs are temporary, so they don't belong in a stable development library. Hardware and APIs have been constantly changing for 25 years. My point was that the current move is from heavy graphics contexts with few API calls to explicit command buffers with many API calls. I would think that fits better with tiling, where you defer rendering and sort polygons and therefore get context switches anyway (classic PowerVR on iDevices). It fits better with rendering a display graph directly, or a UI, etc.
 this to be exposed to the user in some form, this is what metal
 does, I believe mantel does something similar but I can't be 
 sure because they have not released any documentation.
Yes, this is what they do. It is closer to what you want for general computing on the GPU. So there is probably a long term strategy for unifying computation and graphics in there somewhere. IIRC Apple claims Metal can be used for general computing as well as 3D.
 probably also with some expectations of supporting the 
 upcoming raytracing accelerators.
I doubt it.
Why? Imagination Technologies (PowerVR) purchased the raytracing accelerator (hardware design/patents) that three former Apple employees designed and has just completed the design for mobile devices so it is close to production. The RTU (ray tracing unit) has supposedly been worked into the same series of GPUs that is used in the iPhone. Speculation, sure, but not unlikely either. http://www.imgtec.com/powervr/raytracing.asp
 AMD is in talks with Intel (rumour) with the intent of 
 cooperating on Mantle.
I don't know anything about that but I also doubt it.
Why? Intel has always been willing to cooperate when AMD holds the strong cards (ATI is stronger than Intel's 3D division). http://www.phoronix.com/scan.php?page=news_item&px=MTcyODY
 On the contrary, all this movement towards low level API is 
 actually causing the API's to all look vary similar.
I doubt it. ;-) Apple wants unique AAA titles on their iDevices to keep Android/Winphone at bay and to defend the high profit margins. They have no interest in portable low-level access and will just point at OpenGL ES 2 for that.
 graphics, once it gets into the hardware(gpu), there is no real 
 difference between 2d and 3d.
True, but that is not a very stable abstraction level. Display Postscript/PDF etc is much more stable. It is also a very useful abstraction level since it means you can use the same graphics API for sending a drawing to the screen, to the printer or to a file.
 As it stands now, that may be the case, but I honestly don't 
 see a reason it must be so.
Well, having the abstractions for opening a drawing context, input devices etc would be useful, but not really a language level task IMO. Solid cross platform behaviour on that level will never happen (just think about what you have to wrap up on Android).
Jul 09 2014
parent reply "Tofu Ninja" <emmons0 purdue.edu> writes:
On Wednesday, 9 July 2014 at 16:21:55 UTC, Ola Fosheim Grøstad
wrote:
 My point was that the current move is from heavy graphic 
 contexts with few API calls to explicit command buffers with 
 many API calls. I would think it fits better to tiling where 
 you defer rendering and sort polygons and therefore get context 
 switches anyway (classical PowerVR on iDevices). It fits better 
 to rendering a display graph directly, or UI etc.
Actually, it seems to be moving to fewer and fewer API calls where possible (see AZDO), with lightweight contexts.
 Yes, this is what they do. It is closer to what you want for 
 general computing on the GPU. So there is probably a long term 
 strategy for unifying computation and graphics in there 
 somewhere. IIRC Apple claims Metal can be used for general 
 computing as well as 3D.
Yeah, it seems like that is where everything is going, very fast; that is why I wish Aurora could try to follow it.
 Why?

 Imagination Technologies (PowerVR) purchased the raytracing 
 accelerator (hardware design/patents) that three former Apple 
 employees designed and has just completed the design for mobile 
 devices so it is close to production. The RTU (ray tracing 
 unit) has supposedly been worked into the same series of GPUs 
 that is used in the iPhone. Speculation, sure, but not unlikely 
 either.

 http://www.imgtec.com/powervr/raytracing.asp
This is actually really cool. I just don't see real-time ray tracing being usable (for games and the like) for at least another 5-10 years, though I will certainly be very happy to be wrong.
 Why?

 Intel has always been willing to cooperate when AMD holds the 
 strong cards (ATI is stronger than Intel's 3D division).

 http://www.phoronix.com/scan.php?page=news_item&px=MTcyODY
You may be right, I don't know; it just doesn't seem like something they would do, to me. Just a gut feeling, no real basis to back it up.
 I doubt it. ;-)

 Apple wants unique AAA titles on their iDevices to keep 
 Android/Winphone at bay and to defend the high profit margins. 
 They have no interest in portable low level access and will 
 just point at OpenGL 2ES for that.
They will all be incompatible of course, no way we could get a decent standard... nooooooo. All I am saying is that as they get closer and closer to the hardware, they will all start looking relatively similar. After all, if they are all trying to get close to the same thing (the hardware), then by association they are getting closer to each other. There will be stupid little nuances that make them incompatible, but they will still be doing basically the same thing. Hardware-specific APIs (Mantle) complicate this a little, but not by much; all the GPU hardware out there (excluding niche stuff like hardware ray tracers :P) has basically the same interface.
 True, but that is not a very stable abstraction level. Display 
 Postscript/PDF etc is much more stable. It is also a very 
 useful abstraction level since it means you can use the same 
 graphics API for sending a drawing to the screen, to the 
 printer or to a file.
I think it's a fine abstraction level; buffers and shaders are not hard concepts at all. All the APIs that Aurora is going to be based on offer them, and all modern GPUs support them. If shaders were written in a DSL, then in the case where Aurora needs to fall back to software rendering they could be translated to D code and mixed right in. When they need to be turned into some API-specific shader, they could be translated at compile time (the differences should mostly just be syntax). If the DSL were a subset of D, that would simplify it even further, as well as make the learning curve much smaller. It's a perfectly fine level of abstraction for any sort of graphics that also happens to be supported very well by modern GPUs. I don't see the problem.
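(A sketch of what the software-fallback half of that could look like. All names here are hypothetical, nothing from Aurora: the "shader" is an ordinary D function, and a software backend simply calls it per pixel; a hardware backend would instead translate the same body to GLSL/HLSL.)

```d
import std.stdio;

struct Pixel { ubyte r, g, b; }

// A "pixel shader" written as plain D: given normalized coordinates,
// return a colour. A software backend just calls it per pixel; a
// hardware backend would translate the same body to GLSL/HLSL.
Pixel gradientShader(float u, float v)
{
    return Pixel(cast(ubyte)(u * 255), cast(ubyte)(v * 255), 128);
}

// Minimal software "rasterizer": run the shader over a framebuffer.
Pixel[] render(int w, int h, Pixel function(float, float) shader)
{
    auto fb = new Pixel[w * h];
    foreach (y; 0 .. h)
        foreach (x; 0 .. w)
            fb[y * w + x] = shader(cast(float)x / (w - 1),
                                   cast(float)y / (h - 1));
    return fb;
}

void main()
{
    auto fb = render(4, 4, &gradientShader);
    writeln(fb[15]); // bottom-right corner: Pixel(255, 255, 128)
}
```

The compile-time-translation half (D body to GLSL/HLSL source) is the hard part the thread is debating; this only shows why the same source can serve both backends.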
 Well, having the abstractions for opening a drawing context, 
 input devices etc would be useful, but not really a language 
 level task IMO. Solid cross platform behaviour on that level 
 will never happen (just think about what you have to wrap up on 
 Android).
Well, in that case Aurora should be designed as a software renderer, with hardware support as a possible addition later on. But that comes back to the point that it is a little iffy what Aurora is actually trying to be. Personally, I would be disappointed if it went down that route.
Jul 09 2014
next sibling parent reply "Ola Fosheim Grøstad" writes:
On Thursday, 10 July 2014 at 00:22:39 UTC, Tofu Ninja wrote:
 Actually is seems to be moving to fewer and fewer api calls 
 where possible(see AZDO) with lightweight contexts.
Yeah, AZDO appears to work within the OpenGL framework as is. However, I get a feeling that there will be more moves from Intel/AMD towards integrating the FPUs of the GPU with the CPU. Phi, the HPC CPU from Intel, was meant to support rendering (Larrabee).
 Yeah, it seems like that is where everything is going very fast,
 that is why I wish Aurora could try to follow that.
Depends on what Aurora is meant to target. The video says it is meant to be more of a playful environment that allows pac-man mockups and possibly GUIs in the long run, but I am not sure who would want that. There are so many much better IDE/REPL environments for that: Swift, Flash & Co, HTML5/WebGL/Dart/PNaCl, Python, and lots of advanced frameworks with engines for cross-platform mobile development at all kinds of proficiency levels.

Seems to me that a language like D needs separate frameworks:

1. A barebones high-performance 3D library with multiple backends that follow the hardware trends (kind of what you are suggesting). Suitable for creating games and HPC stuff.

2. A stable high-level API with geometric libraries for dealing with established abstractions: font files, vector primitives, PDF generation and parsing, with a canvas abstraction covering screen/GUI, print, and file… Suitable for applications/web.

3. An engine layering 2 on top of 1, for portable interactive graphics at a higher abstraction level than 1.
 This is actually really cool, I just don't see real time ray
 tracing being usable(games and the like) for at least another
 5-10 years, though I will certainly be very happy to be wrong.
I think it is only meant for shiny details on the mobile platforms. I could see it being used for mirrors in a car game, spherical knobs, etc. If it works out OK when it hits silicon, then I can see Apple using it to strengthen iOS as a "portable console platform", which probably means having proprietary APIs that squeeze every drop out of the hardware.
 You may be right, I don't know, it just doesn't seem to be
 something they would do to me, just a gut feeling, no real basis
 to back it up.
Intel and AMD/ATI have a common "enemy" in the GPU/HPC field: Nvidia/CUDA.
 basically the same thing. Hardware specific api's(Mantel)
 complicate this a little bit but not by much, all the gpu
 hardware(excluding niche stuff like hardware ray tracers :P) out
 there has basicly the same interface.
Well, DirectX's model has forced the same kind of pipeline, but if they get rid of DX… There are also other performant solutions out there: FPGAs, crossbar-style multicore (many simple CPUs with local memory on a grid, with memory buses between them).
 time(the differences should mostly just be syntax). If the DSL
 was a subset of D then that would simplify it even further as
 well as make the learning curve much smaller. Its a perfectly
 fine level of abstraction for any sort of graphics that also
 happens to be supported very well by modern GPU's. I don't see
 the problem.
Well, it all depends on the application area. For pixel-based rendering, sure, shaders are the only way; I agree. For more stable application-oriented APIs, vectors all the way, wrapping up JPEG/PNG files in "image block" abstractions.
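(The "same API, many sinks" idea behind that stable abstraction level might look roughly like this in D; all names are hypothetical, in the spirit of Quartz rendering either to screen or to PDF.)

```d
import std.format : format;
import std.stdio;

// One drawing interface, many sinks: the caller never knows whether it
// is drawing to a window, a printer, or a PDF file.
interface Canvas
{
    void moveTo(double x, double y);
    void lineTo(double x, double y);
}

// A "PDF" backend that records path operators, PDF-style.
class PdfCanvas : Canvas
{
    string ops;
    void moveTo(double x, double y) { ops ~= format("%s %s m\n", x, y); }
    void lineTo(double x, double y) { ops ~= format("%s %s l\n", x, y); }
}

// A "screen" backend would rasterize or issue GPU calls; here it just
// counts line segments to keep the sketch short.
class ScreenCanvas : Canvas
{
    int segments;
    void moveTo(double x, double y) { }
    void lineTo(double x, double y) { ++segments; }
}

// Application code is written once, against the interface only.
void drawTriangle(Canvas c)
{
    c.moveTo(0, 0);
    c.lineTo(100, 0);
    c.lineTo(50, 80);
    c.lineTo(0, 0);
}

void main()
{
    auto pdf = new PdfCanvas;
    auto screen = new ScreenCanvas;
    drawTriangle(pdf);
    drawTriangle(screen);
    write(pdf.ops);           // PDF-flavoured path data
    writeln(screen.segments); // 3
}
```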
 hardware support as a possible addition later on. But that comes
 back to the point that is is a little iffy what Aurora is
 actually trying to be. Personally I would be disappointed if it
 went down that route.
Well, OS-level abstractions are hard to get right, and the people who have managed to do it charge quite a bit of money for it: https://www.madewithmarmalade.com/shop I guess it is possible if you only target desktop Windows/Mac, but other than that I think PNaCl/Pepper would be a more valuable cross-platform target.
Jul 10 2014
parent reply "Tofu Ninja" <emmons0 purdue.edu> writes:
On Thursday, 10 July 2014 at 10:43:45 UTC, Ola Fosheim Grøstad 
wrote:
 Depends on what Aurora is meant to target. The video says it is 
 meant to be more of a playful environment that allows pac-man 
 mockups and possibly GUIs in the long run, but not sure who 
 would want that? There are so many much better IDE/REPL 
 environments for that: Swift, Flash&Co, HTML5/WebGL/Dart/PNaCL, 
 Python and lots of  advanced frameworks with engines for cross 
 platform mobile development at all kinds of proficiency levels.

 Seems to me what a language that D needs is two separate 
 frameworks:

 1. A barebones 3D high performance library with multiple 
 backends that follow the hardware trends (kind of what you are 
 suggesting). Suitable for creating games and HPC stuff.

 2. A stable high level API with geometric libraries for dealing 
 with established abstractions: font files, vector primitives, 
 PDF generation and parsing with canvas abstraction for both 
 screen/gui, print, file… Suitable for applications/web.

 3. An engine layering of 2. on top of 1. for portable 
 interactive graphics but a higher abstraction level than 1.
YES (I am so glad someone else sees this)! This is basically what I have been saying all along. I hoped the immediate mode could be (1) and the retained mode could be (2/3), so that we could have both and not be limited, but that does not seem to be the direction it is going. It is not even clear what the immediate mode 'is' right now in the current designs of Aurora; it seems to be more of an implementation detail than something usable on its own. As it stands now, the direction Aurora is taking seems an odd one IMHO. It is trying to be something in between (1) and (2/3), but I don't think that is useful to anyone except maybe GUI writers. That is what prompted me to post.
Jul 10 2014
parent "Ola Fosheim Grøstad" writes:
On Thursday, 10 July 2014 at 14:59:47 UTC, Tofu Ninja wrote:
 YES(I am so glad some one else sees this)! This is basically 
 what I have been saying all along. I hoped the immediate mode 
 could be (1) and the retained mode could be (2/3) so that we 
 could have both and not be limited, but that does not seem to 
 be the direction it is going.
Oh, good, then we are on the same frontier! :-) I thought you preferred an integrated approach. In my experience big frameworks tend to never get the APIs quite right, become tedious to work with, are difficult to adapt, and seldom reach completion before they are out of date. Much better with small, nimble, focused, polishable and performant, IMO.
 As it stands now, the direction that Aurora is taking seems to 
 be an odd one IMHO. It is trying to be some thing in between 
 (1) and (2/3) but I don't think that is useful to any one 
 except maybe gui writers. That is what prompted me to post.
Right, I could use (1) and (2), but have no obvious use case for (3)… So if Aurora does not partition the design space into independent parts, then I can't use it. I think the library space needs to be partitioned properly, just like the language/memory space (@nogc/GC), in order to appeal to interactive app writers.
Jul 10 2014
prev sibling parent reply "ponce" <contact gam3sfrommars.fr> writes:
On Thursday, 10 July 2014 at 00:22:39 UTC, Tofu Ninja wrote:
 I think its a fine abstraction level, buffers and shaders are 
 not
 hard concepts at all. All api's that Aurora is going to be based
 on offers them as well as all modern gpu's support them. If
 shaders were written in a DLS then in the case where Aurora 
 needs
 to fall back to software rendering then they can be translated 
 to
 D code and mixed right in. When they need to be turned into some
 api specific shader then they could be translated at compile
 time(the differences should mostly just be syntax). If the DSL
 was a subset of D then that would simplify it even further as
 well as make the learning curve much smaller. Its a perfectly
 fine level of abstraction for any sort of graphics that also
 happens to be supported very well by modern GPU's. I don't see
 the problem.
You might want to look at what bgfx does: https://github.com/bkaradzic/bgfx It provides a shader compiler to various supported backends. Abstracting shaders more or less mandates having a shader compiler from your language to the graphics APIs out there, and it does add an additional build step. Compiling such a shader abstraction language at compile time seems a bit optimistic to me.
Jul 10 2014
parent "Tofu Ninja" <emmons0 purdue.edu> writes:
On Thursday, 10 July 2014 at 15:38:07 UTC, ponce wrote:
 You might want to look at what bgfx does: 
 https://github.com/bkaradzic/bgfx
 It provides a shader compiler to various supported backends.
I have seen it but have never used it. I don't actually know if it's any good or not, but it is in the same vein as what I am talking about.
 Abstracting shaders more or less mandates having a shader 
 compiler from your language to graphics API out there. It does 
 make an additional build step.
It would be complicated yes but certainly doable.
 Compiling such a shader abstraction language at compile-time 
 seems a bit optimistic to me.
Maybe a little now, but in the future maybe not; things are always improving. It is also one of the reasons I wish we could call out to precompiled code at compile time: that would make it possible to have inline shaders passed out of the compiler at compile time and compiled by some other app.
Jul 10 2014
prev sibling next sibling parent "Dicebot" <public dicebot.lv> writes:
On Tuesday, 8 July 2014 at 16:03:36 UTC, Andrei Alexandrescu 
wrote:
 https://news.ycombinator.com/newest (please find and vote 
 quickly)

 https://twitter.com/D_Programming/status/486540487080554496

 https://www.facebook.com/dlang.org/posts/881134858566863

 http://www.reddit.com/r/programming/comments/2a5ia9/dconf_2014_day_2_talk_3_designing_an_aurora_a/


 Andrei
http://youtu.be/PRbK7jk0jrk
Jul 09 2014
prev sibling parent reply "Adam Wilson" <flyboynw gmail.com> writes:
On Tue, 08 Jul 2014 09:03:37 -0700, Andrei Alexandrescu
<SeeWebsiteForEmail erdani.org> wrote:

 https://news.ycombinator.com/newest (please find and vote quickly)

 https://twitter.com/D_Programming/status/486540487080554496

 https://www.facebook.com/dlang.org/posts/881134858566863

 http://www.reddit.com/r/programming/comments/2a5ia9/dconf_2014_day_2_talk_3_designing_an_aurora_a/


 Andrei
So instead of replying to each message and accumulating a LOT of quoted text, I will attempt to answer some of your questions here.

The first is that I would like to challenge the assumption that scene graphs are somehow "less performant". In fact they are a highly performant solution to many problems; yes, there are a few drawbacks, but they are far outweighed by the performance gains. Remember that when you are designing a system you must consider the system as a whole; it's WAY too easy to get caught up in individual concerns. Most AAA game engines use some form of scene graph; the one I am familiar with is CryENGINE 3. Those scene graphs are more specialized, but there is absolutely no reason that a scene graph has to be "slow". In fact, D's immutability may give the compiler (and us) the ability to build the most performant scene graph in the world. Scene graphs make relative transforms easy, for example, and since that is really all you're doing in 3D, making that easy for the computer is a massive win.

As for similarity between graphics subsystem APIs, it is, at best, superficial. When you actually try to implement something on top of them, you end up forcing the abstraction far higher than you'd think. Plus, the high-level API design was something that Walter, Andrei, and I all agreed on at the start. I want to reiterate that Aurora is NOT a high-performance game engine, and we won't even pretend to try.

The focus on 2D is not about how difficult 2D versus 3D is, but about project scope. We want to build Aurora in components that are useful on their own. Based on previous message traffic in the forum, I think that more people will find use for the 2D components than the 3D components.

Writing a DSL for shaders is one of those things that sounds good until you actually try it. There is a LOT of complexity, both at the language level and in the number of output variations, within shaders that would have to be addressed.
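(A rough sketch of the relative-transform point: toy code, with a translation-only "transform" standing in for a real matrix, and no actual Aurora types. Moving a parent node implicitly moves its whole subtree, and world transforms fall out of a single top-down walk.)

```d
import std.stdio;

// Toy transform: translation only, standing in for a full 4x4 matrix.
struct Transform
{
    double x = 0, y = 0;
    Transform compose(Transform child) const
    {
        return Transform(x + child.x, y + child.y);
    }
}

// A scene-graph node whose local transform is relative to its parent.
// Built as immutable data in real code, such a graph could be rendered
// on one thread while the next frame's graph is assembled on another.
struct Node
{
    Transform local;
    Node[] children;
}

// World transforms fall out of one top-down walk of the graph.
Transform[] worldTransforms(in Node n, Transform parent = Transform())
{
    auto w = parent.compose(n.local);
    Transform[] result = [w];
    foreach (c; n.children)
        result ~= worldTransforms(c, w);
    return result;
}

void main()
{
    // A "car" at (10, 5) with a "wheel" offset (1, 0) from the car.
    auto car = Node(Transform(10, 5), [Node(Transform(1, 0))]);
    writeln(worldTransforms(car)); // [Transform(10, 5), Transform(11, 5)]
}
```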
While D is appealing to game designers, Aurora is very explicitly NOT targeting them. They will either create their own engines or use a COTS system that's already available. Walter made this point extremely clear to me when he asked me to take this project on.

Yes, performance is not a goal, because we are intentionally not targeting scenarios where that is the first concern. I understand that a lot of people want Aurora to be a high-performance graphics API with a focus on games, but that isn't our goal. Simplicity is the goal, and we will sacrifice performance where it directly conflicts with that goal. If you need a high-performance game engine, I would strongly recommend either creating a custom solution or using an off-the-shelf system.

-- 
Adam Wilson
GitHub/IRC: LightBender
Aurora Project Coordinator
Jul 14 2014
parent reply "Kapps" <opantm2+spam gmail.com> writes:
On Tuesday, 15 July 2014 at 05:40:29 UTC, Adam Wilson wrote:
 Yes, performance is not a goal, because we are intentionally 
 not targeting scenarios where that is the first concern. I 
 understand that a lot of people want Aurora to be a 
 high-performance graphics API with a focus on games, but that 
 isn't our goal. Simplicity is the goal and we will sacrifice 
 performance where it directly conflicts with that goal. If you 
 need a high-performance game engine, I would strongly recommend 
 either creating a custom solution or using an off-the-shelf 
 system.
To clarify, while Aurora may not be a performant enough solution for a full graphics engine, it should theoretically be sufficiently fast for a GUI within a game, correct?
Jul 15 2014
parent "Adam Wilson" <flyboynw gmail.com> writes:
On Tue, 15 Jul 2014 09:12:01 -0700, Kapps <opantm2+spam gmail.com> wrote:

 On Tuesday, 15 July 2014 at 05:40:29 UTC, Adam Wilson wrote:
 Yes, performance is not a goal, because we are intentionally not  
 targeting scenarios where that is the first concern. I understand that  
 a lot of people want Aurora to be a high-performance graphics API with  
 a focus on games, but that isn't our goal. Simplicity is the goal and  
 we will sacrifice performance where it directly conflicts with that  
 goal. If you need a high-performance game engine, I would strongly  
 recommend either creating a custom solution or using an off-the-shelf  
 system.
To clarify, while Aurora may not be a performant enough solution for a full graphics engine, it should theoretically be sufficiently fast for a GUI within a game, correct?
That shouldn't be a problem. -- Adam Wilson GitHub/IRC: LightBender Aurora Project Coordinator
Jul 15 2014