
digitalmars.D - Graphics Library for D

reply "Adam Wilson" <flyboynw gmail.com> writes:
Hello Fellow D Heads,

Recently, I've been working to evaluate the feasibility and reasonability  
of building out a binding to Cinder in D. And while it is certainly  
feasible to wrap Cinder, such a binding would be necessarily complex and  
would feel very unnatural in D.

So after talking it over with Walter and Andrei, we feel that, while we  
like how Cinder is designed and would very much like to have something  
like it available in D, wrapping Cinder is not the best approach in the  
long-term.

With that in mind, we would like to start a discussion with interested  
parties about building a graphics library in the same concept as Cinder,  
but using an idiomatic D implementation from the ground up. Walter has  
suggested that we call it Aurora, and given the visual connotations  
associated with that name, I think it is most appropriate for this project.

I know that the community has worked through a few of the problems  
involved. For example, I can't remember who wrote it, but I've seen a  
module floating around that can create a window in a cross-platform  
manner, and I know Mike Parker has been heavily involved in graphics for  
D. And no discussion of graphics would be complete without Manu, whose  
input Walter, Andrei, and I would greatly appreciate.

I want to point out that while Cinder will be the design template, the  
goal here is to use D to its maximum potential. I fully expect that what  
we end up with will be quite different from Cinder.

Due to the scope of the project, I think it would be best to execute the  
project in stages. This will allow us to deliver useful chunks of working  
code to the community. Although I haven't yet heard anything on the  
subject, I would assume that once Aurora reaches an acceptable quality bar  
it would be a candidate for inclusion in Phobos; as such, I would like to  
approach the design as if that were the end goal.

The logical phases as I can see them are as follows, but please suggest  
changes:

- Windowing and System Interaction (Including Keyboard/Mouse/Touch Input)
- Basic Drawing (2D Shapes, Lines, Gradients, etc)
- Image Rendering (Image Loading, Rendering, Modification, Saving, etc.)
- 3D Drawing (By far the most complex stage, so we'll leave it for last)

Here are a couple of things that Aurora is not intended to be:
- Aurora is not a high-performance game engine. The focus is on making a  
general-purpose API that is accessible to non-graphics programmers. That  
said, we don't want to purposely ruin performance, and any work and  
guidance on that aspect will be warmly welcomed.
- Aurora is not a GUI library. Aurora is intended as a creative graphics  
programming library in the same concept as Cinder. This means that it will  
be much closer to a game's graphics engine, in terms of design and  
capability, than a UI library; therefore we should approach the design  
from that standpoint.

My personal experience in graphics programming is almost completely with  
DirectX and Windows so I would be happy to work on support for that  
platform. However, we need to support many other platforms, and I know  
that there are others in the community who have the skills needed; your  
help would be invaluable.

If you are interested in helping with a Cinder-like library for D and/or  
have code you'd like to contribute, let's start talking and see what  
happens.

While I do have some ideas about how to design the library, I would rather  
open the floor to the community first to see what our combined intellect  
has to offer as I don't want to unduly influence the ideas generated here.  
The idea is to build the best technical graphics library that we can, not  
measure egos.

So with the above framework in mind, let's talk!

-- 
Adam Wilson
IRC: LightBender
Aurora Project Coordinator
Jan 05 2014
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 1/5/2014 8:10 PM, Adam Wilson wrote:
 Recently, I've been working to evaluate the feasibility and reasonability of
 building out a binding to Cinder in D.
For reference, here's what Cinder is: http://libcinder.org/ It's been well received by the C++ community.
Jan 05 2014
next sibling parent "Adam Wilson" <flyboynw gmail.com> writes:
On Sun, 05 Jan 2014 20:17:22 -0800, Walter Bright  
<newshound2 digitalmars.com> wrote:

 On 1/5/2014 8:10 PM, Adam Wilson wrote:
 Recently, I've been working to evaluate the feasibility and  
 reasonability of
 building out a binding to Cinder in D.
For reference, here's what Cinder is: http://libcinder.org/ It's been well received by the C++ community.
I knew I forgot something! -- Adam Wilson IRC: LightBender Aurora Project Coordinator
Jan 05 2014
prev sibling parent reply "Joakim" <joakim airpost.net> writes:
On Monday, 6 January 2014 at 04:17:21 UTC, Walter Bright wrote:
 On 1/5/2014 8:10 PM, Adam Wilson wrote:
 Recently, I've been working to evaluate the feasibility and 
 reasonability of
 building out a binding to Cinder in D.
For reference, here's what Cinder is: http://libcinder.org/ It's been well received by the C++ community.
I took a look at the website. Other than being popular, what is it about Cinder that triggered this graphics push: do they make any good technical decisions? I can't tell just from looking at their website.
Jan 05 2014
next sibling parent "Adam Wilson" <flyboynw gmail.com> writes:
On Sun, 05 Jan 2014 23:30:24 -0800, Joakim <joakim airpost.net> wrote:

 On Monday, 6 January 2014 at 04:17:21 UTC, Walter Bright wrote:
 On 1/5/2014 8:10 PM, Adam Wilson wrote:
 Recently, I've been working to evaluate the feasibility and  
 reasonability of
 building out a binding to Cinder in D.
For reference, here's what Cinder is: http://libcinder.org/ It's been well received by the C++ community.
I took a look at the website. Other than being popular what is it about Cinder that triggered this graphics push: do they make any good technical decisions? I can't tell just from looking at their website.
It's something that Walter and I have been discussing since the last GoingNative. A non-programmer (a sculptor by training, in fact) used Cinder/C++ to create a music player app called Planetary (http://planetary.bloom.io/) for the iPad. Walter, Andrei, and I feel that D would be a more appealing language to such creatives, but D lacks the seamlessly integrated graphics library they need to create their art. There has also been interest from a number of people in using such a library as a base for a GUI toolkit, and I am sure that there are game developers of the mobile/casual bent who would love something like this. We could probably even build in support for GPGPU work, as that is closely related to graphics rendering. Building a graphics rendering toolkit would provide the base library support for D to do quite literally anything with graphics, which is a major and growing part of computing today. It is quite essential that D have this capability. -- Adam Wilson IRC: LightBender Aurora Project Coordinator
Jan 05 2014
prev sibling next sibling parent "Zz" <Zz nospam.com> writes:
For 2D I like Anti Grain Geometry.

http://www.antigrain.com/

When used with the C++ Agg2D wrapper things become super easy.

Here is the online documentation of Agg2D from the Delphi port of 
AGG
http://www.crossgl.com/aggpas/documentation/index.html

For another view here is an interesting article on how AGG works 
in Haiku and covers some of the compile time properties of AGG
https://www.haiku-os.org/documents/dev/painter_and_how_agg_works



Zz
Jan 06 2014
prev sibling parent "Steve Teale" <steve.teale britseyeview.com> writes:
On Monday, 6 January 2014 at 07:30:25 UTC, Joakim wrote:
 On Monday, 6 January 2014 at 04:17:21 UTC, Walter Bright wrote:
 On 1/5/2014 8:10 PM, Adam Wilson wrote:
 Recently, I've been working to evaluate the feasibility and 
 reasonability of
 building out a binding to Cinder in D.
For reference, here's what Cinder is: http://libcinder.org/ It's been well received by the C++ community.
I took a look at the website. Other than being popular what is it about Cinder that triggered this graphics push: do they make any good technical decisions? I can't tell just from looking at their website.
Agreed, the web site does not give you any clear impression of what it does!
Jan 09 2014
prev sibling next sibling parent reply "Nick B" <nick.barbalich gmail.com> writes:
On Monday, 6 January 2014 at 04:11:07 UTC, Adam Wilson wrote:
 Hello Fellow D Heads,

 Recently, I've been working to evaluate the feasibility and 
 reasonability of building out a binding to Cinder in D. And 
 while it is certainly feasible to wrap Cinder, that a binding 
 would be necessarily complex and feel very unnatural in D.

 So after talking it over with Walter and Andrei, we feel that, 
 while we like how Cinder is designed and would very much like 
 to have something like it available in D, wrapping Cinder is 
 not the best approach in the long-term.

 With that in mind, we would like to start a discussion with 
 interested parties about building a graphics library in the 
 same concept as Cinder, but using an idiomatic D implementation 
 from the ground up.
I assume that the licence will be BOOST ?? A picture is worth a thousand words; therefore, is this the type of graphics library output you are referring to: http://marcinignac.com/projects/cindermedusae/ Nick
Jan 05 2014
parent "Adam Wilson" <flyboynw gmail.com> writes:
On Sun, 05 Jan 2014 20:58:11 -0800, Nick B <nick.barbalich gmail.com>  
wrote:

 On Monday, 6 January 2014 at 04:11:07 UTC, Adam Wilson wrote:
 Hello Fellow D Heads,

 Recently, I've been working to evaluate the feasibility and  
 reasonability of building out a binding to Cinder in D. And while it is  
 certainly feasible to wrap Cinder, that a binding would be necessarily  
 complex and feel very unnatural in D.

 So after talking it over with Walter and Andrei, we feel that, while we  
 like how Cinder is designed and would very much like to have something  
 like it available in D, wrapping Cinder is not the best approach in the  
 long-term.

 With that in mind, we would like to start a discussion with interested  
 parties about building a graphics library in the same concept as  
 Cinder, but using an idiomatic D implementation from the ground up.
I assume that the licence will be BOOST ?? A picture is worth a thousand words, therefore is this the type of graphics library output you are refering to: http://marcinignac.com/projects/cindermedusae/ Nick
I don't think anyone here has a problem with Boost licensing, so I think it would be safe to assume that license. That is indeed an example of something that you can do with Cinder; another would be Planetary: http://planetary.bloom.io/ The idea is to allow a wide range of graphics capabilities in a format that is accessible to non-graphics programmers while using D. Walter, Andrei, and I believe that D's relative simplicity when compared to C++ makes this kind of project ideal for D and for people who are looking to write graphics code without having to learn the intricacies of C++. -- Adam Wilson IRC: LightBender Aurora Project Coordinator
Jan 05 2014
prev sibling next sibling parent reply "Mike" <none none.com> writes:
I'm sure you'll receive no shortage of opinions with such an open 
invitation, so here's mine:

* Please don't make a graphics library that is useful only on PCs.

* Please consider more than mouse and keyboard as input devices 
(e.g. multitouch)

* Today is the first I've heard of Cinder, but if you really want 
to see an extremely well-written graphics library, take a look at 
AGG (Anti-Grain Geometry).  It is a superior example of 
elegantly employed templates for a real practical problem (lots of 
different platforms). As an example illustrating its reach, I 
ported it to an ARM Cortex-M4 MCU at 168MHz with 2MB of RAM with 
trivial modification.  It powers an 800x480 7" TFT LCD with 
16-bit color and is damn fast.  For a quick intro, go here 
(http://www.antigrain.com/doc/introduction/introduction.agdoc.html#toc0006). 
  It is a superior design model for anyone wishing to develop a 
graphics library.

* Break the library into loosely coupled pieces that can be used 
individually or aggregated by other projects to build things that 
we haven't even thought of
   *  Geometric primitives should be their own library/package
   *  Vector graphics (paths, line caps, etc...) should be their 
own library/package
   *  Raster graphics should be their own library/package
   *  Window management should be its own library/package
   *  Fonts are just data.  Don't couple them to the rendering 
engine.  (e.g. convert them to a vector/raster representation and 
then let those libraries handle the rest.)
   *  The rendering engine should be its own library/package, that 
should be easily replaced with whatever is suitable for the given 
platform.

I think Aurora should be a library of libraries that can be used 
individually or aggregated to build new and exciting projects 
that you and I haven't yet even thought of.  Think std.algorithm.
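The loose-coupling idea above could be sketched in D roughly like this. This is only an illustration with invented names (Point2D, Rect), not an actual Aurora API: the geometric primitives stand alone, with no dependency on any rendering backend, so other packages can reuse them freely.

```d
// Sketch: geometric primitives as a standalone package.
// No rendering backend is imported here, by design.
struct Point2D
{
    double x = 0, y = 0;

    Point2D opBinary(string op : "+")(Point2D rhs) const
    {
        return Point2D(x + rhs.x, y + rhs.y);
    }
}

struct Rect
{
    Point2D origin;
    double width = 0, height = 0;

    // Pure geometry: hit-testing has no idea how pixels are drawn.
    bool contains(Point2D p) const
    {
        return p.x >= origin.x && p.x <= origin.x + width
            && p.y >= origin.y && p.y <= origin.y + height;
    }
}
```

A vector-graphics or windowing package could then import this module without pulling in anything else, which is exactly the std.algorithm-style composability being proposed.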

 The logical phases as I can see them are as follows, but please 
 suggest changes:
Start with the most primitive and move up from there (if loosely 
coupled, they could be developed in parallel):

1. Geometric primitives
2. Vector graphics
3. Fonts (just a special case of vector graphics)
4. Rasterization
5. Backend (Direct2D/3D, OpenGL, OpenVG, whatever)

Once that is done, widget libraries, windowing, input device 
handling, etc. can all be built on top of that. A graphics library 
is something I will need in my future D plans, so I look forward 
to seeing where this goes, and I hope I'll be able to make a 
positive contribution or two myself. But please play a little with 
AGG before you begin your design. I think you'll be glad you did. 
(http://www.antigrain.com/)
Jan 05 2014
next sibling parent reply "Adam Wilson" <flyboynw gmail.com> writes:
On Sun, 05 Jan 2014 21:32:54 -0800, Mike <none none.com> wrote:

 I'm sure you'll receive no shortage of opinions with such an open  
 invitation, so here's mine:

 * Please don't make a graphics library that is useful only on PCs.
That's the plan. However, at this point D only works on x86, so it's a moot point. But if we build a pluggable rendering backend that allows us to swap renderers, that should be sufficient.
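A pluggable backend along those lines might be sketched like this in D. All names here are invented for illustration (Renderer, drawScene), not a committed design: the public drawing code talks to an interface, and a platform picks the implementation.

```d
// Sketch of a pluggable rendering backend: draw code depends only
// on the Renderer interface, never on a concrete platform.
interface Renderer
{
    void clear(float r, float g, float b);
    void drawLine(float x1, float y1, float x2, float y2);
}

// A trivial backend that just records commands; handy for tests
// before any "real" Direct3D/OpenGL backend exists.
class RecordingRenderer : Renderer
{
    string[] log;
    void clear(float r, float g, float b) { log ~= "clear"; }
    void drawLine(float x1, float y1, float x2, float y2) { log ~= "line"; }
}

// Application code written once, against the interface.
void drawScene(Renderer r)
{
    r.clear(0, 0, 0);
    r.drawLine(0, 0, 100, 100);
}
```

Swapping in a Direct3D- or OpenGL-backed class later would require no change to anything written against Renderer.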
 * Please consider more than mouse and keypboard as input devices (e.g.  
 multitouch)
Definitely. I included touch on my list because even outside the phone/tablet market it can be useful. All my desktops have a touch monitor.
 * Today is the first I've heard of Cinder, but if you really want to see  
 an extremely well-written graphics library, take a look at AGG  
 (Anti-Grain Geometry).  It's is a superior example of elegantly employed  
 templates for a real practical problem (lot of different platforms). As  
 an example illustrating its reach, I ported it to an ARM Cortex-M4 MCU  
 at 168MHz with 2MB of RAM trivial modification.  It powers an 800x480 7"  
 TFT LCD with 16-bit color and is damn fast.  For a quick intro, go here  
 (http://www.antigrain.com/doc/introduction/introduction.agdoc.html#toc0006).  
   It is superior design model for anyone wishing to develop a graphics  
 library.
I've heard of it, and we are definitely open to stealing the best of any library, so I/we will take a look at it.
 * Break the library into loosely coupled pieces that can be used  
 individually or aggregated by other projects to build things that we  
 haven't even thought of
    *  Geometric primitives should be their own library/package
    *  Vector graphics (paths, line caps, etc...) should be their own  
 library/package
    *  Raster graphics should be their own library/package
    *  Window management should be its own library/package
    *  Font's are just data.  Don't couple them to the rendering engine.   
 (e.g  Convert them to a vector/raster representation and then let those  
 libraries handle the rest.
This is a good idea, I like it.
    *  The rendering engine should be its own library/package, that  
 should be easily replaced with whatever is suitable for the given  
 platform.
As in pluggable backend? That should be easy enough to do.
 I think Aurora should be a library of libraries that can be used  
 individually or aggregated to build new and exciting projects that you  
 and I haven't yet even thought of.  Think std.algorithm.
Combined with the above idea this would be an excellent demonstration of D's capabilities. And it would definitely improve the design of the library.
 The logical phases as I can see them are as follows, but please suggest  
 changes:
Start with the most primitive and move up from there (If loosely coupled, the could be developed in parallel) 1. Geometric primitives 2. Vector graphics 3. Fonts (just a special case of vector graphics) 4. Rasterization 5. Backend (Direct2D/3D, OpenGL, OpenVG, whatever)
The problem I can see here is that if you want to test the first four, number five has to be built out to some degree.
 Once that is done, Widget libraries, Windowing, Input Device handling,  
 etc... can all be built on top of that.

 A graphics library is something I will need in my future D plans, so I  
 look forward to seeing where this goes, and I hope I'll be able to make  
 a positive contribution or two myself.
Indeed, a graphics library is required for my future plans as well. Which is why I am spearheading this project. :-)
 But, please play a little with AGG before you begin your design.  I  
 think you'll be glad you did. (http://www.antigrain.com/)
-- Adam Wilson IRC: LightBender Aurora Project Coordinator
Jan 05 2014
next sibling parent "Mike" <none none.com> writes:
On Monday, 6 January 2014 at 07:17:05 UTC, Adam Wilson wrote:
 On Sun, 05 Jan 2014 21:32:54 -0800, Mike <none none.com> wrote:
 * Please don't make a graphics library that is useful only on 
 PCs.
That's the plan. However at this point D only works on x86 so it's a moot point. But if we built a pluggable rendering backend that would allow us to swap the rendering that should be sufficient.
I understand, but if I and others are able to achieve our goals, that's going to change soon. It's not just about the backend, it's about efficient use of resources and loose coupling. For now, see if you can find a Pentium 166MHz with 16MB of RAM for your testing. If you can get it to run well there, the Core-i7 users will also rejoice :-) Of course I'm being a little silly, but please watch the following from Andrei and Manu to see what I mean: Expanding Platform Base (http://www.youtube.com/watch?v=4M-0LFBP9AU#t=2399) Operational Professionalism (http://www.youtube.com/watch?v=4M-0LFBP9AU) Memory Laziness (http://www.youtube.com/watch?v=FKceA691Wcg#t=2458)
For a quick intro, go
 here 
 (http://www.antigrain.com/doc/introduction/introduction.agdoc.html#toc0006).
  It is superior design model for anyone wishing to develop a 
 graphics library.
I've heard of it, and we are definitely open to stealing the best of any library, so I/we take a look at it.
Allow me to elaborate more on why I think AGG's design is superior. 
You don't actually have to port it to any platform. Because it uses 
templates, the porting is done with a few typedefs at the beginning 
of your program, like so:

    typedef agg::rgba8 Color; // could be rgb8, bgr8, rgb16, rgb32, and many others
    typedef agg::span_allocator<Color> SpanAllocator;
    typedef agg::span_interpolator_linear<> SpanInterpolator;
    typedef agg::rendering_buffer RenderingBuffer;
    typedef agg::scanline_u8 Scanline;
    typedef agg::rasterizer_scanline_aa<> Rasterizer; // aa is anti-aliasing, but could be aliased if you wanted
    typedef agg::path_storage Path;
    typedef agg::conv_curve<Path> Curve;
    typedef agg::conv_stroke<Curve> Stroke;

This configures the entire pipeline for my platform. I didn't need to 
modify any header files, set any #defines, or add a version(MyPlatform) 
block to the source code, etc. Because it's template programming, these 
new types will be compiled when I build, and I will have a custom 
rendering pipeline specific to my platform. I didn't see how elegant 
this was at first, but now I think it's brilliant and really fulfills 
the promise of template programming. You could probably use AGG to 
render an uncompressed Blu-ray movie to ASCII art if you wanted to. I 
don't suggest doing the same in D, necessarily, but I do suggest 
understanding this pattern and letting it influence the way you 
approach portability. There may even be a better D way that differs 
substantially from this.
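A rough D analogue of AGG's typedef-based configuration might use alias declarations. Everything below is invented for illustration (RGBA8, Rasterizer and so on are not real AGG or Aurora types); the point is only that the pipeline is instantiated at compile time from a handful of aliases.

```d
// Two candidate pixel formats; a platform picks one below.
struct RGB8  { ubyte r, g, b; }
struct RGBA8 { ubyte r, g, b, a; }

// A pipeline stage templated on the pixel format, like AGG's
// rasterizer_scanline_aa<> is templated on its inputs.
struct Rasterizer(Pixel)
{
    Pixel[] buffer;
    size_t width;

    void fill(Pixel p)
    {
        buffer[] = p;   // flood the whole buffer with one color
    }
}

// "Porting" is just changing these aliases and recompiling.
alias Color    = RGBA8;
alias MyRaster = Rasterizer!Color;
```

No #defines or version blocks are touched; a different platform supplies a different set of aliases and gets its own specialized pipeline.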
   *  The rendering engine should be its own library/package, 
 that should be easily replaced with whatever is suitable for 
 the given platform.
As in pluggable backend? That should be easy enough to do.
A pluggable backend, yes, or possibly simply specifying a different template argument.
 The logical phases as I can see them are as follows, but 
 please suggest changes:
Start with the most primitive and move up from there (If loosely coupled, the could be developed in parallel) 1. Geometric primitives 2. Vector graphics 3. Fonts (just a special case of vector graphics) 4. Rasterization 5. Backend (Direct2D/3D, OpenGL, OpenVG, whatever)
The problem I can see here is that if you want to test the first four, number five has to be built out to some degree.
Of course you are right. I guess I was thinking more "fundamentals first".
Jan 06 2014
prev sibling next sibling parent reply Joseph Rushton Wakeling <joseph.wakeling webdrake.net> writes:
On 06/01/14 08:16, Adam Wilson wrote:
 On Sun, 05 Jan 2014 21:32:54 -0800, Mike <none none.com> wrote:
 * Please don't make a graphics library that is useful only on PCs.
That's the plan. However at this point D only works on x86 so it's a moot point.
Not really -- the GDC/LDC teams seem to be doing excellent work on ARM support, which I get the feeling will arrive sooner rather than later. I think it'll be really important to have multi-device support at the core of your graphics library project, and to factor that into your design from the beginning. Besides, your planning in that way will give encouragement to those porting efforts :-) Related to this: I read in the news that C++ is planning on standardizing on Cairo as a graphics library. Is that something that could be useful to engage with?
Jan 06 2014
parent "Adam Wilson" <flyboynw gmail.com> writes:
On Mon, 06 Jan 2014 03:02:00 -0800, Joseph Rushton Wakeling  
<joseph.wakeling webdrake.net> wrote:

 On 06/01/14 08:16, Adam Wilson wrote:
 On Sun, 05 Jan 2014 21:32:54 -0800, Mike <none none.com> wrote:
 * Please don't make a graphics library that is useful only on PCs.
That's the plan. However at this point D only works on x86 so it's a moot point.
Not really -- the GDC/LDC teams seem to be doing excellent work on ARM support which I get the feeling will arrive sooner rather than later. I think it'll be really important to have multi-device support at the core of your graphics library project, and to factor that into your design from the beginning. Besides, you planning in that way will give encouragement to those porting efforts :-)
The plan is to support ARM systems. And if LDC or GDC can deliver that with full druntime/Phobos support, we can look at building new backends.
 Related to this: I read in the news that C++ is planning on  
 standardizing on Cairo as a graphics library.  Is that something that  
 could be useful to engage with?
It's possible, but from what I've gathered Cairo is not the best, just the simplest common denominator. I am actually on the C++ Graphics forum, although I haven't posted anything because even with my graphics background I feel quite inadequate in there. :-) -- Adam Wilson IRC: LightBender Aurora Project Coordinator
Jan 06 2014
prev sibling parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Sun, Jan 05, 2014 at 11:16:03PM -0800, Adam Wilson wrote:
 On Sun, 05 Jan 2014 21:32:54 -0800, Mike <none none.com> wrote:
[...]
The logical phases as I can see them are as follows, but please
suggest changes:
Start with the most primitive and move up from there (If loosely coupled, the could be developed in parallel) 1. Geometric primitives 2. Vector graphics 3. Fonts (just a special case of vector graphics) 4. Rasterization 5. Backend (Direct2D/3D, OpenGL, OpenVG, whatever)
The problem I can see here is that if you want to test the first four, number five has to be built out to some degree.
[...] For initial development, it could be as simple as rendering to a generic framebuffer that then gets blitted to whatever OS primitives you have to display it on the screen. That should be enough to get things going while "real" backends get developed. T -- Your inconsistency is the only consistent thing about you! -- KD
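The generic-framebuffer idea could be sketched minimally like so. This is an illustration with invented names (Framebuffer, put), not a proposed Aurora API: everything draws into a plain pixel array, and a platform backend only needs to blit that array to the screen.

```d
// Minimal software framebuffer: the only platform-specific step
// left is blitting `pixels` to an OS window.
struct Framebuffer
{
    uint[] pixels;          // packed 0xAARRGGBB
    size_t width, height;

    this(size_t w, size_t h)
    {
        width  = w;
        height = h;
        pixels = new uint[](w * h);
    }

    // Clip out-of-range writes instead of crashing.
    void put(size_t x, size_t y, uint argb)
    {
        if (x < width && y < height)
            pixels[y * width + x] = argb;
    }
}
```

Higher layers (rasterization, vector graphics) could be tested against this buffer long before any Direct3D/OpenGL backend exists, which is the point being made above.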
Jan 06 2014
parent reply Dmitry Olshansky <dmitry.olsh gmail.com> writes:
06-Jan-2014 19:16, H. S. Teoh writes:
 On Sun, Jan 05, 2014 at 11:16:03PM -0800, Adam Wilson wrote:
 On Sun, 05 Jan 2014 21:32:54 -0800, Mike <none none.com> wrote:
[...]
 The logical phases as I can see them are as follows, but please
 suggest changes:
Start with the most primitive and move up from there (If loosely coupled, the could be developed in parallel) 1. Geometric primitives 2. Vector graphics 3. Fonts (just a special case of vector graphics) 4. Rasterization 5. Backend (Direct2D/3D, OpenGL, OpenVG, whatever)
The problem I can see here is that if you want to test the first four, number five has to be built out to some degree.
[...] For initial development, it could be as simple as rendering to a generic framebuffer that then gets blitted to whatever OS primitives you have to display it on the screen. That should be enough to get things going while "real" backends get developed.
It would be better not to, or you risk introducing suboptimal software rendering patterns that do not have even remote correspondence to the hardware. -- Dmitry Olshansky
Jan 06 2014
parent "Adam Wilson" <flyboynw gmail.com> writes:
On Mon, 06 Jan 2014 07:41:54 -0800, Dmitry Olshansky  
<dmitry.olsh gmail.com> wrote:

 06-Jan-2014 19:16, H. S. Teoh writes:
 On Sun, Jan 05, 2014 at 11:16:03PM -0800, Adam Wilson wrote:
 On Sun, 05 Jan 2014 21:32:54 -0800, Mike <none none.com> wrote:
[...]
 The logical phases as I can see them are as follows, but please
 suggest changes:
Start with the most primitive and move up from there (if loosely coupled, they could be developed in parallel) 1. Geometric primitives 2. Vector graphics 3. Fonts (just a special case of vector graphics) 4. Rasterization 5. Backend (Direct2D/3D, OpenGL, OpenVG, whatever)
The problem I can see here is that if you want to test the first four, number five has to be built out to some degree.
[...] For initial development, it could be as simple as rendering to a generic framebuffer that then gets blitted to whatever OS primitives you have to display it on the screen. That should be enough to get things going while "real" backends get developed.
It would be better not to, or you risk introducing suboptimal software rendering patterns that do not have even remote correspondence to the hardware.
I share Dmitry's position. -- Adam Wilson IRC: LightBender Aurora Project Coordinator
Jan 06 2014
prev sibling parent reply "bearophile" <bearophileHUGS lycos.com> writes:
Mike:

 * Break the library into loosely coupled pieces that can be 
 used individually or aggregated by other projects to build 
 things that we haven't even thought of
   *  Geometric primitives should be their own library/package
   *  Vector graphics (paths, line caps, etc...) should be their 
 own library/package
   *  Raster graphics should be their own library/package
   *  Window management should be its own library/package
   *  Font's are just data.  Don't couple them to the rendering 
 engine.  (e.g  Convert them to a vector/raster representation 
 and then let those libraries handle the rest.
   *  The rendering engine should be its own library/package, 
 that should be easily replaced with whatever is suitable for 
 the given platform.
An independent color system module could be useful, even in Phobos. Part of the Geometric primitives (2D/3D vectors, rotation matrices, the most commonly useful geometry algorithms and formulas) could go in a Phobos module. And the graphics library could import and use this standard module. Something like the simpledisplay module (that the graphics library will not import) could be useful in Phobos: https://github.com/adamdruppe/misc-stuff-including-D-programming-language-web-stuff Bye, bearophile
Jan 06 2014
next sibling parent "Adam D. Ruppe" <destructionator gmail.com> writes:
On Monday, 6 January 2014 at 12:52:02 UTC, bearophile wrote:
 An independent color system module could be useful, even in 
 Phobos.
Did you see my color.d that simpledisplay now depends on? It isn't particularly fancy, but I found that having an independent color struct was useful since it is imported by many modules, and moving it out to its own kept them from getting too inter-dependent. I put the basic image framebuffer there for the same reason - it is very little code, but having a reusable common type with easy access helps interoperability.
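For illustration, a minimal independent color type might look like the following. To be clear, this is a sketch with invented details, not the actual color.d: the point is that a tiny shared struct lets many modules interoperate without depending on each other.

```d
// A small standalone color type that many modules can import
// without creating inter-dependencies between them.
struct Color
{
    ubyte r, g, b, a = 255;

    // Build a fully opaque color from a 0xRRGGBB literal.
    static Color fromHex(uint rgb)
    {
        return Color((rgb >> 16) & 0xFF, (rgb >> 8) & 0xFF, rgb & 0xFF);
    }
}
```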
Jan 06 2014
prev sibling parent reply Martin Nowak <code dawg.eu> writes:
On 01/06/2014 01:52 PM, bearophile wrote:
 Mike:

 * Break the library into loosely coupled pieces that can be used
 individually or aggregated by other projects to build things that we
 haven't even thought of
   *  Geometric primitives should be their own library/package
   *  Vector graphics (paths, line caps, etc...) should be their own
 library/package
   *  Raster graphics should be their own library/package
   *  Window management should be its own library/package
   *  Font's are just data.  Don't couple them to the rendering
 engine.  (e.g  Convert them to a vector/raster representation and then
 let those libraries handle the rest.
   *  The rendering engine should be its own library/package, that
 should be easily replaced with whatever is suitable for the given
 platform.
An independent color system module could be useful, even in Phobos. Part of the Geometric primitives (2D/3D vectors, rotation matrices, the most commonly useful geometry algorithms and formulas) could go in a Phobos module. And the graphics library could import and use this standard module.
I wrote a vector graphics library quite a while ago. https://github.com/MartinNowak/graphics It contains a lot of Path and Bezier curve algorithms and uses a novel rasterization technique http://josiahmanson.com/research/wavelet_rasterization/. It's a little outdated and probably won't compile any longer. It also depends on a small GUI primitives library which contains Color, Point, Size and Rect. https://github.com/MartinNowak/guip
Jan 16 2014
parent "Adam Wilson" <flyboynw gmail.com> writes:
On Thu, 16 Jan 2014 07:51:26 -0800, Martin Nowak <code dawg.eu> wrote:

 On 01/06/2014 01:52 PM, bearophile wrote:
 Mike:

 * Break the library into loosely coupled pieces that can be used
 individually or aggregated by other projects to build things that we
 haven't even thought of
   *  Geometric primitives should be their own library/package
   *  Vector graphics (paths, line caps, etc...) should be their own
 library/package
   *  Raster graphics should be their own library/package
   *  Window management should be its own library/package
   *  Font's are just data.  Don't couple them to the rendering
 engine.  (e.g  Convert them to a vector/raster representation and then
 let those libraries handle the rest.
   *  The rendering engine should be its own library/package, that
 should be easily replaced with whatever is suitable for the given
 platform.
An independent color system module could be useful, even in Phobos. Part of the Geometric primitives (2D/3D vectors, rotation matrices, the most commonly useful geometry algorithms and formulas) could go in a Phobos module. And the graphics library could import and use this standard module.
I wrote a vector graphics library quite a while ago. https://github.com/MartinNowak/graphics It contains a lot of Path and Bezier curve algorithms and uses a novel rasterization technique http://josiahmanson.com/research/wavelet_rasterization/. It's a little outdated and probably won't compile any longer. It also depends on a small GUI primitives library which contains Color, Point, Size and Rect. https://github.com/MartinNowak/guip
This is awesome! What's the license on it and can we use it in Aurora? -- Adam Wilson IRC: LightBender Aurora Project Coordinator
Jan 16 2014
prev sibling next sibling parent reply "ilya-stromberg" <ilya-stromberg-2009 yandex.ru> writes:
On Monday, 6 January 2014 at 04:11:07 UTC, Adam Wilson wrote:
 - Aurora is not a GUI library. Aurora is intended as a creative 
 graphics programming library in the same concept as Cinder. 
 This means that it will be much closer to game's graphics 
 engine, in terms of design and capability, than a UI library; 
 therefore we should approach the design from that standpoint.
Sorry, what does it mean? For example, I want to create minimal GUI (1-2 buttons and text fields). Do you have any plans to support it?
Jan 05 2014
parent reply "Adam Wilson" <flyboynw gmail.com> writes:
On Sun, 05 Jan 2014 23:25:38 -0800, ilya-stromberg  
<ilya-stromberg-2009 yandex.ru> wrote:

 On Monday, 6 January 2014 at 04:11:07 UTC, Adam Wilson wrote:
 - Aurora is not a GUI library. Aurora is intended as a creative  
 graphics programming library in the same concept as Cinder. This means  
 that it will be much closer to game's graphics engine, in terms of  
 design and capability, than a UI library; therefore we should approach  
 the design from that standpoint.
Sorry, what does it mean? For example, I want to create minimal GUI (1-2 buttons and text fields). Do you have any plans to support it?
Not at present. You could certainly use it to build a custom minimal GUI toolkit, but that is not its primary function. It would probably be best to think of Aurora as a graphics rendering toolkit. You tell us how to draw the pixels and we'll do it as best we can on that device. -- Adam Wilson IRC: LightBender Aurora Project Coordinator
Jan 05 2014
parent reply "Matt Taylor" <taylorius gmail.com> writes:
Hi

I'm a new contributor to this site - I'm trying D out, and liking 
it a lot. Though I'm new to D, my background is in computer 
graphics, so I'm excited to see graphics capabilities being 
discussed, and I thought I'd add my two-penneth.

Firstly, there seems to be disagreement on exactly what is to be 
developed. I've seen 3 types of things discussed.

1. GUI Library. It seems to me that a really well designed GUI 
system would be - by far - the most valuable thing for increasing 
the widespread use of D. I would use it in a heartbeat if such a 
thing existed.

2. 2D/3D Graphics interface to underlying hardware. This can be 
useful - though it's not clear to me that the 2D part would be 
terribly useful on its own, without 3D OpenGL support, or 
possibly acting as primitives for a GUI library. I suppose 2D 
games could use it.

3. High quality 2D, SVG-style renderer, capable of rendering to 
arbitrary bitmaps. Not sure what the point of this is, or who 
would use it. At any rate, it seems sufficiently specialised that 
it doesn't belong in a core library.

If it were up to me (I know, it isn't) - I would create a 
graphics system for D based on a modern web-browser's Javascript 
DOM. I daresay not everything would map well to D, but I would 
start with that, and adapt it where necessary. I would include 
WebGL 2.0 support, mouse handling callbacks etc.

But that's just me :-)

Best Regards

Matt Taylor
Jan 13 2014
parent reply "Adam Wilson" <flyboynw gmail.com> writes:
On Mon, 13 Jan 2014 15:46:42 -0800, Matt Taylor <taylorius gmail.com>  
wrote:

 Hi

 I'm a new contributor to this site - I'm trying D out, and liking it a  
 lot. Though I'm new to D, my background is in computer graphics, so I'm  
 excited to see graphics capabilities being discussed, and I thought I'd  
 add my two-penneth.
Welcome! We could use all the help we can get! I'll be announcing firmer plans this weekend after I've had a chance to write them out. Stay tuned.
 Firstly, there seems to be disagreement on exactly what is to be  
 developed. I've seen 3 types of things discussed.

 1. GUI Library. It seems to me that a really well designed GUI system  
 would be - by far - the most valuable thing for increasing the  
 widespread use of D. I would use it in a heartbeat if such a thing  
 existed.
Aurora is not a GUI library.
 2. 2D/3D Graphics interface to underlying hardware. This can be useful -  
 though it's not clear to me that the 2D part would be terribly useful on  
 its own, without 3D OpenGL support, or possibly acting as primitives for  
 a GUI library. I suppose 2D games could use it.
This is what Aurora is intended to be. The 2D part could be used as the building blocks for a GUI or games or whatever else you can come up with. Aurora is intended to be a general-purpose graphics library for D. It should be usable on any device/platform that D supports or will support.
 3. High quality 2D, SVG-style renderer, capable of rendering to  
 arbitrary bitmaps. Not sure what the point of this is, or who would use  
 it. At any rate, it seems sufficiently specialised that it doesn't  
 belong in a core library.
I tend to agree that SVG is a bit specialized for our purposes with Aurora. If someone wants to implement an SVG renderer on top of Aurora though, that would be valid and useful but not within the scope of Aurora itself.
 If it were up to me (I know, it isn't) - I would create a graphics  
 system for D based on a modern web-browser's Javascript DOM. I daresay  
 not everything would map well to D, but I would start with that, and  
 adapt it where necessary. I would include WebGL 2.0 support, mouse  
 handling callbacks etc.
Do you mean the HTML DOM? This would be similar to XAML, and both are UX languages which are unrelated to the purpose and goal of Aurora. And I don't think this would translate well into D code. One of the goals is to use idiomatic D as much as possible.
 But that's just me :-)

 Best Regards

 Matt Taylor
-- Adam Wilson IRC: LightBender Aurora Project Coordinator
Jan 13 2014
next sibling parent "Ola Fosheim Grøstad" writes:
On Tuesday, 14 January 2014 at 02:32:01 UTC, Adam Wilson wrote:
 I tend to agree that SVG is a bit specialized for our purposes
This is a misconception. SVG is a generic scene-graph-like DOM, much in the same vein as OpenInventor and Flash (or Coin3D, which is a free implementation of OpenInventor). SVG is a hierarchy of transformations and shapes. It is very close to being a retained-mode version of Display Postscript, which IS the graphics library of NeXT and very close to the generic graphics libraries of OS X (Quartz), which are heavily based on NeXT. (And probably Direct2D too.)
Jan 14 2014
prev sibling parent reply "Matt Taylor" <taylorius gmail.com> writes:
On Tuesday, 14 January 2014 at 02:32:01 UTC, Adam Wilson wrote:
 On Mon, 13 Jan 2014 15:46:42 -0800, Matt Taylor 
 <taylorius gmail.com> wrote:
 2. 2D/3D Graphics interface to underlying hardware. This can 
 be useful - though it's not clear to me that the 2D part would 
 be terribly useful on its own, without 3D OpenGL support, or 
 possibly acting as primitives for a GUI library. I suppose 2D 
 games could use it.
This is what Aurora is intended to be. The 2D part could be used as the building blocks for a GUI or games or whatever else you can come up with. Aurora is intended to be a general-purpose graphics library for D. It should be usable on any device/platform that D supports or will support.
Is Aurora exclusively 2D? Or will 3D get a look in?
 If it were up to me (I know, it isn't) - I would create a 
 graphics system for D based on a modern web-browser's 
 Javascript DOM. I daresay not everything would map well to D, 
 but I would start with that, and adapt it where necessary. I 
 would include WebGL 2.0 support, mouse handling callbacks etc.
Do you mean the HTML DOM? This would be similar to XAML, and both are UX languages which are unrelated to the purpose and goal of Aurora. And I don't think this would translate well into D code. One of the goals is to use idiomatic D as much as possible.
Definitely not. The web browser's DOM is a completely separate entity from HTML (even though HTML is commonly used to populate it on a webpage). A webpage can be constructed entirely programmatically in JavaScript. Personally I wouldn't allow HTML, XAML or any other ML within a mile of my code. :-) No, I'm definitely talking about pure D. What we take from the DOM is its structure and capabilities (i.e. the JavaScript DOM API), and then reinterpret that API into idiomatic D.

The thing with a 2D graphics API is deciding what level you want it to work at. For example, SDL is a 2D graphics API of sorts: it gives you a pixel array, and you can set each pixel to whatever colour you like. Meanwhile, at the other end of the scale, you have a system with the capabilities of the JavaScript DOM (see above), or the Flash player. This system can composite multiple 2D layers, and keep track of the contents of those layers.

Having said all that, I've a feeling that my thoughts are heading in a different direction to what you had planned :-) so forgive me if I'm merely distracting you from the task at hand. Cheers Matt
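To make the "what level" question concrete, here is a toy retained-mode node tree in the spirit of the browser-DOM idea above. Every name here is invented for illustration; a real library would composite pixels where this demo only walks the tree.

```d
// Sketch of the high-level end of the scale: a tree of nodes
// that the library retains and composites, DOM-style.
class Node
{
    Node[] children;
    double x = 0, y = 0; // translation relative to the parent

    Node appendChild(Node child)
    {
        children ~= child;
        return child; // returning the child allows chaining, DOM-style
    }

    /// Depth-first walk; a real renderer would composite each
    /// node here instead of just counting them.
    size_t count()
    {
        size_t n = 1;
        foreach (c; children)
            n += c.count();
        return n;
    }
}

void main()
{
    auto root = new Node;
    auto layer = root.appendChild(new Node); // a 2D layer
    layer.appendChild(new Node);             // a shape on that layer
    assert(root.count() == 3);
}
```

The pixel-array end of the scale (SDL-style) needs none of this; the retained end keeps the tree so it can recomposite layers on its own.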
Jan 14 2014
parent reply "Ola Fosheim Grøstad" writes:
On Tuesday, 14 January 2014 at 10:44:57 UTC, Matt Taylor wrote:
 Meanwhile, at the other end of the scale, you have a system 
 with the capabilities of the Javascript DOM (see above), or the 
 Flash player. This system can composite multiple 2D layers, and 
 keep track of the contents of those layers.
Yes, this is the scene graph approach. This is basically SVG: a hierarchy of transforms, shapes and interactive nodes; a retained-mode version of Display Postscript. Although OpenInventor has a more powerful model, perhaps (Coin3D is freely available as a starting point). The Flash model is formalized as a DOM in StageXL: http://www.stagexl.org/ Another scene graph that was used in X11 servers is PHIGS.
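The "hierarchy of transforms" at the heart of such a scene graph can be sketched as composing 2D affine matrices while walking down the tree. This is only an illustration; the type and its names are invented, not from any library mentioned here.

```d
// Minimal 2D affine transform to illustrate composing a
// hierarchy of transformations down a scene graph.
struct Affine2
{
    // Column-major 2x3 matrix:
    // | a c e |
    // | b d f |
    double a = 1, b = 0, c = 0, d = 1, e = 0, f = 0;

    static Affine2 translation(double tx, double ty)
    {
        return Affine2(1, 0, 0, 1, tx, ty);
    }

    /// Compose: apply `r` first, then this transform.
    Affine2 opBinary(string op : "*")(Affine2 r)
    {
        return Affine2(
            a * r.a + c * r.b, b * r.a + d * r.b,
            a * r.c + c * r.d, b * r.c + d * r.d,
            a * r.e + c * r.f + e, b * r.e + d * r.f + f);
    }
}

void main()
{
    // Walking parent -> child accumulates parent * child.
    auto world = Affine2.translation(10, 0) * Affine2.translation(0, 5);
    assert(world.e == 10 && world.f == 5);
}
```

Each node stores only its local transform; the renderer multiplies down the tree to get world coordinates, which is what makes moving a whole subtree a one-field change.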
Jan 14 2014
next sibling parent "ed" <sillymongrel gmail.com> writes:
On Tuesday, 14 January 2014 at 11:10:36 UTC, Ola Fosheim Grøstad 
wrote:
 On Tuesday, 14 January 2014 at 10:44:57 UTC, Matt Taylor wrote:
 Meanwhile, at the other end of the scale, you have a system 
 with the capabilities of the Javascript DOM (see above), or 
 the Flash player. This system can composite multiple 2D 
 layers, and keep track of the contents of those layers.
Yes, this is the scene graph approach. This is basically SVG, a hierarchy of transforms, shapes and interactive nodes. A retained mode version of Display Postscript. Although OpenInventor has a more powerful model, perhaps (Coin3D is freely available as a starting point). The Flash model is formalized as as a DOM in StageXL: http://www.stagexl.org/ Another scene graph that was used in X11 servers is PHIGS.
No, please, not PHIGS... I had to use PHIGS way back when, before we switched to Inventor. Inventor is a much nicer API. But whatever you guys come up with will obviously be well thought out. Cannot wait to see it progress further. Cheers, Ed
Jan 14 2014
prev sibling parent reply "Matt Taylor" <taylorius gmail.com> writes:
On Tuesday, 14 January 2014 at 11:10:36 UTC, Ola Fosheim Grøstad 
wrote:
 On Tuesday, 14 January 2014 at 10:44:57 UTC, Matt Taylor wrote:
 Meanwhile, at the other end of the scale, you have a system 
 with the capabilities of the Javascript DOM (see above), or 
 the Flash player. This system can composite multiple 2D 
 layers, and keep track of the contents of those layers.
Yes, this is the scene graph approach. This is basically SVG, a hierarchy of transforms, shapes and interactive nodes. A retained mode version of Display Postscript. Although OpenInventor has a more powerful model, perhaps (Coin3D is freely available as a starting point).
Indeed, though OpenInventor is a 3D scene graph of course. I don't think it's advisable to try and shoehorn 2D and 3D into one system. For my money, web browsers have it about right - a standardised DOM for 2D (albeit with the odd 3D css trick available these days) and WebGL for the "real" 3D. Not only that, but loads of people are already familiar with how they work. Cheers Matt
Jan 14 2014
parent reply "Ola Fosheim Grøstad" writes:
On Tuesday, 14 January 2014 at 12:02:45 UTC, Matt Taylor wrote:
 Indeed, though OpenInventor is a 3D scene graph of course. I 
 don't think it's advisable to try and shoehorn 2D and 3D into 
 one system.
I agree, it is better to do 2D well. If you mix 2D with 3D, then it is better to do it as a projection of 2D onto 3D surfaces, the way POV-Ray does it in its scene graph (2D is extruded into 3D textures). Besides, things are happening fast now on the hardware end with real-time ray tracing on high-end hardware (OctaneRender, Brigade, etc.). It is just a matter of time before scene graphs take over 3D anyway...
Jan 14 2014
parent reply "Ola Fosheim Grøstad" writes:
A small example of a real time path tracer in webgl:

http://madebyevan.com/webgl-path-tracing/

A lot more fun than regular 3D or 2D. If combined with caching 
surfaces, it probably could be used to build a glossy 2.5D UI.
Jan 14 2014
parent reply Xavier Bigand <flamaros.xavier gmail.com> writes:
Le 14/01/2014 16:35, "Ola Fosheim Grøstad" 
<ola.fosheim.grostad+dlang gmail.com>" a écrit :
 A small example of a real time path tracer in webgl:

 http://madebyevan.com/webgl-path-tracing/

 A lot more fun than regular 3D or 2D. If combined with caching surfaces,
 it probably could be used to build a glossy 2.5D UI.
Do you know this blog : http://raytracey.blogspot.fr/ ?
Jan 14 2014
parent reply "Ola Fosheim Grøstad" writes:
On Tuesday, 14 January 2014 at 19:00:06 UTC, Xavier Bigand wrote:
 Do you know this blog : http://raytracey.blogspot.fr/ ?
Yeah, nice blog! I look at it every once in a while. :-) The Brigade demos are impressive, but they are of course scene-optimized. I've been keeping tabs on the original author of Brigade, Jaco Bikker (http://igad.nhtv.nl/~bikker/), for over a decade. He wrote some nice tutorials on fast fake sky/cloud simulations in the late 90s, I think. I believe his path tracing engine Arauna has been used for teaching game design, so it isn't far-fetched to have a D library with a path tracer for experimental graphics. And a path/ray tracer can of course also do 2D stuff like beziers by sampling. There are shaders out there that do this.
Jan 14 2014
parent reply Xavier Bigand <flamaros.xavier gmail.com> writes:
Le 14/01/2014 22:42, "Ola Fosheim Grøstad" 
<ola.fosheim.grostad+dlang gmail.com>" a écrit :
 On Tuesday, 14 January 2014 at 19:00:06 UTC, Xavier Bigand wrote:
 Do you know this blog : http://raytracey.blogspot.fr/ ?
Yeah, nice blog! I look at it every once in a while. :-) The Brigade demos are impressive, but they are of course scene optimized. I've been keeping tabs on the original author of Brigade, Jaco Bikker (http://igad.nhtv.nl/~bikker/) for over a decade. He wrote some nice tutorials on fast fake sky/cloud simulations in the late 90s I think. I believe his path tracing engine Aruana has been used for teaching game design, so it isn't far fetched to have a D library with a path tracer for experimental graphics. And a path/ray tracer can of course also do 2D stuff like beziers by sampling. There are shaders out that does this.
Sadly, I don't think it's really usable even for a GUI, if it has to run on all current devices.
Jan 14 2014
parent reply "Ola Fosheim Grøstad" writes:
On Tuesday, 14 January 2014 at 21:59:13 UTC, Xavier Bigand wrote:
 Sadly, I don't think it's really usable even for a GUI, if it 
 has to run on all current devices.
It would need either a modern GPU or a fast CPU, like an i5/i7. But with caching of 2D surfaces the performance requirements are not as high. (You prerender "button unpressed", "button with motion blur", "button pressed", etc.)
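The prerender-and-cache idea above can be sketched as a map from widget state to a cached surface, so the expensive render runs at most once per state. `Surface` and `render` are stand-ins here, not real APIs.

```d
// Sketch of caching prerendered surfaces per widget state.
enum ButtonState { unpressed, hover, pressed }

struct Surface { int w, h; } // placeholder for a pixel buffer

// Stand-in for the expensive path-traced render of one state.
Surface render(ButtonState s) { return Surface(64, 24); }

struct SurfaceCache
{
    Surface[ButtonState] cache;
    size_t misses; // how many times we actually had to render

    Surface get(ButtonState s)
    {
        if (auto p = s in cache)
            return *p;          // cheap path: reuse the surface
        ++misses;
        return cache[s] = render(s); // expensive path, once per state
    }
}

void main()
{
    SurfaceCache c;
    c.get(ButtonState.pressed);
    c.get(ButtonState.pressed); // served from cache, no re-render
    assert(c.misses == 1);
}
```

With a small, enumerable state space per widget, the renderer's cost is paid up front and the UI itself only composites cached surfaces.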
Jan 14 2014
parent reply "Matt Taylor" <taylorius gmail.com> writes:
On Tuesday, 14 January 2014 at 22:28:00 UTC, Ola Fosheim Grøstad 
wrote:
 On Tuesday, 14 January 2014 at 21:59:13 UTC, Xavier Bigand 
 wrote:
 Sadly I don't think it's really usable even for GUI, if it 
 have to run a all current devices.
It would need either a modern GPU or a fast CPU, like i5/i7. But with caching of 2D surfaces the performance requirements are not as high. (You prerender "button unpressed", "button with motion blur", "button pressed" etc).
For some reason sledgehammers and nuts come to mind! :-) Arauna / Brigade is pretty cool, mind you. My company makes a mental ray compatible pseudo-real-time visualizer. We use Intel's Embree for our ray intersection. It's CPU-only, but very fast, and faster still if you couple it with ISPC. Cheers Matt
Jan 15 2014
parent "Ola Fosheim Grøstad" writes:
On Wednesday, 15 January 2014 at 09:55:36 UTC, Matt Taylor wrote:
 For some reason sledgehammers and nuts come to mind! :-)
Monte Carlo based sampling is of course a sledge hammer, but it works well with nuts if you don't mind the noise!
 Arauna / Brigade is pretty cool mind you. My company makes a 
 mental ray compatible pseudo real-time visualizer. We use 
 Intel's Embree for our ray intersection. It's CPU only, but 
 very fast - and faster still if you couple it with ISPC.
That's great! And thanks for the info, I didn't know about Embree and ISPC. (So Intel has an LLVM-based compiler for SSE... :^)
Jan 15 2014
prev sibling next sibling parent "Rikki Cattermole" <alphaglosined gmail.com> writes:
I have to say this basically has made DOOGLE obsolete; however, 
here are some things I have learnt from it:
- Separate out OpenGL implementation abstractions, e.g. WebGL vs 
desktop GL. This may not be required because the aim is towards 
non-web-based output, but I think something like GWT[0] would 
bring an interesting dimension to such a library. Also include 
the OpenGL utility stuff, including textures etc., in this, 
keeping it abstracted.
- If you think you're going 3D, do it first; it's harder (yes), 
but going 2D from that is pretty much load image into texture, 
load mapping coords and display.
- If you can, explore some kind of factory mechanism; I never did, 
but versioning implementations between sub-packages was 
absolutely messy. Perhaps a registration system. That works 
quite well for my web service framework with routes, models, 
update functions, etc.
- From experience on Windows, the client window area is a little 
buggy. You may have to make up for that by making sure it's the 
exact size requested. For an idea of what I did, check out [1][2].

Some ideas I had for DOOGLE but really didn't even get close to 
doing:
- Be able to (out of process) query and manipulate the GUI with a 
permission mechanism.
- Output HTML/CSS/JS with a routing mechanism, working like web 
server routes. From what I have considered, the context would 
need to change per request, and would hold e.g. the actual request.

[0] http://www.gwtproject.org/
[1] 
https://github.com/rikkimax/DOOGLE/blob/master/resources/shaders/button_popup.frag
[2] 
https://github.com/rikkimax/DOOGLE/blob/master/source/StandardPlatformWindow/doogle/window/opengl/window_win.d#L28
Jan 06 2014
prev sibling next sibling parent reply "Namespace" <rswhite4 googlemail.com> writes:
On Monday, 6 January 2014 at 04:11:07 UTC, Adam Wilson wrote:
 Hello Fellow D Heads,
 I know that the community has worked through a few of the 
 problems involved. For example, I can't remember who wrote it, 
 but I've seen a module floating around that can create a window 
 in a cross-platform manner, and I know Mike Parker has been 
 heavily involved in graphics for D.
You mean Dgame? http://dgame-dev.de/ It's still in development but the next release is coming. I would be interested in helping. But I only have experience with OpenGL, not DirectX.
Jan 06 2014
parent reply Mike Parker <aldacron gmail.com> writes:
On 1/6/2014 6:31 PM, Namespace wrote:
 On Monday, 6 January 2014 at 04:11:07 UTC, Adam Wilson wrote:
 Hello Fellow D Heads,
 I know that the community has worked through a few of the problems
 involved. For example, I can't remember who wrote it, but I've seen a
 module floating around that can create a window in a cross-platform
 manner,
 You mean Dgame? http://dgame-dev.de/
I think he's referring to simpledisplay: https://github.com/adamdruppe/misc-stuff-including-D-programming-language-web-stuff/blob/master/simpledisplay.d
Jan 06 2014
parent "Adam D. Ruppe" <destructionator gmail.com> writes:
On Monday, 6 January 2014 at 12:57:04 UTC, Mike Parker wrote:
 I think he's referring to simpledisplay:

 https://github.com/adamdruppe/misc-stuff-including-D-programming-language-web-stuff/blob/master/simpledisplay.d
Yea, my simpledisplay.d (and its dependency, color.d) has some code for creating a window, drawing on it (including some OpenGL stuff now), getting input, and drawing on images using OS native things. color.d has a color struct, some hsl <-> rgb functions, and a basic image framebuffer class. Also in the repo is png.d which can load and save many png files, bmp.d for bmps, and minigui.d which builds on top of simpledisplay to add some basic gui widgets. There's also stb_truetype.d which is a port of the public domain C library for loading ttfs in a single file without a dependency on freetype or anything. I still haven't uploaded image_basicdrawing.d because it sucks and I plagiarized one of the functions, but that thing uses the class in color.d and adds stuff like "draw line", "draw circle", etc. to it so you can draw without a backing screen. While simpledisplay.d works reasonably well, its code is pretty ugly (half the file is just bindings to Win32 or Xlib), and minigui.d is still far from complete. Most of my work lately has been web UIs, so I just haven't had the push to finish it...
Jan 06 2014
prev sibling next sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2014-01-06 05:10, Adam Wilson wrote:
 Hello Fellow D Heads,

 Recently, I've been working to evaluate the feasibility and
 reasonability of building out a binding to Cinder in D. And while it is
 certainly feasible to wrap Cinder, that a binding would be necessarily
 complex and feel very unnatural in D.

 So after talking it over with Walter and Andrei, we feel that, while we
 like how Cinder is designed and would very much like to have something
 like it available in D, wrapping Cinder is not the best approach in the
 long-term.

 With that in mind, we would like to start a discussion with interested
 parties about building a graphics library in the same concept as Cinder,
 but using an idiomatic D implementation from the ground up. Walter has
 suggested that we call it Aurora, and given the visual connotations
 associated with that name, I think it is most appropriate for this project.

[SNIP]
 So with the above framework in mind, let's talk!
I like it :). Every time I read a proposal that has anything to do with graphics I always start to think there will be problems on Mac OS X. Depending on how much interaction with the platform is needed, it always comes back to the same problem: interfacing with Objective-C. It's verbose, cumbersome and annoying to interface with Objective-C without language support. We badly need D/Objective-C [1]. [1] http://wiki.dlang.org/DIP43 -- /Jacob Carlborg
Jan 06 2014
parent reply "Adam Wilson" <flyboynw gmail.com> writes:
On Mon, 06 Jan 2014 02:38:21 -0800, Jacob Carlborg <doob me.com> wrote:

 On 2014-01-06 05:10, Adam Wilson wrote:
 Hello Fellow D Heads,

 Recently, I've been working to evaluate the feasibility and
 reasonability of building out a binding to Cinder in D. And while it is
 certainly feasible to wrap Cinder, that a binding would be necessarily
 complex and feel very unnatural in D.

 So after talking it over with Walter and Andrei, we feel that, while we
 like how Cinder is designed and would very much like to have something
 like it available in D, wrapping Cinder is not the best approach in the
 long-term.

 With that in mind, we would like to start a discussion with interested
 parties about building a graphics library in the same concept as Cinder,
 but using an idiomatic D implementation from the ground up. Walter has
 suggested that we call it Aurora, and given the visual connotations
 associated with that name, I think it is most appropriate for this  
 project.

 [SNIP]
 So with the above framework in mind, let's talk!
I like it :). Every time I read a proposal that has anything to do with graphics I always start to think there will be problems on Mac OS X. Depending on how much interaction with the platform is needed, it always comes back to the same problem: interfacing with Objective-C. It's verbose, cumbersome and annoying to interface with Objective-C without language support. We badly need D/Objective-C [1]. [1] http://wiki.dlang.org/DIP43
That's why I want to keep the dependencies limited and low-level; if we only use OpenGL on OSX, we can get away with a C interface. -- Adam Wilson IRC: LightBender Aurora Project Coordinator
Jan 06 2014
parent reply Jacob Carlborg <doob me.com> writes:
On 2014-01-07 06:45, Adam Wilson wrote:

 That's why I want to keep the dependencies limited and low-level, if we
 only use OpenGL on OSX when can get away with a C interface.
Unfortunately you cannot. You need Objective-C just to bring up a basic window. If you also want all the rest (application menu, dock icon and so on), which is expected from every application on Mac OS X, it requires a surprising amount of code just to get the basics up and running. That is assuming you want to avoid app bundles and plist config files. Just have a look at what was required to get the standard menus and dock icons for a Derelict application using SDL: http://www.dsource.org/projects/derelict/browser/branches/Derelict2/DerelictSDL/derelict/sdl/macinit Most of those files are bindings; here is the actual code: http://www.dsource.org/projects/derelict/browser/branches/Derelict2/DerelictSDL/derelict/sdl/macinit/SDLMain.d -- /Jacob Carlborg
Jan 07 2014
parent reply "Ola Fosheim Grøstad" writes:
On Tuesday, 7 January 2014 at 08:01:44 UTC, Jacob Carlborg wrote:
 Unfortunately you cannot. You need Objective-C just to bring up 
 a basic window. If you also want all the rest (application 
 menu, dock icon and so on) which is expect from every 
 application on Mac OS X it requires surprisingly a lot of code 
 just to get the basics up and running.
Yes, but you can call D as a library from an Objective-C runtime. 1. Objective-C main() calls D main(). 2. D main creates an OSApplication facade and calls run(someDfunc) on it. 3. The Objective-C event loop calls back to someDfunc.
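The D side of that handoff might look like the sketch below. The names (OSApplication, someDfunc) are the hypothetical ones from the post, not a real API; the "event loop" is faked with a loop since the real one lives in Objective-C.

```d
// Sketch of the D side of the Objective-C -> D handoff.
alias FrameCallback = extern (C) void function();

struct OSApplication
{
    /// In a real app this would hand the callback to the native
    /// Objective-C event loop; here we just invoke it a few times.
    void run(FrameCallback cb)
    {
        foreach (_; 0 .. 3)
            cb();
    }
}

__gshared int frames;

extern (C) void someDfunc()
{
    ++frames; // step 3: the event loop calls back into D
}

void main() // step 1: control arrives from the Objective-C side
{
    OSApplication app;   // step 2: create the facade
    app.run(&someDfunc); // hand the D callback to the "event loop"
    assert(frames == 3);
}
```

The key detail is that the callback is `extern (C)`, so the Objective-C side can hold and invoke it without knowing anything about D.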
Jan 07 2014
parent reply Jacob Carlborg <doob me.com> writes:
On 2014-01-07 09:18, "Ola Fosheim Grøstad" 
<ola.fosheim.grostad+dlang gmail.com>" wrote:

 Yes, but you can call D as a library from an Objective-C runtime.

 1. Objective-C main() calls D main().
 2. D main creates OSApplication facade and calls run(someDfunc) on it.
 3. Objective-C event loop calls back to someDfunc
Yes, sure. In theory it sounds easy. Although I suspect access to more of the Objective-C APIs is necessary. That would require either more Objective-C code or using D, which comes back to the original problem. -- /Jacob Carlborg
Jan 07 2014
parent reply "Ola Fosheim Grøstad" writes:
On Tuesday, 7 January 2014 at 14:31:07 UTC, Jacob Carlborg wrote:
 On 2014-01-07 09:18, "Ola Fosheim Grøstad" Yes, sure. In theory 
 it sounds easy. Although I suspect access to more of the 
 Objective-C API's are necessary. That would either require more 
 Objective-C code or using D, which comes back to the original 
 problem.
Yep, you're right. You need to set up the OpenGL context from Objective-C, and the callbacks and... and... ;-) So getting the OSApplication right for all platforms (uniform facade) is an abstraction challenge, but the OpenGL calls are pure C, so the Objective-C stuff is mostly about initialization and setting up hooks for exception handling (like being put to sleep, being asked to free memory/resources,...) You probably don't need more than 500-1000 lines of Objective-C.
Jan 07 2014
parent reply "Adam D. Ruppe" <destructionator gmail.com> writes:
A while ago, someone contributed D code to my simpledisplay.d 
that makes an obj-c window. Sadly though, I haven't maintained it 
at all* and I'm sure it no longer compiles. But, it might be 
fixable, at least to get the basics up again.

* I don't have a mac. A few months ago, I tried to... acquire a 
copy of OSX to run in a VM for these purposes, but I couldn't 
actually get it to boot and gave up on it.
Jan 07 2014
parent reply Jacob Carlborg <doob me.com> writes:
On 2014-01-07 17:10, Adam D. Ruppe wrote:
 A while ago, someone contributed D code to my simpledisplay.d that make
 an obj-c window. Sadly though, I haven't maintained it at all* and I'm
 sure it no longer compiles. But, it might be fixable, at least to get
 the basics up again.
It looks like it will set up the basics, but no more than that. There are some things it doesn't handle and some things it doesn't handle correctly. It uses the ugly, verbose and cumbersome Objective-C runtime functions, which are C functions, and which I would really like to _not_ have to write; that is the whole reason for my answer. It also uses some CoreFoundation functions, which are also C functions, to avoid using some of the Objective-C APIs. -- /Jacob Carlborg
Jan 07 2014
parent reply "Ola Fosheim Grøstad" writes:
On Tuesday, 7 January 2014 at 17:16:21 UTC, Jacob Carlborg wrote:
 It looks like it will setup the basics, but no more than that. 
 There are some things it doesn't handle and some things it's 
 not handling correctly.
I honestly think it will be easier to maintain and keep up to date with changes in OS X if done entirely in Objective-C. If you are interested in that, then I'd be interested in cooperating on a runtime wrapper that can be used both from C++ and D, for OS X 10.x+ and iOS 5.1+.
Jan 07 2014
parent Jacob Carlborg <doob me.com> writes:
On 2014-01-07 19:12, "Ola Fosheim Grøstad" 
<ola.fosheim.grostad+dlang gmail.com>" wrote:

 I honestly think it will be easier to maintain and keep up to date with
 changes in OS-X if done entirely in Objective-C. If you are interested
 in that then I'd be interested in cooperating on a runtime wrapper that
 can be used both from C++ and D for OS-X 10.x+ and ios5.1+.
I don't know, I have quite a lot on my plate currently. -- /Jacob Carlborg
Jan 07 2014
prev sibling next sibling parent reply "ponce" <contact gam3sfrommars.fr> writes:
On Monday, 6 January 2014 at 04:11:07 UTC, Adam Wilson wrote:
 If you are interested in helping with a Cinder like library for 
 D and/or have code you'd like to contribute, let's start 
 talking and see what happens.
First I must say I dislike the Cinder concept, because C++ frameworks like this tend to have an extremely large scope (see also: JUCE). I have been working on GFM since 2012 (https://github.com/p0nce/gfm), which is 100% public domain; feel free to take anything from it. There is some overlap with Cinder's features: http://p0nce.github.io/gfm/ It seems that you want graphics API abstraction, yet Cinder has none of this.
Jan 06 2014
parent reply "Adam Wilson" <flyboynw gmail.com> writes:
On Mon, 06 Jan 2014 03:22:25 -0800, ponce <contact gam3sfrommars.fr> wrote:

 On Monday, 6 January 2014 at 04:11:07 UTC, Adam Wilson wrote:
 If you are interested in helping with a Cinder like library for D  
 and/or have code you'd like to contribute, let's start talking and see  
 what happens.
First I must say I dislike the Cinder concept because C++ frameworks like this tend to have an extremely large scope (See_also: JUCE).
All graphics APIs tend to have a large scope; it's a function of the complexity of the task. I don't see this as an inherently bad thing, just something that we need to think about while designing it.
 I work on GFM since 2012 (https://github.com/p0nce/gfm) which is 100%  
 public domain, feel free to take anything from it. There is some overlap  
 with Cinder's features: http://p0nce.github.io/gfm/

 It seems that you want graphics API abstraction, yet Cinder has none of  
 this.
Um, this last statement makes no sense, that's pretty much exactly what Cinder is... -- Adam Wilson IRC: LightBender Aurora Project Coordinator
Jan 06 2014
parent reply "ponce" <contact gam3sfrommars.fr> writes:
On Tuesday, 7 January 2014 at 05:48:38 UTC, Adam Wilson wrote:
 It seems that you want graphics API abstraction, yet Cinder 
 has none of this.
Um, this last statement makes no sense, that's pretty much exactly what Cinder is...
What I meant is "Cinder does not seem to dispatch to different graphics APIs" (OpenGL vs DirectX vs ...). It targets OpenGL exclusively, from what I read on the feature list. So I was wondering about the discussion of different graphics API backends.
Jan 07 2014
parent "Adam Wilson" <flyboynw gmail.com> writes:
On Tue, 07 Jan 2014 00:00:14 -0800, ponce <contact gam3sfrommars.fr> wrote:

 On Tuesday, 7 January 2014 at 05:48:38 UTC, Adam Wilson wrote:
 It seems that you want graphics API abstraction, yet Cinder has none  
 of this.
Um, this last statement makes no sense, that's pretty much exactly what Cinder is...
What I meant is "Cinder does not seem to dispatch to different graphics API" (OpenGL vs DirectX vs ...). It targets OpenGL exclusively from what I read on the feature list. So I was wondering about the discussion on different graphics API backends.
Ahh I see, my experience with Cinder caused me to short-circuit some useful info. Cinder has a DirectX backend in the latest dev branches. My apologies for the confusion. -- Adam Wilson IRC: LightBender Aurora Project Coordinator
Jan 07 2014
prev sibling next sibling parent reply "Dejan Lekic" <dejan.lekic gmail.com> writes:
So Cinder is basically an OpenCV competitor?
Jan 06 2014
parent reply "Adam Wilson" <flyboynw gmail.com> writes:
On Mon, 06 Jan 2014 03:46:58 -0800, Dejan Lekic <dejan.lekic gmail.com>  
wrote:

 So Cinder is basically an OpenCV competition?
Yes, OpenCV is in the same class of tools as Cinder, although the APIs are different. -- Adam Wilson IRC: LightBender Aurora Project Coordinator
Jan 06 2014
parent reply "David Nadlinger" <code klickverbot.at> writes:
On Tuesday, 7 January 2014 at 05:50:04 UTC, Adam Wilson wrote:
 On Mon, 06 Jan 2014 03:46:58 -0800, Dejan Lekic 
 <dejan.lekic gmail.com> wrote:

 So Cinder is basically an OpenCV competition?
Yes, OpenCV is in the same class of tools as Cinder, although the API's are different.
Isn't OpenCV mainly a collection of computer vision algorithms, maybe including some helpers to build an experimental UI around them? David
Jan 06 2014
parent "Adam Wilson" <flyboynw gmail.com> writes:
On Mon, 06 Jan 2014 22:22:07 -0800, David Nadlinger <code klickverbot.at>  
wrote:

 On Tuesday, 7 January 2014 at 05:50:04 UTC, Adam Wilson wrote:
 On Mon, 06 Jan 2014 03:46:58 -0800, Dejan Lekic <dejan.lekic gmail.com>  
 wrote:

 So Cinder is basically an OpenCV competition?
Yes, OpenCV is in the same class of tools as Cinder, although the API's are different.
Isn't OpenCV mainly a collection of computer vision algorithms, maybe including some helpers to build an experimental UI around them? David
Indeed it is, although the way the front page talks about it is misleading... I stand corrected. :-) -- Adam Wilson IRC: LightBender Aurora Project Coordinator
Jan 06 2014
prev sibling next sibling parent reply "Ola Fosheim Grøstad" writes:
On Monday, 6 January 2014 at 04:11:07 UTC, Adam Wilson wrote:
 The logical phases as I can see them are as follows, but please 
 suggest changes:

 - Windowing and System Interaction (Including 
 Keyboard/Mouse/Touch Input)
 - Basic Drawing (2D Shapes, Lines, Gradients, etc)
 - Image Rendering (Image Loading, Rendering, Modification, 
 Saving, etc.)
 - 3D Drawing (By far the most complex stage, so we'll leave it 
 for last)
I suggest you start working the other way, because if you want performance you will most likely want everything in the render path to be based on shaders:

1. Fonts and monochrome icons based on distance fields.
2. All fills based on shaders, with the ability to convert D into GLSL etc.
3. Render to texture, with a caching mechanism.
4. Simplistic mesh rendering from native D arrays.
5. The ability to attach native 3D code paths (GL/DX) to the engine for advanced meshes.

Then you build 2D on top of this. Rendering and input should be completely separate.
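Item 1 can be made concrete. Below is a hedged sketch of the distance-field idea (hypothetical names, not code from any existing library): the glyph texture stores a signed distance to the outline, remapped so 0.5 lies on the contour, and the fragment shader thresholds the sampled value. That keeps edges crisp under scaling and makes mipmapping cheap. The C++ models the shader math on the CPU; the GLSL string shows the typical shader form.

```cpp
#include <algorithm>
#include <cmath>

// CPU model of distance-field glyph rendering: the texture stores a
// signed distance to the glyph outline remapped to [0,1], with 0.5 on
// the contour. The shader thresholds it with a small smoothing band.
float smoothstepf(float e0, float e1, float x) {
    float t = std::clamp((x - e0) / (e1 - e0), 0.0f, 1.0f);
    return t * t * (3.0f - 2.0f * t);
}

// 'sample' is the value read from the distance texture; 'width' is the
// smoothing band in distance units (an assumed tuning parameter).
float glyphAlpha(float sample, float width = 0.05f) {
    return smoothstepf(0.5f - width, 0.5f + width, sample);
}

// The equivalent fragment shader might look like this (GLSL sketch):
const char* distanceFieldFrag = R"(
    uniform sampler2D glyphTex;
    varying vec2 uv;
    void main() {
        float d = texture2D(glyphTex, uv).a;
        gl_FragColor = vec4(1.0, 1.0, 1.0, smoothstep(0.45, 0.55, d));
    }
)";
```

The same thresholding works for monochrome icons, which is why the two are grouped together above.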
Jan 06 2014
parent reply "Adam Wilson" <flyboynw gmail.com> writes:
On Mon, 06 Jan 2014 07:49:27 -0800, Ola Fosheim Grøstad <ola.fosheim.grostad+dlang gmail.com> wrote:

 On Monday, 6 January 2014 at 04:11:07 UTC, Adam Wilson wrote:
 The logical phases as I can see them are as follows, but please suggest changes:

 - Windowing and System Interaction (Including Keyboard/Mouse/Touch Input)
 - Basic Drawing (2D Shapes, Lines, Gradients, etc)
 - Image Rendering (Image Loading, Rendering, Modification, Saving, etc.)
 - 3D Drawing (By far the most complex stage, so we'll leave it for last)

 I suggest you start working the other way. Because if you want performance you will most likely want everything in the render path to be based on shaders:

 1. Fonts and monochrome icons based on distance-fields.
Typically it's easier to use TrueType/OpenType fonts, although pathing support for icons is a must.
 2. All fills based on shaders, with the ability to convert D into GLSL etc.
If possible I want to hand this off to a system library like OpenGL or DirectX.
 3. Render to texture, with caching mechanism.
My understanding is that it is best to let the GPU drivers handle caching, but somebody with game-dev experience might be able to shed some light.
 4. Simplistic mesh rendering from native D arrays.
Good idea.
 5. The ability to attach native 3D code paths (GL/DX) to the engine for advanced meshes.
Good idea.
 Then you build 2D on top of this.

 Rendering and input should be completely separate.
While I can't speak for the OpenGL library, from a performance standpoint using 3D drawing for 2D surfaces is actually not an efficient method for anything other than perfect squares; as soon as you want curves, performance tanks. Microsoft actually built an entire 2D/3D rendering engine using DirectX9 called MilCore. Now, DirectX9 has no native 2D capability, and MilCore is the rendering core for WPF. So how did they draw a 2D circle? They drew over 1000 triangles. Needless to say, if you do this a few dozen times you tank GPU performance on anything less than a discrete GPU of some expense. On Windows, we have access to Direct2D, which does not have those limitations. And Direct2D will inter-operate with Direct3D seamlessly. -- Adam Wilson IRC: LightBender Aurora Project Coordinator
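The triangle-count figure is easy to sanity-check. A back-of-envelope sketch (my own arithmetic, not MilCore's): approximating a circle with a triangle fan, the worst gap between a chord and the true arc (the sagitta) for n segments is r * (1 - cos(pi/n)), so inverting gives the segment count needed for a given tolerance.

```cpp
#include <cmath>

// For a triangle-fan circle of radius r (in pixels), the max deviation
// of each chord from the true arc is the sagitta e = r * (1 - cos(pi/n)).
// Inverting gives the number of segments (= triangles) needed to keep
// the error below a tolerance.
int segmentsForTolerance(double radiusPx, double tolerancePx) {
    const double kPi = 3.14159265358979323846;
    return static_cast<int>(
        std::ceil(kPi / std::acos(1.0 - tolerancePx / radiusPx)));
}
```

The count grows roughly as sqrt(r/e), so subpixel tolerance on large, high-DPI radii pushes a single circle into the hundreds or thousands of triangles, which is the performance cliff described above.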
Jan 06 2014
parent reply "Ola Fosheim Grøstad" writes:
On Tuesday, 7 January 2014 at 06:18:23 UTC, Adam Wilson wrote:

 1. Fonts and monochrome icons based on distance-fields.
Typically it's easier to use TrueType/OpenType fonts. Although Pathing support for icons is a must.
That is slow or space consuming if you want mipmapping.
 2. All fills based on shaders, with the ability to convert D 
 into GLSL etc.
If possible I want to hand this off to a system library like OpenGL or DirectX.
You need a common syntax.
 3. Render to texture, with caching mechanism.
My understanding is that it is best to let the GPU drivers handle caching, but somebody with Game Dev experience might be able to shed some light.
GPU drivers don't. How could they? You mean to pregenerate everything?
 While I can't speak for the OpenGL library, from a performance 
 standpoint using 3D drawing for 2D surfaces is actually not an 
 efficient method for anything other than perfect square, as 
 soon as you want curves performance tanks. Microsoft actually 
 built an entire 2D/3D rendering engine using DirectX9 called
DX9 targets slow fragment shaders. You should target fast fragment shaders, and you should choose the common subset of GL/DX. You create perfect solid circles by using two fragment shaders and z-buffering: first you draw all the inner triangles for all objects in the scene, then you draw all the border triangles, back to front.
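The border-pass idea can be sketched as follows, as a hedged model of what such a fragment shader computes, assuming the interpolated varyings give each fragment its pixel-space position relative to the circle centre (the inner pass needs no such math, since its triangles are fully covered):

```cpp
#include <cmath>

// Model of the border-pass fragment shader: coverage is computed
// analytically from the signed distance to the circle edge, with a
// one-pixel smoothing band, instead of relying on dense tessellation.
float circleCoverage(float px, float py, float cx, float cy, float radius) {
    float dx = px - cx, dy = py - cy;
    float d = radius - std::sqrt(dx * dx + dy * dy); // signed distance, px
    if (d >= 0.5f) return 1.0f;   // well inside the edge
    if (d <= -0.5f) return 0.0f;  // well outside
    return d + 0.5f;              // linear ramp across the edge pixel
}
```

A handful of border triangles plus this per-fragment math replaces the hundreds of tessellated triangles a pure-geometry approach needs.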
 you do this a few dozen times you tank GPU performance on 
 anything less than a discrete GPU of some expense.
Next batch of Intel CPUs will provide solid shader performance, according to Intel IIRC. You should design for what is available in your average CPU in 3 years.
 On Windows, we have access to Direct2D, which does not have 
 those limitations. And Direct2D will inter-operate with 
 Direct3D seamlessly.
And double-implement everything? It would be a solid mistake to make the engine Microsoft-centric. You should target OpenGL ES 2/3 / WebGL. That is the common denominator.
Jan 06 2014
parent reply "Ola Fosheim Grøstad" writes:
On Tuesday, 7 January 2014 at 07:12:13 UTC, Ola Fosheim Grøstad 
wrote:
 make the engine Microsoft centric. You should target OpenGL 2/3 
 ES / WebGL. That is the common denominator.
Actually, you should use a DX-compatible subset of OpenGL ES 3/WebGL 2; it is supposed to be fully compatible with OpenGL 4.3. Though, if you only want vector graphics, then the sane thing to do would be to rip out the SVG engine in Chrome or FF and create a full SVG DOM in D.
Jan 07 2014
parent reply "Adam Wilson" <flyboynw gmail.com> writes:
On Tue, 07 Jan 2014 00:05:35 -0800, Ola Fosheim Grøstad <ola.fosheim.grostad+dlang gmail.com> wrote:

 On Tuesday, 7 January 2014 at 07:12:13 UTC, Ola Fosheim Grøstad wrote:
 make the engine Microsoft centric. You should target OpenGL 2/3 ES / WebGL. That is the common denominator.

 Actually, you should use a DX compatible subset of OpenGL ES 3/WebGL 2, it is supposed to be fully compatible with OpenGL 4.3.

 Though, if you only want vector graphics then the sane thing to do would be to rip out the SVG engine in Chrome or FF and create a full SVG DOM in D.
I would like to reiterate that Aurora will not specify the back-end, so it will not be OpenGL-centric, DirectX-centric, or centric to any other API. The intention is a high-level API that can use any renderer that meets some basic requirements. There is nothing technically wrong with DirectX on Windows, and unlike OpenGL, which requires manufacturer-provided drivers, it's guaranteed to be available. -- Adam Wilson IRC: LightBender Aurora Project Coordinator
Jan 07 2014
next sibling parent reply "Ola Fosheim Grøstad" writes:
On Tuesday, 7 January 2014 at 08:31:07 UTC, Adam Wilson wrote:
 I would like to reiterate that Aurora will not specify the 
 back-end, so it will not OpenGL or DirectX, or any API centric.
You need a reference graphics pipeline or performance will suffer.
 The intention is a high-level API that can use any renderer 
 that meets some basic requirements. There is nothing 
 technically wrong with DirectX on Windows and unlike OpenGL 
 which requires manufacturer provided drivers, it's guaranteed 
 to be available.
Not sure why you wrote that, but you come across as Microsoft-biased. If you want cross-platform performance, your only choice is to pick a DX-compatible subset of OpenGL ES 2/3 and use that as your reference graphics pipeline, which is quite easy to do. If your reference implementation is based on a proprietary 2D engine, then all other platform implementations will suffer.
Jan 07 2014
parent "Ola Fosheim Grøstad" writes:
On Tuesday, 7 January 2014 at 08:50:28 UTC, Ola Fosheim Grøstad 
wrote:
 If you want cross platform performance your only choice is to 
 pick a DX compatible subset of OpenGL ES 2/3 and use that as
Just in case this isn't obvious: you need a reference implementation during API design in order to track performance and feature regressions on a wide range of systems and GPU types (e.g. mobile tile-based GPUs work differently from regular desktop GPUs). The goal of the reference is not to obtain optimal performance, but to ensure that acceptable performance and resource usage is attainable across the set of target systems without limiting the feature set.
Jan 07 2014
prev sibling parent reply Dmitry Olshansky <dmitry.olsh gmail.com> writes:
07-Jan-2014 12:30, Adam Wilson writes:
 On Tue, 07 Jan 2014 00:05:35 -0800, Ola Fosheim Grøstad
 <ola.fosheim.grostad+dlang gmail.com> wrote:
 There is nothing technically wrong with DirectX on Windows
 and unlike OpenGL which requires manufacturer provided drivers, it's
 guaranteed to be available.
Pardon, but this reads like a citation of some old crap to me. And how would you use a GPU without manufacturer-provided drivers? DX also builds on top of vendor-specific drivers. -- Dmitry Olshansky
Jan 07 2014
parent reply Iain Buclaw <ibuclaw gdcproject.org> writes:
On 7 Jan 2014 10:20, "Dmitry Olshansky" <dmitry.olsh gmail.com> wrote:
 07-Jan-2014 12:30, Adam Wilson writes:
 On Tue, 07 Jan 2014 00:05:35 -0800, Ola Fosheim Grøstad
 <ola.fosheim.grostad+dlang gmail.com> wrote:
 There is nothing technically wrong with DirectX on Windows
 and unlike OpenGL which requires manufacturer provided drivers, it's
 guaranteed to be available.
Pardon, but this reads like citation of some old crap to me. And how would you use a GPU w/o manufacturer provided drivers? DX also builds on top of vendor specific drivers.
I thought it was the other way round. As in, vendors write drivers to interface specifically with DirectX on Windows, so Microsoft doesn't have to.
Jan 07 2014
next sibling parent reply Dmitry Olshansky <dmitry.olsh gmail.com> writes:
07-Jan-2014 15:52, Iain Buclaw writes:
 On 7 Jan 2014 10:20, "Dmitry Olshansky" <dmitry.olsh gmail.com> wrote:
  >
  > 07-Jan-2014 12:30, Adam Wilson writes:
  >>
  >> On Tue, 07 Jan 2014 00:05:35 -0800, Ola Fosheim Grøstad
  >> <ola.fosheim.grostad+dlang gmail.com> wrote:
  >> There is nothing technically wrong with DirectX on Windows
  >> and unlike OpenGL which requires manufacturer provided drivers, it's
  >> guaranteed to be available.
  >
  > Pardon, but this reads like citation of some old crap to me.
  > And how would you use a GPU w/o manufacturer provided drivers?
  > DX also builds on top of vendor specific drivers.
  >

 I thought it was the other way round. As in vendors write drivers to
 interface specifically with DirectX on Windows, so Microsoft doesn't
 have to.
As with all drivers, they follow interfaces for DirectX to build upon, and the same goes for OpenGL and the way it integrates with Windows. The difference might be that DX is a huge framework, and vendors effectively write a small "core" for it. With GL the balance could be the other way around, but GL doesn't try to be all of the many facets of multimedia in the first place. (And now with DirectCompute I'm not even sure what DX wants to be, actually.) -- Dmitry Olshansky
Jan 07 2014
parent reply "Ola Fosheim Grøstad" writes:
On Tuesday, 7 January 2014 at 13:18:51 UTC, Dmitry Olshansky 
wrote:
 The difference might be in that DX is huge framework, and 
 vendors effectively write small "core" for it.
I think this is all just a misunderstanding. Adam probably just meant that DX drivers are being updated automatically by Microsoft while you have to download the GL drivers yourself. But I never meant that the graphics library should expose GL-only functionality...
 With GL the balance could be the other way around, but GL 
 doesn't try to be all of the many facets of the multimedia in 
 the first place.
 (and now with DirectCompute I'm not even sure what DX wants to 
 be actually)
Yes, and I think this is a very good point for why the reference implementation should not be in DX. You risk ending up with all other platforms having to implement DX components that are not in GL (and there are a lot of them). E.g. in GL you cannot do anything without writing your own shaders, and there is no notion of 2D-anything… Many of the GL calls and parameters are actually also legacy calls, so the REAL OpenGL ES API that you are likely to use is quite limited and bare-bones. Another reason I've already mentioned is to test feature coverage/performance on multiple platforms, which only OpenGL ES enables. Yet another reason is to allow/encourage as many people as possible to dabble with the API early on, to increase its usability. Which actually might be the most important aspect.
Jan 07 2014
parent reply "Adam Wilson" <flyboynw gmail.com> writes:
On Tue, 07 Jan 2014 05:34:08 -0800, Ola Fosheim Grøstad <ola.fosheim.grostad+dlang gmail.com> wrote:

 On Tuesday, 7 January 2014 at 13:18:51 UTC, Dmitry Olshansky wrote:
 The difference might be in that DX is huge framework, and vendors effectively write small "core" for it.

 I think this is all just a misunderstanding. Adam probably just meant that DX drivers are being updated automatically by Microsoft while you have to download the GL drivers yourself. But I never meant that the graphics library should expose GL-only functionality...

 With GL the balance could be the other way around, but GL doesn't try to be all of the many facets of the multimedia in the first place.
 (and now with DirectCompute I'm not even sure what DX wants to be actually)

 Yes, and I think this is a very good point for why the reference implementation should not be in DX. You risk ending up with all other platforms having to implement DX components that are not in GL (and there are a lot of them).
You seem very concerned that the low-level API will affect the design of the high-level API. If that happens, we've failed and need to try again. Plenty of other libraries have done this, so I don't see this as anything other than a theoretical risk that can be designed around.
 E.g. in GL you cannot do anything without writing your own shaders and there is no notion of 2D-anything… Many of the GL calls and parameters are actually also legacy calls so the REAL OpenGL ES API that you are likely to use is quite limited and bare bones.

 Another reason I've already mentioned is to test feature coverage/performance on multiple platforms which only OpenGL ES enables.
 Yet another reason is to allow/encourage as many as possible to dabble with the API early on to increase the usability of it. Which actually might be the most important aspect.
I asked Mike Parker about this at DConf 2013, with the intent of having a reference implementation. You know what he said? Don't bother. Because in the end it does not matter which implementation you start with; other APIs will look different no matter what you do, even for OGL (differing implementations), differing fonts, differing rendering pipelines, etc... His advice was to pick whatever worked best for the implementor, make the other API implementations look as close as possible, and have people submit bug reports. Eventually they will match on a per-pixel basis, but shooting for that goal out of the gate is a waste of effort. -- Adam Wilson IRC: LightBender Aurora Project Coordinator
Jan 07 2014
parent reply "Ola Fosheim Grøstad" writes:
On Tuesday, 7 January 2014 at 19:00:46 UTC, Adam Wilson wrote:
 You seem very concerned that the low-level API will effect the 
 design of the high-level API. If that happens we've failed and 
 need to try again.
No, I am afraid that you will pick a high-level ENGINE (not a HAL API) and dress it up. If you actually use a low-level API, it will be shader-based and use roughly the same pipeline. D3D and GL have feature parity if you stay away from the esoteric stuff.
 I asked Mike Parker about this at Dconf 2013, with the intent 
 of having a reference implementation. You know what he said? 
 Don't bother. Because in the end it does not matter which 
 implementation you start with, other API's will look different 
 no matter what you do, even for OGL (differing 
 implementations), differing fonts, differing rendering 
 pipelines, etc...
No. Simple shaders work roughly the same on modern hardware, though you need performance tweaks, avoid accumulated FP errors etc. If you don't depend on system fonts, fonts should not differ.
 His advice was to pick whatever worked best for the implementor 
 and make the other API implementations look as close a possible 
 and have people submit bug reports.
I am arguing in favour of having one reference implementation that is highly portable. There is no need to match up pixels to anything if there is only one solution…
Jan 07 2014
parent reply "Adam Wilson" <flyboynw gmail.com> writes:
On Tue, 07 Jan 2014 11:21:47 -0800, Ola Fosheim Grøstad <ola.fosheim.grostad+dlang gmail.com> wrote:

 On Tuesday, 7 January 2014 at 19:00:46 UTC, Adam Wilson wrote:
 You seem very concerned that the low-level API will affect the design of the high-level API. If that happens we've failed and need to try again.

 No, I am afraid that you pick a high-level ENGINE (not HAL API) and dress it up. If you actually use a low level api, it will be shader based and use roughly the same pipeline. DX3D and GL have feature parity if you stay away from the esoteric stuff.

 I asked Mike Parker about this at Dconf 2013, with the intent of having a reference implementation. You know what he said? Don't bother. Because in the end it does not matter which implementation you start with, other API's will look different no matter what you do, even for OGL (differing implementations), differing fonts, differing rendering pipelines, etc...

 No. Simple shaders work roughly the same on modern hardware, though you need performance tweaks, avoid accumulated FP errors etc. If you don't depend on system fonts, fonts should not differ.
That depends on how one defines system fonts, I suppose. I was figuring Open/TrueType fonts at least, which are all vector fonts.
 His advice was to pick whatever worked best for the implementor and make the other API implementations look as close as possible and have people submit bug reports.

 I am arguing in favour of having one reference implementation that is highly portable. There is no need to match up pixels to anything if there is only one solution…
Right, but Mike Parker has experience doing this; his opinion counts for quite a bit. His biggest point, however, is that the high-level API should be completely independent of the low-level APIs. The high-level API describes what the user wants, and it's up to the graphics API implementor to get it right. To be honest, I would rather use a 2D graphics library like Cairo that supports OpenGL on POSIX systems before we went to the trouble of making OpenGL render 2D shapes in 3D space; I've done that before, and it's not easy. One of the more difficult problems is converting 2D pixels into the Cartesian coordinate system while accounting for DPI. It's doable, but it's a LOT more work than working with Direct2D or Cairo or some other suitable API... -- Adam Wilson IRC: LightBender Aurora Project Coordinator
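For what it's worth, the arithmetic itself is small once the conventions are pinned down; the hard part is agreeing on the conventions. A hedged sketch (hypothetical helper names), assuming the Direct2D convention of 96 device-independent pixels (DIPs) per inch and the y-up clip space that GL/D3D share:

```cpp
// Hypothetical helpers for the pixel/DPI mapping described above.
// Assumes the Direct2D convention of 96 DIPs per inch, so on a 144-DPI
// display one DIP is 1.5 physical pixels.
float dipsToPixels(float dips, float dpi) { return dips * dpi / 96.0f; }
float pixelsToDips(float px, float dpi)   { return px * 96.0f / dpi; }

// Mapping a top-left-origin pixel position into the y-up [-1, 1]
// Cartesian (clip) space:
void pixelToClip(float px, float py, float width, float height,
                 float& outX, float& outY) {
    outX = 2.0f * px / width - 1.0f;
    outY = 1.0f - 2.0f * py / height; // pixel y grows downward, so flip
}
```

The real work Adam alludes to is everything around this: rounding to pixel boundaries for crisp lines, per-monitor DPI, and keeping text metrics consistent across backends.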
Jan 07 2014
parent reply "Ola Fosheim Grøstad" writes:
On Tuesday, 7 January 2014 at 20:06:47 UTC, Adam Wilson wrote:
 Right, but Mike Parker has experience doing this, his opinion
(I don't know Mike, but it doesn't matter; I never care about technical opinions anyway. I care about arguments, so name-dropping has zero effect on me. Even Carmack has opinions that are wrong.)
 counts for quite a bit. His biggest point however is that the 
 high-level API should be completely independent of the 
 low-level API's.
That's not possible. The GPU pipeline defines a design space. For 2D graphics it consists of texture atlases, shaders and how to obtain "context coherency" and reduce the costs of overdraw. If you stay in that design space and do it well, you get great speed and can afford to have less efficient higher-level structures creating a framework that is easier to use. The more low level headroom you have, the more high level freedom you get. The more speed you waste on the lower levels the more constrained and annoying using the high level api becomes, because you have to take care to avoid lower level bottlenecks. Which is a good argument for retained mode at the cost of latency for high level frameworks.
 The high-level API describes what the user wants and it's up to 
 the graphics API implementor to get it right.
That is the scene-graph approach: Cocos2D, HTML, SVG, VRML, Open Inventor etc
 to the trouble of making OpenGL render 2D shapes in 3D space, 
 I've done that before, it's not easy. One of the more difficult 
 problems is converting 2D pixels into the Cartesian coordinates 
 system while accounting for DPI. It's doable, but it's more a
Well, not sure why DPI is a problem, but managing dynamic atlases (organizing multiple images on a single texture) in an optimal and transparent manner requires an infrastructure. Sure.
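A minimal sketch of that infrastructure, under the assumption of a simple shelf-packing heuristic (real atlas managers add padding, eviction, and smarter placement):

```cpp
// Minimal shelf packer for a dynamic texture atlas: images are placed
// left-to-right on horizontal "shelves"; a new shelf opens when the
// current row is full. This is only the core bookkeeping.
struct Atlas {
    int width, height;
    int shelfX = 0, shelfY = 0, shelfH = 0;

    // Returns true and the placement (x, y) if the rect fits.
    bool insert(int w, int h, int& x, int& y) {
        if (shelfX + w > width) {   // current shelf full: start a new one
            shelfY += shelfH;
            shelfX = 0;
            shelfH = 0;
        }
        if (shelfY + h > height || w > width) return false;
        x = shelfX;
        y = shelfY;
        shelfX += w;
        if (h > shelfH) shelfH = h;
        return true;
    }
};
```

Packing many small images onto one texture is what lets a 2D renderer batch draws into few state changes, which is the "context coherency" goal mentioned earlier in the thread.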
Jan 07 2014
parent reply Mike Parker <aldacron gmail.com> writes:
On 1/8/2014 5:35 AM, "Ola Fosheim Grøstad" 
<ola.fosheim.grostad+dlang gmail.com>" wrote:
 On Tuesday, 7 January 2014 at 20:06:47 UTC, Adam Wilson wrote:
 counts for quite a bit. His biggest point however is that the
 high-level API should be completely independent of the low-level API's.
That's not possible. The GPU pipeline defines a design space. For 2D graphics it consists of texture atlases, shaders and how to obtain "context coherency" and reduce the costs of overdraw. If you stay in that design space and do it well, you get great speed and can afford to have less efficient higher-level structures creating a framework that is easier to use. The more low level headroom you have, the more high level freedom you get.
It is very much possible. There are a number of high-level graphics APIs out there designed in just such a way, many of which predate shader APIs. Java2D is a prominent one. For the sort of package we're discussing here, you should never define the high-level API in terms of low-level API features. Shaders shouldn't even enter into the equation. That's the sort of backend implementation detail that users of the API shouldn't have to worry about. What you're talking about is fine when you have the luxury of supporting a limited number of platforms, or are working in a specific application space (games, for instance). In those cases, performance is going to trump generality and you can model your API more closely to the hardware. But for a general-purpose graphics API that needs to be available on as many devices as possible, that's not going to work. And anything that's going to be considered for inclusion into Phobos probably should favor generality over performance. Ideally, the default backend should be software, so that the API can be used even when graphics hardware is not available (though I'm not saying that's a realistic target to start out with). I realize that we're in an age where GPU programming is becoming more common and even cell phones have good hardware acceleration, but another consideration is the target audience. If you want an API that a non-graphics programmer can more easily get up to speed with, then that's another reason that you can't let the low-level influence the design. The closer you get to the details, the more difficult it is to learn. It's quite possible to put together a reasonably performant, easy-to-use, general purpose graphics API that can take advantage of the best renderer on a given platform without letting the low-level details leak into the high-level design.
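In outline, the separation described above might look like this (hypothetical names, not a proposed Aurora interface): the high-level surface speaks only in shapes and colours, and each platform plugs a renderer (Direct2D, OpenGL, software, ...) in behind it.

```cpp
#include <memory>
#include <string>

// The high-level API never mentions shaders, textures or pipelines;
// those live behind the Renderer contract, chosen per platform.
struct Color { float r, g, b, a; };

class Renderer {                       // backend contract
public:
    virtual ~Renderer() = default;
    virtual void fillCircle(float cx, float cy, float radius, Color c) = 0;
    virtual std::string name() const = 0;
};

class SoftwareRenderer : public Renderer {
public:
    void fillCircle(float, float, float, Color) override {
        // rasterize on the CPU; always available as a fallback
    }
    std::string name() const override { return "software"; }
};

class Canvas {                         // what application code sees
public:
    explicit Canvas(std::unique_ptr<Renderer> r) : backend(std::move(r)) {}
    void circle(float cx, float cy, float radius, Color c) {
        backend->fillCircle(cx, cy, radius, c);
    }
    std::string backendName() const { return backend->name(); }
private:
    std::unique_ptr<Renderer> backend;
};
```

Application code calls `canvas.circle(...)` and never learns whether a shader or a CPU rasterizer did the work, which is exactly the property that keeps the high-level API portable.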
Jan 07 2014
next sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2014-01-08 03:38, Mike Parker wrote:
 On 1/8/2014 5:35 AM, "Ola Fosheim Grøstad"
 <ola.fosheim.grostad+dlang gmail.com>" wrote:
 On Tuesday, 7 January 2014 at 20:06:47 UTC, Adam Wilson wrote:
 counts for quite a bit. His biggest point however is that the
 high-level API should be completely independent of the low-level API's.
That's not possible. The GPU pipeline defines a design space. For 2D graphics it consists of texture atlases, shaders and how to obtain "context coherency" and reduce the costs of overdraw. If you stay in that design space and do it well, you get great speed and can afford to have less efficient higher-level structures creating a framework that is easier to use. The more low level headroom you have, the more high level freedom you get.
It is very much possible. There are a number of high-level graphics APIs out there designed in just such a way, many of which predate shader APIs. Java2D is a prominent one. For the sort of package we're discussing here, you should never define the high-level API in terms of low-level API features. Shaders shouldn't even enter into the equation. That's the sort of backend implementation detail that users of the API shouldn't have to worry about. What you're talking about is fine when you have the luxury of supporting a limited number of platforms, or are working in a specific application space (games, for instance). In those cases, performance is going to trump generality and you can model your API more closely to the hardware. But for a general-purpose graphics API that needs to be available on as many devices as possible, that's not going to work. And anything that's going to be considered for inclusion into Phobos probably should favor generality over performance.
I would say that we should try to build the API in many different levels and layers. The top layer would be for ease of use, to quickly get something showing on the display. If more performance is needed, it should be possible to access the lower levels of the API to get more control over what you need to do.
 Ideally, the default
 backend should be software, so that the API can be used even when
 graphics hardware is not available (though I'm not saying that's a
 realistic target to start out with).
Ideally the default backend should detect if graphics hardware is available or not at runtime and choose the best backend for that.
 I realize that we're in an age where GPU programming is becoming more
 common and even cell phones have good hardware acceleration, but another
 consideration is the target audience. If you want an API that a
 non-graphics programmer can more easily get up to speed with, then
 that's another reason that you can't let the low-level influence the
 design. The closer you get to the details, the more difficult it is to
 learn.

 It's quite possible to put together a reasonably performant,
 easy-to-use, general purpose graphics API that can take advantage of the
 best renderer on a given platform without letting the low-level details
 leak into the high-level design.
-- /Jacob Carlborg
Jan 07 2014
next sibling parent "Adam Wilson" <flyboynw gmail.com> writes:
On Tue, 07 Jan 2014 23:52:48 -0800, Jacob Carlborg <doob me.com> wrote:

 On 2014-01-08 03:38, Mike Parker wrote:
 On 1/8/2014 5:35 AM, "Ola Fosheim Grøstad"
 <ola.fosheim.grostad+dlang gmail.com> wrote:
 On Tuesday, 7 January 2014 at 20:06:47 UTC, Adam Wilson wrote:
 counts for quite a bit. His biggest point however is that the
 high-level API should be completely independent of the low-level API's.
That's not possible. The GPU pipeline defines a design space. For 2D graphics it consists of texture atlases, shaders and how to obtain "context coherency" and reduce the costs of overdraw. If you stay in that design space and do it well, you get great speed and can afford to have less efficient higher-level structures creating a framework that is easier to use. The more low-level headroom you have, the more high-level freedom you get.
It is very much possible. There are a number of high-level graphics APIs out there designed in just such a way, many of which predate shader APIs. Java2D is a prominent one. For the sort of package we're discussing here, you should never define the high-level API in terms of low-level API features. Shaders shouldn't even enter into the equation. That's the sort of backend implementation detail that users of the API shouldn't have to worry about.

 What you're talking about is fine when you have the luxury of supporting a limited number of platforms, or are working in a specific application space (games, for instance). In those cases, performance is going to trump generality and you can model your API more closely to the hardware. But for a general-purpose graphics API that needs to be available on as many devices as possible, that's not going to work. And anything that's going to be considered for inclusion into Phobos probably should favor generality over performance.
I would say that we should try to build the API in many different levels and layers. The top layer would be for ease of use, to quickly get something to show on the display. If more performance is needed it should be possible to access the lower levels of the API to get more control of what you need to do.
As useful as that may be, I want to keep the scope focused for now. A high-level API allows us the most flexibility in terms of implementation and user accessibility. We're not trying to be triple-A performant, so building out extra API levels for more performance is a low priority. Let's do something focused and do it well before we expand the scope. This project is already huge. :-)
 Ideally, the default
 backend should be software, so that the API can be used even when
 graphics hardware is not available (though I'm not saying that's a
 realistic target to start out with).
Ideally the default backend should detect if graphics hardware is available or not at runtime and choose the best backend for that.

 I realize that we're in an age where GPU programming is becoming more
 common and even cell phones have good hardware acceleration, but another
 consideration is the target audience. If you want an API that a
 non-graphics programmer can more easily get up to speed with, then
 that's another reason that you can't let the low-level influence the
 design. The closer you get to the details, the more difficult it is to
 learn.

 It's quite possible to put together a reasonably performant,
 easy-to-use, general purpose graphics API that can take advantage of the
 best renderer on a given platform without letting the low-level details
 leak into the high-level design.
--
Adam Wilson
IRC: LightBender
Aurora Project Coordinator
Jan 08 2014
prev sibling parent reply Mike Parker <aldacron gmail.com> writes:
On 1/8/2014 4:52 PM, Jacob Carlborg wrote:

 I would say that we should try to build the API in many different
 levels and layers. The top layer would be for ease of use to quickly get
 something to show on the display. If more performance is needed it
 should be possible to access the lower levels of the API to get more
 control of what you need to do.
It's only necessary to have an abstract renderer interface with concrete backend implementations at the lowest level. Then the user-facing API (std.gfx.geometry, std.gfx.svg, or whatever) operates through the renderer interface. For this sort of package, I don't believe the renderer interface should be exposed to the user at all. Doing so would greatly inhibit the freedom to refactor it down the road. And that freedom, I think, is important for long-term maintenance.
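A minimal D sketch of that layering; all names (Renderer, GLRenderer, Canvas) are invented for illustration and are not an actual Aurora or Phobos API:

```d
// Hypothetical sketch: the renderer interface stays private to the
// library; users only ever see the high-level types.
interface Renderer   // internal: free to refactor later
{
    void fillRect(float x, float y, float w, float h);
}

final class GLRenderer : Renderer
{
    void fillRect(float x, float y, float w, float h)
    {
        // would issue OpenGL calls here
    }
}

// Public, high-level API: expressed in drawing terms, not renderer terms.
struct Canvas
{
    private Renderer impl;  // backend chosen internally, never exposed

    void drawBar(float x, float height)
    {
        impl.fillRect(x, 0, 10, height);
    }
}
```

Because users never touch `Renderer`, the interface can be reshaped between releases without breaking any user code.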
 Ideally, the default
 backend should be software, so that the API can be used even when
 graphics hardware is not available (though I'm not saying that's a
 realistic target to start out with).
Ideally the default backend should detect if graphics hardware is available or not at runtime and choose the best backend for that.
By "default", I mean "fallback", for cases when there's no, or problematic, hardware acceleration available. As to which and how many hardware-accelerated backends ship along with that, that's very much open to debate: which versions of OpenGL to support out of the box, whether or not a D3D renderer should be included and, if so, which version, and so on.

Honestly, I'd prefer not to see this package in Phobos at all. That implies a number of constraints, both design time and run time, that would not be necessary if it were left as a third-party dub-enabled package. Anything in Phobos has to work where D works, be it on a headless server or a tweaked-out gaming rig, on an old system or a future one. Maximum portability and maintainability need to take priority over performance. If the graphics API can't work in all of those possible environments, then it has no business being part of the standard library.
Jan 08 2014
parent reply Jacob Carlborg <doob me.com> writes:
On 2014-01-08 09:32, Mike Parker wrote:

 It's only necessary to have an abstract renderer interface with concrete
 backend implementations at the lowest level. Then user-facing API
 (std.gfx.geometry, std.gfx.svg, or whatever) operates through the
 renderer interface. For this sort of package, I don't believe the
 renderer interface should be exposed to the user at all. Doing so would
 greatly inhibit the freedom to refactor it down the road. And that
 freedom, I think, is important for long-term maintenance.
I'm saying it would be nice to have access to the platform handles (and similar) when the high level API isn't enough.
 By "default", I mean "fallback", for cases when there's no, or
 problematic, hardware acceleration available. As to which and how many
 hardware-accelerated backends ship along with that, that's very much
 open to debate. Which versions of OpenGL to support out of the box,
 whether or not a D3D renderer should be included and, if so, which
 version, and so on.
I see.
 Honestly, I'd prefer not to see this package in Phobos at all. That
 implies a number of constraints, both design time and run time, that
 would not be necessary if it were left as a third-party dub-enabled
 package. Anything in Phobos has to work where D works, be it on a
 headless server or a tweaked-out gaming rig, or on an old system or a
 future one. Maximum portability and maintainability need to take
 priority over performance. If the graphics API can't work in all of
 those possible environments, then it has no business being part of the
 standard library.
I think you're exaggerating a bit. I wouldn't expect a graphics library to work if I don't have a screen and/or graphics card. Do you expect std.net/socket to work without a NIC?

I would like to see it as an official library distributed with D, but not necessarily included in Phobos.

--
/Jacob Carlborg
Jan 08 2014
parent reply "Mike Parker" <aldacron gmail.com> writes:
On Wednesday, 8 January 2014 at 09:34:20 UTC, Jacob Carlborg 
wrote:

 I'm saying it would be nice to have access to the platform 
 handles (and similar) when the high level API isn't enough.
Perhaps. But I would argue that if you need that sort of access then you should be using a more specialized library anyway. I'm under the impression that what we're discussing here is something simple and easy to use: something that can cover a wide range of use cases without accessing the low level, but that would be an impediment if you do need the low level.
 Honestly, I'd prefer not to see this package in Phobos at all. 
 That
 implies a number of constraints, both design time and run 
 time, that
 would not be necessary if it were left as a third-party 
 dub-enabled
 package. Anything in Phobos has to work where D works, be it 
 on a
 headless server or a tweaked-out gaming rig, or on an old 
 system or a
 future one. Maximum portability and maintainability need to 
 take
 priority over performance. If the graphics API can't work in 
 all of
 those possible environments, then it has no business being 
 part of the
 standard library.
I think you're exaggerating a bit. I wouldn't expect a graphics library to work if I don't have a screen and/or graphics card.
Rendering to a memory buffer to generate png images is a legitimate use case. If Phobos has a graphics API, I would expect that to be supported even when no gpu is present.
 Do you expect std.net/socket to work without a NIC?
That's different. A gpu is only required for *hardware accelerated* rendering. And I don't think Phobos should require it.
 I would like to see it as an official library distributed with 
 D, but not necessarily included in Phobos.
Whether it's distributed with the compiler as an optional package or simply maintained in the D organization at github and distributed through dub, either case is better than making it part of the standard lib, IMO. Then more liberty can be taken with the requirements.
Jan 08 2014
next sibling parent reply "Ola Fosheim Grøstad" writes:
On Wednesday, 8 January 2014 at 11:34:53 UTC, Mike Parker wrote:
 Rendering to a memory buffer to generate png images is a 
 legitimate use case. If Phobos has a graphics API, I would 
 expect that to be supported even when no gpu is present.
Yes, this is true, but that was not the goal stated at the start of the thread. The linked framework is a wrapper for a hodgepodge of graphics technologies that target real-time graphics (an internal framework that was developed to do graphics for advertising, I think).

A generic non-real-time graphics API that is capable of generating PDF, SVG and PNG would be quite useful in web services, for instance. But then it should be based on a graphics model that can be represented efficiently in PDF and SVG.

However, if you want interactive graphics, you enter a different domain. An engine that assumes that all geometry changes each frame is quite different from an engine that assumes that most graphics do not change beyond simple affine transforms.

If you decide that most surfaces do not change (beyond affine transforms) and want a portable graphics solution, you either write your own compositor (in D) on top of the common GPU model, or you use an engine which provides a hidden compositor (like SVG, and even Flash). With a compositor you can let your "non real time" graphics API write to surfaces that are used by that compositor. Thus, the real-time requirements for graphics primitives are much lower. But it is more work for the programmer than using a high-level retained-mode engine such as SVG.

However, the argument against shaders/GPU does not hold, I think. Using simple shaders (with your own restricted syntax) does not require a GPU. If you can parse it at compile time you should be able to generate D code for it, and you should be able to generate code for GL/DX at runtime quite easily (probably a few days of work).
Jan 08 2014
parent reply Mike Parker <aldacron gmail.com> writes:
On 1/8/2014 9:26 PM, "Ola Fosheim Grøstad" 
<ola.fosheim.grostad+dlang gmail.com> wrote:
 On Wednesday, 8 January 2014 at 11:34:53 UTC, Mike Parker wrote:
 Rendering to a memory buffer to generate png images is a legitimate
 use case. If Phobos has a graphics API, I would expect that to be
 supported even when no gpu is present.
Yes, this is true, but that was not the goal stated at the start of the thread. The linked framework is a wrapper for a hodgepodge of graphics technologies that target real-time graphics (an internal framework that was developed to do graphics for advertising, I think).

A generic non-real-time graphics API that is capable of generating PDF, SVG and PNG would be quite useful in web services, for instance. But then it should be based on a graphics model that can be represented efficiently in PDF and SVG.
I could have sworn I read somewhere in this thread that there was talk of including it in Phobos at some point. That's the perspective I've been arguing from. If it's fully intended to be separate from Phobos, then there's no need for any of this and the feature list could certainly be more specific.
 However, if you want interactive graphics, you enter a different domain.
 An engine that assumes that all geometry change for each frame is quite
 different from an engine that assumes that most graphics do not change
 beyond simple affine transforms.
I wouldn't expect any implementation, generic or otherwise, to assume mostly static geometry. You could bet that a simple graphics API in Phobos would be used for games by some and for generating pie charts by others. It's still possible to get a generic rendering system to handle that with decent performance. Yes, it makes for compromises in the backend that a more targeted renderer wouldn't need to make, but that's the price of genericity.
 However the argument against shaders/GPU does not hold, I think. Using
 simple shaders (with your own restricted syntax) does not require a GPU.
 If you can parse it at compile time you should be able to generate D
 code for it, and you should be able to generate code for GL/DX at
 runtime quite easily (probably a few days of work).
This is true. But assuming a) the library is going to be in Phobos, and therefore b) there is going to be a software backend, and c) there's a desire for feature parity between the software renderer and any hardware-accelerated backends, then custom shaders increase the complexity of the software implementation quite a bit. There are so many rules needed to guide the implementation in terms of render quality. Ugh!

That's a lot of assumptions, I know. If this is not going to be in Phobos and there's no pressing need for a software renderer then it's moot. In that case, the sky's the limit.
Jan 08 2014
next sibling parent Jacob Carlborg <doob me.com> writes:
On 2014-01-08 14:09, Mike Parker wrote:

 I could have sworn I read somewhere in this thread that there was talk
 of including it in Phobos at some point. That's the perspective I've
 been arguing from. If it's fully intended to be separate from Phobos,
 then there's no need for any of this and the feature list could
 certainly be more specific.
In the first post it says something about inclusion in Phobos. I interpreted the post like that as well.

--
/Jacob Carlborg
Jan 08 2014
prev sibling next sibling parent reply "Ola Fosheim Grøstad" writes:
On Wednesday, 8 January 2014 at 13:09:39 UTC, Mike Parker wrote:
 I could have sworn I read somewhere in this thread that there 
 was talk of including it in Phobos at some point. That's the 
 perspective I've been arguing from. If it's fully intended to 
 be separate from Phobos, then there's no need for any of this 
 and the feature list could certainly be more specific.
Well, the goals are very unclear to me, but I guess this discussion and the friction between differing assumptions will sort out what belongs in a standard library :-)

From my perspective Phobos should only provide building blocks that are generally useful for many different applications. I don't think a real-time graphics engine belongs in the standard distribution. Real-time engines tend to be short-lived.

I think the basic setup for each platform belongs there: the same tedious stuff that most apps on a platform have to deal with in order to get started with real-time work. Once you have a GL/DX context in your hands you can get application stuff going in D with C bindings, and a sample application (a graphical hello world) would probably be better than a complete framework.
 I wouldn't expect any implementation, generic or otherwise, to 
 assume mostly static geometry. You could bet that a simple 
 graphics API in Phobos would be used for games by some and for 
 generating pie charts by others. It's still possible to get a 
 generic rendering system to handle that with decent 
 performance. Yes, it makes for compromises in the backend that 
 a more targeted renderer wouldn't need to make, but that's the 
 price of genericity.
2D games, web browsers etc. use geometry that is mostly static between frames (although it can be hidden). So you can cache, and have a dirty flag on each surface: if the path data changes, you set the dirty flag and the engine recreates the cached surface. 3D is different because of the perspective effect, which forces you into a complete redraw every frame.
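A minimal D sketch of that dirty-flag idea; every name here is hypothetical and the rasterizer is stubbed out:

```d
// Hypothetical sketch of per-surface dirty-flag caching: the raster is
// rebuilt only when the path data has changed since the last frame.
struct Surface
{
    float[] pathData;     // stand-in for real vector path data
    ubyte[] cachedRaster; // reused across frames while clean
    bool dirty = true;

    void setPath(float[] p)
    {
        pathData = p;
        dirty = true;      // geometry changed: cache is stale
    }

    ubyte[] raster()
    {
        if (dirty)
        {
            cachedRaster = rasterize(pathData);
            dirty = false;
        }
        return cachedRaster; // affine transforms can still be applied cheaply
    }

    ubyte[] rasterize(float[] p) { return new ubyte[16]; } // stubbed renderer
}
```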
 This is true. But assuming a) the library is going to be in 
 Phobos and therefore b) there is going to be a software 
 backend, and c) there's a desire for feature parity between the 
 software renderer and any hardware-accelerated backends, then 
 custom shaders increase the complexity of the software 
 implementation quite a bit. There are so many rules needed to 
 guide the implementation in terms of render quality. Ugh!

 That's a lot of assumptions, I know. If this is not going to be 
 in Phobos and there's no pressing need for a software renderer 
 then it's moot. In that case, the sky's the limit.
I think you can have the building blocks in Phobos:

1. A path data structure that can be used for vector path manipulation (fully SVG/PostScript/PDF compatible). Useful both for rendering and if you want to do editing/transforms of file formats etc.

2. A simple D-compatible shader language with a parser and the ability to generate common shader languages (also the non-GL/DX ones used by various multimedia applications for movie effects). The shader language could be this simple: "r=src1*0.4; g=src1*0.2; b=src1*0.8; a=1.0;" It could be so simple that you could embed it in your D rendering loop verbatim, as D code.

3. A software path-based renderer in D that generates image masks, using (1++) as input.

4. A software compositor that can generate PNG and PDF based on (1) and (2), and SVG if you only use built-in shaders (the fill types supported by SVG).

5. A portable 2D compositor framework that basically takes byte masks from any source (could be a vector renderer) and images, and composes them using very simple shaders.

Creating a compositor that performs well with many small objects is work though, because you need to maintain a texture atlas and heuristics for how to group etc. You could probably configure a D-based compositor using templates, so I guess the hard part is figuring out what you need on the platform-specific level to support a compositor written fully in D.

If you go the compositor route, nothing prevents you from giving access to the GL/DX context before/after the compositor is done rendering, for those with advanced needs.

It is possible to implement 1, 2 and 3 before designing 4 and 5. And at the end of the day, 4 and 5 probably do not belong in the distribution, while 1, 2 and 3 might.
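As a rough illustration of point 2: a shader string that simple is already valid D, so a software backend could compile it in directly via a string mixin. The names `src1` and `Pixel` are just the post's example, not a real API:

```d
// Sketch only: the example shader text is itself valid D, so the
// software renderer gets the shader "for free" at compile time.
enum shader = "r = src1*0.4; g = src1*0.2; b = src1*0.8; a = 1.0;";

struct Pixel { float r, g, b, a; }

Pixel shade(float src1)
{
    float r, g, b, a;
    mixin(shader);        // shader source embedded verbatim as D code
    return Pixel(r, g, b, a);
}
```

The same string could also be fed to a small parser that emits GLSL/HLSL at runtime for the hardware-accelerated backends, as suggested above.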
Jan 08 2014
parent reply Mike Parker <aldacron gmail.com> writes:
On 1/8/2014 11:05 PM, "Ola Fosheim Grøstad" 
<ola.fosheim.grostad+dlang gmail.com>" wrote:

 I wouldn't expect any implementation, generic or otherwise, to assume
 mostly static geometry. You could bet that a simple graphics API in
 Phobos would be used for games by some and for generating pie charts
 by others. It's still possible to get a generic rendering system to
 handle that with decent performance. Yes, it makes for compromises in
 the backend that a more targeted renderer wouldn't need to make, but
 that's the price of genericity.
2D games, web browsers etc. use geometry that is mostly static between frames (although it can be hidden). So you can cache, and have a dirty flag on each surface: if the path data changes, you set the dirty flag and the engine recreates the cached surface. 3D is different because of the perspective effect, which forces you into a complete redraw every frame.
Is this going to provide a 3D renderer as well? I thought we were just talking about 2D.
Jan 08 2014
parent reply "Ola Fosheim Grøstad" writes:
On Thursday, 9 January 2014 at 01:56:59 UTC, Mike Parker wrote:
 Is this going to provide a 3D renderer as well? I thought we 
 were just talking about 2D.
My point was that 2D is better suited for caching and reusing rasters from previous frames than 3D. So it benefits from not being fully immediate mode. Although if you cache by indexing the id of a shape object, it looks like immediate mode.

I have no idea if it is supposed to provide 3D. It was suggested as a future extension, I suppose?
Jan 08 2014
parent reply "Adam Wilson" <flyboynw gmail.com> writes:
On Wed, 08 Jan 2014 18:17:11 -0800, Ola Fosheim Grøstad 
<ola.fosheim.grostad+dlang gmail.com> wrote:

 On Thursday, 9 January 2014 at 01:56:59 UTC, Mike Parker wrote:
 Is this going to provide a 3D renderer as well? I thought we were just talking about 2D.
 My point was that 2D is better suited for caching and reusing rasters from previous frames than 3D. So it benefits from not being fully immediate mode. Although if you cache by indexing the id of a shape object, it looks like immediate mode.

 I have no idea if it is supposed to provide 3D. It was suggested as a future extension, I suppose?
Yes, it is supposed to provide 2D and 3D, and ideally have both in the same window.

--
Adam Wilson
IRC: LightBender
Aurora Project Coordinator
Jan 08 2014
parent reply "Ola Fosheim Grøstad" writes:
On Thursday, 9 January 2014 at 02:21:41 UTC, Adam Wilson wrote:
 Yes, it is supposed to provide 2D and 3D and ideally have both 
 in the same window.
Then we should decide what the 2D surface properties are:

1. Are they z-ordered, or do we use the painter's algorithm (drawing back to front)? Basically: how should 3D and 2D blend into each other?

2. Are they considered to be transparent over the entire surface, or are the non-transparent parts of them to be handled more efficiently?

3. Are they to be scaled as bitmaps, or are they going to have exact precision?

If it is desirable to skip the compositor complexity, I'd say try to go entirely for triangular geometry and shaders in the first version and treat 2D the same way you treat 3D, but have shape objects so that you don't have to constantly transfer meshes to the GPU.

Pre-rendering:

1. Build shape objects.
2. Build graphics contexts with various transforms and setups (colours, scaling).
3. Register shape objects with the engine, with an expected min-max LOD level (mesh resolution).
4. The engine preloads data to the GPU if desirable.

Per frame:

1. Get graphics contexts with various transforms and setups (colours, scaling).
2. Toss shape object ids to the engine through a graphics context.
3. The engine queues transparent parts and renders non-transparent parts immediately.
4. The engine sorts transparent parts and renders them.

It will probably not be very fast for 2D, but it is the better starting point if you later want to mix 2D and 3D. Besides, by the time the framework is ready, maybe most GPUs will have fast enough shaders for this to be the best way to do it for larger shapes (then you can special-case smaller shapes later by drawing them to textures).
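The per-frame flow above can be sketched in D roughly like this; every type and name is invented for illustration, and the actual draw call is stubbed:

```d
// Hypothetical sketch of the per-frame flow: opaque shapes render
// immediately; transparent ones are queued, depth-sorted, then drawn.
import std.algorithm : sort;

struct Shape { int id; float depth; bool transparent; }

struct Engine
{
    Shape[] transparentQueue;

    void submit(Shape s)
    {
        if (s.transparent)
            transparentQueue ~= s;  // step 3: queue transparent parts
        else
            draw(s);                // step 3: opaque parts drawn at once
    }

    void endFrame()
    {
        // step 4: back-to-front so alpha blending composes correctly
        sort!((a, b) => a.depth > b.depth)(transparentQueue);
        foreach (s; transparentQueue)
            draw(s);
        transparentQueue.length = 0;
    }

    void draw(Shape s) { /* would issue the GPU draw call here */ }
}
```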
Jan 09 2014
parent "ttt" <ttt.ttt aol.com> writes:
http://libagar.org/index.html.en
Jan 09 2014
prev sibling parent reply "Adam Wilson" <flyboynw gmail.com> writes:
On Wed, 08 Jan 2014 05:09:52 -0800, Mike Parker <aldacron gmail.com> wrote:

 On 1/8/2014 9:26 PM, "Ola Fosheim Grøstad"
 <ola.fosheim.grostad+dlang gmail.com> wrote:
 On Wednesday, 8 January 2014 at 11:34:53 UTC, Mike Parker wrote:
 Rendering to a memory buffer to generate png images is a legitimate
 use case. If Phobos has a graphics API, I would expect that to be
 supported even when no gpu is present.
Yes, this is true, but that was not the goal stated at the start of the thread. The linked framework is a wrapper for a hodgepodge of graphics technologies that target real-time graphics (an internal framework that was developed to do graphics for advertising, I think).

 A generic non-real-time graphics API that is capable of generating PDF, SVG and PNG would be quite useful in web services, for instance. But then it should be based on a graphics model that can be represented efficiently in PDF and SVG.
 I could have sworn I read somewhere in this thread that there was talk of including it in Phobos at some point. That's the perspective I've been arguing from. If it's fully intended to be separate from Phobos, then there's no need for any of this and the feature list could certainly be more specific.

 However, if you want interactive graphics, you enter a different domain. An engine that assumes that all geometry changes each frame is quite different from an engine that assumes that most graphics do not change beyond simple affine transforms.
 I wouldn't expect any implementation, generic or otherwise, to assume mostly static geometry. You could bet that a simple graphics API in Phobos would be used for games by some and for generating pie charts by others. It's still possible to get a generic rendering system to handle that with decent performance. Yes, it makes for compromises in the backend that a more targeted renderer wouldn't need to make, but that's the price of genericity.

 However, the argument against shaders/GPU does not hold, I think. Using simple shaders (with your own restricted syntax) does not require a GPU. If you can parse it at compile time you should be able to generate D code for it, and you should be able to generate code for GL/DX at runtime quite easily (probably a few days of work).
 This is true. But assuming a) the library is going to be in Phobos, and therefore b) there is going to be a software backend, and c) there's a desire for feature parity between the software renderer and any hardware-accelerated backends, then custom shaders increase the complexity of the software implementation quite a bit. There are so many rules needed to guide the implementation in terms of render quality. Ugh!

 That's a lot of assumptions, I know. If this is not going to be in Phobos and there's no pressing need for a software renderer then it's moot. In that case, the sky's the limit.
I mentioned the Phobos inclusion at the beginning because I knew that the topic would come up, but I really had no idea what direction it would take. So far, everyone here seems to agree with the idea that only certain portions that are not renderer dependent should be included in the standard library. In principle, I don't really have a problem with that; however, as someone pointed out, you don't really expect to be able to use std.socket without a NIC, so saying that it shouldn't be in just because not all machines have graphics output is a tad extreme.

At some point I think that the standard library should include some basic graphics rendering. Almost every language in common use today supports graphics at some level or, in the case of C++, is in the process of adding it. I lurk on the ISO C++ forums and the graphics work group is the most attended and discussed future proposal in the entire standard right now. People want this.

--
Adam Wilson
IRC: LightBender
Aurora Project Coordinator
Jan 08 2014
next sibling parent reply "Ola Fosheim Grøstad" writes:
On Wednesday, 8 January 2014 at 18:44:33 UTC, Adam Wilson wrote:
 C++, are in the process of adding it. I lurk on the ISO C++ 
 Forums and the graphics work-group is the most attended and 
 discussed future proposal in the entire standard right now. 
 People want this.
This group? https://groups.google.com/a/isocpp.org/forum/?fromgroups#!forum/graphics
Jan 08 2014
parent reply "Adam Wilson" <flyboynw gmail.com> writes:
On Wed, 08 Jan 2014 10:58:37 -0800, Ola Fosheim Grøstad 
<ola.fosheim.grostad+dlang gmail.com> wrote:

 On Wednesday, 8 January 2014 at 18:44:33 UTC, Adam Wilson wrote:
 C++, are in the process of adding it. I lurk on the ISO C++ Forums and the graphics work-group is the most attended and discussed future proposal in the entire standard right now. People want this.
 This group?

 https://groups.google.com/a/isocpp.org/forum/?fromgroups#!forum/graphics
That's the one.

--
Adam Wilson
IRC: LightBender
Aurora Project Coordinator
Jan 08 2014
parent reply "Ola Fosheim Grøstad" writes:
On Wednesday, 8 January 2014 at 19:18:03 UTC, Adam Wilson wrote:
 That's the one.
Ah, I like the way they are open to all options.

http://isocpp.org/files/papers/N3825.pdf

I think they are very far away from creating a standard though. With only 3 attendees, I think it is probably just some early ball-tossing intended to attract attention from the computer graphics community.
Jan 08 2014
parent reply "Adam Wilson" <flyboynw gmail.com> writes:
On Wed, 08 Jan 2014 11:26:58 -0800, Ola Fosheim Grøstad 
<ola.fosheim.grostad+dlang gmail.com> wrote:

 On Wednesday, 8 January 2014 at 19:18:03 UTC, Adam Wilson wrote:
 That's the one.
 Ah, I like the way they are open to all options.

 http://isocpp.org/files/papers/N3825.pdf

 I think they are very far away from creating a standard though. With only 3 attendees, I think it is probably just some early ball-tossing intended to attract attention from the computer graphics community.
Small meeting size probably also has to do with when and where the meeting was. Beyond that, all I can say is that not being part of a standard is freeing in certain respects, namely the laborious process of changing the standard. That said, Mr. Sutter is pushing for C++17 inclusion; we shall see.

--
Adam Wilson
IRC: LightBender
Aurora Project Coordinator
Jan 08 2014
parent reply "Ola Fosheim Grøstad" writes:
On Wednesday, 8 January 2014 at 19:51:31 UTC, Adam Wilson wrote:
 Small meeting size probably also has to be with when and where 
 the meeting was. Beyond that, all I can say is that not being 
 part of standard is freeing in certain respects. Namely the
Yes, and also that they are less likely to invent something new, but will be told to just make existing practice formal. Some discussions are worth looking at though, like this discussion of pixel coordinates:

https://groups.google.com/a/isocpp.org/forum/?fromgroups#!topic/graphics/ZEIOhsJrrUQ

Personally, I think that allowing multiple graphics contexts to the same surface is the better approach, so that you can scale it to be in the range ([0,1],[0,1]), ([-1,1],[-1,1]), ([0,width],[0,height]), etc. E.g.:

g = surface.getContext().resetScaleNormalizedCenter(); // normalized to [-1,1]
gwh = surface.getContext().resetScaleWidthHeight(); // vertices in pixel coordinates
gretina = gwh.clone().scale(2,2); // vertices in points coordinates

g.plot(-1,-1); // upper left corner
gwh.plot(0,0); // upper left corner

etc.
Jan 08 2014
parent "Ola Fosheim Grøstad" writes:
And the C++ guys are right when pointing to the HTML5 canvas. It 
is close enough to PostScript and well worth a look for 
those who don't know it. It is semi-immediate mode, in the sense 
that it allows implementations to retain a log of draw commands.

http://www.w3.org/TR/2dcontext/
Jan 08 2014
prev sibling parent reply Mike Parker <aldacron gmail.com> writes:
On 1/9/2014 3:42 AM, Adam Wilson wrote:

 the standard library. In principle, I don't really have a problem with
 that, however as someone pointed out, you don't really expect to be able
 to use std.socket without a NIC, so saying that it shouldn't be in just
 because not all machines have graphics output is tad extreme.
What I'm saying is that if it *is* in Phobos, then there should be a software renderer so that it *is* available to use on machines with no graphics card. If it isn't in Phobos, then I wouldn't see a software renderer as a requirement.
Jan 08 2014
parent "Adam Wilson" <flyboynw gmail.com> writes:
On Wed, 08 Jan 2014 17:55:20 -0800, Mike Parker <aldacron gmail.com> wrote:

 On 1/9/2014 3:42 AM, Adam Wilson wrote:

 the standard library. In principle, I don't really have a problem with
 that, however as someone pointed out, you don't really expect to be able
 to use std.socket without a NIC, so saying that it shouldn't be in just
 because not all machines have graphics output is a tad extreme.
What I'm saying is that if it *is* in Phobos, then there should be a software renderer so that it *is* available to use on machines with no graphics card. If it isn't in Phobos, then I wouldn't see a software renderer as a requirement.
That's fair, and I don't think anyone here would disagree. Software renderers are hard things to write though, so I don't expect we'll get one early on. But I'd like to include one at some point...

-- 
Adam Wilson
IRC: LightBender
Aurora Project Coordinator
Jan 08 2014
prev sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2014-01-08 12:34, Mike Parker wrote:

 Perhaps. But I would argue that if you need that sort of access then you
 should be using a more specialized library anyway. I'm under the
 impression that what we're discussing here is something simple and easy
 to use. That can cover a wide range of use cases without accessing the
 low level, but would be an impediment if you do need the low level.
The question is how much you're gaining by hiding it.
 Rendering to a memory buffer to generate png images is a legitimate use
 case. If Phobos has a graphics API, I would expect that to be supported
 even when no gpu is present.
Sure, right.
 That's different. A gpu is only required for *hardware accelerated*
 rendering. And I don't think Phobos should require it.
Right.
 Whether it's distributed with the compiler as an optional package or
 simply maintained in the D organization at github and distributed
 through dub, either case is better than making it part of the standard
 lib, IMO. Then more liberty can be taken with the requirements.
It makes a big difference. It's all about what you can expect to be available if a D compiler is present.

-- 
/Jacob Carlborg
Jan 08 2014
parent reply Mike Parker <aldacron gmail.com> writes:
On 1/8/2014 9:59 PM, Jacob Carlborg wrote:
 On 2014-01-08 12:34, Mike Parker wrote:

 Perhaps. But I would argue that if you need that sort of access then you
 should be using a more specialized library anyway. I'm under the
 impression that what we're discussing here is something simple and easy
 to use. That can cover a wide range of use cases without accessing the
 low level, but would be an impediment if you do need the low level.
The question is how much you're gaining by hiding it.
The first thing that comes to mind is protecting render state. If you expose the low-level details, you give the user the ability to change the render state. This is a big deal in OpenGL, given its state-based nature (though less so with 4.x as I understand it). If you expose that, then that means the backend has to constantly set and reset state in case the user has done anything independently, impacting performance for the majority of people who don't need that low-level access. As long as it's hidden, the implementation can manage state changes more efficiently.

Unless, of course, you don't expose the lowest level (OpenGL) but instead wrap state management in an interface that you expose. Now you've added complexity to your interface just for a small subset of users. This becomes a big deal when implementing and maintaining multiple backends.

I really feel that a graphics API should choose between simplicity and power (high performance, custom special effects). This can help streamline the API and make design decisions in the interest of the target group. Trying to support both is only going to result in an API that's possibly harder to use and certainly more difficult to maintain.
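To make the state-management point concrete, here is a small D sketch (not Aurora code, just an assumed illustration): a backend that owns the render state can elide redundant state changes; if users can mutate state behind its back, it must conservatively re-set everything before every draw.

```d
// Illustrative only: a backend-owned state cache. "bindTexture" stands
// in for any expensive driver-level state change.
struct StateCache
{
    int boundTexture = -1;
    int applied;  // how many real driver calls we actually issued

    void bindTexture(int id)
    {
        if (id == boundTexture) return; // state unchanged: skip the call
        boundTexture = id;
        ++applied;
    }
}

void main()
{
    StateCache cache;
    foreach (tex; [1, 1, 1, 2, 2, 1])
        cache.bindTexture(tex);
    // Only the transitions 1 -> 2 -> 1 cost anything; redundant binds
    // are elided because nobody else can touch the state.
    assert(cache.applied == 3);
}
```

If the low level were exposed, the cache could not trust `boundTexture` and every draw would have to re-bind unconditionally.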
Jan 08 2014
parent "Ola Fosheim Grøstad" writes:
On Wednesday, 8 January 2014 at 13:22:21 UTC, Mike Parker wrote:
 as I understand it). If you expose that, then that means the 
 backend has to constantly set and reset state in case the user 
 has done anything independently,
If you allow callbacks during the rendering process, you should discourage them and put the burden of restoring state on the user; but you only need to do this when rendering transparent faces, and Vertex Array Objects help some. It is safe to allow callbacks before/after rendering solid faces and when done with the transparent ones, though.
Jan 08 2014
prev sibling parent reply "Ola Fosheim Grøstad" writes:
On Wednesday, 8 January 2014 at 02:38:11 UTC, Mike Parker wrote:
 It is very much possible. There are a number of high-level 
 graphics APIs out there designed in just such a way, many of 
 which predate shader APIs. Java2D is a prominent one.
Java2D is a Display Postscript wannabe. Direct2D is a Display Postscript wannabe. Quartz is a Display Postscript wannabe. Cairo is a Display Postscript wannabe. If you want a Display Postscript API, then say so. However, these implementations are probably incompatible when it comes to the lower level compositor, what level of access you have to the compositor, and so on.

So NO! You cannot just make a high level design without favouring one platform over another. You have to either ANALYZE all platforms and find a subset that performs well across the board OR pick a solution that is cross platform, like Cairo. You do not want inconsistent performance across platforms for an interactive API; that makes porting software time consuming and expensive.

SVG is the better choice though, because it is a retained mode API that actually is a standard, and it hides the compositor.
 For the sort of package we're discussing here, you should never 
 define the high-level API in terms of low-level API features. 
 Shaders shouldn't even enter into the equation. That's the sort 
 of backend implementation detail that users of the API 
 shouldn't have to worry about.
If you want access to the compositor you certainly will have to!
 model your API more closely to the hardware. But for a 
 general-purpose graphics API that needs to be available on as 
 many devices as possible, that's not going to work.
Blindly inventing your own Display Postscript and Compositor is most certainly going to make it non-interactive/not-implemented on a wide array of devices.
 It's quite possible to put together a reasonably performant, 
 easy-to-use, general purpose graphics API that can take 
 advantage of the best renderer on a given platform without 
 letting the low-level details leak into the high-level design.
No, it is most certainly not possible to come up with a PORTABLE design for INTERACTIVE graphics without taking the lower level layer into consideration. That will only work if you implement your own scanline-renderer. Which is slow.
Jan 08 2014
parent reply "Boyd" <gaboonviper gmx.net> writes:
 No, it is most certainly not possible to come up with a 
 PORTABLE design for INTERACTIVE graphics without taking the 
 lower level layer into consideration.

 That will only work if you implement your own 
 scanline-renderer. Which is slow.
------------------- While I don't know much about the details of rendering graphics, I have enough experience with developing libraries to know that you can abstract away pretty much anything. Why do you think interactive graphics would be any different?
Jan 08 2014
parent "Ola Fosheim Grøstad" writes:
On Wednesday, 8 January 2014 at 11:10:10 UTC, Boyd wrote:
 graphics. I have enough experience with developing libraries, 
 to know that you can abstract away pretty much anything.
This sentence makes no sense to me. Most 2D libraries implement Adobe's 2D model, which is embedded in Postscript, PDF and SVG (all defined by Adobe). This is not a God given model. Neither is having cubic beziers as a primitive. It is all because of Adobe being the industry leader and beziers being prevalent.
 Why do you think interactive graphics would be any different?
1. I don't agree that you can "abstract" anything without bias. There is no God given graphics model. 2. Because they are real time. There is a reason for why Flash is based on quadratic beziers and not cubic beziers, or even more complex (but better) spline bases.
Jan 08 2014
prev sibling parent "Adam Wilson" <flyboynw gmail.com> writes:
On Tue, 07 Jan 2014 03:52:04 -0800, Iain Buclaw <ibuclaw gdcproject.org> wrote:

 On 7 Jan 2014 10:20, "Dmitry Olshansky" <dmitry.olsh gmail.com> wrote:
 07-Jan-2014 12:30, Adam Wilson пишет:
 On Tue, 07 Jan 2014 00:05:35 -0800, Ola Fosheim Grøstad
 <ola.fosheim.grostad+dlang gmail.com> wrote:
 There is nothing technically wrong with DirectX on Windows
 and unlike OpenGL which requires manufacturer provided drivers, it's
 guaranteed to be available.
 Pardon, but this reads like citation of some old crap to me. And how would you use a GPU w/o manufacturer provided drivers? DX also builds on top of vendor specific drivers.
 I thought it was the other way round. As in vendors write drivers to interface specifically with DirectX on Windows, so Microsoft doesn't have to.
I apologize, late-night exhaustion mis-speak. What I meant to say is that unlike DirectX, which due to Aero and WinRT requires that drivers be provided that work with DirectX, Windows does not ship OpenGL in any form, drivers or APIs. Therefore the vendor has to ship the OpenGL APIs along with the OGL-compatible drivers, and not all do, since it's not required for Windows certification. Yes, nVidia and ATI do, and that covers the bulk, but it's not universal like DirectX is. On Windows you have absolute certainty of DX always being there; you can't make that assumption with OGL.

-- 
Adam Wilson
IRC: LightBender
Aurora Project Coordinator
Jan 07 2014
prev sibling next sibling parent reply Justin Whear <justin economicmodeling.com> writes:
On Sun, 05 Jan 2014 20:10:06 -0800, Adam Wilson wrote:
 
 
 If you are interested in helping with a Cinder like library for D and/or
 have code you'd like to contribute, let's start talking and see what
 happens.
I've been writing graphical applications using D and OpenGL on Linux for a few years now in my spare time, so I'd like to help out. Just last week I googled "alternatives to Cinder" looking for a library that had C bindings so that I could use it from D.
 
 The logical phases as I can see them are as follows, but please suggest
 changes:
 
 - Windowing and System Interaction (Including Keyboard/Mouse/Touch
 Input) - Basic Drawing (2D Shapes, Lines, Gradients, etc)
 - Image Rendering (Image Loading, Rendering, Modification, Saving, etc.)
 - 3D Drawing (By far the most complex stage, so we'll leave it for last)
If the ultimate goal is a high-level, idiomatic D library for people who are not necessarily experienced in CG, then shouldn't we start with the high-level interface and work down? That way the end-result drives the design and not the other way around. I understand that to be accepted into Phobos someday this library will need a minimal set of system dependencies, but in the short term it may be wise to simply wrap existing libraries like GLFW3 for low-level systems like windowing.

In my experience, successful products deliver the vision to the customers as quickly as possible, allowing the design to be refined. Replacing low-level components with native D implementations should be invisible to the end-users anyway, making them the least essential pieces.

I'd also second the suggestion to build everything on top of OpenGL and DirectX--virtually every device that has an interactive display these days has hardware acceleration in some form.
Jan 06 2014
parent "Ola Fosheim Grøstad" writes:
On Monday, 6 January 2014 at 16:52:50 UTC, Justin Whear wrote:
 If the ultimate goal is a high-level, idiomatic D library for 
 people who
 are not necessarily experienced in CG, then shouldn't we start 
 with the
 high-level interface and work down?  That way the end-result
I don't disagree with this, but in reality most graphical applications are simpler if you decouple the world-model from the visuals, and hence input from output, and generate the visuals from scratch for every frame. Cocos2D is a popular library that maintains graphical state for you, but if you want to decouple the model from rendering, you end up syncing two data structures, which in the end is more work.

I think most successful libraries are focused and do one thing really well, so suggesting dependencies:

CG BASIC
- vec: SSE tuples with swizzle
- geom2 depends on vec: basic geometry
- geom3 depends on vec (and geom2): 3D geometry
- spatialcollections depends on geom3: acceleration data structures

GPU
- gpu depends on vec: DX/GL abstraction with shader support
- textureloader depends on gpu: loading jpeg/png into gpu memory with caching
- texturerender depends on gpu, geom2, svgpath, image loader: render to texture

VG
- svgpath depends on geom2: implements all svg path primitives + transformations
- fontreader depends on geom2, svgpath: reading font+metadata + generating paths
- fieldfont depends on fontreader, svgpath: creates signed distance field textures
- simplepostscript depends on svgpath: for building pdf/drawing on canvas2D

OS
- input: cross platform solution for input for various devices (might look at marmalade and other cross platform engines for ideas)

SAMPLE ENGINES
- canvas3D depends on canvas2D, geom3, spatialcollections, texturerender/loader, input
- canvas2D... etc
Jan 06 2014
prev sibling next sibling parent reply "FreeSlave" <freeslave93 gmail.com> writes:
I'm not familiar with Cinder library yet (seems it does not 
support Linux, so it's not very interesting for me), but I 
suppose graphics library should provide at least two approaches 
to build graphic applications. The first one is something that 
SDL and SFML offer: user has to manually write cycle loop for 
event handling. The second one is more complicated: event loop is 
encapsulated by some Application class which automatically 
dispatches events to gui elements and provide signal-slot system 
to ease creation of application logic (like Qt library does). The 
second approach implementation may be built on the first one, but 
it should not be the only. Sometimes the first approach is more 
easy and it has no overhead of all these high-level gui 
abstractions.
Jan 06 2014
next sibling parent reply "Keesjan" <keesjanvanroeden gmail.com> writes:
Another lib to consider is www.cairographics.org
Herb Sutter wants to base the 2d stuff of c++ on it
http://lists.cairographics.org/archives/cairo/2013-December/024858.html
Jan 06 2014
next sibling parent reply "Ola Fosheim Grøstad" writes:
On Monday, 6 January 2014 at 18:34:53 UTC, Keesjan wrote:
 Another lib to consider is www.cairographics.org
I think Cairo only supports straight lines and cubic bezier curves in what appears to be a somewhat inefficient format, but the backend support is quite extensive.
Jan 06 2014
parent reply "qznc" <qznc web.de> writes:
On Monday, 6 January 2014 at 19:13:19 UTC, Ola Fosheim Grøstad 
wrote:
 On Monday, 6 January 2014 at 18:34:53 UTC, Keesjan wrote:
 Another lib to consider is www.cairographics.org
I think Cairo only supports straight lines and cubic bezier curves in what appears to be a somewhat inefficient format, but the backend support is quite extensive.
Cairo supports defining paths and then drawing/filling them. This is all you need for basic 2D images, no? Well, then you can embed other pictures/framebuffers and clip/transform them. There are no Photoshop-filter-like effects, though. What do you expect?

Cairo also has some naive text support, but you probably want to use Pango for text, which is closely integrated with Cairo.

I used Cairo (via GtkD) to produce pdf documents. The pdf and png backends are probably pretty unique for a 2D graphics library. Most just target a screen.
Jan 07 2014
parent reply "Ola Fosheim Grøstad" writes:
On Tuesday, 7 January 2014 at 12:11:49 UTC, qznc wrote:
 On Monday, 6 January 2014 at 19:13:19 UTC, Ola Fosheim Grøstad 
 Cairo supports defining paths and then drawing/filling them.
Yes, but it is pragmatic in the sense that it apparently makes approximations to arcs (circle/ellipse) and uses the more expensive cubic primitives for quad beziers. Efficient fonts are often specified as the cheaper quadratic beziers. Cubic beziers look better, but are more expensive. Maybe the backends turn the cubics back into quads, I dunno; I've just looked at the internal path representation.

SVG covers a lot more ground, though, including image fills, filter effects and animation.
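A side note on the cubic-vs-quadratic point: any quadratic bezier can be represented exactly as a cubic by degree elevation (c1 = q0 + 2/3*(q1 - q0), c2 = q2 + 2/3*(q1 - q2)), which is how a cubic-only path format can still carry quadratic outlines, at the cost of the heavier primitive. A small D check of that identity (illustrative code, not from any library):

```d
// Degree elevation: the quadratic (q0, q1, q2) equals the cubic
// (q0, c1, c2, q2) with c1 = q0 + 2/3(q1-q0), c2 = q2 + 2/3(q1-q2).
import std.math : isClose;

double quad(double q0, double q1, double q2, double t)
{
    immutable u = 1 - t;
    return u*u*q0 + 2*t*u*q1 + t*t*q2;
}

double cubic(double c0, double c1, double c2, double c3, double t)
{
    immutable u = 1 - t;
    return u*u*u*c0 + 3*t*u*u*c1 + 3*t*t*u*c2 + t*t*t*c3;
}

void main()
{
    // elevate the 1-D quadratic (0, 1, 0) to a cubic
    double q0 = 0, q1 = 1, q2 = 0;
    double c1 = q0 + 2.0/3.0 * (q1 - q0);
    double c2 = q2 + 2.0/3.0 * (q1 - q2);

    // the two curves agree at every sampled parameter value
    foreach (i; 0 .. 11)
    {
        immutable t = i / 10.0;
        assert(isClose(quad(q0, q1, q2, t),
                       cubic(q0, c1, c2, q2, t), 1e-9, 1e-9));
    }
}
```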
 I used Cairo (via GtkD) to produce pdf documents. The pdf and 
 png backends are probably pretty unique for a graphics 2D 
 library. Most just target a screen.
I agree that Cairo is cool.

I would personally rather see SVG in a standard library since that is actually the most common vector graphics file format (Inkscape++) and vector engine DOM: IE9+, Firefox, Chrome, Safari etc.

I've just looked at the Cinder API, and it looks like an internal library pushed to the public (and yes, it apparently is):

http://libcinder.org/docs/v0.8.5/hierarchy.html

If one wants a simple API for messing around with graphics then probably Cairo or Processing would be a better choice; they are at least very popular. Cinder is probably in the difficult intermediate position: not easy enough for most dabblers, and not powerful enough for those who have a strong interest in graphics.

If you want something that actually is standard, then the SVG DOM is the only choice, IMHO.
Jan 07 2014
parent reply "Adam Wilson" <flyboynw gmail.com> writes:
On Tue, 07 Jan 2014 04:29:58 -0800, Ola Fosheim Grøstad <ola.fosheim.grostad+dlang gmail.com> wrote:

 On Tuesday, 7 January 2014 at 12:11:49 UTC, qznc wrote:
 On Monday, 6 January 2014 at 19:13:19 UTC, Ola Fosheim Grøstad
 Cairo supports defining paths and then drawing/filling them.
 Yes, but it is pragmatic in the sense that it apparently makes approximations to arcs (circle/ellipse) and uses the more expensive cubic primitives for quad beziers. Efficient fonts are often specified as the cheaper quadratic beziers. Cubic beziers look better, but are more expensive. Maybe the backends turn the cubics back into quads, I dunno; I've just looked at the internal path representation.

 SVG covers a lot more ground, though, including image fills, filter effects and animation.

 I used Cairo (via GtkD) to produce pdf documents. The pdf and png backends are probably pretty unique for a 2D graphics library. Most just target a screen.
 I agree that Cairo is cool.

 I would personally rather see SVG in a standard library since that is actually the most common vector graphics file format (Inkscape++) and vector engine DOM: IE9+, Firefox, Chrome, Safari etc.

 I've just looked at the Cinder API, and it looks like an internal library pushed to the public (and yes, it apparently is):

 http://libcinder.org/docs/v0.8.5/hierarchy.html

 If one wants a simple API for messing around with graphics then probably Cairo or Processing would be a better choice; they are at least very popular. Cinder is probably in the difficult intermediate position: not easy enough for most dabblers, and not powerful enough for those who have a strong interest in graphics.
Well, that's not what we saw at GoingNative. Mr. Sutter challenged the attendees to code up a game using Cinder in less than 24 hours. Over two dozen people who'd never seen Cinder before responded with games of varying complexity and completeness, but all ran and did game-like things. So I'd have to say that your theoretical appraisal of the API doesn't match the real outcomes we saw.

 If you want something that actually is standard, then the SVG DOM is the only choice, IMHO.
SVG has a rep for being verbose and slow to parse (it is basically XML after all). Yes, it's supported by all major browsers; I've just never seen anybody actually use it outside some small niches.

-- 
Adam Wilson
IRC: LightBender
Aurora Project Coordinator
Jan 07 2014
parent reply "Ola Fosheim Grøstad" writes:
On Tuesday, 7 January 2014 at 19:00:48 UTC, Adam Wilson wrote:
 SVG has a rep for being verbose and slow to parse (It is 
 basically XML after all). Yes it's supported by all major 
 browsers, I've just never seen anybody actually use it outside 
 some small niches.
It is included in your lovely cinder! SVG has a rich feature set, true enough, so the engines are still not complete, but they are getting there.
Jan 07 2014
parent reply "Adam Wilson" <flyboynw gmail.com> writes:
On Tue, 07 Jan 2014 11:23:47 -0800, Ola Fosheim Grøstad <ola.fosheim.grostad+dlang gmail.com> wrote:

 On Tuesday, 7 January 2014 at 19:00:48 UTC, Adam Wilson wrote:
 SVG has a rep for being verbose and slow to parse (It is basically XML after all). Yes it's supported by all major browsers, I've just never seen anybody actually use it outside some small niches.
 It is included in your lovely cinder!
True enough, but for the moment we are trying to keep the scope of the project manageable. If someone wants to add it, great! But until then we need to focus on what is required.

 SVG has a rich feature set, true enough, so the engines are still not complete, but they are getting there.

-- 
Adam Wilson
IRC: LightBender
Aurora Project Coordinator
Jan 07 2014
parent reply "Ola Fosheim Grøstad" writes:
On Tuesday, 7 January 2014 at 19:31:38 UTC, Adam Wilson wrote:
 True enough, but for the moment we are trying to keep the scope 
 of the project manageable. If someone wants to add it great! 
 But until then we need to focus on what is required.
Yes, that is true, and then we should define a set of applications which it will be suitable for, and what the basic model is: should it be immediate mode, retained mode or scene graph based?

High level graphic frameworks tend to lose momentum fast, even the good ones. Example: SGI's Open Inventor is actually a neat scene graph framework, and open source, but dead. SGI's GL survived because it was low level and immediate mode.

So, I really think D is better off providing the basics in phobos first, staying true to the virtue of providing independent modules that are focused:

- OS application abstraction: graphics context, input stream, audio playback
- generally useful vector path datatype compatible with phobos-collections and SVG
- vector/matrix library with competitive SSE performance and features such as clamping
Jan 07 2014
next sibling parent reply "Adam Wilson" <flyboynw gmail.com> writes:
On Tue, 07 Jan 2014 12:13:18 -0800, Ola Fosheim Grøstad <ola.fosheim.grostad+dlang gmail.com> wrote:

 On Tuesday, 7 January 2014 at 19:31:38 UTC, Adam Wilson wrote:
 True enough, but for the moment we are trying to keep the scope of the project manageable. If someone wants to add it great! But until then we need to focus on what is required.
 Yes, that is true, and then we should define a set of applications which it will be suitable for and what the basic model is: Should it be immediate mode, retained mode or scene graph based?
I think Immediate Mode offers the greatest flexibility and performance. It's also the easiest to implement since you don't have to build the mountains of data structures needed to retain the scene info.

 High level graphic frameworks tend to lose momentum fast, even the good ones. Example: SGI's Open Inventor is actually a neat scene graph framework, and open source, but dead. SGI's GL survived because it was low level and immediate mode.
Cinder, Processing, etc. are High-Level Immediate Mode; I think that offers the best chance of succeeding.

 So, I really think D is better off providing the basics in phobos first, staying true to the virtue of providing independent modules that are focused:

 - OS application abstraction: graphics context, input stream, audio playback
 - generally useful vector path datatype compatible with phobos-collections and SVG
 - vector/matrix library with competitive SSE performance and features such as clamping
To a large degree I agree with this. Getting some basics into Phobos is an excellent idea and most of the community seems to agree. The biggest problem I can see is that windows are usually tied to the graphics framework that implements them. There are numerous reasons for that, all of which make sense. For example, I don't know if it would be possible to pass the Phobos window to Qt or DirectX; it might be, but we'll have to be very careful in our API design if it is.

-- 
Adam Wilson
IRC: LightBender
Aurora Project Coordinator
Jan 07 2014
parent "Ola Fosheim Grøstad" writes:
On Tuesday, 7 January 2014 at 20:23:57 UTC, Adam Wilson wrote:
 I think Immediate Mode offers the greatest flexibility and 
 performance.
But only if you do culling yourself and then render all your butterflies first, then all your trees, then all your stones etc in order front to back... So it isn't really high level unless you only have a few objects to render, but it is the easiest to implement, I agree.
 To a large degree I agree with this. Getting some basics into 
 Phobos is an excellent idea and most of the community seems to 
 agree.
Yes, I think looking at 3-4 medium size applications that use the basics is a good starting point. Allows you to ask yourself: what framework would have made it easier to create these applications? What would a particular framework design have made more difficult?
 The biggest problem I can see is that windows are usually tied 
 to the graphics framework the implements them.
One could start with a very simple window abstraction, as pointed out earlier in the thread, and only allow the program to instantiate a subclass of that abstract window called "RenderWindow" or something like that, then throw an exception if the program tries to instantiate more than one. It could be made reasonably forward-compatible by keeping the functionality in the abstract window class to a bare minimum initially.
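That single-window idea could be sketched in D as follows; Window and RenderWindow are hypothetical names for illustration, not an actual Phobos proposal:

```d
// Sketch of the "one RenderWindow only" idea. The abstract base keeps a
// bare-minimum surface, so it can stay forward-compatible.
class Window
{
    // minimal shared abstraction: size, event hooks, etc. would go here
}

class RenderWindow : Window
{
    private static bool instantiated;

    this()
    {
        if (instantiated)
            throw new Exception("only one RenderWindow may exist");
        instantiated = true;
    }
}

void main()
{
    auto w = new RenderWindow();          // first window: fine
    bool threw;
    try { auto w2 = new RenderWindow(); } // second window: rejected
    catch (Exception) { threw = true; }
    assert(threw);
}
```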
Jan 07 2014
prev sibling parent reply "Ross Hays" <accounts rosshays.net> writes:
I also kind of think that having an immediate mode high level API 
is a bit pointless. It would allow some nice abstractions, but I 
would still favor retained mode and maybe some quick helper 
functions that could partially imitate immediate mode (though 
that should really be a lower priority).

I understand that immediate mode maybe is more intuitive if you 
are new to graphics, and that it may also be easier to write for 
Aurora, but I feel like in the long run it would be a detriment.

 So, I really think D is better off providing the basics in 
 phobos first, staying true to the virtue of providing 
 independent modules that are focused:

 - OS application abstraction: graphics context, input stream, 
 audio playback
 - generally useful vector path datatype compatible with 
 phobos-collections and SVG
 - vector/matrix library with competitive SSE performance and 
 features such as clamping
That is pretty much what I was thinking as well when I was talking about avoiding breaking things into some std.aurora package. Things that phobos could already stand to have such as vectors, audio, better netcode, and so on, may as well be integrated in phobos and then used in Aurora, rather than Aurora becoming some weird extension of phobos.
Jan 07 2014
parent "Ola Fosheim Grøstad" writes:
On Tuesday, 7 January 2014 at 23:08:42 UTC, Ross Hays wrote:
 I also kind of think that having an immediate mode high level 
 API is a bit pointless. It would allow some nice abstractions, 
 but I would still favor retained mode and maybe some quick 
 helper functions that could partially imitate immediate mode 
 (though that should really be a lower priority).
I've looked a bit at the Open Inventor spin-offs: VRML, X3D, Coin3D. The latter is actually a BSD-licensed update of the Open Inventor 2.1 implementation, but it is big and not really useful since it is C++. Except as a source for ideas, perhaps. I really wish I knew more about SGI's Performer, which was a performance oriented scene graph toolkit that could run in parallel.

One possible solution that makes a trade-off between ease-of-use and performance: if one accepts an extra latency of 1 frame, then a "buffered" mode with intelligent caching might be possible, as a programmer friendly effort. That is:

Frame 0:
  Thread A: collect draw calls and update accelerator structures, cache where possible
  Thread B: sleeping

Frame 1:
  Thread A: collect draw calls and update accelerator structures, cache where possible
  Thread B: flush optimized and reordered draw calls to GPU (collected in frame 0)
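The two-frame scheme above can be sketched as a pair of command buffers that swap roles each frame. This is a simplified single-threaded illustration of the buffering (all names assumed), not an implementation of the threading itself:

```d
// Sketch of the 1-frame-latency pipeline: "thread A" records draw
// commands into one buffer while "thread B" flushes the buffer that was
// recorded the previous frame; at end-of-frame the buffers swap roles.
import std.algorithm : swap;

struct CommandBuffer { string[] cmds; }

struct Pipeline
{
    CommandBuffer record;   // frame N: written by thread A
    CommandBuffer flushBuf; // frame N-1: drained by thread B
    string[] gpuLog;        // stands in for calls reaching the GPU

    void draw(string cmd) { record.cmds ~= cmd; }

    void endFrame()
    {
        gpuLog ~= flushBuf.cmds;   // B flushes last frame's commands
        flushBuf.cmds.length = 0;  // drained buffer becomes reusable
        swap(record, flushBuf);    // buffers trade roles for next frame
    }
}

void main()
{
    Pipeline p;
    p.draw("tree");
    p.endFrame();                  // frame 0: nothing reaches the GPU yet
    assert(p.gpuLog.length == 0);

    p.draw("stone");
    p.endFrame();                  // frame 1: frame 0's commands flush
    assert(p.gpuLog == ["tree"]);
}
```

The one-frame window between recording and flushing is exactly where reordering and caching of the draw calls could happen.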
 That is pretty much what I was thinking as well when I was 
 talking about avoiding breaking things into some std.aurora 
 package. Things that phobos could already stand to have such as 
 vectors, audio, better netcode, and so on, may as well be 
 integrated in phobos and then used in Aurora, rather than 
 Aurora becoming some weird extension of phobos.
Yes, I think there are many useful building blocks that can be useful in batch mode (for conversion tools) and for game-servers, and that can later be useful for building a render engine:

- spatial acceleration structures
- ray intersection tests
- object-object intersection tests
- various geometric algorithms (Graphics Gems series)
- mesh algorithms

A lot of this stuff could play nicely together with the collection APIs, I think. What would be nice to have in phobos is an idiomatic COLLADA DOM and helper functions, because it is a quite intricate file format. Maybe that could be a starting point for an engine.
Jan 07 2014
prev sibling parent "Zz" <Zz nospam.com> writes:
More interesting was the link that Herb Sutter provided.

Lightweight Drawing Library
http://isocpp.org/files/papers/n3791.html

Looks like he nailed the problem very well - it will be 
interesting to see what they come up with.



On Monday, 6 January 2014 at 18:34:53 UTC, Keesjan wrote:
 Another lib to consider is www.cairographics.org
 Herb Sutter wants to base the 2d stuff of c++ on it
 http://lists.cairographics.org/archives/cairo/2013-December/024858.html
Jan 07 2014
prev sibling next sibling parent reply "Adam D. Ruppe" <destructionator gmail.com> writes:
One of the things I did with my files was to have a separate 
event loop module which could be used by anything, in addition to 
their own event loop things (especially since I never implemented 
the other module on Windows or even other POSIXes yet; it is 
Linux only, but that's just due to my implementation.)

The separate event loop is nice because integrating loops is a 
bit of a pain. What mine does is pass type info for a single 
argument to a central thing:

import ev = arsd.eventloop;

// two separate libraries that can use the same loop
import simpledisplay;
import terminal;

struct MyEvent { } // a custom event

void main() {
     auto window = new SimpleWindow();
     auto terminal = Terminal(ConsoleOutputMode.linear);
     auto terminalInput = RealTimeConsoleInput(&terminal,
         ConsoleInputFlags.raw | ConsoleInputFlags.allInputEvents);

     addListener((InputEvent event) {
          // input from the terminal
     });
     addListener((KeyEvent event) {
          // key info from the gui window
          send(MyEvent()); // send the custom event
     });
     addListener((MyEvent event) {
         // a custom event
     });

     ev.loop();
}

and so on. Then the higher level things can build on this to do 
whatever - minigui.d for example uses a javascript style 
item.addEventListener thingy.
Jan 06 2014
parent "Ola Fosheim Grøstad" writes:
On Monday, 6 January 2014 at 19:44:07 UTC, Adam D. Ruppe wrote:
 The separate event loop is nice because integrating loops is a 
 bit of a pain.
Yes, and you might want to have multiplayer or peer-to-peer over 
bluetooth/network also. So if all interaction I/O provides the same 
interface you might be able to get cleaner code.
 and so on. Then the higher level things can build on this to do 
 whatever - minigui.d for example uses a javascript style 
 item.addEventListener thingy.
Interestingly, Dart made that more uniform by using a stream 
abstraction:

abstract StreamSubscription<T> listen(void onData(T event),
    {Function onError, void onDone(), bool cancelOnError})

So you can do "item.onClick.listen( (e){dosomething(e);} )".
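A similarly uniform listener interface is easy to sketch in D with a small generic struct (hypothetical names; not Dart's API or any existing D library):

```d
// A minimal event-stream type: subscribers register with listen(),
// and emit() delivers an event to every subscriber in order.
struct Stream(T)
{
    private void delegate(T)[] handlers;

    void listen(void delegate(T) onData)
    {
        handlers ~= onData;
    }

    void emit(T event)
    {
        foreach (h; handlers)
            h(event);
    }
}

// A widget then exposes its events as stream-valued fields.
struct Button
{
    Stream!int onClick;  // payload simplified to an int for the sketch
}
```

With that, `item.onClick.listen((int e) { dosomething(e); });` reads almost exactly like the quoted Dart call.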
Jan 06 2014
prev sibling parent "Adam Wilson" <flyboynw gmail.com> writes:
On Mon, 06 Jan 2014 09:50:48 -0800, FreeSlave <freeslave93 gmail.com>  
wrote:

 I'm not familiar with the Cinder library yet (it seems it does not  
 support Linux, so it's not very interesting for me), but I suppose a  
 graphics library should provide at least two approaches to building  
 graphic applications. The first one is something that SDL and SFML  
 offer: the user has to manually write the event-handling loop. The  
 second one is more complicated: the event loop is encapsulated by some  
 Application class which automatically dispatches events to gui  
 elements and provides a signal-slot system to ease creation of  
 application logic (like the Qt library does). The second approach's  
 implementation may be built on the first one, but it should not be the  
 only one. Sometimes the first approach is easier and it has no  
 overhead from all these high-level gui abstractions.
I tend to agree that we need both, but in terms of API simplicity the 
second option is actually better. The first one will require some 
interesting and probably not idiomatic...

-- 
Adam Wilson
IRC: LightBender
Aurora Project Coordinator
Jan 06 2014
prev sibling next sibling parent reply "Ross Hays" <accounts rosshays.net> writes:
This is all a fantastic idea and seems like the kind of project 
that D really needs to keep improving (something new and large 
scale that will be in the standard library).

I do agree with everyone else that the order of implementation 
should be changed to something more along the lines of Window 
System -> 3D Graphics -> 2D Graphics, with other bits along the 
way. It seems natural to build the window system first so the 
other parts can be tested, and it also allows the API to be a bit 
more established before getting into the nitty-gritty of it all.

I have been working on making a OpenGL game engine of sorts in D 
since I started with it, and while I know this is not the same 
goal for this project, I would be glad to help as well. I must 
admit though, I am not entirely sure I understand the entire aim 
of Cinder if performance is not most important. I will have to 
read more about the library, but I would still like to help if it 
means improving the phobos library and D.

Unrelated: If this project is going to add things such as 
mathematical vectors and such into the standard library for D, 
that perhaps they should be included in the math package 
(std.math) rather than some new package (std.aurora). Just a 
thought, since they will be useful everywhere. Furthermore, the 
window system in general could be useful as a std.window (not 
std.windows) library, but I am not really sure about that one now 
that I think about it...

-Ross
Jan 06 2014
parent reply "Ola Fosheim Grøstad" writes:
On Monday, 6 January 2014 at 20:43:22 UTC, Ross Hays wrote:
 Unrelated: If this project is going to add things such as 
 mathematical vectors and such into the standard library for D, 
 that perhaps they should be included in the math package 
 (std.math) rather than some new package (std.aurora).
Yes, and it would be nice to have rich builtin swizzle support, which 
probably requires compiler support… So you can do:

pos.xyzw
pos.xyz1
pos.xyz0
pos.xxyy
pos.xy
color.rgba
color.rgb1
color.rgb0
color.argb

etc
 Just a thought, since they will be useful everywhere. 
 Furthermore, the window system in general could be useful as a 
 std.window (not std.windows) library, but I am not really sure 
 about that one now that I think about it...
What about having an OSApplication facade to a system-specific runtime 
with GPU capabilities? It could be SDL in the beginning. I think you 
can get a long way with just:

1. a single window 3D GPU context that can switch between 
fullscreen/not fullscreen
2. standard GPU stuff
3. memory handler (os telling you to back down, or that GPU resources 
have been lost)
4. native file requester
5. system specific main menubar

The single window 3D GPU context can later be just a special case for 
the real window system abstraction. That way you can delay the task of 
creating a window system abstraction.

Abstracting the window system is currently very difficult to do well 
because Apple, Microsoft and Google are trying to differentiate 
themselves by constantly morphing their feature set. It might be more 
stable in a few years… GTK/QT look outdated already.
Jan 06 2014
next sibling parent reply "David Nadlinger" <code klickverbot.at> writes:
On Monday, 6 January 2014 at 21:06:11 UTC, Ola Fosheim Grøstad 
wrote:
 Yes, and it would be nice to have rich builtin swizzle support, 
 which probably requires compiler support…
Probably not. Did you have a look at the existing D math libraries yet? 
IIRC even some of the D1 libraries already had solid swizzling support.

David
Jan 06 2014
next sibling parent reply "ponce" <contact gam3sfrommars.fr> writes:
On Monday, 6 January 2014 at 21:14:12 UTC, David Nadlinger wrote:
 On Monday, 6 January 2014 at 21:06:11 UTC, Ola Fosheim Grøstad 
 wrote:
 Yes, and it would be nice to have rich builtin swizzle 
 support, which probably requires compiler support…
Probably not. Did you have a look at the existing D math libraries yet? IIRC even some of the D1 libraries already had solid swizzling support. David
You can even do swizzling assignment nowadays:
https://github.com/p0nce/gfm/blob/master/math/gfm/math/vector.d#L305
Jan 06 2014
parent "Ross Hays" <accounts rosshays.net> writes:
On Monday, 6 January 2014 at 21:19:08 UTC, ponce wrote:
 On Monday, 6 January 2014 at 21:14:12 UTC, David Nadlinger 
 wrote:
 On Monday, 6 January 2014 at 21:06:11 UTC, Ola Fosheim Grøstad 
 wrote:
 Yes, and it would be nice to have rich builtin swizzle 
 support, which probably requires compiler support…
Probably not. Did you have a look at the existing D math libraries yet? IIRC even some of the D1 libraries already had solid swizzling support. David
You can even do swizzling assignment nowadays: https://github.com/p0nce/gfm/blob/master/math/gfm/math/vector.d#L305
I hadn't ever looked at the GFM library for D. That is really nice and 
pretty much how I'd expect it to be done; good to see that it's 
already done.
Jan 06 2014
prev sibling parent "Ola Fosheim Grøstad" writes:
On Monday, 6 January 2014 at 21:14:12 UTC, David Nadlinger wrote:
 On Monday, 6 January 2014 at 21:06:11 UTC, Ola Fosheim Grøstad 
 wrote:
 Yes, and it would be nice to have rich builtin swizzle 
 support, which probably requires compiler support…
Probably not. Did you have a look at the existing D math libraries yet?
I've only looked at what is available in the Phobos documentation so I 
wasn't aware of that, but it is nice to see that it is supported in 
libraries that are available today! :-)

You still need compiler support though, if you want performance (memory 
layout+backend)… I also would like to have '1' and '0' swizzle for 
convenience. I never understood why that was left out from other 
languages.
Jan 06 2014
prev sibling parent reply "Ross Hays" <accounts rosshays.net> writes:
On Monday, 6 January 2014 at 21:06:11 UTC, Ola Fosheim Grøstad 
wrote:
 On Monday, 6 January 2014 at 20:43:22 UTC, Ross Hays wrote:
 Unrelated: If this project is going to add things such as 
 mathematical vectors and such into the standard library for D, 
 that perhaps they should be included in the math package 
 (std.math) rather than some new package (std.aurora).
Yes, and it would be nice to have rich builtin swizzle support, which probably requires compiler support… So you can do: pos.xyzw pos.xyz1 pos.xyz0 pos.xxyy pos.xy color.rgba color.rgb1 color.rgb0 color.argb etc
With some templating and opDispatch I imagine this can be done. I 
already have something basic for this in a vector implementation I 
wrote a while back after asking about it: 
http://forum.dlang.org/thread/udkzrlwrvpgngelbvtlz forum.dlang.org
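A rough sketch of the opDispatch approach (simplified and hypothetical, not the gfm or any existing implementation; it also supports the '0'/'1' constant components requested above):

```d
struct Vec(int N)
{
    float[N] data;

    // Swizzle accessor: pos.xyz1, pos.xxyy, etc. The component name
    // is only length-checked at compile time in this sketch;
    // out-of-range components (e.g. .z on a Vec!2) fail at runtime.
    auto opDispatch(string s)() const
        if (s.length >= 2 && s.length <= 4)
    {
        Vec!(cast(int) s.length) r;
        foreach (i, c; s)
        {
            switch (c)
            {
                case 'x': r.data[i] = data[0]; break;
                case 'y': r.data[i] = data[1]; break;
                case 'z': r.data[i] = data[2]; break;
                case 'w': r.data[i] = data[3]; break;
                case '0': r.data[i] = 0; break;  // constant components
                case '1': r.data[i] = 1; break;
                default: assert(0, "bad swizzle component");
            }
        }
        return r;
    }
}
```

A real library would validate the component string in the template constraint so bad swizzles are rejected at compile time rather than at runtime.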
 What about having a OSApplication facade to a system-specific 
 runtime with GPU capabilities. It could be SDL in the 
 beginning. I think you can get a long way with just:

 1. a single window 3D GPU context that can switch between 
 fullscreen/not fullscreen
 2. standard GPU stuff
 3. memory handler (os telling you to back down, or that GPU 
 resources have been lost)
 4. native file requester
 5. system specific main menubar

 The single window 3D GPU context can later be just a special 
 case for the real window system abstraction. That way you can 
 delay the task of creating a window system abstraction.
That's basically what I have been doing as well. I have a 
platform.window object that basically just wraps some SDL into a nice 
interface using properties, but I only allow the one object accessed 
with platform.window rather than allowing users to create more windows 
(platform.window, game.platform.window technically, is just created 
when a game is created). I really do need to clean that code up and 
maybe just release it all...
Jan 06 2014
parent "Ola Fosheim Grøstad" writes:
On Monday, 6 January 2014 at 21:14:16 UTC, Ross Hays wrote:
 That's basically what I have been doing as well. I have a 
 platform.window object that basically just wraps some SDL into 
 a nice interface using properties, but I only allow the one 
 object accessed with platform.window rather than allowing users 
 to create more windows (platform.window (game.platform.window 
 technically) is just created when a game is created.
Yes, and you don't really need all that many properties or event 
listeners for a basic canvas window: width, height, onResize, onClose, 
onPreRedraw, onRedraw (realtime), fullscreen…?

If the file requester/menubar is optional then it should work out for 
most platforms too, with some opportunities for code sharing between 
them:

iOS: runtime in Objective-C++

Android NDK: runtime in C++
http://developer.android.com/tools/sdk/ndk/index.html

Windows Phone 8 Native Code: runtime in C++
http://msdn.microsoft.com/en-us/library/windowsphone/develop/jj681687(v=vs.105).aspx

Chrome/Pepper Native Client/webGL: runtime in C++
https://developers.google.com/native-client/dev/
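That short property list translates into a very small D interface; here is one possible shape (all names hypothetical, not a real Aurora API), with a dummy in-memory backend to show the contract:

```d
// The minimal canvas-window surface: size, fullscreen, and a few hooks.
interface CanvasWindow
{
    @property int width();
    @property int height();
    @property bool fullscreen();
    @property void fullscreen(bool on);

    void onResize(void delegate(int w, int h) handler);
    void onClose(void delegate() handler);
    void onRedraw(void delegate() handler);  // realtime redraw hook
}

// A fake backend, enough to exercise the interface without an OS.
class FakeWindow : CanvasWindow
{
    private int w = 640, h = 480;
    private bool fs;
    private void delegate(int, int)[] resizeHandlers;
    private void delegate()[] closeHandlers, redrawHandlers;

    @property int width() { return w; }
    @property int height() { return h; }
    @property bool fullscreen() { return fs; }
    @property void fullscreen(bool on) { fs = on; }

    void onResize(void delegate(int, int) handler) { resizeHandlers ~= handler; }
    void onClose(void delegate() handler) { closeHandlers ~= handler; }
    void onRedraw(void delegate() handler) { redrawHandlers ~= handler; }

    // Test helper: pretend the OS resized us.
    void simulateResize(int nw, int nh)
    {
        w = nw; h = nh;
        foreach (cb; resizeHandlers) cb(nw, nh);
    }
}
```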
 I really do need to clean that code up and maybe just release 
 it all...
:-)
Jan 06 2014
prev sibling next sibling parent "Joseph Cassman" <jc7919 outlook.com> writes:
On Monday, 6 January 2014 at 04:11:07 UTC, Adam Wilson wrote:
 [...]
 So with the above framework in mind, let's talk!
I think the Azure project from Mozilla can provide some design 
perspective for what you are trying to do. To get the performance they 
needed in composing a web page, Mozilla made their own stateless 
wrapper over Direct2D that avoided the performance issues they were 
facing as a result of using Cairo. Here are the relevant links:

https://blog.mozilla.org/joe/2011/04/26/introducing-the-azure-project/
https://wiki.mozilla.org/Platform/Features/AzureD2DCanvas
https://bugzilla.mozilla.org/show_bug.cgi?id=651858

Joseph
Jan 06 2014
prev sibling next sibling parent reply "Boyd" <gaboonviper gmx.net> writes:
I could definitely use something like this. I'm currently working 
on a GUI library, and I still could use a decent graphics 
back-end. I suspect Aurora could function in this capacity.

I would love to contribute, though my experience with graphics is 
mostly limited to Win32 GDI. However, I could probably help with 
testing it in the early stages.

Also, I'm wondering if you are planning to include (or at least 
support a possible implementation) perfect anti-aliasing and 
stuff like that. I know AntiGrain has been mentioned here, I've 
used it in the distant past for text rendering, and it was pretty 
cool to have that kind of quality. For a lot of applications, the 
performance won't be affected much by this, and in some 
applications quality is very important.

----------------
On Monday, 6 January 2014 at 04:11:07 UTC, Adam Wilson wrote:
 Hello Fellow D Heads,

 Recently, I've been working to evaluate the feasibility and 
 reasonability of building out a binding to Cinder in D. And 
 while it is certainly feasible to wrap Cinder, that a binding 
 would be necessarily complex and feel very unnatural in D.

 So after talking it over with Walter and Andrei, we feel that, 
 while we like how Cinder is designed and would very much like 
 to have something like it available in D, wrapping Cinder is 
 not the best approach in the long-term.

 With that in mind, we would like to start a discussion with 
 interested parties about building a graphics library in the 
 same concept as Cinder, but using an idiomatic D implementation 
 from the ground up. Walter has suggested that we call it 
 Aurora, and given the visual connotations associated with that 
 name, I think it is most appropriate for this project.

 I know that the community has worked through a few of the 
 problems involved. For example, I can't remember who wrote it, 
 but I've seen a module floating around that can create a window 
 in a cross-platform manner, and I know Mike Parker has been 
 heavily involved in graphics for D. And no discussion of 
 graphics would be complete without Manu, whose input Walter, 
 Andrei, and I would greatly appreciate.

 I want to point out that while Cinder will be the design 
 template, the goal here is to use D to it's maximum potential. 
 I fully expect that what we end up with will be quite different 
 than Cinder.

 Due to the scope of the project I think it would be best to 
 execute the project in stages. This will allow us to deliver 
 useful chunks of working code to the community. Although I 
 haven't yet heard anything on the subject, I would assume that 
 once Aurora reaches an acceptable quality bar it would be a 
 candidate for inclusion in Phobos, as such I would like to 
 approach the design as if that were the end goal.

 The logical phases as I can see them are as follows, but please 
 suggest changes:

 - Windowing and System Interaction (Including 
 Keyboard/Mouse/Touch Input)
 - Basic Drawing (2D Shapes, Lines, Gradients, etc)
 - Image Rendering (Image Loading, Rendering, Modification, 
 Saving, etc.)
 - 3D Drawing (By far the most complex stage, so we'll leave it 
 for last)

 Here are a couple of things that Aurora is not intended to be:
 - Aurora is not a high-performance game engine. The focus is on 
 making a general purpose API  that is accessible to 
 non-graphics programmers. That said, we don't want to purposely 
 ruin performance and any work and guidance on that aspect will 
 be warmly welcomed.
 - Aurora is not a GUI library. Aurora is intended as a creative 
 graphics programming library in the same concept as Cinder. 
 This means that it will be much closer to game's graphics 
 engine, in terms of design and capability, than a UI library; 
 therefore we should approach the design from that standpoint.

 My personal experience in graphics programming is almost 
 completely with DirectX and Windows so I would be happy to work 
 on support for that platform. However, we need to support many 
 other platforms, and I know that there are others in the 
 community have the skills needed, your help would be invaluable.

 If you are interested in helping with a Cinder like library for 
 D and/or have code you'd like to contribute, let's start 
 talking and see what happens.

 While I do have some ideas about how to design the library, I 
 would rather open the floor to the community first to see what 
 our combined intellect has to offer as I don't want to unduly 
 influence the ideas generated here. The idea is to build the 
 best technical graphics library that we can, not measure egos.

 So with the above framework in mind, let's talk!
Jan 08 2014
next sibling parent "ponce" <contact gam3sfrommars.fr> writes:
On Wednesday, 8 January 2014 at 10:57:46 UTC, Boyd wrote:
 I would love to contribute, though my experience with graphics 
 is mostly limited to Win32 GDI. However, I could probably help 
 with testing it in the early stages.

 Also, I'm wondering if you are planning to include(or at least 
 support a possible implementation) perfect anti-aliasing and 
 stuff like that. I know AntiGrain has been mentioned here, I've 
 used it in the distant past for text rendering, and it was 
 pretty cool to have that kind of quality. For a lot of 
 applications, the performance won't be affected much by this, 
 and in some applications quality is very important.
It would be great to have an AGG-like software renderer. It would also 
be a good showcase for D's template capabilities. Also, software 
rendering is expected to run identically in all cases, with no driver 
dependency, which makes it a huge plus for some use cases like audio 
plugins.
Jan 08 2014
prev sibling parent reply "Adam Wilson" <flyboynw gmail.com> writes:
On Wed, 08 Jan 2014 02:57:45 -0800, Boyd <gaboonviper gmx.net> wrote:

 I could definitely use something like this. I'm currently working on a  
 GUI library, and I still could use a decent graphics back-end. I suspect  
 Aurora could function in this capacity.

 I would love to contribute, though my experience with graphics is mostly  
 limited to Win32 GDI. However, I could probably help with testing it in  
 the early stages.

 Also, I'm wondering if you are planning to include(or at least support a  
 possible implementation) perfect anti-aliasing and stuff like that. I  
 know AntiGrain has been mentioned here, I've used it in the distant past  
 for text rendering, and it was pretty cool to have that kind of quality.  
 For a lot of applications, the performance won't be affected much by  
 this, and in some applications quality is very important.
I've been looking at AGG, and to me the biggest problem is the license. 
It would be difficult to use in commercial scenarios that D itself is 
perfectly safe in. As useful as the GPL is, I don't think it belongs in 
a library project.

-- 
Adam Wilson
IRC: LightBender
Aurora Project Coordinator
Jan 08 2014
parent reply David Gileadi <gileadis NSPMgmail.com> writes:
On 1/8/14, 11:28 AM, Adam Wilson wrote:
 I've been looking at AGG, and to me the biggest problem is the license.
 It would it difficult to use in commercial scenarios that D itself is
 perfectly safe in. As useful as the GPL is, I don't think it belongs in
 a library project.
I think if you're willing to use version 2.4 then you get a much more permissive license, no? That's how I read http://www.antigrain.com/license/index.html anyway...
Jan 08 2014
next sibling parent reply "Adam Wilson" <flyboynw gmail.com> writes:
On Wed, 08 Jan 2014 10:39:08 -0800, David Gileadi <gileadis nspmgmail.com>  
wrote:

 On 1/8/14, 11:28 AM, Adam Wilson wrote:
 I've been looking at AGG, and to me the biggest problem is the license.
 It would it difficult to use in commercial scenarios that D itself is
 perfectly safe in. As useful as the GPL is, I don't think it belongs in
 a library project.
I think if you're willing to use version 2.4 then you get a much more permissive license, no? That's how I read http://www.antigrain.com/license/index.html anyway...
Right, it will just force us to become responsible for maintaining our 
own fork of AGG. I'm not sure we should get into that business.

-- 
Adam Wilson
IRC: LightBender
Aurora Project Coordinator
Jan 08 2014
next sibling parent David Gileadi <gileadis NSPMgmail.com> writes:
On 1/8/14, 11:48 AM, Adam Wilson wrote:
 On Wed, 08 Jan 2014 10:39:08 -0800, David Gileadi
 <gileadis nspmgmail.com> wrote:

 On 1/8/14, 11:28 AM, Adam Wilson wrote:
 I've been looking at AGG, and to me the biggest problem is the license.
 It would it difficult to use in commercial scenarios that D itself is
 perfectly safe in. As useful as the GPL is, I don't think it belongs in
 a library project.
I think if you're willing to use version 2.4 then you get a much more permissive license, no? That's how I read http://www.antigrain.com/license/index.html anyway...
Right, it will just force us to become responsible for maintaining our own fork of AGG. I'm not sure we should get into that business.
Or we could use http://sourceforge.net/projects/agg/, which wikipedia says is a maintained fork of 2.4 (and how could wikipedia be wrong?).
Jan 08 2014
prev sibling parent reply "finalpatch" <fengli gmail.com> writes:
On Wednesday, 8 January 2014 at 18:49:58 UTC, Adam Wilson wrote:
 I think if you're willing to use version 2.4 then you get a 
 much more permissive license, no? That's how I read 
 http://www.antigrain.com/license/index.html anyway...
Right, it will just force us to become responsible for maintaining our own fork of AGG. I'm not sure we should get into that business.
The development of AGG pretty much stopped after the original author 
released 2.4. The 2.5 is no more than a license change (I remember I 
have compared the files). The fork on SourceForge, although considered 
maintained, contains only a few small changes. Right now the revision 
number of that repo is only about 90, and there hasn't been much 
happening in the repo over the years. I think if we pick up the 2.4 
version and convert it to idiomatic D, it would be a very good showcase 
of D's template capability.

The thing I like about AGG is that it is very portable (I have ported 
it to embedded microcontrollers in a matter of minutes). That is 
because all it requires is a pixel buffer and a C++ compiler. It is 
also very fast for a high-quality software renderer, so if extreme 
performance is not high on your priority list, AGG is a very good fit 
for your needs. And because it's a pure software renderer that works on 
pixel buffers, it's a good candidate to be included in Phobos.
Jan 08 2014
next sibling parent reply "Adam Wilson" <flyboynw gmail.com> writes:
On Wed, 08 Jan 2014 15:11:33 -0800, finalpatch <fengli gmail.com> wrote:

 On Wednesday, 8 January 2014 at 18:49:58 UTC, Adam Wilson wrote:
 I think if you're willing to use version 2.4 then you get a much more  
 permissive license, no? That's how I read  
 http://www.antigrain.com/license/index.html anyway...
Right, it will just force us to become responsible for maintaining our own fork of AGG. I'm not sure we should get into that business.
The development of AGG has pretty much stopped after the original author released 2.4. The 2.5 is no more than just a license change (I remember I have compared the files). The fork on SourceForge, although considered maintained, it contains only a few small changes. Right now the revision number of that repo is only about 90, and there isn't much happening in the repo over the years. I think if we pick up the 2.4 version, convert it to idiomatic D, it would be very good showcase of D's template capability. The thing I like about AGG is that it is very portable (I have ported it to embedded micro controllers in a matter of minutes). That is because all it requires is just a pixel buffer and a C++ compiler. It is also very fast for a high quality software renderer, so if extreme performance is not high on your priority list, AGG is a very good fit for you needs. And also because it's a pure software renderer that works on pixel buffers, it's a good candidate to be included in Phobos.
Even with a full port of 2.4 to D it would still fall under the BSD 
3-Clause license, which is not Boost compliant IIRC. So it will never 
end up in Phobos. If I am missing something let me know, because a 
Phobos software renderer is a good idea.

-- 
Adam Wilson
IRC: LightBender
Aurora Project Coordinator
Jan 08 2014
parent reply "finalpatch" <fengli gmail.com> writes:
On Wednesday, 8 January 2014 at 23:29:59 UTC, Adam Wilson wrote:
 Even with a full port of 2.4 to D it would still fall under the 
 BSD 3-Clause license which is not Boost compliant IIRC. So it 
 will never end up in Phobos. If I am missing something let me 
 know, because a Phobos Software Renderer is a good idea.
Hi Adam,

We don't necessarily have to port AGG to D. Instead, I suggest we 
produce something that resembles its design (a set of very flexible 
components that can be put together through template instantiation at 
compile time), but in idiomatic D. With the power of D, the group 
wisdom of the community, and the lessons learned from AGG and other 
prior projects, it's very possible we can produce something even more 
impressive than AGG. Since it's a pure software renderer, the scope of 
the project will be a lot more manageable than GPU-based solutions.
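As a taste of what compile-time composition in the AGG spirit might look like, here is a minimal sketch (all names hypothetical; a real design would add clipping, scanline rasterization, and more pixel formats):

```d
// An 8-bit grayscale pixel format: one component, alpha-blended writes.
struct Gray8
{
    ubyte[] buf;
    int width;

    void blendPixel(int x, int y, ubyte value, ubyte alpha)
    {
        auto p = &buf[y * width + x];
        // Standard integer alpha blend: src*a + dst*(1-a).
        *p = cast(ubyte)((value * alpha + *p * (255 - alpha)) / 255);
    }
}

// The renderer takes the pixel format as a template parameter, so
// blendPixel calls are resolved at compile time with no virtual calls,
// which is the core of AGG's component design.
struct Renderer(PixelFormat)
{
    PixelFormat* pf;

    // Blend a horizontal span [x0, x1] at constant coverage.
    void hline(int x0, int x1, int y, ubyte value, ubyte alpha)
    {
        foreach (x; x0 .. x1 + 1)
            pf.blendPixel(x, y, value, alpha);
    }
}
```

Swapping in an RGBA or packed-565 pixel format would then be a one-line change at the instantiation site, with the compiler specializing the whole pipeline.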
Jan 08 2014
next sibling parent reply "Adam Wilson" <flyboynw gmail.com> writes:
On Wed, 08 Jan 2014 15:56:18 -0800, finalpatch <fengli gmail.com> wrote:

 On Wednesday, 8 January 2014 at 23:29:59 UTC, Adam Wilson wrote:
 Even with a full port of 2.4 to D it would still fall under the BSD  
 3-Clause license which is not Boost compliant IIRC. So it will never  
 end up in Phobos. If I am missing something let me know, because a  
 Phobos Software Renderer is a good idea.
Hi Adam, We don't necessarily have to port AGG to D. Instead, I suggest we produce something that resembles its design (a set of very flexible components that can be put together through template instantiation at compile time), but in idiomatic D. With the power of D, the group wisdom of the community, and the lessons learned from AGG and other prior projects, it's very possible we can produce something even more impressive than AGG. Since it's a pure software renderer, the scope of the project will be a lot more manageable than GPU based solutions.
Well, actually software renderers are terribly complicated beasts and 
so probably wouldn't reduce the actual scope. And they require a lot of 
mathematical knowledge that can be hard to come by; I certainly don't 
have it. So if someone is willing to start writing one in D we'd be 
happy to include support for it in Aurora. But I think we should 
continue with the GPU-based solutions because they are easier to work 
with and the knowledge base is more extensive.

-- 
Adam Wilson
IRC: LightBender
Aurora Project Coordinator
Jan 08 2014
parent "dajones" <dajones hotmail.com> writes:
"Adam Wilson" <flyboynw gmail.com> wrote in message 
news:op.w9d7xdpk707hn8 invictus.skynet.com...
 On Wed, 08 Jan 2014 15:56:18 -0800, finalpatch <fengli gmail.com> wrote:

 On Wednesday, 8 January 2014 at 23:29:59 UTC, Adam Wilson wrote:
 Even with a full port of 2.4 to D it would still fall under the BSD 
 3-Clause license which is not Boost compliant IIRC. So it will never 
 end up in Phobos. If I am missing something let me know, because a 
 Phobos Software Renderer is a good idea.
Hi Adam, We don't necessarily have to port AGG to D. Instead, I suggest we produce something that resembles its design (a set of very flexible components that can be put together through template instantiation at compile time), but in idiomatic D. With the power of D, the group wisdom of the community, and the lessons learned from AGG and other prior projects, it's very possible we can produce something even more impressive than AGG. Since it's a pure software renderer, the scope of the project will be a lot more manageable than GPU based solutions.
Well, actually software renderers are terribly complicated beasts and so probably wouldn't reduce the actual scope. And they require a lot of mathematical knowledge that can be hard to come by, I certainly don't have it. So if someone is willing to start writing one in D we'd be happy to include support for it in Aurora. But I think we should continue with the GPU based solutions because they are easier to work with and the knowledge-base is more extensive.
I really don't know why this algorithm is not better known, but it's 
very fast, good quality, and very simple to implement. I knocked up my 
own version in a couple of days (approx. 500 LOC for the rasterizer, 
and a bunch of asm for the line blitters). It has speed/quality 
comparisons with GDI+ and AGG on the website. There's C++ source code 
IIRC; don't know about the licence.

http://mlab.uiah.fi/~kkallio/antialiasing/
Jan 10 2014
prev sibling parent reply "Ola Fosheim Grøstad" writes:
On Wednesday, 8 January 2014 at 23:56:19 UTC, finalpatch wrote:
 we can produce something even more impressive than AGG. Since 
 it's a pure software renderer, the scope of the project will be 
 a lot more manageable than GPU based solutions.
You could try the REYES algorithm; it should give more accurate results 
than the scanline approach AGG uses (which I assume uses alpha blending 
for antialiasing, which produces inaccurate results where edges meet).

http://en.wikipedia.org/wiki/Reyes_rendering

Basically you partition edges into sub-pixel polygons and sort them. 
Then you calculate visibility and coverage before shading. You can also 
render stuff like fur with it because of the precision you might be 
able to get at the subpixel level.
Jan 08 2014
parent "Ola Fosheim Grøstad" writes:
On Thursday, 9 January 2014 at 00:20:00 UTC, Ola Fosheim Grøstad 
wrote:
 Basically you partition edges into sub-pixel polygons and sort 
 them. Then you calculate visibility and coverage before shading.
(note: this is not how REYES work for 3D, but I think it could be adapted with good results for non-realtime 2D this way.)
Jan 08 2014
prev sibling parent reply "Ola Fosheim Grøstad" writes:
On Wednesday, 8 January 2014 at 23:11:34 UTC, finalpatch wrote:
 The fork on SourceForge, although considered maintained, it 
 contains only a few small changes. Right now the revision 
 number of that repo is only about 90, and there isn't much 
 happening in the repo over the years. I think if we pick up the
Sadly, the author apparently died in November: 
http://www.microsofttranslator.com/bv.aspx?from=ru&to=en&a=http://rsdn.ru/forum/life/5377743.flat
Jan 08 2014
parent "Adam Wilson" <flyboynw gmail.com> writes:
On Wed, 08 Jan 2014 15:45:45 -0800, Ola Fosheim Grøstad 
<ola.fosheim.grostad+dlang gmail.com> wrote:

 On Wednesday, 8 January 2014 at 23:11:34 UTC, finalpatch wrote:
 The fork on SourceForge, although considered maintained, it contains 
 only a few small changes. Right now the revision number of that repo 
 is only about 90, and there isn't much happening in the repo over the 
 years. I think if we pick up the
 Sadly, the author apparently died in November: 
 http://www.microsofttranslator.com/bv.aspx?from=ru&to=en&a=http://rsdn.ru/forum/life/5377743.flat
Wow, that is sad! Kind of puts the whole project in an interesting 
spot...

-- 
Adam Wilson
IRC: LightBender
Aurora Project Coordinator
Jan 08 2014
prev sibling parent Jacob Carlborg <doob me.com> writes:
On 2014-01-08 19:39, David Gileadi wrote:

 I think if you're willing to use version 2.4 then you get a much more
 permissive license, no? That's how I read
 http://www.antigrain.com/license/index.html anyway...
Looks like it's less permissive than the Boost license due to point 2.

-- 
/Jacob Carlborg
Jan 09 2014
prev sibling next sibling parent reply "Tofu Ninja" <emmons0 purdue.edu> writes:
You mentioned keyboard, mouse, and touch. Something that can't be 
forgotten is pen input. It is becoming more and more common 
for laptops to not only come with touch, but also a 
stylus+digitizer for pen input. Just something to think about.
Jan 16 2014
parent reply Russel Winder <russel winder.org.uk> writes:
On Thu, 2014-01-16 at 08:07 +0000, Tofu Ninja wrote:
 You mentioned keyboard, mouse, and touch. Something that can't be 
 forgotten is pen input. It is becoming more and more common 
 for laptops to come not only with touch, but also with a 
 stylus+digitizer for pen input. Just something to think about.
Agreed. I have a Wacom which I use for all of my training courses.

-- 
Russel.
=============================================================================
Dr Russel Winder      t: +44 20 7585 2200   voip: sip:russel.winder ekiga.net
41 Buckmaster Road    m: +44 7770 465 077   xmpp: russel winder.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder
Jan 16 2014
parent "Adam Wilson" <flyboynw gmail.com> writes:
On Thu, 16 Jan 2014 03:22:08 -0800, Russel Winder <russel winder.org.uk>  
wrote:

 On Thu, 2014-01-16 at 08:07 +0000, Tofu Ninja wrote:
 You mentioned keyboard, mouse, and touch. Something that can't be
 forgotten is pen input. It is becoming more and more common
 for laptops to come not only with touch, but also with a
 stylus+digitizer for pen input. Just something to think about.
Agreed. I have a Wacom which I use for all of my training courses.
We'll try to support as much input as we can, but to some extent we'll be 
at the mercy of what the operating system can give us.

-- 
Adam Wilson
IRC: LightBender
Aurora Project Coordinator
Jan 16 2014
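The unified input handling discussed above could be sketched as a single tagged event type, so backends that lack a given device (say, no pen digitizer) simply never emit that event kind. This is only an illustration under invented names; none of these types exist in any real Aurora API.

```d
// Hypothetical sketch: one event type covering keyboard, mouse,
// touch, and pen input. A backend only constructs the kinds the
// operating system actually reports to it.
import std.variant : Algebraic;

struct KeyEvent   { uint keyCode; bool pressed; }
struct MouseEvent { int x, y; ubyte button; }
struct TouchEvent { int x, y; uint touchId; }
struct PenEvent   { int x, y; float pressure; float tiltX, tiltY; }

alias InputEvent = Algebraic!(KeyEvent, MouseEvent, TouchEvent, PenEvent);

void main()
{
    // A pen-capable backend might emit a pressure-carrying event:
    auto ev = InputEvent(PenEvent(10, 20, 0.5f, 0.0f, 0.0f));

    // Consumers inspect the kind; unsupported kinds just never arrive.
    assert(ev.peek!PenEvent !is null);
    assert(ev.peek!KeyEvent is null);
}
```

The point of the single `InputEvent` type is that application code written against it keeps working on platforms where some device classes are absent.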
prev sibling parent reply "Tofu Ninja" <emmons0 purdue.edu> writes:
Personally I would be disappointed if this didn't have any 
support for custom shaders, though I know it would be hard to do 
in an idiomatic cross-platform way.
Jan 17 2014
parent reply "Adam Wilson" <flyboynw gmail.com> writes:
On Fri, 17 Jan 2014 15:51:29 -0800, Tofu Ninja <emmons0 purdue.edu> wrote:

 Personally I would be disappointed if this didn't have any support for  
 custom shaders, though I know it would be hard to do in an idiomatic  
 cross-platform way.
Since shaders are GPU dependent and writing a high-level shading language 
is firmly outside the scope of Aurora, the only thing that makes sense is 
to have the programmer write their own shaders and have Aurora provide 
the proper API interfaces.

-- 
Adam Wilson
GitHub/IRC: LightBender
Aurora Project Coordinator
Jan 17 2014
parent "Tofu Ninja" <emmons0 purdue.edu> writes:
On Saturday, 18 January 2014 at 01:33:53 UTC, Adam Wilson wrote:
 On Fri, 17 Jan 2014 15:51:29 -0800, Tofu Ninja 
 <emmons0 purdue.edu> wrote:

 Personally I would be disappointed if this didn't have any 
 support for custom shaders, though I know it would be hard to 
 do in an idiomatic cross-platform way.
Since shaders are GPU dependent and writing a high-level shading language is firmly outside the scope of Aurora, the only thing that makes sense is to have the programmer write their own shaders and have Aurora provide the proper API interfaces.
As long as some kind of interface is exposed, something more advanced and portable can be added later down the road.
Jan 17 2014
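The arrangement Adam describes, where the programmer supplies backend-specific shader text and Aurora only provides the plumbing, could look something like the sketch below. Every name here (`Backend`, `ShaderSource`, `selectSource`) is invented for illustration; this is not any real Aurora interface.

```d
// Hypothetical sketch: Aurora defines the interface, the programmer
// supplies the shader source per backend (e.g. GLSL for OpenGL,
// HLSL for Direct3D). Aurora would pick whichever text matches the
// backend it is running on and hand it to the underlying API.
enum Backend { openGL, direct3D }

struct ShaderSource
{
    string[Backend] text;  // programmer-written shader source per backend
}

string selectSource(Backend active, ShaderSource src)
{
    // Return the source for the active backend, or null if the
    // programmer didn't provide one for it.
    return src.text.get(active, null);
}

void main()
{
    ShaderSource src;
    src.text[Backend.openGL] = "void main() { gl_FragColor = vec4(1); }";

    assert(selectSource(Backend.openGL, src) !is null);
    assert(selectSource(Backend.direct3D, src) is null);
}
```

This keeps the shading-language problem out of Aurora entirely, while still leaving room for the "something more advanced and portable" layered on top later, as suggested above.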