
digitalmars.D - What's holding ~100% D GUI back?

reply aberba <karabutaworld gmail.com> writes:
There have been attempts to implement a native D GUI toolkit, but it 
seems none gets community attention. The closest, DlangUI 
(https://code.dlang.org/packages/dlangui), seems to be no longer under 
active development... it's getting on for a year now.

Ketmar and others have been in talks about doing something 
towards that. What's happening now? What's holding a 100% D GUI back?
Nov 19 2019
next sibling parent reply aberba <karabutaworld gmail.com> writes:
On Tuesday, 19 November 2019 at 22:57:47 UTC, aberba wrote:
 There's been attempts to implement a native D GUI toolkit but 
 it seems none gets community attention. The closets...DlangUI 
 (https://code.dlang.org/packages/dlangui)... seems no more 
 under active development...getting to a year now.

 Ketmar and others have been in the talks about doing something 
 towards that. Whats happening now? What's holding 100% D GUI 
 back?
I've seen some C++ GUI attempts getting some success with few people working on them:

http://nanapro.org/en-us/
https://google.github.io/flatui/ (by Google, so I can't really say "few devs" there)
https://github.com/vurtun/nuklear
https://github.com/ocornut/imgui
Nov 19 2019
next sibling parent Ola Fosheim Grøstad <ola.fosheim.grostad gmail.com> writes:
On Tuesday, 19 November 2019 at 23:25:12 UTC, aberba wrote:
 I've seen some C++ GUI attempts getting some success with few 
 people working on them.
They all look like cool projects, but most of them seem to be immediate mode (low-level graphics is pushed to the screen more or less directly). So I would imagine that they are mostly useful for simple GUIs, or require more programmer effort than you get from typical retained-mode GUI frameworks (see the sketch at the end of this post).

Anyway, is it really important that all the code is in D? If so, why not port a retained-mode framework that already exists, like Skia/Flutter? Keep in mind that:

1. GUI APIs have to change constantly to stay competitive. Apple recently changed their API to SwiftUI, which is more declarative than the previous API (like some web UI frameworks). A rather big change IMO.

2. OS/hardware vendors quickly change graphics interfaces. E.g. OpenGL is no longer a thing on Apple, so you need a separate Metal engine.

You need a team of at least 10 skilled people that make it their primary hobby to stay relevant. Porting seems a much more likely option if the goal is to have something that is useful for production.
 https://github.com/vurtun/nuklear
Appears to have D bindings, though.
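For anyone who hasn't used both styles, here is a rough sketch in D of the difference between immediate mode and retained mode from the application's point of view. The types are made up for illustration; they are not the API of any actual library.

import std.stdio;

// Immediate mode (hypothetical API): the UI is re-declared every frame.
struct ImUI
{
    // Returns true when the user clicked the button this frame.
    bool button(string label) { writeln("draw button: ", label); return false; }
}

// Retained mode (hypothetical API): widgets are persistent objects.
class Button
{
    string label;
    void delegate() onClick;
    this(string label) { this.label = label; }
    void draw() { writeln("draw button: ", label); }
}

void main()
{
    // Immediate mode: the framework keeps almost no state; the application
    // re-describes the whole UI every frame and handles events inline.
    ImUI ui;
    foreach (frame; 0 .. 2)
        if (ui.button("Save"))
            writeln("saving...");

    // Retained mode: the framework owns a widget tree that persists between
    // frames; the application mutates it and wires up event callbacks.
    auto save = new Button("Save");
    save.onClick = { writeln("saving..."); };
    save.draw();
}

nuklear and Dear ImGui are in the first camp; gtk/Qt-style frameworks are in the second.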
Nov 19 2019
prev sibling parent reply Ola Fosheim Grøstad <ola.fosheim.grostad gmail.com> writes:
On Tuesday, 19 November 2019 at 23:25:12 UTC, aberba wrote:
 http://nanapro.org/en-us/
Thanks for sharing btw. I've now looked a bit more closely at the C++ GUIs you mentioned, and Nana seems to be the only one of these 4 that attempts to be a UI framework, and it appears to be slow moving and a bit weird. So it has a long way to go before taking on the alternatives (gtk etc). These:
 https://google.github.io/flatui/
 https://github.com/vurtun/nuklear
 https://github.com/ocornut/imgui
seem to require you to build your own graphics/input handling... which can be a lot of work if you don't have one already (i.e. mostly for games). Dear ImGui appears to receive some financial backing from game studios/Google. FlatUI is backed by Google.

I guess you could port Dear ImGui to D, which might be a good idea actually, but you would still be a long way from writing your own GUI application.
Nov 19 2019
next sibling parent Ola Fosheim Grøstad <ola.fosheim.grostad gmail.com> writes:
On Wednesday, 20 November 2019 at 07:52:47 UTC, Ola Fosheim 
Grøstad wrote:
 some financial backing from game studios/Google. FlatUI is 
 backed by Google.
Having done a bit more research, it appears that being supported by Google isn't enough. FPLbase, which FlatUI is using for rendering, does not seem to receive much love. But I found this fork:

https://github.com/LukasBanana/LLGL

which might be a low-level alternative worth evaluating if Skia is considered to be too high level. So there are interesting things going on, and it is moving relatively fast.

I think it would be a grave mistake not to build on what is done with C/C++ at the lower-level layers, because maintenance is just too expensive when you don't have a really large pool of applications building on it.

However, even with LLGL behind you, look at all the code you need just to display one gradient-shaded triangle on screen:

https://github.com/LukasBanana/LLGL/tree/master/examples/Cpp/HelloTriangle

Clearly... you want to stand on the shoulders of others to get a portable graphics layer rather than rolling it all on your own.
Nov 20 2019
prev sibling parent reply Paulo Pinto <pjmlp progtools.org> writes:
On Wednesday, 20 November 2019 at 07:52:47 UTC, Ola Fosheim 
Grøstad wrote:
 On Tuesday, 19 November 2019 at 23:25:12 UTC, aberba wrote:
 http://nanapro.org/en-us/
Thanks for sharing btw, I've now looked a bit more closely at the C++ GUIs you mentioned, and Nana seems to be the only one of these 4 that that attempts to be a UI framework, and it appears to be slow moving and a bit weird. So it has a long way to go before taking on the alternatives (gtk etc). These:
 https://google.github.io/flatui/
 https://github.com/vurtun/nuklear
 https://github.com/ocornut/imgui
seems to require you to build your own graphics/input handling... which can be a lot of work if you don't have one already (i.e. mostly for games). Dear ImGUI appears to receive some financial backing from game studios/Google. FlatUI is backed by Google. I guess you could port Dear ImGUI to D, might be a good idea actually, but you would still be a long way from writing your own GUI application.
FlatUI, like many things Google, is abandonware. It never went anywhere beyond a couple of NDK game samples.
Nov 20 2019
parent Ola Fosheim Grøstad <ola.fosheim.grostad gmail.com> writes:
On Wednesday, 20 November 2019 at 08:22:53 UTC, Paulo Pinto wrote:
 Flatui like many things Google is abandonware.

 It never went anywhere beyond a couple of NDK game samples.
FlatUI has not received updates in 2 years; the underlying layer received an update 6 months ago. It is quite minimal, but there are some ideas in there that one can borrow.

As far as C++ goes, I think there will be opportunities for much better UI APIs when stackless coroutines become available... but it will probably take a decade for library developers to figure out how to make good use of them.
Nov 20 2019
prev sibling next sibling parent Steven Schveighoffer <schveiguy gmail.com> writes:
On 11/19/19 5:57 PM, aberba wrote:
 There's been attempts to implement a native D GUI toolkit but it seems 
 none gets community attention. The closets...DlangUI 
 (https://code.dlang.org/packages/dlangui)... seems no more under active 
 development...getting to a year now.
Last commit was in August, so not quite dead, but I'd say that's your best shot at something. Seems pretty legit.

Having been around for the growth of gtk from GIMP to replace the X/Motif UI, I'd say it's quite a feat to have something working as well as DLangUI with the time/manpower there.

If I were to create a GUI-based app (I haven't in a long time), I'd probably look there.

-Steve
Nov 19 2019
prev sibling next sibling parent Adam D. Ruppe <destructionator gmail.com> writes:
On Tuesday, 19 November 2019 at 22:57:47 UTC, aberba wrote:
 Ketmar and others have been in the talks about doing something 
 towards that. Whats happening now? What's holding 100% D GUI 
 back?
I already have a 100% D GUI in my libs... it just isn't 100% complete because I haven't had a real need to continue working on it. Been doing a lot of web UIs instead of desktop, but I'll keep working on it... very slowly.
Nov 19 2019
prev sibling next sibling parent reply rikki cattermole <rikki cattermole.co.nz> writes:
For a 100% D GUI toolkit, we would need somebody who is knowledgeable 
about Unicode (text layout, e.g. BiDi) and font rasterization. These 
together are many man-years of work and quite massive.

But more realistically with using e.g. FreeType and HarfBuzz:

- Image library (started)
- Event loop (next on my list to do)
- Windowing
- Render pipeline
- Text rendering (part of render pipeline sort of)
- Widget rendering (I'm doing requirements gathering atm)
- Theme management (CSS-like + reloading + widget renderer + text rendering)
- UI automation
- Controls

So yeah, lots of stuff.

For reference: https://github.com/DlangGraphicsWG
We wanted to get something actually working before making an 
announcement (we rely quite heavily on Manu for image library/render 
pipeline parts and he has been busy most of this year).

Our discussions are on Discord.

Here is an old image that I generated from my experimenting with widget 
rendering: 
https://cdn.discordapp.com/attachments/522751717780094979/634557892556750878/Capture.PNG
Nov 19 2019
next sibling parent reply Dukc <ajieskola gmail.com> writes:
On Wednesday, 20 November 2019 at 04:29:18 UTC, rikki cattermole 
wrote:
 For reference: https://github.com/DlangGraphicsWG
 We wanted to get something actually working before making an 
 announcement (we rely quite heavily on Manu for image 
 library/render pipeline parts and he has been busy most of this 
 year).
Did you consider taking your color type (excellent work by the way) and making a mir.ndslice out of it? You would basically have an efficient bitmap type with bells and whistles, with no effort required on your part.
Nov 20 2019
parent rikki cattermole <rikki cattermole.co.nz> writes:
On 21/11/2019 2:14 AM, Dukc wrote:
 On Wednesday, 20 November 2019 at 04:29:18 UTC, rikki cattermole wrote:
 For reference: https://github.com/DlangGraphicsWG
 We wanted to get something actually working before making an 
 announcement (we rely quite heavily on Manu for image library/render 
 pipeline parts and he has been busy most of this year).
Did you consider taking your color type (excellent work by the way) and making a mir.ndslice out of it? You would basically have an efiicient bitmap type with bells and whistles with no effort required on your part.
Most of the work on the image library has been done by Manu. The rest of us did help come up with the requirements.

https://github.com/DlangGraphicsWG/documents/blob/master/image.md
https://github.com/DlangGraphicsWG/documents/blob/master/rgb-format.md

No, we did not consider supporting libraries like ndslice as a dependency of the 'core' image library. The goal was to make ImageBuffer work with graphics APIs like OpenGL directly and to make it usable as widely as possible.

https://github.com/DlangGraphicsWG/image/blob/master/src/wg/image/imagebuffer.d#L17

Mutation on the CPU is a much rarer thing in practice for game development + GUIs, so it isn't a requirement.
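To illustrate the "work with graphics APIs directly" point, here is a tiny sketch. It is not the actual DlangGraphicsWG ImageBuffer, just the general idea: a flat, pitch-aware pixel buffer is something an upload call in OpenGL or Vulkan can consume without any per-pixel conversion on the CPU.

// Not the actual DlangGraphicsWG ImageBuffer, just an illustration of the idea.
struct SimpleImage
{
    uint width, height;
    size_t rowPitch;   // bytes per row; may include padding for alignment
    ubyte[] data;      // RGBA8 pixels, rows spaced rowPitch bytes apart

    static SimpleImage create(uint w, uint h)
    {
        SimpleImage img;
        img.width = w;
        img.height = h;
        img.rowPitch = w * 4;                    // tightly packed RGBA8
        img.data = new ubyte[img.rowPitch * h];
        return img;
    }

    // Pointer + pitch + format is all an upload call needs.
    const(ubyte)* pixels() const { return data.ptr; }
}

void main()
{
    auto img = SimpleImage.create(256, 128);
    // A real application would now hand img.pixels to e.g. glTexImage2D.
    assert(img.data.length == 256 * 4 * 128);
}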
Nov 20 2019
prev sibling parent reply thedeemon <dlang thedeemon.com> writes:
On Wednesday, 20 November 2019 at 04:29:18 UTC, rikki cattermole 
wrote:
 For 100% D GUI toolkit, we would need somebody who is 
 knowledgeable on Unicode (text layouting e.g. Bidi) and font 
 rasterization. These together are many a man year projects and 
 quite massive.

 But more realistically with using e.g. FreeType and HarfBuzz:

 - Image library (started)
 - Event loop (next on my list to do)
 - Windowing
 - Render pipeline
 - Text rendering (part of render pipeline sort of)
 - Widget rendering (I'm doing requirements gathering atm)
 - Theme management (CSS-like + reloading + widget renderer + 
 text rendering)
 - UI automation
 - Controls

 So yeah, lots of stuff.
That's the problem: basically all of it is already there in DLangUI. It's done, complete, working, and has been for a few years. But programmers, being programmers, prefer ignoring existing libraries and keep reimplementing the same stuff from scratch.

In most cases it doesn't go too far, as the steam dries out. In some cases it does go far, and we get one more UI library that nobody uses, while other people reimplement the same thing from scratch and keep asking what's holding us back... It's a cultural thing, I guess. ;)
Nov 26 2019
parent reply rikki cattermole <rikki cattermole.co.nz> writes:
On 27/11/2019 1:01 AM, thedeemon wrote:
 On Wednesday, 20 November 2019 at 04:29:18 UTC, rikki cattermole wrote:
 For 100% D GUI toolkit, we would need somebody who is knowledgeable on 
 Unicode (text layouting e.g. Bidi) and font rasterization. These 
 together are many a man year projects and quite massive.

 But more realistically with using e.g. FreeType and HarfBuzz:

 - Image library (started)
 - Event loop (next on my list to do)
 - Windowing
 - Render pipeline
 - Text rendering (part of render pipeline sort of)
 - Widget rendering (I'm doing requirements gathering atm)
 - Theme management (CSS-like + reloading + widget renderer + text 
 rendering)
 - UI automation
 - Controls

 So yeah, lots of stuff.
That's the problem - basically all of it is already there in DLangUI, it's done, complete, working, and has been for a few years. But programmers being programmers, prefer ignoring existing libraries and keep reimplementing the same stuff from scratch. In most cases it doesn't go too far, as the steam dries out. In some cases it does go far, we get one more UI library that nobody uses, and instead other people reimplement the same from scratch and keep asking what's holding us back... It's a cultural thing, I guess. ;)
Things it does not have:

- Event loop
- Windowing (screenshots, notification messages, system tray)
- Render pipeline is from AAA games, the only person I trust with designing it is Manu
- UI automation (if you don't have this you can literally be sued, and that ignores the customers you will lose and the bad press in general)

Event loop: while it does exist, you can't hook into it. See glib's event loop to see something *actually* acceptable.

Theme management is mostly there, not quite what I would suggest, but it is there.
Nov 26 2019
next sibling parent reply thedeemon <dlang thedeemon.com> writes:
On Tuesday, 26 November 2019 at 12:15:57 UTC, rikki cattermole 
wrote:
 - Windowing (screenshots, notification messages, system tray)
 - Render pipeline is from AAA games, the only person I trust 
 with designing it is Manu
 - UI automation (if you don't have this you can literally be 
 sued and that ignores the customers you will loose and bad 
 press in general)

 Event loop, while it does exist you can't hook into it. See 
 glib's event loop to see something *actually* acceptable.

 Theme management is mostly there, not quite what I would 
 suggest, but it is there.
Ok, fair points, sorry for the rant. Meanwhile, for those with less strict demands: please don't be afraid to use UI libraries that are not in active development. It might just indicate that all the things the original author wanted are already there.
Nov 26 2019
parent rikki cattermole <rikki cattermole.co.nz> writes:
On 27/11/2019 2:23 AM, thedeemon wrote:
 On Tuesday, 26 November 2019 at 12:15:57 UTC, rikki cattermole wrote:
 - Windowing (screenshots, notification messages, system tray)
 - Render pipeline is from AAA games, the only person I trust with 
 designing it is Manu
 - UI automation (if you don't have this you can literally be sued and 
 that ignores the customers you will loose and bad press in general)

 Event loop, while it does exist you can't hook into it. See glib's 
 event loop to see something *actually* acceptable.

 Theme management is mostly there, not quite what I would suggest, but 
 it is there.
Ok, fair points, sorry for the rant. Meanwhile for those with less strict demands, please don't be afraid to use the UI libraries that are not in active development. It might just indicate all the things original author wanted are already there.
I understand. It's important to put into context the general case for a GUI and a small subset of usages for it.
Nov 26 2019
prev sibling parent reply Gregor Mückl <gregormueckl gmx.de> writes:
On Tuesday, 26 November 2019 at 12:15:57 UTC, rikki cattermole 
wrote:
 - Render pipeline is from AAA games, the only person I trust 
 with designing it is Manu
I don't understand what you mean by this. A game rendering pipeline and a desktop UI rendering pipeline are fundamentally very, very different beasts. One shouldn't be used to emulate the other as the use cases are far too dissimilar.
 - UI automation (if you don't have this you can literally be 
 sued and that ignores the customers you will loose and bad 
 press in general)
Let's not resort to hyperbole here, please. I only know of lawsuits against websites lacking accessibility features. Has any such lawsuit ever been filed over a desktop application? The real risk of a lawsuit is low.

Having said that, there are applications that definitely should be accessible. There are others where a certain kind of accessibility support is just not possible. Screen readers and Photoshop obviously make little sense in combination, for example. And accessibility is different from automation. There are different APIs for that on different platforms.

So, in general, I'm still not seeing a proper justification for not building on top of the work that already exists in dlangUI.
Nov 27 2019
next sibling parent reply rikki cattermole <rikki cattermole.co.nz> writes:
On 28/11/2019 5:06 AM, Gregor Mückl wrote:
 On Tuesday, 26 November 2019 at 12:15:57 UTC, rikki cattermole wrote:
 - UI automation (if you don't have this you can literally be sued and 
 that ignores the customers you will loose and bad press in general)
Let's not resort to hyperbole here, please. I only know of lawsuits against websites lacking accessibility features. Has any such lawsuit ever been filed for a desktop application? The real risk of a lawsuit is low.
It covers mobile applications as well. Different API, same scope, same code pretty much.
 Having said that, there's applications that definitely should be 
 accessible. There's others where a certain kind of accessibility support 
 is just not possible. Screen readers and Photoshop obviously make little 
 sense in combination, for example. And accessibility is different from 
 automation. There's different APIs for that on different platforms.
 
 So, in general, I'm still not seeing a proper justification for not 
 building on top of the work that already exists in dlangUI.
UI automation APIs are how accessibility programs like screen readers work.

"Microsoft UI Automation is an accessibility framework that enables Windows applications to provide and consume programmatic information about user interfaces (UIs). It provides programmatic access to most UI elements on the desktop. It enables assistive technology products, such as screen readers, to provide information about the UI to end users and to manipulate the UI by means other than standard input. UI Automation also allows automated test scripts to interact with the UI."

- https://docs.microsoft.com/en-us/windows/win32/winauto/entry-uiauto-win32
Nov 27 2019
parent reply Gregor Mückl <gregormueckl gmx.de> writes:
On Wednesday, 27 November 2019 at 16:34:33 UTC, rikki cattermole 
wrote:
 On 28/11/2019 5:06 AM, Gregor Mückl wrote:
 On Tuesday, 26 November 2019 at 12:15:57 UTC, rikki cattermole 
 wrote:
 - UI automation (if you don't have this you can literally be 
 sued and that ignores the customers you will loose and bad 
 press in general)
Let's not resort to hyperbole here, please. I only know of lawsuits against websites lacking accessibility features. Has any such lawsuit ever been filed for a desktop application? The real risk of a lawsuit is low.
It covers mobile applications as well. Different API, same scope, same code pretty much.
Sorry, I don't follow.
 Having said that, there's applications that definitely should 
 be accessible. There's others where a certain kind of 
 accessibility support is just not possible. Screen readers and 
 Photoshop obviously make little sense in combination, for 
 example. And accessibility is different from automation. 
 There's different APIs for that on different platforms.
 
 So, in general, I'm still not seeing a proper justification 
 for not building on top of the work that already exists in 
 dlangUI.
UI automation API's is how accessibility programs like screen readers work. "Microsoft UI Automation is an accessibility framework that enables Windows applications to provide and consume programmatic information about user interfaces (UIs). It provides programmatic access to most UI elements on the desktop. It enables assistive technology products, such as screen readers, to provide information about the UI to end users and to manipulate the UI by means other than standard input. UI Automation also allows automated test scripts to interact with the UI." - https://docs.microsoft.com/en-us/windows/win32/winauto/entry-uiauto-win32
No, this is only how Microsoft chose to name their accessibility interface. Google and Apple use accessibility as the relevant term, including in their actual API naming. The same goes for Gtk and Qt, which are the most common implementations of accessibility features on top of X11 (sadly, the X protocol doesn't have any notion of accessibility itself). Please don't use the single outlier's unique terms.
Nov 27 2019
parent rikki cattermole <rikki cattermole.co.nz> writes:
On 28/11/2019 6:08 AM, Gregor Mückl wrote:
 On Wednesday, 27 November 2019 at 16:34:33 UTC, rikki cattermole wrote:
 Having said that, there's applications that definitely should be 
 accessible. There's others where a certain kind of accessibility 
 support is just not possible. Screen readers and Photoshop obviously 
 make little sense in combination, for example. And accessibility is 
 different from automation. There's different APIs for that on 
 different platforms.

 So, in general, I'm still not seeing a proper justification for not 
 building on top of the work that already exists in dlangUI.
UI automation API's is how accessibility programs like screen readers work. "Microsoft UI Automation is an accessibility framework that enables Windows applications to provide and consume programmatic information about user interfaces (UIs). It provides programmatic access to most UI elements on the desktop. It enables assistive technology products, such as screen readers, to provide information about the UI to end users and to manipulate the UI by means other than standard input. UI Automation also allows automated test scripts to interact with the UI." - https://docs.microsoft.com/en-us/windows/win32/winauto/entry-uiauto-win32
No, this is only how Microsoft chose to name their accessibility interface. Google and Apple use accessibility as the relevant term, including in their actual API naming. The same goes for Gtk and Qt, which are the most common implementations of accessibility features on top of X11 (sadly, the X protocol doesn't have any notion of accessibility itself). Please don't use the single outlier's unique terms.
"Action methods. The NSAccessibility protocol also defines a number of methods that simulate button presses, mouse clicks and selections in your view or control. By implementing these methods, you give accessibility clients the ability to drive your view or control." 1. The user says, “Open Preferences window.” 2. The screen reader sends a message to the app’s accessible element, asking for a reference to the menu bar accessible element. It then queries the menu bar for a list of its children and queries each child for its title. As soon as it finds the one whose title matches app’s name (that is, the application menu). A second iteration lets it find the Preferences menu item within the application menu. Finally, the screen reader tells the Preferences menu item to perform the press action. https://developer.apple.com/library/archive/documentation/Accessibility/Conceptual/AccessibilityMacOSX/OSXAXmodel.html#//apple_ref/doc/uid/TP40001078-CH208-TPXREF101 Sounds like UI automation to me. Regardless of what they named it. And for ATK which is Gtk's library: https://developer.gnome.org/atk/stable/AtkAction.html "AtkAction should be implemented by instances of AtkObject classes with which the user can interact directly, i.e. buttons, checkboxes, scrollbars, e.g. components which are not "passive" providers of UI information." Sounds like UI automation to me even if accessibility centric. From QT with regards to accessibility support on Windows: "Also, the new UI Automation support in Qt may become useful for application testing, since it can provide metadata and programmatic control of UI elements, which can be leveraged by automated test suites and other tools." https://www.qt.io/blog/2018/02/20/qt-5-11-brings-new-accessibility-backend-windows Linux support is hard to find anything solid on for QT, but they do have support for ATK bundled with it as part of AT-SPI2 specification.
Nov 27 2019
prev sibling parent reply Ethan <gooberman gmail.com> writes:
On Wednesday, 27 November 2019 at 16:06:38 UTC, Gregor Mückl 
wrote:
 On Tuesday, 26 November 2019 at 12:15:57 UTC, rikki cattermole 
 wrote:
 - Render pipeline is from AAA games, the only person I trust 
 with designing it is Manu
I don't understand what you mean by this. A game rendering pipeline and a desktop UI rendering pipeline are fundamentally very, very different beasts. One shouldn't be used to emulate the other as the use cases are far too dissimilar.
Stop saying this. It's thoroughly incorrect. Cripes.

If you think the desktop layout engine introduced in Windows Vista, or even the layout engines used in mobile browsers and current desktop browsers, don't have a ton in common with a game rendering pipeline, then your knowledge is well outdated.
Nov 28 2019
next sibling parent reply Paulo Pinto <pjmlp progtools.org> writes:
On Thursday, 28 November 2019 at 15:29:21 UTC, Ethan wrote:
 On Wednesday, 27 November 2019 at 16:06:38 UTC, Gregor Mückl 
 wrote:
 On Tuesday, 26 November 2019 at 12:15:57 UTC, rikki cattermole 
 wrote:
 - Render pipeline is from AAA games, the only person I trust 
 with designing it is Manu
I don't understand what you mean by this. A game rendering pipeline and a desktop UI rendering pipeline are fundamentally very, very different beasts. One shouldn't be used to emulate the other as the use cases are far too dissimilar.
Stop saying this. It's thoroughly incorrect. Cripes. If you think the desktop layout engine introduced in Windows Vista, or even the layout engines used in mobile browsers and current desktop browsers, doesn't have a ton in common with a game rendering pipeline then your knowledge is well outdated.
As an example of it, the Windows 10 composition layer is basically a thin wrapper around DirectX, exposing most of its features to WinUI.

https://docs.microsoft.com/en-us/windows/uwp/composition/visual-layer
https://channel9.msdn.com/Events/TechDays/Techdays-2016-The-Netherlands/The-Visual-Layer-Animations-Effects--More-in-Windows-10-Anniversary-Edition
Nov 28 2019
parent Ola Fosheim Grøstad <ola.fosheim.grostad gmail.com> writes:
On Thursday, 28 November 2019 at 15:42:29 UTC, Paulo Pinto wrote:
 As an example of it, Windows 10 composition layer is basically 
 a thin wrapper around DirectX, exposing most of its features to 
 WinUI.
Well, you could've said that for the Amiga too, but you do things differently for a UI than for a game. Then again, you do things differently for different types of graphics engines and hardware too, and that does change over time... so the argument is kinda moot from the get-go.

Anyway, if you are building a portable UI toolkit for existing windowing systems, you should not open the most expressive graphics context on all platforms. You should adapt to the platform (save resources, get better interop with native toolkits etc.) AND be future proof. Meaning: be conservative. If you want to run on embedded slow monochrome LCDs, networked X windows, OS X, Windows... well, you won't get very far with a game engine.

Besides, there is no way the D community can fully sustain a portable UI implemented at the hardware level anyway... Just use Skia conservatively and save yourself a few man-years of design, development, porting, debugging and maintenance, and that is only for the lowest level.
Nov 28 2019
prev sibling next sibling parent reply Jab <jab_293 gmall.com> writes:
On Thursday, 28 November 2019 at 15:29:21 UTC, Ethan wrote:
 On Wednesday, 27 November 2019 at 16:06:38 UTC, Gregor Mückl 
 wrote:
 On Tuesday, 26 November 2019 at 12:15:57 UTC, rikki cattermole 
 wrote:
 - Render pipeline is from AAA games, the only person I trust 
 with designing it is Manu
I don't understand what you mean by this. A game rendering pipeline and a desktop UI rendering pipeline are fundamentally very, very different beasts. One shouldn't be used to emulate the other as the use cases are far too dissimilar.
Stop saying this. It's thoroughly incorrect. Cripes. If you think the desktop layout engine introduced in Windows Vista, or even the layout engines used in mobile browsers and current desktop browsers, doesn't have a ton in common with a game rendering pipeline then your knowledge is well outdated.
Do you have any more information on the topic? I remember digging through Qt and there are sections that completely avoid the GPU altogether, as it was too inaccurate for the computation that was required. Can't recall exactly what it was.
Nov 28 2019
parent reply Ethan <gooberman gmail.com> writes:
On Thursday, 28 November 2019 at 19:37:47 UTC, Jab wrote:
 Do you have any more information on the topic? I remember 
 digging through Qt and there are sections that completely avoid 
 the GPU all together as it is too inaccurate for the 
 computation that was required. Can't recall exactly what it was.
This would have been an accurate statement when GPUs were entirely fixed function. But then this little technology called "shaders" was introduced to consumer hardware in 2001. GPUs these days are little more than programmable number crunchers that work *REALLY FAST* in parallel.
Nov 28 2019
parent reply Jab <jab_293 gmall.com> writes:
On Thursday, 28 November 2019 at 20:46:45 UTC, Ethan wrote:
 On Thursday, 28 November 2019 at 19:37:47 UTC, Jab wrote:
 Do you have any more information on the topic? I remember 
 digging through Qt and there are sections that completely 
 avoid the GPU all together as it is too inaccurate for the 
 computation that was required. Can't recall exactly what it 
 was.
This would have been an accurate statement when GPUs were entirely fixed function. But then this little technology called "shaders" was introduced to consumer hardware in 2001. GPUs these days are little more than programmable number crunchers that work *REALLY FAST* in parallel.
It was Qt5, which is pretty recent, so no fixed pipelines are used. That's kind of what I was surprised about looking through Qt: quite a bit of it is still done on the CPU, things I wouldn't have expected. Which is why I was wondering if there was any more information on the topic.

IIRC GPUs are limited in what they can do in parallel, so if you only need to do one thing for a specific job, the rest of the GPU isn't really being fully utilized.
Nov 28 2019
next sibling parent reply Ola Fosheim Grostad <ola.fosheim.grostad gmail.com> writes:
On Friday, 29 November 2019 at 05:16:08 UTC, Jab wrote:
 IIRC GPUs are limited in what they can do in parallel, so if 
 you only need to do 1 things for a specific job the rest of the 
 GPU isn't really being fully utilized.
GPUs have used a VLIW design, but are moving to RISC as a result of the GPGPU trend. So they are becoming more like simple CPUs. But what you get, and what you get to access, will vary based on API and hardware. (AI and raytracing will likely cause more changes in the future too.)

So you need a CPU software renderer to fall back on; GPU rendering is more of an optimization in addition to CPU rendering. But more and more is moving to the GPU. Look at the roadmap for Skia to get an idea.
Nov 28 2019
parent reply Gregor Mückl <gregormueckl gmx.de> writes:
On Friday, 29 November 2019 at 06:02:40 UTC, Ola Fosheim Grostad 
wrote:
 On Friday, 29 November 2019 at 05:16:08 UTC, Jab wrote:
 IIRC GPUs are limited in what they can do in parallel, so if 
 you only need to do 1 things for a specific job the rest of 
 the GPU isn't really being fully utilized.
GPUs have used a VLIW design, but are moving to RISC as a result of the GPGPU trend. So, becoming more like simple CPUs. But what you get and what you get to access will vary based on API and hardware. (AI and raytracing will likely cause more changes in the future too.)
GPUs are vector processors, typically 16-wide SIMD. The shaders and compute kernels for them are written from a single-"threaded" perspective, but this is converted to SIMD, with one "thread" really being a single value in the 16-wide register. This has all kinds of implications for things like branching and memory accesses. This forum is not the place to go into them.
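As a rough CPU-side illustration of the branching point (this is only a model, not how any particular GPU actually executes code), a source-level branch effectively becomes masked execution over the whole 16-wide group:

import std.stdio;

void main()
{
    enum width = 16;                    // one SIMD group of 16 "threads"
    float[width] x;
    foreach (i, ref v; x) v = i;

    // Source-level shader code would read: if (x < 8) y = x * 2; else y = x + 100;
    // A SIMD machine evaluates BOTH sides for the whole group and then selects
    // per lane with a mask, so divergent lanes waste work.
    bool[width] mask;
    float[width] thenSide, elseSide, y;
    foreach (i; 0 .. width) mask[i]     = x[i] < 8;
    foreach (i; 0 .. width) thenSide[i] = x[i] * 2;
    foreach (i; 0 .. width) elseSide[i] = x[i] + 100;
    foreach (i; 0 .. width) y[i]        = mask[i] ? thenSide[i] : elseSide[i];

    writeln(y);
}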
 So you need a CPU software renderer to fall back on, GPU 
 rendering is more of an optimization in addition to CPU 
 rendering. But more and more is moving to the GPU.

 Look at the roadmap for Skia to get an idea.
Yes, proper drawing of common 2d graphics primitives is hard.
Nov 29 2019
parent reply Gregor Mückl <gregormueckl gmx.de> writes:
On Friday, 29 November 2019 at 08:45:30 UTC, Gregor Mückl wrote:
 On Friday, 29 November 2019 at 06:02:40 UTC, Ola Fosheim 
 Grostad wrote:
 So you need a CPU software renderer to fall back on, GPU 
 rendering is more of an optimization in addition to CPU 
 rendering. But more and more is moving to the GPU.

 Look at the roadmap for Skia to get an idea.
Yes, proper drawing of common 2d graphics primitives is hard.
Accidentally hit send too early. Sorry.

I am not aware of a full GPU implementation of a TTF or OTF font renderer. Glyphs are defined as 2nd or 3rd order splines, but these are warped according to pretty complex rules. All of that is often done with subpixel precision to get proper antialiasing.

The 2D rendering engines in Qt, cairo, Skia... contain proper implementations for primitives like arcs, polylines with various choices for joints and end caps, filled polygons with correct self-intersection handling, gradients, fill patterns, ... All of these things can be done on GPUs (most of it has been), but I highly doubt that this would be that much faster. You need lots of different shaders for these primitives, and switching state while rendering is expensive.
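For reference, here is what "glyphs are defined as 2nd or 3rd order splines" boils down to: a minimal quadratic Bezier evaluation in D, not tied to any real font library, and ignoring all the warping, hinting and antialiasing rules mentioned above.

import std.stdio;

struct Point { double x, y; }

// One quadratic Bezier segment: the curve type TrueType outlines are built
// from (CFF/OTF outlines use cubic, i.e. 3rd order, segments instead).
Point quadBezier(Point p0, Point p1, Point p2, double t)
{
    immutable u = 1.0 - t;
    return Point(u*u*p0.x + 2*u*t*p1.x + t*t*p2.x,
                 u*u*p0.y + 2*u*t*p1.y + t*t*p2.y);
}

void main()
{
    // Flatten one segment into points; a real text renderer then scan-converts
    // the resulting contours with antialiasing, hinting, subpixel positioning...
    auto p0 = Point(0, 0), p1 = Point(50, 100), p2 = Point(100, 0);
    foreach (i; 0 .. 9)
        writeln(quadBezier(p0, p1, p2, i / 8.0));
}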
Nov 29 2019
next sibling parent reply Basile B. <b2.temp gmx.com> writes:
On Friday, 29 November 2019 at 09:00:20 UTC, Gregor Mückl wrote:
 On Friday, 29 November 2019 at 08:45:30 UTC, Gregor Mückl wrote:
 On Friday, 29 November 2019 at 06:02:40 UTC, Ola Fosheim 
 Grostad wrote:
 So you need a CPU software renderer to fall back on, GPU 
 rendering is more of an optimization in addition to CPU 
 rendering. But more and more is moving to the GPU.

 Look at the roadmap for Skia to get an idea.
Yes, proper drawing of common 2d graphics primitives is hard.
Accidentially hit send too early. Sorry. I am not aware of a full GPU implementation of a TTF or OTF font renderer. Glyphs are defined as 2nd or 3rd order splines, but these are warped according to pretty complex rules. All of that is often done with subpixel precision to get proper antialiasing. These 2d rendering engine in Qt, cairo, Skia... contain proper
cairo is not comparable to Skia or Qt; it's more of an intermediate-level API, which can itself use different backends. But it's clearly lower level than Skia, from the little I know of it.
 implememtations for primitives like arcs, polylines with 
 various choices for joints and end caps, filled polygons with 
 correct self-intersection handling, gradients, fill patterns, 
 ... All of these things can be done on GPUs (most of it has), 
 but I highly  doubt that this would be that much faster. You 
 need lots of different shaders for these primitives and 
 switching state while rendering is expensive.
Back in the early 2010s I used something comparable to QtQuick and it had different backends. On Windows we could choose between GDI+ and D2D+DirectWrite. The latter, while using the GPU, was awfully laggy compared to the good old GDI+.

Back to the original topic. What people don't realize is that a 100% D GUI would be a more complex project than the D compiler itself. Just the text features are a huge thing in themselves: unicode, BDI.
Nov 29 2019
next sibling parent reply rikki cattermole <rikki cattermole.co.nz> writes:
On 29/11/2019 10:55 PM, Basile B. wrote:
 
 Back to the original topic. What people don't realize is that a 100% D 
 GUI would be a more complex project than the D compiler itself. Just the 
 text features is a huge thing in itself: unicode, BDI.
Assuming you mean BIDI, yes. Text layouting is a real pain, mostly because it needs an expert in Unicode to do it right. But it's all pretty well defined, with tests described, so it shouldn't be considered out of scope. Font rasterization on the other hand... ugh
Nov 29 2019
next sibling parent Basile B. <b2.temp gmx.com> writes:
On Friday, 29 November 2019 at 10:19:58 UTC, rikki cattermole 
wrote:
 On 29/11/2019 10:55 PM, Basile B. wrote:
 
 Back to the original topic. What people don't realize is that 
 a 100% D GUI would be a more complex project than the D 
 compiler itself. Just the text features is a huge thing in 
 itself: unicode, BDI.
Assuming you mean BIDI yes. Text layouting is a real pain.
Yes of course, sorry. I think it was even you who said this, maybe here or on IRC (something like "it's in itself a full project").
 Mostly because it needs an expert in Unicode to do right. But 
 its all pretty well defined with tests described. So it 
 shouldn't be considered out of scope. Font rasterization on the 
 other hand... ugh
Nov 29 2019
prev sibling parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Fri, Nov 29, 2019 at 11:19:58PM +1300, rikki cattermole via Digitalmars-d
wrote:
 On 29/11/2019 10:55 PM, Basile B. wrote:
 
 Back to the original topic. What people don't realize is that a 100%
 D GUI would be a more complex project than the D compiler itself.
 Just the text features is a huge thing in itself: unicode, BDI.
Assuming you mean BIDI yes. Text layouting is a real pain. Mostly because it needs an expert in Unicode to do right. But its all pretty well defined with tests described. So it shouldn't be considered out of scope. Font rasterization on the other hand... ugh
Text layout is non-trivial even with the Unicode specs, because the Unicode specs leave a lot of things to the implementation, mostly because it's out of scope (like the actual length of a piece of text in pixels, because that depends on rasterization, kerning, and font-dependent complexities -- some scripts like Arabic require what's essentially running a DSL in order to get the variant glyphs right -- not to mention optional behaviours that the Unicode spec explicitly says are up to user preferences).

You can't really do layout as something apart from rasterization, because things like where/how to wrap a line of text will change depending on how the glyphs are rasterized.

Layout, in my mind, also involves things like algorithms that avoid running streams of whitespace in paragraphs, i.e., LaTeX-style O(n^3) line-breaking algorithms. This is not specified by the Unicode line-breaking algorithm, BTW, which is a misnomer: it doesn't break lines, it only finds line-breaking *opportunities*, and leaves it up to the application to decide where to actually break lines. Part of the reason is that Unicode doesn't deal with font details. It's up to the text rendition code to handle that properly.

T

-- 
It only takes one twig to burn down a forest.
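A toy illustration of that split of responsibilities, in D, assuming naive break opportunities after spaces and a fixed advance per code unit (which real Unicode and font handling emphatically is not):

import std.stdio;

// The "line breaking algorithm" part: it only reports break opportunities.
// Real code would implement UAX #14; here we naively allow a break after
// every space.
size_t[] breakOpportunities(string text)
{
    size_t[] breaks;
    foreach (i, c; text)
        if (c == ' ')
            breaks ~= i + 1;       // the line may end just after this space
    breaks ~= text.length;
    return breaks;
}

// The application/renderer part: decide where to actually break, which in
// real life needs measured glyph advances, not character counts.
string[] wrap(string text, size_t maxWidth)
{
    string[] lines;
    size_t lineStart = 0, lastGood = 0;
    foreach (b; breakOpportunities(text))
    {
        if (b - lineStart > maxWidth && lastGood > lineStart)
        {
            lines ~= text[lineStart .. lastGood];
            lineStart = lastGood;
        }
        lastGood = b;
    }
    lines ~= text[lineStart .. $];
    return lines;
}

void main()
{
    foreach (line; wrap("the quick brown fox jumps over the lazy dog", 12))
        writeln("|", line, "|");
}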
Nov 29 2019
prev sibling parent reply aberba <karabutaworld gmail.com> writes:
On Friday, 29 November 2019 at 09:55:28 UTC, Basile B. wrote:

 Back to the original topic. What people don't realize is that a 
 100% D GUI would be a more complex project than the D compiler 
 itself. Just the text features is a huge thing in itself: 
 unicode, BDI.
It doesn't have to come with everything for starters. DlangUI was mostly one man. How's that possible? And people here have used it for commercial products.
Dec 02 2019
parent reply Ola Fosheim Grøstad <ola.fosheim.grostad gmail.com> writes:
On Tuesday, 3 December 2019 at 07:58:42 UTC, aberba wrote:
  DlangUI was mostly one man. How's that possible? And people 
 here have used it for commercial products.
Seems to have been derived from a C++ project he had. Also closely tied to OpenGL, which basically is a dead end at this point outside browsers (lives on as WebGL). But yeah, a pretty big project.
Dec 03 2019
parent reply thedeemon <dlang thedeemon.com> writes:
On Tuesday, 3 December 2019 at 08:54:02 UTC, Ola Fosheim Grøstad 
wrote:
 On Tuesday, 3 December 2019 at 07:58:42 UTC, aberba wrote:
  DlangUI was mostly one man. How's that possible? And people 
 here have used it for commercial products.
Also closely tied to OpenGL
Is it? There are different "back ends", on Windows one can use WinAPI and on Linux SDL...
Dec 03 2019
parent reply Ola Fosheim Grøstad <ola.fosheim.grostad gmail.com> writes:
On Tuesday, 3 December 2019 at 17:35:19 UTC, thedeemon wrote:
 Is it? There are different "back ends", on Windows one can use 
 WinAPI and on Linux SDL...
I stand corrected then, but I only found OpenGL support and no other GPU API... but I haven't given it a hard look.
Dec 03 2019
parent rikki cattermole <rikki cattermole.co.nz> writes:
On 04/12/2019 6:59 AM, Ola Fosheim Grøstad wrote:
 On Tuesday, 3 December 2019 at 17:35:19 UTC, thedeemon wrote:
 Is it? There are different "back ends", on Windows one can use WinAPI 
 and on Linux SDL...
I stand corrected then, but I only found OpenGL support and no other GPU API... but I haven't given it a hard look.
It supports a framebuffer and OpenGL. The OpenGL support looks like it's basically just a framebuffer under the hood though...

Those "back ends" are different windowing implementations. Nothing to do with the GPU API, other than creating a context, basically.
Dec 03 2019
prev sibling parent Ola Fosheim Grøstad <ola.fosheim.grostad gmail.com> writes:
On Friday, 29 November 2019 at 09:00:20 UTC, Gregor Mückl wrote:
 These 2d rendering engine in Qt, cairo, Skia... contain proper 
 implememtations for primitives like arcs, polylines with 
 various choices for joints and end caps, filled polygons with 
 correct self-intersection handling, gradients, fill patterns, 
 ... All of these things can be done on GPUs (most of it has), 
 but I highly  doubt that this would be that much faster. You 
 need lots of different shaders for these primitives and 
 switching state while rendering is expensive.
Well, the whole topic is quite complex. One important aspect of this is that a UI should leave as many resources to the main program as possible (memory, computational power) and reduce power consumption as much as possible. Another aspect is that users have veeery different hardware. So, what works well on the GPU on one machine and with one application might not work well with another. Games can make other assumptions than UI frameworks...

But I think the most important aspect is that hardware changes over time, so what is available in the standard API may not reflect what you can do with extensions or a proprietary API. As far as I can tell, there is a push for more RISC-like setups (more simple cores) with more flexibility, but I don't know how far that will go. Maybe the whole coprocessor market will split into multiple incompatible lanes with a software API layer on top (like we see with OpenGL over Metal on OS X, just with more distance between what the API delivers and what the hardware supports). AI and raytracing could lead to a departure in coprocessor tech...

Having a software renderer (or a CPU/GPU hybrid) clearly is the most portable approach; going GPU-only seems to be a risky venture that would require lots of developers to target new architectures as they appear.
Nov 29 2019
prev sibling parent reply Ethan <gooberman gmail.com> writes:
On Friday, 29 November 2019 at 05:16:08 UTC, Jab wrote:
 IIRC GPUs are limited in what they can do in parallel, so if 
 you only need to do 1 things for a specific job the rest of the 
 GPU isn't really being fully utilized.
Yeah, that's not how GPUs work. They have a number of shader units that execute on outputs in parallel.

There used to be an explicit split between vertex and pixel pipelines in the early days, where it was very easy to underutilise the vertex pipeline. But shader units have been unified for a long time. Queue up a bunch of outputs and the driver and hardware will schedule it properly.
Nov 29 2019
next sibling parent Paulo Pinto <pjmlp progtools.org> writes:
On Friday, 29 November 2019 at 10:12:32 UTC, Ethan wrote:
 On Friday, 29 November 2019 at 05:16:08 UTC, Jab wrote:
 IIRC GPUs are limited in what they can do in parallel, so if 
 you only need to do 1 things for a specific job the rest of 
 the GPU isn't really being fully utilized.
Yeah, that's not how GPUs work. They have a number of shader units that execute on outputs in parallel. It used to be an explicit split between vertex and pixel pipelines in the early days, where it was very easy to underutilise the vertex pipeline. But shader units have been unified for a long time. Queue up a bunch of outputs and the driver and hardware will schedule it properly.
For those that care about this stuff, mesh shaders are the way forward.

https://devblogs.microsoft.com/directx/dev-preview-of-new-directx-12-features/
https://devblogs.microsoft.com/directx/coming-to-directx-12-mesh-shaders-and-amplification-shaders-reinventing-the-geometry-pipeline/
https://devblogs.nvidia.com/introduction-turing-mesh-shaders/
Nov 29 2019
prev sibling parent reply Jab <jab_293 gmall.com> writes:
On Friday, 29 November 2019 at 10:12:32 UTC, Ethan wrote:
 On Friday, 29 November 2019 at 05:16:08 UTC, Jab wrote:
 IIRC GPUs are limited in what they can do in parallel, so if 
 you only need to do 1 things for a specific job the rest of 
 the GPU isn't really being fully utilized.
Yeah, that's not how GPUs work. They have a number of shader units that execute on outputs in parallel. It used to be an explicit split between vertex and pixel pipelines in the early days, where it was very easy to underutilise the vertex pipeline. But shader units have been unified for a long time. Queue up a bunch of outputs and the driver and hardware will schedule it properly.
Wasn't really talking about different processes such as that. You can't run 1000s of different kernels in parallel. In graphical terms it'd be like queuing up a command to draw a single pixel. While that single pixel is drawing, the rest of the stack allocated to that kernel would be idle. It wouldn't be able to be utilized by another kernel.
Nov 29 2019
parent Gregor Mückl <gregormueckl gmx.de> writes:
On Friday, 29 November 2019 at 15:56:19 UTC, Jab wrote:
 On Friday, 29 November 2019 at 10:12:32 UTC, Ethan wrote:
 On Friday, 29 November 2019 at 05:16:08 UTC, Jab wrote:
 IIRC GPUs are limited in what they can do in parallel, so if 
 you only need to do 1 things for a specific job the rest of 
 the GPU isn't really being fully utilized.
Yeah, that's not how GPUs work. They have a number of shader units that execute on outputs in parallel. It used to be an explicit split between vertex and pixel pipelines in the early days, where it was very easy to underutilise the vertex pipeline. But shader units have been unified for a long time. Queue up a bunch of outputs and the driver and hardware will schedule it properly.
Wasn't really talking about different processes such as that. You can't run 1000s of different kernels in parallel. In graphical terms it'd be like queuing up a command to draw a single pixel. While that single pixel is drawing, the rest of the stack allocated to that kernel would be idle. It wouldn't be able to be utilized by another kernel.
That's not necessarily true anymore. Some more recent GPU architectures have been extended so that different cores can run different kernels. I haven't looked at the details because I haven't had a need for these features yet, but they're exposed and documented.
Nov 29 2019
prev sibling parent reply Gregor Mückl <gregormueckl gmx.de> writes:
On Thursday, 28 November 2019 at 15:29:21 UTC, Ethan wrote:
 On Wednesday, 27 November 2019 at 16:06:38 UTC, Gregor Mückl 
 wrote:
 On Tuesday, 26 November 2019 at 12:15:57 UTC, rikki cattermole 
 wrote:
 - Render pipeline is from AAA games, the only person I trust 
 with designing it is Manu
I don't understand what you mean by this. A game rendering pipeline and a desktop UI rendering pipeline are fundamentally very, very different beasts. One shouldn't be used to emulate the other as the use cases are far too dissimilar.
Stop saying this. It's thoroughly incorrect. Cripes. If you think the desktop layout engine introduced in Windows Vista, or even the layout engines used in mobile browsers and current desktop browsers, doesn't have a ton in common with a game rendering pipeline then your knowledge is well outdated.
I don't want to belabor that point too much, but I can say a few things in response to that:

Yes, compositors are implemented using 3D rendering APIs these days because they slap together textured quads on screen. They don't concern themselves with how the contents of these quads came to be.

And rendering the window contents is where things start to diverge a lot. A game engine is a fundamentally different beast from a renderer for the kind of graphics a UI draws. The graphics primitives that GUI code wants to deal with map awkwardly to the GPU rendering pipeline. Sure, there are ways (some of them quite impressive), but it's a pain. There's no explicit scene graph. You can construct a sort of implied scene graph from the draw calls the widgets make in their paint event handlers and go from there. But UI code sometimes requests state changes like crazy, switches primitive types, enables and disables blending, depends a lot on clipping etc., and you can't simply go and reorder most of that. As a result, a renderer for that is quite different from a renderer for a 3D scene, and hard to do in its own right. They can use the same GPU rendering API, but the algorithms on top of that are quite different. If you don't believe me, you can go and read some code: ImGUI, cairo-gl, Qt, WPF...

As for browsers: an HTML page is essentially a pretty static scene graph with quite simple constituent elements, with the exception of a few outliers like canvas. The range of styles possible through CSS is limited in such a way that an HTML rendering engine can do a lot of reasoning about that. That's a luxury of not having paint event handlers executing arbitrary code. And typical engines spend quite some time on that - hundreds of ms aren't uncommon on a page load as far as I know. DOM changes through JS can also be surprisingly slow for that reason. All that processing is too heavy for an application that has paint event handlers and wants to refresh quickly.
Nov 28 2019
next sibling parent reply Paolo Invernizzi <paolo.invernizzi gmail.com> writes:
On Friday, 29 November 2019 at 02:42:28 UTC, Gregor Mückl wrote:

 And rendering the window contents is where things start to 
 diverge a lot. A game engine is a fundamentally different beast 
 from a renderer for the kind of graphics a UI draws. The 
 graphics primitives that GUI code wants to deal map awkwardly 
 to the GPU rendering pipeline. Sure, there are ways (some of 
 them quite impressive), but it's a pain. There's no explicit 
 scene graph.
As a company that uses Qt extensively...

https://doc.qt.io/qt-5/qtquick-visualcanvas-scenegraph.html
Nov 29 2019
parent Gregor Mückl <gregormueckl gmx.de> writes:
On Friday, 29 November 2019 at 08:49:33 UTC, Paolo Invernizzi 
wrote:
 On Friday, 29 November 2019 at 02:42:28 UTC, Gregor Mückl wrote:

 And rendering the window contents is where things start to 
 diverge a lot. A game engine is a fundamentally different 
 beast from a renderer for the kind of graphics a UI draws. The 
 graphics primitives that GUI code wants to deal map awkwardly 
 to the GPU rendering pipeline. Sure, there are ways (some of 
 them quite impressive), but it's a pain. There's no explicit 
 scene graph.
As a company that use QT extensively ... https://doc.qt.io/qt-5/qtquick-visualcanvas-scenegraph.html
Same as with browser engines: Qt Quick gets away with it mostly because the UI is declarative. But declarative UIs have their own tradeoffs, and in the case of Qt Quick it comes in the form of less powerful widgets when compared to Qt Widgets.
Nov 29 2019
prev sibling next sibling parent reply Ethan <gooberman gmail.com> writes:
On Friday, 29 November 2019 at 02:42:28 UTC, Gregor Mückl wrote:
 They don't concern themselves with how the contents of these 
 quads came to be.
Amazing. Every word of what you just said is wrong.

What, you think stock Win32 widgets are rendered with CPU code under the Aero and later compositors?

You're treating custom user CPU rasterisation on pre-defined bounds as the entire rendering paradigm. And you can be assured that your code is reading from and writing to a quarantined section of memory that will later be composited by the layout engine.

If you're going to bring up examples, study WPF and UWP. Entirely GPU-driven WIMP APIs.

But I guess we still need homework assignments.

1) What is a Z buffer?
2) What is a frustum? What does "orthographic" mean in relation to that?
3) Comparing the traditional and Aero+ desktop compositors, which one has the advantage with redraws of any kind? Why?
4) Why does ImGui's code get so complicated behind the scenes? And what advantage does this present to a programmer who wishes to use the API?
5) Using a single untextured quad and a pixel shader, how would you rasterise a curve?

(I've written UI libraries and 3D scene graphs in my career as a console engine programmer, so you're going to want to be *very* thorough if you attempt to answer all these.)

On Friday, 29 November 2019 at 08:45:30 UTC, Gregor Mückl wrote:
 GPUs are vector processors, typically 16 wide SIMD. The shaders 
 and compute kernels for then are written from a 
 single-"threaded" perspective, but this is converted to SIMD 
 qith one "thread" really being a single value in the 16 wide 
 register. This has all kinds of implications for things like 
 branching and memory accesses. Thus forum is not rhe place to 
 go into them.
No, please, continue. Let's see exactly how poorly you understand this.

On Friday, 29 November 2019 at 09:00:20 UTC, Gregor Mückl wrote:
 All of these things can be done on GPUs (most of it has), but I 
 highly  doubt that this would be that much faster. You need 
 lots of different shaders for these primitives and switching 
 state while rendering is expensive.
When did you last use a GPU API? 1999?

Top-end gaming engines can output near-photorealistic complex scenes at 60 FPS. How many state changes do you think they perform in any given scene?

It's all dependent on API, driver, and even operating system. The WDDM introduced in Vista made breaking changes with XP, splitting a whole ton of the stuff that would traditionally be costly with a state change out of kernel-space code and into user-space code. Modern APIs like DirectX 12, Vulkan, Metal etc. go one step further and remove that responsibility from the driver and put it into user code.

aberba wrote:
 Whats holding ~100% D GUI back?
A lack of skilled and knowledgeable people with the necessary time and money to do it correctly.
Nov 29 2019
parent reply Gregor Mückl <gregormueckl gmx.de> writes:
On Friday, 29 November 2019 at 10:08:59 UTC, Ethan wrote:
 On Friday, 29 November 2019 at 02:42:28 UTC, Gregor Mückl wrote:
 They don't concern themselves with how the contents of these 
 quads came to be.
Amazing. Every word of what you just said is wrong.
I doubt this, but I am open to discussion. Let's try to remain civil and calm.
 What, you think stock Win32 widgets are rendered with CPU code 
 with the Aero and later compositors?
Win32? Probably still are. WPF and later? No. That has always had a DirectX rendering backend. And at least WPF has a reputation of being sluggish. I haven't had performance issues with either so far, though.
 You're treating custom user CPU rasterisation on pre-defined 
 bounds as the entire rendering paradigm. And you can be assured 
 that your code is reading to- and writing from- a quarantined 
 section of memory that will be later composited by the layout 
 engine.

 If you're going to bring up examples, study WPF and UWP. 
 Entirely GPU driven WIMP APIs.

 But I guess we still need homework assignments.
OK, I'll indulge you in the interest of a civil discussion.
 1) What is a Z buffer?
OK, back to basics. When rendering a 3D scene with opaque surfaces, the resulting image only contains the surfaces nearest to the camera. The rest is occluded. Solutions like depth sorting the triangles and rendering back to front are possible (see e.g. the DOOM engine and its BSP traversal for rendering), but they have drawbacks. E.g. even a set of three triangles may be mutually overlapping in a way that no consistent z ordering of the entire primitives exists. You need to split primitives to make that work. And you still need to guarantee sorted input.

A z buffer solves that problem by storing, for each pixel, the minimum z value encountered so far. When drawing a new primitive over that pixel, that primitive's z value is first compared to the stored value, and when it is further away, it is discarded.

Of course, a hardware z buffer can be configured in various other interesting ways. E.g. restricting the z value range to half of the NDC space, alternating half spaces and simultaneously flipping between min and max tests is an old trick to skip clearing the z buffer between frames. There's still more to this topic: transformation of stored z values to retain precision on 24 bit integer z buffers, hierarchical z buffers, early z testing... I'll just cut it short here.
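As a rough sketch in D, the core per-pixel operation (the default less-than test, none of the fancier modes) is just:

// Minimal z buffer test; z is assumed to already be in depth-range space.
// depth[] is the z buffer, color[] the framebuffer, both width*height long.
bool depthTest(float[] depth, uint[] color, size_t idx, float z, uint rgba)
{
    if (z >= depth[idx])
        return false;   // further away than what is stored: discard the fragment
    depth[idx] = z;     // keep the nearer value
    color[idx] = rgba;  // and write the fragment colour
    return true;
}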
 2) What is a frustum? What does "orthographic" mean in relation 
 to that?
The view frustum is the volume that is mapped to NDC. For perspective projection, it's a truncated four-sided pyramid. For orthographic projection, it's a cuboid. Fun fact: for correct stereo rendering to a flat display, you need asymmetrical perspective frustums; doing it with symmetric frustums rotated towards the vergence point leads to distortions.
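To make the orthographic case concrete: the projection is just a scale and a translation of that cuboid into NDC, with no perspective divide. A rough sketch, assuming OpenGL-style clip space conventions:

// Column-major 4x4 orthographic projection mapping the cuboid
// [l,r] x [b,t] x [n,f] into NDC [-1,1]^3 (OpenGL conventions).
float[16] orthographic(float l, float r, float b, float t, float n, float f)
{
    float[16] m = 0;
    m[0]  = 2 / (r - l);
    m[5]  = 2 / (t - b);
    m[10] = -2 / (f - n);
    m[12] = -(r + l) / (r - l);
    m[13] = -(t + b) / (t - b);
    m[14] = -(f + n) / (f - n);
    m[15] = 1;
    return m;
}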
 3) Comparing the traditional and Aero+ desktop compositors, 
 which one has the advantage with redraws of any kind? Why?
I'm assuming that by traditional you mean a CPU compositor. In that case, the GPU compositor has the full image of all top level windows cached as textures. All it needs to do is render these to the screen as textured quads. This is fast and, in simple terms, it can be done without interfering with the vertical scanout of the image to the screen to avoid tearing. Because the window contents are cached, applications don't need to redraw their contents when the z order changes (goodbye, damage events!) and, as a side effect, moving and repositioning top level windows is smooth.
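In sketch form (invented types, not any real compositor's API), the whole per-frame job is roughly:

// Minimal model of a composited desktop: each top-level window owns a cached
// texture; composition is just textured quads drawn back to front.
struct Rect { int x, y, w, h; }

struct Window
{
    uint cachedTexture;   // handle of the GPU texture holding the window image
    Rect screenRect;      // where the window currently sits on screen
}

// Moving a window or changing z order only changes this loop's inputs;
// the applications themselves are never asked to repaint.
void composite(Window[] windowsBackToFront,
               void delegate(uint texture, Rect dst) drawTexturedQuad)
{
    foreach (win; windowsBackToFront)
        drawTexturedQuad(win.cachedTexture, win.screenRect);
    // a present/swap synced to scanout would follow here to avoid tearing
}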
 4) Why does ImGui's code get so complicated behind the scenes? 
 And what advantage does this present to a programmer who wishes 
 to use the API?
One word: batching.

I'll briefly describe the Vulkan rendering process of ImGui, as far as I remember it off the top of my head: it creates a single big vertex buffer for all draw operations with a pretty uniform vertex layout, regardless of the primitive involved. All drawing state that doesn't need pipeline changes goes into the vertex buffer (world space coords, UV coords, vertex color...). It also retains a memory of the pipeline state required to draw the current set of primitives. All high level primitives are broken down into triangles, even lines and Bezier curves. This trick reduces the number of draw calls later.

The renderer retains a list of spans in the vertex buffer and their associated pipeline state. Whenever the higher level drawing code does something that requires a state change, the current span is terminated and a new one for the new pipeline state is started. As far as I remember, the code only has two pipelines: one for solid, untextured primitives, and one for textured primitives that is used for text rendering.

In this model, the higher level rendering code can just emit draw calls for individual primitives, but these are only recorded and not executed immediately. In a second pass, the vertex buffer is uploaded in a single transfer and the list of vertex buffer spans is processed, switching pipelines, setting descriptors and emitting the draw call for the relevant vertex buffer range for each span in order.

The main reason why this works is a fundamental ordering guarantee given by the Vulkan API: primitives listed in a vertex buffer must be rendered in such a way that the result is as if the primitives were processed in the order given in the buffer. For example, when primitives overlap, the last one in the buffer is the one that covers the overlap region in the resulting image.
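In code terms, the span bookkeeping boils down to something like this (my own simplified, invented sketch, not ImGui's actual declarations):

// Record primitives into one shared vertex buffer; only start a new span
// when the pipeline state changes.
enum Pipeline { solidColour, textured }

struct Vertex { float x, y, u, v; uint rgba; }

struct DrawSpan
{
    Pipeline pipeline;   // state needed to draw this range
    size_t   first;      // first vertex in the shared buffer
    size_t   count;      // number of vertices in this span
}

struct DrawList
{
    Vertex[]   vertices;   // one big vertex buffer for the whole UI
    DrawSpan[] spans;

    void addTriangle(Pipeline p, Vertex a, Vertex b, Vertex c)
    {
        if (spans.length == 0 || spans[$ - 1].pipeline != p)
            spans ~= DrawSpan(p, vertices.length, 0);
        vertices ~= [a, b, c];
        spans[$ - 1].count += 3;
    }
}

After recording, the whole vertex buffer is uploaded once and each span becomes exactly one draw call.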
 5) Using a single untextured quad and a pixel shader, how would 
 you rasterise a curve?
I'll let Jim Blinn answer that one for you: https://developer.nvidia.com/gpugems/GPUGems3/gpugems3_ch25.html I'd seriously mess up the math if I were to try to explain in detail. Bezier curves aren't my strong suit. I'm solving rendering equations for a living.
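The per-pixel test for the quadratic case is simple enough to sketch, though (deriving the general cubic cases is where it gets hairy). Roughly: give the three control points the "texture" coordinates (0,0), (1/2,0) and (1,1), render the triangle they span, and in the pixel shader evaluate which side of the curve the interpolated (u,v) lands on. A CPU analogue of that test, as a rough illustration only:

// Loop/Blinn-style inside test for one quadratic Bezier segment.
// (u, v) are the interpolated coordinates at this pixel; which sign you
// treat as "filled" is a convention and gets flipped per segment as needed.
bool insideQuadraticCurve(float u, float v)
{
    return u * u - v < 0;
}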
 (I've written UI libraries and 3D scene graphs in my career as 
 a console engine programmer, so you're going to want to be 
 *very* thorough if you attempt to answer all these.)

 On Friday, 29 November 2019 at 08:45:30 UTC, Gregor Mückl wrote:
 GPUs are vector processors, typically 16 wide SIMD. The 
 shaders and compute kernels for them are written from a 
 single-"threaded" perspective, but this is converted to SIMD 
 with one "thread" really being a single value in the 16 wide 
 register. This has all kinds of implications for things like 
 branching and memory accesses. This forum is not the place to 
 go into them.
No, please, continue. Let's see exactly how poorly you understand this.
Where is this wrong? Have you looked at CUDA or compute shaders? I'm honestly willing to listen and learn. I've talked about GPUs in these terms with other experts (Intel and nVidia R&D guys, among others) and this is a common model for how GPUs work. So I'm frankly puzzled by your response.
 On Friday, 29 November 2019 at 09:00:20 UTC, Gregor Mückl wrote:
 All of these things can be done on GPUs (most of it has), but 
 I highly  doubt that this would be that much faster. You need 
 lots of different shaders for these primitives and switching 
 state while rendering is expensive.
When did you last use a GPU API? 1999?
Last weekend, in fact. I'm bootstrapping a Vulkan/RTX raytracer as a pet project. I want to update an OpenGL based real time room acoustics rendering method that I published a while ago.
 Top-end gaming engines can output near-photorealistic complex 
 scenes at 60FPS. How many state changes do you think they 
 perform in any given scene?
As few as possible. They *do* take time, although they have become cheaper. Batching by shader is still a thing. Don't take my word for it. See the "Pipelines" section here: https://devblogs.nvidia.com/vulkan-dos-donts/ And that's with an API that puts pipeline state creation up front! I don't have hard numbers for state changes and draw calls in recent games, unfortunately. The only number that I remember was something like about 2000 draw calls for a frame in Ashes of the Singularity. While that game shows masses of units, I don't find the graphics particularly impressive. There's next to no animation on the units. The glitz is mostly decals and particle effects. There's also not a lot of screen space post processing going on. So I don't consider that to be representative.
 It's all dependent on API, driver, and even operating system. 
 The WDDM introduced in Vista made breaking changes with XP, 
 splitting a whole ton of the stuff that would traditionally be 
 costly with a state change out of kernel space code and in to 
 user space code. Modern APIs like DirectX 12, Vulkan, Metal etc 
 go one step further and remove that responsibility from the 
 driver and in to user code.
Ok, this is some interesting information. I haven't ever had to care about where user/kernel mode transitions happen in the driver stack. I guess I've been lucky that I have been able to file that under generic driver overhead so far.

Phew, this has become a long reply and it has taken me a lot of time to write it. I hope I could prove to you that I generally know what I'm writing about. I could point to my history as some additional proof, but I'd rather let this response stand for what it is.
Nov 29 2019
next sibling parent reply Ola Fosheim Grøstad <ola.fosheim.grostad gmail.com> writes:
On Friday, 29 November 2019 at 13:27:17 UTC, Gregor Mückl wrote:
 GPUs are vector processors, typically 16 wide SIMD. The 
 shaders and compute kernels for them are written from a
[…]
 Where is this wrong? Have you looked at CUDA or compute 
 shaders? I'm honestly willing to listen and learn.
Out of curiosity, what is being discussed? The abstract machine, the concrete micro code, or the concrete VLSI pipeline (electric pathways)?

If the latter, then I guess it all depends? But I believe a trick to save real estate is to have a wide ALU that is partitioned into various word-widths with gates preventing "carry". I would expect there to be a mix (i.e. I would expect 1/x to be implemented in a less efficient, but less costly manner).

However, my understanding is that VLIW caused too many bubbles in the pipeline for compute shaders and that they moved to a more RISC-like architecture where things like branching became less costly. However, these are just generic statements found in various online texts, so how that is made concrete in terms of VLSI design, well... that is less obvious. Though it seems reasonable that they would pick a microcode representation that was more granular (flexible).
 Last weekend, in fact. I'm bootstrapping a Vulkan/RTX raytracer 
 as pet project. I want to update an OpenGL based real time room 
 acoustics rendering method that I published a while ago.
Cool! :-D Maybe you do some version of overlap add convolution in the frequency domain, or is it in the time domain? Reading up on Laplace transforms right now... I remember when the IRCAM workstation was state-of-the-art, a funky NeXT cube with lots of DSPs. Things have come a long way in that realm since the 90s, at least on the hardware side.
Nov 29 2019
parent reply Gregor Mückl <gregormueckl gmx.de> writes:
On Friday, 29 November 2019 at 15:29:20 UTC, Ola Fosheim Grøstad 
wrote:
 On Friday, 29 November 2019 at 13:27:17 UTC, Gregor Mückl wrote:
 GPUs are vector processors, typically 16 wide SIMD. The 
 shaders and compute kernels for them are written from a
[…]
 Where is this wrong? Have you looked at CUDA or compute 
 shaders? I'm honestly willing to listen and learn.
Out of curiosity, what is being discussed? The abstract machine, the concrete micro code, or the concrete VLSI pipeline (electric pathways)?
What I wrote is a very abstract view of GPUs that is useful for programming. I may not have done a good job of summarizing it, now that I read that paragraph again. This is a fairly recent presentation that gives a gentle introduction to that model:

https://aras-p.info/texts/files/2018Academy%20-%20GPU.pdf

This presentation is of course a simplification of what is going on in a GPU, but it gets the core idea across. AMD and nVidia do have a lot of documentation that goes into some more detail, but at some point you're going to hit a wall. A lot of low level details are hidden behind NDAs and that's quite frustrating.
 If the latter then I guess it all depends? But I believe a 
 trick to save real estate is to have a wide ALU that is 
 partioned into various word-widths with gates preventing 
 "carry". I would expect there to be a mix (i.e. I would expect 
 1/x to be implemented in a less efficient, but less costly 
 manner)

 However, my understanding is that VLIW caused too many bubbles 
 in the pipeline for compute shaders and that they moved to a 
 more RISC like architecture where things like branching became 
 less costly. However, these are just generic statements found 
 in various online texts, so how that is made concrete in terms 
 om VLSI design, well... that is less obvious. Though it seems 
 reasonable that they would pick a microcode representation that 
 was more granular (flexible).
I don't have good information on that. A lot of the details of the actual ALU designs are kept under wraps. But when you want to cram a few hundred cores that do 16 wide floating point SIMD processing each onto a single die, simpler is better. And throughput trumps latency for graphics.
 Last weekend, in fact. I'm bootstrapping a Vulkan/RTX 
 raytracer as pet project. I want to update an OpenGL based 
 real time room acoustics rendering method that I published a 
 while ago.
Cool! :-D Maybe you do some version of overlap add convolution in the frequency domain, or is it in the time domain? Reading up on Laplace transforms right now...
The convolutions for aurealization are done in the frequency domain. Room impulse responses are quite long (up to several seconds), so time domain convolutions are barely feasible even offline. The only feasible way is to use the convolution theorem, transform everything into frequency space, multiply it there, and transform things back... while encountering the pitfalls of FFT in a continuous signal context along the way. There are a lot of pitfalls.

I'm doing all of the convolution on the CPU because the output buffer is read from main memory by the sound hardware. Audio buffer updates are not in lockstep with screen refreshes, so you can't reliably copy the next audio frame to the GPU, convolve it there and read it back in time, because the GPU is on its own schedule.

The OpenGL part of my method is for actually propagating sound through the scene and computing the impulse response from that. That is typically so expensive that it's also run asynchronously to the audio processing and mixing. Only the final impulse response is moved to the audio processing thread. Perceptually, it seems that you can get away with a fairly low update rate for the reverb in many cases.
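As a rough sketch of what one block of the frequency domain approach looks like with Phobos (just an illustration of the convolution theorem, not my actual partitioned convolution code):

import std.algorithm : map;
import std.array : array;
import std.numeric : fft, inverseFft;

// Hypothetical helper: convolve one block of dry signal with an impulse
// response via FFT -> pointwise multiply -> inverse FFT. Inputs are padded
// to the next power of two, since std.numeric's FFT wants power-of-two sizes.
double[] convolveViaFFT(const double[] block, const double[] ir)
{
    size_t n = 1;
    while (n < block.length + ir.length - 1)
        n <<= 1;

    auto a = new double[](n);
    a[] = 0.0;                        // explicit zero-pad (D inits doubles to NaN)
    a[0 .. block.length] = block[];
    auto b = new double[](n);
    b[] = 0.0;
    b[0 .. ir.length] = ir[];

    auto fa = fft(a);
    auto fb = fft(b);
    foreach (i; 0 .. n)
        fa[i] *= fb[i];               // pointwise multiply in frequency space

    return inverseFft(fa).map!(c => c.re).array;  // back to the time domain
}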
 I remember when the IRCAM workstation was state-of-the-art, a 
 funky NeXT cube with lots of DSPs. Things have come a long way 
 in that realm since the 90s, at least on the hardware side.
Yes, they have! I suspect that GPUs could make damn fine DSPs with their massive throughput. But they aren't linked well to audio hardware in Intel PCs. And those pesky graphics programmers want every ounce of GPU performance all to themselves and never share! ;)
Nov 29 2019
next sibling parent reply Ola Fosheim Grøstad <ola.fosheim.grostad gmail.com> writes:
On Friday, 29 November 2019 at 16:40:01 UTC, Gregor Mückl wrote:
 This presentation is of course a simplification of what is 
 going on in a GPU, but it gets the core idea across. AMD and 
 nVidia do have a lot of documentation that goes into some more 
 detail, but at some point you're going to hit a wall.
I think it is a bit interesting that Intel was pushing their Phi solution (many Pentium-cores), but seems to not update it recently. So I wonder if they will be pushing more independent GPU cores on-die (CPU-chip). It would make sense for them to build one architecture that can cover many market segments.
 The convolutions for aurealization are done in the frequency 
 domain. Room impulse responses are quite long (up to several 
 seconds), so a time domain convolutions are barely feasible 
 offline. The only feasible way is to use the convolution 
 theorem, transform everything into frequency space, multiply it 
 there, and transform things back...
I remember reading a paper about casting rays into a 3D model to estimate an acoustic model for the room in the mid 90s. I assume they didn't do it in real time.

I guess you could create a psychoacoustic parametric model that works in the time domain... it wouldn't be very accurate, but I wonder if it still could be effective. It is not like Hollywood movies have accurate sound... We have optical illusions for the visual system, but there are also auditory illusions for the aural system. E.g. Shepard tones that ascend forever, and I've heard the same has been done with motion of sound by morphing the phase of a sound over speakers that have been carefully placed with an exact distance between them, so that a sound moves to the left forever. I find such things kinda neat... :)

Some electroacoustic composers explore this field, I think it is called spatialization/diffusion? I viewed one of your videos and the phasing reminded me a bit of how these composers work. I don't have access to my record collection right now, but there are some soundtracks that are surprisingly spatial. Kind of like audio versions of non-photorealistic rendering techniques. :-) The only one I can remember right now seems to be Utility of Space by N. Barrett (unfortunately a short clip):
https://electrocd.com/en/album/2322/Natasha_Barrett/Isostasie
 There's a lot of pitfalls. I'm doing all of the convolution on 
 the CPU because the output buffer is read from main memory by 
 the sound hardware. Audio buffer updates are not in lockstep 
 with screen refreshes, so you can't reliably copy the next 
 audio frame to the GPU, convolve it there and read it back in 
 time because the GPU is on it's own schedule.
Right, why can't audio buffers be supported in the same way as screen buffers? Anyway, if Intel decides to integrate GPU cores and CPU cores more tightly, then... maybe. Unfortunately, Intel tends to focus on making existing apps run faster, not on enabling the next big thing.
 Perceptually, it seems that you can get away with a fairly low 
 update rate for the reverb in many cases.
If the sound sources are at a distance, then there should be some time to work it out? I haven't actually thought very hard about that... You could also treat early and late reflections separately (like in a classic reverb).

I wonder though if it actually has to be physically correct, because it seems to me that Hollywood movies can create more intense experiences by breaking with physical rules. But the problem is coming up with a good psychoacoustic model, I guess. So in a way, going with the physical model is easier... it is easier to evaluate, anyway.
 And those pesky graphics programmers want every ounce of GPU 
 performance all to themselves and never share! ;)
Yes, but maybe the current focus on neural-networks will make hardware vendors focus on reducing latency and thus improve the situation for audio as well. That is my prediction, but I could be very wrong. Maybe they just will insist on making completely separate coprocessors for NN.
Nov 29 2019
next sibling parent Ola Fosheim Grøstad <ola.fosheim.grostad gmail.com> writes:
On Friday, 29 November 2019 at 23:55:55 UTC, Ola Fosheim Grøstad 
wrote:
 On Friday, 29 November 2019 at 16:40:01 UTC, Gregor Mückl wrote:
 This presentation is of course a simplification of what is 
 going on in a GPU, but it gets the core idea across. AMD and 
 nVidia do have a lot of documentation that goes into some more 
 detail, but at some point you're going to hit a wall.
I think it is a bit interesting that Intel was pushing their Phi solution (many Pentium-cores), but seems to not update it recently. So I wonder if they will be pushing more independent GPU cores on-die (CPU-chip). It would make sense for them to build one architecture that can cover many market segments.
Yep, they are, seems that Intel Xe will succeed Phi: https://en.wikipedia.org/wiki/Intel_Xe But probably only for special purpose and enthusiasts, so not very relevant for UI. One interesting point regarding UI is that Linux drivers for AMD APUs support zero-copying using the unified GPU/CPU memory: https://en.wikipedia.org/wiki/Heterogeneous_System_Architecture#Software_support Whereas Intel on-die GPUs seem to require copying, but it isn't quite clear to me if that will be faster if it is in cache or not... I suspect it will have to flush first.. :-/ Another interesting fact is that NVIDIA has shown interest in RISC-V: https://abopen.com/news/nvidia-turns-to-risc-v-for-rc18-research-chip-io-core/
Nov 30 2019
prev sibling parent reply Gregor Mückl <gregormueckl gmx.de> writes:
On Friday, 29 November 2019 at 23:55:55 UTC, Ola Fosheim Grøstad 
wrote:
 On Friday, 29 November 2019 at 16:40:01 UTC, Gregor Mückl wrote:
 This presentation is of course a simplification of what is 
 going on in a GPU, but it gets the core idea across. AMD and 
 nVidia do have a lot of documentation that goes into some more 
 detail, but at some point you're going to hit a wall.
I think it is a bit interesting that Intel was pushing their Phi solution (many Pentium-cores), but seems to not update it recently. So I wonder if they will be pushing more independent GPU cores on-die (CPU-chip). It would make sense for them to build one architecture that can cover many market segments.
Intel Xe is supposed to be a dedicated GPU. I expect a radical departure from their x86 cores and their previous Xeon Phi chips that used a reduced x86 instruction set. Any successor to that needs more cores, but these can be a lot simpler.
 The convolutions for aurealization are done in the frequency 
 domain. Room impulse responses are quite long (up to several 
 seconds), so a time domain convolutions are barely feasible 
 offline. The only feasible way is to use the convolution 
 theorem, transform everything into frequency space, multiply 
 it there, and transform things back...
I remember reading a paper about casting rays into a 3D model to estimate an acoustic model for the room in the mid 90s. I assume they didn't do it real time.
Back in the 90s they probably didn't. But this is slowly becoming feasible. See e.g. https://www.oculus.com/blog/simulating-dynamic-soundscapes-at-facebook-reality-labs/ This has been released as part of the Oculus Audio SDK earlier this year.
 I guess you could create a psychoacoustic parametric model that 
 works in the time domain... it wouldn't be very accurate, but I 
 wonder if it still could be effective. It is not like Hollywood 
 movies have accurate sound...  We have optic illusions for the 
 visual system, but there are also auditive illusions for the 
 aural systems. E.g. shepard tones that ascend forever, and I've 
 heard the same have been done with motion of sound by morphing 
 the phase of a sound over speakers, that have been carefully 
 placed with an exact distance between them, so that a sound 
 moves to the left forever. I find such things kinda neat... :)
I think what you're getting at is filter chains that emulate reverb, but stay in the time domain. The canonical artificial reverb is the Schroeder reverberator. However, you still need a target RT60 to get the correct reverb tail length. You can try to derive that time in various ways. Path tracing is one. Maybe you could get away with an estimated reverb time based on the Sabine equation. I've never tried. Microsoft Research is working on an approach that precomputes wave propagation using FDTD and resorts to runtime lookup of these results.
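For reference, the Sabine estimate is just RT60 ~ 0.161 * V / A. A trivial sketch:

// Sabine's estimate of the RT60 reverberation time, with the room volume V
// in m^3 and the total absorption A = sum over surfaces of
// (surface area in m^2 * absorption coefficient of that surface).
double sabineRT60(double volume, const double[] areas, const double[] alphas)
{
    assert(areas.length == alphas.length);
    double absorption = 0;
    foreach (i; 0 .. areas.length)
        absorption += areas[i] * alphas[i];
    return 0.161 * volume / absorption;
}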
 Some electro acoustic composers explore this field, I think it 
 is called spatialization/diffusion? I viewed one of your vidoes 
 and the phasing reminded me a bit of how these composers work. 
 I don't have access to my record collection right now, but 
 there are some soundtracks that are surprisingly spatial. Kind 
 of like audio-versions of non-photorealistic rendering 
 techniques. :-) The only one I can remember right now seems to 
 be Utilty of Space by N. Barrett (unfortunately a short clip):
 https://electrocd.com/en/album/2322/Natasha_Barrett/Isostasie
Spatialization is something slightly different. It refers to the creation of the illusion that a sound originates from a specific point or volume in space. That's surprisingly hard to get right and it's an active area of research.

That track is interesting. I don't remember encountering any other purely artistic use of audio spatialization.
 There's a lot of pitfalls. I'm doing all of the convolution on 
 the CPU because the output buffer is read from main memory by 
 the sound hardware. Audio buffer updates are not in lockstep 
 with screen refreshes, so you can't reliably copy the next 
 audio frame to the GPU, convolve it there and read it back in 
 time because the GPU is on it's own schedule.
Right, why can't audiobuffers be supported in the same way as screenbuffers? Anyway, if Intel decides to integrate GPU cores and CPU cores tighter then... maybe. Unfortunately, Intel tends to focus on making existing apps run faster, not to enable the next-big-thing.
A GPU in compute mode doesn't really care about the semantics of the data in the buffers it gets handed. FIR filters should map fine to GPU computing, IIR filters not so much. So, depending on the workload, GPUs can do just fine. The real problem is one of keeping different sets of deadlines in a realtime system. Graphics imposes one set (the screen refresh rate) and audio imposes another (audio output buffer update rate). The GPU is usually lagging behind the CPU rather than in perfect lockstep and it's typically under high utilization, so it won't have appropriate open timeslots to meet other deadlines in most situations.
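To illustrate why the two map so differently, a toy scalar sketch (not my production code):

// FIR: every output sample depends only on the input, so each sample can be
// computed independently (one GPU "thread" per sample works naturally).
double[] fir(const double[] x, const double[] h)
{
    auto y = new double[x.length];
    foreach (n; 0 .. x.length)              // trivially parallel loop
    {
        double acc = 0;
        foreach (k; 0 .. h.length)
            if (n >= k) acc += h[k] * x[n - k];
        y[n] = acc;
    }
    return y;
}

// IIR (one-pole example): y[n] depends on y[n-1], a serial dependency chain
// that does not map naturally onto wide lockstep execution.
double[] onePole(const double[] x, double a)
{
    auto y = new double[x.length];
    double prev = 0;
    foreach (n; 0 .. x.length)
    {
        prev = x[n] + a * prev;
        y[n] = prev;
    }
    return y;
}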
 Perceptually, it seems that you can get away with a fairly low 
 update rate for the reverb in many cases.
If the sound sources are at a distance then there should be some time to work it out? I haven't actually thought very hard on that... You could also treat early and late reflections separately (like in a classic reverb).
Early and late reverb need to be treated separately for perceptual reasons. The crux that I didn't mention previously is that you need an initial reverb ready as soon as a sound source starts playing. That can be a problem with low update rates in games where sound sources come and go quite often.
 I wonder though if it actually has to be physically correct, 
 because, it seems to me that Hollywood movies can create more 
 intense experiences by breaking with physical rules.  But the 
 problem is coming up with a good psychoacoustic model, I guess. 
 So in a way, going with the physical model is easier... it 
 easier to evaluate anyway.
I'm really taking a hint from graphics here: animation studios started to use PBR (that is, path tracing with physically plausible materials), the same as VFX houses that do photorealistic effects. They want it that way because then the default is correct. They can always stylize later.

If you're interested, we can take this discussion offline. This thread is the wrong place for this.
Nov 30 2019
parent reply Ola Fosheim Grøstad <ola.fosheim.grostad gmail.com> writes:
On Saturday, 30 November 2019 at 16:41:34 UTC, Gregor Mückl wrote:
 Intel Xe is supposed to be a dedicated GPU. I expect a radical 
 departure from their x86 cores and their previous Xeon Phi 
 chips that used a reduced x86 instruction set. Any successor to 
 that needs more cores, but these can be a lot simpler.
Yes, so not relevant for UI right now, as most users will only have on-die GPUs (colocated on the same chip as the CPU). But if all GPU vendors are moving in the same direction, then it will eventually make it to the low end.
 I'm really taking a hint from graphics here: animation studios 
 started to use PBR (that is path tracing with physically 
 plausible materials) same as VFX houses that do photorealistic 
 effects. They want it that way because then the default is 
 being correct. They can always stylize later.
Hmm... yes... if you look to art, then some of the best artists even break with correct perspective and rendering for more impactful aesthetics. There was some research done under the label "evolutionary art", where computable aesthetic measures were used for the fitness model. Not sure what the status quo is, though. Maybe NN will lead to something.
Dec 02 2019
parent reply Ola Fosheim Grøstad <ola.fosheim.grostad gmail.com> writes:
For anyone interested in how regular GPUs work, this report is 
pretty interesting:

https://www2.eecs.berkeley.edu/Pubs/TechRpts/2016/EECS-2016-143.pdf

It explains how Single-Instruction-Multiple-Threads execution 
units hide latency by interleaving multiple threads. So, 8-32 
execution units work in lockstep, but the impact of stalls 
is reduced.

The typical on-die Intel GPU seems to have 8 execution units per 
slice and 2x 128-bit ALUs per execution unit.
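As a toy illustration (my own, not taken from the report) of what lockstep execution means for branches: every lane evaluates both arms of an "if", and a per-lane mask picks the result, so divergent lanes still cost time even though their results are thrown away.

import std.algorithm : map;
import std.array : array;

// Rough CPU model of one lockstep SIMD group.
void maskedBranch(float[] lanes)
{
    auto mask = lanes.map!(v => v > 0).array;   // the "if (v > 0)" condition
    foreach (i, ref v; lanes)
    {
        immutable thenResult = v * 0.5f;        // arm taken when mask is true
        immutable elseResult = -v;              // arm taken when mask is false
        v = mask[i] ? thenResult : elseResult;  // the mask selects per lane
    }
}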
Dec 02 2019
parent Ola Fosheim Grøstad <ola.fosheim.grostad gmail.com> writes:
On Monday, 2 December 2019 at 20:08:15 UTC, Ola Fosheim Grøstad 
wrote:
 The typical on-die Intel GPU seems to have 8 execution units 
 per slice and 2x 128-bit ALUs per execution unit.
I meant 2x 128-bit FPUs (e.g. 8xFP32). And one has to read the report to figure out how GPUs hide latency, because... they differ. Which is kinda the point, I guess.
Dec 02 2019
prev sibling parent Guillaume Piolat <first.last gmail.com> writes:
On Friday, 29 November 2019 at 16:40:01 UTC, Gregor Mückl wrote:
 The OpenGL part of my method is for actually propagating sound 
 through the scene and computing the impulse response from that. 
 That is typically so expensive that it's also run 
 asynchronously to the audio processing and mixing. Only the 
 final impulse response is moved to the audio processing thread. 
 Perceptually, it seems that you can get away with a fairly low 
 update rate for the reverb in many cases.
Very interesting, early reflections are hard. Interested to hear your results!
Nov 30 2019
prev sibling parent reply Ethan <gooberman gmail.com> writes:
On Friday, 29 November 2019 at 13:27:17 UTC, Gregor Mückl wrote:
 A complete wall of text that missed the point entirely.
Wow. Well. I said it would need to be thorough, I didn't say it would need to be filled with lots of irrelevant facts to hide the fact you couldn't give a thorough answer to most things.

1 and 2 can both be answered with "a method of hidden surface removal." A more detailed explanation of 1 is "a method of hidden surface removal using a scalar buffer representing distance of an object from the viewpoint", whereas 2 is "a method of hidden surface removal using a set of planes or a matrix to discard non-visible objects". Orthographic is a projectionless frustum, i.e. nothing is distorted based on distance and there is no field of view. Given your ranting about how hard clipping 2D surfaces is, the fact that you didn't tie these questions together speaks volumes.

3, it's a simplistic understanding at best. Paint calls are no longer based on whether a region on the screen buffer needs to be filled; they're called on each control that the compositor handles whenever a control is dirty.

4 entirely misses the point. Entirely. ImGui retains state behind the scenes, and *then* decides how best to batch that up for rendering. The advantage of using the API is that you don't need to keep state yourself, and zero data is required from disk to lay out your UI.

5, pathetic. The thorough answer is "determine the distance of your output pixel from the line and emit a colour accordingly." Which, consequently, is exactly how you'd handle filling regions: your line will have a direction from which you can derive positive and negative space. No specific curve was asked for. But especially rich is that the article you linked provides an example of how to render text on the GPU.

(Anyone actually reading: You'd use this methodology these days to build a distance field atlas of glyphs that you'd use to then render strings of text. Any game you see with fantastic quality text these days uses this. Its application in the desktop space is that you don't necessarily need to re-render your glyph atlas for zooming text or different font sizes. But as others point out: each operating system has its own text rendering engine that gives distinctive output even with the same typefaces, so while you could homebrew it like this, you'd ideally want to let the OS render your text and carry on from there.)

So short story: if I wanted a bunch of barely-relevant facts, I'd read Wikipedia. If I want someone with a thorough understanding of rendering technology and how to apply that to a desktop environment, you'd be well down the bottom of the list.
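For anyone actually reading, here's the distance trick from 5 as a rough sketch (illustrative only, not any particular library's code):

import std.math : sqrt, abs;

// Signed distance from point p to the infinite line through a and b;
// positive on one side, negative on the other, so the sign splits the
// plane into "fill" and "don't fill" half-spaces.
double signedDistanceToLine(double px, double py,
                            double ax, double ay,
                            double bx, double by)
{
    immutable dx = bx - ax, dy = by - ay;
    immutable len = sqrt(dx * dx + dy * dy);
    // 2D cross product of (b - a) with (p - a), normalised by the length
    return (dx * (py - ay) - dy * (px - ax)) / len;
}

// Per-pixel "shader": map distance to coverage for roughly 1px of antialiasing.
double coverage(double dist, double halfWidth = 0.5)
{
    immutable d = halfWidth - abs(dist);   // > 0 inside the stroke
    if (d <= 0) return 0;
    if (d >= 1) return 1;
    return d;                              // simple linear falloff at the edge
}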
Nov 30 2019
next sibling parent reply Ola Fosheim Grøstad <ola.fosheim.grostad gmail.com> writes:
On Saturday, 30 November 2019 at 10:12:42 UTC, Ethan wrote:
 (Anyone actually reading: You'd use this methodology these days 
 to build a distance field atlas of glyphs that you'd use to 
 then render strings of text. Any game you see with fantastic 
 quality text these days uses this. Its applications in the 
 desktop space is that you don't necessarily need to re-render 
 your glyph atlas for zooming text or different font sizes.
Distance fields are ok for VR/AR applications, but not accurate enough, and they waste way too many resources for a generic UI that covers all platforms and use-cases.

This is where the game-engine mindset goes wrong: a generic portable UI framework should leave as many resources as possible for the application (CPU/GPU/memory/power). You don't need to do real time scaling at a high frame-rate in a generic UI.

So, even if you can get to a game-engine-like solution that only uses 5% of the resources on a high end computer, that still translates to eating up 50% of the resources on the low end. Which is unacceptable. A UI framework has to work equally well with devices that are nowhere near being able to run games...
Nov 30 2019
parent reply Ethan <gooberman gmail.com> writes:
On Saturday, 30 November 2019 at 11:13:42 UTC, Ola Fosheim 
Grøstad wrote:
 Another wall of text
Amazing. Every word you just... Ah, why am I even bothering. You've proven many times over that you love the sound of your own voice and nothing else. You didn't even read the section of my post you quoted correctly, for starters.
Nov 30 2019
parent Jab <jab_293 gmall.com> writes:
On Saturday, 30 November 2019 at 12:44:51 UTC, Ethan wrote:
 On Saturday, 30 November 2019 at 11:13:42 UTC, Ola Fosheim 
 Grøstad wrote:
 Another wall of text
Amazing. Every word you just... Ah, why am I even bothering. You've proven many times over that you love the sound of your own voice and nothing else. You didn't even read the section of my post you quoted correctly, for starters.
I mean you're being pretty condescending and keep assuming people's knowledge of GPUs is 20 years old for some reason. I get that impression from you, you just want to hear your own voice.
Nov 30 2019
prev sibling parent Gregor Mückl <gregormueckl gmx.de> writes:
On Saturday, 30 November 2019 at 10:12:42 UTC, Ethan wrote:
 4 entirely misses the point. Entirely. ImGui retains state 
 behind the scenes, and *then* decides how best to batch that up 
 for rendering. The advantage for using the API is that you 
 don't need to keep state yourself, and zero data is required 
 from disk to layout your UI.
I described the actual Vulkan implementation of ImGui rendering based on its source code. And it does exactly what you just said: batching! I even included reasons for why it does what it does the way it does. It's straightforward. Go check the code yourself if you don't believe me.
 5, pathetic. The thorough answer is "determine the distance of 
 your output pixel from the line and emit a colour accordingly." 
 Which, consequently, is exactly how you'd handle filling 
 regions, your line will have a direction from which you can 
 derive a positive and negative space from. No specific curve 
 was asked for. But especially rich is that the article you 
 linked provides an example of how to render text on the GPU.

 (Anyone actually reading: You'd use this methodology these days 
 to build a distance field atlas of glyphs that you'd use to 
 then render strings of text. Any game you see with fantastic 
 quality text these days uses this. Its applications in the 
 desktop space is that you don't necessarily need to re-render 
 your glyph atlas for zooming text or different font sizes. But 
 as others point out: Each operating system has its own text 
 rendering engine that gives distinctive output even with the 
 same typefaces, so while you could homebrew it like this you'd 
 ideally want to let the OS render your text and carry on from 
 there.)
I didn't point anybody to distance field based text rendering because it doesn't handle a few things that desktop graphics care about. The main things are font hinting (which depends on font size in relation to screen resolution, so glyphs change *shape* when scaling under hinting to retain readability) and text shaping, so that text glyphs change shape depending on the neighboring glyphs. Text shaping is mandatory for Arabic script, for example. Also, distance field based text rendering is prone to artifacts under magnification: texture interpolation on the distance field texture causes sharp edges to be rounded off (that's even described in the original Valve paper!). I'll stop this discussion with you here, Ethan. This is becoming unhealthy. We need to take a step back from this.
Nov 30 2019
prev sibling parent reply Ola Fosheim Grøstad <ola.fosheim.grostad gmail.com> writes:
On Friday, 29 November 2019 at 02:42:28 UTC, Gregor Mückl wrote:
 Yes, compositors are implemented using 3D rendering APIs these 
 days
I read on one of the blogs by a browser developer that making your own compositor was wasteful, because the windowing system would then do an additional copy, so you should instead use the windowing system's compositor in order to save resources. Seems like they did just that in Firefox recently:

https://mozillagfx.wordpress.com/2019/10/22/dramatically-reduced-power-usage-in-firefox-70-on-macos-with-core-animation/
Nov 29 2019
parent Ola Fosheim Grøstad <ola.fosheim.grostad gmail.com> writes:
On Friday, 29 November 2019 at 11:16:32 UTC, Ola Fosheim Grøstad 
wrote:
 https://mozillagfx.wordpress.com/2019/10/22/dramatically-reduced-power-usage-in-firefox-70-on-macos-with-core-animation/
Btw, one of the comments in that thread points out that Safari uses CoreAnimation for all its composition. Hm. I guess one advantage with that is that you get your own program updated when the user installs a new version of the OS. Apple tends to sync OS releases with the new hardware.
Nov 29 2019
prev sibling next sibling parent reply Jacob Carlborg <doob me.com> writes:
On Tuesday, 19 November 2019 at 22:57:47 UTC, aberba wrote:
 There's been attempts to implement a native D GUI toolkit but 
 it seems none gets community attention. The closets...DlangUI 
 (https://code.dlang.org/packages/dlangui)... seems no more 
 under active development...getting to a year now.

 Ketmar and others have been in the talks about doing something 
 towards that. Whats happening now? What's holding 100% D GUI 
 back?
There's DWT [1]. But it doesn't currently get any updates besides making sure it works on the latest compiler.

One of the issues is that some will prefer a native GUI that follows the conventions on a given platform and looks and behaves slightly differently across the supported platforms. But some will prefer a non-native custom GUI that looks the same on all platforms. There might not be enough people in the community that are interested in implementing GUI applications.

[1] https://github.com/d-widget-toolkit/dwt

-- 
/Jacob Carlborg
Nov 20 2019
parent reply aberba <karabutaworld gmail.com> writes:
On Wednesday, 20 November 2019 at 09:05:50 UTC, Jacob Carlborg 
wrote:
 On Tuesday, 19 November 2019 at 22:57:47 UTC, aberba wrote:
 There's been attempts to implement a native D GUI toolkit but 
 it seems none gets community attention. The closets...DlangUI 
 (https://code.dlang.org/packages/dlangui)... seems no more 
 under active development...getting to a year now.

 Ketmar and others have been in the talks about doing something 
 towards that. Whats happening now? What's holding 100% D GUI 
 back?
There's DWT [1]. But it doesn't currently get any updates besides making sure it works on the latest compiler. One of the issues is that some will prefer a native GUI that follows the conventions on a given platform and looks and behaves slightly differently across the supported platforms. But some will prefer a non-native custom GUI that looks the same on all platforms. There might not be enough people in the community that are interested in implementing GUI applications. [1] https://github.com/d-widget-toolkit/dwt -- /Jacob Carlborg
Now there's rarely the need to get it to work with platform widgets across all platforms. They get more different by the day. I think it's better to have only some common custom drawn widgets (like how Material UI works) and make it flexible enough to build third party custom widgets and styling. The community/users will take care of the rest... including third party components/widgets for both mobile and desktop.
Nov 21 2019
parent Ola Fosheim Grøstad <ola.fosheim.grostad gmail.com> writes:
On Thursday, 21 November 2019 at 12:53:42 UTC, aberba wrote:
 Now there's rarely the need to get it to work with platform 
 widget across all platform. They get more different by the day.
That really depends on what a customer asks for... So you might not need platform widgets (because non-tech people care about the experience, not the tech), but they might require a similar look and behaviour. GUI toolkits that emulate platform look and feel tend to be more popular.
 I think it's better to have only some common custom drawn 
 widges (like how Material UI works) and make it flexible to 
 build third party custom widgets and styling.
I view Material as a platform GUI-design. They make it available on many platforms... Just like GTK was a platform GUI-design, then made it available on many platforms. That makes GTK unappealing for many devs. And let's be honest, GTK applications look outdated today. Google will at some point drop Material, and then all apps that use it will look out-of-style... and you'll need a major rewrite. I'll give Material 3-5 years.
 Community/users will take care of the rest...including third 
 party components/widget for both Mobile and desktop
Sadly, individuals tend to «creatively» break basic UI guidelines when given the opportunity, based on their very personal preferences. To get something consistent you need a disciplined team that follows strict and comprehensive guidelines. Which is why good GUI toolkits are rare... very rare.
Nov 21 2019
prev sibling parent Robert M. Münch <robert.muench saphirion.com> writes:
On 2019-11-19 22:57:47 +0000, aberba said:

 ... Whats happening now? What's holding 100% D GUI back?
Nothing, so why don't you do one?

We are developing a GUI framework in D for our product. But the framework is only a means to an end. And the focus is on the stuff we need, not on creating a ready-to-use-everyone-will-love-it framework.

I bet the ratio is 1:50 for dev-power:feedback-power for such projects in the public => if you want to get things done, you need to do it in a small closed group. If you want to go public, you need to reach a level where others can take it and use it. Otherwise you are wasting a lot of capacity with a lot of wishes but low support.

Anyway, here is a short teaser:
https://www.dropbox.com/s/57s19lj0xzr7sqo/Bildschirmaufnahme%202019-11-21%20um%2022.38.37.mov

Might not look like a lot, but everything is already working, so we are focusing on more basic widgets. And after this we do decoration (a bit). BTW, we can do font rendering too already.

-- 
Robert M. Münch
http://www.saphirion.com
smarter | better | faster
Nov 21 2019