
digitalmars.D.announce - LDC 1.18.0-beta1

kinke <noone nowhere.com> writes:
Glad to announce the first beta for LDC 1.18:

* Based on D 2.088.0+ (yesterday's stable).
* Bundled dub upgraded to v1.17.0+ with improved LDC support, 
incl. cross-compilation.
* Init symbols of zero-initialized structs are no longer emitted.
* druntime: DMD-compatible {load,store}Unaligned and prefetch 
added to core.simd.
* JIT improvements, incl. multi-threaded compilation.

Full release log and downloads: 
https://github.com/ldc-developers/ldc/releases/tag/v1.18.0-beta1

Please help test, and thanks to all contributors!
Sep 12 2019
GreatSam4sure <greatsam4sure gmail.com> writes:
On Thursday, 12 September 2019 at 23:49:04 UTC, kinke wrote:
 Glad to announce the first beta for LDC 1.18:

 * Based on D 2.088.0+ (yesterday's stable).
 * Bundled dub upgraded to v1.17.0+ with improved LDC support, 
 incl. cross-compilation.
 * Init symbols of zero-initialized structs are no longer 
 emitted.
 * druntime: DMD-compatible {load,store}Unaligned and prefetch 
 added to core.simd.
 * JIT improvements, incl. multi-threaded compilation.

 Full release log and downloads: 
 https://github.com/ldc-developers/ldc/releases/tag/v1.18.0-beta1

 Please help test, and thanks to all contributors!
Thanks to all who make this possible.
Sep 12 2019
zoujiaqing <zoujiaqing gmail.com> writes:
On Thursday, 12 September 2019 at 23:49:04 UTC, kinke wrote:
 Glad to announce the first beta for LDC 1.18:

 * Based on D 2.088.0+ (yesterday's stable).
 * Bundled dub upgraded to v1.17.0+ with improved LDC support, 
 incl. cross-compilation.
 * Init symbols of zero-initialized structs are no longer 
 emitted.
 * druntime: DMD-compatible {load,store}Unaligned and prefetch 
 added to core.simd.
 * JIT improvements, incl. multi-threaded compilation.

 Full release log and downloads: 
 https://github.com/ldc-developers/ldc/releases/tag/v1.18.0-beta1

 Please help test, and thanks to all contributors!
Thank you, kinke. Is there a timeline for iOS support?
Sep 23 2019
Martin Tschierschke <mt smartdolphin.de> writes:
On Thursday, 12 September 2019 at 23:49:04 UTC, kinke wrote:
 Glad to announce the first beta for LDC 1.18:

 * Based on D 2.088.0+ (yesterday's stable).
 * Bundled dub upgraded to v1.17.0+ with improved LDC support, 
 incl. cross-compilation.
 * Init symbols of zero-initialized structs are no longer 
 emitted.
 * druntime: DMD-compatible {load,store}Unaligned and prefetch 
 added to core.simd.
 * JIT improvements, incl. multi-threaded compilation.

 Full release log and downloads: 
 https://github.com/ldc-developers/ldc/releases/tag/v1.18.0-beta1

 Please help test, and thanks to all contributors!
Great! Can you please give (again?) a link or a more detailed 
description of the JIT, explaining some use cases?

Regards, mt.
Sep 23 2019
Ivan Butygin <ivan.butygin gmail.com> writes:
On Monday, 23 September 2019 at 12:22:47 UTC, Martin Tschierschke 
wrote:

 Can you please give (again?) a link or a more detailed 
 description of the JIT, explaining some use cases?
https://wiki.dlang.org/LDC-specific_language_changes#.40.28ldc.attributes.dynamicCompile.29

The dynamicCompile attribute allows you to delay final function 
optimization to runtime. You mark any function (virtual functions and 
lambdas are also supported) with dynamicCompile and then call 
compileDynamicCode at runtime to finally optimize and compile the 
function to native code, using the host processor's instruction set.

There is also a jit bind, which works much like C++'s bind, except 
that non-placeholder parameters are also treated and optimized as 
constants by the optimizer:

@dynamicCompileEmit
int foo(int a, int b, int c, bool flag)
{
    if (flag)
    {
        // this check and code will be removed by the optimizer
        ...
    }
    return a + b + c; // this will be optimized to 40 + c
}

auto f = bind(&foo, 30, 10, placeholder, false);
int delegate(int) d = f.toDelegate();

compileDynamicCode(...);

assert(f(2) == 42);
assert(d(2) == 42);

https://github.com/ldc-developers/ldc/blob/v1.18.0-beta1/runtime/jit-rt/d/ldc/dynamic_compile.d
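For reference, a compilable variant of the snippet above (an untested 
sketch: it assumes the ldc.attributes / ldc.dynamic_compile modules 
linked above, the CompilerSettings.optLevel field from the wiki 
example, and an LDC build with the JIT runtime, compiled with 
-enable-dynamic-compile):

import ldc.attributes : dynamicCompileEmit;
import ldc.dynamic_compile : bind, placeholder, compileDynamicCode,
                             CompilerSettings;

@dynamicCompileEmit
int foo(int a, int b, int c, bool flag)
{
    if (flag)
    {
        // dead for flag == false; the jit optimizer drops this branch
        return 0;
    }
    return a + b + c; // specializes to 40 + c for the bound args below
}

void main()
{
    // 30, 10 and false become compile-time constants in the jitted
    // body; only the placeholder stays a runtime parameter.
    auto f = bind(&foo, 30, 10, placeholder, false);
    int delegate(int) d = f.toDelegate();

    CompilerSettings settings;
    settings.optLevel = 3; // field name as used in the wiki example
    compileDynamicCode(settings);

    assert(f(2) == 42);
    assert(d(2) == 42);
}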
Sep 23 2019
jmh530 <john.michael.hall gmail.com> writes:
On Monday, 23 September 2019 at 19:40:13 UTC, Ivan Butygin wrote:
 On Monday, 23 September 2019 at 12:22:47 UTC, Martin 
 Tschierschke wrote:

 Can you please give (again?) a link or a more detailed 
 description of the JIT, explaining some use cases?
https://wiki.dlang.org/LDC-specific_language_changes#.40.28ldc.attributes.dynamicCompile.29 [snip]
I think the wiki has room for improvement... or, ideally, there would 
be a tutorial that goes through all the JIT functionality in LDC.

I don't really understand the difference between dynamicCompile and 
dynamicCompileEmit. Is it that with dynamicCompileEmit I can still 
call foo normally?

Also, do I have to use either bind or a delegate to get the JIT 
functionality? What is the advantage or cost of f (the bound version) 
versus d (the delegate version)? Does the indirection from the 
delegates outweigh the benefit from the simplified computation in 
these cases?

Is there any issue with aliasing f or d to be named foo (or just 
calling them foo from the start)?

What am I doing wrong on run.dlang.org: 
https://run.dlang.io/is/itIPQK
Sep 23 2019
Ivan Butygin <ivan.butygin gmail.com> writes:
On Monday, 23 September 2019 at 20:24:54 UTC, jmh530 wrote:
 On Monday, 23 September 2019 at 19:40:13 UTC, Ivan Butygin 
 wrote:
 On Monday, 23 September 2019 at 12:22:47 UTC, Martin 
 Tschierschke wrote:

 Can you please give (again?) a link or a more detailed 
 description of the JIT, explaining some use cases?
https://wiki.dlang.org/LDC-specific_language_changes#.40.28ldc.attributes.dynamicCompile.29 [snip]
 I think the wiki has room for improvement... or, ideally, there 
 would be a tutorial that goes through all the JIT functionality in 
 LDC.

 I don't really understand the difference between dynamicCompile and 
 dynamicCompileEmit. Is it that with dynamicCompileEmit I can still 
 call foo normally?

 Also, do I have to use either bind or a delegate to get the JIT 
 functionality? What is the advantage or cost of f (the bound 
 version) versus d (the delegate version)? Does the indirection from 
 the delegates outweigh the benefit from the simplified computation 
 in these cases?

 Is there any issue with aliasing f or d to be named foo (or just 
 calling them foo from the start)?

 What am I doing wrong on run.dlang.org: 
 https://run.dlang.io/is/itIPQK
With dynamicCompileEmit, normal calls to the function go to the 
static version, but these functions can still be targets for bind.

Objects returned from bind are reference counted. You can get a 
delegate from them to use in contexts where a delegate is expected, 
but you need to retain the object somewhere. The delegate version 
will add an additional call indirection, I think, but otherwise they 
are identical.

Also, something got broken with bools, I need to check :)
https://run.dlang.io/is/x3orGK
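A hedged, untested sketch of that difference (same assumptions about 
the ldc.attributes / ldc.dynamic_compile API and an 
-enable-dynamic-compile build as in the example earlier in the 
thread; square, scale, bound and dg are made-up names):

import ldc.attributes : dynamicCompile, dynamicCompileEmit;
import ldc.dynamic_compile : bind, placeholder, compileDynamicCode,
                             CompilerSettings;

@dynamicCompile     // direct calls also go through the jitted code
int square(int x) { return x * x; }

@dynamicCompileEmit // direct calls use the statically compiled body;
                    // only bind() targets are jitted
int scale(int x, int factor) { return x * factor; }

void main()
{
    auto bound = bind(&scale, placeholder, 3); // keep 'bound' alive...
    auto dg = bound.toDelegate();              // ...while 'dg' is used

    CompilerSettings settings;
    compileDynamicCode(settings);

    assert(square(4) == 16);   // jitted version
    assert(scale(4, 3) == 12); // static version
    assert(dg(4) == 12);       // jitted, specialized for factor == 3
}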
Sep 23 2019
jmh530 <john.michael.hall gmail.com> writes:
On Monday, 23 September 2019 at 20:57:49 UTC, Ivan Butygin wrote:
 [snip]

 With dynamicCompileEmit, normal calls to the function go to the 
 static version, but these functions can still be targets for bind.
The confusing thing is that if I add a normal call to foo and then 
change dynamicCompileEmit to dynamicCompile, it still works without 
a problem: https://run.dlang.io/is/XZVs0k
 Objects returned from bind are reference counted. You can get a 
 delegate from them to use in contexts where a delegate is expected, 
 but you need to retain the object somewhere. The delegate version 
 will add an additional call indirection, I think, but otherwise 
 they are identical.
The delegate version adds extra indirection relative to the bind 
version, correct? Does the bind version also have an extra 
indirection relative to the normal function call?

I suppose what I'm trying to find out is if

auto f = bind(&foo, 30, 10, placeholder);

should have the same run-time performance as

int f(int c) { return 40 + c; }
 Also, something got broken with bools, I need to check :)
 https://run.dlang.io/is/x3orGK
Thanks. Now that I got it working, I confirmed that if you take an 
alias of the result of bind with the same name as the original 
function, then it is the one that gets called instead of the normal 
function: https://run.dlang.io/is/Tv88PS
Sep 24 2019
Martin Tschierschke <mt smartdolphin.de> writes:
On Monday, 23 September 2019 at 19:40:13 UTC, Ivan Butygin wrote:
 On Monday, 23 September 2019 at 12:22:47 UTC, Martin 
 Tschierschke wrote:

 Can you please give (again?) a link or a more detailed 
 description of the JIT, explaining some use cases?
https://wiki.dlang.org/LDC-specific_language_changes#.40.28ldc.attributes.dynamicCompile.29
Thank you, I found this too, but it is more an example of the 
principle. What is the use case?

Is it only useful if the instruction set of the compiling computer 
differs from the target hardware, so that you get

 using host processor instruction set

?

Regards, mt.
Sep 24 2019
kinke <kinke gmx.net> writes:
On Tuesday, 24 September 2019 at 07:41:35 UTC, Martin 
Tschierschke wrote:
 Thank you, I found this too, but it is more an example of the 
 principle. What is the use case?

 Is it only useful if the instruction set of the compiling computer 
 differs from the target hardware, so that you get

 using host processor instruction set

 ?
If you don't want to ship 10 fine-tuned binaries for 10 different CPUs (see `-mcpu=?`), you can use JIT to compile and tune performance-critical pieces for the executing/target CPU. E.g., letting the auto-vectorizer exploit the full register width for AVX-512 CPUs etc.
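A hedged sketch of that use case (untested; it assumes the 
ldc.attributes / ldc.dynamic_compile API and CompilerSettings.optLevel 
field mentioned earlier in the thread, plus an -enable-dynamic-compile 
build; saxpy is a made-up name). The binary is built once, 
generically, and the hot kernel is re-optimized for whatever CPU it 
actually runs on:

import ldc.attributes : dynamicCompile;
import ldc.dynamic_compile : compileDynamicCode, CompilerSettings;

@dynamicCompile
void saxpy(float[] y, const float[] x, float a)
{
    // A plain loop: the runtime optimizer can auto-vectorize it with
    // whatever SIMD width the executing CPU offers (SSE/AVX/AVX-512).
    foreach (i; 0 .. y.length)
        y[i] += a * x[i];
}

void main()
{
    CompilerSettings settings;
    settings.optLevel = 3; // field name as in the wiki example
    compileDynamicCode(settings);

    auto x = new float[1024];
    auto y = new float[1024];
    x[] = 1.0f;
    y[] = 2.0f;
    saxpy(y, x, 3.0f);
    assert(y[0] == 5.0f);
}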
Sep 24 2019
jmh530 <john.michael.hall gmail.com> writes:
On Tuesday, 24 September 2019 at 16:48:48 UTC, kinke wrote:
 [snip]

 If you don't want to ship 10 fine-tuned binaries for 10 
 different CPUs (see `-mcpu=?`), you can use JIT to compile and 
 tune performance-critical pieces for the executing/target CPU. 
 E.g., letting the auto-vectorizer exploit the full register 
 width for AVX-512 CPUs etc.
Ivan provided an example here [1] (you recommended he write it up on 
the wiki).

[1] https://forum.dlang.org/thread/bskpxhrqyfkvaqzoospx forum.dlang.org
Sep 24 2019
Ivan Butygin <ivan.butygin gmail.com> writes:
On Tuesday, 24 September 2019 at 17:49:13 UTC, jmh530 wrote:

About bind call overhead: the bind object holds a pointer to a shared 
payload, which is allocated via malloc. This payload has a function 
pointer (initially null). During the compileDynamicCode call, the 
runtime updates this pointer to the generated code. The bind object's 
opCall calls this function pointer from the payload.

Call itself
https://github.com/ldc-developers/ldc/blob/v1.18.0-beta1/runtime/jit-rt/d/ldc/dynamic_compile.d#L352
https://github.com/ldc-developers/ldc/blob/v1.18.0-beta1/runtime/jit-rt/d/ldc/dynamic_compile.d#L493

toDelegate
https://github.com/ldc-developers/ldc/blob/v1.18.0-beta1/runtime/jit-rt/d/ldc/dynamic_compile.d#L509
https://github.com/ldc-developers/ldc/blob/v1.18.0-beta1/runtime/jit-rt/d/ldc/dynamic_compile.d#L355
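
A rough conceptual sketch of that mechanism (illustrative D only, not 
the actual jit-rt code; see the links above for the real thing, and 
BindPayloadSketch is a made-up name):

// The bind object holds a malloc'ed payload whose function pointer the
// runtime patches during compileDynamicCode; opCall forwards through it.
struct BindPayloadSketch(R, Args...)
{
    R function(Args) func; // null until compileDynamicCode patches it

    R opCall(Args args)
    {
        assert(func !is null, "compileDynamicCode() not called yet");
        return func(args); // the one extra indirect call per invocation
    }
}

unittest
{
    static int impl(int x) { return x + 1; }
    BindPayloadSketch!(int, int) p;
    p.func = &impl;      // stands in for the runtime patching the pointer
    assert(p(41) == 42); // opCall forwards through the payload pointer
}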
Sep 24 2019
jmh530 <john.michael.hall gmail.com> writes:
On Tuesday, 24 September 2019 at 18:24:36 UTC, Ivan Butygin wrote:
 On Tuesday, 24 September 2019 at 17:49:13 UTC, jmh530 wrote:

 About bind call overhead: the bind object holds a pointer to a 
 shared payload, which is allocated via malloc. This payload has a 
 function pointer (initially null). During the compileDynamicCode 
 call, the runtime updates this pointer to the generated code. The 
 bind object's opCall calls this function pointer from the payload.

 [snip]
That's very helpful. The bind stuff is making a little more sense to me now. Is there a concern that ldc cannot inline these function pointers versus the normal function calls?
Sep 24 2019
Ivan Butygin <ivan.butygin gmail.com> writes:
On Tuesday, 24 September 2019 at 19:17:25 UTC, jmh530 wrote:
 On Tuesday, 24 September 2019 at 18:24:36 UTC, Ivan Butygin 
 wrote:
 On Tuesday, 24 September 2019 at 17:49:13 UTC, jmh530 wrote:

 About bind call overhead: the bind object holds a pointer to a 
 shared payload, which is allocated via malloc. This payload has a 
 function pointer (initially null). During the compileDynamicCode 
 call, the runtime updates this pointer to the generated code. The 
 bind object's opCall calls this function pointer from the payload.

 [snip]
That's very helpful. The bind stuff is making a little more sense to me now. Is there a concern that ldc cannot inline these function pointers versus the normal function calls?
We probably can't do anything about the static->jit call overhead; 
just try to jit big enough functions to make this overhead 
negligible. But jit->static calls will be optimized: when the 
compiler sees a call to another function in jitted code and that 
function's body is available, it will try to pull that function into 
the jitted code as well, even if it isn't marked dynamicCompile / 
dynamicCompileEmit. Static calls to such functions will still use the 
static version, but the jit will use its own version, which can be 
optimized with the rest of the jitted code and inlined into it.
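A hedged, untested illustration of that jit->static case (helper and 
kernel are made-up names; same API assumptions as earlier in the 
thread):

import ldc.attributes : dynamicCompile;
import ldc.dynamic_compile : compileDynamicCode, CompilerSettings;

int helper(int x) { return x * 2 + 1; } // not marked, but body is visible

@dynamicCompile
int kernel(int x)
{
    // the jit pulls helper's body into the jitted module, so these
    // calls can be inlined and optimized together with kernel
    return helper(x) + helper(x + 1);
}

void main()
{
    CompilerSettings settings;
    settings.optLevel = 3; // field name as in the wiki example
    compileDynamicCode(settings);

    assert(helper(3) == 7);     // ordinary call to the static version
    assert(kernel(3) == 7 + 9); // jitted; helper likely inlined here
}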
Sep 24 2019
Newbie2019 <newbie2019 gmail.com> writes:
On Thursday, 12 September 2019 at 23:49:04 UTC, kinke wrote:
 Glad to announce the first beta for LDC 1.18:

 * Based on D 2.088.0+ (yesterday's stable).
 * Bundled dub upgraded to v1.17.0+ with improved LDC support, 
 incl. cross-compilation.
 * Init symbols of zero-initialized structs are no longer 
 emitted.
 * druntime: DMD-compatible {load,store}Unaligned and prefetch 
 added to core.simd.
 * JIT improvements, incl. multi-threaded compilation.

 Full release log and downloads: 
 https://github.com/ldc-developers/ldc/releases/tag/v1.18.0-beta1

 Please help test, and thanks to all contributors!
Thanks for keeping up the great work. Maybe 
https://github.com/ldc-developers/ldc/issues/3156 should be in the 
known issues?
Sep 23 2019