
digitalmars.D.learn - Things that keep D from evolving?

reply NX <nightmarex1337 hotmail.com> writes:
So I came here to ask about things that prevent D to become 
better.

What language semantics prevent precise & fast GC implementations?

What makes it impossible to have ref counted classes?

What are some other technical / design problems you encountered? 
(other than poor implementation and lack of resources)

Enlighten! :)
Feb 06 2016
next sibling parent reply Ola Fosheim Grøstad writes:
On Saturday, 6 February 2016 at 08:07:42 UTC, NX wrote:
 What language semantics prevent precise & fast GC 
 implementations?
This prevents fast GC: pointers.

This prevents precise GC: internal pointers + FFI.

Go now has <10ms latency for small heaps, <20ms latency for up to 100GB heaps and <40ms latency for up to 250GB heaps. But IIRC when calling a C function in Go, the function called should not retain pointers or go down indirections?

https://talks.golang.org/2016/state-of-go.slide#37

So, basically, if you want fast memory release, forget using the GC in D.
 What makes it impossible to have ref counted classes?
Nothing.
 What are some other technical / design problems you 
 encountered? (other than poor implementation and lack of 
 resources)
Lack of focus on what most programmers expect from system level programming.
Feb 06 2016
next sibling parent reply NX <nightmarex1337 hotmail.com> writes:
On Saturday, 6 February 2016 at 10:29:32 UTC, Ola Fosheim Grøstad 
wrote:
 What makes it impossible to have ref counted classes?
Nothing.
Then why do we need DIP74? And why does the documentation say RefCounted doesn't work with classes?
Feb 06 2016
parent reply Ola Fosheim Grøstad writes:
On Saturday, 6 February 2016 at 11:09:28 UTC, NX wrote:
 On Saturday, 6 February 2016 at 10:29:32 UTC, Ola Fosheim 
 Grøstad wrote:
 What makes it impossible to have ref counted classes?
Nothing.
Then why do we need DIP74 ?
I think they aim for compiler optimizations, like ARC in Swift. But ARC requires all ref counting to be done behind the scenes, so I think it is a bad idea for D, to be honest.
 And why documentation says RefCounted doesn't work with classes?
I don't use Phobos much. I think RefCounted creates a wrapper for an embedded struct or something. Something like:

struct { int refcount; T payload; }

Nothing prevents you from creating your own reference counting mechanism.
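For illustration, here is a minimal C++ sketch of that layout: a non-intrusive wrapper that heap-allocates a refcount next to the payload. This is not Phobos's actual implementation; it is single-threaded and the names are made up.

```cpp
#include <utility>

// Sketch of "struct { int refcount; T payload; }" as a copyable handle.
// Single-threaded illustration only; a real one would use an atomic count.
template <typename T>
class RefCounted {
    struct Block { int refcount; T payload; };
    Block* block_;

    void release() {
        if (--block_->refcount == 0) delete block_;
    }

public:
    template <typename... Args>
    explicit RefCounted(Args&&... args)
        : block_(new Block{1, T(std::forward<Args>(args)...)}) {}

    RefCounted(const RefCounted& other) : block_(other.block_) {
        ++block_->refcount;
    }

    RefCounted& operator=(const RefCounted& other) {
        ++other.block_->refcount;  // retain first so self-assignment is safe
        release();
        block_ = other.block_;
        return *this;
    }

    ~RefCounted() { release(); }

    T& get() { return block_->payload; }
    int refcount() const { return block_->refcount; }
};
```

Copies bump the shared count; the last handle out deletes the block, which is the deterministic-release behavior the thread is discussing.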
Feb 06 2016
next sibling parent reply rsw0x <anonymous anonymous.com> writes:
On Saturday, 6 February 2016 at 11:15:06 UTC, Ola Fosheim Grøstad 
wrote:
 On Saturday, 6 February 2016 at 11:09:28 UTC, NX wrote:
 On Saturday, 6 February 2016 at 10:29:32 UTC, Ola Fosheim 
 Grøstad wrote:
 What makes it impossible to have ref counted classes?
Nothing.
Then why do we need DIP74 ?
I think they aim for compiler optimizations, like ARC on Swift. But ARC requires all ref counting to be done behind the scene, so I think it is a bad idea for D to be honest.
 And why documentation says RefCounted doesn't work with 
 classes?
I don't use Phobos much. I think RefCounted creates a wrapper for an embedded struct or something. Something like struct { int refcount; T payload; } Nothing prevents you from creating your own reference counting mechanism.
Reference counting is incredibly slow; DIP74 attempts to partially amend that in D, as it can't be done any other way besides compiler help. IIRC, it essentially just allows RC inc/dec to be elided where possible.
Feb 06 2016
parent reply Ola Fosheim Grøstad writes:
On Saturday, 6 February 2016 at 11:33:05 UTC, rsw0x wrote:
 On Saturday, 6 February 2016 at 11:15:06 UTC, Ola Fosheim 
 Grøstad wrote:
 reference counting is incredibly slow, DIP74 attempts to 
 partially amend that in D as it can't be done any other way 
 besides compiler help.
 IIRC, it essentially just allows RC inc/dec to be elided where 
 possible
_Automatic_ reference counting can be slow. Manual reference counting can be very efficient (but takes programmer skill). The better solution is to adopt borrow semantics and only use reference counting for ownership, just like you ought to use unique_ptr and shared_ptr in C++.

Of course, Swift does not aim for very high performance, but for convenient application/gui development. And frankly JavaScript is fast enough for that kind of programming.
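A C++ sketch of that split, with illustrative names: only ownership transfers touch the count, while the hot path borrows a plain reference and performs no inc/dec at all.

```cpp
#include <memory>
#include <string>

// Refcount only at ownership boundaries; borrow everywhere else.
// Copying the shared_ptr is the only place the count is touched.

// Borrowed access: the callee never sees the reference count.
inline std::size_t length_via_borrow(const std::string& s) {
    return s.size();
}

// Ownership sharing: the one place where inc/dec actually happens.
inline long owners_after_sharing(std::shared_ptr<std::string>& owner) {
    auto second_owner = owner;        // refcount: 1 -> 2
    return owner.use_count();         // drops back on return
}
```

The design point is exactly the unique_ptr/shared_ptr discipline mentioned above: counts for owners, cheap borrows for everyone else.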
Feb 06 2016
parent reply Marco Leise <Marco.Leise gmx.de> writes:
On Sat, 06 Feb 2016 11:47:02 +0000, Ola Fosheim Grøstad
<ola.fosheim.grostad+dlang gmail.com> wrote:

 Of course, Swift does not aim for very high performance, but for 
 convenient application/gui development. And frankly JavaScript is 
 fast enough for that kind of programming.
My code would not see much ref counting in performance critical loops. There is no point in ref counting every single point in a complex 3D scene. I could imagine it used on bigger items. Textures for example, since they may be used by several objects. Or - a prime example - any outside resource that is potentially scarce and benefits from deterministic release: file handles, audio buffers, widgets, ...

-- 
Marco
Feb 06 2016
parent Ola Fosheim Grøstad writes:
On Sunday, 7 February 2016 at 02:46:39 UTC, Marco Leise wrote:
 My code would not see much ref counting in performance critical
 loops. There is no point in ref counting every single point in
 a complex 3D scene.
 I could imagine it used on bigger items. Textures for example
 since they may be used by several objects. Or - a prime
 example - any outside resource that is potentially scarce and
 benefits from deterministic release: file handles, audio
 buffers, widgets, ...
In my experience most such resources don't need reference counting. Yes, textures if you load dynamically, but if you load textures before entering the render loop... not so much, and it should really be a caching system so you don't have to reload the texture right after freeing it.

File handles are better done as a single borrow from the owner, or pass by reference; you don't want multiple locations to write to the same file handle. Audio buffers should be preallocated, as they cannot be deallocated cheaply on the real-time thread. Widgets benefit more from weak-pointers/back-pointers-to-borrowers, as they tend to have a single owning parent...

What would be better is to build better generic static analysis and optimization into the compiler, so that the compiler can deduce that an integer is never read except when decremented, and therefore can elide inc/dec pairs.
Feb 07 2016
prev sibling parent reply Adam D. Ruppe <destructionator gmail.com> writes:
On Saturday, 6 February 2016 at 11:15:06 UTC, Ola Fosheim Grøstad 
wrote:
 Nothing prevents you from creating your own reference counting 
 mechanism.
A struct wrapper doesn't give the things you need to reliably handle inheritance.

interface A {}
interface B {}
class C : A, B {}

void use(RefCounted!A) {}

RefCounted!C c;
use(c);

With alias this tricks, you can handle one level of inheritance: you could make it return a RefCounted!A or B, but not both. Multiple alias this could solve this... but that PR is in limbo again AFAIK.

Of course, you could just write some named function to return the right interface and tell the user to call it, but it won't be an implicit conversion like it is with interfaces normally. (BTW, for the record, I have no problem with named functions, it just is different than the built-in thing.)
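For contrast, the C++ analogue of this situation works, because shared_ptr converts implicitly to *any* base, which is exactly what a single alias this cannot express. A sketch (mirroring the D snippet above, not a claim about Phobos):

```cpp
#include <memory>

// Mirrors the D example: C implements both A and B, and a smart pointer
// to C should convert implicitly to a smart pointer to either interface.
struct A { virtual ~A() = default; };
struct B { virtual ~B() = default; };
struct C : A, B {};

inline bool accepts_a(std::shared_ptr<A> a) { return a != nullptr; }
inline bool accepts_b(std::shared_ptr<B> b) { return b != nullptr; }
```

Both calls below compile via shared_ptr's converting constructor, with no wrapper tricks needed.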
Feb 06 2016
parent reply Ola Fosheim Grøstad writes:
On Saturday, 6 February 2016 at 17:22:03 UTC, Adam D. Ruppe wrote:
 On Saturday, 6 February 2016 at 11:15:06 UTC, Ola Fosheim 
 Grøstad wrote:
 Nothing prevents you from creating your own reference counting 
 mechanism.
A struct wrapper doesn't give the things you need to reliably handle inheritance.
I don't think I suggested using a struct wrapper? :-) That just causes issues with alignment or requires a more complex allocator. You can either build the refcount into the root class or use an extra indirection like C++'s shared_ptr.
Feb 06 2016
parent reply rsw0x <anonymous anonymous.com> writes:
On Saturday, 6 February 2016 at 17:36:28 UTC, Ola Fosheim Grøstad 
wrote:
 On Saturday, 6 February 2016 at 17:22:03 UTC, Adam D. Ruppe 
 wrote:
 On Saturday, 6 February 2016 at 11:15:06 UTC, Ola Fosheim 
 Grøstad wrote:
 Nothing prevents you from creating your own reference 
 counting mechanism.
A struct wrapper doesn't give the things you need to reliably handle inheritance.
I don't think I suggested using a struct wrapper? :-) That just cause issues with alignment or requires a more complex allocator. You can either build the refcount into the root class or use an extra indirection like C++'s shared_ptr.
Can't be done with the root class because classes never trigger RAII outside of (deprecated) scope allocations. Can't be done with indirection because you still hit the same issue. Applies to storage classes as well, btw.
Feb 06 2016
parent reply Ola Fosheim Grøstad writes:
On Saturday, 6 February 2016 at 17:38:30 UTC, rsw0x wrote:
 Can't be done with the root class because classes never trigger 
 RAII outside of (deprecated) scope allocations.
Not sure what you mean. The class instance doesn't have to trigger anything? You "retain(instance)" to increase the refcount and "release(instance)" to decrease refcount or destroy the instance.
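A minimal C++ sketch of such a retain/release scheme, with the count built into the root class as suggested earlier in the thread. Single-threaded and illustrative; a real implementation would use std::atomic<int>.

```cpp
// Intrusive refcount in the root class, Objective-C (pre-ARC) style:
// free retain()/release() functions manipulate the count directly.
class RcObject {
    int refcount_ = 1;  // the creator starts as the sole owner
public:
    virtual ~RcObject() = default;
    friend void retain(RcObject* o) { ++o->refcount_; }
    friend void release(RcObject* o) {
        if (--o->refcount_ == 0) delete o;  // last release destroys
    }
    int refcount() const { return refcount_; }
};

class Widget : public RcObject {};  // any subclass inherits the machinery
```

No RAII trigger on the class instance is needed; lifetime is driven entirely by explicit retain/release calls, as described above.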
Feb 06 2016
parent reply rsw0x <anonymous anonymous.com> writes:
On Saturday, 6 February 2016 at 17:46:00 UTC, Ola Fosheim Grøstad 
wrote:
 On Saturday, 6 February 2016 at 17:38:30 UTC, rsw0x wrote:
 Can't be done with the root class because classes never 
 trigger RAII outside of (deprecated) scope allocations.
Not sure what you mean. The class instance doesn't have to trigger anything? You "retain(instance)" to increase the refcount and "release(instance)" to decrease refcount or destroy the instance.
Might as well manually free and delete instead.
Feb 06 2016
next sibling parent rsw0x <anonymous anonymous.com> writes:
On Saturday, 6 February 2016 at 17:46:48 UTC, rsw0x wrote:
 On Saturday, 6 February 2016 at 17:46:00 UTC, Ola Fosheim 
 Grøstad wrote:
 On Saturday, 6 February 2016 at 17:38:30 UTC, rsw0x wrote:
 Can't be done with the root class because classes never 
 trigger RAII outside of (deprecated) scope allocations.
Not sure what you mean. The class instance doesn't have to trigger anything? You "retain(instance)" to increase the refcount and "release(instance)" to decrease refcount or destroy the instance.
Might as well manually free and delete instead.
Er, malloc and free* : )
Feb 06 2016
prev sibling parent Ola Fosheim Grøstad writes:
On Saturday, 6 February 2016 at 17:46:48 UTC, rsw0x wrote:
 Might as well manually free and delete instead.
Not really, this was used in Objective-C before ARC. But you can always move retain/release/borrow/unborrow into your own pointer struct like shared_ptr.
Feb 06 2016
prev sibling parent reply cy <dlang verge.info.tm> writes:
On Saturday, 6 February 2016 at 10:29:32 UTC, Ola Fosheim Grøstad 
wrote:
 This prevents fast GC: Pointers.
Would it be possible to write a fast garbage collector that just didn't track any pointers? Just offer a heads-up that if you use "this collector" and pointers on collectable data, you're gonna have a bad time?

How limited would you be if you couldn't use pointers in your code? Do all D references count as pointers, or is it only the T* types? Does Nullable!T count as one of those pointers that can't be tracked quickly?
Feb 06 2016
parent reply Ola Fosheim Grøstad writes:
On Saturday, 6 February 2016 at 22:41:28 UTC, cy wrote:
 On Saturday, 6 February 2016 at 10:29:32 UTC, Ola Fosheim 
 Grøstad wrote:
 This prevents fast GC: Pointers.
Would it be possible to write a fast garbage collector that just didn't track any pointers? Just offer a head's up that if you use "this collector" and pointers on collectable data, you're gonna have a bad time? How limited would you be if you couldn't use pointers in your code? Do all D references count as pointers, or is it only the T* types?
Yes, all references to GC allocated memory. "Fast" is perhaps the wrong word; Go has a concurrent collector with low latency pauses, but requires slower code for mutating pointers since another thread is collecting garbage at the same time.

Things that could speed up collection:
- drop destructors so you don't track dead objects
- require pointers to the beginning of a GC object and make them a special type, so you scan less
- use local heaps and local collection per fiber
- change layout so pointers end up on the same cachelines
Feb 06 2016
parent Marco Leise <Marco.Leise gmx.de> writes:
On Sat, 06 Feb 2016 23:18:59 +0000, Ola Fosheim Grøstad
<ola.fosheim.grostad+dlang gmail.com> wrote:

 Things that could speed up collection:
 - drop destructors so you don't track dead objects
Interesting, that would also finally force external resources off the GC heap and into deterministic release. That needs a solution to inheritance though. Think widget kits.

-- 
Marco
Feb 06 2016
prev sibling next sibling parent reply Kagamin <spam here.lot> writes:
On Saturday, 6 February 2016 at 08:07:42 UTC, NX wrote:
 What language semantics prevent precise
Lack of resources. Precise GC needs to know which fields are pointers. Somebody must generate that map. AFAIK there was an experiment on that.
 fast GC
A fast GC needs to be notified about pointer changes; C won't do that, and for some reason people don't want to rely on C code not changing GC pointers.
Feb 06 2016
parent ZombineDev <valid_email he.re> writes:
On Saturday, 6 February 2016 at 15:14:06 UTC, Kagamin wrote:
 On Saturday, 6 February 2016 at 08:07:42 UTC, NX wrote:
 What language semantics prevent precise
Lack of resources. Precise GC needs to know which fields are pointers. Somebody must generate that map. AFAIK there was an experiment on that.
That information has already been present for a couple of releases (http://dlang.org/spec/traits.html#getPointerBitmap); however, currently the precise GC is slower than the conservative one, because of the overhead of the extra metadata. For more info, you can read the comments on these two PRs:
https://github.com/D-Programming-Language/druntime/pull/1022
https://github.com/D-Programming-Language/druntime/pull/1057
Feb 06 2016
prev sibling parent reply thedeemon <dlang thedeemon.com> writes:
On Saturday, 6 February 2016 at 08:07:42 UTC, NX wrote:
 What language semantics prevent precise & fast GC  
 implementations?
Unions and easy type casting prevent precise GC. Lack of write barriers for reference-type fields prevents fast (generational and/or concurrent) GC. Some more detailed explanations here:
http://www.infognition.com/blog/2014/the_real_problem_with_gc_in_d.html
Feb 08 2016
next sibling parent Ola Fosheim Grøstad writes:
On Monday, 8 February 2016 at 11:22:45 UTC, thedeemon wrote:
 http://www.infognition.com/blog/2014/the_real_problem_with_gc_in_d.html
Well, the latest Intel CPUs have a theoretical throughput of 30GB/s... so that makes for up to 30MB/ms. But language changes are needed, I think. I also don't quite understand how RC can solve the issue with pointers to internal fields in classes.
Feb 08 2016
prev sibling next sibling parent reply NX <nightmarex1337 hotmail.com> writes:
On Monday, 8 February 2016 at 11:22:45 UTC, thedeemon wrote:
 On Saturday, 6 February 2016 at 08:07:42 UTC, NX wrote:
 What language semantics prevent precise & fast GC  
 implementations?
Unions and easy type casting prevent precise GC. Lack of write barriers for reference-type fields prevent fast (generational and/or concurrent) GC. Some more detailed explanations here: http://www.infognition.com/blog/2014/the_real_problem_with_gc_in_d.html
I see... By any chance, can we solve this issue with GC managed pointers? AFAIK, this is what C++/CLI does: there are 2 different pointer types, (*) and (^). (*) is the famous raw pointer; the second one is a GC managed pointer. A GC pointer has a write barrier (unlike a raw pointer), so we can have both raw C performance (if we want) and a fast generational GC or concurrent GC (which are a magnitude better than a mark and sweep GC).

As you realized, there is a major problem with this: classes. The decision of making classes reference-type is actually fine (+simplicity), but it doesn't really help with the current situation. I expect D to be pragmatic and let me decide. Maybe in future... Who knows...
Feb 08 2016
parent reply Wyatt <wyatt.epp gmail.com> writes:
On Monday, 8 February 2016 at 16:33:09 UTC, NX wrote:
 I see... By any chance, can we solve this issue with GC managed 
 pointers?
Maybe we could. But it's never going to happen. Even if Walter weren't fundamentally opposed to multiple pointer types in D, it wouldn't happen.

You asked about things that prevent improvement, right? Here's the big one, and a major point of friction in the community: Walter and Andrei refuse to break existing code in pursuit of changes that substantially improve the language. (Never mind that code tends to break anyway.)

-Wyatt
Feb 08 2016
next sibling parent rsw0x <anonymous anonymous.com> writes:
On Monday, 8 February 2016 at 17:15:11 UTC, Wyatt wrote:
 On Monday, 8 February 2016 at 16:33:09 UTC, NX wrote:
 I see... By any chance, can we solve this issue with GC 
 managed pointers?
Maybe we could. But it's never going to happen. Even if Walter weren't fundamentally opposed to multiple pointer types in D, it wouldn't happen. You asked about things that prevent improvement, right? Here's the big one, and a major point of friction in the community: Walter and Andrei refuse to break existing code in pursuit of changes that substantially improve the language. (Never mind that code tends to break anyway.) -Wyatt
Pretty much this. We can't go a version without code breakage, but also can't introduce features that would drastically help the language because they would introduce breakage. I.e., all the great ownership/scope/what-have-you proposals go nowhere, and shit like DIP25 gets pushed through instead; then 2 days later it gets proven to be worthless anyways. Whoops.
Feb 08 2016
prev sibling next sibling parent reply Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= writes:
On Monday, 8 February 2016 at 17:15:11 UTC, Wyatt wrote:
 Maybe we could.  But it's never going to happen.  Even if 
 Walter weren't fundamentally opposed to multiple pointer types 
 in D, it wouldn't happen.

 You asked about things that prevent improvement, right?  Here's 
 the big one, and a major point of friction in the community: 
 Walter and Andrei refuse to break existing code in pursuit of 
 changes that substantially improve the language.  (Never mind 
 that code tends to break anyway.)
You are of course right, but it isn't an absolute. Nothing prevents someone from restricting a D compiler in such a way that you can get a faster GC. C++ compilers have lots of optional warnings/errors, so it is quite possible. But I suppose those that want it would rather use compilation.
Feb 08 2016
parent reply NX <nightmarex1337 hotmail.com> writes:
On Monday, 8 February 2016 at 17:51:02 UTC, Ola Fosheim Grøstad 
wrote:
 C++ compilers have lots of optional warnings/errors, so it is 
 quite possible. But I suppose those that want it would rather 

 compilation.
something else:
- Interfacing with native API without jumping through hoops
- Incredibly high abstraction and meta-programming possibilities with relatively easier syntax + semantics.
- It's harder to reverse engineer native code than byte code equivalent.
- Trading off anything according to your needs.
- Expressiveness and purity, immutability concepts.
- Having GC (but not a horribly slow one)
- Syntactic sugars (associative arrays, powerful foreach, slices...)
- Compile times
cross-platform work in many cases)

I wish D could be better. I really want it with all of my heart...
Feb 09 2016
next sibling parent reply Ola Fosheim Grøstad writes:
On Tuesday, 9 February 2016 at 13:41:30 UTC, NX wrote:

 something else:
 - Interfacing with native API without jumping through hoops
Well, but the hoops are there to get safe and fast GC.
 - Incredibly high abstraction and meta-programming 
 possibilities with relatively easier syntax + semantics.
Not incredibly high level abstraction... But I get what you mean. It is fairly high level for a low level language.
 - Having GC (but not a horribly slow one)
So you want this to be worked on (as D has a horribly slow one)?

 cross-platform work in many cases)
Feb 09 2016
next sibling parent Chris Wright <dhasenan gmail.com> writes:
On Tue, 09 Feb 2016 14:35:48 +0000, Ola Fosheim Grøstad wrote:

 On Tuesday, 9 February 2016 at 13:41:30 UTC, NX wrote:

 something else:
 - Interfacing with native API without jumping through hoops
Well, but the hoops are there to get safe and fast GC.
 - Incredibly high abstraction and meta-programming possibilities with
 relatively easier syntax + semantics.
Not incredibly high level abstraction... But I get what you mean. It is fairly high level for a low level language.
 - Having GC (but not a horribly slow one)
So you want this to be worked on (as D has a horribly slow one)?

 cross-platform work in many cases)
If you develop against .NET on Windows, you have a moderate chance of producing something non-portable. If you develop against Mono on Linux, you can produce something more portable more easily.

Mono, by the way, has a good garbage collector, and .NET probably has better. Mono advertises:
* precise scanning for stack, heap, and registers
* generational collection using write barriers
* per-thread sub-heaps for faster allocation
* multithreaded scanning

I think D could implement all that with a couple caveats. Write barriers are a problem for real-time code. Right now, we can tell you: you can write code with real-time sections as long as you don't allocate GC memory in the real-time sections.

If we introduced write barriers, well, the most straightforward way of doing that is to use memory protection and install a fault handler. If you write to a page that hasn't been written to since the last collection, you get a page fault, the kernel dispatches it to your fault handler, the fault handler marks a "card" (it sets a boolean corresponding to the page you tried to write to). Then the handler marks that page as writable and you go on with your day.

Alternatively, the compiler could insert code at every pointer write. (This is the deamortized version. Consistent latency, but if you write to the same pointer variable a million times between GC allocations, you pay that cost a million times more than you really need to.)

You would need a compiler switch to disable this behavior. That's not likely to happen. Walter and Andrei would not accept that, since it makes it difficult to get bare-metal performance (and also makes it harder to interface with C).

Which leads me to another thing holding D back. What use cases are we trying to support? What are we trying to optimize for? Apparently everything. That doesn't work. To some extent you can make things faster in general, but you'll still end up supporting all use cases moderately well and none exceedingly well.
Anyway, another GC issue is supporting generational collection. You need a moving GC to make it work efficiently. D could support a moving collector, but it would require more runtime type information. (And stack maps, but that might be there already. Not sure.) Walter has been strongly against adding extra runtime type information in the past.
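The compiler-inserted variant of a card-marking write barrier can be sketched roughly like this in C++. Sizes, field names, and the toy heap are illustrative, not any real runtime's; the point is only that every pointer store also dirties the card covering the written slot, so a generational collector can rescan just the dirty regions.

```cpp
#include <cstddef>
#include <cstdint>

// Toy card table: each card covers kCardSize bytes of a small fake heap.
struct CardTable {
    static constexpr std::uintptr_t kCardSize = 512;  // bytes per card

    std::uintptr_t heap_base;
    unsigned char cards[64] = {};  // 64 cards = 32 KiB toy heap

    std::size_t card_index(void* addr) const {
        return (reinterpret_cast<std::uintptr_t>(addr) - heap_base) / kCardSize;
    }

    // What the compiler would emit around every pointer store:
    // do the store, then dirty the card containing the written slot.
    void write_ptr(void** slot, void* value) {
        *slot = value;
        cards[card_index(slot)] = 1;
    }
};
```

The page-protection scheme described above achieves the same card marking lazily, paying a fault only on the first write to each page per collection cycle.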
Feb 09 2016
prev sibling parent reply NX <nightmarex1337 hotmail.com> writes:
On Tuesday, 9 February 2016 at 14:35:48 UTC, Ola Fosheim Grøstad 
wrote:
 Not incredibly high level abstraction... But I get what you 
 mean. It is fairly high level for a low level language.
when coming from C++.
 So you want this to be worked on (as D has a horribly slow one)?
I would want it to be solved rather than being worked on... which requires design change which is probably not going to happen. There is still room for improvement though.

 platforms?
There are differences, but yeah I shouldn't have said that ~
Feb 09 2016
next sibling parent reply thedeemon <dlang thedeemon.com> writes:
On Tuesday, 9 February 2016 at 17:41:34 UTC, NX wrote:

 I would want it to be solved rather than being worked on... 
 which requires design change which is probably not going to 
 happen. There is still room for improvement though.
Right. I think there are at least two things that can improve the current GC without any changes in design: parallel marking and lazy sweeping.

Currently (at least last time I checked) the GC pauses the world, then does all the marking in one thread, then all the sweeping. We can do the marking in several parallel threads (this is much harder to implement but still doable), and we can kick the sweeping out of the stop-the-world pause and do the sweeping lazily: when you try to allocate some memory it will not just look in free lists, it will try to collect some unused unswept memory from the heap first. This way allocations become a bit slower but GC pause time reduces significantly. Concurrent sweeping is another possibility.

Of course, it's all easier said than done; without an actual hero who would code this, it remains just talk.
Feb 10 2016
parent reply Chris Wright <dhasenan gmail.com> writes:
On Wed, 10 Feb 2016 08:57:51 +0000, thedeemon wrote:

 Currently (at least last time I checked) GC pauses the world, then does
 all the marking in one thread, then all the sweeping.
Right.
 We can do the
 marking in several parallel threads (this is much harder to implement
 but still doable),
Parallel marking would not be a breaking change by any means. No user code runs during GC collections, so we can do anything.

The major fly in the ointment is that creating threads normally invokes the GC, since Thread is an object, and invoking the GC during a collection isn't the best. This can be solved by preallocating several mark threads. Then you just divide the stack and roots between those threads. Moderately annoying sync issues, but doable.

This doesn't guarantee an even distribution of work. You can solve that problem with a queue, though that requires locking.

The main wrinkle is writing a bit to shared data structures, which can be slow. On the other hand, in the mark phase, we're only ever going to write the same value to each, so it doesn't matter if GC thread A and thread B race to write it. I don't know how to tell the CPU that it doesn't have to read back the memory before writing it.
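A rough C++ sketch of that scheme: roots are divided between workers via an atomic counter, and mark bits are set with an atomic exchange, so two threads racing to mark the same object are harmless because they both write the same value. A toy object graph, not druntime.

```cpp
#include <atomic>
#include <cstddef>
#include <thread>
#include <vector>

// Toy heap object: an atomic mark bit plus outgoing references.
struct Obj {
    std::atomic<bool> marked{false};
    std::vector<Obj*> refs;
};

// exchange(true) returns the previous value: if some thread already
// marked this object, we stop; otherwise we own the transitive scan.
inline void mark(Obj* o) {
    if (o == nullptr || o->marked.exchange(true)) return;
    for (Obj* r : o->refs) mark(r);
}

// Preallocatable workers claim root indices with fetch_add, a crude
// stand-in for "divide the stack and roots between those threads".
inline void parallel_mark(const std::vector<Obj*>& roots, unsigned nthreads) {
    std::atomic<std::size_t> next{0};
    std::vector<std::thread> workers;
    for (unsigned t = 0; t < nthreads; ++t)
        workers.emplace_back([&] {
            for (std::size_t i = next.fetch_add(1); i < roots.size();
                 i = next.fetch_add(1))
                mark(roots[i]);
        });
    for (auto& w : workers) w.join();
}
```

The fetch_add loop gives the uneven-work-distribution behavior noted above; a shared queue of gray objects would balance better at the cost of locking.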
 and we can kick the sweeping out of stop-the-world
 pause and do the sweeping lazily
This would be a breaking change. Right now, your destructors are guaranteed to run when no other code is running. You'd need to introduce locks in a few places.

I'm not saying this is a bad thing. I think people generally wouldn't notice if we made this change. But some code would break, so we'd have to stage that change.

Anyway, I'm hacking up a parallel mark phase to see how it would work. I could use some GC benchmarks if anyone's got them lying around.
Feb 10 2016
parent Laeeth Isharc <laeethnospam nospam.laeeth.com> writes:
On Wednesday, 10 February 2016 at 20:21:22 UTC, Chris Wright 
wrote:
 On Wed, 10 Feb 2016 08:57:51 +0000, thedeemon wrote:

 Currently (at least last time I checked) GC pauses the world, 
 then does all the marking in one thread, then all the sweeping.
Right.
 We can do the
 marking in several parallel threads (this is much harder to 
 implement
 but still doable),
Parallel marking would not be a breaking change by any means. No user code runs during GC collections, so we can do anything. The major fly in the ointment is that creating threads normally invokes the GC, since Thread is an object, and invoking the GC during a collection isn't the best. This can be solved by preallocating several mark threads. Then you just divide the stack and roots between those threads. Moderately annoying sync issues This doesn't guarantee an even distribution of work. You can solve that problem with a queue, though that requires locking. The main wrinkle is writing a bit to shared data structures, which can be slow. On the other hand, in the mark phase, we're only ever going to write the same value to each, so it doesn't matter if GC thread A . I don't know how to tell the CPU that it doesn't have to read back the memory before writing it.
 and we can kick the sweeping out of stop-the-world
 pause and do the sweeping lazily
This would be a breaking change. Right now, your destructors are guaranteed to run when no other code is running. You'd need to introduce locks in a few places. I'm not saying this is a bad thing. I think people generally wouldn't notice if we made this change. But some code would break, so we'd have to stage that change. Anyway, I'm hacking up parallel mark phase to see how it would work. I could use some GC benchmarks if anyone's got them lying around.
https://github.com/D-Programming-Language/druntime/tree/master/benchmark/gcbench
Feb 10 2016
prev sibling parent Ola Fosheim Grøstad writes:
On Tuesday, 9 February 2016 at 17:41:34 UTC, NX wrote:
 On Tuesday, 9 February 2016 at 14:35:48 UTC, Ola Fosheim 
 Grøstad wrote:
 Not incredibly high level abstraction... But I get what you 
 mean. It is fairly high level for a low level language.
incredible when coming from C++.
If D had parity with C++ features and didn't have its own set of inconsistencies it could do a lot better. But the lack of willingness to change probably means that a new language will come along instead of D improving.

Right now D sits somewhere between C++/Rust and Swift/Go, which might not be the best position. It is natural that people will gravitate towards either end. If you don't want to deal with memory management use Swift/Go; if you want to optimize memory management go C++/Rust.
 So you want this to be worked on (as D has a horribly slow 
 one)?
I would want it to be solved rather than being worked on... which requires design change which is probably not going to happen. There is still room for improvement though.
I think it would help a lot if there was a broad effort to refactor and document the compiler. If the compiler is difficult to change it becomes more difficult to experiment and there will be much more resistance to changing semantics...
Feb 10 2016
prev sibling parent reply Matt Elkins <notreal fake.com> writes:
On Tuesday, 9 February 2016 at 13:41:30 UTC, NX wrote:

 something else:
I will focus on comparing against C++, because that has been my favorite general purpose language for a long time. When faced with the choice on pure technical merits I will go for C++ any day (haven't tried Go, but was unimpressed by my initial read-over). D is the first language I have ever encountered with a serious chance of unseating C++ as my personal favorite.
 - Interfacing with native API without jumping through hoops
Concur. Though I get this with C++, too.
 - Incredibly high abstraction and meta-programming 
 possibilities with relatively easier syntax + semantics.
Yes. The lack of powerful meta-programming is so frustrating elsewhere; only D has the ease of reading and writing.
 - It's harder to reverse engineer native code than byte code 
 equivalent.
Meh. True, but this doesn't do much for me; it still isn't -that- hard to reverse native code, at least to the point of exploitation (to the point of copying is much harder). It just takes longer.
 - Trading off anything according to your needs.
Yes. This is critical. I actually feel like D does this a little worse than C++ (though not significantly so), if only because it is difficult to completely avoid the GC, and if you want to avoid it and still use inheritance you need to break out the custom allocators. Most of the time this isn't a problem.
 - Expressiveness and purity, immutablity concepts.
Expressiveness is key, though I haven't found D to be terribly more expressive than C++. A little better here, a little worse there. On the other hand, it is usually syntactically nicer when expressing concepts, sometimes greatly so. Immutability is nice. The attention paid to threading was what caused me to take a closer look at D in the first place.
 - Having GC (but not a horribly slow one)
Meh. I know there are things which are much easier to express with a GC, but they don't really come up for me. On the other hand, I often need deterministic cleanup, so the GC can be kind of an annoyance, since it lends itself to a lot of wrapping things in structs and forcing me to pay more attention to lifetime rules than I have to in C++. The other (main?) purported benefits of a GC (avoiding leaks and dangling pointers) don't do much for me, since it is almost trivially easy to avoid those problems in C++ anyway, without introducing the headaches of the GC; certainly it is easier than the focus I have to give D object lifetimes now. That may be a matter of relative practice, though, since I've used C++ for a long long time and D for...3 weeks? :)
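For what it's worth, the struct-wrapping pattern I mean looks roughly like this (a minimal sketch; the Flag type is made up purely for illustration):

```d
// Structs give deterministic cleanup: the destructor runs the moment
// the wrapper leaves scope, independent of any garbage collection.
struct Flag
{
    bool* closed;
    ~this() { if (closed) *closed = true; }
}

bool demonstrate()
{
    bool closed = false;
    {
        auto f = Flag(&closed);
    } // destructor fires here, not at some later collection
    return closed;
}

void main()
{
    assert(demonstrate());

    // scope(exit) gives the same guarantee for ad-hoc cleanup:
    int steps = 0;
    {
        scope(exit) steps = 2;
        steps = 1;
    }
    assert(steps == 2);
}
```

The cost is that you end up thinking about ownership and lifetimes explicitly, which is exactly the attention I mentioned above.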
 - Syntactic sugars (associative arrays, powerful foreach, 
 slices...)
I'm still adjusting to the idea of AAs as part of the language rather than library. Not sure I like it, but on the other hand it doesn't really hurt. The foreach construct isn't any better (or worse) than C++'s, unless I'm missing something (which is very possible). But slices are awesome!
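A quick sketch of why slices are awesome, for anyone who hasn't used them (nothing here beyond basic language features):

```d
// A slice is a (pointer, length) view into an array; no copy is made.
int[] middle(int[] a)
{
    return a[1 .. $ - 1];
}

void main()
{
    int[] a = [1, 2, 3, 4, 5];
    int[] m = middle(a);   // view of a[1 .. 4]
    assert(m == [2, 3, 4]);

    // Writing through the slice writes through to the original array.
    m[0] = 99;
    assert(a[1] == 99);
}
```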
 - Compile times
Oh god yes. This makes metaprogramming so much more palatable.

 cross-platform work in many cases)
I'll go one step further, and note that D feels more portable than C++ to me...at least on the major platforms I usually work on. Maybe it's the simple fact that things like sockets are defined in the libraries, or that I don't have to #include <Windows.h> :).
 I wish D could be better. I really want it with all of my 
 heart...
D has a lot to offer. Here are a few other things I've really liked over C++:

* Modules. C++ is supposed(?) to get them at some point I suppose, but for here and now it's a clear advantage for D.

* Not syntactically separating interface and implementation (e.g., C++'s header vs source file dichotomy). This was never a true separation in C++, and just led to lots of extra syntax and minor DRY violations. Of course you could write everything inline anyway...until it depended on something declared later.

* Related to the point above, not having to think about whether to make something inline. Sure, C++ compilers make that choice for you, but you still have to decide whether to allow them (or at least the ones without link-time code generation) by putting your source in the header file. Needless headache for something a compiler can do.

* Properly doing away with the C preprocessor. I haven't seen a need for it that wasn't addressed by another D feature.

* Properly doing away with MI (multiple inheritance). Unlike some languages which just canned it, D actually replaced its functionality with other features.

* Thread-local by default. So simple. So useful.

* The in keyword. This is nice syntactic sugar over having a special trait in C++ which deduces whether to pass by value or const-reference. "foo(in bar)" is way more readable than something like "foo(traits<bar>::fast_param bar)"

* Convenient operator overloading for numerical types.

* @property. This little feature has been invaluable in porting my C++ code, letting me shave off tons of accessors and mutators that existed only for the sake of possibly being needed in the future. I didn't even need to use @property for this; its simple existence did the work for me!

There are others. Lots of others. Basically they boil down to: D mostly feels like C++, but just easier to read and write. Which is fantastic!

Of course, it's not all rosy. There has been some debate on the General forum about D's ability to express move semantics.
I'm wary of the GC. Some language behaviors seem unintuitive (which can lead to bugs), like how you can store null keys in an AA. And probably the biggest thing I don't like about D isn't the language design, but the immaturity of DMD. I've only used D for a few weeks, and I've already reported three bugs against it. This means that unlike with most languages, I don't have confidence that program failures are really my fault as opposed to just another compiler bug.

Summary: D has a lot of potential. With a little maturation, it could be a serious rival to C++.

Oh, and for whatever it's worth, I'm all for breaking changes if they improve the language. C++ has long suffered from unwillingness to do this. With a proper deprecation path, it can be a palatable way to continue evolving the language and avoiding stagnation.
Feb 10 2016
next sibling parent reply tsbockman <thomas.bockman gmail.com> writes:
On Thursday, 11 February 2016 at 04:51:39 UTC, Matt Elkins wrote:
 - Syntactic sugars (associative arrays, powerful foreach, 
 slices...)
I'm still adjusting to the idea of AAs as part of the language rather than library. Not sure I like it, but on the other hand it doesn't really hurt. The foreach construct isn't any better (or worse) than C++'s, unless I'm missing something (which is very possible). But slices are awesome!
In D you can `foreach` over a list of types (AliasSeq) at compile time, not just over ranges at runtime. (For the moment, it's still only available in function bodies though, unlike `static if`.)
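For example, a minimal sketch (totalSize is a made-up helper, not a library function):

```d
import std.meta : AliasSeq;

// The foreach body is unrolled at compile time, one iteration per
// type in the list; T.sizeof is resolved statically each time.
size_t totalSize(Types...)()
{
    size_t total = 0;
    foreach (T; Types)
        total += T.sizeof;
    return total;
}

void main()
{
    alias Fields = AliasSeq!(int, double, bool);
    // int (4) + double (8) + bool (1) on common platforms
    assert(totalSize!Fields() == 13);
}
```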
Feb 10 2016
parent Matt Elkins <notreal fake.com> writes:
On Thursday, 11 February 2016 at 05:05:22 UTC, tsbockman wrote:
 On Thursday, 11 February 2016 at 04:51:39 UTC, Matt Elkins 
 wrote:
 - Syntactic sugars (associative arrays, powerful foreach, 
 slices...)
I'm still adjusting to the idea of AAs as part of the language rather than library. Not sure I like it, but on the other hand it doesn't really hurt. The foreach construct isn't any better (or worse) than C++'s, unless I'm missing something (which is very possible). But slices are awesome!
In D you can `foreach` over a list of types (AliasSeq) at compile time, not just over ranges at runtime. (For the moment, it's still only available in function bodies though, unlike `static if`.)
Neat! I didn't know that. You can do that in C++, but in typical fashion not with a convenient foreach statement. You have to do some crazy type list recursion stuff. So chalk up another point for D's "ease of metaprogramming" :).
Feb 10 2016
prev sibling parent reply Steven Schveighoffer <schveiguy yahoo.com> writes:
On 2/10/16 11:51 PM, Matt Elkins wrote:

 * The in keyword. This is nice syntactic sugar over having a special
 trait in C++ which deduces whether to pass by value or const-reference.
 "foo(in bar)" is way more readable than something like
 "foo(traits<bar>::fast_param bar)"
Hm... in is short for scope const. It is not pass by reference. Perhaps you meant auto ref?
 * @property. This little feature has been invaluable in porting my C++
 code, letting me shave off tons of accessors and mutators that existed
 only for the sake of possibly being needed in the future. I didn't even
 need to use @property for this; its simple existence did the work for me!
Well, interestingly, D still allows property syntax without the @property notation. I'm in the habit now of never documenting accessors with @property. For mutators, I would still like to see D require @property to enable that syntax. Note that the only good reason to defensively add accessors and mutators for public fields is to keep a consistent binary API; in other words, if you have a shared library. D is not quite there yet for shared library support, however. -Steve
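A quick sketch of what I mean (Temperature is a made-up example type; note that current dmd accepts the assignment syntax for the mutator even without @property):

```d
struct Temperature
{
    private double celsius_ = 0;

    // Accessor: property syntax (no parentheses) works even
    // without the @property annotation.
    double celsius() const { return celsius_; }

    // Mutator: dmd currently allows 't.celsius = x' here too,
    // annotated or not.
    void celsius(double value) { celsius_ = value; }
}

void main()
{
    Temperature t;
    t.celsius = 21.5;          // mutator via assignment syntax
    assert(t.celsius == 21.5); // accessor without parentheses
}
```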
Feb 12 2016
parent reply Matt Elkins <notreal fake.com> writes:
On Friday, 12 February 2016 at 14:03:05 UTC, Steven Schveighoffer 
wrote:
 On 2/10/16 11:51 PM, Matt Elkins wrote:

 * The in keyword. This is nice syntactic sugar over having a 
 special
 trait in C++ which deduces whether to pass by value or 
 const-reference.
 "foo(in bar)" is way more readable than something like
 "foo(traits<bar>::fast_param bar)"
Hm... in is short for scope const. It is not pass by reference. Perhaps you meant auto ref?
Right...maybe I've been operating under false pretenses, but I was under the impression that the compiler was allowed to interpret scope const as either "pass by value" or "pass by const reference" freely so long as there was no custom post-blit defined? For the purposes of optimization, I mean, to avoid needless copying.
Feb 12 2016
parent reply Steven Schveighoffer <schveiguy yahoo.com> writes:
On 2/12/16 9:37 AM, Matt Elkins wrote:
 On Friday, 12 February 2016 at 14:03:05 UTC, Steven Schveighoffer wrote:
 On 2/10/16 11:51 PM, Matt Elkins wrote:

 * The in keyword. This is nice syntactic sugar over having a special
 trait in C++ which deduces whether to pass by value or const-reference.
 "foo(in bar)" is way more readable than something like
 "foo(traits<bar>::fast_param bar)"
Hm... in is short for scope const. It is not pass by reference. Perhaps you meant auto ref?
Right...maybe I've been operating under false pretenses, but I was under the impression that the compiler was allowed to interpret scope const as either "pass by value" or "pass by const reference" freely so long as there was no custom post-blit defined? For the purposes of optimization, I mean, to avoid needless copying.
Pass by reference and pass by value means different treatment inside the function itself, so it can't differ from call to call. It could potentially differ based on the type being passed, but I'm unaware of such an optimization, and it definitely isn't triggered specifically by 'in'. 'in' is literally replaced with 'scope const' when it is a storage class. -Steve
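To make that concrete, here is a sketch (Big and the helper functions are made up for illustration):

```d
struct Big { int[16] data; }

// 'in' is shorthand for 'scope const': the argument may not be
// mutated or escaped, but it is still passed by value here.
int first(in Big b)
{
    // b.data[0] = 1;  // error: cannot modify const expression
    return b.data[0];
}

// To skip the copy, you must ask for a reference explicitly:
int firstByRef(ref const(Big) b)
{
    return b.data[0];
}

void main()
{
    Big b;
    b.data[0] = 42;
    assert(first(b) == 42);
    assert(firstByRef(b) == 42);
}
```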
Feb 12 2016
next sibling parent reply rsw0x <anonymous anonymous.com> writes:
On Friday, 12 February 2016 at 15:12:19 UTC, Steven Schveighoffer 
wrote:
 On 2/12/16 9:37 AM, Matt Elkins wrote:
 [...]
Pass by reference and pass by value means different treatment inside the function itself, so it can't differ from call to call. It could potentially differ based on the type being passed, but I'm unaware of such an optimization, and it definitely isn't triggered specifically by 'in'. 'in' is literally replaced with 'scope const' when it is a storage class. -Steve
Note that use of the 'in' and 'scope' (other than for delegates) parameter storage classes should be avoided. It really should be a warning.
Feb 12 2016
next sibling parent jmh530 <john.michael.hall gmail.com> writes:
On Friday, 12 February 2016 at 17:20:23 UTC, rsw0x wrote:
 note that 'in' and 'scope'(other than for delegates) parameter 
 storage class usage should be avoided.
 It really should be a warning.
Add to docs!
Feb 12 2016
prev sibling parent reply Matt Elkins <notreal fake.com> writes:
On Friday, 12 February 2016 at 17:20:23 UTC, rsw0x wrote:
 On Friday, 12 February 2016 at 15:12:19 UTC, Steven 
 Schveighoffer wrote:
 On 2/12/16 9:37 AM, Matt Elkins wrote:
 [...]
Pass by reference and pass by value means different treatment inside the function itself, so it can't differ from call to call. It could potentially differ based on the type being passed, but I'm unaware of such an optimization, and it definitely isn't triggered specifically by 'in'. 'in' is literally replaced with 'scope const' when it is a storage class. -Steve
note that 'in' and 'scope'(other than for delegates) parameter storage class usage should be avoided. It really should be a warning.
Why is that?
Feb 12 2016
parent rsw0x <anonymous anonymous.com> writes:
On Friday, 12 February 2016 at 17:29:54 UTC, Matt Elkins wrote:
 On Friday, 12 February 2016 at 17:20:23 UTC, rsw0x wrote:
 On Friday, 12 February 2016 at 15:12:19 UTC, Steven 
 Schveighoffer wrote:
 On 2/12/16 9:37 AM, Matt Elkins wrote:
 [...]
Pass by reference and pass by value means different treatment inside the function itself, so it can't differ from call to call. It could potentially differ based on the type being passed, but I'm unaware of such an optimization, and it definitely isn't triggered specifically by 'in'. 'in' is literally replaced with 'scope const' when it is a storage class. -Steve
note that 'in' and 'scope'(other than for delegates) parameter storage class usage should be avoided. It really should be a warning.
Why is that?
Unless it has changed, 'scope' is a noop for everything but delegates. Code that works now will break when (if...) it gets implemented.
Feb 12 2016
prev sibling next sibling parent Matt Elkins <notreal fake.com> writes:
On Friday, 12 February 2016 at 15:12:19 UTC, Steven Schveighoffer 
wrote:
 It could potentially differ based on the type being passed,
Yes, that's what I meant.
 but I'm unaware of such an optimization,
Hm. Unfortunate.
 and it definitely isn't triggered specifically by 'in'. 'in' is 
 literally replaced with 'scope const' when it is a storage 
 class.
Yeah, I didn't mean 'in' definitely triggered it. I meant that 'in' (or rather, as you say, 'scope const') provides the conditions by which a compiler could make such an optimization, since it can know that the parameter will be unaffected by the function. It seems like that would mean it could, in theory, choose to pass small objects by value and large objects by reference under the hood, to avoid the large object copy (assuming no custom post-blit...and I guess it would have to check for taking the address?). To achieve that in C++ I use a special trait which deduces whether pass-by-value or pass-by-const-reference makes more sense for the type...but maybe I should be doing the same thing in D, if that optimization isn't actually present? It does seem like the compiler could probably perform that optimization even if 'in' (or 'scope const') wasn't used, if it was smart enough... This sort of micro-optimization generally doesn't matter at the application level unless one has actually profiled it. But it comes up a lot for me when writing generic libraries which can't know whether it will be used in a situation someday where those optimizations do actually matter.
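As a sketch, the auto ref approach Steve mentioned would look something like this (Big and head are hypothetical; auto ref requires a template, and picks by-reference for lvalues and by-value for rvalues per instantiation):

```d
struct Big { int[64] data; }

// 'auto ref' on a template function lets the compiler bind lvalue
// arguments by reference and rvalue arguments by value.
int head()(auto ref const Big b)
{
    return b.data[0];
}

void main()
{
    Big b;
    b.data[0] = 7;
    assert(head(b) == 7);     // lvalue: bound by reference, no copy
    assert(head(Big()) == 0); // rvalue: passed by value
}
```

It is a per-call-site mechanism rather than a per-type trait, so it is not an exact match for the C++ idiom, but it covers the "avoid needless copying" case.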
Feb 12 2016
prev sibling parent rsw0x <anonymous anonymous.com> writes:
On Friday, 12 February 2016 at 15:12:19 UTC, Steven Schveighoffer 
wrote:
 but I'm unaware of such an optimization, and it definitely 
 isn't triggered specifically by 'in'. 'in' is literally 
 replaced with 'scope const' when it is a storage class.

 -Steve
I'd imagine GCC or LLVM may be able to make use of such (type) information for optimizations — moreso probably LLVM due to all the functional languages that use it nowadays.
Feb 12 2016
prev sibling parent reply Laeeth Isharc <laeethnospam nospam.laeeth.com> writes:
On Monday, 8 February 2016 at 17:15:11 UTC, Wyatt wrote:
 On Monday, 8 February 2016 at 16:33:09 UTC, NX wrote:
 I see... By any chance, can we solve this issue with GC 
 managed pointers?
Maybe we could. But it's never going to happen. Even if Walter weren't fundamentally opposed to multiple pointer types in D, it wouldn't happen. You asked about things that prevent improvement, right? Here's the big one, and a major point of friction in the community: Walter and Andrei refuse to break existing code in pursuit of changes that substantially improve the language. (Never mind that code tends to break anyway.) -Wyatt
I have no special knowledge, but it strikes this observer that they are serious about working on solutions (whether that's a better GC or alternatives or both). But some patience is required, as it's not such a straightforward problem and it's better to take time than rush and make a mistake. It wasn't all that long ago that Andrei quit, and I guess he moved across country and it certainly takes time to sort out one's home office and find a new working pattern. The discussions in the mailing list are quite interesting, although beyond my technical knowledge for now. The GC itself may still be far from perfect but it's much better than it was, and there are more options now. I have found emsi containers (built on top of Andrei's allocator) pretty nice myself for my own use.
Feb 08 2016
parent reply NX <nightmarex1337 hotmail.com> writes:
On Monday, 8 February 2016 at 22:21:50 UTC, Laeeth Isharc wrote:
 The GC itself may still be far from perfect but its much better 
 than it was, and there are more options now.  I have found emsi 
 containers (built on top of Andrei's allocator) pretty nice 
 myself for my own use.
Well, GC being better than it used to be doesn't change the fact that it's still the worst of its kind. I don't know if this[1] work actually got released or merged, but it looks like it's abandoned. Pretty sad, as it seemed very promising.

Anyway, I was expecting a lot more people to tell their specific problems, like "bla bla design decision makes ARC incredibly dangerous and we can't properly interface with Objective-C without that", or "bla bla D feature overlaps with some other stuff and requires a redesign to be solved", or maybe "being unsafe (@system) by default breaks the deal"... GC is just one of the hundreds of problems with D, and it was an example rather than the main point in this thread, but thanks to anyone who replied.

[1] http://forum.dlang.org/thread/mailman.655.1399956110.2907.digitalmars-d@puremagic.com
Feb 09 2016
next sibling parent Laeeth Isharc <laeethnospam nospamlaeeth.com> writes:
On Tuesday, 9 February 2016 at 13:01:29 UTC, NX wrote:
 On Monday, 8 February 2016 at 22:21:50 UTC, Laeeth Isharc wrote:
 The GC itself may still be far from perfect but its much 
 better than it was, and there are more options now.  I have 
 found emsi containers (built on top of Andrei's allocator) 
 pretty nice myself for my own use.
Well, GC being better than it used to be doesn't change the fact that it's still the worst of its kind. I don't know if this[1] work actually got released or merged, but it looks like it's abandoned. Pretty sad, as it seemed very promising. Anyway, I was expecting a lot more people to tell their specific problems, like "bla bla design decision makes ARC incredibly dangerous and we can't properly interface with Objective-C without that", or "bla bla D feature overlaps with some other stuff and requires a redesign to be solved", or maybe "being unsafe (@system) by default breaks the deal"... GC is just one of the hundreds of problems with D, and it was an example rather than the main point in this thread, but thanks to anyone who replied. [1] http://forum.dlang.org/thread/mailman.655.1399956110.2907.digitalmars-d@puremagic.com
Thanks for pointing this one out. Opportunity comes dressed in work clothes, and I guess that until someone takes the initiative to integrate this with the newest version of the runtime/GC, nothing will happen.

It's not true that there are no professional opportunities in D, as some people say, and I can say that for some people at least, impressive contributions to the language and community have paid off personally, even though it was a labour of love and not motivated by that. Good programmers don't grow on trees, and one benefit of the current size of the D community is that it's easier to make an impact and easier to stand out than in a much more crowded and mature domain where one person can only hope to achieve incremental progress.

My impression is that barriers to adoption are fairly well understood by now and it's a matter of time and hard work for them to be addressed step by step. It's not only addressing negatives but also completing positive things that will help. Ndslice and porting BLAS on the numerical side, and the interface with R, will both increase the attractiveness of D in finance, not a small area. It's not yet mature, but knowing one can use all the R libraries is already a big win.
Feb 09 2016
prev sibling parent jmh530 <john.michael.hall gmail.com> writes:
On Tuesday, 9 February 2016 at 13:01:29 UTC, NX wrote:
 Well, GC being better than it used to be doesn't change the 
 fact it's still the worst of its kind. I don't know if 
 this[1] work actually got released or merged but looks like 
 it's abandoned. Pretty sad as it seemed very promising.
 [1] 
 http://forum.dlang.org/thread/mailman.655.1399956110.2907.digitalmars-d@puremagic.com
It looks like interesting stuff, but the guy last posted in 2014. In other posts, people asked him for the code and I don't see anything on the forum indicating that he provided it. Probably an important step to improving the GC... DMD 2.067 had some garbage collector improvements, but I'm not sure how influenced those would have been by this.
Feb 09 2016
prev sibling parent Chris Wright <dhasenan gmail.com> writes:
On Mon, 08 Feb 2016 11:22:45 +0000, thedeemon wrote:

 On Saturday, 6 February 2016 at 08:07:42 UTC, NX wrote:
 What language semantics prevent precise & fast GC implementations?
easy type casting prevent precise GC.
To expand on this point: a GC makes a tradeoff between allocating efficiently and deallocating efficiently. (And a compiler+runtime makes a tradeoff between generating larger binaries that take more time to deal with and being able to produce precise garbage collection.)

You can write a GC that allocates each type in its own region of memory. Every block has a pointer map associated with it. But this means the minimum allocation for each type is one page -- typically 4KB. This is bad for applications that have very few instances of each type and many types of object allocated.

A simpler thing you can do is write a GC that has two regions of memory, one with pointers that might point to GC memory and one without. This gets rid of the overhead problem but doesn't allow precise collection.

Alternatively, a language might prevent all casting, even upcasting, for any type that might contain pointers. Specifically:

  class Foo {}
  class Bar : Foo {}
  Foo foo = new Bar(); // type error!

This means that the GC doesn't ever need to store the type of an allocated object anywhere. It can get the information it needs from a stack map ("pointer to Foo is stored in this stack frame at offset 8") and a similarly formatted map for allocated types. It would work, but it's sufficiently constraining that I don't think anyone has this in a real programming language.
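As a small illustration of why easy casting forces conservatism (a sketch; this compiles and runs today precisely because D's GC does not trust static types when scanning):

```d
void main()
{
    int* p = new int;
    *p = 42;

    // Launder the pointer through an integer: a precise scanner that
    // trusted static types would no longer see a reference here.
    size_t disguised = cast(size_t) p;
    p = null;

    int* back = cast(int*) disguised;
    assert(*back == 42); // safe only because the GC scans conservatively
}
```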
Feb 08 2016