
digitalmars.D.learn - How to use destroy and free.

reply Alain De Vod <devosalain ymail.com> writes:
Is this a correct program to explicit call destroy & free ?

```
void main(){
     int[] i=new int[10000];
     import object: destroy;
     destroy(i);
     import core.memory: GC;
     GC.free(GC.addrOf(cast(void *)(i.ptr)));
}
```
Apr 24 2022
next sibling parent reply Salih Dincer <salihdb hotmail.com> writes:
On Sunday, 24 April 2022 at 21:00:50 UTC, Alain De Vod wrote:
 Is this a correct program to explicit call destroy & free ?

 ```
 void main(){
     int[] i=new int[10000];
     import object: destroy;
     destroy(i);
     import core.memory: GC;
     GC.free(GC.addrOf(cast(void *)(i.ptr)));
 }
 ```
Yes, first destroy() then free()...

```d
import std.stdio;
import object: doDestroy = destroy;
import core.memory : MEM = GC;

void disposeOf(T)(T obj) {
  auto mem = cast(void*) obj;
  scope (exit) MEM.free(mem);
  doDestroy(obj);
}

void main() {
  int[] i = new int[1024];
  i[$-1] = 41;

  doDestroy(i);
  MEM.free(i.ptr); // You don't need to addrOf(cast(void*)i)
  //i.length = 1024;
  //assert(i[$-1] == 0);
  //i[$-1].writeln(" am there?");

  if (i !is null) {
    "still alive!".writeln;
    disposeOf(i);
  }
  "bye...".writeln;
}
```

SDB 79
Apr 24 2022
parent reply Ali Çehreli <acehreli yahoo.com> writes:
On 4/24/22 17:26, Salih Dincer wrote:

 first destroy() then free()...
Makes sense only if we allocated the memory.
 import object: doDestroy = destroy;
I like adding 'do' to verbs that can be confused with nouns. For example, because 'copy' is both a noun and a verb, I think it helps when we name a function as 'doCopy'. However, because 'destroy' is already a verb, I would leave it alone. :)
    MEM.free(i.ptr);
    // You don't need to addrOf(cast(void*)i)
Good point about i.ptr, but that free() does not or should not do anything because it is "memory not originally allocated by this garbage collector":

  https://dlang.org/phobos/core_memory.html#.GC.free

Well... maybe it was allocated by that garbage collector, and maybe it points to the beginning of an allocated block, but we don't know that. I wouldn't call free() on an array's memory.

Ali
Apr 24 2022
next sibling parent reply Alain De Vos <devosalain ymail.com> writes:
Ali, thanks for the answer, but let me rephrase my question.
How do I destroy, free, and run a garbage-collection cycle for the heap data in the destructor of this code:

```
import std.stdio: writeln;

class C{
	int[] i=null;
	this(){
		writeln("Allocate heap");
		i=new int[10000];
		writeln(typeid(typeof(i)));
		writeln(typeid(typeof(i.ptr)));
		i[9000]=5;
	}
	~this(){
		writeln("Free heap");
		import object: destroy;
		import core.memory: GC;
		i=null;
		// But how to force destroy and free (a GC cycle) for heap object i?
	};
}

struct S{
	C c;
	@disable this();
	this(int dummy){c=new C;}
}

void main(){
	enum _=0;
	writeln(S(_).c.i[9000]);
}
```
Apr 25 2022
next sibling parent reply Salih Dincer <salihdb hotmail.com> writes:
On Monday, 25 April 2022 at 10:13:43 UTC, Alain De Vos wrote:
 destructor of this code :

 ```d
  ~this(){
 	writeln("Free heap");
 	import object: destroy;
 	import core.memory: GC;
 	i=null;
 	// But How to force destroy and free , GC-cycle for heap 
 object i ?
 };
 ```
If you use destroy in the destructor (~this), the destructor will end up being called two times, but that's not an error.

SDB 79
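A minimal sketch of why a repeated destructor run is harmless (a toy class, not Alain's code): destroy() blits the .init state back onto the object, so the destructor can safely run again later.

```d
import std.stdio : writeln;

class C {
    ~this() { writeln("~this ran"); }
}

void main() {
    auto c = new C;
    destroy(c); // runs ~this now and resets the object to its .init state
    destroy(c); // a second run (e.g. later, by the GC finalizer) is allowed; prints again
}
```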
Apr 25 2022
parent Alain De Vos <devosalain ymail.com> writes:
Note, heap object i is not an instance of the class C
Apr 25 2022
prev sibling parent reply Stanislav Blinov <stanislav.blinov gmail.com> writes:
On Monday, 25 April 2022 at 10:13:43 UTC, Alain De Vos wrote:
 Ali, thanks for the answer but i rephrase my question.
 How to destroy,free , for garbage-collection-cycle in the 
 destructor of this code :

 // But How to force destroy and free , GC-cycle for heap object 
 i ?
Short answer: use `destroy`.

Long answer: don't do that.

https://dlang.org/spec/class.html#destructors

GC is not guaranteed to call destructors, and in fact it may run into situations when it can't (i.e. objects pointing to one another). Neither does it specify in what order destructors of GC-allocated objects are run (when they are run). If you need deterministic destruction e.g. for resource management, do not use GC.
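A minimal sketch of what deterministic destruction looks like without the GC (the Resource type here is made up; the point is only that a struct destructor runs exactly when the object goes out of scope):

```d
import std.stdio : writeln;

struct Resource {
    int handle = -1;
    this(int h) { handle = h; writeln("acquired ", h); }
    ~this() { if (handle != -1) writeln("released ", handle); }
    @disable this(this); // forbid copies so there is exactly one owner
}

void main() {
    {
        auto r = Resource(42);
        // ... use r ...
    } // destructor runs right here, deterministically
    writeln("after the scope");
}
```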
Apr 25 2022
parent reply Alain De Vos <devosalain ymail.com> writes:
 GC-allocated objects are run (when they are run). If you need 
 deterministic destruction e.g. for resource management, do not 
 use GC.
Decent destroy and free functions should return something. "destroy" should return whether the destructor was called successfully. "free" should return the exact number of bytes freed on the heap. Probably this is not implemented in the library because it is probably "buggy".
Apr 25 2022
parent reply Alain De Vos <devosalain ymail.com> writes:
Could thc or hboehm provide solutions ?
Apr 25 2022
parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Mon, Apr 25, 2022 at 01:28:01PM +0000, Alain De Vos via Digitalmars-d-learn
wrote:
 Could thc or hboehm provide solutions ?
In general, GC (of any kind) does not (and cannot) guarantee the order objects will be collected in. So in the dtor, you cannot assume that any objects you depend on still exist (they may have already been collected).

There is also no guarantee that the object will *ever* get collected: in theory, the GC may only collect just enough to make space for further allocations, it's not obligated to collect *everything* that's collectible. Or the collection might not take place before the end of the program -- the GC may skip the final collection because it knows the OS will reclaim everything automatically anyway.

Basically, deterministic destruction and GC are antithetical to each other, and trying to have both is the road to trouble. If you wish to have deterministic destruction, don't use the GC; use RAII or reference counting instead.

T

--
What do you mean the Internet isn't filled with subliminal messages? What about all those buttons marked "submit"??
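A small sketch of the reference-counting option using Phobos' std.typecons.RefCounted (the Payload type is made up):

```d
import std.typecons : RefCounted;
import std.stdio : writeln;

struct Payload {
    int handle;
    this(int h) { handle = h; writeln("open ", h); }
    ~this() { writeln("close ", handle); }
}

void main() {
    auto a = RefCounted!Payload(42); // count = 1
    {
        auto b = a;                  // count = 2, same payload
    }                                // count = 1 again, nothing closed yet
    writeln("dropping the last reference");
}                                    // count = 0: "close 42" runs deterministically
```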
Apr 25 2022
parent Alain De Vos <devosalain ymail.com> writes:
On Monday, 25 April 2022 at 14:25:17 UTC, H. S. Teoh wrote:
 On Mon, Apr 25, 2022 at 01:28:01PM +0000, Alain De Vos via 
 Digitalmars-d-learn wrote:
 Could thc or hboehm provide solutions ?
In general, GC (of any kind) does not (and cannot) guarantee the order objects will be collected in. So in the dtor, you cannot assume that any objects you depend on still exist (they may have already been collected). There is also no guarantee that the object will *ever* get collected: in theory, the GC may only collect just enough to make space for further allocations, it's not obligated to collect *everything* that's collectible. Or the collection might not take place before the end of the program -- the GC may skip the final collection because it knows the OS will reclaim everything automatically anyway. Basically, deterministic destruction and GC are antithetical to each other, and trying to have both is the road to trouble. If you wish to have deterministic destruction, don't use the GC; use RAII or reference counting instead. T
When you can foresee a "maximum size", you can create "deterministic" stack objects.

```
class C {
	@nogc this(){}
	@nogc this(int dummy){}
	int[3] fixarr;
}//C

@nogc void myfun(){
	int a;
	scope c = new C();
	scope c2 = new C(5);
}//myfun

void main(){
	myfun();
}//main
```

It's just the variable length arrays which are "problematic". Feel free to elaborate.
Apr 25 2022
prev sibling parent reply frame <frame86 live.com> writes:
On Monday, 25 April 2022 at 02:07:50 UTC, Ali Çehreli wrote:
      import core.memory: GC;
GC.free(GC.addrOf(cast(void *)(i.ptr))); That is wrong because you did not allocate that address yourself.
Hmm? The GC did allocate here(?)
 On 4/24/22 17:26, Salih Dincer wrote:

    MEM.free(i.ptr);
    // You don't need to addrOf(cast(void*)i)
Wrong.
 Good point about i.ptr but that free() does not or should not 
 do anything because it is "memory not originally allocated by 
 this garbage collector":

   https://dlang.org/phobos/core_memory.html#.GC.free

 Well... maybe it was allocated by that garbage collector and 
 may be it points to the beginning of an allocated block but we 
 don't know that. I wouldn't call free() on an array's memory.

 Ali
And if it was, the freeing must be done with `GC.addrOf` or it will fail with larger arrays. You will need the GC address to free the block. That is what `__delete` actually does - which was patched back recently, reported by Adam: https://issues.dlang.org/show_bug.cgi?id=21550
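A quick way to see what frame means is just to print the two addresses (a minimal sketch; the exact offset is a druntime implementation detail and may vary by array size and compiler version):

```d
import core.memory : GC;
import std.stdio : writefln;

void main() {
    int[] i = new int[10_000];
    writefln("i.ptr            = %s", i.ptr);
    writefln("GC.addrOf(i.ptr) = %s", GC.addrOf(cast(void*) i.ptr));
    // For larger arrays the two can differ, because the runtime keeps
    // array bookkeeping at the start of the block; GC.free needs the
    // block's base address, which is what GC.addrOf returns.
}
```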
Apr 25 2022
parent Ali Çehreli <acehreli yahoo.com> writes:
On 4/25/22 16:02, frame wrote:
 On Monday, 25 April 2022 at 02:07:50 UTC, Ali Çehreli wrote:
      import core.memory: GC;
GC.free(GC.addrOf(cast(void *)(i.ptr))); That is wrong because you did not allocate that address yourself.
Hmm? The GC did allocate here(?)
Yes. I still don't understand the need to free GC memory explicitly. I can understand GC.collect() but not the memory of a specific array.
 On 4/24/22 17:26, Salih Dincer wrote:

    MEM.free(i.ptr);
    // You don't need to addrOf(cast(void*)i)
Wrong.
You are right. I missed the fact that addrOf is a GC function. Ali
Apr 25 2022
prev sibling next sibling parent Ali Çehreli <acehreli yahoo.com> writes:
On 4/24/22 14:00, Alain De Vod wrote:
 Is this a correct program to explicit call destroy & free ?
destroy() is called with the object that you want its destructor to be executed on. This is very rare in D because when the destructor has to be called, one relies on the lifetime of a struct object, or uses scope(exit), scope(success), or scope(failure). free is called on memory that you explicitly allocated.
 ```
 void main(){
      int[] i=new int[10000];
      import object: destroy;
      destroy(i);
That does not have any effect for an int array because int does not have any destructor. Assuming you are asking for struct elements, let me try:

    import std.stdio;

    struct S {
        ~this() {
            writeln(__FUNCTION__);
        }
    }

    void main() {
        auto arr = [ S() ];
        writeln("calling destroy");
        destroy(arr);
        writeln("called destroy");
        writeln("leaving main");
    }

No, destroy'ing an array does not call the destructor of the elements:

    calling destroy    <-- No destructor called here
    called destroy
    leaving main
    deneme.S.~this

And that is expected because arrays don't have destructors. If you want to destroy the elements of an array, you must call destroy for each element. One way:

    import std.algorithm : each;
    arr.each!((ref e) => destroy(e));

Now the output has our destructor call:

    calling destroy
    deneme.S.~this
    called destroy     <-- HERE
    leaving main
    deneme.S.~this

The final destructor is called for the .init state of the element because destroy() blits the .init state on objects.
      import core.memory: GC;
      GC.free(GC.addrOf(cast(void *)(i.ptr)));
That is wrong because you did not allocate that address yourself. It is further wrong for arrays in general because there may be slices to arrays, which you would not free the elements of. Ali
Apr 24 2022
prev sibling parent reply Dukc <ajieskola gmail.com> writes:
On Sunday, 24 April 2022 at 21:00:50 UTC, Alain De Vod wrote:
 Is this a correct program to explicit call destroy & free ?

 ```
 void main(){
     int[] i=new int[10000];
     import object: destroy;
     destroy(i);
     import core.memory: GC;
     GC.free(GC.addrOf(cast(void *)(i.ptr)));
 }
 ```
A few picks.

1: You do not need to import `destroy`. Everything in `object` is automatically imported.

2: As others have said, destroying an int, or an array of them, is meaningless, since they do not have destructors. The only thing your call does is to set `i` to `null`. In this case it means that there are no more references to the array, so the garbage collector can collect it at some point.

3: Because `i` is now null, `GC.free` also does nothing. If you had something that really mandated using `destroy`, you'd want to use `destroy!false(i)` to avoid setting `i` to `null`.

4: Manually freeing garbage collected memory is just as dangerous as freeing manually managed memory. It's preferable to just let the GC collect the memory. Also, as said, there's probably no point to allocate with GC in the first place if you're going to manually free the memory.

5: If you're in a hurry to free the memory, a safer way is to call `GC.collect`. It will most likely free your old data immediately. Don't trust it quite 100% though, since it's possible that some unlucky bit pattern in your remaining data, that isn't really a pointer, "points" to your old data and it won't get collected. Using a precise GC would eliminate that risk I think. This is rarely a problem though, as long as you either don't do anything mandatory in destructors, or use `destroy` before forgetting the data. The GC can still usually free the majority of your memory.
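A tiny sketch of points 3 and 5 above (nothing here beyond `object` and `core.memory`):

```d
import core.memory : GC;
import std.stdio : writeln;

void main() {
    int[] i = new int[10_000];

    destroy!false(i);   // "destroys" without also setting i to null
    assert(i !is null); // the reference itself is untouched

    i = null;           // now actually drop the last reference...
    GC.collect();       // ...and ask the GC to run a collection right away
    writeln("done");
}
```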
Apr 30 2022
parent reply Tejas <notrealemail gmail.com> writes:
On Saturday, 30 April 2022 at 09:25:18 UTC, Dukc wrote:
 On Sunday, 24 April 2022 at 21:00:50 UTC, Alain De Vod wrote:
 [...]
A few picks. 1: You do not need to import `destroy`. Everything in `object` is automatically imported. [...]
Hell, just using `scope int[] i` should be enough to trigger deterministic destruction, no? `typecons`'s `Scoped!` template can be used if 100% guarantee is needed _and_ the memory has to be stack allocated
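A sketch of that second option (in Phobos the template is spelled `std.typecons.scoped`; the class C here is just a stand-in):

```d
import std.typecons : scoped;
import std.stdio : writeln;

class C {
    this()  { writeln("ctor"); }
    ~this() { writeln("dtor"); }
}

void main() {
    {
        auto c = scoped!C(); // instance lives in a stack buffer, not on the GC heap
        // use c like a normal C reference
    } // dtor runs deterministically here
    writeln("after the scope");
}
```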
Apr 30 2022
parent reply Dukc <ajieskola gmail.com> writes:
On Saturday, 30 April 2022 at 11:37:32 UTC, Tejas wrote:
 Hell, just using `scope int[] i` should be enough to trigger 
 deterministic destruction, no? `typecons`'s `Scoped!` template 
 can be used if 100% guarantee is needed _and_ the memory has to 
 be stack allocated
Didn't think of that. To be frank, I don't know if `scope int[] i` means deterministic destruction. `Scoped` does IIRC.
Apr 30 2022
parent reply Alain De Vos <devosalain ymail.com> writes:
Error: array literal in @nogc function test.myfun may cause a GC allocation

@nogc void myfun(){
	scope int[] i=[1,2,3];
}//myfun

May is a fuzzy word...
May 03 2022
next sibling parent reply Mike Parker <aldacron gmail.com> writes:
On Tuesday, 3 May 2022 at 12:59:31 UTC, Alain De Vos wrote:
 Error: array literal in @nogc function test.myfun may cause a GC allocation

 @nogc void myfun(){
 	scope int[] i=[1,2,3];
 }//myfun

 May is a fuzzy word...
It means the compiler is free to allocate on the stack if possible. In practice, though, you can usually assume there will be a GC allocation.
May 03 2022
parent reply Alain De Vos <devosalain ymail.com> writes:
Note, it's not that I'm against the GC. But my preference is to use builtin 
types and libraries if possible, and at the same time be sure memory is 
freed when a variable goes out of scope.
It seems hard to combine the two with a GC, which makes a best effort 
but frees memory as it sees fit.
May 03 2022
next sibling parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Tue, May 03, 2022 at 02:57:46PM +0000, Alain De Vos via Digitalmars-d-learn
wrote:
 Note, It's not i'm against GC. But my preference is to use builtin
 types and libraries if possible,
 But at the same time be able to be sure memory is given free when a
 variable is going out of scope.
 It seems not easy to combine the two with a GC which does his best
 effort but as he likes or not.
If your objects have a well-defined lifetime and you want to control when they get freed, just use malloc/free or equivalents (use emplace to initialize the object in custom-allocated memory). Don't use the GC. Using the GC means you relinquish control over when (and in what order) your objects get freed. T -- There's light at the end of the tunnel. It's the oncoming train.
May 03 2022
prev sibling next sibling parent Ali Çehreli <acehreli yahoo.com> writes:
On 5/3/22 07:57, Alain De Vos wrote:

 But at the same time be able to be sure memory is given free when a
 variable is going out of scope.
Let's expand on that please. What exactly is the worry there? Are you concerned that the program will have memory leaks, and eventually get killed by the OS? Do you want memory to be freed all the way to the OS? Would it be possible that a call to some_library_free() puts that memory in a free list to be used for later allocations? Or do you insist that memory really goes back to the OS? Why are you worried about how memory is managed?

The way I think is this: When I use some feature and that feature allocates memory, it is not up to me to free memory at all. I don't want to get involved in how that memory is managed. On the other hand, if I were the party that did allocate memory, fine, then I might be involved in freeing.

Note that 'new' is not raw memory allocation. So it should not involve raw memory freeing.

Sorry for all the questions but I am really curious why. At the same time, I have a suspicion: You come from a language like C++ that thinks deterministic memory freeing is the only way to go. It took me many years to learn that C++'s insistence on that topic is wrong. Memory can be freed altogether at some later time. Further, not every object needs to be destroyed. These are based on one of John Lakos's C++Now presentations where he shows comparisons of different destruction and freeing schemes where (paraphrasing) "no destruction whatsoever; poof the array disappears." Not surprisingly, that happens to be the fastest destruction plus free.

Ali
May 03 2022
prev sibling parent reply Mike Parker <aldacron gmail.com> writes:
On Tuesday, 3 May 2022 at 14:57:46 UTC, Alain De Vos wrote:
 Note, It's not i'm against GC. But my preference is to use 
 builtin types and libraries if possible,
 But at the same time be able to be sure memory is given free 
 when a variable is going out of scope.
 It seems not easy to combine the two with a GC which does his 
 best effort but as he likes or not.
What I described is an optional compiler optimization. The compiler is free to avoid the GC allocation for an array literal initializer if it is possible to do so. If you were to, e.g., return the array from the function, it would 100% for sure be allocated on the GC and not the stack. In practice, I don't know if any of the compilers actually do this.

Anyway, if you care when memory is deallocated, then the GC isn't the right tool for the job. The point of the GC is that you don't have to care.
May 03 2022
parent reply forkit <forkit gmail.com> writes:
On Wednesday, 4 May 2022 at 02:42:44 UTC, Mike Parker wrote:
 On Tuesday, 3 May 2022 at 14:57:46 UTC, Alain De Vos wrote:
 Note, It's not i'm against GC. But my preference is to use 
 builtin types and libraries if possible,
 But at the same time be able to be sure memory is given free 
 when a variable is going out of scope.
 It seems not easy to combine the two with a GC which does his 
 best effort but as he likes or not.
What I described is an optional compiler optimization. The compiler is free to avoid the GC allocation for an array literal initializer if it is possible to do so. If you were to, e.g., return the array from the function, it would 100% for sure be allocated on the GC and not the stack. In practice, I don't know if any of the compilers actually do this. Anyway, if you care when memory is deallocated, then the GC isn't the right tool for the job. The point of the GC is that you don't have to care.
GC is about reducing the complexity, cognitive load, and possible bugs - associated with manual memory management. It is certainly *not* about you not having to care anymore (about memory management). Why not have an option to mark an object, so that real-time garbage collection occurs on it as it exits scope?
May 03 2022
parent reply Mike Parker <aldacron gmail.com> writes:
On Wednesday, 4 May 2022 at 04:52:05 UTC, forkit wrote:

 It is certainly *not* about you not having to care anymore 
 (about memory management).
That's not at all what I said. You don't have to care about *when* memory is deallocated, meaning you don't have to manage it yourself.
May 03 2022
parent reply forkit <forkit gmail.com> writes:
On Wednesday, 4 May 2022 at 05:13:04 UTC, Mike Parker wrote:
 On Wednesday, 4 May 2022 at 04:52:05 UTC, forkit wrote:

 It is certainly *not* about you not having to care anymore 
 (about memory management).
That's not at all what I said. You don't have to care about *when* memory is deallocated, meaning you don't have to manage it yourself.
In any case, I disagree that caring about when memory gets deallocated means you shouldn't be using GC. (or did I get that one wrong too??)

You can have the best of both worlds, surely (and easily).

This (example from first post):

    void main(){
        int[] i = new int[10000];

        import object: destroy;
        destroy(i);
        import core.memory: GC;
        GC.free(GC.addrOf(cast(void *)(i.ptr)));
    }

could (in theory) be replaced with this:

    void main(){
        inscope int[] i = new int[10000];

        // inscope means 2 things:
        // (1) i cannot be referenced anywhere except within this scope.
        // (2) i *will* be GC'd when this scope ends
    }
May 03 2022
next sibling parent reply Mike Parker <aldacron gmail.com> writes:
On Wednesday, 4 May 2022 at 05:37:49 UTC, forkit wrote:

 That's not at all what I said. You don't have to care about 
 *when* memory is deallocated, meaning you don't have to manage 
 it yourself.
In any case, I disagree that caring about when memory gets deallocted means you shouldn't be using GC. (or did I get that one wrong too??) You can have the best of both worlds, surely (and easily). This (example from first post): void main(){ int[] i = new int[10000]; import object: destroy; destroy(i); import core.memory: GC; GC.free(GC.addrOf(cast(void *)(i.ptr))); }
All you're doing here is putting unnecessary pressure on the GC. Just use `malloc` and then `free` on `scope(exit)`. Or if you want to append to the array without managing the memory yourself, then use `std.container.array` instead. That's made for deterministic memory management with no GC involvement.
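A minimal sketch of the `std.container.array` suggestion (deterministic, reference-counted storage with no GC involvement):

```d
import std.container.array : Array;
import std.stdio : writeln;

void main() {
    Array!int a;
    foreach (n; 0 .. 10_000)
        a.insertBack(n);   // elements live in malloc'd memory, not on the GC heap
    writeln(a[9_000]);
}                           // storage is released here, when the Array goes out of scope
```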
May 04 2022
parent reply forkit <forkit gmail.com> writes:
On Wednesday, 4 May 2022 at 08:23:33 UTC, Mike Parker wrote:
 On Wednesday, 4 May 2022 at 05:37:49 UTC, forkit wrote:

 That's not at all what I said. You don't have to care about 
 *when* memory is deallocated, meaning you don't have to 
 manage it yourself.
In any case, I disagree that caring about when memory gets deallocted means you shouldn't be using GC. (or did I get that one wrong too??) You can have the best of both worlds, surely (and easily). This (example from first post): void main(){ int[] i = new int[10000]; import object: destroy; destroy(i); import core.memory: GC; GC.free(GC.addrOf(cast(void *)(i.ptr))); }
All you're doing here is putting unnecessary pressure on the GC. Just use `malloc` and then `free` on `scope(exit)`. Or if you want to append to the array without managing the memory yourself, then use `std.container.array` instead. That's made for deterministic memory management with no GC involvement.
Reverting to C style 'malloc and free' is not the solution here, since the intent is not to revert to manually managing dynamically allocated memory. Rather, the intent was to just have 'a simple form of control' over the lifetime of the dynamically allocated memory - the object being pointed to in the GC memory pool.

I understand that my idea may put unnecessary pressure on the existing GC, but a GC (in theory) could surely handle this scenario.. If D had such a feature, I'd already be using it.
May 04 2022
parent reply cc <cc nevernet.com> writes:
The MemUtils package offers a `ScopedPool` utility that seems 
interesting.  It isn't well documented however so I have no idea 
if it actually works like I expect.  I presume this would work 
something akin to a VM memory snapshot/rollback for the GC?  It 
would be pretty handy for some scenarios, say a serialization 
library.  You specify a snapshot point (add a pool to the 
stack?), incur all your GC allocations necessary for generating 
the structure of your serialized data (which go into the pool 
instead of the GC proper?), then you write it to disk and pop the 
stack, effectively rolling back to the original memory state of 
your program's GC.  As long as you make sure not to leak anything 
allocated within that phase, seems like a good deal.

https://code.dlang.org/packages/memutils
May 04 2022
parent forkit <forkit gmail.com> writes:
On Wednesday, 4 May 2022 at 15:04:13 UTC, cc wrote:
 The MemUtils package offers a `ScopedPool` utility that seems 
 interesting.  It isn't well documented however so I have no 
 idea if it actually works like I expect.  I presume this would 
 work something akin to a VM memory snapshot/rollback for the 
 GC?  It would be pretty handy for some scenarios, say a 
 serialization library.  You specify a snapshot point (add a 
 pool to the stack?), incur all your GC allocations necessary 
 for generating the structure of your serialized data (which go 
 into the pool instead of the GC proper?), then you write it to 
 disk and pop the stack, effectively rolling back to the 
 original memory state of your program's GC.  As long as you 
 make sure not to leak anything allocated within that phase, 
 seems like a good deal.

 https://code.dlang.org/packages/memutils
Interesting. My idea was ... objects marked as 'inscope' would be GC allocated in a LIFO region of the heap, rather than the general GC pool.

Explicitly deallocating such objects at end of scope then becomes a no-brainer for the GC (since 'inscope' would ensure at compile time that no pointers/aliasing outside of that scope could exist). The LIFO would also avoid the problem of fragmentation (i.e. if the objects were allocated in the general GC pool instead of a separate pool).

This would give the programmer 'scope-based deallocation of GC allocated memory'.
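That feature doesn't exist today, but as a rough approximation, here is a hedged sketch using std.experimental.allocator's Region (a bump-the-pointer arena; not the proposed 'inscope', and the GC is not involved at all):

```d
import std.experimental.allocator.building_blocks.region : Region;
import std.experimental.allocator.mallocator : Mallocator;
import std.experimental.allocator : makeArray;
import std.stdio : writeln;

void main() {
    auto arena = Region!Mallocator(1024 * 1024); // one scope-local arena
    int[] i = arena.makeArray!int(10_000);       // allocated in the arena, not by the GC
    i[9_000] = 5;
    writeln(i[9_000]);
} // the arena (and everything in it) is released when it goes out of scope
```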
May 04 2022
prev sibling next sibling parent cc <cc nevernet.com> writes:
On Wednesday, 4 May 2022 at 05:37:49 UTC, forkit wrote:
     inscope int[] i = new int[10000];
You often see the "here's an array of ints that exists only in one scope to do one thing, should we leave it floating in memory or destroy it immediately?" as examples for these GC discussions. Not to steal OP's thread and whatever particular needs he's trying to achieve, but hopefully provide another use case: I write games, and performance is the number one priority, and I stumbled heavily with the GC when I first began writing them in D.

Naively, I began writing the same types of engines I always did, and probably thinking with a C/C++ mentality of "just delete anything you create", with a game loop that involved potentially hundreds of entities coming into existence or being destroyed every frame, in >=60 frame per second applications. The results were predictably disastrous, with collections running every couple seconds, causing noticeable stutters in the performance and disruptions of the game timing. It might have been my fault, but it really, really turned me off from the GC completely for a good long while.

I don't know what types of programs the majority of the D community writes. My perception, probably biased, was that D's documentation, tours, and blogs leaned heavily towards "run once, do a thing, and quit" applications that have no problem leaving every single thing up to the GC, and this wasn't necessarily a good fit for programs that run for hours at a time and are constantly changing state. Notably, an early wiki post people with GC issues were directed to revolved heavily around tweaks and suggestions to work within the GC, with the malloc approach treated as a last-resort afterthought.

Pre-allocating lists wasn't a good option as I didn't want to set an upper limit on the number of potential entities. The emergency fix at the time was inserting GC.free to forcibly deallocate things. Ultimately, the obvious *correct* answer is just using the malloc/emplace/free combo, but I'm just disappointed with how ugly and hacky this looks, at least until they've been wrapped in some nice NEW()/DELETE() templates.

```d
auto foo = new Foo;
delete foo; // R.I.P.
```

```d
import core.stdc.stdlib : malloc, free;
import core.lifetime : emplace;

auto foo = cast(Foo) malloc(__traits(classInstanceSize, Foo));
emplace!Foo(foo);

destroy(foo);
free(cast(void*) foo);
```

Can you honestly say the second one looks as clean and proper as the first? Maybe it's a purely cosmetic quibble, but one feels like I'm using the language correctly (I'm not!), and the other feels like I'm breaking it (I'm not!).

I still use the GC for simple niceties like computations and searches that don't occur every frame, though even then I've started leaning more towards std.container.array and similar solutions; additionally, if something IS going to stay in memory forever (once-loaded data files, etc), why put it in the GC at all, if that's just going to increase the area that needs to be scanned when a collection finally does occur? I'd like to experiment more with reference counting in the future, but since it's just kind of a "cool trick" in D currently involving wrapping references in structs, there are some hangups.
Consider for example:

```d
import std.container.array;

struct RC(T : Object) {
	T obj;
	// insert postblit and refcounting magic here
}

class Farm {
	Array!(RC!Animal) animals;
}

class Animal {
	RC!Farm myFarm; // Error: struct `test.RC(T : Object)` recursive template expansion
}
```

Logically, this can lead to leaked memory, as a Farm and Animal that both reference each other going out of scope simultaneously would never get deallocated. But, something like this ought to at least *compile* (it doesn't), and leave it up to the programmer to handle logical leak problems, or so my thinking goes at least. I also really hate having to prepend RC! or RefCounted! to *everything*, unless I wrap it all in prettier aliases.
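For what it's worth, the NEW()/DELETE() wrappers mentioned earlier in this post can be only a few lines each. This is a hedged sketch (names made up), just the malloc/emplace/destroy/free combo factored out; it does not call GC.addRange, so it only suits classes whose fields hold no GC-managed pointers:

```d
import core.stdc.stdlib : malloc, free;
import core.lifetime : emplace;

T NEW(T, Args...)(auto ref Args args) if (is(T == class)) {
    enum size = __traits(classInstanceSize, T);
    void* mem = malloc(size);                // error handling omitted
    return emplace!T(mem[0 .. size], args);  // construct the instance in the raw block
}

void DELETE(T)(ref T obj) if (is(T == class)) {
    destroy(obj);           // run the destructor chain
    free(cast(void*) obj);  // release the raw block
    obj = null;
}

class Foo {
    int x;
    this(int x) { this.x = x; }
}

void main() {
    auto foo = NEW!Foo(123);
    scope(exit) DELETE(foo);
    assert(foo.x == 123);
}
```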
May 04 2022
prev sibling parent reply Ali Çehreli <acehreli yahoo.com> writes:
On 5/3/22 22:37, forkit wrote:

 In any case, I disagree that caring about when memory gets deallocted
 means you shouldn't be using GC. (or did I get that one wrong too??)
At least I don't agree with you there. :) Yes, one should not care about how memory gets freed if one did not care how that memory was allocated.
 You can have the best of both worlds, surely (and easily).
There are always many subtleties.
 This (example from first post):

 void main(){
      int[] i = new int[10000];

      import object: destroy;
      destroy(i);
      import core.memory: GC;
      GC.free(GC.addrOf(cast(void *)(i.ptr)));
 }
So is this about that simplest case?

1) We know the number of elements up front.
2) The elements are simple like 'int' so that there is no separate allocation for each.

If so, I don't think adding a language feature is necessary for this narrow case.

Note: To answer a related question: I haven't designed or implemented any language, haven't contributed to any compiler, etc. I have no strong say on D or any other language. I am just a user who disagrees with the request in this thread.
 could (in theory) be replaced with this:

 void main(){
      inscope int[] i = new int[10000];

      // inscope means 2 things:
      // (1) i cannot be referenced anywhere except within this scope.
      // (2) i *will* be GC'd when this scope ends
If this feature was for more complicated examples as well, i.e. if we ever added new elements, there might be multiple memory allocations by the GC. Are we concerned only about the very last one? So, allocations 0, 1, .. N-1 can be left to the GC but allocation N must be freed now? If not, should the GC keep a list of allocations for each 'inscope' object? (Is this only for arrays? What about potentially multiple allocations for a single object throughout its lifetime?) What about that list itself? :) Should that be allocated from the GC and be 'inscope' as well?

There must be other questions that I miss. What I am sure of is, somebody in the future will be unhappy with how 'inscope' is implemented for their case and will propose another feature on top of it. I think it is better to leave such special decisions to the programmer.

Again though, I would be happy to be corrected.

Ali
May 04 2022
parent reply forkit <forkit gmail.com> writes:
On Wednesday, 4 May 2022 at 12:57:26 UTC, Ali Çehreli wrote:
 On 5/3/22 22:37, forkit wrote:

 In any case, I disagree that caring about when memory gets
deallocted
 means you shouldn't be using GC. (or did I get that one wrong
too??) At least I don't agree with you there. :) Yes, one should not care about how memory gets freed if one did not care how that memory was allocated. ....
That languages with GC typically give the programmer some control over the GC, is evidence that programmers do care (otherwise such features would not be needed).

To deny a programmer the option to release memory that was GC allocated within a particular scope, immediately after that scope exits, seems kinda cruel.

To force a programmer to run a full GC in such a situation, is also kinda cruel.

To force a programmer back to using the ancient malloc/free.... well.. that's even crueler.
May 04 2022
parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Wed, May 04, 2022 at 09:46:50PM +0000, forkit via Digitalmars-d-learn wrote:
[...]
 That languages with GC typically give the programmer some control over
 the GC, is evidence that programmers do care (otherwise such features
 would not be needed).
 
 To deny a programmer the option to release the memory that was GC
 allocated within a particular scope, to be release immediately after
 that scope exits, seems kinda cruel.
[...]

    scope ptr = GC.malloc(size);
    scope(exit) GC.free(ptr);
    ... // use ptr however you like until end of scope

T

--
It's amazing how careful choice of punctuation can leave you hanging:
May 04 2022
parent reply forkit <forkit gmail.com> writes:
On Wednesday, 4 May 2022 at 21:55:18 UTC, H. S. Teoh wrote:
 On Wed, May 04, 2022 at 09:46:50PM +0000, forkit via 
 Digitalmars-d-learn wrote: [...]
 That languages with GC typically give the programmer some 
 control over the GC, is evidence that programmers do care 
 (otherwise such features would not be needed).
 
 To deny a programmer the option to release the memory that was 
 GC allocated within a particular scope, to be release 
 immediately after that scope exits, seems kinda cruel.
[...] scope ptr = GC.malloc(size); scope(exit) GC.free(ptr); ... // use ptr however you like until end of scope T
that's cruel! I just want 'scope-based deallocation of GC allocated memory'. I just want to write one word for this to happen -> 'inscope'
May 04 2022
parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Wed, May 04, 2022 at 10:04:53PM +0000, forkit via Digitalmars-d-learn wrote:
 On Wednesday, 4 May 2022 at 21:55:18 UTC, H. S. Teoh wrote:
 On Wed, May 04, 2022 at 09:46:50PM +0000, forkit via Digitalmars-d-learn
 wrote: [...]
[...]
 To deny a programmer the option to release the memory that was GC
 allocated within a particular scope, to be release immediately
 after that scope exits, seems kinda cruel.
[...] scope ptr = GC.malloc(size); scope(exit) GC.free(ptr); ... // use ptr however you like until end of scope
[...]
 that's cruel!
 
 I just want 'scope-based deallocation of GC allocated memory'.
 
 I just want to write one word for this to happen -> 'inscope'
-------
import std;

// Put this in some common module
auto scoped(T, Args...)(Args args) {
    static struct Result {
        private T* payload;
        ref T get() { return *payload; }
        alias get this;
        ~this() {
            import core.memory : GC;
            writeln("dtor");
            GC.free(payload);
        }
    }
    return Result(new T(args));
}

// Then you can use it in just a single line, as below
void main() {
    struct MyType { int blah, blahblah; }

    auto data = scoped!MyType(10, 20); // <--- like this
    data.blah = 123; // use data as you like

    // automatically frees on scope exit
}
-------

T

--
Тише едешь, дальше будешь.
May 04 2022
prev sibling parent Tejas <notrealemail gmail.com> writes:
On Tuesday, 3 May 2022 at 12:59:31 UTC, Alain De Vos wrote:
 Error: array literal in @nogc function test.myfun may cause a GC allocation

 @nogc void myfun(){
 	scope int[] i=[1,2,3];
 }//myfun

 May is a fuzzy word...
For this particular piece of code, you can use a static array to guarantee the usage of stack allocation

```d
import std;

@nogc void myfun(){
	/* no need to use scope now */ int[3] i=[1,2,3]; // this now compiles
}//myfun

void main() {
    writeln("Hello D");
}
```
May 03 2022