
digitalmars.D - More on C++ stack arrays

reply "bearophile" <bearophileHUGS lycos.com> writes:
More discussions about variable-sized stack-allocated arrays in 
C++, it seems there is not yet a consensus:

http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2013/n3810.pdf

I'd like variable-sized stack-allocated arrays in D.

Bye,
bearophile
Oct 20 2013
next sibling parent "Adam D. Ruppe" <destructionator gmail.com> writes:
On Sunday, 20 October 2013 at 14:25:37 UTC, bearophile wrote:
 I'd like variable-sized stack-allocated arrays in D.
I think I would too, though it'd be pretty important, at least for @safe, to get scope working right. Ideally, the stack-allocated array would be a different type than a normal array, but offer the slice operator, perhaps via alias this, to give back a normal T[] in a scope storage class (the return value could only be used in a context where references cannot escape). This way, the owner is clear and you won't accidentally store it somewhere.

An alternative to a stack-allocated array would be one made from a thread-local region allocator, which returns a Unique!T or similar that frees it when it goes out of scope. Such an allocator would be substantially similar to the system stack, fast to allocate and free, although probably not done in registers and perhaps not as likely to be in CPU cache. But that might not matter much anyway; I don't actually know.
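A minimal sketch of that shape, with a made-up StackArray type; the `scope` enforcement mentioned above is exactly the part this sketch cannot provide on its own:

    struct StackArray(T, size_t capacity)
    {
        private T[capacity] storage;
        private size_t used;

        this(size_t n)
        {
            assert(n <= capacity, "sketch has no heap fallback");
            used = n;
        }

        // The slice points into `storage`; keeping it from escaping is
        // the job `scope` checking would have to do.
        T[] opSlice()
        {
            return storage[0 .. used];
        }

        alias opSlice this;   // lets a StackArray be passed where a T[] is expected
    }

    void use(int[] a) { a[] = 1; }

    void example()
    {
        auto buf = StackArray!(int, 64)(10);
        use(buf[]);   // explicit slice
        use(buf);     // or implicitly, via alias this
    }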
Oct 20 2013
prev sibling next sibling parent Lionello Lunesu <lionello lunesu.remove.com> writes:
On 10/20/13 16:25, bearophile wrote:
 More discussions about variable-sized stack-allocated arrays in C++, it
 seems there is no yet a consensus:

 http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2013/n3810.pdf

 I'd like variable-sized stack-allocated arrays in D.

 Bye,
 bearophile
Good read, but many of the problems don't apply to D ;)

The problem is that it'll probably be like using alloca, which doesn't get cleaned up until after the function exits. Using it within a loop is bound to cause a stack overflow.

I wonder if there's something we can do to 'fix' alloca in that respect.

L.
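A minimal illustration of the loop problem, assuming core.stdc.stdlib.alloca (each call carves out another block that is only released when the function returns):

    import core.stdc.stdlib : alloca;

    void worker(size_t n)
    {
        foreach (i; 0 .. 10_000)
        {
            // Every iteration reserves n more bytes of the frame; none of it
            // is released until worker() returns, so the stack keeps growing.
            void* p = alloca(n);
            // ... use p ...
        }
    }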
Oct 20 2013
prev sibling next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 10/20/2013 7:25 AM, bearophile wrote:
 More discussions about variable-sized stack-allocated arrays in C++, it seems
 there is no yet a consensus:

 http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2013/n3810.pdf

 I'd like variable-sized stack-allocated arrays in D.
They're far more trouble than they're worth. Just use:

    auto a = new T[n];

Stack allocated arrays are far more trouble than they're worth. But what about efficiency? Here's what I often do, something along the lines of:

    T[10] tmp;
    T[] a;
    if (n <= 10)
        a = tmp[0..n];
    else
        a = new T[n];
    scope (exit) if (a != tmp) delete a;

The size of the static array is selected so the dynamic allocation is almost never necessary.
Oct 20 2013
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 10/20/13 9:33 AM, Walter Bright wrote:
 Stack allocated arrays are far more trouble than they're worth. But what
 about efficiency? Here's what I often do something along the lines of:

      T[10] tmp;
      T[] a;
      if (n <= 10)
      a = tmp[0..n];
      else
      a = new T[n];
      scope (exit) if (a != tmp) delete a;

 The size of the static array is selected so the dynamic allocation is
 almost never necessary.
Fallback allocators will make it easy to define an allocator on top of a fixed array, backed by another allocator when capacity is exceeded.

BTW I'm scrambling to make std.allocator available for people to look at and experiment with.

Andrei
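The building blocks described here eventually shipped as std.experimental.allocator; a minimal sketch of the fixed-array-with-fallback pattern using those names (not yet available at the time of this post):

    import std.experimental.allocator : makeArray;
    import std.experimental.allocator.building_blocks.fallback_allocator : FallbackAllocator;
    import std.experimental.allocator.building_blocks.region : InSituRegion;
    import std.experimental.allocator.gc_allocator : GCAllocator;

    void work(size_t n)
    {
        // The first 1 KiB comes from storage embedded in the allocator itself,
        // i.e. this stack frame; larger requests fall back to the GC heap.
        FallbackAllocator!(InSituRegion!1024, GCAllocator) alloc;
        auto a = alloc.makeArray!int(n);
        a[] = 42;
        // Nothing to free: the in-situ storage dies with the frame, and the
        // fallback path is GC-managed.
    }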
Oct 20 2013
parent Joseph Rushton Wakeling <joseph.wakeling webdrake.net> writes:
On 20/10/13 18:57, Andrei Alexandrescu wrote:
 Fallback allocators will make it easy to define an allocator on top of a fixed
 array, backed by another allocator when capacity is exceeded. BTW I'm
scrambling
 to make std.allocator available for people to look at and experiment with.
Great to hear, I'm looking forward to seeing that. :-)
Oct 20 2013
prev sibling next sibling parent reply "Namespace" <rswhite4 googlemail.com> writes:
On Sunday, 20 October 2013 at 16:33:35 UTC, Walter Bright wrote:
 On 10/20/2013 7:25 AM, bearophile wrote:
 More discussions about variable-sized stack-allocated arrays 
 in C++, it seems
 there is no yet a consensus:

 http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2013/n3810.pdf

 I'd like variable-sized stack-allocated arrays in D.
They're far more trouble than they're worth. Just use: auto a = new T[n]; Stack allocated arrays are far more trouble than they're worth. But what about efficiency? Here's what I often do something along the lines of: T[10] tmp; T[] a; if (n <= 10) a = tmp[0..n]; else a = new T[n]; scope (exit) if (a != tmp) delete a; The size of the static array is selected so the dynamic allocation is almost never necessary.
But delete is deprecated. ;)
Oct 20 2013
parent Walter Bright <newshound2 digitalmars.com> writes:
On 10/20/2013 9:56 AM, Namespace wrote:
 But delete is deprecated. ;)
I know. But I wanted to show where to put the free, in the case where you're doing manual allocation.
Oct 20 2013
prev sibling next sibling parent "Adam D. Ruppe" <destructionator gmail.com> writes:
On Sunday, 20 October 2013 at 16:33:35 UTC, Walter Bright wrote:
 Stack allocated arrays are far more trouble than they're worth. 
 But what about efficiency? Here's what I often do something 
 along the lines of:
Aye, that's a pretty good solution too.
     scope (exit) if (a != tmp) delete a;
but I think you meant if(a !is tmp) :) Though, even that isn't necessarily right, since you might use a to iterate through it (e.g. a = a[1 .. $]), so I'd use a separate flag variable for it.
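A sketch of the flag-based variant, with core.memory.GC.free standing in for the deprecated delete and a made-up function name:

    import core.memory : GC;

    void process(T)(size_t n)
    {
        T[10] tmp;
        immutable onHeap = n > tmp.length;
        T[] storage = onHeap ? new T[n] : tmp[0 .. n];
        // Free the original allocation, not whatever `a` ends up pointing at.
        scope (exit) if (onHeap) GC.free(storage.ptr);

        T[] a = storage;
        // ... a can be re-sliced (a = a[1 .. $]) without confusing the cleanup ...
    }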
Oct 20 2013
prev sibling next sibling parent reply "bearophile" <bearophileHUGS lycos.com> writes:
Walter Bright:

 Just use:

     auto a = new T[n];
Sometimes I don't want to do that.
 Stack allocated arrays are far more trouble than they're worth.
I don't believe that.
 But what about efficiency? Here's what I often do something 
 along the lines of:

     T[10] tmp;
     T[] a;
     if (n <= 10)
 	a = tmp[0..n];
     else
 	a = new T[n];
     scope (exit) if (a != tmp) delete a;

 The size of the static array is selected so the dynamic 
 allocation is almost never necessary.
That's 7 lines of bug-prone code that uses a deprecated functionality and sometimes over-allocates on the stack. And I think you have to compare just the .ptr of those arrays at the end. And if you return one of such arrays you will produce nothing good. And what if you need 2D arrays? The code becomes even more complex. (You can of course create a matrix struct for that.)

Dynamically sized stack-allocated arrays are meant to solve all those problems: to offer a nice, compact, clean, easy to remember and safe syntax; to be usable for 2D arrays too; and when you pass or return one of them the data is copied by the compiler to the heap (sometimes this doesn't happen if the optimizing compiler allocates the array in the stack frame of the caller, as is sometimes done for structs).

D dynamic array usage should decrease and D should encourage much more the usage of small stack-allocated arrays. This is what languages such as Ada and Rust teach us. Heap allocation of arrays should be much less common, almost a special case.

----------------

Andrei Alexandrescu:

 Fallback allocators will make it easy to define an allocator on 
 top of a fixed array,

This over-allocates on the stack, and sometimes needlessly allocates on the heap or in an arena. Dynamic stack arrays avoid those downsides.

Bye,
bearophile
Oct 20 2013
next sibling parent "Froglegs" <barf barf.com> writes:
  One of my most anticipated C++14 features actually, hope they 
don't dawdle too much with the TS it apparently got pushed back 
into:(
Oct 20 2013
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 10/20/2013 10:46 AM, bearophile wrote:
 That's 7 lines of bug-prone code that uses a deprecated functionality and
 sometimes over-allocates on the stack. And I think you have to compare just the
 .ptr of those arrays at the end. And if you return one of such arrays you will
 produce nothing good. And what if you need 2D arrays? The code becomes even
more
 complex. (You can of course create a matrix struct for that).

 Dynamically sized stack allocated arrays are meant to solve all those problems:
 to offer a nice, compact, clean, easy to remember and safe syntax. To be usable
 for 2D arrays too; and when you pass or return one of them the data is copied
by
 the compiler on the heap (sometimes this doesn't happen if the optimizing
 compiler allocates the array in the stack frame of the caller, as sometimes
done
 for structs).
If your optimizing compiler is that good, it can optimize "new T[n]" to be on the stack as well. I'm not particularly enamored with the compiler inserting silent copying to the heap - D programmers tend to not like such things.
 D dynamic array usage should decrease and D should encourage much more the
usage
 of small stack-allocated arrays. This is what languages as Ada and Rust teach
 us. Heap allocation of arrays should be much less common, almost a special
case.
Rust is barely used at all, and constantly changes. I saw a Rust presentation recently by one of its developers, and he said his own slides showing pointer stuff were obsolete. I don't think there's enough experience with Rust to say it teaches us how to do things.
 This over-allocates on the stack,
I use this technique frequently. Allocating a few extra bytes on the stack generally costs nothing unless you're in a recursive function. Of course, if you're in a recursive function, stack allocated dynamic arrays can have unpredictable stack overflow issues.
 and sometimes needlessly allocates on the heap
 or in an arena. Dynamic stack arrays avoid those downsides.
The technique I showed is also generally faster than dynamic stack allocation.
Oct 20 2013
next sibling parent reply "David Nadlinger" <code klickverbot.at> writes:
On Sunday, 20 October 2013 at 18:42:06 UTC, Walter Bright wrote:
 If your optimizing compiler is that good, it can optimize "new 
 T[n]" to be on the stack as well.
Just a side note: LDC actually does this if it can prove statically that the size is bounded. Unfortunately, the range detection is rather conservative (unless your allocation size turns out to be a constant due to inlining, LLVM is unlikely to get it).

One idea that might be interesting to think about is to insert a run-time check for the size if an allocation is known not to be escaped, but the size is not yet determined. As a GC allocation is very expensive anyway, this probably wouldn't even be much of a pessimization in the general case.
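A purely illustrative sketch of what such a lowering could look like at the source level (neither LDC nor DMD emits this; the threshold is made up):

    // What the compiler could conceptually rewrite
    //     auto a = new int[n];
    // into, once it has proven that `a` never escapes the function.
    void consumer(size_t n)
    {
        enum stackLimit = 256;          // run-time check threshold
        int[stackLimit] buf = void;
        int[] a;
        if (n <= stackLimit)
        {
            a = buf[0 .. n];
            a[] = 0;                    // new int[n] zero-initializes, so match that
        }
        else
        {
            a = new int[n];             // too big: fall back to the GC as before
        }
        // ... original function body using `a` ...
    }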
 I'm not particularly enamored with the compiler inserting 
 silent copying to the heap - D programmers tend to not like 
 such things.
Well, this is exactly what happens with closures, so one could argue that there is precedent. In general, I agree with you, though.
 I use this technique frequently. Allocating a few extra bytes 
 on the stack generally costs nothing unless you're in a 
 recursive function. Of course, if you're in a recursive 
 function, stack allocated dynamic arrays can have unpredictable 
 stack overflow issues.
I also find this pattern to be very useful. The LLVM support libraries even package it up into a nice llvm::SmallVector<T, n> template that allocates space for n elements inside the object, falling back to heap allocation only if that threshold has been exceeded (a tunable small string optimization, if you want). David
Oct 20 2013
next sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 10/20/2013 12:15 PM, David Nadlinger wrote:
 I'm not particularly enamored with the compiler inserting silent copying to
 the heap - D programmers tend to not like such things.
Well, this is exactly what happens with closures, so one could argue that there is precedent.
Not at all. The closure code does not *copy* the data to the heap. It is allocated on the heap to start with.
 I also find this pattern to be very useful. The LLVM support libraries even
 package it up into a nice llvm::SmallVector<T, n> template that allocates space
 for n elements inside the object, falling back to heap allocation only if that
 threshold has been exceeded (a tunable small string optimization, if you want).
Nice!
Oct 20 2013
prev sibling parent reply Iain Buclaw <ibuclaw ubuntu.com> writes:
On 20 October 2013 20:15, David Nadlinger <code klickverbot.at> wrote:
 On Sunday, 20 October 2013 at 18:42:06 UTC, Walter Bright wrote:
 If your optimizing compiler is that good, it can optimize "new T[n]" to be
 on the stack as well.
Just a side note: LDC actually does this if it can prove statically that the size is bounded. Unfortunately, the range detection is rather conservative (unless your allocation size turns out to be a constant due to inlining, LLVM is unlikely to get it). One idea that might be interesting to think about is to insert a run-time check for the size if an allocation is known not to be escaped, but the size is not yet determined. As a GC allocation is very expensive anyway, this probably wouldn't even be much of a pessimization in the general case.
David, can you check the code generation of: http://dpaste.dzfl.pl/3e333df6

PS: Walter, it looks like the above causes an ICE in DMD? -- Iain Buclaw *(p < e ? p++ : p) = (c & 0x0f) + '0';
Oct 21 2013
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 10/21/2013 9:24 AM, Iain Buclaw wrote:
 http://dpaste.dzfl.pl/3e333df6

 PS:  Walter, looks the above causes an ICE in DMD?
All ICE's should be filed in bugzilla: http://d.puremagic.com/issues/show_bug.cgi?id=11315
Oct 21 2013
parent reply Iain Buclaw <ibuclaw ubuntu.com> writes:
On 21 October 2013 18:42, Walter Bright <newshound2 digitalmars.com> wrote:
 On 10/21/2013 9:24 AM, Iain Buclaw wrote:
 http://dpaste.dzfl.pl/3e333df6

 PS:  Walter, looks the above causes an ICE in DMD?
All ICE's should be filed in bugzilla: http://d.puremagic.com/issues/show_bug.cgi?id=11315
I've told enough people to raise bugs in GDC to know this. My intention wasn't to find a bug in DMD though when I pasted that link. ;-)

I was more curious what LDC does if it stack allocates array literals assigned to static arrays in that program. My guess is that the dynamic array will get the address of the stack-allocated array literal, and its values will be lost after calling fill(). If so, this is another bug that needs to be filed and fixed. -- Iain Buclaw *(p < e ? p++ : p) = (c & 0x0f) + '0';
Oct 21 2013
parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 10/21/2013 07:53 PM, Iain Buclaw wrote:
 On 21 October 2013 18:42, Walter Bright <newshound2 digitalmars.com> wrote:
 On 10/21/2013 9:24 AM, Iain Buclaw wrote:
 http://dpaste.dzfl.pl/3e333df6

 PS:  Walter, looks the above causes an ICE in DMD?
All ICE's should be filed in bugzilla: http://d.puremagic.com/issues/show_bug.cgi?id=11315
I've told enough people to raise bugs in GDC to know this. My intention wasn't to find a bug in DMD though when I pasted that link. ;-) I was more curious what LDC does if it stack allocates array literals assigned to static arrays in that program. My guess is that the dynamic array will get the address of the stack allocated array literal, and it's values will be lost after calling fill(); If so, this is another bug that needs to be filled and fixed.
Why? AFAICS it is the expected behaviour in any case.
Oct 21 2013
parent reply Iain Buclaw <ibuclaw ubuntu.com> writes:
On 21 October 2013 21:24, Timon Gehr <timon.gehr gmx.ch> wrote:
 On 10/21/2013 07:53 PM, Iain Buclaw wrote:
 On 21 October 2013 18:42, Walter Bright <newshound2 digitalmars.com>
 wrote:
 On 10/21/2013 9:24 AM, Iain Buclaw wrote:
 http://dpaste.dzfl.pl/3e333df6

 PS:  Walter, looks the above causes an ICE in DMD?
All ICE's should be filed in bugzilla: http://d.puremagic.com/issues/show_bug.cgi?id=11315
I've told enough people to raise bugs in GDC to know this. My intention wasn't to find a bug in DMD though when I pasted that link. ;-) I was more curious what LDC does if it stack allocates array literals assigned to static arrays in that program. My guess is that the dynamic array will get the address of the stack allocated array literal, and it's values will be lost after calling fill(); If so, this is another bug that needs to be filled and fixed.
Why? AFAICS it is the expected behaviour in any case.
It's an assignment to a dynamic array, so it should invoke the GC and do a _d_arraycopy. -- Iain Buclaw *(p < e ? p++ : p) = (c & 0x0f) + '0';
Oct 21 2013
parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 10/21/2013 10:32 PM, Iain Buclaw wrote:
 On 21 October 2013 21:24, Timon Gehr <timon.gehr gmx.ch> wrote:
 On 10/21/2013 07:53 PM, Iain Buclaw wrote:
 On 21 October 2013 18:42, Walter Bright <newshound2 digitalmars.com>
 wrote:
 On 10/21/2013 9:24 AM, Iain Buclaw wrote:
 http://dpaste.dzfl.pl/3e333df6

 PS:  Walter, looks the above causes an ICE in DMD?
All ICE's should be filed in bugzilla: http://d.puremagic.com/issues/show_bug.cgi?id=11315
I've told enough people to raise bugs in GDC to know this. My intention wasn't to find a bug in DMD though when I pasted that link. ;-) I was more curious what LDC does if it stack allocates array literals assigned to static arrays in that program. My guess is that the dynamic array will get the address of the stack allocated array literal, and it's values will be lost after calling fill(); If so, this is another bug that needs to be filled and fixed.
Why? AFAICS it is the expected behaviour in any case.
It's an assignment to a dynamic array, so it should invoke the GC and do a _d_arraycopy.
This code:

    int[] x;
    int[3] y;
    x = y = [1,2,3];

Is equivalent to this code:

    int[] x;
    int[3] y;
    y = [1,2,3];
    x = y; // <-- here

Are you saying the line marked with "here" should perform an implicit allocation and copy the contents of y to the heap?
Oct 21 2013
parent reply Iain Buclaw <ibuclaw ubuntu.com> writes:
On 21 October 2013 21:41, Timon Gehr <timon.gehr gmx.ch> wrote:
 On 10/21/2013 10:32 PM, Iain Buclaw wrote:
 On 21 October 2013 21:24, Timon Gehr <timon.gehr gmx.ch> wrote:
 On 10/21/2013 07:53 PM, Iain Buclaw wrote:
 On 21 October 2013 18:42, Walter Bright <newshound2 digitalmars.com>
 wrote:
 On 10/21/2013 9:24 AM, Iain Buclaw wrote:
 http://dpaste.dzfl.pl/3e333df6

 PS:  Walter, looks the above causes an ICE in DMD?
All ICE's should be filed in bugzilla: http://d.puremagic.com/issues/show_bug.cgi?id=11315
I've told enough people to raise bugs in GDC to know this. My intention wasn't to find a bug in DMD though when I pasted that link. ;-) I was more curious what LDC does if it stack allocates array literals assigned to static arrays in that program. My guess is that the dynamic array will get the address of the stack allocated array literal, and it's values will be lost after calling fill(); If so, this is another bug that needs to be filled and fixed.
Why? AFAICS it is the expected behaviour in any case.
It's an assignment to a dynamic array, so it should invoke the GC and do a _d_arraycopy.
This code: int[] x; int[3] y; x = y = [1,2,3]; Is equivalent to this code: int[] x; int[3] y; y = [1,2,3]; x = y; // <-- here Are you saying the line marked with "here" should perform an implicit allocation and copy the contents of y to the heap?
In GDC, the allocation currently is:

    y = [1,2,3]; // <--- here

So it is safe to not copy. But yes, I think a GC memcopy should be occurring, as dynamic arrays aren't passed by value, so they are expected to last the lifetime of the reference to the address. -- Iain Buclaw *(p < e ? p++ : p) = (c & 0x0f) + '0';
Oct 21 2013
parent reply "David Nadlinger" <code klickverbot.at> writes:
On Monday, 21 October 2013 at 21:07:46 UTC, Iain Buclaw wrote:
 But yes.  I think a GC memcopy should be occuring, as dynamic 
 arrays
 aren't passed by value, so are expected to last the lifetime of 
 the
 reference to the address.
This doesn't produce a heap copy (neither according to the spec nor to actual DMD/LDC behaviour):

---
void foo() {
    int[3] a;
    int[] b = a;
}
---

Thus, your example will not copy any data either, as due to associativity, it is equivalent to an assignment to y followed by an assignment of y to x. x simply is a slice of the stack-allocated static array.

David
Oct 21 2013
parent reply Iain Buclaw <ibuclaw ubuntu.com> writes:
On 21 October 2013 22:24, David Nadlinger <code klickverbot.at> wrote:
 On Monday, 21 October 2013 at 21:07:46 UTC, Iain Buclaw wrote:
 But yes.  I think a GC memcopy should be occuring, as dynamic arrays
 aren't passed by value, so are expected to last the lifetime of the
 reference to the address.
This doesn't produce a heap copy (neither according to the spec nor to actual DMD/LDC behaviour): --- void foo() { int[3] a; int[] b = a; } --- Thus, your example will not copy any data either, as due to associativity, it is equivalent to an assignment to y followed by an assignment of y to x. x simply is a slice of the stack-allocated static array.
I know this, but it does deter me against changing gdc over to stack allocating array literals. :-) I'll mull on it over night. -- Iain Buclaw *(p < e ? p++ : p) = (c & 0x0f) + '0';
Oct 21 2013
parent reply "David Nadlinger" <code klickverbot.at> writes:
On Monday, 21 October 2013 at 21:41:24 UTC, Iain Buclaw wrote:
 I know this, but it does deter me against changing gdc over to 
 stack
 allocating array literals. :-)

 I'll mull on it over night.
There is no change in behaviour due to stack-allocating the literal in the static array assignment (or just not emitting it at all), at least if GDC correctly implements slice <- sarray assignment. The dynamic array never "sees" the literal at all, as shown by Timon. In the general case (i.e. when assigned to dynamic arrays), you obviously can't stack-allocate literals, but I don't think we disagree here. David
Oct 21 2013
parent Iain Buclaw <ibuclaw ubuntu.com> writes:
On 21 October 2013 22:48, David Nadlinger <code klickverbot.at> wrote:
 On Monday, 21 October 2013 at 21:41:24 UTC, Iain Buclaw wrote:
 I know this, but it does deter me against changing gdc over to stack
 allocating array literals. :-)

 I'll mull on it over night.
There is no change in behaviour due to stack-allocating the literal in the static array assignment (or just not emitting it at all), at least if GDC correctly implements slice <- sarray assignment. The dynamic array never "sees" the literal at all, as shown by Timon. In the general case (i.e. when assigned to dynamic arrays), you obviously can't stack-allocate literals, but I don't think we disagree here.
That we do not. :o) -- Iain Buclaw *(p < e ? p++ : p) = (c & 0x0f) + '0';
Oct 21 2013
prev sibling parent reply "bearophile" <bearophileHUGS lycos.com> writes:
Walter Bright:

 If your optimizing compiler is that good, it can optimize "new 
 T[n]" to be on the stack as well.
That's escape analysis, and it fails as soon as you return the array, unless you also analyze the caller and allocate in the caller's stack frame; but that can't be done if the length of the array is computed in the middle of the called function.

From what I've seen, escape analysis is not bringing Java close to D performance when you use 3D vectors implemented as small class instances. We need something that guarantees stack allocation if there's enough space on the stack.
 I'm not particularly enamored with the compiler inserting 
 silent copying to the heap - D programmers tend to not like 
 such things.
An alternative solution is to statically require a ".dup" if you want to return one of such arrays (so it becomes a normal dynamic array). This makes the heap allocation visible. Another alternative is to copy the data to the stack frame of the caller, but if you do this there are some cases where such an array can't be put there (as in the C++ proposals); still, this is not too bad.
 Rust is barely used at all,
Right, there is only an experimental compiler written in it, and little more, like most of the compiler. On the other hand, Ada has been around for a long time. And in the Ada 2005 standard library they have added bounded containers, so you can even allocate max-sized associative arrays on the stack :-) This shows how much they care about not using the heap. I think that generally Ada code allocates on the heap much less often than D code does.
 Allocating a few extra bytes on the stack generally costs 
 nothing unless you're in a recursive function.
If you over-allocate you are using more stack space than necessary; this means you are moving away from cache-warm parts of the stack to parts that are outside the L1 or L2 cache. This costs you time. Saving stack saves some run-time.

Another problem is that D newbies and normal usage of D tend to stick to the simplest coding patterns. Your coding pattern is bug-prone even for you, and it's not what programmers will use in casual D code. Stack allocation of (variable-sized) arrays should become much simpler, otherwise most people in most cases will use heap allocation. Such allocation is not silent, but it's not better than the "silent heap allocations" discussed above.
 Of course, if you're in a recursive function, stack allocated 
 dynamic arrays can have unpredictable stack overflow issues.
Unless you are using a segmented stack, as Go or Rust do.
 The technique I showed is also generally faster than dynamic 
 stack allocation.
Do you have links to benchmarks? Bye, bearophile
Oct 20 2013
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 10/20/2013 12:23 PM, bearophile wrote:
 Walter Bright:

 If your optimizing compiler is that good, it can optimize "new T[n]" to be on
 the stack as well.
That's escape analysis,
Yes, I know :-)
 and it returns a failure as soon as you return the
 array, unless you also analyze the caller, and allocate in the caller stack
 frame, but this can't be done if the length of the array is computed in the
 middle of the called function.
Yes. I know you don't believe me :-) but I am familiar with data flow analysis and what it can achieve.
 Another problem is that D newbies and normal usage of D tends to stick to the
 simplest coding patterns. Your coding pattern is bug-prone even for you
I haven't had bugs with my usage of it.
 and it's not what programmers will use in casual D code. Stack allocation of
(variable
 sized) arrays should become much simpler, otherwise most people in most cases
 will use heap allocation. Such allocation is not silent, but it's not better
 than the "silent heap allocations" discussed above.


 Of course, if you're in a recursive function, stack allocated dynamic arrays
 can have unpredictable stack overflow issues.
Unless you are using a segmented stack as Go or Rust.
Segmented stacks have performance problems and do not interface easily with C functions. Go is not known for high performance execution, and we'll see about Rust.
 The technique I showed is also generally faster than dynamic stack allocation.
Do you have links to benchmarks?
No. But I do know that alloca() causes pessimizations in the code generation, and it costs many instructions to execute. Allocating fixed size things on the stack executes zero instructions.
Oct 20 2013
parent reply "Tove" <tove fransson.se> writes:
On Sunday, 20 October 2013 at 19:42:29 UTC, Walter Bright wrote:
 On 10/20/2013 12:23 PM, bearophile wrote:
 Walter Bright:
No. But I do know that alloca() causes pessimizations in the code generation, and it costs many instructions to execute. Allocating fixed size things on the stack executes zero instructions.
1) Alloca allows allocating in the parent context, which is guaranteed to elide copying, without relying on a "sufficiently smart compiler".

    ref E stalloc(E)(ref E mem = *(cast(E*)alloca(E.sizeof)))
    {
        return mem;
    }

2) If only accessing the previous function parameter were supported (which is just an arbitrary restriction), it would be sufficient to create a helper function to implement VLAs.

3) Your "fixed size stack allocation" could be combined with alloca as well, in which case it would likely be faster still.
Oct 20 2013
parent Nick Treleaven <ntrel-public yahoo.co.uk> writes:
On 20/10/2013 21:39, Tove wrote:
 ref E stalloc(E)(ref E mem = *(cast(E*)alloca(E.sizeof)))
 {
    return mem;
 }
Another trick is to use a template alias parameter for array length:

    T[] stackArray(T, alias N)(void* m = alloca(T.sizeof * N))
    {
        return (cast(T*)m)[0 .. N];
    }

    void main(string[] args)
    {
        auto n = args.length;
        int[] arr = stackArray!(int, n)();
    }

Note: The built-in length property couldn't be aliased when I tested this, hence 'n'.

Reference:
http://forum.dlang.org/post/aepqtotvkjyausrlsmad forum.dlang.org
Oct 22 2013
prev sibling parent reply Bruno Medeiros <brunodomedeiros+dng gmail.com> writes:
On 20/10/2013 20:23, bearophile wrote:
  From what I've seen escape analysis is not bringing Java close to D
 performance when you use 3D vectors implemented as small class
 instances. We need something that guarantees stack allocation if there's
 enough space on the stack.
If my recollection and understanding are correct, that's not due to a limitation in the algorithm itself of Java's escape analysis, but because Java arrays are allocated using a native call (even within the Java bytecode layer, that is), and the escape analysis does not see beyond any native call, even if it originates from a Java operation with well-known semantics (with regard to escape analysis). Therefore it can't elide the allocations... :/ -- Bruno Medeiros - Software Engineer
Oct 22 2013
parent reply Paulo Pinto <pjmlp progtools.org> writes:
Am 22.10.2013 14:26, schrieb Bruno Medeiros:
 On 20/10/2013 20:23, bearophile wrote:
  From what I've seen escape analysis is not bringing Java close to D
 performance when you use 3D vectors implemented as small class
 instances. We need something that guarantees stack allocation if there's
 enough space on the stack.
If my recollection and understanding are correct, that's not due to a limitation in the algorithm itself of Java's escape analysis, but because Java arrays are allocated using a native call (even within the Java bytecode layer that is), and the escape analysis does not see beyond any native call. Even if it originates from a Java operation with well-known semantics (with regards to escape analysis). Thefore it can't ellide the allocations... :/
Just thinking out loud, I would say it is JVM specific how much the implementors have improved escape analysis. -- Paulo
Oct 22 2013
parent reply "deadalnix" <deadalnix gmail.com> writes:
On Tuesday, 22 October 2013 at 15:07:47 UTC, Paulo Pinto wrote:
 Just thinking out loud, I would say it is JVM specific how much 
 the implementors have improved escape analysis.
Even better, some do it even when escape analysis isn't proven, just observed at runtime. If it turns out the JVM is wrong, the object is moved to the heap at the escape point.
Oct 22 2013
parent Paulo Pinto <pjmlp progtools.org> writes:
Am 22.10.2013 19:51, schrieb deadalnix:
 On Tuesday, 22 October 2013 at 15:07:47 UTC, Paulo Pinto wrote:
 Just thinking out loud, I would say it is JVM specific how much the
 implementors have improved escape analysis.
Even better, some does it even when escape analysis isn't proven, just noticed at runtime. If it turns out the JVM is wrong, the object is moved on head at the escape point.
Yep, I must confess I keep jumping between both sides of the fence about the whole JIT vs AOT compilation debate, depending on the use case and deployment scenario.

For example, as a language geek it was quite interesting to discover that OS/400 has a kernel JIT with a bytecode-based userspace. Or that there were Native Oberon ports that used JIT on module load for the whole OS, instead of AOT; only the boot loader, some critical drivers and the kernel module were AOT. -- Paulo
Oct 22 2013
prev sibling next sibling parent reply Jonathan M Davis <jmdavisProg gmx.com> writes:
On Sunday, October 20, 2013 09:33:36 Walter Bright wrote:
 On 10/20/2013 7:25 AM, bearophile wrote:
 More discussions about variable-sized stack-allocated arrays in C++, it
 seems there is no yet a consensus:
 
 http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2013/n3810.pdf
 
 I'd like variable-sized stack-allocated arrays in D.
They're far more trouble than they're worth. Just use: auto a = new T[n]; Stack allocated arrays are far more trouble than they're worth. But what about efficiency? Here's what I often do something along the lines of: T[10] tmp; T[] a; if (n <= 10) a = tmp[0..n]; else a = new T[n]; scope (exit) if (a != tmp) delete a; The size of the static array is selected so the dynamic allocation is almost never necessary.
If that paradigm is frequent enough, it might be worth wrapping it in a struct. Then, you'd probably get something like

    StaticArray!(int, 10) tmp(n);
    int[] a = tmp[];

which used T[10] if n was 10 or less and allocated T[] otherwise. The destructor could then deal with freeing the memory.

- Jonathan M Davis
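A rough sketch of what such a wrapper could look like (hypothetical; it uses a GC fallback, so the destructor mentioned above isn't needed here, while a malloc-based fallback would free in ~this()):

    struct StaticArray(T, size_t capacity)
    {
        private T[capacity] buffer;
        private T[] heap;       // used only when n exceeds capacity
        private size_t len;

        this(size_t n)
        {
            len = n;
            if (n > capacity)
                heap = new T[n];
        }

        // Hand back an ordinary T[] over whichever storage is in use.
        T[] opSlice()
        {
            return heap !is null ? heap[0 .. len] : buffer[0 .. len];
        }
    }

    void example(size_t n)
    {
        auto tmp = StaticArray!(int, 10)(n);
        int[] a = tmp[];
        a[] = 42;
    }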
Oct 20 2013
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 10/20/2013 5:59 PM, Jonathan M Davis wrote:
 If that paradigm is frequent enough, it might be worth wrapping it in a
 struct. Then, you'd probably get something like

 StaticArray!(int, 10) tmp(n);
 int[] a = tmp[];

 which used T[10] if n was 10 or less and allocated T[] otherwise. The
 destructor could then deal with freeing the memory.
Sounds like a good idea - and it should fit in with Andrei's nascent allocator design.
Oct 20 2013
next sibling parent reply Manu <turkeyman gmail.com> writes:
On 21 October 2013 11:48, Walter Bright <newshound2 digitalmars.com> wrote:

 On 10/20/2013 5:59 PM, Jonathan M Davis wrote:

 If that paradigm is frequent enough, it might be worth wrapping it in a
 struct. Then, you'd probably get something like

 StaticArray!(int, 10) tmp(n);
 int[] a = tmp[];

 which used T[10] if n was 10 or less and allocated T[] otherwise. The
 destructor could then deal with freeing the memory.
Sounds like a good idea - and it should fit in with Andrei's nascent allocator design.
I use this pattern all over the place. I don't love it though; it doesn't feel elegant at all and it wastes stack space, but it's acceptable, and I'd really like to see this pattern throughout Phobos, especially where strings and paths are concerned. System interface functions that pass zero-terminated strings through to the OS are the primary offenders, needless garbage; those should be on the stack.

I like to use alloca too where it's appropriate. I'd definitely like it if D had a variable-sized static array syntax for pretty-ing alloca. I thought about something similar using alloca via a mixin template, but that feels really hacky!
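A sketch of the stack-buffer pattern for zero-terminated OS strings (made-up helper and OS-call names; the tempCString mentioned in the reply below packages the same idea):

    void callOS(const(char)[] path)
    {
        char[256] buf = void;             // short paths stay on the stack
        char[] storage = path.length < buf.length
            ? buf[0 .. path.length + 1]
            : new char[path.length + 1];  // rare fallback for long paths
        storage[0 .. path.length] = path[];
        storage[path.length] = '\0';
        // someCFunction(storage.ptr);    // hypothetical OS call taking a char*
    }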
Oct 21 2013
parent reply Denis Shelomovskij <verylonglogin.reg gmail.com> writes:
21.10.2013 14:30, Manu пишет:
 System interface functions that pass zero-terminated strings through to
 the OS are the primary offender, needless garbage, those should be on
 the stack.

 I like to use alloca too where it's appropriate. I'd definitely like if
 D had a variable-sized static array syntax for pretty-ing alloca.
 I thought about something similar using alloca via a mixin template, but
 that feels really hackey!
No hacks needed. See `unstd.c.string` module from previous post: http://forum.dlang.org/thread/lqdktyndevxfcewgthcj forum.dlang.org?page=2#post-l42evp:241ok7:241:40digitalmars.com -- Денис В. Шеломовский Denis V. Shelomovskij
Oct 21 2013
parent reply Manu <turkeyman gmail.com> writes:
On 21 October 2013 21:24, Denis Shelomovskij <verylonglogin.reg gmail.com> wrote:

 21.10.2013 14:30, Manu пишет:

  System interface functions that pass zero-terminated strings through to
 the OS are the primary offender, needless garbage, those should be on
 the stack.

 I like to use alloca too where it's appropriate. I'd definitely like if
 D had a variable-sized static array syntax for pretty-ing alloca.
 I thought about something similar using alloca via a mixin template, but
 that feels really hackey!
No hacks needed. See `unstd.c.string` module from previous post: http://forum.dlang.org/thread/**lqdktyndevxfcewgthcj forum.** dlang.org?page=3D2#post-l42evp:**241ok7:241:40digitalmars.com<http://foru=
m.dlang.org/thread/lqdktyndevxfcewgthcj forum.dlang.org?page=3D2#post-l42ev= p:241ok7:241:40digitalmars.com> Super awesome! Phobos devs should be encouraged to use these in non-recursive functions (particularly OS pass-through's).
Oct 21 2013
next sibling parent reply dennis luehring <dl.soluz gmx.net> writes:
Am 21.10.2013 15:04, schrieb Manu:
 On 21 October 2013 21:24, Denis Shelomovskij
<verylonglogin.reg gmail.com>wrote:

 21.10.2013 14:30, Manu пишет:

  System interface functions that pass zero-terminated strings through to
 the OS are the primary offender, needless garbage, those should be on
 the stack.

 I like to use alloca too where it's appropriate. I'd definitely like if
 D had a variable-sized static array syntax for pretty-ing alloca.
 I thought about something similar using alloca via a mixin template, but
 that feels really hackey!
No hacks needed. See `unstd.c.string` module from previous post: http://forum.dlang.org/thread/**lqdktyndevxfcewgthcj forum.** dlang.org?page=2#post-l42evp:**241ok7:241:40digitalmars.com<http://forum.dlang.org/thread/lqdktyndevxfcewgthcj forum.dlang.org?page=2#post-l42evp:241ok7:241:40digitalmars.com>
Super awesome! Phobos devs should be encouraged to use these in non-recursive functions (particularly OS pass-through's).
looks like Walter's solution - but cleaner

"...Implementation note: For small strings tempCString will use stack allocated buffer, for large strings (approximately 1000 characters and more) it will allocate temporary one from unstd.memory.allocation.threadHeap..."

Does that mean that tempCString reserves a minimum of 1000 bytes on the stack and otherwise uses the heap? If so, I would prefer a template-based version where I can pass in the size.
Oct 21 2013
parent reply Denis Shelomovskij <verylonglogin.reg gmail.com> writes:
21.10.2013 18:04, dennis luehring пишет:
 "...Implementation note:
 For small strings  tempCString will use stack allocated buffer, for
 large strings (approximately 1000 characters and more) it will allocate
 temporary one from unstd.memory.allocation.threadHeap..."

 does that mean that tempCString reserves minimum 1000 bytes on stack
 else using heap?

 if so i would prefer a template based version where i can put in the size
Yes, `tempCString` allocates `1024 * To.sizeof` bytes on the stack. Note that it doesn't initialize the data, so it is an O(1) operation which will just do a ~1 KiB move of the stack pointer. As a function stack frame can easily eat 50-100 bytes, that is like 10-20 function calls. IIRC the typical stack size is ~1 MiB, and `tempCString` isn't expected to be used in some deep recursion or be used ~1000 times in one function.

So I'd prefer to change the default stack allocation size if needed and not confuse the user with a manual choice. -- Денис В. Шеломовский Denis V. Shelomovskij
Oct 21 2013
parent "Wyatt" <wyatt.epp gmail.com> writes:
On Monday, 21 October 2013 at 15:26:33 UTC, Denis Shelomovskij 
wrote:
 So I'd prefer to change default stack allocation size if needed 
 and not confuse user with manual choice.
Wouldn't it work to make it optional then? Something like this, I think:

    auto tempCString(To = char, From, size_t Length = 1024)(in From[] str)
        if (isSomeChar!To && isSomeChar!From);

Choosing a sane default but allowing specialist users an easy way to fine-tune it for their needs, while keeping the basic usage simple, is something I'd advocate for. (Personally, I think 1K sounds quite high; I'd probably make it 256, one more than the max length of filenames on a whole bunch of filesystems.)

-Wyatt
Oct 21 2013
prev sibling parent reply Lionello Lunesu <lionello lunesu.remove.com> writes:
On 10/21/13 15:04, Manu wrote:
 On 21 October 2013 21:24, Denis Shelomovskij
 <verylonglogin.reg gmail.com <mailto:verylonglogin.reg gmail.com>> wrote:

     21.10.2013 14:30, Manu пишет:

         System interface functions that pass zero-terminated strings
         through to
         the OS are the primary offender, needless garbage, those should
         be on
         the stack.

         I like to use alloca too where it's appropriate. I'd definitely
         like if
         D had a variable-sized static array syntax for pretty-ing alloca.
         I thought about something similar using alloca via a mixin
         template, but
         that feels really hackey!


     No hacks needed. See `unstd.c.string` module from previous post:
     http://forum.dlang.org/thread/__lqdktyndevxfcewgthcj forum.__dlang.org?page=2#post-l42evp:__241ok7:241:40digitalmars.com
     <http://forum.dlang.org/thread/lqdktyndevxfcewgthcj forum.dlang.org?page=2#post-l42evp:241ok7:241:40digitalmars.com>


 Super awesome! Phobos devs should be encouraged to use these in
 non-recursive functions (particularly OS pass-through's).
Careful! Alloca doesn't get cleaned up when used in loops!

    foreach(t; 0..1000)
    {
        int[t] stack_overflow;
    }
Oct 22 2013
next sibling parent reply Denis Shelomovskij <verylonglogin.reg gmail.com> writes:
23.10.2013 1:05, Lionello Lunesu пишет:
 Careful! Alloca doesn't get cleaned up when used in loops!
And I don't use `alloca`. -- Денис В. Шеломовский Denis V. Shelomovskij
Oct 23 2013
parent Lionello Lunesu <lionello lunesu.remove.com> writes:
On 10/23/13, 21:36, Denis Shelomovskij wrote:
 23.10.2013 1:05, Lionello Lunesu пишет:
 Careful! Alloca doesn't get cleaned up when used in loops!
And I don't use `alloca`.
Ah, indeed. I got your post mixed up with the one using alloca.
Oct 23 2013
prev sibling parent reply "John Colvin" <john.loughran.colvin gmail.com> writes:
On Tuesday, 22 October 2013 at 21:05:33 UTC, Lionello Lunesu 
wrote:
 Careful! Alloca doesn't get cleaned up when used in loops!
scope(exit) works in a loop, so you can automatically clean it up like that. Destructors are also called on each iteration so RAII is an option.
Oct 23 2013
parent Lionello Lunesu <lionello lunesu.remove.com> writes:
On 10/23/13, 23:30, John Colvin wrote:
 On Tuesday, 22 October 2013 at 21:05:33 UTC, Lionello Lunesu wrote:
 Careful! Alloca doesn't get cleaned up when used in loops!
scope(exit) works in a loop, so you can automatically clean it up like that. Destructors are also called on each iteration so RAII is an option.
You can't clean up alloca'ed memory, AFAIK.
Oct 24 2013
prev sibling parent "Tove" <tove fransson.se> writes:
On Monday, 21 October 2013 at 01:48:56 UTC, Walter Bright wrote:
 On 10/20/2013 5:59 PM, Jonathan M Davis wrote:
 If that paradigm is frequent enough, it might be worth 
 wrapping it in a
 struct. Then, you'd probably get something like

 StaticArray!(int, 10) tmp(n);
 int[] a = tmp[];

 which used T[10] if n was 10 or less and allocated T[] 
 otherwise. The
 destructor could then deal with freeing the memory.
Sounds like a good idea - and it should fit in with Andrei's nascent allocator design.
Hmmm, it gave me a weird idea...

    void smalloc(T)(ushort n, void function(T[]) statement)
    {
        if(n <= 256)
        {
            if(n <= 16)
            {
                T[16] buf = void;
                statement(buf[0..n]);
            }
            else
            {
                T[256] buf = void;
                statement(buf[0..n]);
            }
        }
        else
        {
            if(n <= 4096)
            {
                T[4096] buf = void;
                statement(buf[0..n]);
            }
            else
            {
                T[65536] buf = void;
                statement(buf[0..n]);
            }
        }
    }

    smalloc(256, (int[] buf) {
    });
Oct 21 2013
prev sibling parent "PauloPinto" <pjmlp progtools.org> writes:
On Monday, 21 October 2013 at 00:59:38 UTC, Jonathan M Davis 
wrote:
 On Sunday, October 20, 2013 09:33:36 Walter Bright wrote:
 On 10/20/2013 7:25 AM, bearophile wrote:
 More discussions about variable-sized stack-allocated arrays 
 in C++, it
 seems there is no yet a consensus:
 
 http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2013/n3810.pdf
 
 I'd like variable-sized stack-allocated arrays in D.
They're far more trouble than they're worth. Just use: auto a = new T[n]; Stack allocated arrays are far more trouble than they're worth. But what about efficiency? Here's what I often do something along the lines of: T[10] tmp; T[] a; if (n <= 10) a = tmp[0..n]; else a = new T[n]; scope (exit) if (a != tmp) delete a; The size of the static array is selected so the dynamic allocation is almost never necessary.
If that paradigm is frequent enough, it might be worth wrapping it in a struct. Then, you'd probably get something like StaticArray!(int, 10) tmp(n); int[] a = tmp[]; which used T[10] if n was 10 or less and allocated T[] otherwise. The destructor could then deal with freeing the memory. - Jonathan M Davis
Well that's the approach taken by std::array (C++11), if I am not mistaken. -- Paulo
Oct 21 2013
prev sibling parent reply "Namespace" <rswhite4 googlemail.com> writes:
On Sunday, 20 October 2013 at 16:33:35 UTC, Walter Bright wrote:
 On 10/20/2013 7:25 AM, bearophile wrote:
 More discussions about variable-sized stack-allocated arrays 
 in C++, it seems
 there is no yet a consensus:

 http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2013/n3810.pdf

 I'd like variable-sized stack-allocated arrays in D.
They're far more trouble than they're worth. Just use: auto a = new T[n]; Stack allocated arrays are far more trouble than they're worth. But what about efficiency? Here's what I often do something along the lines of: T[10] tmp; T[] a; if (n <= 10) a = tmp[0..n]; else a = new T[n]; scope (exit) if (a != tmp) delete a; The size of the static array is selected so the dynamic allocation is almost never necessary.
Another idea would be to use something like this: http://dpaste.dzfl.pl/8613c9be It has a syntax similar to T[n] and is likely more efficient because the memory is freed when it is no longer needed. :)
Oct 23 2013
parent reply dennis luehring <dl.soluz gmx.net> writes:
Am 23.10.2013 15:59, schrieb Namespace:
 On Sunday, 20 October 2013 at 16:33:35 UTC, Walter Bright wrote:
 On 10/20/2013 7:25 AM, bearophile wrote:
 More discussions about variable-sized stack-allocated arrays
 in C++, it seems
 there is no yet a consensus:

 http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2013/n3810.pdf

 I'd like variable-sized stack-allocated arrays in D.
They're far more trouble than they're worth. Just use: auto a = new T[n]; Stack allocated arrays are far more trouble than they're worth. But what about efficiency? Here's what I often do something along the lines of: T[10] tmp; T[] a; if (n <= 10) a = tmp[0..n]; else a = new T[n]; scope (exit) if (a != tmp) delete a; The size of the static array is selected so the dynamic allocation is almost never necessary.
Another idea would be to use something like this: http://dpaste.dzfl.pl/8613c9be It has a syntax similar to T[n] and is likely more efficient because the memory is freed when it is no longer needed. :)
but it would still be nice to be able to change the 4096 size via a template parameter, maybe defaulted to 4096 :)
Oct 23 2013
parent reply "Namespace" <rswhite4 googlemail.com> writes:
On Wednesday, 23 October 2013 at 14:35:12 UTC, dennis luehring 
wrote:
 Am 23.10.2013 15:59, schrieb Namespace:
 On Sunday, 20 October 2013 at 16:33:35 UTC, Walter Bright 
 wrote:
 On 10/20/2013 7:25 AM, bearophile wrote:
 More discussions about variable-sized stack-allocated arrays
 in C++, it seems
 there is no yet a consensus:

 http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2013/n3810.pdf

 I'd like variable-sized stack-allocated arrays in D.
They're far more trouble than they're worth. Just use: auto a = new T[n]; Stack allocated arrays are far more trouble than they're worth. But what about efficiency? Here's what I often do something along the lines of: T[10] tmp; T[] a; if (n <= 10) a = tmp[0..n]; else a = new T[n]; scope (exit) if (a != tmp) delete a; The size of the static array is selected so the dynamic allocation is almost never necessary.
Another idea would be to use something like this: http://dpaste.dzfl.pl/8613c9be It has a syntax similar to T[n] and is likely more efficient because the memory is freed when it is no longer needed. :)
but it would be still nice to change the 4096 size by template parameter maybe defaulted to 4096 :)
That is true. ;) And can be easily done. :)
Oct 23 2013
parent reply dennis luehring <dl.soluz gmx.net> writes:
Am 23.10.2013 16:41, schrieb Namespace:
 On Wednesday, 23 October 2013 at 14:35:12 UTC, dennis luehring
 wrote:
 Am 23.10.2013 15:59, schrieb Namespace:
 On Sunday, 20 October 2013 at 16:33:35 UTC, Walter Bright
 wrote:
 On 10/20/2013 7:25 AM, bearophile wrote:
 More discussions about variable-sized stack-allocated arrays
 in C++, it seems
 there is no yet a consensus:

 http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2013/n3810.pdf

 I'd like variable-sized stack-allocated arrays in D.
They're far more trouble than they're worth. Just use: auto a = new T[n]; Stack allocated arrays are far more trouble than they're worth. But what about efficiency? Here's what I often do something along the lines of: T[10] tmp; T[] a; if (n <= 10) a = tmp[0..n]; else a = new T[n]; scope (exit) if (a != tmp) delete a; The size of the static array is selected so the dynamic allocation is almost never necessary.
Another idea would be to use something like this: http://dpaste.dzfl.pl/8613c9be It has a syntax similar to T[n] and is likely more efficient because the memory is freed when it is no longer needed. :)
but it would be still nice to change the 4096 size by template parameter maybe defaulted to 4096 :)
That is true. ;) And can be easily done. :)
can't you remove the if(this.ptr is null) return; checks everywhere - how could that happen, short of an exception at creation time?
Oct 23 2013
next sibling parent reply "Namespace" <rswhite4 googlemail.com> writes:
 can't you remove the if(this.ptr is null) return; checks 
 everywhere - how should that happen - without exception at 
 creation time
Yes, this is somehow true. Here, the adjusted version. http://dpaste.dzfl.pl/e4dcc2ea
Oct 23 2013
parent "Namespace" <rswhite4 googlemail.com> writes:
On Wednesday, 23 October 2013 at 15:19:46 UTC, Namespace wrote:
 can't you remove the if(this.ptr is null) return; checks 
 everywhere - how should that happen - without exception at 
 creation time
Yes, this is somehow true. Here, the adjusted version. http://dpaste.dzfl.pl/e4dcc2ea
What if D supported variable-sized stack-allocated arrays through syntax sugar?

----
int n = 128;
int[n] arr;
----

would be rewritten with:

----
int n = 128;
int* __tmpptr = Type!int[n];
scope(exit) Type!int.deallocate(__tmpptr);
int[] arr = __tmpptr[0 .. n];
----

Where 'Type' is a struct like that:

----
struct Type(T) {
    static {
        enum Limit = 4096;
        void[Limit] _buffer = void;
        size_t _bufferLength;
    }

    static void deallocate(ref T* ptr) {
        .free(ptr);
        ptr = null;
    }

    static T* opIndex(size_t N) {
        if ((this._bufferLength + N) <= Limit) {
            scope(exit) this._bufferLength += N;
            return cast(T*)(&this._buffer[this._bufferLength]);
        }

        return cast(T*) .malloc(N * T.sizeof);
    }
}
----

which could be placed in std.typecons. I think this should be easy to implement. What do you think?
Nov 06 2013
prev sibling parent reply "deadalnix" <deadalnix gmail.com> writes:
On Wednesday, 23 October 2013 at 14:54:22 UTC, dennis luehring 
wrote:
 can't you remove the if(this.ptr is null) return; checks 
 everywhere - how should that happen - without exception at 
 creation time
Struct.init must be a valid state according to D specs, and it is pretty much unavoidable considering we have no default constructor for structs.
Oct 23 2013
parent reply "Jonathan M Davis" <jmdavisProg gmx.com> writes:
On Wednesday, October 23, 2013 19:28:27 deadalnix wrote:
 On Wednesday, 23 October 2013 at 14:54:22 UTC, dennis luehring
 
 wrote:
 can't you remove the if(this.ptr is null) return; checks
 everywhere - how should that happen - without exception at
 creation time
Struct.init must be a valid state according to D specs, and it is pretty much unavoidable considering we have no default constructor for structs.
And what do you mean by valid? It's perfectly legal to have fields initialized to void so that the init state is effectively garbage. That can cause problems in some scenarios (particularly any case where something assumes that init is usable without calling a function which would make the state valid), but it's legal. And you can @disable init if you want to - which also causes its own set of problems, but technically, you don't even have to have an init value (though it can certainly be restrictive if you don't - particularly when arrays get involved).

You also have cases where the struct's init is in a completely valid and yet unusable state. For instance, SysTime.init is useless unless you set its timezone and will segfault if you try to use it (since the timezone is null), but thanks to the limitations of CTFE, you _can't_ have a fully valid SysTime.init (though that's not invalid in the sense that part of the struct is garbage - just that it blows up when you use it).

I don't know why you think that the spec requires that a struct's init value be valid. It just causes issues with some uses of the struct if its init value isn't valid.

- Jonathan M Davis
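A minimal illustration of the two cases mentioned here, void-initialized fields and a disabled default constructor (the struct names are made up):

    struct Garbage
    {
        int[16] data = void;   // Garbage.init leaves this field with garbage contents
    }

    struct NoInit
    {
        int x = 42;
        @disable this();       // default construction rejected: `NoInit n;` is a compile error
    }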
Oct 23 2013
parent reply Walter Bright <newshound2 digitalmars.com> writes:
'void' initialization means uninitialized. This applies to fields, as well, 
meaning that the .init value of an aggregate with void initializations will
have 
unreliable values in those locations.

This is why 'void' initializers don't belong in @safe code, and reading 'void' 
initialized data will get you implementation defined data.
Oct 23 2013
parent reply "Jonathan M Davis" <jmdavisProg gmx.com> writes:
On Wednesday, October 23, 2013 22:40:55 Walter Bright wrote:
 'void' initialization means uninitialized. This applies to fields, as well,
 meaning that the .init value of an aggregate with void initializations will
 have unreliable values in those locations.
 
 This is why 'void' initializers don't belong in safe code, and reading
 'void' initialized data will get you implementation defined data.
Agreed. But there's a significant difference between @system and illegal, and deadalnix was claiming that such init values were illegal per the language spec, which is what I was objecting to.

- Jonathan M Davis

P.S. Please quote at least _some_ of the message when replying. Without that, if the threading gets screwed up, or if someone doesn't use a threaded view, it's a guessing game as to which post you're replying to. Thanks.
Oct 24 2013
parent reply "deadalnix" <deadalnix gmail.com> writes:
On Thursday, 24 October 2013 at 18:22:49 UTC, Jonathan M Davis 
wrote:
 Agreed. But there's a significant difference between  system 
 and illegal, and
 deadalnix was claiming that such init values were illegal per 
 the language
 spec, which is what I was objecting to.

 - Jonathan M Davis
I never claimed that. I claimed that the init value, whatever it is, must be considered as valid. This is only loosly coupled with void as init value.
Oct 24 2013
parent reply "Jonathan M Davis" <jmdavisProg gmx.com> writes:
On Thursday, October 24, 2013 21:07:16 deadalnix wrote:
 On Thursday, 24 October 2013 at 18:22:49 UTC, Jonathan M Davis
 
 wrote:
 Agreed. But there's a significant difference between  system
 and illegal, and
 deadalnix was claiming that such init values were illegal per
 the language
 spec, which is what I was objecting to.
 
 - Jonathan M Davis
I never claimed that. I claimed that the init value, whatever it is, must be considered as valid. This is only loosly coupled with void as init value.
Then what do you mean by valid? - Jonathan M Davis
Oct 24 2013
parent reply "deadalnix" <deadalnix gmail.com> writes:
On Thursday, 24 October 2013 at 20:04:38 UTC, Jonathan M Davis 
wrote:
 On Thursday, October 24, 2013 21:07:16 deadalnix wrote:
 On Thursday, 24 October 2013 at 18:22:49 UTC, Jonathan M Davis
 
 wrote:
 Agreed. But there's a significant difference between  system
 and illegal, and
 deadalnix was claiming that such init values were illegal per
 the language
 spec, which is what I was objecting to.
 
 - Jonathan M Davis
I never claimed that. I claimed that the init value, whatever it is, must be considered as valid. This is only loosly coupled with void as init value.
Then what do you mean by valid?
Code operating on the struct must handle that case. It is a valid state for the struct to be in.
Oct 24 2013
parent "Jonathan M Davis" <jmdavisProg gmx.com> writes:
On Friday, October 25, 2013 00:06:52 deadalnix wrote:
 On Thursday, 24 October 2013 at 20:04:38 UTC, Jonathan M Davis
 
 wrote:
 On Thursday, October 24, 2013 21:07:16 deadalnix wrote:
 On Thursday, 24 October 2013 at 18:22:49 UTC, Jonathan M Davis
 
 wrote:
 Agreed. But there's a significant difference between  system
 and illegal, and
 deadalnix was claiming that such init values were illegal per
 the language
 spec, which is what I was objecting to.
 
 - Jonathan M Davis
I never claimed that. I claimed that the init value, whatever it is, must be considered as valid. This is only loosly coupled with void as init value.
Then what do you mean by valid?
Code operating on the struct must handle that case. It is a valid state for the struct to be in.
As in all the functions will work on it without blowing up (e.g. segfault due to a null pointer)? That's definitely desirable, but it's definitely not required by the spec, and there are times that it can't be done without adding overhead to the struct in general.

For instance, SysTime.init will blow up on many of its function calls due to a null TimeZone. The only way that I could make that not blow up would be to put a lot of null checks in the code and then give the TimeZone a default value. The alternative is to do what I've done and make it so that you have to assign it a new value (or assign it a TimeZone) if you want to actually use it. Sometimes, that might be annoying, but no one has ever even reported it as a bug, and SysTime.init isn't a particularly useful value anyway, even if it had a valid TimeZone (since it's midnight January 1st, 1 A.D.). I could @disable the init value, but that would make SysTime useless in a bunch of settings where it currently works just fine so long as you assign it a real value later.

I agree that ideally Foo.init wouldn't do things like segfault if you used it, but that's not really possible with all types, and IMHO not having an init is far worse than having a bad one in most cases, since there are so many things that need an init value but don't necessarily call any functions on it (e.g. allocating dynamic arrays).

Regardless, the language spec makes no requirements that Foo.init do anything useful. It's basically just a default state that the compiler/runtime can assign to stuff when it needs a default value. It doesn't have to actually work, just be a consistent set of bits that don't include any pointers or references which refer to invalid memory.

- Jonathan M Davis
Oct 24 2013
prev sibling next sibling parent Denis Shelomovskij <verylonglogin.reg gmail.com> writes:
20.10.2013 18:25, bearophile пишет:
 More discussions about variable-sized stack-allocated arrays in C++, it
 seems there is no yet a consensus:

 http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2013/n3810.pdf

 I'd like variable-sized stack-allocated arrays in D.
I'd say the most common case where one needs a stack-allocated array is a temporary allocation which isn't going to survive the end of the scope. Moreover, in such cases, for data too large for the stack one wants to allocate from a thread-local heap instead of the shared one, to prevent needless locking. `unstd.memory.allocation.tempAlloc` [1] will do the job.

As one of the most common subcases is temporary C string creation, `unstd.c.string.tempCString` [2] will help here.

[1] http://denis-sh.bitbucket.org/unstandard/unstd.memory.allocation.html#tempAlloc
[2] http://denis-sh.bitbucket.org/unstandard/unstd.c.string.html#tempCString

-- Денис В. Шеломовский Denis V. Shelomovskij
Oct 20 2013
prev sibling parent "deadalnix" <deadalnix gmail.com> writes:
On Sunday, 20 October 2013 at 14:25:37 UTC, bearophile wrote:
 More discussions about variable-sized stack-allocated arrays in 
 C++, it seems there is no yet a consensus:

 http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2013/n3810.pdf

 I'd like variable-sized stack-allocated arrays in D.

 Bye,
 bearophile
I think that is a job for the optimizer. Consider cases like:

    auto foo() {
        return new Foo();
    }

    void bar() {
        auto f = foo();
        f.someMethod();
    }

This is an incredibly common pattern, and it won't be possible to optimize it via added language design without a dramatic increase in language complexity. However, once the inliner has run, you'll end up with something like:

    auto foo() {
        return new Foo();
    }

    void bar() {
        auto f = new Foo();
        f.someMethod();
    }

And if the optimizer is aware of GC calls, it can do the stack allocation itself (LDC is already aware of them; even if it is only capable of limited optimizations, it is already a good start and shows the feasibility of the idea).

Obviously Foo is a struct or a class here, but that is the exact same problem as for arrays.
Oct 21 2013