
digitalmars.D - OutputRange should be infinite?

reply "monarch_dodra" <monarchdodra gmail.com> writes:
A good while ago, I ran into some issues regarding output ranges. 
(reference 
http://forum.dlang.org/thread/xyvnifnetythvrhtcexm forum.dlang.org)

The gist of the problem is that, with "put", an OutputRange that 
accepts a T will also accept a Range!T, a Range!(Range!T), a 
Range!(Range!(Range!T)), and so on ad infinitum.

This all works nice and well, provided the output range never 
becomes empty, i.e. is infinite. However, this is currently not 
the case, and this code will blow up in your face:

//--------
auto a = new int[](1);
auto b = new int[](2);
assert(isOutputRange!(typeof(a), typeof(b)));
if(!a.empty)
    put(a, b); //Nope: a has room for 1 element, but b holds 2
//--------
auto a = new int[](10);
auto b = new int[][](3, 5);
assert(isOutputRange!(typeof(a), typeof(b)));
if(a.length > b.length)
    put(a, b); //Nope: b holds 3*5 == 15 ints, but a has room for only 10
//--------

I had made a "formal request" to deprecate this feature: 
http://forum.dlang.org/thread/xgncorvzlbtcmaxjuvkz forum.dlang.org
I have not come back to it since then, but I *have* kept thinking 
about it, and I now think my request was wrong.

However, I do think that the "isOutputRange" definition should 
require infinite-ness, as mentioned by others.

The "fun" part of OutputRange is that delegates, or as a general 
rule, any object that implements "put", or in some way shape or 
form, accepts put(range, stuff) is considered OutputRange.
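
For illustration, here is a minimal sketch of that behavior, assuming 
std.range.put as it works today (the delegate sink is just an example):

//--------
import std.range : isOutputRange, put;

void main()
{
    int[] sink;
    auto dg = (int x) { sink ~= x; }; // a delegate accepting a single int

    static assert(isOutputRange!(typeof(dg), int));
    static assert(isOutputRange!(typeof(dg), int[]));   // range of T
    static assert(isOutputRange!(typeof(dg), int[][])); // range of ranges of T

    put(dg, 1);             // a single element
    put(dg, [2, 3]);        // a range of elements
    put(dg, [[4, 5], [6]]); // a range of ranges, flattened element by element

    assert(sink == [1, 2, 3, 4, 5, 6]);
}
//--------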

To enforce infiniteness, I'd like to add this to the requirements of 
an output range (see the sketch below):
*Must meet one of these two criteria:
**isInfinite!Range
or
**Does not define "range.empty"
  //notion of infiniteness by default: delegates etc...
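
As a purely hypothetical sketch of that check (isInfiniteOutputRange is 
a made-up name; this is not how std.range defines isOutputRange today):

//--------
import std.range; // put, isOutputRange, isInfinite, and empty for arrays

// Hypothetical trait, per the two criteria above: either the range
// declares itself infinite, or it has no usable notion of "empty".
template isInfiniteOutputRange(R, E)
{
    enum isInfiniteOutputRange =
        isOutputRange!(R, E) &&
        (isInfinite!R || !is(typeof((R r) => r.empty)));
}

// A delegate sink still qualifies...
static assert(isInfiniteOutputRange!(void delegate(int), int));
// ...but a finite slice, which has an "empty", no longer would.
static assert(!isInfiniteOutputRange!(int[], int));
//--------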

This actually has a very low impact on Phobos: the only OutputRanges 
ever used by Phobos are appenders/delegates/printers anyway.

Also, it does not actually prevent writing put(dynamicArray, [1, 
2]): the dynamic array will cease to match "isOutputRange", but that 
doesn't mean the function "put" will stop working on it. Nuance ho!

The *only* function that is really impacted is "copy". However, 
arguably, it was wrong to use it with OutputRanges to begin with: 
copy(r1, r2) and put(r1, r2) do NOT have the same semantics, and 
should not be used as if they did.
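
To illustrate the difference, a small sketch assuming std.algorithm.copy 
and std.range.put as they currently behave:

//--------
import std.algorithm : copy;
import std.range : put;

void main()
{
    int[] src = [1, 2];

    auto dst1 = new int[](4);
    auto rest = copy(src, dst1); // copy returns the unfilled part of the target
    assert(rest.length == 2);
    assert(dst1 == [1, 2, 0, 0]);

    auto dst2 = new int[](4);
    auto w = dst2;
    put(w, src);                 // put advances (consumes) the target slice
    assert(w.length == 2);       // w is now the unfilled tail of dst2
    assert(dst2 == [1, 2, 0, 0]);
}
//--------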

I'd like to try to push for this change. Would this be a lost 
cause, or does the community feel this is indeed the way to go?
Oct 05 2012
next sibling parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Fri, 05 Oct 2012 11:15:44 -0400, monarch_dodra <monarchdodra gmail.com>  
wrote:

 However, I do think that the "isOutputRange" definition should 
 require infinite-ness, as mentioned by others.
No, this is very wrong. A slice is an output range, but is finite.

If you are putting something that is larger into something that is 
smaller and cannot be extended, I would expect an error. You don't?

This cannot be changed, as the fundamental target for an in-memory 
output range is a slice.
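
For instance, a minimal sketch of a slice used as an output range 
(assuming std.range.put's behavior with arrays):

import std.range : isOutputRange, put;

void main()
{
    auto buf = new int[](2);
    static assert(isOutputRange!(int[], int));

    put(buf, 1);
    put(buf, 2);
    assert(buf.length == 0); // the slice has been consumed element by element
    // put(buf, 3);          // would now fail at run time: no room left
}
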
 To enforce infiniteness, I'd like to add this to the requirement of  
 output range:
 *Must meet one of these two criteria:
 **isInfinite!Range
 or
 **Does not define "range.empty"
    //notion of infiniteness by default: delegates etc...

 This actually has some very very low impact in phobos: The only  
 OutputRanges ever used by phobos are appenders/delegates/printers  
 anyways.
Just because it isn't *used* by phobos (and I doubt the statement 
above) doesn't mean that it's not a worthwhile part of the API. 
Phobos is a utility library, not a complete program.

For instance, there is nothing in Phobos that uses 
std.container.RedBlackTree (at least that I know of), but that 
doesn't mean it doesn't have value.

-Steve
Oct 05 2012
next sibling parent reply "monarch_dodra" <monarchdodra gmail.com> writes:
On Saturday, 6 October 2012 at 05:24:06 UTC, Steven Schveighoffer
wrote:
 On Fri, 05 Oct 2012 11:15:44 -0400, monarch_dodra 
 <monarchdodra gmail.com> wrote:

 However, I do think that the "isOutputRange" definition should 
 require infinite-ness, as mentioned by others.
No, this is very wrong. A slice is an output range, but is finite.
A slice is an input range and can safely be used as such. What is the merit of *also* defining it as an output range? Why even bother with defining "OutputRange" if it just means "InputRange" + "functions"?
 If you are putting something that is larger into something that 
 is smaller and cannot be extended, I would expect an error.  
 You don't?
Yes, but as shown the semantics of "put" are basically: "Cram *anything* you want inside of me. I can take it". As evidenced by my two examples, this is clearly not the case, and, even worse, the developer has _no way_ of knowing this.
 [SNIP]
 -Steve
Long story short, the *only* reasons to ever use the "OutputRange" 
interface over the "InputRange" interface are:
*When cramming things into a delegate (which are/should be infinite 
by design).
*When cramming things into an input range, but not caring about 
capacity.

I'm just saying, "put" is convenient and all, and I have no plan to 
have it changed. Users can use it at their own discretion if they 
want to use it on an InputRange, and at their own risk.

However, I really don't like having a range tell me "yeah, I'm an 
Output Range", just to choke on the first call to put.
Oct 06 2012
parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Sat, 06 Oct 2012 04:07:41 -0400, monarch_dodra <monarchdodra gmail.com>  
wrote:

 On Saturday, 6 October 2012 at 05:24:06 UTC, Steven Schveighoffer
 wrote:
 On Fri, 05 Oct 2012 11:15:44 -0400, monarch_dodra  
 <monarchdodra gmail.com> wrote:

 However, I do think that the "isOutputRange" definition should 
 require infinite-ness, as mentioned by others.
No, this is very wrong. A slice is an output range, but is finite.
A slice is an input range and can safely be used as such. What is the merit of *also* defining it as an output range? Why even bother with defining "OutputRange" if it just means "InputRange" + "functions"?
Try doing this on a unix system:

cat /dev/zero > ~/zeros

And see if the output file zeros is infinite :)

Even an appender is finite when you run out of memory. Do you think 
output files make bad output ranges, even though they are finite?

An output range is nothing but an interface to a storage location. 
Whether the storage location is infinite or not is up to the 
location; the output range has to support both infinite and finite 
targets.

What you are proposing would make it illegal to use 
std.algorithm.copy on *any* memory-based construct, or else have it 
blissfully succeed by throwing away any extra data. Neither of these 
situations is tenable.
 If you are putting something that is larger into something that is  
 smaller and cannot be extended, I would expect an error.  You don't?
Yes, but as shown the semantics of "put" are basically: "Cram *anything* you want inside of me. I can take it".
No, definitely not. An output range can take input, but must obviously be able to say "I'm full".
 As evidenced by my two examples, this is clearly not the case,
 and, even worse, the developer has _no way_ of knowing this.
I'm not against defining a standard way to say "I'm full", but proposing it *can't* say that is not the solution. Clearly, we could do better in defining a standard way to test for fullness (full property akin to empty?). Even so, putting into a non-full range could generate an error.
 However, I really don't like having a range tell me "yeah, I'm an Output  
 Range", just to choke on the first call to put.
What about an input range that is immediately empty? These are 
corner cases, but certainly valid.

-Steve
Oct 09 2012
parent reply "monarch_dodra" <monarchdodra gmail.com> writes:
On Tuesday, 9 October 2012 at 13:22:28 UTC, Steven Schveighoffer 
wrote:
 [SNIP]
I tend to disagree with your examples, because you are mixing the 
notion of run-time failure with logic error.

For example: "new". New can fail, and you don't know unless you try, 
but new will throw an exception to tell you it failed.

An appender, as you say, is finite in memory, and will end up 
throwing an exception, yes. You also have a chance to try to catch 
it and react.

Over-putting into a finite slice, on the other hand, will *assert*. 
Game over. It is a catch-22: you can't know unless you try, and you 
crash if you do.
 I'm not against defining a standard way to say "I'm full", but 
 proposing it *can't* say that is not the solution.  Clearly, we 
 could do better in defining a standard way to test for fullness 
 (full property akin to empty?).  Even so, putting into a 
 non-full range could generate an error.
Hum... I'm just kind of wondering here: couldn't we simply have put 
throw an actual exception? Something along the lines of 
"OutputRangeFullException"? That would be a pretty good compromise.

Performance-wise, I don't think there'd be any real toll: 
delegates/functions don't have empty anyways, so it would just be a 
matter of catch-rethrow. As for input ranges, well, I think it would 
be safer anyways if they checked and threw, rather than blindly 
over-popping and crashing...

I haven't fully thought this through yet (I just thought of it while 
typing), but I figured I'd throw it out there.
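
To sketch what I mean (OutputRangeFullException and checkedPut are 
hypothetical names; nothing like this exists in Phobos):

import std.range; // put, and empty for slices

class OutputRangeFullException : Exception
{
    this(string msg) { super(msg); }
}

// Hypothetical wrapper: throw a catchable exception instead of hitting
// a range violation when a bounded sink is already out of room.
void checkedPut(R, E)(ref R r, E e)
{
    static if (is(typeof(r.empty) : bool))
    {
        if (r.empty)
            throw new OutputRangeFullException("output range is full");
    }
    put(r, e);
}

void main()
{
    int[] buf; // no room left at all
    try { checkedPut(buf, 1); }
    catch (OutputRangeFullException) { /* recover instead of crashing */ }
}
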
 However, I really don't like having a range tell me "yeah, I'm 
 an Output Range", just to choke on the first call to put.
What about an input range that is immediately empty? These are corner cases, but certainly valid.
Wouldn't "empty" simply answer "true" before even starting? At least it is being honest.
 -Steve
Thanks for debating.
Oct 09 2012
next sibling parent "Jakob Ovrum" <jakobovrum gmail.com> writes:
On Tuesday, 9 October 2012 at 14:03:36 UTC, monarch_dodra wrote:
 I tend to disagree with your examples, because, you are mixing 
 the notion of run-time failure with logic error.

 For example: "new" New can fail. And you don't know unless you 
 try.
 But new will throw an exception to tell you it failed..

 An appender, as you say, is finite in memory, and will end up 
 throwing an exception, yes. You also have a chance to try to 
 catch it and react.

 Over-putting into a finite slice, on the other end, will 
 *assert*. Game over. It is a catch 22: You can't know unless 
 you try, you crash if you do.
Actually, OutOfMemoryError and AssertError are the same class of Throwable - namely Error. They're both non-recoverable exceptions. I agree that AssertError is not an appropriate type to throw if an OutputRange is full, though.
Oct 09 2012
prev sibling parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Tue, 09 Oct 2012 09:39:32 -0400, monarch_dodra <monarchdodra gmail.com>  
wrote:

 On Tuesday, 9 October 2012 at 13:22:28 UTC, Steven Schveighoffer wrote:
 [SNIP]
I tend to disagree with your examples, because, you are mixing the notion of run-time failure with logic error.
They are one and the same. Putting into a file that runs out of disk 
space, and putting into an array that runs out of memory.

Take the viewpoint of std.algorithm.copy. It's been asked to copy 
from A to B, and B cannot accept it. What does it do? Saying it has 
to just return success doesn't make any sense.
 For example: "new" New can fail. And you don't know unless you try.
 But new will throw an exception to tell you it failed..

 An appender, as you say, is finite in memory, and will end up throwing  
 an exception, yes. You also have a chance to try to catch it and react.
No, these are Errors, not (supposed to be) catchable.
 Over-putting into a finite slice, on the other end, will *assert*. Game  
 over. It is a catch 22: You can't know unless you try, you crash if you  
 do.
I agree, this could have a better interface. However, I think in terms of what to do (assuming we add some way of checking for fullness), if someone calls put on an output buffer and that range is not able to handle it, it should be an Error/assert as it is now, just like calling front on an empty array is an assert.
 I'm not against defining a standard way to say "I'm full", but  
 proposing it *can't* say that is not the solution.  Clearly, we could  
 do better in defining a standard way to test for fullness (full  
 property akin to empty?).  Even so, putting into a non-full range could  
 generate an error.
Hum... I'm just kind of wondering here: Couldn't we simply have put throw an actual exception? Something along the lines of "OutputRangFullException"? That would be a pretty good compromise.
I think it would work, but I think we still need a way to check for 
fullness. Here is what I propose:

OutputRange is defined as an entity that consumes data. If you put 
data into an OutputRange that cannot accept the data, the range has 
the option of asserting or throwing an exception.

TerminatingOutputRange is an extension of OutputRange, but defines 
bool property full(). R.full returns true if it cannot accept any 
new data. It should assert if you try to put data into a full 
TerminatingOutputRange. In other words, the following sequence 
should always assert or not compile:

static assert(isTerminatingOutputRange!(typeof(r)));
assert(r.full);
r.put(x);

If you try and put into a TerminatingOutputRange that is *not* full, 
behavior reverts to OutputRange (can either assert or throw an 
exception), depending on the assumptions that can be made for that 
condition.
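
Roughly, as a sketch (the isTerminatingOutputRange template and the 
FixedSink struct below are just illustrative, not existing Phobos code):

import std.range : isOutputRange;

// Hypothetical trait: an output range for E that also exposes bool full.
template isTerminatingOutputRange(R, E)
{
    enum isTerminatingOutputRange =
        isOutputRange!(R, E) && is(typeof((R r) { bool b = r.full; }));
}

// Example of a bounded sink modelling the proposed interface.
struct FixedSink
{
    int[] buf;
    size_t used;

    @property bool full() const { return used == buf.length; }

    void put(int x)
    {
        assert(!full, "put into a full TerminatingOutputRange");
        buf[used++] = x;
    }
}

static assert( isTerminatingOutputRange!(FixedSink, int));
static assert(!isTerminatingOutputRange!(void delegate(int), int)); // no full
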
 However, I really don't like having a range tell me "yeah, I'm an  
 Output Range", just to choke on the first call to put.
What about an input range that is immediately empty? These are corner cases, but certainly valid.
Wouldn't "empty" simply answer "true" before even starting? At least it is being honest.
Right, but you seem to be saying the condition that an OutputRange 
might throw on the first call to put is an invalid reaction. I don't 
think it is any less valid than throwing on the first call to front 
on an empty range.

-Steve
Oct 09 2012
parent reply "monarch_dodra" <monarchdodra gmail.com> writes:
On Tuesday, 9 October 2012 at 15:27:44 UTC, Steven Schveighoffer 
wrote:
 On Tue, 09 Oct 2012 09:39:32 -0400, monarch_dodra 
 <monarchdodra gmail.com> wrote:

 On Tuesday, 9 October 2012 at 13:22:28 UTC, Steven 
 Schveighoffer wrote:
 [SNIP]
I tend to disagree with your examples, because, you are mixing the notion of run-time failure with logic error.
They are one and the same. Putting into a file that runs out of disk space, and putting into an array that runs out of memory.
I'm not convinced. A file running out of space is an 
implementation-defined limitation that is outside the developer's 
control, just as much as an OutOfMemoryError is.

An array that runs out of room is a predictable logic error. The 
problem is that we aren't giving the developer the tools required to 
predict it.
 Take the viewpoint of std.algorithm.copy.  It's been asked to 
 copy from A to B, and B cannot accept it.  What does it do?  
 Saying it has to just return success doesn't make any sense.
I never said copy should return success.
 For example: "new" New can fail. And you don't know unless you 
 try.
 But new will throw an exception to tell you it failed..

 An appender, as you say, is finite in memory, and will end up 
 throwing an exception, yes. You also have a chance to try to 
 catch it and react.
No, these are Errors, not (supposed to be) catchable.
Hum. Yes, but the point (IMO) remains that the error is not thrown 
by Appender itself, but by the underlying implementation, and by no 
fault of the appender itself, nor of the caller. I mean, it is not 
the *appender* that is full; you are just running out of memory on 
your machine...

Anyways, I don't think there is anything to be gained by disagreeing 
on this point any longer, as it would seem the solution is heading 
down other paths anyways.
 Over-putting into a finite slice, on the other end, will 
 *assert*. Game over. It is a catch 22: You can't know unless 
 you try, you crash if you do.
I agree, this could have a better interface. However, I think in terms of what to do (assuming we add some way of checking for fullness), if someone calls put on an output buffer and that range is not able to handle it, it should be an Error/assert as it is now, just like calling front on an empty array is an assert.
 I'm not against defining a standard way to say "I'm full", 
 but proposing it *can't* say that is not the solution.  
 Clearly, we could do better in defining a standard way to 
 test for fullness (full property akin to empty?).  Even so, 
 putting into a non-full range could generate an error.
Hum... I'm just kind of wondering here: Couldn't we simply have put throw an actual exception? Something along the lines of "OutputRangFullException"? That would be a pretty good compromise.
I think it would work, but I think we still need a way to check for 
fullness. Here is what I propose:

OutputRange is defined as an entity that consumes data. If you put 
data into an OutputRange that cannot accept the data, the range has 
the option of asserting or throwing an exception.

TerminatingOutputRange is an extension of OutputRange, but defines 
bool property full(). R.full returns true if it cannot accept any 
new data. It should assert if you try to put data into a full 
TerminatingOutputRange. In other words, the following sequence 
should always assert or not compile:

static assert(isTerminatingOutputRange!(typeof(r)));
assert(r.full);
r.put(x);

If you try and put into a TerminatingOutputRange that is *not* full, 
behavior reverts to OutputRange (can either assert or throw an 
exception), depending on the assumptions that can be made for that 
condition.
I'll have to try to sleep on this before making any judgements/thoughts/comments. But off the top of my head, you'll still run into the same problem of an output range becoming full *during* a put: if r accepts a T, then it accepts an input range of T.
 However, I really don't like having a range tell me "yeah, 
 I'm an Output Range", just to choke on the first call to put.
What about an input range that is immediately empty? These are corner cases, but certainly valid.
Wouldn't "empty" simply answer "true" before even starting? At least it is being honest.
Right, but you seem to be saying the condition that an OutputRange might throw on the first call to put is an invalid reaction. I don't think it is any less valid than throwing on the first call to front on an empty range. -Steve
No, my problem is not one of "first call", it is one of answering 
"not empty", but choking on a put(element) afterwards.

*Me: "outputRange, are you an output range of int[]?"
*outputRange: "Yes"
*Me: "outputRange, are you empty?"
*outputRange: "No"
*Me: "then put this int[] _element_"
*outputRange: "OutOfRangeError"
*Me: "WTF?"

To me, this is not acceptable behavior.

----

Another solution could be something closer to my very first proposal 
of tightening the valid *ElementTypes* that are compatible with an 
output range (but not put itself).

For example, a delegate D that accepts a T (like a char) would be 
defined as returning true for:
isOutputRange!(D, T)     //true
isOutputRange!(D, T[])   //true
isOutputRange!(D, T[][]) //true

An actual InputRange!T (IR) (such as int[]) that defines empty, 
though, would only be an output range for EXACTLY T:
isOutputRange!(IR, T)     //true
isOutputRange!(IR, T[])   //false
isOutputRange!(IR, T[][]) //false

This would nip the problem in the bud, as empty would *really* mean 
empty. If R says it's an output range of T, but not of T[], then 
don't trust it not to overflow if you feed it a T[]...

As for the delegates, well, they don't have empty anyways, so you 
can go ahead and attempt to cram anything you want.

Unlike my very first proposal way back when, put would still work to 
copy several items at once, but at the caller's responsibility.
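
A rough sketch of what that tightened trait could look like 
(isStrictOutputRange is a made-up name; again, this is not how 
isOutputRange currently behaves):

import std.range; // isOutputRange, isInputRange, ElementType, empty

template isStrictOutputRange(R, E)
{
    // Bounded sinks (anything with a usable "empty") only accept their
    // exact element type; unbounded sinks keep the current behavior.
    static if (is(typeof((R r) => r.empty)))
        enum isStrictOutputRange = isInputRange!R && is(E : ElementType!R);
    else
        enum isStrictOutputRange = isOutputRange!(R, E);
}

static assert( isStrictOutputRange!(void delegate(int), int));
static assert( isStrictOutputRange!(void delegate(int), int[]));  // still fine
static assert( isStrictOutputRange!(int[], int));
static assert(!isStrictOutputRange!(int[], int[]));               // now rejected
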
Oct 09 2012
next sibling parent "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Tue, 09 Oct 2012 11:52:29 -0400, monarch_dodra <monarchdodra gmail.com>  
wrote:

 On Tuesday, 9 October 2012 at 15:27:44 UTC, Steven Schveighoffer wrote:
 On Tue, 09 Oct 2012 09:39:32 -0400, monarch_dodra  
 <monarchdodra gmail.com> wrote:

 On Tuesday, 9 October 2012 at 13:22:28 UTC, Steven Schveighoffer wrote:
 [SNIP]
I tend to disagree with your examples, because, you are mixing the notion of run-time failure with logic error.
They are one and the same. Putting into a file that runs out of disk space, and putting into an array that runs out of memory.
I'm not convinced. A file running out of memory is an implementation defined limitation that is out of the field of control of the developer, just as much as an OutOfMemoryError. An array that runs out of memory is predictable logic error. The problem is that we aren't giving the developer the tools required to predict it.
predictable logic errors == assert
 I'm not against defining a standard way to say "I'm full", but  
 proposing it *can't* say that is not the solution.  Clearly, we could  
 do better in defining a standard way to test for fullness (full  
 property akin to empty?).  Even so, putting into a non-full range  
 could generate an error.
Hum... I'm just kind of wondering here: Couldn't we simply have put throw an actual exception? Something along the lines of "OutputRangFullException"? That would be a pretty good compromise.
I think it would work, but I think we still need a way to check for 
fullness. Here is what I propose:

OutputRange is defined as an entity that consumes data. If you put 
data into an OutputRange that cannot accept the data, the range has 
the option of asserting or throwing an exception.

TerminatingOutputRange is an extension of OutputRange, but defines 
bool property full(). R.full returns true if it cannot accept any 
new data. It should assert if you try to put data into a full 
TerminatingOutputRange. In other words, the following sequence 
should always assert or not compile:

static assert(isTerminatingOutputRange!(typeof(r)));
assert(r.full);
r.put(x);

If you try and put into a TerminatingOutputRange that is *not* full, 
behavior reverts to OutputRange (can either assert or throw an 
exception), depending on the assumptions that can be made for that 
condition.
I'll have to try to sleep on this before making any judgements/thoughts/comments. But off the top of my head, you'll still run into the same problem of an output range becoming full *during* a put: if r accepts a T, then it accepts an input range of T.
OK, I see your point, you need to know "can I put x into this output 
range" instead of "can I put an element into this output range".

We are delving at this point into streams, and streams have a much 
better interface for that:

int write(x)

where the int returned is how much data from x was actually written.

As put doesn't return anything, there is no way to tell what was 
written. I don't know if it can be changed at this point.
 No, my problem is not one of "first call", it is one of answering not  
 empty, but choking on a put(element) afterwards.

 *Me "outputRange, are you an output range or int[] ?"
 *outputRange: "Yes"
 *Me: "outputRange are you empty?"
 *outputRange: "No"
 *Me: "then put this int[] _element_"
 *outputRange: "OutOfRangeError"
 *Me: "WTF?"

 To me, this is not acceptable behavior.
Neither is requiring output ranges to be infinite. There are 
definitely finite output ranges.

Note that you are not asking the right question: "are you empty?" 
This is an input range property, not an output range property. There 
is no equivalent output range property. And as you point out, this 
question necessarily has to be worded in a way that is clear. "Are 
you full?" would be a property that says "there is enough space left 
for at least one more element", and "can you accept x elements?" 
would be an entirely different question.

But even if we *had* the right functions to ask those questions, 
finding the answer may not be feasible (e.g. no length property). I 
would say we need a new function, like tryPut or something, that 
returns the number of elements actually put.
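
For instance, a rough sketch of what such a tryPut could look like (the 
name and implementation here are hypothetical):

import std.range; // put, and empty for bounded targets

// Hypothetical: write as much of the source as the sink can take and
// report how many elements were actually written.
size_t tryPut(R, E)(ref R sink, E[] source)
{
    size_t written;
    foreach (e; source)
    {
        static if (is(typeof(sink.empty) : bool))
        {
            if (sink.empty)
                break; // target is full, stop early instead of crashing
        }
        put(sink, e);
        ++written;
    }
    return written;
}

void main()
{
    auto buf = new int[](3);
    auto w = buf;
    assert(tryPut(w, [1, 2, 3, 4, 5]) == 3); // only three elements fit
    assert(buf == [1, 2, 3]);
}
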
 Another solution could be something closer to my very first proposal of  
 tightening the valid *ElementTypes* that are compatible with an output  
 range (but not put itself).

 For example, a delegate D that accepts a T (like a char) would be  
 defined as return true to:
 isOutputRange!(D, T)     //true
 isOutputRange!(D, T[])   //true
 isOutputRange!(D, T[][]) //true

 An actual inputRange!T (IR) (such as int[]) that defines empty, though,  
 would only be an output range for EXACTLY T:
 isOutputRange!(IR, T)     //true
 isOutputRange!(IR, T[])   //false
 isOutputRange!(IR, T[][]) //false

 This would nip the problem in the bud, as empty would *really* mean  
 empty. If R says he's an outputRange of T, but not of T[], then don't  
 trust it to not overflow if you feed it a T[]...
No, not really. The only correct (and efficient) way to fix this is to support partial writes.
 As for the delegates, well they don't have empty anyways, so you can go  
 ahead and attempt to cram anything you want.
Then you have accomplished nothing.

void foo(int[] x)
{
    int[5] y;
    uint filled = 0;
    put((int n) { y[filled++] = n; }, x);
}

-Steve
Oct 09 2012
prev sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 10/9/12 11:52 AM, monarch_dodra wrote:
 On Tuesday, 9 October 2012 at 15:27:44 UTC, Steven Schveighoffer wrote:
 On Tue, 09 Oct 2012 09:39:32 -0400, monarch_dodra
 <monarchdodra gmail.com> wrote:

 On Tuesday, 9 October 2012 at 13:22:28 UTC, Steven Schveighoffer wrote:
 [SNIP]
I tend to disagree with your examples, because, you are mixing the notion of run-time failure with logic error.
They are one and the same. Putting into a file that runs out of disk space, and putting into an array that runs out of memory.
I'm not convinced. A file running out of memory is an implementation defined limitation that is out of the field of control of the developer, just as much as an OutOfMemoryError. An array that runs out of memory is predictable logic error. The problem is that we aren't giving the developer the tools required to predict it.
I agree with this distinction. In brief, a disk getting full is an 
exceptional occurrence whereas a non-appendable structure running 
out of room is a different category of error.

Andrei
Oct 09 2012
parent "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Tue, 09 Oct 2012 14:18:30 -0400, Andrei Alexandrescu  
<SeeWebsiteForEmail erdani.org> wrote:

 On 10/9/12 11:52 AM, monarch_dodra wrote:
 On Tuesday, 9 October 2012 at 15:27:44 UTC, Steven Schveighoffer wrote:
 On Tue, 09 Oct 2012 09:39:32 -0400, monarch_dodra
 <monarchdodra gmail.com> wrote:

 On Tuesday, 9 October 2012 at 13:22:28 UTC, Steven Schveighoffer  
 wrote:
 [SNIP]
I tend to disagree with your examples, because, you are mixing the notion of run-time failure with logic error.
They are one and the same. Putting into a file that runs out of disk space, and putting into an array that runs out of memory.
I'm not convinced. A file running out of memory is an implementation defined limitation that is out of the field of control of the developer, just as much as an OutOfMemoryError. An array that runs out of memory is predictable logic error. The problem is that we aren't giving the developer the tools required to predict it.
I agree with this distinction. In brief a disk getting full is an exceptional occurrence whereas a non-appendable structure running out of room is a different category of error.
I also agree that running out of disk space or general heap memory 
is a different high-level error. But it depends on the level you are 
looking from. From the low level, it's "I've been asked to put A 
into B, and B is saying no". From that point of view, it doesn't 
seem any different to me, and I don't know that 'put' really is the 
one to decide that. Each range itself must decide whether this is 
absolutely a logic error or a runtime error.

Two different things to think about:

1) you can check how much disk space is left just like you can check 
how much space is left in your array.

2) The determination of how much space is available in a range could 
be unavailable at runtime, even for memory-based ranges.

I agree that if you try to copy an input range into a smaller 
*array*, we should handle that as a logic error. But as a range in 
general, I don't think it can be handled without a runtime error. In 
other words, the range should decide what kind of error it is, not 
the definition of OutputRange.

-Steve
Oct 09 2012
prev sibling parent reply Johannes Pfau <nospam example.com> writes:
Am Fri, 05 Oct 2012 17:15:44 +0200
schrieb "monarch_dodra" <monarchdodra gmail.com>:

 A good while ago, I ran into some issues regarding output ranges. 
 (reference 
 http://forum.dlang.org/thread/xyvnifnetythvrhtcexm forum.dlang.org)
 
 The gist of the problem is that, with "put", an OutputRange that 
 accepts a T will also accept a Range!T, a Range!(Range!T), a 
 Range!(Range!(Range!T)), and so on ad infinitum.
 
 This all works nice and well, provided the output range never 
 becomes empty, i.e. is infinite. However, this is currently not 
 the case, and this code will blow up in your face:
 
 //--------
 auto a = new int[](1);
 auto b = new int[](2);
 assert(isOutputRange!(typeof(a), typeof(b)));
 if(!a.empty)
     put(a, b); //Nope
 //--------
 auto a = new int[](10);
 auto b = new int[][](3, 5);
 assert(isOutputRange!(typeof(a), typeof(b)));
 if(a.length > b.length)
      put(a, b); //Nope
 //--------
Couldn't we just fix std.range.put to check for an 'empty' property?
Oct 06 2012
parent reply "monarch_dodra" <monarchdodra gmail.com> writes:
On Saturday, 6 October 2012 at 08:00:42 UTC, Johannes Pfau wrote:
 Am Fri, 05 Oct 2012 17:15:44 +0200
 schrieb "monarch_dodra" <monarchdodra gmail.com>:

 [SNIP]
Couldn't we just fix std.range.put to check for an 'empty' property?
Well, the issue (imo) is not put's implementation: as Steven 
Schveighoffer said, cramming too big into too small is wrong (a 
logic error).

The problem (I think) is that once a range satisfies the 
isOutputRange criteria, the user should be able to call "put" 
without (too much) worry.
Oct 06 2012
parent reply Dmitry Olshansky <dmitry.olsh gmail.com> writes:
On 06-Oct-12 12:13, monarch_dodra wrote:
 On Saturday, 6 October 2012 at 08:00:42 UTC, Johannes Pfau wrote:
 Am Fri, 05 Oct 2012 17:15:44 +0200
 schrieb "monarch_dodra" <monarchdodra gmail.com>:

 [SNIP]
Couldn't we just fix std.range.put to check for an 'empty' property?
Well, the issue (imo) is not put's implementation: as Steven Schveighoffer said, cramming too big into too small is wrong (logic error). The problem (I think), is that once a range verifies the isOutputRange criteria, the user should be able to call "put" without (too much) worries.
Not possible. The only thing isOutputRange guarantees is that 
putting stuff into X is sensible in one of many ways (delegates, own 
put, input range with assignable elements). Any run-time properties 
such as lengths and maximums are outside of isOutputRange's 
business.

And in the end one can still run out of supposedly "infinite" things 
like RAM, disk space, etc.

-- 
Dmitry Olshansky
Oct 06 2012
parent "monarch_dodra" <monarchdodra gmail.com> writes:
On Saturday, 6 October 2012 at 08:51:02 UTC, Dmitry Olshansky 
wrote:
 On 06-Oct-12 12:13, monarch_dodra wrote:
 On Saturday, 6 October 2012 at 08:00:42 UTC, Johannes Pfau 
 wrote:
 Am Fri, 05 Oct 2012 17:15:44 +0200
 schrieb "monarch_dodra" <monarchdodra gmail.com>:

 [SNIP]
Couldn't we just fix std.range.put to check for an 'empty' property?
Well, the issue (imo) is not put's implementation: as Steven Schveighoffer said, cramming too big into too small is wrong (logic error). The problem (I think), is that once a range verifies the isOutputRange criteria, the user should be able to call "put" without (too much) worries.
Not possible. The only thing isOutputRange serves is that putting stuff into X is sensible in one of many ways (delegates, own put, input range with assignable elements). Any run-time properties such as lengths and maximums are out of isOutputRange business. And in the end one can still run out of supposedly "infinite" things like RAM, disk space etc.
Yes, but that is an exception that also holds true for 
"isInfiniteRange", and it only happens under "exceptional" 
circumstances. Hence the "(too much)" above.
Oct 06 2012