## digitalmars.D - Do you like bounded integrals?

- Andrei Alexandrescu (22/22) Aug 23 2016 Currently checkedint (https://github.com/dlang/phobos/pull/4613) stands
- Cauterite (11/14) Aug 23 2016 Well, this occurs very frequently in my code:
- Lodovico Giaretta (24/37) Aug 23 2016 The ideal would be implicit cast allowed when widening, explicit
- Andrei Alexandrescu (9/41) Aug 23 2016 That wouldn't work for e.g. NaN. A NaN wants to "steal" a value but only...
- Lodovico Giaretta (27/29) Aug 24 2016 Ok, I get your point. WithNaN should not waste the "official
- Andrei Alexandrescu (6/35) Aug 24 2016 Yes, that is indeed a potential problem.
- Robert burner Schadek (4/4) Aug 24 2016 what about two types.
- Meta (11/12) Aug 23 2016 Just to make sure I understand what you mean by "composing
- Nordlöw (24/39) Aug 24 2016 For comparison take a look at my solution at:
- Nordlöw (3/5) Aug 24 2016 Correction: Index type here is actually an Ada-style modulo type
- Bill Hicks (16/42) Aug 24 2016 It's just an abortion.
- Andrei Alexandrescu (28/28) Aug 25 2016 Update: I've taken a shot at adding bounds, see
- Marco Leise (26/55) Aug 29 2016 In Pascal/Delphi I used them often for day of the week,
- Andrei Alexandrescu (9/17) Aug 29 2016 Since then I've decided to confine static bounds computation to a
- tsbockman (11/12) Aug 29 2016 They should. Generally speaking, if that doesn't produce
- Chris Wright (4/10) Aug 29 2016 The ranges expand very fast. Addition and subtraction double the range,
- tsbockman (17/25) Aug 30 2016 Ranges don't always grow. Some operations will also cause them to
- Chris Wright (33/48) Aug 30 2016 We can only track types, not values, and that strongly hampers our
- Andrei Alexandrescu (3/13) Aug 30 2016 Yah, was thinking of the same. So before long you run into the bounds,

Currently checkedint (https://github.com/dlang/phobos/pull/4613) stands at 2432 lines and implements a variety of checking behaviors. At this point I just figured I can very easily add custom bounds, e.g. an int limited to 0 through 100 etc. It would take just a few lines because a lot of support is there (bounds hooks, custom min/max) anyway. However, I fear it might complicate the definition and just be a bit much. Here's the design I'm thinking of.

Current:

struct Checkedint(T, Hook = Abort);

Under consideration:

struct Checkedint(T, Hook = Abort, T min = T.min, T max = T.max);

It's easy to take the limits into account, but then there are a few messes to mind:

* When assigning a Checked to another, should the limits be matched statically or checked dynamically?
* When composing, do the limits compose meaningfully?
* How to negotiate when both the user of Checked and the Hook need to customize the limits? (E.g. if you look at WithNaN, it needs to reserve a special value, thus limiting the representable range.)

I think all of these questions have answers, but I wanted to gauge the interest in bounded checked integrals. Would the need for them justify additional complications in the definition?

Andrei
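A minimal sketch of how the signature under consideration could behave. The `Abort` placeholder, the constructor-time check, and the `Percent` alias are illustrative stand-ins, not the actual checkedint machinery from the PR:

```d
// Sketch only: the real Checkedint has a full hook protocol; here the
// hook is an empty placeholder and bounds are enforced in the ctor.
struct Abort {} // placeholder hook, not the Phobos one

struct Checkedint(T, Hook = Abort, T min = T.min, T max = T.max)
{
    private T payload;

    this(T value)
    {
        // dynamic bounds check on construction/assignment
        assert(value >= min && value <= max, "value out of bounds");
        payload = value;
    }

    T get() const { return payload; }
}

// "an int limited to 0 through 100"
alias Percent = Checkedint!(int, Abort, 0, 100);

void main()
{
    auto p = Percent(42);
    assert(p.get == 42);
    // Percent(101) would trip the bounds assertion at run time.
}
```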

Aug 23 2016

On Tuesday, 23 August 2016 at 20:40:06 UTC, Andrei Alexandrescu wrote:
> I think all of these questions have answers, but I wanted to gauge the interest in bounded checked integrals. Would the need for them justify additional complications in the definition?

Well, this occurs very frequently in my code:

struct S
{
    int foo;
    invariant { assert(ordered_(MIN, foo, MAX)); }
}

If bounded integers can handle this for me then you've got my support. Assuming the checks disappear in -release.

Aug 23 2016

On Tuesday, 23 August 2016 at 20:40:06 UTC, Andrei Alexandrescu wrote:
> * When assigning a Checked to another, should the limits be matched statically or checked dynamically?

The ideal would be implicit cast allowed when widening, explicit required when narrowing. As I think we will have to settle for always explicit, I would say dynamic check. This of course opens the way to many customizations: a user may want to customize what happens when the range check fails. Another user may even want a switch to statically disallow narrowing conversions.

> * When composing, do the limits compose meaningfully?

Every layer should build on the limits exposed by the underlying layer, so I don't see what may go wrong.

> * How to negotiate when both the user of Checked and the Hook need to customize the limits? (e.g. if you look at WithNaN it needs to reserve a special value, thus limiting the representable range).

The idea is that WithNaN will not see the limits of the underlying types, but the limits set by the user. How to do this, see below.

> I think all of these questions have answers, but I wanted to gauge the interest in bounded checked integrals. Would the need for them justify additional complications in the definition?

From what I can see in my experience, saturation with custom min/max pops up once in a while in projects. So it's nice to have, even if it is a slight complication in the definition.

> Under consideration:
>
> struct Checkedint(T, Hook = Abort, T min = T.min, T max = T.max);

Can I suggest a different approach? Different bounds implemented as a hook?

alias MyBoundedInt = CheckedInt!(int, WithBounds!(0, 42));

The WithBounds hook would only define min, max and opCast. It may have other optional parameters, like whether to statically disallow narrowing casts, or what to do if a narrowing cast is found impossible at runtime. What do you think about this?
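A minimal sketch of what such a WithBounds hook could look like. The member names (`min`, `max`, the cast helper) are assumptions modeled loosely on checkedint's hook convention; this exact hook does not exist in the PR:

```d
// Illustrative hook sketch: exposes limits and checks narrowing casts.
// hookOpCast is an assumed entry-point name, not a guaranteed API.
template WithBounds(alias lo, alias hi)
{
    struct WithBounds
    {
        enum min = lo; // limits the Checked wrapper would consult
        enum max = hi;

        // check a narrowing conversion at run time
        static Dst hookOpCast(Dst, Src)(Src value)
        {
            assert(value >= lo && value <= hi,
                "narrowing cast out of bounds");
            return cast(Dst) value;
        }
    }
}

void main()
{
    alias B = WithBounds!(0, 42);
    static assert(B.min == 0 && B.max == 42);
    assert(B.hookOpCast!int(7L) == 7);
    // B.hookOpCast!int(50L) would fail the range check.
}
```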

Aug 23 2016

On 08/23/2016 05:08 PM, Lodovico Giaretta wrote:
> On Tuesday, 23 August 2016 at 20:40:06 UTC, Andrei Alexandrescu wrote:
>> * When assigning a Checked to another, should the limits be matched statically or checked dynamically?
>
> The ideal would be implicit cast allowed when widening, explicit required when narrowing. As I think we will have to settle for always explicit, I would say dynamic check. This of course opens the way to many customizations: a user may want to customize what happens when the range check fails. Another user may even want a switch to statically disallow narrowing conversions.

>> * When composing, do the limits compose meaningfully?
>
> Every layer should build on the limits exposed by the underlying layer, so I don't see what may go wrong.

That wouldn't work for e.g. NaN. A NaN wants to "steal" a value but only if that's not available. It's complicated.

>> * How to negotiate when both the user of Checked and the Hook need to customize the limits? (e.g. if you look at WithNaN it needs to reserve a special value, thus limiting the representable range).
>
> The idea is that WithNaN will not see the limits of the underlying types, but the limits set by the user. How to do this, see below.

Nonono, NaN should be able to see the underlying type.

>> I think all of these questions have answers, but I wanted to gauge the interest in bounded checked integrals. Would the need for them justify additional complications in the definition?
>
> From what I can see in my experience, saturation with custom min/max pops up once in a while in projects. So it's nice to have, even if it is a slight complication in the definition.
>
>> Under consideration:
>>
>> struct Checkedint(T, Hook = Abort, T min = T.min, T max = T.max);
>
> Can I suggest a different approach? Different bounds implemented as a hook?
>
> alias MyBoundedInt = CheckedInt!(int, WithBounds!(0, 42));
>
> The WithBounds hook would only define min, max and opCast. It may have other optional parameters, like whether to statically disallow narrowing casts, or what to do if a narrowing cast is found impossible at runtime. What do you think about this?

It's the first thing I tried and it doesn't do well. The first thing you want with a bounded type is to combine it with any other policy (abort, assert, saturate...). This kind of horizontal communication between hooks is tenuous and not supported well by the design.

Andrei

Aug 23 2016

On Wednesday, 24 August 2016 at 00:40:15 UTC, Andrei Alexandrescu wrote:
> That wouldn't work for e.g. NaN. A NaN wants to "steal" a value but only if that's not available. It's complicated.

Ok, I get your point. WithNaN should not waste the "official range" when there is a huge unused representable range that can be exploited to choose the special NaN value.

While this would be a very good thing, it poses a problem. If a hook is allowed to special-case values outside the official range, then it may happen that, when composing two hooks, they both special-case the same value, leading to wrong results. For example, in some statistics-oriented environments, the special value NA (not available), very similar to NaN, is used to represent missing input data; a WithNA hook may decide to use the same policy used by WithNaN to reserve its NA value, causing corruption, as the same value has two meanings.

So, the first solution is that hooks should always reserve special values from the edges of the official range, and expose to the higher layer a correctly reduced official range.

The second solution is having two ranges: the "official range" and the "usable range", where the usable range is the full representable range minus the special values already used. Hooks must take special values from the usable range and reduce it accordingly, while leaving the official range intact.

The third (more complex) solution, which also leaves the official range intact, is that hooks should take special values from wherever outside the official range and in some way communicate to the higher level which values are already taken. This is not impossible nor conceptually difficult, but it may not be worth it.
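The first solution above (steal from the edge, republish a reduced range) can be sketched in a few lines. All names here are illustrative; neither `ReserveLowest` nor these layer aliases exist in the PR:

```d
// Sketch of "solution one": each hook steals its sentinel from the
// edge of the range handed to it and exposes the reduced range, so
// stacked hooks cannot collide by construction.
struct ReserveLowest(T, T lo, T hi)
{
    enum T special = lo; // the stolen sentinel value
    enum T min = lo + 1; // reduced range exposed to the next layer
    enum T max = hi;
}

void main()
{
    // A WithNaN-style layer over the full int range...
    alias NaNLayer = ReserveLowest!(int, int.min, int.max);
    // ...and a WithNA-style layer stacked on top of it.
    alias NALayer = ReserveLowest!(int, NaNLayer.min, NaNLayer.max);

    assert(NaNLayer.special == int.min);
    assert(NALayer.special == int.min + 1); // distinct sentinel
    assert(NALayer.min == int.min + 2);     // range reduced twice
}
```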

Aug 24 2016

On 08/24/2016 05:05 AM, Lodovico Giaretta wrote:
> On Wednesday, 24 August 2016 at 00:40:15 UTC, Andrei Alexandrescu wrote:
>> That wouldn't work for e.g. NaN. A NaN wants to "steal" a value but only if that's not available. It's complicated.
>
> Ok, I get your point. WithNaN should not waste the "official range" when there is a huge unused representable range that can be exploited to choose the special NaN value.

Yes, thanks for divining the meaning from the poor explanation.

> While this would be a very good thing, it poses a problem. If a hook is allowed to special-case values outside the official range, then it may happen that, when composing two hooks, they both special-case the same value, leading to wrong results. For example, in some statistics-oriented environments, the special value NA (not available), very similar to NaN, is used to represent missing input data; a WithNA hook may decide to use the same policy used by WithNaN to reserve its NA value, causing corruption, as the same value has two meanings.

Yes, that is indeed a potential problem.

> So, the first solution is that hooks should always reserve special values from the edges of the official range, and expose to the higher layer a correctly reduced official range. The second solution is having two ranges: the "official range" and the "usable range", where the usable range is the full representable range minus the special values already used. Hooks must take special values from the usable range and reduce it accordingly, while leaving the official range intact. The third (more complex) solution, which also leaves the official range intact, is that hooks should take special values from wherever outside the official range and in some way communicate to the higher level which values are already taken. This is not impossible nor conceptually difficult, but it may not be worth it.

The fourth solution is to document hooks appropriately and acknowledge the fact (which is already true) that not all hooks can work together.

Andrei

Aug 24 2016

What about two types?

alias BI = Bound!int;
alias CBI = CheckedInt!BI;

If Bound behaves as an integer, people can choose.

Aug 24 2016

On Tuesday, 23 August 2016 at 20:40:06 UTC, Andrei Alexandrescu wrote:
> * When composing, do the limits compose meaningfully?

Just to make sure I understand what you mean by "composing limits", do you mean this?

alias NonNegativeInt = CheckedInt!(int, Abort, 0, int.max);
alias Ubyte = CheckedInt!(NonNegativeInt, Abort, NonNegativeInt(0), NonNegativeInt(255));

Ubyte b; // b is guaranteed to be in [0, 255]

And then we should expect this to fail:

alias Byte = CheckedInt!(NonNegativeInt, Abort, NonNegativeInt(-128), NonNegativeInt(127));

Aug 23 2016

On Tuesday, 23 August 2016 at 20:40:06 UTC, Andrei Alexandrescu wrote:
> Currently checkedint (https://github.com/dlang/phobos/pull/4613) stands at 2432 lines and implements a variety of checking behaviors. At this point I just figured I can very easily add custom bounds, e.g. an int limited to 0 through 100 etc. It would take just a few lines because a lot of support is there (bounds hooks, custom min/max) anyway.
>
> struct Checkedint(T, Hook = Abort, T min = T.min, T max = T.max);

For comparison take a look at my solution at:

https://github.com/nordlow/phobos-next/blob/master/src/bound.d

It may answer some of your questions. The CT-param `exceptional` should be related to your `Hook`. The solution is currently bloated as it is interleaved with an Optional implementation and packing logic. The CT-params `optional`, `exceptional`, `packed` and `signed` should probably be merged and stored in a Flags-type.

> * When assigning a Checked to another, should the limits be matched statically or checked dynamically?

I'm not sure. I believe the answer lies in the semantic (real-life) interpretations of the bounded types. If the ranges are related to physical boundaries it should probably be checked at compile time, like we do with units-of-measurement libraries.

> * When composing, do the limits compose meaningfully?

What do you mean by composing? If you're talking about value-range algebra then take a look at the definitions of `opBinary`, `min`, `max` and `abs` in my `bound.d`.

Aug 24 2016

On Wednesday, 24 August 2016 at 08:39:28 UTC, Nordlöw wrote:
> Such an implementation is `IndexedBy` at https://github.com/nordlow/phobos-next/blob/master/src/typecons_ex.d#L167

Correction: the Index type here is actually an Ada-style modulo type which does wrap-around instead of truncation.

Aug 24 2016

On Tuesday, 23 August 2016 at 20:40:06 UTC, Andrei Alexandrescu wrote:
> Currently checkedint (https://github.com/dlang/phobos/pull/4613) stands at 2432 lines and implements a variety of checking behaviors. [...] Would the need for them justify additional complications in the definition?

It's just an abortion. There is a C++ bounded::integer library by David Stone that looks much better than what you have here. For a language that's claiming to be a better C++ and with better CT features, I would expect something more elegant and useful. Besides, this only works with integers. You've been bashing Go for lack of generics, so how about we see something that works with other types too, no?

Perl 6:

subset MyInt of Int where (4 < * < 123);

or for a string:

subset MyStr of Str where (0 < *.chars < 100);

Just beautiful, and what you would expect from a language that's trying to be better than the past.

Aug 24 2016

Update: I've taken a shot at adding bounds, see https://github.com/dlang/phobos/pull/4613/commits/bbdcea723e3dd98a979aef06a6786645647a778. Here are a few notes:

* The Hook API works nicely with built-in or custom hooks, so no change there was necessary.

* The constructor got quite a bit more complicated because it needs to evaluate statically the bounds, which in turn evaluate the constructor statically. My solution was to define minRep and maxRep as the built-in representations of the min and max, and use those inside the constructor.

* There's trouble about choosing between a conservative and a statically precise approach to bounds computation. The simplest example is computing -x where x has type Checked!int. The precise result type is Checked!(int, int.min + 1, int.max). (If x == int.min, attempting to evaluate -x aborts the program.) The precise type is technically correct, but I assume in practice it'll just create a lot of annoyance due to the fact that x and -x have "slightly" different types.

* It gets worse with binary operators. Implementing the precise limits is a mini-project of its own (akin to the VRP algorithms). Going conservative (as the current work does) is annoying in a different way - any binary operator loses the custom limits information that the user in all likelihood had carefully planted.

My decision following this experiment is to drop support for custom limits in Checked. The complexity/power balance is untenable. However, the hooks should be reusable with a different artifact called e.g.:

struct Bounded(T, T min, T max, Hook);

That should do precise static bounds computation in all VRP glory and interact well with Checked so the two can be composed.

Andrei
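The precise bounds for unary negation described above follow a small rule: for x in [lo, hi], -x lies in [-hi, -lo], except that -T.min is not representable, so when lo == T.min the upper bound clamps to T.max and the operation must still check at run time. A sketch with an illustrative helper (not part of the PR):

```d
// Illustrative compile-time bounds computation for -x on a bounded
// integer; assumes hi > T.min so -hi is representable.
template NegatedBounds(T, T lo, T hi)
{
    enum T min = -hi;
    enum T max = (lo == T.min) ? T.max : -lo;
}

void main()
{
    // Checked!int covers [int.min, int.max]; negating it yields the
    // precise type Checked!(int, int.min + 1, int.max) from the post.
    static assert(NegatedBounds!(int, int.min, int.max).min == int.min + 1);
    static assert(NegatedBounds!(int, int.min, int.max).max == int.max);

    // A custom range: x in [0, 100] gives -x in [-100, 0].
    static assert(NegatedBounds!(int, 0, 100).min == -100);
    static assert(NegatedBounds!(int, 0, 100).max == 0);
}
```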

Aug 25 2016

Am Tue, 23 Aug 2016 16:40:06 -0400 schrieb Andrei Alexandrescu <SeeWebsiteForEmail erdani.org>:

> Currently checkedint (https://github.com/dlang/phobos/pull/4613) stands at 2432 lines and implements a variety of checking behaviors. At this point I just figured I can very easily add custom bounds, e.g. an int limited to 0 through 100 etc. It would take just a few lines because a lot of support is there (bounds hooks, custom min/max) anyway.

In Pascal/Delphi I used them often for day of the week, percentages and anything else that is naturally bounded. I guess as so often with "library features" it would depend on some pull and push factors: "What module was it? What do I have to set for 'Hook' again, so I can set 'min' and 'max'?" vs. "It's the proper type to use. It saves me time here by catching bugs."

> * When assigning a Checked to another, should the limits be matched statically or checked dynamically?

As for statically checking, "a = b" should work if b's range is included in a's range, just like we can assign a ubyte to a uint. Types with only a partial overlap should probably be dealt with at runtime, the same way assignments of normal ints would be checked. As an upgrade to that, one could also statically check if the ranges have no overlap at all.

> * When composing, do the limits compose meaningfully?

You mean like when one multiplies two Checkedints?

> I think all of these questions have answers, but I wanted to gauge the interest in bounded checked integrals. Would the need for them justify additional complications in the definition?

I have no answer to that either. The following could make it more accessible, much like the two Exception constructors we have:

alias CheckedInt(T, T min, T max, Hook = Abort) = CheckedIntImpl!(T, Hook, min, max);
alias CheckedInt(T, Hook = Abort) = CheckedIntImpl!(T, Hook, T.min, T.max);
struct CheckedIntImpl(T, Hook, T min, T max);

-- Marco

Aug 29 2016

On 08/29/2016 11:33 AM, Marco Leise wrote:
> The following could make it more accessible, much like the two Exception constructors we have:
>
> alias CheckedInt(T, T min, T max, Hook = Abort) = CheckedIntImpl!(T, Hook, min, max);
> alias CheckedInt(T, Hook = Abort) = CheckedIntImpl!(T, Hook, T.min, T.max);
> struct CheckedIntImpl(T, Hook, T min, T max);

Since then I've decided to confine static bounds computation to a separate type. Checked does naturally allow for hooks that impose dynamic bounds enforcement.

One interesting tidbit - I was looking at a bounded integer library for C++ and it looks like it didn't implement the most interesting parts - computing bounds for logical operations: https://bitbucket.org/davidstone/bounded_integer/src/cd25b513c508498e882acb6543db5e08999243fb/include/bounded/detail/arithmetic/bitwise_and.hpp?at=default&fileviewer=file-view-default

Andrei

Aug 29 2016

On Tuesday, 23 August 2016 at 20:40:06 UTC, Andrei Alexandrescu wrote:
> When composing, do the limits compose meaningfully?

They should. Generally speaking, if that doesn't produce reasonable bounds (leaving aside rounding errors) at the end of the computation, it means that the logic of the computation itself is wrong.

Ordinary abstract numbers don't really have "minimum" and "maximum" values; any variable that requires them has some sort of implied unit which determines its bounds. Therefore, the bounds should follow the rules of dimensional analysis (https://en.wikipedia.org/wiki/Dimensional_analysis).

Aug 29 2016

On Mon, 29 Aug 2016 18:20:05 +0000, tsbockman wrote:
> On Tuesday, 23 August 2016 at 20:40:06 UTC, Andrei Alexandrescu wrote:
>> When composing, do the limits compose meaningfully?
>
> They should. Generally speaking, if that doesn't produce reasonable bounds (leaving aside rounding errors) at the end of the computation, it means that the logic of the computation itself is wrong.

The ranges expand very fast. Addition and subtraction double the range, multiplication squares it, exponentiation is n^^n. Larger bounds are usually not useful bounds.
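The growth rates mentioned above are easy to see with plain interval arithmetic. A small sketch (illustrative helpers, restricted to nonnegative intervals so the min/max products are trivial):

```d
// Interval arithmetic on [lo, hi] pairs; long avoids overflow in the
// multiplication example. mul assumes nonnegative intervals only.
long[2] add(long[2] a, long[2] b)
{
    return [a[0] + b[0], a[1] + b[1]];
}

long[2] mul(long[2] a, long[2] b)
{
    return [a[0] * b[0], a[1] * b[1]];
}

void main()
{
    long[2] r = [0, 100];
    assert(add(r, r) == [0L, 200L]);    // width doubles
    assert(mul(r, r) == [0L, 10_000L]); // width roughly squares
}
```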

Aug 29 2016

On Tuesday, 30 August 2016 at 03:37:06 UTC, Chris Wright wrote:
> The ranges expand very fast. Addition and subtraction double the range, multiplication squares it, exponentiation is n^^n. Larger bounds are usually not useful bounds.

Ranges don't always grow. Some operations will also cause them to shrink, if they're really being tracked correctly: bitwise AND, integer division, and modulus all tend to do so.

If the ranges are intrinsically significant (it's logically impossible for a creature to have -7 eyes) and the calculation is meaningful and correct, things should balance out at the end to produce a useful range for the final result.

On the other hand, arbitrary sanity-check ranges (it's merely extremely unlikely for a creature to have 2147483647 eyes) shouldn't be passed on to intermediate values at all; they must be assigned manually based on human intuition, and this is only really worth doing for inputs and outputs. In such cases, the best return value for binary operations would be a normal (conceptually) unbounded CheckedInt.

Give me mathematically correct bounds, or no bounds at all. Anything else will cause problems.

Aug 30 2016

On Tue, 30 Aug 2016 13:24:04 +0000, tsbockman wrote:
> Ranges don't always grow. Some operations will also cause them to shrink, if they're really being tracked correctly:

We can only track types, not values, and that strongly hampers our ability to reduce ranges. We can still do it sometimes, but no operation always reduces ranges (and every operation sometimes extends ranges).

> bitwise AND,

It tends to extend the lower bound. For instance:

Bounded!(int, -100, 5) a = -99, b = -32;
auto c = a & b;

Here, c has type Bounded!(int, -128, 5) and value -128. Similarly, Bounded!(int, 7, 55) & Bounded!(int, 12, 64) yields Bounded!(int, 0, 55). And for an odd one, Bounded!(int, 50, 55) & Bounded!(int, 60, 62) yields Bounded!(int, 48, 54). So bitwise AND *sometimes* reduces the range (in sometimes hard to predict ways), but it also sometimes extends it.

> integer division,

You can only reduce a range with integer division if the divisor's static range does not include the identity. For example (assuming the range is open on both ends):

Bounded!(int, 0, 100) a = 1, b = 100;
auto c = b / a;

c has type Bounded!(int, 0, 100) and value 100. On the other hand, it can sometimes reduce the range:

Bounded!(int, 4, 8) a = 4, b = 8;
auto c = b / a;

Here, c has type Bounded!(int, 1, 2) and value 2. And sometimes it extends the range:

Bounded!(int, -1, 1) a = -1;
Bounded!(int, 0, 128) b = 128;
auto c = b / a;

Here, c has type Bounded!(int, -128, 128) and value -128.

> and modulus all tend to do so.

Modulus can result in a range larger than its operands':

Bounded!(int, 1, 2) a = 1, b = 2;
auto c = b % a;

Here, c has type Bounded!(int, 0, 2) and value 0.
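The bitwise-AND ranges claimed above can be checked by brute force. This throwaway sketch just enumerates the operand intervals (a real Bounded type would need a closed-form bound computation, such as the minAND/maxAND algorithms from Hacker's Delight); `andRange` is an illustrative helper, not thread code:

```d
// Enumerate a & b over two closed intervals and return [min, max].
int[2] andRange(int aLo, int aHi, int bLo, int bHi)
{
    int lo = int.max, hi = int.min;
    foreach (a; aLo .. aHi + 1)
        foreach (b; bLo .. bHi + 1)
        {
            const v = a & b;
            if (v < lo) lo = v;
            if (v > hi) hi = v;
        }
    return [lo, hi];
}

void main()
{
    // the three examples from the post
    assert(andRange(-100, 5, -100, 5) == [-128, 5]);
    assert(andRange(7, 55, 12, 64) == [0, 55]);
    assert(andRange(50, 55, 60, 62) == [48, 54]);
}
```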

Aug 30 2016

On Tuesday, 30 August 2016 at 14:58:16 UTC, Chris Wright wrote:
> On Tue, 30 Aug 2016 13:24:04 +0000, tsbockman wrote:
>> Ranges don't always grow. Some operations will also cause them to shrink, if they're really being tracked correctly:
>
> We can only track types, not values, and that strongly hampers our ability to reduce ranges. We can still do it sometimes, but no operation always reduces ranges (and every operation sometimes extends ranges).

Then don't try to propagate the ranges at all. Use regular `CheckedInt` as the return type for all of `BoundInt`'s non-assignment binary operations. Make the user explicitly label which variables should be checked against what ranges, instead of automatically inserting the usually-wrong guess that every intermediate value belongs in the same narrow range as the inputs. Otherwise, `BoundInt` will generate tons of spurious exceptions, particularly for multiplication.

Expressions that are complicated enough that custom range-based sanity checks need to be done inside of them, on rvalues, should probably be broken up into smaller pieces anyway.

Aug 30 2016

On Tuesday, 30 August 2016 at 14:58:16 UTC, Chris Wright wrote:
> We can only track types, not values, and that strongly hampers our ability to reduce ranges.

Playing with some examples, I think the real issue is that precise bounds tracking requires analyzing the formula as a whole, rather than just each operation individually:

auto blend(
    Bound!(int, 0, 100) x,
    Bound!(int, 0, 100) y,
    Bound!(int, 0, 100) z)
{
    enum Bound!(int, 100, 100) h = 100;
    return ((x * z) + (y * (h - z))) / h;
}

With propagation computed at the individual-operation level, the above returns a `Bound!(int, 0, 200)`. But, using the expression as a whole, it can easily be proven that the true maximum is `100`. (With just-plain-wrong "union of the input ranges" propagation, the above will generate many spurious exceptions.)

Unfortunately, proving the true bounds statically in the general case would require something like a compile-time computer algebra system combined with AST macros. That's obviously overkill for what we're doing here.
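Brute force confirms the gap described above: per-operation interval propagation for this blend gives [0, 200] (each product is at most 10000, their sum at most 20000, divided by 100), while the whole-expression maximum really is 100. A standalone check with plain ints:

```d
// Plain-int version of the blend from the post; exhaustively scan the
// declared input ranges to find the true output range.
int blend(int x, int y, int z)
{
    return (x * z + y * (100 - z)) / 100;
}

void main()
{
    int lo = int.max, hi = int.min;
    foreach (x; 0 .. 101)
        foreach (y; 0 .. 101)
            foreach (z; 0 .. 101)
            {
                const v = blend(x, y, z);
                if (v < lo) lo = v;
                if (v > hi) hi = v;
            }
    // true range is [0, 100], not the per-operation [0, 200]
    assert(lo == 0 && hi == 100);
}
```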

Aug 30 2016

On 08/29/2016 11:37 PM, Chris Wright wrote:
> On Mon, 29 Aug 2016 18:20:05 +0000, tsbockman wrote:
>> On Tuesday, 23 August 2016 at 20:40:06 UTC, Andrei Alexandrescu wrote:
>>> When composing, do the limits compose meaningfully?
>>
>> They should. Generally speaking, if that doesn't produce reasonable bounds (leaving aside rounding errors) at the end of the computation, it means that the logic of the computation itself is wrong.
>
> The ranges expand very fast. Addition and subtraction double the range, multiplication squares it, exponentiation is n^^n. Larger bounds are usually not useful bounds.

Yah, was thinking of the same. So before long you run into the bounds, at which point it's all up to the checking policy. -- Andrei

Aug 30 2016