
digitalmars.D - Signed word lengths and indexes

reply bearophile <bearophileHUGS lycos.com> writes:
I have found a Reddit discussion few days old:
http://www.reddit.com/r/programming/comments/cdwz5/the_perils_of_unsigned_iteration_in_cc/

It contains this, that I quote (I have no idea if it's true), plus follow-ups:

At Google using uints of all kinds for anything other than bitmasks or other
inherently bit-y, non computable things is strongly discouraged. This includes
things like array sizes, and the warnings for conversion of size_t to int are
disabled. I think it's a good call.<

I have expressed similar ideas here: http://d.puremagic.com/issues/show_bug.cgi?id=3843

Unless someone explains me why I am wrong, I will keep thinking that using unsigned words to represent lengths and indexes, as D does, is wrong and unsafe, and using signed words (I think C# uses ints for that purpose) in D is a better design choice.

In a language as greatly numerically unsafe as D (silly C-derived conversion rules, fixed-sized numbers used everywhere on default, no runtime numerical overflows) the usage of unsigned numbers can be justified inside bit vectors, bitwise operations, and few other similar situations only.

If D wants to be "a systems programming language. Its focus is on combining the power and high performance of C and C++ with the programmer productivity of modern languages like Ruby and Python." it must understand that numerical safety is one of the not secondary things that make those languages as Ruby and Python more productive.

Bye,
bearophile
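A condensed D rendition of the pitfall the linked article describes, as an illustrative sketch only (the names are made up): the loop bound is unsigned, so len - 1 wraps around when the input is empty.

import std.stdio;

void main()
{
    string s = "";              // empty input
    // s.length is size_t (unsigned); when it is 0, s.length - 1 wraps
    // around to size_t.max instead of becoming -1, so the loop runs
    // and is stopped only by the array bounds check.
    for (size_t i = 0; i < s.length - 1; i++)
        writeln(s[i]);
}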
Jun 14 2010
next sibling parent reply Byron Heads <wyverex.cypher gmail.com> writes:
On Mon, 14 Jun 2010 16:52:04 -0400, bearophile wrote:

 If D wants to be "a systems programming language. Its focus is on
 combining the power and high performance of C and C++ with the
 programmer productivity of modern languages like Ruby and Python." it
 must understand that numerical safety is one of the not secondary things
 that make those languages as Ruby and Python more productive.
 
 Bye,
 bearophile

Isn't this why D has foreach and foreach_reverse?

-By
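For readers who have not used it, a minimal sketch of the foreach_reverse alternative being referred to (the array contents are just illustrative): the index never has to be decremented by hand, so there is nothing to get wrong at zero.

import std.stdio;

void main()
{
    int[] arr = [10, 20, 30];
    foreach_reverse (i, x; arr)      // i is the index, x the element
        writefln("arr[%s] = %s", i, x);
}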
Jun 14 2010
parent bearophile <bearophileHUGS lycos.com> writes:
Byron Heads:
 Isn't this why D has foreach and foreach_reverse?

If you mean the exact problem the original article was talking about, then you are right. But foreach and foreach_reverse are not enough to solve the general safety problem caused by the widespread usage of unsigned words in a language that at the same time has C conversion rules, uses mostly fixed-sized numbers, and lacks run-time integral overflow checks. Those are four things that, if present at the same time, create an explosive mix.

I am happy to see that (if that quote is right) Google's C++ coding standards agree with me about this.

Bye,
bearophile
Jun 14 2010
prev sibling next sibling parent reply div0 <div0 users.sourceforge.net> writes:
On 14/06/2010 21:52, bearophile wrote:
 I have found a Reddit discussion few days old:
 http://www.reddit.com/r/programming/comments/cdwz5/the_perils_of_unsigned_iteration_in_cc/

  It contains this, that I quote (I have no idea if it's true), plus
 follow-ups:

 At Google using uints of all kinds for anything other than bitmasks
 or other inherently bit-y, non computable things is strongly
 discouraged. This includes things like array sizes, and the
 warnings for conversion of size_t to int are disabled. I think it's
 a good call.<

I have expressed similar ideas here: http://d.puremagic.com/issues/show_bug.cgi?id=3843 Unless someone explains me why I am wrong, I will keep thinking that using unsigned words to represent lengths and indexes, as D does, is wrong and unsafe, and using signed words (I think C# uses ints for that purpose) in D is a better design choice.

Well for a start, you lose half your addressable memory. Unsigned numbers are only a problem if you don't understand how they work, but that goes for just about everything else as well.

Personally I hate the use of signed numbers as array indices; it's moronic and demonstrates the writer's lack of understanding. It's very rare to actually want to index an array with a negative number. The last time I did that was years ago, when writing in assembler, and that was an optimisation hack to squeeze maximum performance out of my code.

c.f.

Item getItem(int indx) {
   if(indx >= 0 && indx < _arr.length)
     return _arr[indx];
   throw new Error(...)
}

vs.

// cleaner no?
Item getItem(uint indx) {
   if(indx < _arr.length)
     return _arr[indx];
   throw new Error(...)
}

and backwards iteration:

for(int i = end - 1; i >= 0; --i)
  ...

vs

for(uint i = end - 1; i < length; --i)
  ...

Ok, about the same, but I find the second more clear; the i < length clearly indicates iteration over the whole array.

And that second wrong bit of code on the blog is wrong with signed numbers as well:

int len = strlen(some_c_str); // say some_c_str is empty so len = 0
int i;
for (i = 0; i < len - 1; ++i) {
  // so len - 1 == -1
  // iterate until i wraps round and becomes -1
}

Using 'int's doesn't magically fix it. Wrong code is just wrong.

I do think that allowing un-casted assignments between signed/unsigned is a problem though; that's where most of the bugs I've come across crop up. I think D should simply disallow implicit mixing of signed-ness.

Hasn't that been discussed before? (I'm not referring to the recent post in d.learn) It seems familiar.

-- 
My enormous talent is exceeded only by my outrageous laziness.
http://www.ssTk.co.uk
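A small sketch of the "un-casted assignments between signed/unsigned" point at the end of this post (the function name is hypothetical, not from the post): a negative signed value is accepted where a uint is expected, with no cast and no diagnostic, and simply becomes a huge index.

import std.stdio;

// Hypothetical function used only for illustration.
uint asIndex(uint indx)
{
    return indx;
}

void main()
{
    int i = -1;
    writeln(asIndex(i));   // prints 4294967295: -1 converted silently to uint
}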
Jun 14 2010
next sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
div0 wrote:
 I do think that allowing un-casted assignments between signed/unsigned
 is a problem though; that's where most of the bugs I've come
 across crop up. I think D should simply disallow implicit mixing of
 signed-ness.

Andrei and I went down that alley for a while. It's not practical.
 Hasn't that been discussed before? (I'm not referring to the recent post 
 in d.learn) It seems familiar.

Yes, many times.
Jun 14 2010
parent reply Alex Makhotin <alex bitprox.com> writes:
Walter Bright wrote:
 div0 wrote:
 I do think that allowing un-casted assignments between signed/unsigned
 is a problem though; that's where most of the bugs I've come
 across crop up. I think D should simply disallow implicit mixing of
 signed-ness.

Andrei and I went down that alley for a while. It's not practical.

A link to the discussion, or examples of where the explicit casts would be impractical, would help me understand that decision.

-- 
Alex Makhotin, the founder of BITPROX,
http://bitprox.com
Jun 15 2010
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Alex Makhotin wrote:
 Walter Bright wrote:
 Andrei and I went down that alley for a while. It's not practical.

A link to the discussion, or examples of where the explicit casts would be impractical, would help me understand that decision.

I don't have one, the message database of this n.g. is enormous. You can try the search box here: http://www.digitalmars.com/d/archives/digitalmars/D/index.html
Jun 15 2010
parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Walter Bright wrote:
 Alex Makhotin wrote:
 Walter Bright wrote:
 Andrei and I went down that alley for a while. It's not practical.

A link to the discussion, or examples of where the explicit casts would be impractical, would help me understand that decision.

I don't have one, the message database of this n.g. is enormous. You can try the search box here: http://www.digitalmars.com/d/archives/digitalmars/D/index.html

The discussions about polysemous types should be relevant. We tried to fix things quite valiantly. Currently I believe that improving value range propagation is the best way to go. Andrei
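For readers who have not met the term, a minimal sketch of the kind of conversion value range propagation already allows (the variable names are only illustrative): a narrowing conversion is accepted when the compiler can prove the value fits.

void main()
{
    int x = 1000;
    ubyte lowByte = x & 0xFF;   // accepted: the range of (x & 0xFF) is 0 .. 255
    // ubyte bad = x;           // rejected: a full int needs an explicit cast
}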
Jun 15 2010
prev sibling next sibling parent reply "Steven E. Harris" <seh panix.com> writes:
div0 <div0 users.sourceforge.net> writes:

 for(uint i = end - 1; i < length; --i)
   ...

What does "length" represent here? It's not clear to me how "i" descending toward zero is going to break the guard condition. -- Steven E. Harris
Jun 14 2010
next sibling parent reply BCS <none anon.com> writes:
Hello Steven,

 div0 <div0 users.sourceforge.net> writes:
 
 for(uint i = end - 1; i < length; --i)
 ...

descending toward zero is going to break the guard condition.

My thought exactly. If i<j and you --i, I'd assume i<j, if your code depends on the case where the assumption is wrong, don't ask me to do a code review because I won't sign off on it. -- ... <IXOYE><
Jun 14 2010
next sibling parent reply bearophile <bearophileHUGS lycos.com> writes:
Steven Schveighoffer:
 i is unsigned, and therefore can never be less than 0.  It's actually a  
 clever way to do it that I've never thought of.

Clever code is bad. It must be minimized. In some rare situations it becomes useful, but its usage must be seen as a failure of the programmer, that was unable to write not-clever code that does the same things. Bye, bearophile
Jun 15 2010
next sibling parent reply Pelle <pelle.mansson gmail.com> writes:
On 06/15/2010 02:10 PM, Steven Schveighoffer wrote:
 On Tue, 15 Jun 2010 07:30:52 -0400, bearophile
 <bearophileHUGS lycos.com> wrote:

 Steven Schveighoffer:
 i is unsigned, and therefore can never be less than 0. It's actually a
 clever way to do it that I've never thought of.

Clever code is bad. It must be minimized. In some rare situations it becomes useful, but its usage must be seen as a failure of the programmer, that was unable to write not-clever code that does the same things.

Clever code is bad? What are you smoking? In my opinion, clever code that is clear and concise should always be favored over code that is unnecessarily verbose.

Clever code is bad because you have to think a couple of times more every time you see it. Also, it looks wrong.
 In this particular instance, the code is both clear and concise.

 The following line of code should generate the exact same code, but is
 more verbose:

 for(uint i = end - 1; i < length && i >= 0; --i)

 The compiler will throw away the second check during optimization,
 because i is always >= 0. I don't see why such code should be preferred.

 -Steve

This will probably generate similar code:

for (uint i = end - 1; i < uint.max; --i)

Same functionality, really clever.
Jun 15 2010
parent reply Pelle <pelle.mansson gmail.com> writes:
On 06/15/2010 03:25 PM, Adam Ruppe wrote:
 On 6/15/10, Pelle<pelle.mansson gmail.com>  wrote:
 for (uint i = end - 1; i<  uint.max; --i)

 Same functionality, really clever.

That's wrong rather than clever though. for(i< length) is saying "continue as long as you are inside the array's bounds", which is exactly what you mean in that loop. The only "tricky" is the understanding that an array index is never negative, but this shouldn't require extra thought in the first place.

Using the length is meaningless, any uint >= length will work just as well. Using the length there is meaningless, since that's really not what you compare against. Notice why clever tricks are bad? They generate meaningless discussions :)
Jun 15 2010
parent Pelle <pelle.mansson gmail.com> writes:
On 06/15/2010 04:12 PM, Lars T. Kyllingstad wrote:
 On Tue, 15 Jun 2010 16:05:08 +0200, Pelle wrote:

 On 06/15/2010 03:25 PM, Adam Ruppe wrote:
 On 6/15/10, Pelle<pelle.mansson gmail.com>   wrote:
 for (uint i = end - 1; i<   uint.max; --i)

 Same functionality, really clever.

That's wrong rather than clever though. for(i< length) is saying "continue as long as you are inside the array's bounds", which is exactly what you mean in that loop. The only "tricky" is the understanding that an array index is never negative, but this shouldn't require extra thought in the first place.

Using the length is meaningless, any uint>= length will work just as well. Using the length there is meaningless, since that's really not what you compare against.

As long as you only decrease by one, your trick will work just as well. In a more general case, it won't:

for (uint i=end-1; i<uint.max; i--)
{
    if (badTiming) i--;  // Oops, we may just have set i = uint.max - 1.
}

The same can be said if you use length, and length happens to be big. You really should use continue in this case.
Jun 15 2010
prev sibling parent Justin Johansson <no spam.com> writes:
bearophile wrote:
 Steven Schveighoffer:
 i is unsigned, and therefore can never be less than 0.  It's actually a  
 clever way to do it that I've never thought of.

Clever code is bad. It must be minimized. In some rare situations it becomes useful, but its usage must be seen as a failure of the programmer, that was unable to write not-clever code that does the same things. Bye, bearophile

I agree with bearophile's sentiments. To my interpretation this means that sometimes trying to be clever is actually stupid. If I misinterpret those sentiments, please correct me.

Cheers
Justin
Jun 15 2010
prev sibling parent Kagamin <spam here.lot> writes:
Steven Schveighoffer Wrote:

 On Mon, 14 Jun 2010 21:48:10 -0400, BCS <none anon.com> wrote:
 
 Hello Steven,

 div0 <div0 users.sourceforge.net> writes:

 for(uint i = end - 1; i < length; --i)
 ...

descending toward zero is going to break the guard condition.

My thought exactly. If i<j and you --i, I'd assume i<j, if your code depends on the case where the assumption is wrong, don't ask me to do a code review because I won't sign off on it.

i is unsigned, and therefore can never be less than 0. It's actually a clever way to do it that I've never thought of. Read it more like this:

for(uint i = end - 1; i < length && i >= 0; --i)

But the i >= 0 is implicit because i is unsigned.

Jun 15 2010
prev sibling next sibling parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Mon, 14 Jun 2010 21:48:10 -0400, BCS <none anon.com> wrote:

 Hello Steven,

 div0 <div0 users.sourceforge.net> writes:

 for(uint i = end - 1; i < length; --i)
 ...

descending toward zero is going to break the guard condition.

My thought exactly. If i<j and you --i, I'd assume i<j, if your code depends on the case where the assumption is wrong, don't ask me to do a code review because I won't sign off on it.

i is unsigned, and therefore can never be less than 0. It's actually a clever way to do it that I've never thought of. Read it more like this:

for(uint i = end - 1; i < length && i >= 0; --i)

But the i >= 0 is implicit because i is unsigned.

-Steve
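A runnable sketch of the mechanics described here, with purely illustrative locals for end and length: decrementing past zero wraps to uint.max, which fails the i < length test and ends the loop.

import std.stdio;

void main()
{
    int[] arr = [10, 20, 30];
    uint length = cast(uint) arr.length;
    uint end = length;

    // i runs 2, 1, 0; after 0, --i wraps to uint.max and the loop stops.
    // (If length were 0, end - 1 would already be uint.max and the body
    // would never run at all.)
    for (uint i = end - 1; i < length; --i)
        writeln(arr[i]);
}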
Jun 15 2010
next sibling parent BCS <none anon.com> writes:
Hello Steven,

 On Mon, 14 Jun 2010 21:48:10 -0400, BCS <none anon.com> wrote:
 
 Hello Steven,
 
 div0 <div0 users.sourceforge.net> writes:
 
 for(uint i = end - 1; i < length; --i)
 ...

descending toward zero is going to break the guard condition.

If i<j and you --i, I'd assume i<j, if your code depends on the case where the assumption is wrong, don't ask me to do a code review because I won't sign off on it.

a clever way to do it that I've never thought of.

It's /too/ clever. That's the problem. If you haven't seen it in a while, it's confusing and it LOOKS wrong even if you have.
 
 Read it more like this:
 
 for(uint i = end - 1; i < length && i >= 0; --i)
 
 But the i >= 0 is implicit because i is unsigned.
 

I know, that's exactly "the case where the assumption is wrong". -- ... <IXOYE><
Jun 15 2010
prev sibling parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Tue, 15 Jun 2010 10:08:38 -0400, BCS <none anon.com> wrote:

 Hello Steven,

 On Mon, 14 Jun 2010 21:48:10 -0400, BCS <none anon.com> wrote:

 Hello Steven,

 div0 <div0 users.sourceforge.net> writes:

 for(uint i = end - 1; i < length; --i)
 ...

descending toward zero is going to break the guard condition.

If i<j and you --i, I'd assume i<j, if your code depends on the case where the assumption is wrong, don't ask me to do a code review because I won't sign off on it.

a clever way to do it that I've never thought of.

It's /too/ clever. That's the problem. If you haven't seen it in a while, it's confusing and it LOOKS wrong even if you have.

This is easily solved - put in a comment. I frequently put comments in my code because I know I'm going to forget why I did something.
  Read it more like this:
  for(uint i = end - 1; i < length && i >= 0; --i)
  But the i >= 0 is implicit because i is unsigned.

I know, that's exactly "the case where the assumption is wrong".

Reading code assuming integer wrapping never occurs is a big mistake. You should learn to assume wrapping is always possible. -Steve
Jun 15 2010
next sibling parent BCS <none anon.com> writes:
Hello Steven,

 On Tue, 15 Jun 2010 10:08:38 -0400, BCS <none anon.com> wrote:
 
 Hello Steven,
 
 On Mon, 14 Jun 2010 21:48:10 -0400, BCS <none anon.com> wrote:
 
 Hello Steven,
 
 div0 <div0 users.sourceforge.net> writes:
 
 for(uint i = end - 1; i < length; --i)
 ...

descending toward zero is going to break the guard condition.

If i<j and you --i, I'd assume i<j, if your code depends on the case where the assumption is wrong, don't ask me to do a code review because I won't sign off on it.

actually a clever way to do it that I've never thought of.

while, it's confusing and it LOOKS wrong even if you have.

in my code because I know I'm going to forget why I did something.

All else being equal, code that *requires* comments to understand is inferior to code that doesn't.
 Read it more like this:
 for(uint i = end - 1; i < length && i >= 0; --i)
 But the i >= 0 is implicit because i is unsigned.


You should learn to assume wrapping is always possible.

You should learn to write code where I and everyone else don't /need/ to assume it is possible. (Personally, I find it marginally offensive/greedy when someone's first proposal as to how to fix a problem is for the rest of the world to change and the second option is for the person to change.)

-- 
... <IXOYE><
Jun 15 2010
prev sibling parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Tue, 15 Jun 2010 11:47:34 -0400, BCS <none anon.com> wrote:

 Hello Steven,

 This is easily solved - put in a comment.  I frequently put comments
 in my  code because I know I'm going to forget why I did something.

All else being equal, code that *requires* comments to understand is inferior to code that doesn't.

Code should *always* have comments. I hate reading code that doesn't have them; comments let you understand what the person was thinking.

That being said, I don't think this construct requires comments. Maybe a note like 'uses underflow' or something to let the reader know the writer was aware of the issue and did it on purpose, but a comment is not essential to understanding the code.

*That* being said, I don't expect to use this construct often. Typically one iterates forwards through an array, and foreach is much better suited for iteration anyways.
 Reading code assuming integer wrapping never occurs is a big mistake.
 You  should learn to assume wrapping is always possible.

You should learn to write code where I and everyone else don't /need/ to assume it is possible.

Why? If you can't understand/spot overflow/underflow problems, then why should I cater to you? It's like lowering academic testing standards for school children so they can pass on to the next grade.
 (Personally, I find it marginally offensive/greedy when someone's first
 proposal as to how to fix a problem is for the rest of the world to
 change and the second option is for the person to change.)

Why is it offensive if I expect a code reviewer to take overflow into consideration when reviewing code? It's not some sort of snobbery, I just expect reviewers to be competent. -Steve
Jun 15 2010
next sibling parent BCS <none anon.com> writes:
Hello Steven,

 On Tue, 15 Jun 2010 11:47:34 -0400, BCS <none anon.com> wrote:
 
 Hello Steven,
 
 This is easily solved - put in a comment.  I frequently put comments
 in my  code because I know I'm going to forget why I did something.
 

inferior to code that doesn't.

have comments, it allows you to understand what the person is thinking.

I agree. It should have comments. But if stripping them out would render the code unmaintainable, that indicates to me that the code is likely too complex. It's a sliding scale: the more difference the comments make, the more of an issue it is. And again, this is an "all else being equal" case; given two options and nothing else to choose between them, I'll pick the one that needs fewer comments.
 Reading code assuming integer wrapping never occurs is a big
 mistake. You  should learn to assume wrapping is always possible.
 

/need/ to assume it is possible.

why should I cater to you? It's like lowering academic testing standards for school children so they can pass on to the next grade.

The way people's brains are wired, the first thought people will have about that code is wrong. If that can be avoided, why not avoid it?
 
 (Personally, I find it marginally offensive/greedy when someone's
 first proposal as to how to fix a problem is for the rest of the
 world to change and the second option is for the person to change.)
 

consideration when reviewing code

That's /not/ offensive. For one thing, only very few people will ever need to be involved in that. The reason I wouldn't let it pass code review has zero to do with me not understanding it (I do understand for one thing) but has 100% with anyone who ever needs to touch the code needing to understand it. That is an open set (and that is why I find it marginally offensive). The cost of putting something in your code that is harder (note I'm not saying "hard") to understand goes up the more successful the code is and is effectively unbounded.
 It's not some sort of snobbery, I
 just  expect reviewers to be competent.

I expect that too. I also expect people reading my code (for review or what-not) to have better things to do with their time than figure out clever code.

-- 
... <IXOYE><
Jun 15 2010
prev sibling parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Tue, 15 Jun 2010 16:07:26 -0400, BCS <none anon.com> wrote:

 Hello Steven,
 Why?  If you can't understand/spot overflow/underflow problems, then
 why  should I cater to you?  It's like lowering academic testing
 standards for  school children so they can pass on to the next grade.

The way people's brains are wired, the first thought people will have about that code is wrong. If that can be avoided, why not avoid it?

Because the alternatives are uglier, and it's not as easy to see subtle sign problems with them. The code we are discussing has no such subtle problems since all arithmetic/comparison is done with unsigned values.
 Why is it offensive if I expect a code reviewer to take overflow into
 consideration when reviewing code

That's /not/ offensive. For one thing, only very few people will ever need to be involved in that. The reason I wouldn't let it pass code review has zero to do with me not understanding it (I do understand for one thing) but has 100% with anyone who ever needs to touch the code needing to understand it. That is an open set (and that is why I find it marginally offensive). The cost of putting something in your code that is harder (note I'm not saying "hard") to understand goes up the more successful the code is and is effectively unbounded.

So I have to worry about substandard coders trying to understand my code? If anything, they ask a question, and it is explained to them. There is no trickery or deception or obfuscation. I'd expect a coder who understands bitwise operations to understand this code no problem. I would not, on the other hand, expect a reasonably knowledgeable coder to see subtle sign errors due to comparing/subtracting signed and unsigned integers. Those are much trickier to see, even for experienced coders. In other words, the code looks strange, but is not hiding anything. Code that looks correct but contains a subtle sign bug is worse.
 It's not some sort of snobbery, I
 just  expect reviewers to be competent.

I expect that too. I also expect people reading my code (for review or what-not) to have better things to do with their time than figure out clever code.

I guess I'd say that's a prejudice against learning new code tricks because not everybody knows them. It sounds foolish to me. -Steve
Jun 15 2010
parent BCS <none anon.com> writes:
Hello Steven,

 On Tue, 15 Jun 2010 16:07:26 -0400, BCS <none anon.com> wrote:
 
 Hello Steven,
 
 Why is it offensive if I expect a code reviewer to take overflow
 into consideration when reviewing code
 

need to be involved in that. The reason I wouldn't let it pass code review has zero to do with me not understanding it (I do understand for one thing) but has 100% with anyone who ever needs to touch the code needing to understand it. That is an open set (and that is why I find it marginally offensive). The cost of putting something in your code that is harder (note I'm not saying "hard") to understand goes up the more successful the code is and is effectively unbounded.

code? If anything, they ask a question, and it is explained to them.

If *any* user *ever* has to ask a question about how code that does something as simple as looping over an array backwards works, the author has failed. If even a handful of users take long enough to understand it that they even notice they are thinking about it, the author didn't do a good job.

I guess I can restate my opinion as: I'm (slightly) offended that you are asking me to think about something that trivial. Would you rather I spend any time thinking about that, or would you rather I spend it thinking about the rest of your code?
 In other words, the code looks strange, but is not hiding anything.
 Code  that looks correct but contains a subtle sign bug is worse.
 

Looks correct & is correct > looks wrong & is wrong > looks wrong and isn't > looks right and isn't

You might talk me into switching the middle two, but they are darn close.
 It's not some sort of snobbery, I
 just  expect reviewers to be competent.

what-not) to have better things to do with their time than figure out clever code.

because not everybody knows them. It sounds foolish to me.

I have no problem with code tricks. I have problems with complex code where simpler, less interesting code does just as well. I guess we aren't likely to agree on this so I'll just say: may you maintain interesting code.

-- 
... <IXOYE><
Jun 15 2010
prev sibling next sibling parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Tue, 15 Jun 2010 07:30:52 -0400, bearophile <bearophileHUGS lycos.com>  
wrote:

 Steven Schveighoffer:
 i is unsigned, and therefore can never be less than 0.  It's actually a
 clever way to do it that I've never thought of.

Clever code is bad. It must be minimized. In some rare situations it becomes useful, but its usage must be seen as a failure of the programmer, that was unable to write not-clever code that does the same things.

Clever code is bad? What are you smoking? In my opinion, clever code that is clear and concise should always be favored over code that is unnecessarily verbose.

In this particular instance, the code is both clear and concise. The following line of code should generate the exact same code, but is more verbose:

for(uint i = end - 1; i < length && i >= 0; --i)

The compiler will throw away the second check during optimization, because i is always >= 0. I don't see why such code should be preferred.

-Steve
Jun 15 2010
next sibling parent BCS <none anon.com> writes:
Hello Steven,

 On Tue, 15 Jun 2010 07:30:52 -0400, bearophile
 <bearophileHUGS lycos.com>  wrote:
 
 Steven Schveighoffer:
 
 i is unsigned, and therefore can never be less than 0.  It's
 actually a clever way to do it that I've never thought of.
 

becomes useful, but its usage must be seen as a failure of the programmer, that was unable to write not-clever code that does the same things.

that is clear and concise should always be favored over code that is unnecessarily verbose.

Clever in my book normally equates to: requires extra thought to create and read. The exact opposite of clever is not dumb, but simple: with very un-clever code the reader is I/O bound, they can understand as fast as they can read it.
 
 In this particular instance, the code is both clear and concise.

That code might be concise but it is not clear.
 
 The following line of code should generate the exact same code, but is
 more verbose:
 
 for(uint i = end - 1; i < length && i >= 0; --i)

That code is just as bad IMO and for exactly the same reason: you are counting on underflow and wrapping to make an i<j test start failing after i decreases.
 
 The compiler will throw away the second check during optimization,
 because  i is always >= 0.  I don't see why such code should be
 preferred.
 
 -Steve
 

... <IXOYE><
Jun 15 2010
prev sibling next sibling parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Tue, 15 Jun 2010 10:34:21 -0400, BCS <none anon.com> wrote:

 Hello Steven,

 On Tue, 15 Jun 2010 07:30:52 -0400, bearophile
 <bearophileHUGS lycos.com>  wrote:

 Steven Schveighoffer:

 i is unsigned, and therefore can never be less than 0.  It's
 actually a clever way to do it that I've never thought of.

becomes useful, but its usage must be seen as a failure of the programmer, that was unable to write not-clever code that does the same things.

that is clear and concise should always be favored over code that is unnecessarily verbose.

Clever in my book normally equates to: requires extra thought to create and read. The exact opposite of clever is not dumb, but simple: with very un-clever code the reader is I/O bound, they can understand as fast as they can read it.

Clever code does not have to be hard to understand. In this case, it's not hard to understand. You admit yourself that you understood it within a second ;)
  In this particular instance, the code is both clear and concise.

That code might be concise but it is not clear.

Since uint is declared inside the loop statement, I'd say it is clear -- it's not open to misinterpretation. If i was defined elsewhere, I'd agree.
  The following line of code should generate the exact same code, but is
 more verbose:
  for(uint i = end - 1; i < length && i >= 0; --i)

That code is just as bad IMO and for exactly the same reason: you are counting on underflow and wrapping to make an i<j test start failing after i decreases.

Well, I guess that's one way to look at it. But what I like a lot about the original example is that there is no mixing of unsigned/signed types. You are always dealing with unsigned values, so you do not have to worry about integer promotion. Thinking about unsigned arithmetic is sometimes difficult, but if you understand the rules, using underflow to your advantage is fine IMO.

-Steve
Jun 15 2010
parent BCS <none anon.com> writes:
Hello Steven,

 On Tue, 15 Jun 2010 10:34:21 -0400, BCS <none anon.com> wrote:
 
 Hello Steven,
 
 On Tue, 15 Jun 2010 07:30:52 -0400, bearophile
 <bearophileHUGS lycos.com>  wrote:
 Steven Schveighoffer:
 
 i is unsigned, and therefore can never be less than 0.  It's
 actually a clever way to do it that I've never thought of.
 

it becomes useful, but its usage must be seen as a failure of the programmer, that was unable to write not-clever code that does the same things.

code that is clear and concise should always be favored over code that is unnecessarily verbose.

create and read. The exact opposite of clever is not dumb, but simple: with very un-clever code the reader is I/O bound, they can understand as fast as they can read it.

it's not hard to understand. You admit yourself that you understood it within a second ;)

Yup, and a second is several times too long for code that accomplishes something that simple.
 In this particular instance, the code is both clear and concise.
 


-- it's not open to misinterpretation. If i was defined elsewhere, I'd agree.

That i is a uint is clear, but any code that depends on underflow is IMO not clear, as it requires thinking in a (for most people) less than intuitive way.
 if you understand the rules

Other forms avoid that requirement. The fewer constraints/requirements you place on the reader of code the better. I think the difference of opinion here stems from you basing your assessment on what is required of the (single) person who writes the code, whereas I'm basing my assessment on what is required of the (open set of) people who read the code.

-- 
... <IXOYE><
Jun 15 2010
prev sibling next sibling parent Adam Ruppe <destructionator gmail.com> writes:
Not referring to anyone in particular, but it just occurred to me:

Python's use of -1 to mean length, and -2 to mean length -1 isn't
signed ints... it is using overflow! .... sort of. The thing is that
they are overflowing at a dynamic amount (array.length) instead of the
fixed size.

Actually, if it is treated as full blown proper overflow, you could
get some potentially useful stuff out of it. Given length == 5, -1 >
3. I don't know if Python actually lets you do that, but I doubt it.


But I just had a chuckle about that thought :)
Jun 15 2010
prev sibling parent Graham Fawcett <fawcett uwindsor.ca> writes:
On Tue, 15 Jun 2010 11:26:34 -0400, Adam Ruppe wrote:

 Not referring to anyone in particular, but it just occurred to me:
 
 Python's use of -1 to mean length, and -2 to mean length -1 isn't signed
 ints... it is using overflow! .... sort of. The thing is that they are
 overflowing at a dynamic amount (array.length) instead of the fixed
 size.

Well, conceptually maybe. :) Python integers don't overflow, they automatically convert up to bigints. It's more correct to just say that Python specifies that a negative array index means a reference from the right end.
 Actually, if it is treated as full blown proper overflow, you could get
 some potentially useful stuff out of it. Given length == 5, -1 > 3. I
 don't know if Python actually lets you do that, but I doubt it.

Only in explicit modular arithmetic, e.g. '-1 % 5 == 4'. You can slice a Python list using both positive and negative indexes:

  [10,20,30,40,50][3:-1]  ==>  [40]

...but that doesn't imply overflow or modular arithmetic: it's just the array-indexing contract.

Graham
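For comparison, D spells "count from the right end" with the $ symbol inside an index or slice rather than with negative indexes; a minimal illustrative sketch:

import std.stdio;

void main()
{
    int[] a = [10, 20, 30, 40, 50];
    writeln(a[$ - 1]);       // 50: $ is the array's length inside the brackets
    writeln(a[3 .. $ - 1]);  // [40], the same slice as the Python [3:-1] above
}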
 
 
 But I just had a chuckle about that thought :)

Jun 15 2010
prev sibling next sibling parent reply Adam Ruppe <destructionator gmail.com> writes:
On 6/15/10, Pelle <pelle.mansson gmail.com> wrote:
 for (uint i = end - 1; i < uint.max; --i)

 Same functionality, really clever.

That's wrong rather than clever though. for(i < length) is saying "continue as long as you are inside the array's bounds", which is exactly what you mean in that loop. The only "tricky" is the understanding that an array index is never negative, but this shouldn't require extra thought in the first place.
Jun 15 2010
parent BCS <none anon.com> writes:
Hello Adam,

 but this shouldn't require extra thought in the first place.

shouldn't != doesn't. When I first saw the code, it took me about a second to go from "backwards loop" to "wait, that's wrong" to "Oh, I guess that works". That's two stages and 750ms too long.

How would I write the loop?

foreach_reverse(uint i; 0 .. length) { ... }

or

for(uint i = length; i > 0;) { --i; ... }

or

for(int i = length - 1; i >= 0; --i) { ... }

or

uint i = length - 1;
do { ... } while(i-- > 0);

None of those at first glance seem to be wrong or work differently than they do.

-- 
... <IXOYE><
Jun 15 2010
prev sibling next sibling parent "Lars T. Kyllingstad" <public kyllingen.NOSPAMnet> writes:
On Tue, 15 Jun 2010 16:05:08 +0200, Pelle wrote:

 On 06/15/2010 03:25 PM, Adam Ruppe wrote:
 On 6/15/10, Pelle<pelle.mansson gmail.com>  wrote:
 for (uint i = end - 1; i<  uint.max; --i)

 Same functionality, really clever.

That's wrong rather than clever though. for(i< length) is saying "continue as long as you are inside the array's bounds", which is exactly what you mean in that loop. The only "tricky" is the understanding that an array index is never negative, but this shouldn't require extra thought in the first place.

Using the length is meaningless, any uint >= length will work just as well. Using the length there is meaningless, since that's really not what you compare against.

As long as you only decrease by one, your trick will work just as well. In a more general case, it won't:

for (uint i=end-1; i<uint.max; i--)
{
    if (badTiming) i--;  // Oops, we may just have set i = uint.max - 1.
}
 Notice why clever tricks are bad? They generate meaningless discussions
 :)

I don't think the discussion is meaningless. I learned a new trick (or a new abomination, depending on your viewpoint), that I'll keep in mind next time I write a similar loop. ;) -Lars
Jun 15 2010
prev sibling next sibling parent Adam Ruppe <destructionator gmail.com> writes:
On 6/15/10, Pelle <pelle.mansson gmail.com> wrote:
 Using the length is meaningless, any uint >= length will work just as
 well. Using the length there is meaningless, since that's really not
 what you compare against.

The nice thing about length is x < arr.length can be read as "oh, stop upon going out of bounds". Yes, other numbers would do the same thing here, but they wouldn't read the same way. It is more about what the code is saying to the human reader than to the computer.
 Notice why clever tricks are bad? They generate meaningless discussions :)

Haha, yes!
Jun 15 2010
prev sibling next sibling parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Tue, 15 Jun 2010 08:49:56 -0400, Pelle <pelle.mansson gmail.com> wrote:

 On 06/15/2010 02:10 PM, Steven Schveighoffer wrote:
 On Tue, 15 Jun 2010 07:30:52 -0400, bearophile
 <bearophileHUGS lycos.com> wrote:

 Steven Schveighoffer:
 i is unsigned, and therefore can never be less than 0. It's actually a
 clever way to do it that I've never thought of.

Clever code is bad. It must be minimized. In some rare situations it becomes useful, but its usage must be seen as a failure of the programmer, that was unable to write not-clever code that does the same things.

Clever code is bad? What are you smoking? In my opinion, clever code that is clear and concise should always be favored over code that is unnecessarily verbose.

Clever code is bad because you have to think a couple of times more every time you see it.

This is a temporary problem. Once you get used to any particular coding trick, you understand it better.
 Also, it looks wrong.

Why? i is unsigned, therefore >= 0, and must be < length. That seems reasonable and correct to me.
 In this particular instance, the code is both clear and concise.

 The following line of code should generate the exact same code, but is
 more verbose:

 for(uint i = end - 1; i < length && i >= 0; --i)

 The compiler will throw away the second check during optimization,
 because i is always >= 0. I don't see why such code should be preferred.

 -Steve

This will probably generate similar code:

for (uint i = end - 1; i < uint.max; --i)

Same functionality, really clever.

What if end > length? This is no more clever than the original, but allows bugs. It's not clever, it's wrong. In addition, it's purposefully obfuscated, while the original code is quite clear.

I can obfuscate even further, but I don't see why you would want such a thing:

for(uint i = end - 1; i < -1; --i)

"There's a fine line between clever and stupid"
--Nigel Tufnel, This is Spinal Tap

-Steve
Jun 15 2010
next sibling parent reply BCS <none anon.com> writes:
Hello Steven,

 On Tue, 15 Jun 2010 08:49:56 -0400, Pelle <pelle.mansson gmail.com>
 wrote:
 
 On 06/15/2010 02:10 PM, Steven Schveighoffer wrote:
 
 On Tue, 15 Jun 2010 07:30:52 -0400, bearophile
 <bearophileHUGS lycos.com> wrote:
 Steven Schveighoffer:
 
 i is unsigned, and therefore can never be less than 0. It's
 actually a clever way to do it that I've never thought of.
 

it becomes useful, but its usage must be seen as a failure of the programmer, that was unable to write not-clever code that does the same things.

that is clear and concise should always be favored over code that is unnecessarily verbose.

every time you see it.

coding trick, you understand it better.

People cutting you off on the road is a temporary problem, once you tell everyone off, they will understand better. Your statement might have merit if the "you" in it were the specific "you" rather than the universal "you". If that assumption is made more explicit the statement becomes blatantly silly: "This is a temporary problem. Once everyone gets used to any particular coding trick, everyone understands it better."
 Also, it looks wrong.
 

reasonable and correct to me.

It looks wrong because i only gets smaller. People are hardwired to think about continues number system, not modulo number system (explain that 0 - 1 = -1 to a 6 year old; easy, explain that 0 - 1 = 2^32-1 to them, good luck). Yes we can be trained to use such system, but most people still wont think that way reflexively. -- ... <IXOYE><
Jun 15 2010
parent Don <nospam nospam.com> writes:
Steven Schveighoffer wrote:
 On Tue, 15 Jun 2010 11:28:43 -0400, BCS <none anon.com> wrote:
 
 Hello Steven,

 On Tue, 15 Jun 2010 08:49:56 -0400, Pelle <pelle.mansson gmail.com>
 wrote:

 On 06/15/2010 02:10 PM, Steven Schveighoffer wrote:

 On Tue, 15 Jun 2010 07:30:52 -0400, bearophile
 <bearophileHUGS lycos.com> wrote:
 Steven Schveighoffer:

 i is unsigned, and therefore can never be less than 0. It's
 actually a clever way to do it that I've never thought of.

it becomes useful, but its usage must be seen as a failure of the programmer, that was unable to write not-clever code that does the same things.

that is clear and concise should always be favored over code that is unnecessarily verbose.

every time you see it.

coding trick, you understand it better.

People cutting you off on the road is a temporary problem, once you tell everyone off, they will understand better. Your statement might have merit if the "you" in it were the specific "you" rather than the universal "you".

In fact, I meant the specific you. Once a person gets used to any particular coding trick, that person will understand it better when the trick is encountered again. This is a basic principle of learning.
 Also, it looks wrong.

reasonable and correct to me.

It looks wrong because i only gets smaller. People are hardwired to think about continues number system, not modulo number system (explain that 0 - 1 = -1 to a 6 year old; easy, explain that 0 - 1 = 2^32-1 to them, good luck). Yes we can be trained to use such system, but most people still wont think that way reflexively.

It's really easy to explain. Use an odometer as an example. And we don't have to be specific in this case, you can substitute 'some very large number' for '2^32 - 1'. Besides, why does a 6-year old have to understand a for loop? D doesn't cater to people who can't grasp the modulo arithmetic concept.

I think that this discussion is becoming pointless. Let's just accept that we don't have to review code for one another, and we like it that way :)

-Steve

I would say, if you have trouble understanding that trick, you should NOT be using unsigned arithmetic EVER. And I agree that most people have trouble with it.
Jun 15 2010
prev sibling next sibling parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Tue, 15 Jun 2010 11:28:43 -0400, BCS <none anon.com> wrote:

 Hello Steven,

 On Tue, 15 Jun 2010 08:49:56 -0400, Pelle <pelle.mansson gmail.com>
 wrote:

 On 06/15/2010 02:10 PM, Steven Schveighoffer wrote:

 On Tue, 15 Jun 2010 07:30:52 -0400, bearophile
 <bearophileHUGS lycos.com> wrote:
 Steven Schveighoffer:

 i is unsigned, and therefore can never be less than 0. It's
 actually a clever way to do it that I've never thought of.

it becomes useful, but its usage must be seen as a failure of the programmer, that was unable to write not-clever code that does the same things.

that is clear and concise should always be favored over code that is unnecessarily verbose.

every time you see it.

coding trick, you understand it better.

People cutting you off on the road is a temporary problem, once you tell everyone off, they will understand better. Your statement might have merit if the "you" in it were the specific "you" rather than the universal "you".

In fact, I meant the specific you. Once a person gets used to any particular coding trick, that person will understand it better when the trick is encountered again. This is a basic principle of learning.
 Also, it looks wrong.

reasonable and correct to me.

It looks wrong because i only gets smaller. People are hardwired to think about continues number system, not modulo number system (explain that 0 - 1 = -1 to a 6 year old; easy, explain that 0 - 1 = 2^32-1 to them, good luck). Yes we can be trained to use such system, but most people still wont think that way reflexively.

It's really easy to explain. Use an odometer as an example. And we don't have to be specific in this case, you can substitute 'some very large number' for '2^32 - 1'. Besides, why does a 6-year old have to understand a for loop? D doesn't cater to people who can't grasp the modulo arithmetic concept.

I think that this discussion is becoming pointless. Let's just accept that we don't have to review code for one another, and we like it that way :)

-Steve
Jun 15 2010
parent BCS <none anon.com> writes:
Hello Steven,

 On Tue, 15 Jun 2010 11:28:43 -0400, BCS <none anon.com> wrote:
 
 Hello Steven,
 
 On Tue, 15 Jun 2010 08:49:56 -0400, Pelle <pelle.mansson gmail.com>
 wrote:
 
 Clever code is bad because you have to think a couple of times more
 every time you see it.
 

coding trick, you understand it better.

tell everyone off, they will understand better. Your statement might have merit if the "you" in it were the specific "you" rather than the universal "you".

particular coding trick, that person will understand it better when the trick is encountered again. This is a basic principle of learning.

Yes, once Pelle (sorry to pick on you) gets used to any particular coding trick, Pelle will understand it better when the trick is encountered again. But what about everyone else? If Pelle were the only one who was going to read your code, that would be fine. But unless you can, right now, list by name everyone who will ever read your code (and if you can, just go buy a lottery ticket and retire), then anything but the universal "you" makes the statement irrelevant.
 
 Also, it looks wrong.
 

seems reasonable and correct to me.

think about continues number system, not modulo number system (explain that 0 - 1 = -1 to a 6 year old; easy, explain that 0 - 1 = 2^32-1 to them, good luck). Yes we can be trained to use such system, but most people still wont think that way reflexively.

 don't have to be specific in this case, you can substitute 'some very large number' for '2^32 - 1'.

Most 6 year olds will need to have an odometer explained to them first.
 Besides, why does a 6-year old have to understand a for loop?  D
 doesn't  cater to people who can't grasp the modulo arithmetic
 concept.

I wasn't talking about for loops, but the semantics of int vs. uint near zero. If a 6 year old can understand something, I won't have to think about it to work with it, and I can use the time and cycles I gain for something else.

-- 
... <IXOYE><
Jun 15 2010
prev sibling parent "Simen kjaeraas" <simen.kjaras gmail.com> writes:
BCS <none anon.com> wrote:

 It looks wrong because i only gets smaller. People are hardwired to  
 think about continues number system, not modulo number system (explain  
 that 0 - 1 = -1 to a 6 year old; easy, explain that 0 - 1 = 2^32-1 to  
 them, good luck). Yes we can be trained to use such system, but most  
 people still wont think that way reflexively.

Hardwired? Hardly. However, continuous number systems are ubiquitous, modulo systems are not. As for teaching a 6-year old, give him a wheel with the numbers 0-9 written on each of the ten spokes, and ask him what number you get by going backward one step from 0. -- Simen
Jun 15 2010
prev sibling parent "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Tue, 15 Jun 2010 10:05:08 -0400, Pelle <pelle.mansson gmail.com> wrote:

 On 06/15/2010 03:25 PM, Adam Ruppe wrote:
 On 6/15/10, Pelle<pelle.mansson gmail.com>  wrote:
 for (uint i = end - 1; i<  uint.max; --i)

 Same functionality, really clever.

That's wrong rather than clever though. for(i< length) is saying "continue as long as you are inside the array's bounds", which is exactly what you mean in that loop. The only "tricky" is the understanding that an array index is never negative, but this shouldn't require extra thought in the first place.

Using the length is meaningless, any uint >= length will work just as well. Using the length there is meaningless, since that's really not what you compare against.

No, it's not.

for(uint i = initialize(); i < length; modify(i))

This construct is valid no matter what initialize or modify does to i.

-Steve
Jun 15 2010
prev sibling parent bearophile <bearophileHUGS lycos.com> writes:
div0:

Well for a start, you lose half your addressable memory.<

This matters mostly with char/ubyte/byte arrays on 32 bit systems. If you have arrays of shorts, ints or pointers/references or you are on 64 bit systems this is not so important. And the extra safety it gives me is a price I can pay. And if you don't want to pay that addressable indexes price you can use longs on 32 bit systems :-)
unsigned numbers are only a problem if you don't understand how they work, but
that goes for just about everything else as well.<

This can help you understand why you are very wrong: "Array bound errors are a problem only if you don't understand how arrays work." I have understood how unsigned numbers work, but I keep writing some bugs once in a while.
 Personally I hate the use of signed numbers as array indices; it's moronic and
 demonstrates the writer's lack of understanding.<

I am not moronic, and C# designers are smart people.
It's very rare to actually want to index an array with a negative number.<

That's beside the main point. The main problems come from mixing signed and unsigned values.
 c.f.
 
 Item getItem(int indx) {
    if(indx >= 0 && indx < _arr.length)
      return _arr[indx];
    throw new Error(...)
 }
 
 vs.
 
 // cleaner no?
 Item getItem(uint indx) {
    if(indx < _arr.length)
      return _arr[indx];
    throw new Error(...)
 }

The second is shorter (and one less test can make it a bit faster) but it's not cleaner.
Using 'int's doesn't magically fix it. Wrong code is just wrong.<

I agree. But ints can avoid some bugs.
Hasn't that been discussed before?<

Discussions about signed-unsigned-derived troubles have happened before. But this time I have expressed a focused request: to turn indexes and lengths into signed words (as I have written in my enhancement request). I think this was not discussed before in a focused way (or I was not around yet).

Bye,
bearophile
Jun 14 2010
prev sibling next sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
bearophile wrote:
 I have found a Reddit discussion few days old: 
 http://www.reddit.com/r/programming/comments/cdwz5/the_perils_of_unsigned_iteration_in_cc/
 
 
 It contains this, that I quote (I have no idea if it's true), plus
 follow-ups:
 
 At Google using uints of all kinds for anything other than bitmasks or
 other inherently bit-y, non computable things is strongly discouraged. This
 includes things like array sizes, and the warnings for conversion of size_t
 to int are disabled. I think it's a good call.<

I have expressed similar ideas here: http://d.puremagic.com/issues/show_bug.cgi?id=3843 Unless someone explains me why I am wrong, I will keep thinking that using unsigned words to represent lengths and indexes, as D does, is wrong and unsafe, and using signed words (I think C# uses ints for that purpose) in D is a better design choice.

D provides powerful abstractions for iteration; it is becoming less and less desirable to hand-build loops with for-statements. As for "unsafe", I think you need to clarify this, as D is not memory unsafe despite the existence of integer over/under flows.
 In a language as greatly numerically unsafe as D (silly C-derived conversion
 rules,

Actually, I think they make a lot of sense, and D's improvement on them that only disallows conversions that lose bits based on range propagation is far more sensible than C#'s overzealous restrictions.
 fixed-sized numbers used everywhere on default, no runtime numerical
 overflows) the usage of unsigned numbers can be justified inside bit vectors,
 bitwise operations, and few other similar situations only.
 
 If D wants to be "a systems programming language. Its focus is on combining
 the power and high performance of C and C++ with the programmer productivity
 of modern languages like Ruby and Python." it must understand that numerical
 safety is one of the not secondary things that make those languages as Ruby
 and Python more productive.

I have a hard time believing that Python and Ruby are more productive primarily because they do not have an unsigned type.

Python did not add overflow protection until 3.0, so it's very hard to say this crippled productivity in early versions.
http://www.python.org/dev/peps/pep-0237/

Ruby & Python 3.0 dynamically switch to larger integer types when overflow happens. This is completely impractical in a systems language, and is one reason why Ruby & Python are execrably slow compared to C-style languages.
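For reference, D also offers arbitrary-precision integers, but as an opt-in library type rather than as the default integer; a minimal sketch using std.bigint:

import std.bigint;
import std.stdio;

void main()
{
    BigInt x = BigInt("99999999999999999999");   // does not fit in ulong
    x += 1;
    writeln(x);   // 100000000000000000000, no wrap-around
}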
Jun 14 2010
next sibling parent reply Ellery Newcomer <ellery-newcomer utulsa.edu> writes:
On 06/14/2010 05:48 PM, Walter Bright wrote:
 bearophile wrote:
 I have found a Reddit discussion few days old:
 http://www.reddit.com/r/programming/comments/cdwz5/the_perils_of_unsigned_iteration_in_cc/



 It contains this, that I quote (I have no idea if it's true), plus
 follow-ups:

 At Google using uints of all kinds for anything other than bitmasks or
 other inherently bit-y, non computable things is strongly
 discouraged. This
 includes things like array sizes, and the warnings for conversion of
 size_t
 to int are disabled. I think it's a good call.<

I have expressed similar ideas here: http://d.puremagic.com/issues/show_bug.cgi?id=3843 Unless someone explains me why I am wrong, I will keep thinking that using unsigned words to represent lengths and indexes, as D does, is wrong and unsafe, and using signed words (I think C# uses ints for that purpose) in D is a better design choice.

D provides powerful abstractions for iteration; it is becoming less and less desirable to hand-build loops with for-statements.

Ooo ooo, can we remove it?
 As for "unsafe", I think you need to clarify this, as D is not memory
 unsafe despite the existence of integer over/under flows.

I think the problem is people don't generally think of fixnums as fixnums when they use them. Just recently I was reading about some security vulnerabilities in Ruby from a few years ago which were caused when whoever wrote the underlying C didn't take integer overflow into consideration. What I take away from this anecdote is that it's that much harder to write trustworthy code in D. As always, the existence of issue 259 doesn't help matters.

And from personal experience, I submit that checking for overflow is very painful to do manually (I tried to write a modular arithmetic lib for fixnums - and gave up rather quickly). I want language support (or library support, I don't care).
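A hypothetical sketch of the kind of library support being asked for; checkedAdd is not a real Phobos function, just an illustration of checking for overflow by widening to a larger type:

import std.exception;
import std.stdio;

// Hypothetical helper: widen to long, check the result, narrow back.
int checkedAdd(int a, int b)
{
    long r = cast(long) a + b;
    enforce(r >= int.min && r <= int.max, "integer overflow");
    return cast(int) r;
}

void main()
{
    writeln(checkedAdd(1, 2));         // 3
    writeln(checkedAdd(int.max, 1));   // throws instead of silently wrapping
}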
 In a language as greatly numerically unsafe as D (silly C-derived
 conversion
 rules,

Actually, I think they make a lot of sense, and D's improvement on them that only disallows conversions that lose bits based on range propagation is far more sensible than C#'s overzealous restrictions.
 fixed-sized numbers used everywhere on default, no runtime numerical
 overflows) the usage of unsigned numbers can be justified inside bit
 vectors,
 bitwise operations, and few other similar situations only.

 If D wants to be "a systems programming language. Its focus is on
 combining
 the power and high performance of C and C++ with the programmer
 productivity
 of modern languages like Ruby and Python." it must understand that
 numerical
 safety is one of the not secondary things that make those languages as
 Ruby
 and Python more productive.

I have a hard time believing that Python and Ruby are more productive primarily because they do not have an unsigned type.

They're more productive because their built in number types aren't fixnums. That's a nice large class of errors that don't exist in those languages.
 Python did not add overflow protection until 3.0, so it's very hard to
 say this crippled productivity in early versions.
 http://www.python.org/dev/peps/pep-0237/

 Ruby & Python 3.0 dynamically switch to larger integer types when
 overflow happens. This is completely impractical in a systems language,
 and is one reason why Ruby & Python are execrably slow compared to
 C-style languages.

Jun 14 2010
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Ellery Newcomer wrote:
 On 06/14/2010 05:48 PM, Walter Bright wrote:
 D provides powerful abstractions for iteration; it is becoming less and
 less desirable to hand-build loops with for-statements.

Ooo ooo, can we remove it?

No :-)
 I have a hard time believing that Python and Ruby are more productive
 primarily because they do not have an unsigned type.

They're more productive because their built in number types aren't fixnums. That's a nice large class of errors that don't exist in those languages.

Like I said, this didn't appear in Python until quite recently (3.0), so that cannot be the primary productivity advantage of Python.
Jun 14 2010
parent bearophile <bearophileHUGS lycos.com> writes:
Walter Bright:
 Like I said, this didn't appear in Python until quite recently (3.0), so that 
 cannot be the primary productivity advantage of Python.

You are wrong, see my other answer. Bye, bearophile
Jun 14 2010
prev sibling next sibling parent reply bearophile <bearophileHUGS lycos.com> writes:
Walter Bright:

D provides powerful abstractions for iteration; it is becoming less and less
desirable to hand-build loops with for-statements.<

I agree.
As for "unsafe", I think you need to clarify this, as D is not memory unsafe
despite the existence of integer over/under flows.<

Modern languages must understand that there are other forms of safety beside memory safety. Integer overflows and signed-unsigned conversion-derived bugs can cause disasters as well. In current D language the usage of unsigned numbers is a safety hazard. So far nothing I have seen written by you or other people has shown that this is false.
Actually, I think they make a lot of sense, and D's improvement on them that
only disallows conversions that lose bits based on range propagation is far
more sensible than C#'s overzealous restrictions.<

1) I'd like D to use signed words to represent lengths and array indexes. We are going to 64 bit systems where 63 bits can be enough for lengths. If arrays of 4 billion items are seen as important on 32 bit systems too, then use a long :-)
2) I don't like D to silently gulp down expressions that mix signed and unsigned integers and spit out wrong results when the integers were negative.
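A minimal D example of the silent wrong result meant in point 2 (illustrative only):

import std.stdio;

void main()
{
    int[] a;     // empty array
    int i = 0;
    // a.length is unsigned (size_t); a.length - 1 wraps around to
    // size_t.max, the comparison is then done unsigned, and the test
    // passes even though the array is empty.
    if (i < a.length - 1)
        writeln("this prints, although a has no elements at all");
}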
I have a hard time believing that Python and Ruby are more productive primarily
because they do not have an unsigned type.<

Python is very productive (for small or medium sized programs! On large programs Python is less good) because of a quite long list of factors. My experience with D and Python (and several other languages) has shown me that Python not using fixnums is one of the factors that help productivity. It's surely not the only factor, and I agree with you that it's not the most important, but it's one of the significant factors and it can't be ignored. Python integers don't overflow; this saves you the brain time and brain power spent thinking about possible overflows and the code to avoid their risk, and it makes coding more relaxed. And if you try to write 50 Project Euler programs in both Python and D you will surely see how many bugs the Python code has spared you compared to D. Finding and fixing such bugs in D code requires a lot of time that you save in Python. In D there are other bugs derived from mixing signed and unsigned numbers (and you can't avoid them just by avoiding unsigned numbers in your own code, because lengths, indexes and other things use them).
 Python did not add overflow protection until 3.0, so it's very hard to say
 this crippled productivity in early versions.
http://www.python.org/dev/peps/pep-0237/ 

You are wrong. Python 2.x dynamically switches to larger integer types when overflow happens. This is done transparently and avoids bugs and keeps programs more efficient. This is on Python V.2.6.5 but similar things happen in much older versions of Python:
>>> a = 2
>>> type(a)
<type 'int'>
>>> a += 10 ** 1000
>>> len(str(a))
1001
>>> type(a)
<type 'long'>

 Ruby & Python 3.0 dynamically switch to larger integer types when overflow
 happens.

This is wrong. Python 3.0 has just the multi-precision integer type, which is called "int". For small values it can (and probably will) use under the covers a user-invisible optimization that is essentially the same thing Python 2.x does. At the moment Python 3 integers are a bit slower than Python 2.x ones because this optimization is not done yet; one of the main design goals of Python is to keep the C interpreter of Python itself really simple, so even non-expert C programmers can hack on it and help in the development of Python. PEP 237 and its unification of the integer types was done because: 1) there's no need to keep two integer types in the language, you can just keep one and let the language use invisible optimizations where possible. Python is designed to be simple, so removing one type is good. 2) In very uncommon situations the automatic switch to multi-precision integers can't happen. Such situations are very hard to find; they do not come up in normal numerical code, they come up when you use C extensions (or Python standard library code that is written in C). You can program every day for four years in Python 2.x and never hit such cases.
This is completely impractical in a systems language, and is one reason why
Ruby & Python are execrably slow compared to C-style languages.<

Lisp languages can be only 1.0-3.0 times slower than C despite using mostly multi-precision numbers. So I don't think well implemented multi-precision numbers are so bad in a very fast language. And where performance really matters fixnums can be used. In the last years I have started to think that using fixnums everywhere is a premature optimization. But anyway, the purpose of my original post was not to advocate the replacement of fixnums in D with multi-precision numbers; it was about the change of array indexes and lengths from unsigned to signed. Python is slow compared to D, and surely its multi-precision numbers don't help its performance, but the "lack" of Python performance has many causes and the main ones are not the multi-precision numbers. The main cause is that Python is designed to have a simple interpreter that can be modified by not very expert C programmers. This allows lots of people to write and work on it, and this was one of the causes of Python's success. The Unladen Swallow project has shown that you can make Python 2-4 times faster just by "improving" (messing up and adding some hairy hacks to) its interpreter, etc. One of the main causes of the low Python performance is that it's dynamically typed and at the same time it lacks a just-in-time compiler. The Psyco JIT compiler allows me to write Python code that is usually no more than 10 times slower than D. The wonderful JIT compiler of Lua (which lacks multi-precision numbers but has dynamic typing) allows it to run usually at 0.9-2.5 times slower than D compiled with DMD (0.9 means it's faster on some FP-heavy code). Another cause of low Python performance is just that Python code is often not written with performance in mind. I am often able to write Python programs that are 2-3 times faster than Python programs I can find around. Bye, bearophile
Jun 14 2010
next sibling parent bearophile <bearophileHUGS lycos.com> writes:
bearophile:
 2) I don't like D to silently gulp down expressions that mix signed and
 unsigned integers and spit out wrong results when the integers were negative.

Walter, answering something similar:
Andrei and I went down that alley for a while. It's not practical.

OK. Then just removing as many unsigned words as possible from normal code (you can see this as the code you want to write in SafeD) can be an alternative. The indexes and lengths are a common source of unsigned word usage inside SafeD programs. Bye, bearophile
Jun 14 2010
prev sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
bearophile wrote:
 Walter Bright:
 As for "unsafe", I think you need to clarify this, as D is not memory
 unsafe despite the existence of integer over/under flows.<

Modern languages must understand that there are other forms of safety beside memory safety. Integer overflows and signed-unsigned conversion-derived bugs can cause disasters as well. In current D language the usage of unsigned numbers is a safety hazard. So far nothing I have seen written by you or other people has shown that this is false.

In D's safe mode, integer overflow *cannot* lead to memory corruption. So when you say something is "unsafe", I think it's reasonable to ask what you mean by it. For example, if you define "safe" as "guaranteed to not have bugs", then you're requiring that there be a proof of correctness for all programs in D.
 Actually, I think they make a lot of sense, and D's improvement on them
 that only disallows conversions that lose bits based on range propagation
 is far more sensible than C#'s overzealous restrictions.<

1) I'd like D to use signed words to represent lengths and array indexes.

This would lead to silent breakage of code transferred from C and C++. We've tried very hard to not have such things in D. The idea is that code that looks the same either behaves the same or issues an error. There's no way to make your proposal pass this requirement.
 We are going to 64 bit systems where 63 bits can be enough for lengths. If
 arrays of 4 billion items are seen as important on 32 bit systems too, then
 use a long :-) 2) I don't like D to silently gulp down expressions that mix
 signed and unsigned integers and spit out wrong results when the integers
 were negative.

That idea has a lot of merit for 64 bit systems. But there are two problems with it: 1. D source code is supposed to be portable between 32 and 64 bit systems. This would fail miserably if the sign of things silently change in the process. 2. For an operating system kernel's memory management logic, it still would make sense to represent the address space as a flat range from 0..n, not one that's split in the middle, half of which is accessed with negative offsets. D is supposed to support OS development.
 I have a hard time believing that Python and Ruby are more productive
 primarily because they do not have an unsigned type.<

Python is very productive (for small or medium sized programs! On large programs Python is less good) because of a quite long list of factors. My experience with D and Python (and several other languages) has shown me that Python not using fixnums is one of the factors that help productivity. It's surely not the only factor, and I agree with you that it's not the most important, but it's surely one of the significant factors and it can't be ignored.

We can argue forever with how significant it is, I don't assign nearly as much to it as you do.
 Python integers don't overflow,
 this at the same time allows you to safe
 brain time and brain power thinking about possible overflows and the code to
 avoid their risk, and makes coding more relaxed. And if you try to write 50
 Project Euler programs in Python and D you will surely see how many bugs the
 Python code has avoided you compared to D. Finding and fixing such bugs in D
 code requires lot of time that you save in Python.

This is where we differ. I very rarely have a bug due to overflow or signed/unsigned differences. If you use the D loop abstractions, you should never have these issues with it.
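A small sketch of the contrast being drawn here; the commented-out loop is the classic hand-built trap, the foreach_reverse version is the abstraction:

import std.stdio;

void main()
{
    int[] a = [1, 2, 3];

    // Hand-built reverse loop: with an unsigned index, `i >= 0` is always
    // true, so the loop underflows and indexes far past the array.
    // for (size_t i = a.length - 1; i >= 0; i--) writeln(a[i]);

    // The loop abstraction avoids the signedness question entirely.
    foreach_reverse (x; a)
        writeln(x);
}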
 Python did not add overflow protection until 3.0, so it's very hard to say 
 this crippled productivity in early versions.
 http://www.python.org/dev/peps/pep-0237/

You are wrong. Python 2.x dynamically switches to larger integer types when overflow happens. This is done transparently and avoids bugs and keeps programs more efficient. This is on Python V.2.6.5 but similar things happen in much older versions of Python:
 >>> a = 2
 >>> type(a)
 <type 'int'>
 >>> a += 10 ** 1000
 >>> len(str(a))
 1001
 >>> type(a)
 <type 'long'>

Here's what Wikipedia says about it. "In Python, a number that becomes too large for an integer seamlessly becomes a long.[1] And in Python 3.0, integers and arbitrary sized longs are unified." -- http://en.wikipedia.org/wiki/Integer_overflow (Just switching to long isn't good enough - what happens when long overflows? I generally don't like solutions like this because they make tripping the bug so rare that it can lurk for years. I prefer to flush bugs out in the open early.)
 This is completely impractical in a systems language, and is one reason why
 Ruby & Python are execrably slow compared to C-style languages.

 Lisp languages can be only 1.0-3.0 times slower than C despite using mostly multi-precision numbers. So I don't think well implemented multi-precision numbers are so bad in a very fast language.

3x is a BIG deal. If you're running a major site, this means you only need 1/3 of the hardware, and 1/3 of the electric bill. If you're running a program that takes all day, now you can run it 3 times that day.
Jun 14 2010
next sibling parent Pelle <pelle.mansson gmail.com> writes:
On 06/15/2010 03:49 AM, Walter Bright wrote:
 Here's what the wikipedia said about it.

 "In Python, a number that becomes too large for an integer seamlessly
 becomes a long.[1] And in Python 3.0, integers and arbitrary sized longs
 are unified."

 -- http://en.wikipedia.org/wiki/Integer_overflow

 (Just switching to long isn't good enough - what happens when long
 overflows? I generally don't like solution like this because it makes
 tripping the bug so rare that it can lurk for years. I prefer to flush
 bugs out in the open early.)

A long in Python would be a BigInt in D, so no overflows. Python integers don't overflow.
Jun 15 2010
prev sibling next sibling parent reply bearophile <bearophileHUGS lycos.com> writes:
Walter Bright:

D's safe mode, integer overflow *cannot* lead to memory corruption. So when you
say something is "unsafe", I think it's reasonable to ask what you mean by it.<

I meant "more numerical safe". That is it helps avoid part of the integral-derived bugs.
We've tried very hard to not have such things in D. The idea is that code that
looks the same either behaves the same or issues an error. There's no way to
make your proposal pass this requirement.<

I see. We can drop this, then.
We can argue forever with how significant it is, I don't assign nearly as much
to it as you do.<

I see. If you try solving many Project Euler problems you can see how common those bugs are :-) For other kind of code they are probably less common.
If you use the D loop abstractions, you should never have these issues with it.<

In D I am probably using higher loop abstractions than the ones you normally use, but now and then I have those bugs anyway. Taking the length of an array is necessary now and then even if you use loop abstractions (and higher-order functions such as maps, filters, etc.).
Here's what the wikipedia said about it.
"In Python, a number that becomes too large for an integer seamlessly becomes a
long.[1] And in Python 3.0, integers and arbitrary sized longs are unified."<<


This is exactly the same thing I have said :-)
(Just switching to long isn't good enough - what happens when long overflows?<

Maybe this is where you didn't understand the situation: Python 2.x "long" means multi-precision integral numbers. In my example the number was 1001 decimal digits long.
I generally don't like solution like this because it makes tripping the bug so
rare that it can lurk for years. I prefer to flush bugs out in the open early.)<

In Python 2.x this causes zero bugs because those "longs" are multi-precision.
3x is a BIG deal. If you're running a major site, this means you only need 1/3
of the hardware, and 1/3 of the electric bill. If you're running a program that
takes all day, now you can run it 3 times that day.<

This point of the discussion is probably too indefinite to say something useful about it. I can answer you that in the critical spots of a program it is probably easy enough to replace multi-precision ints with fixnums, and this can make the whole program not significantly slower than C code. And in some places the compiler can infer where fixnums are enough and use them automatically. In the end, regarding this point, mine is mostly a gut feeling derived from many years of usage of multi-precision numbers: I think that in a nearly-system language like D well implemented multi-precision numbers (with the option to use fixnums in critical spots) can lead to efficient enough programs. I have programmed in compiled CLisp a bit, and the integer performance is not so bad. I can of course be wrong, but only an actual test can show it :-) Maybe someday I will try it and do some benchmarks. The current BigInt of D needs the small-number optimization before such a test can be tried (that is, to avoid heap allocation when the big number fits in 32 or 64 bits), and the compiler is not smart enough to replace bigints with ints where bigints are not necessary. In the meantime I have done several benchmarks in C# with runtime integral overflow checks enabled or disabled, and I have seen that the performance with the checks enabled is only a bit lower, not significantly so (I have seen the same thing in Delphi years ago).
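As a rough illustration (a sketch, not a benchmark, and it ignores the missing small-number optimization), std.bigint can already express the earlier Python example without overflow:

import std.bigint, std.conv, std.stdio;

void main()
{
    BigInt a = 1;
    foreach (i; 0 .. 1000)
        a *= 10;                     // 10 ^^ 1000, far beyond any fixnum
    a += 2;
    writeln(to!string(a).length);    // 1001 decimal digits, no overflow
}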
That idea has a lot of merit for 64 bit systems. But there are two problems
with it: 1. D source code is supposed to be portable between 32 and 64 bit
systems. This would fail miserably if the sign of things silently change in the
process.<

Then we can use a signed word on 32 bit systems too. Or if you don't like that, to represent lengths/indexes we can use 64 bit signed values on 32 bit systems too.
2. For an operating system kernel's memory management logic, it still would
make sense to represent the address space as a flat range from 0..n, not one
that's split in the middle, half of which is accessed with negative offsets. D
is supposed to support OS development.<

I am not expert enough about this to understand well the downsides of signed numbers used in this. But I can say that D is already not the best language to develop non-toy operating systems. And even if someone writes a serious operating system with D, this is an uncommon application of D language, where probably 95% of other people write other kinds of programs where unsigned integers everywhere are not the best thing. And the uncommon people that want to write an OS or device driver with D can use signed words. Such uncommon people can even design and use their own arrays with unsigned-word lengths/indexes :-) Designing D to appeal to a very uncommon kind of power-users that need to write an operating system with D doesn't look like a good design choice. If this whole thread goes nowhere then later I can even close bug 3843, because there's little point in keeping it open. Bye, bearophile
Jun 15 2010
next sibling parent bearophile <bearophileHUGS lycos.com> writes:
 And the uncommon people that want to write an OS or device driver with D can
use signed words.

Sorry, I meant they can use unsigned words. Bye, bearophile
Jun 15 2010
prev sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
bearophile wrote:
 But I can say that D is already not the best language
 to develop non-toy operating systems.

Why?
Jun 15 2010
parent reply bearophile <bearophileHUGS lycos.com> writes:
Walter Bright:

 But I can say that D is already not the best language
 to develop non-toy operating systems.


This is partially off-topic for this thread. I have not written an OS yet, so I can't be sure. But from what I have read and seen D seems designed for different purposes, mostly as a high-performance low-level application language that currently is programmed in a style that doesn't assume a very efficient GC. D has many features that are useless or negative if you want to write code close to the metal such as a kernel: classes, virtual functions, garbage collector, operator overloading, interfaces, exceptions and try-catch-finally blocks, closures, references, delegates, nested functions and structs, array concat, built-in associative arrays, monitor, automatic destructors. When you write code close to the metal you want to know exactly what your code is doing, so all the automatic or higher level things become useless or worse: they keep you from seeing what the hardware is actually doing. On the other hand the current D language (and C and C++) lacks other hard-to-implement features that allow the kernel programmer to give more semantics to the code, so such semantics has to be expressed through normal coding. Future languages will maybe improve on this, but it will be hard work. The ATS language tries to improve a bit on this, but it's far from being good and its syntax is awful. D also lacks a good number of nonstandard C features that are present in the "C" compiled by GCC; such low-level features and compilation flags can be quite useful if you write a kernel. Even LDC has a few of such features. Bye, bearophile
Jun 15 2010
next sibling parent Don <nospam nospam.com> writes:
bearophile wrote:
 Walter Bright:
 
 But I can say that D is already not the best language
 to develop non-toy operating systems.


This is partially off-topic to the topic of this thread. I have not written an OS yet, so I can't be sure. But from what I have read and seen D seems designed for different purposes, mostly as a high-performance low-level application language that currently is programmed in a style that doesn't assume a very efficient GC. D has many features that are useless or negative if you want to write code close to the metal as a kernel

Indeed, only a subset of D is useful for low-level development. But D has more close-to-the-metal features than C does. (Compare with C++, which didn't improve the machine model it inherited from C). Of course, the market for kernel development is so small and so dominated by C that it's not really worth worrying about.
Jun 15 2010
prev sibling next sibling parent Alex Makhotin <alex bitprox.com> writes:
bearophile wrote:
 When you write code close to the metal you want to know exactly what your code
is doing, so all the automatic things or higher level things become useless or
worse, they keep you from seeing what the hardware is actually doing.

Right. That's why I well respect the point of view of Linus on that matter. And his last comments on that look well motivated to me. -- Alex Makhotin, the founder of BITPROX, http://bitprox.com
Jun 15 2010
prev sibling next sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
bearophile wrote:
 Walter Bright:
 
 But I can say that D is already not the best language to develop non-toy
 operating systems.


I have not written an OS yet, so I can't be sure. But from what I have read and seen D seems designed for different purposes, mostly as a high-performance low-level application language that currently is programmed in a style that doesn't assume a very efficient GC.

I'd rephrase that as D supports many different styles. One of those styles is as a "better C".
 D has many features that are useless or negative if you want to write code
 close to the metal as a kernel, as classes, virtual functions, garbage
 collector, operator overloading, interfaces, exceptions and try-catch-finally
 blocks, closures, references, delegates, nested functions and structs, array
 concat, built-in associative arrays, monitor, automatic destructors. When you
 write code close to the metal you want to know exactly what your code is
 doing, so all the automatic things or higher level things become useless or
 worse, they keep you from seeing what the hardware is actually doing.

I agree on those points. Those features would not be used when using D as a "better C". So, you could ask why not use C++ as a "better C" and eschew the C++ features that cause trouble for kernel dev? The answer is that C++ doesn't offer much over C that does not involve those trouble causing features. D, on the other hand, offers substantial and valuable features not available in C or C++ that can be highly useful for kernel dev. Read on.
 On the other hand current D language (and C and C++) lacks other
 hard-to-implement features that allow the kernel programmer to give more
 semantics to the code. So such semantics has to be expressed through normal
 coding. Future languages maybe will improve on this, but it will be a hard
 work. ATS language tries to improve a bit on this, but it's far from being
 good and its syntax is awful.

I think you are giving zero weight to the D features that assist kernel programming.
 D also lacks a good number of nonstandard C features that are present in the
 "C" compiled by GCC, such low-level features and compilation flags can be
 quite useful if you write a kernel. Even LDC has a bit of such features.

A non-standard feature means the language is inadequate. There is nothing at all preventing non-standard features from being added to D for specific tasks. There is no reason to believe it is harder to do that for D than to C.

As for standard features D has that make it more suitable for low level programming than C:

1. inline assembler as a standard feature
2. const/immutable qualifiers
3. identification of shared data with the shared type constructor
4. enforced function purity
5. guaranteed basic type sizes
6. arrays that actually work
7. scope guard (yes, even without exception handling)

BTW, you might ask "how do I know my D code doesn't have exception handling or GC calls in it?" There are several ways:

1. Remove the support for it from the library. Then, attempts to use such features will cause the link step to fail. Kernel C programmers use a custom library anyway, no reason why D kernel dev cannot.

2. Compiling code with "nothrow" will check that exceptions are not generated.

3. The compiler could be easily modified to add a switch that prevents such features from being used. This is no different from the customizations done to C compilers for kernel dev.
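As a tiny illustration of point 2 in the second list above, nothrow is checked at compile time (a sketch only; twice is a made-up function name):

nothrow int twice(int x)
{
    return x * 2;                    // fine: nothing here can throw
    // throw new Exception("boom");  // uncommenting this is a compile error
}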
Jun 15 2010
next sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Walter Bright wrote:
 bearophile wrote:
 Walter Bright:

 But I can say that D is already not the best language to develop 
 non-toy
 operating systems.


I have not written an OS yet, so I can't be sure. But from what I have read and seen D seems designed for different purposes, mostly as a high-performance low-level application language that currently is programmed in a style that doesn't assume a very efficient GC.

I'd rephrase that as D supports many different styles. One of those styles is as a "better C".
 D has many features that are useless or negative if you want to write 
 code
 close to the metal as a kernel, as classes, virtual functions, garbage
 collector, operator overloading, interfaces, exceptions and 
 try-catch-finally
 blocks, closures, references, delegates, nested functions and structs, 
 array
 concat, built-in associative arrays, monitor, automatic destructors. 
 When you
 write code close to the metal you want to know exactly what your code is
 doing, so all the automatic things or higher level things become 
 useless or
 worse, they keep you from seeing what the hardware is actually doing.

I agree on those points. Those features would not be used when using D as a "better C". So, you could ask why not use C++ as a "better C" and eschew the C++ features that cause trouble for kernel dev? The answer is that C++ doesn't offer much over C that does not involve those trouble causing features. D, on the other hand, offers substantial and valuable features not available in C or C++ that can be highly useful for kernel dev. Read on.
 On the other hand current D language (and C and C++) lacks other
 hard-to-implement features that allow the kernel programmer to give more
 semantics to the code. So such semantics has to be expressed through 
 normal
 coding. Future languages maybe will improve on this, but it will be a 
 hard
 work. ATS language tries to improve a bit on this, but it's far from 
 being
 good and its syntax is awful.

I think you are giving zero weight to the D features that assist kernel programming.
 D also lacks a good number of nonstandard C features that are present 
 in the
 "C" compiled by GCC, such low-level features and compilation flags can be
 quite useful if you write a kernel. Even LDC has a bit of such features.

 A non-standard feature means the language is inadequate. There is nothing at all preventing non-standard features from being added to D for specific tasks. There is no reason to believe it is harder to do that for D than to C.

 As for standard features D has that make it more suitable for low level programming than C:

 1. inline assembler as a standard feature
 2. const/immutable qualifiers
 3. identification of shared data with the shared type constructor
 4. enforced function purity
 5. guaranteed basic type sizes
 6. arrays that actually work

6.5. arrays that actually work and don't need garbage collection
 7. scope guard (yes, even without exception handling)

Andrei
Jun 15 2010
prev sibling next sibling parent Walter Bright <newshound1 digitalmars.com> writes:
Walter Bright wrote:
 I think you are giving zero weight to the D features that assist kernel 
 programming.

What bothers me about this discussion is this: consider D with features 1 2 3 4, and language X with features 1 2 5. X is determined to be better than D because X has feature 5, and since X does not have features 3 and 4, they are deemed irrelevant. For example, the more I use scope guard statements, the more of a game changer I believe they are in eliminating the usual rat's nest of goto's one finds in C code.
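A small sketch of the pattern being described; the function name and file contents here are made up for illustration:

import std.stdio;

void writeReport(string path)
{
    auto f = File(path, "w");
    scope (exit) f.close();   // runs on every exit path: return, throw, or fall-through

    f.writeln("header");
    f.writeln("body");
    // However this function is left, the file is closed, with no
    // goto-based cleanup ladder as in the equivalent C code.
}

scope (failure) and scope (success) cover the error-only and success-only cases that the goto versions usually need as well.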
Jun 15 2010
prev sibling parent reply Stephan <spam extrawurst.org> writes:
On 15.06.2010 19:41, Walter Bright wrote:
 3. The compiler could be easily modified to add a switch that prevents
 such features from being used. This is no different from the
 customizations done to C compilers for kernel dev.

Why not make such a change in a future release of the official version ?
Jun 16 2010
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Stephan wrote:
 On 15.06.2010 19:41, Walter Bright wrote:
 3. The compiler could be easily modified to add a switch that prevents
 such features from being used. This is no different from the
 customizations done to C compilers for kernel dev.

Why not make such a change in a future release of the official version ?

It's pretty low on the priority list, because the absence of such a switch would not prevent you from using D as a better C compiler.
Jun 16 2010
parent Walter Bright <newshound1 digitalmars.com> writes:
Walter Bright wrote:
 Stephan wrote:
 On 15.06.2010 19:41, Walter Bright wrote:
 3. The compiler could be easily modified to add a switch that prevents
 such features from being used. This is no different from the
 customizations done to C compilers for kernel dev.

Why not make such a change in a future release of the official version ?

It's pretty low on the priority list, because the absence of such a switch would not prevent you from using D as a better C compiler.

I would move it up in the priority if there was a serious project that needed it, as opposed to being a convenient excuse to not use D. One reason that dmd comes with source is so that people can try out things like this.
Jun 16 2010
prev sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
bearophile wrote:
 D also lacks a good number of nonstandard C features that are present in the
 "C" compiled by GCC, such low-level features and compilation flags can be
 quite useful if you write a kernel. Even LDC has a bit of such features.

It's interesting that D already has most of the gcc extensions: http://gcc.gnu.org/onlinedocs/gcc-2.95.3/gcc_4.html as standard features, rather than extensions. Being part of the standard language implies D being more suitable for kernel dev than standard C is.
Jun 15 2010
parent reply bearophile <bearophileHUGS lycos.com> writes:
Walter Bright:

I'd rephrase that as D supports many different styles. One of those styles is
as a "better C".<

D can replace many but not all usages of C; think about programming an Arduino (http://en.wikipedia.org/wiki/Arduino ) with a dmd compiler of today.
I agree on those points. Those features would not be used when using D as a
"better C".<

A problem is that some of those D features can worsen kernel code. So for example you have to review code to avoid operator overloading usage :-) There is lot of D compiler complexity useless for that kind of code. A simpler compiler means less bugs and less D manual to read.
The answer is that C++ doesn't offer much over C that does not involve those
trouble causing features. D, on the other hand, offers substantial and valuable
features not available in C or C++ that can be highly useful for kernel dev.
Read on.<

I don't know if D offers enough of what a kernel developer needs.
A non-standard feature means the language is inadequate.<

I agree, standard C is not perfect for that purpose.
There is nothing at all preventing non-standard features from being added to D
for specific tasks. There is no reason to believe it is harder to do that for D
than to C.<

I agree. (But note that here we are talking just about low level features. Linus has said that such features are important but he desires other things absent in C).
As for standard features D has that make it more suitable for low level
programming than C is:<

I agree.
Since it has more than C does, and C is used for kernel dev, then it must be
enough.<

Kernel C code uses several GCC extensions to the C language. And Linus says he desires higher level features absent from C, C++ and absent from those GCC extensions.
I'll await your reply there.<

I appreciate your trust, but don't expect me to be able to teach you things about C and the kind of code needed to write a kernel, you have way more experience than me :-) --------------------
With all due respect to Linus, in 30 years of professionally writing software,
I've found that if you solely base improvements on what customers ask for, all
you have are incremental improvements. No quantum leaps, no paradigm shifts, no
game changers.<

You are right in general, but I don't know how much you are right regarding Linus. Linus desires some higher level features but maybe he doesn't exactly know what he desires :-) I don't know if Linus has ever asked for some of the features of the Sing# language (http://en.wikipedia.org/wiki/Sing_Sharp ), needed to write the experimental Singularity OS. About Spec#:
The Spec# language is a superset of the programming language C# extending C# by
nonnull types, method contracts, object invariants and an ownership type system
[and Spec# also has built-in message passing for concurrency with a syntax to
specify message invariants]. The behavior of a Spec# program is checked at
runtime and statically verified by Boogie, the Spec# static program verifier
[2]. Boogie generates logical verification conditions from a Spec# program.
Internally, it uses an automatic theorem prover [7] that analyzes the
verification conditions to prove the correctness of the program or find errors
in it. One of the main innovations of Boogie is a systematic way (a
methodology) for specifying and verifying invariants. The Spec# Programming
System handles callbacks and aggregate objects, and it supports both object [4]
and static [3] class invariants.<

In Spec# beside the "assert" there is also "assume", it seems similar to this one of C++: http://msdn.microsoft.com/en-us/library/1b3fsfxw%28VS.80%29.aspx But Spec# "assume" seems used mostly for the contract programming, for example to state that some condition is true before some method call that has that thing as precondition. I have not fully understood the purpose of this, but I think it can be useful for performance (because contracts are enforces in "release mode" too. So the compiler has to try to remove some of them to improve code performance). In Spec# nonnull types are specified adding "!" after their type: T! t = new T(); // OK t = null; // not allowed Even if D can't turn all its class references to nonnull on default, a syntax to specify references and pointers that can't be null can be added. The bang symbol can't be used in D for that purpose, it has enough purposes already. Spec# defines three types of purity: - [Pure] Method does not change the existing objects (but it may create and update new objects). - [Confined] Method is pure and reads only this and objects owned by this. - [StateIndependent] Method does not read the heap at all. Add one of the three attributes above to a method to declare it as pure method. Any called method in a contract has to be pure. Spec# "static class invariants" test the consistency of static fields. http://research.microsoft.com/en-us/projects/specsharp/krml153.pdf
Sometimes there are even consistency conditions that relate the instance fields
of many or all objects of a class; static class invariants describe these
relations, too, since they cannot be enforced by any one object in isolation.<

class Client {
    int id;
    static int last_used_id = 0;

    static invariant() {
        assert(Client.last_used_id >= 0);
        HashSet!int used_ids;
        foreach (c; all Client instances) {   // pseudocode: no such iteration exists
            assert(c.id < Client.last_used_id);
            assert(c.id !in used_ids);
            used_ids.add(c.id);
        }
    }

    this() {
        this.id = Client.last_used_id;
        Client.last_used_id++;
    }
}
Every object of class Client has an ID. The next available ID is stored in the
static field last_used_id. Static class invariants guarantee that last_used_id
has not been assigned to a Client object and that all Client objects have
different IDs.<

In D class/struct invariants can access static fields too. Finding all instances of a class is not immediate in D (I don't think D reflection is enough here); you have to store all such references, for example in a static array of Client. So in D it can become something like:

class Client {
    int id;
    static int last_used_id = 0;
    static Client[] clients;

    invariant() {
        assert(Client.last_used_id >= 0);
        // assert len(set(c.id for c in clients)) == len(clients)
        // assert all(c.id < Client.last_used_id for c in clients)
        HashSet!int used_ids;    // some set type
        foreach (c; clients) {
            assert(c.id < Client.last_used_id);
            assert(c.id !in used_ids);
            used_ids.add(c.id);
        }
    }

    this() {
        this.id = Client.last_used_id;
        Client.last_used_id++;
        clients ~= this;
    }
}

But running that invariant often is slow. I don't know if/how Spec# solves this problem.

--------------------
 It's interesting that D already has most of the gcc extensions:
 http://gcc.gnu.org/onlinedocs/gcc-2.95.3/gcc_4.html

There's a lot of stuff on that page, and some of those things are new to me :-) 4.3 Labels as Values: that's computed gotos; they can be useful if you write an interpreter or implement some kind of state machine. In the last two years I have found two situations where this GCC feature was useful to me. I'd like computed gotos in D too (both GDC and LDC can implement them in a simple enough way; if this is hard to implement with the DMD back-end then I'd like the feature to be in the D specs anyway, so other D compilers that want to implement it will use the same standard syntax, improving portability of D code that uses it). I will write about more of those GCC things tomorrow... Bye, bearophile
Jun 15 2010
parent Walter Bright <newshound1 digitalmars.com> writes:
bearophile wrote:
 Walter Bright:
 
 I'd rephrase that as D supports many different styles. One of those styles
 is as a "better C".<

D can replace many but not all usages of C; think about programming an Arduino (http://en.wikipedia.org/wiki/Arduino ) with a dmd compiler of today.

The Arduino is an 8 bit machine. D is designed for 32 bit and up machines. Full C++ won't even work on a 16 bit machine, either.
 I agree on those points. Those features would not be used when using D as a
 "better C".<

 A problem is that some of those D features can worsen kernel code. So for example you have to review code to avoid operator overloading usage :-) There is lot of D compiler complexity useless for that kind of code. A simpler compiler means less bugs and less D manual to read.

If you're a kernel dev, the language features should not be a problem for you. BTW, you listed nested functions as disqualifying a language from being a kernel dev language, yet gcc supports nested functions as an extension.
 The answer is that C++ doesn't offer much over C that does not involve
 those trouble causing features. D, on the other hand, offers substantial
 and valuable features not available in C or C++ that can be highly useful
 for kernel dev. Read on.<

 I don't know if D offers enough of what a kernel developer needs.

It offers more than what C does, so it must be enough since C is enough.
 Since it has more than C does, and C is used for kernel dev, then it must
 be enough.<

Kernel C code uses several GCC extensions to the C language.

As I pointed out, D implements the bulk of those extensions as a standard part of D.
 With all due respect to Linus, in 30 years of professionally writing
 software, I've found that if you solely base improvements on what customers
 ask for, all you have are incremental improvements. No quantum leaps, no
 paradigm shifts, no game changers.<

You are right in general, but I don't know how much you are right regarding Linus. Linus desires some higher level features but maybe he doesn't exactly know what he desires :-)

Linus may very well be an expert on various languages and their tradeoffs, but maybe not. As far as languages go, he may only be an expert on C. All I know for sure is he is an expert on C and kernel development, and a gifted manager.
 4.3 Labels as Values: that's computed gotos, they can be useful if you write
 an interpreter or you implement some kind of state machine.

They are useful in some circumstances, but are hardly necessary.
Jun 15 2010
prev sibling next sibling parent Don <nospam nospam.com> writes:
Walter Bright wrote:
 bearophile wrote:
 Python integers don't overflow,
 this at the same time allows you to safe
 brain time and brain power thinking about possible overflows and the 
 code to
 avoid their risk, and makes coding more relaxed. And if you try to 
 write 50
 Project Euler programs in Python and D you will surely see how many 
 bugs the
 Python code has avoided you compared to D. Finding and fixing such 
 bugs in D
 code requires lot of time that you save in Python.

This is where we differ. I very rarely have a bug due to overflow or signed/unsigned differences.

One was fixed in this week's DMD release. http://www.dsource.org/projects/dmd/changeset/491 It's interesting to think how this could have been avoided.
Jun 15 2010
prev sibling parent reply Jérôme M. Berger <jeberger free.fr> writes:
Walter Bright wrote:
 bearophile wrote:
 We are going to 64 bit systems where 63 bits can be enough for lengths. If arrays of 4 billion items are seen as important on 32 bit systems too, then use a long :-) 2) I don't like D to silently gulp down expressions that mix signed and unsigned integers and spit out wrong results when the integers were negative.

 That idea has a lot of merit for 64 bit systems. But there are two problems with it:

 1. D source code is supposed to be portable between 32 and 64 bit systems. This would fail miserably if the sign of things silently change in the process.

	Actually, that problem already occurs in C. I've had problems when porting code from x86 to x86_64 because some unsigned operations don't behave the same way on both...

		Jerome
--
mailto:jeberger free.fr            http://jeberger.free.fr
Jabber: jeberger jabber.fr
Jun 15 2010
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Jérôme M. Berger wrote:
 	Actually, that problem already occurs in C. I've had problems when
 porting code from x86 to x86_64 because some unsigned operations
 don't behave the same way on both...

How so? I thought most 64 bit C compilers were specifically designed to avoid this problem.
Jun 15 2010
parent reply Jérôme M. Berger <jeberger free.fr> writes:
Walter Bright wrote:
 Jérôme M. Berger wrote:
     Actually, that problem already occurs in C. I've had problems when
 porting code from x86 to x86_64 because some unsigned operations
 don't behave the same way on both...

 How so? I thought most 64 bit C compilers were specifically designed to
 avoid this problem.

	I can't isolate it to a minimal test case, but at my job, we make an image processing library. Since negative image dimensions don't make sense, we decided to define width and height as "unsigned int". Now, we have code that works fine on 32-bit platforms (x86 and arm) but segfaults on x86_64. Simply adding an (int) cast in front of the image dimensions in a couple of places fixes the issue (tested with various versions of gcc on linux and windows).

		Jerome
--
mailto:jeberger free.fr            http://jeberger.free.fr
Jabber: jeberger jabber.fr
Jun 16 2010
parent reply Jérôme M. Berger <jeberger free.fr> writes:
Jérôme M. Berger wrote:
 Walter Bright wrote:
 Jérôme M. Berger wrote:
     Actually, that problem already occurs in C. I've had problems when
 porting code from x86 to x86_64 because some unsigned operations
 don't behave the same way on both...

 How so? I thought most 64 bit C compilers were specifically designed to
 avoid this problem.

 I can't isolate it to a minimal test case, but at my job, we make an image processing library. Since negative image dimensions don't make sense, we decided to define width and height as "unsigned int". Now, we have code that works fine on 32-bit platforms (x86 and arm) but segfaults on x86_64. Simply adding an (int) cast in front of the image dimensions in a couple of places fixes the issue (tested with various versions of gcc on linux and windows).

	I stand corrected: I finally managed to isolate it after all. I am attaching the test case and will post the explanation of the issue as a reply to give everyone a chance to try and spot the error...

		Jerome
--
mailto:jeberger free.fr            http://jeberger.free.fr
Jabber: jeberger jabber.fr

test.c:

#include <assert.h>
#include <stdio.h>

int main (int argc, char** argv)
{
   char* data = argv[0];    /* Just to get a valid pointer */
   unsigned int offset = 3;

   printf ("Original: %p\n", data);
   data += offset;
   printf ("+3      : %p\n", data);
   data += -offset;
   printf ("-3      : %p\n", data);

   assert (data == argv[0]);   /* Works on 32-bit systems, fails on 64-bit */
   return 0;
}
Jun 16 2010
next sibling parent Jérôme M. Berger <jeberger free.fr> writes:
Jérôme M. Berger wrote:
 Jérôme M. Berger wrote:
 Walter Bright wrote:
 Jérôme M. Berger wrote:
     Actually, that problem already occurs in C. I've had problems when
 porting code from x86 to x86_64 because some unsigned operations
 don't behave the same way on both...

 How so? I thought most 64 bit C compilers were specifically designed to
 avoid this problem.

 I can't isolate it to a minimal test case, but at my job, we make an image processing library. Since negative image dimensions don't make sense, we decided to define width and height as "unsigned int". Now, we have code that works fine on 32-bit platforms (x86 and arm) but segfaults on x86_64. Simply adding an (int) cast in front of the image dimensions in a couple of places fixes the issue (tested with various versions of gcc on linux and windows).

 I stand corrected: I finally managed to isolate it after all. I am attaching the test case and will post the explanation of the issue as a reply to give everyone a chance to try and spot the error...

	The key point is that "unsigned int" stays 32 bits even on 64 bits architecture, so what happens is:
- Some operation between signed and unsigned ints gives a negative result. Because of the automatic type conversion rules, this is converted to an unsigned 32-bit int;
- The result is added to a pointer. On 32-bit systems, the operation simply wraps around and works. On 64-bit systems, the result is extended to 64 bits by adding zeroes (since it is unsigned) and the resulting pointer is wrong.

	That's reasonably easy to spot in this simple example. It's a lot more difficult on real world code. We had the problem because we were moving a pointer through the image data. As soon as the movement depended on the image dimensions (say: move left by 1/4 the width), then the program crashed. Every other kind of move worked just fine...

		Jerome
--
mailto:jeberger free.fr            http://jeberger.free.fr
Jabber: jeberger jabber.fr
Jun 16 2010
prev sibling next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Jérôme M. Berger wrote:
 Jérôme M. Berger wrote:
 Walter Bright wrote:
 Jérôme M. Berger wrote:
     Actually, that problem already occurs in C. I've had problems when
 porting code from x86 to x86_64 because some unsigned operations
 don't behave the same way on both...

 How so? I thought most 64 bit C compilers were specifically designed to
 avoid this problem.

 I can't isolate it to a minimal test case, but at my job, we make an image processing library. Since negative image dimensions don't make sense, we decided to define width and height as "unsigned int". Now, we have code that works fine on 32-bit platforms (x86 and arm) but segfaults on x86_64. Simply adding an (int) cast in front of the image dimensions in a couple of places fixes the issue (tested with various versions of gcc on linux and windows).

 I stand corrected: I finally managed to isolate it after all. I am attaching the test case and will post the explanation of the issue as a reply to give everyone a chance to try and spot the error...

         Jerome

Whoa! That's indeed unfortunate. Allow me some more whoring for TDPL: ============== \indexes{surprising behavior!of unary \lstinline{-}}% One surprising behavior of unary minus is that, when applied to an unsigned value, it still yields an unsigned value (according to the rules in~\S~\vref{sec:typing-of-ops}). For example,\sbs -55u is\sbs 4_294_967_241 , which is\sbs \ccbox{uint.max - 55 + 1}. \indexes{unsigned type, natural number, two's complement, overflow}% The fact that unsigned types are not really natural numbers is a fact of life. In\sbs\dee and many other languages, two's complement arithmetic with its simple overflow rules is an inescapable reality that cannot be abstracted away. One way to think \mbox{of} -val for any integral value~ val is to consider it a short form \mbox{of}$\,$ \cc{\~val + 1}; in other words, flip every bit in val and then add 1 to the result. This manipulation does not raise particular questions about the signedness of~ val . ============== (This heavily adorned text also shows what sausage making looks like...) Andrei
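The same point, as a quick D check (illustrative only):

import std.stdio;

void main()
{
    uint u = 55;
    writeln(-u);           // 4294967241, i.e. uint.max - 55 + 1
    writeln(-u == ~u + 1); // true: -u is a short form of ~u + 1
}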
Jun 16 2010
parent Jérôme M. Berger <jeberger free.fr> writes:
Andrei Alexandrescu wrote:
 Jérôme M. Berger wrote:
 Jérôme M. Berger wrote:
 Walter Bright wrote:
 Jérôme M. Berger wrote:
     Actually, that problem already occurs in C. I've had problems when
 porting code from x86 to x86_64 because some unsigned operations
 don't behave the same way on both...

 How so? I thought most 64 bit C compilers were specifically designed to
 avoid this problem.

 I can't isolate it to a minimal test case, but at my job, we make
 an image processing library. Since negative image dimensions don't
 make sense, we decided to define width and height as "unsigned int".
 Now, we have code that works fine on 32-bit platforms (x86 and arm)
 but segfaults on x86_64. Simply adding an (int) cast in front of the
 image dimensions in a couple of places fixes the issue (tested with
 various versions of gcc on linux and windows).

 I stand corrected: I finally managed to isolate it after all. I am
 attaching the test case and will post the explanation of
 the issue as a reply to give everyone a chance to try and spot the
 error...

         Jerome

 Whoa! That's indeed unfortunate. Allow me some more whoring for TDPL:

 ==============
 \indexes{surprising behavior!of unary \lstinline{-}}%
 One surprising behavior of unary minus is that, when applied to an
 unsigned value, it still yields an unsigned value (according to the
 rules in~\S~\vref{sec:typing-of-ops}). For example,\sbs -55u is\sbs
 4_294_967_241 , which is\sbs \ccbox{uint.max - 55 + 1}.

 \indexes{unsigned type, natural number, two's complement, overflow}%
 The fact that unsigned types are not really natural numbers is a fact
 of life. In\sbs\dee and many other languages, two's complement
 arithmetic with its simple overflow rules is an inescapable reality
 that cannot be abstracted away. One way to think \mbox{of} -val for
 any integral value~ val is to consider it a short form \mbox{of}$\,$
 \cc{\~val + 1}; in other words, flip every bit in val and then add 1
 to the result. This manipulation does not raise particular questions
 about the signedness of~ val .
 ==============

 (This heavily adorned text also shows what sausage making looks like...)

	The rhs expression was quite a bit more complicated than that (not counting the fact that it was hidden in a preprocessor macro...). Note moreover that the problem doesn't come from the unary minus since the code works as expected on 32-bit platforms...

		Jerome
--
mailto:jeberger free.fr            http://jeberger.free.fr
Jabber: jeberger jabber.fr
Jun 16 2010
prev sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
Jérôme M. Berger wrote:
 Jérôme M. Berger wrote:
 Walter Bright wrote:
 Jérôme M. Berger wrote:
     Actually, that problem already occurs in C. I've had problems when
 porting code from x86 to x86_64 because some unsigned operations
 don't behave the same way on both...

 How so? I thought most 64 bit C compilers were specifically designed to
 avoid this problem.

 I can't isolate it to a minimal test case, but at my job, we make an image processing library. Since negative image dimensions don't make sense, we decided to define width and height as "unsigned int". Now, we have code that works fine on 32-bit platforms (x86 and arm) but segfaults on x86_64. Simply adding an (int) cast in front of the image dimensions in a couple of places fixes the issue (tested with various versions of gcc on linux and windows).

 I stand corrected: I finally managed to isolate it after all. I am attaching the test case and will post the explanation of the issue as a reply to give everyone a chance to try and spot the error...

Easy. offset should be a size_t, not an unsigned.
Jun 16 2010
next sibling parent reply Jérôme M. Berger <jeberger free.fr> writes:
Walter Bright wrote:
 Jérôme M. Berger wrote:
 Jérôme M. Berger wrote:
 Walter Bright wrote:
 Jérôme M. Berger wrote:
     Actually, that problem already occurs in C. I've had problems when
 porting code from x86 to x86_64 because some unsigned operations
 don't behave the same way on both...

 How so? I thought most 64 bit C compilers were specifically designed to
 avoid this problem.

 I can't isolate it to a minimal test case, but at my job, we make an image processing library. Since negative image dimensions don't make sense, we decided to define width and height as "unsigned int". Now, we have code that works fine on 32-bit platforms (x86 and arm) but segfaults on x86_64. Simply adding an (int) cast in front of the image dimensions in a couple of places fixes the issue (tested with various versions of gcc on linux and windows).

 I stand corrected: I finally managed to isolate it after all. I am attaching the test case and will post the explanation of the issue as a reply to give everyone a chance to try and spot the error...

         Jerome

 Easy. offset should be a size_t, not an unsigned.

	And what about image width and height? Sure, in hindsight they could probably be made into size_t too. Much easier and safer to make them into signed ints instead, since we don't manipulate images bigger than 2_147_483_648 on a side anyway...
	Which is more or less bearophile's point: unless you're *really* sure that you know what you're doing, use signed ints even if negative numbers make no sense in a particular context.

		Jerome
--
mailto:jeberger free.fr            http://jeberger.free.fr
Jabber: jeberger jabber.fr
Jun 16 2010
parent reply Don <nospam nospam.com> writes:
Jérôme M. Berger wrote:
 Walter Bright wrote:
 Jérôme M. Berger wrote:
 Jérôme M. Berger wrote:
 Walter Bright wrote:
 Jérôme M. Berger wrote:

 Now, we have code that works fine on 32-bit platforms (x86 and arm) but segfaults on x86_64. Simply adding an (int) cast in front of the image dimensions in a couple of places fixes the issue (tested with various versions of gcc on linux and windows).




 Easy. offset should be a size_t, not an unsigned.

And what about image width and height? Sure, in hindsight they could probably be made into size_t too. Much easier and safer to make them into signed ints instead, since we don't manipulate images bigger than 2_147_483_648 on a side anyway... Which is more or less bearophile's point: unless you're *really* sure that you know what you're doing, use signed ints even if negative numbers make no sense in a particular context.

I agree. Actually the great evil in C is that implicit casts from signed<->unsigned AND sign extension are both permitted in a single expression. I hope that when the integer range checking is fully implemented in D, such two-way implicit casts will be forbidden.

(D has introduced ANOTHER instance of this with the ridiculous >>> operator.

byte b = -1;
byte c = b >>> 1;

Guess what c is!)
Jun 17 2010
next sibling parent reply Justin Spahr-Summers <Justin.SpahrSummers gmail.com> writes:
On Thu, 17 Jun 2010 10:00:24 +0200, Don <nospam nospam.com> wrote:
 (D has introduced ANOTHER instance of this with the ridiculous >>> 
 operator.
 byte b = -1;
 byte c = b >>> 1;
 Guess what c is!
 )
 

127, right? I know at least RISC processors tend to have instructions for both a logical and algebraic right shift. In that context, it makes sense for a systems programming language.
Jun 17 2010
parent reply Don <nospam nospam.com> writes:
Justin Spahr-Summers wrote:
 On Thu, 17 Jun 2010 10:00:24 +0200, Don <nospam nospam.com> wrote:
 (D has introduced ANOTHER instance of this with the ridiculous >>> 
 operator.
 byte b = -1;
 byte c = b >>> 1;
 Guess what c is!
 )

127, right? I know at least RISC processors tend to have instructions for both a logical and algebraic right shift. In that context, it makes sense for a systems programming language.

Surprise! c == -1. Because 1 is an int, b gets promoted to int before the shift happens. Then the result is 0x7FFF_FFFF which then gets converted to byte, leaving 0xFF == -1.
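Spelled out with the intermediate values made explicit (a sketch of the same promotion, not new behaviour):

byte b = -1;
int promoted = b;              // sign extension: 0xFFFF_FFFF
int shifted = promoted >>> 1;  // the unsigned shift happens at int width: 0x7FFF_FFFF
byte c = cast(byte) shifted;   // only the low byte survives: 0xFF, i.e. -1 again
assert(c == -1);               // not the 127 one might expect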
Jun 17 2010
parent reply BCS <none anon.com> writes:
Hello Don,

 Surprise! c == -1.

No kidding!
 Because 1 is an int, b gets promoted to int before the shift happens.

Why would it ever need to be promoted? Unless all (most?) CPUs have only size_t shifts, all three shifts should never promote the LHS. -- ... <IXOYE><
Jun 17 2010
parent reply Don <nospam nospam.com> writes:
BCS wrote:
 Hello Don,
 
 Surprise! c == -1.

No kidding!
 Because 1 is an int, b gets promoted to int before the shift happens.

Why would it ever need to be promoted? Unless all (most?) CPUs have only size_t shifts, all three shifts should never promote the LHS.

It shouldn't NEED to. But C defined that >> and << operate that way.
Jun 17 2010
parent BCS <none anon.com> writes:
Hello Don,

 BCS wrote:
 
 Hello Don,
 
 Surprise! c == -1.
 

 Because 1 is an int, b gets promoted to int before the shift
 happens.
 

only size_t shifts, all three shifts should never promote the LHS.


At least for the >>>, can we break that? C doesn't even *have* a >>> operator. -- ... <IXOYE><
Jun 17 2010
prev sibling parent reply Kagamin <spam here.lot> writes:
Don Wrote:

 (D has introduced ANOTHER instance of this with the ridiculous >>> 
 operator.
 byte b = -1;
 byte c = b >>> 1;
 Guess what c is!
 )

:) Well, there was an issue. Wasn't it fixed?

A more interesting case is

byte c = -1 >>> 1;
Jun 17 2010
parent reply Don <nospam nospam.com> writes:
Kagamin wrote:
 Don Wrote:
 
 (D has introduced ANOTHER instance of this with the ridiculous >>> 
 operator.
 byte b = -1;
 byte c = b >>> 1;
 Guess what c is!
 )

:) Well, there was issue. Wasn't it fixed?

No. It's a design flaw, not a bug. I think it could only be fixed by disallowing that code, or creating a special rule to make that code do what you expect. A better solution would be to drop >>>.
 More interesting case is
 byte c = -1 >>> 1;

Jun 17 2010
next sibling parent reply KennyTM~ <kennytm gmail.com> writes:
On Jun 17, 10 18:59, Don wrote:
 Kagamin wrote:
 Don Wrote:

 (D has introduced ANOTHER instance of this with the ridiculous >>>
 operator.
 byte b = -1;
 byte c = b >>> 1;
 Guess what c is!
 )

:) Well, there was issue. Wasn't it fixed?

No. It's a design flaw, not a bug. I think it could only be fixed by disallowing that code, or creating a special rule to make that code do what you expect. A better solution would be to drop >>>.

I disagree. The flaw is whether x should be promoted to CommonType!(typeof(x), int), given that the range of typeof(x >>> y) should never exceed the range of typeof(x), no matter what value y is.
 More interesting case is
 byte c = -1 >>> 1;


Jun 17 2010
parent reply Don <nospam nospam.com> writes:
KennyTM~ wrote:
 On Jun 17, 10 18:59, Don wrote:
 Kagamin wrote:
 Don Wrote:

 (D has introduced ANOTHER instance of this with the ridiculous >>>
 operator.
 byte b = -1;
 byte c = b >>> 1;
 Guess what c is!
 )

:) Well, there was issue. Wasn't it fixed?

No. It's a design flaw, not a bug. I think it could only be fixed by disallowing that code, or creating a special rule to make that code do what you expect. A better solution would be to drop >>>.

I disagree. The flaw is whether x should be promoted to CommonType!(typeof(x), int), given that the range of typeof(x >>> y) should never exceed the range of typeof(x), no matter what value y is.

The range of typeof(x & y) can never exceed the range of typeof(x), no matter what value y is. Yet (byte & int) is promoted to int.

Actually, what happens to x >>> y if y is negative?

The current rule is:

x OP y      means
cast(CommonType!(x,y))x OP cast(CommonType!(x,y))y

for any binary operation OP. How can we fix >>> without adding an extra rule?
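As a sketch (with a hypothetical helper name), the rule can be mimicked with std.traits.CommonType, which makes the surprise above mechanical:

import std.traits : CommonType;

auto unsignedShift(L, R)(L lhs, R rhs)
{
    alias C = CommonType!(L, R);         // for (byte, int) this is int
    return cast(C) lhs >>> cast(C) rhs;  // so the byte is sign-extended before shifting
}

unittest
{
    byte b = -1;
    static assert(is(typeof(unsignedShift(b, 1)) == int));
    assert(unsignedShift(b, 1) == 0x7FFF_FFFF); // the sign bits come back as ones
}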
 
 More interesting case is
 byte c = -1 >>> 1;



Jun 17 2010
next sibling parent BCS <none anon.com> writes:
Hello Don,

 The current rule is:
 x OP y      means
 cast(CommonType!(x,y))x OP cast(CommonType!(x,y))y
 for any binary operation OP.
 How can we fix >>> without adding an extra rule?

However it's not that way for the ternary op, so there is a (somewhat related) precedent. Even considering RHS<0, I would NEVER /expect/ a shift to have any type other than typeof(LHS). -- ... <IXOYE><
Jun 17 2010
prev sibling next sibling parent reply KennyTM~ <kennytm gmail.com> writes:
On Jun 17, 10 21:04, Don wrote:
 KennyTM~ wrote:
 On Jun 17, 10 18:59, Don wrote:
 Kagamin wrote:
 Don Wrote:

 (D has introduced ANOTHER instance of this with the ridiculous >>>
 operator.
 byte b = -1;
 byte c = b >>> 1;
 Guess what c is!
 )

:) Well, there was issue. Wasn't it fixed?

No. It's a design flaw, not a bug. I think it could only be fixed by disallowing that code, or creating a special rule to make that code do what you expect. A better solution would be to drop >>>.

I disagree. The flaw is whether x should be promoted to CommonType!(typeof(x), int), given that the range of typeof(x >>> y) should never exceed the range of typeof(x), no matter what value y is.

The range of typeof(x & y) can never exceed the range of typeof(x), no matter what value y is. Yet (byte & int) is promoted to int.

That's arguable. But (byte & int -> int) is meaningful because (&) is somewhat "symmetric" compared to (>>>).

What does (&) do?

(a & b) <=> foreach (bit x, y; zip(a, b)) yield bit (x && y ? 1 : 0);

What does (>>>) do?

(a >>> b) <=> repeat b times { logical right shift (a); } return a;

Algorithmically, (&) needs to iterate over all bits of "a" and "b", but for (>>>) the range of "b" is irrelevant to the result of "a >>> b".
 Actually, what happens to x>>>y if y is negative?

x.d(6): Error: shift by -1 is outside the range 0..32
 The current rule is:
 x OP y means
 cast(CommonType!(x,y))x OP cast(CommonType!(x,y))y

 for any binary operation OP.
 How can we fix >>> without adding an extra rule?

There's already an extra rule for >>>.

ubyte a = 1;
writeln(typeof(a >>> a).stringof); // prints "int".

Similarly, (^^), (==), etc. do not obey this "rule".

IMO, for ShiftExpression ((>>), (<<), (>>>)) the return type should be typeof(lhs).
 More interesting case is
 byte c = -1 >>> 1;




Jun 17 2010
parent reply Don <nospam nospam.com> writes:
KennyTM~ wrote:
 On Jun 17, 10 21:04, Don wrote:
 KennyTM~ wrote:
 On Jun 17, 10 18:59, Don wrote:
 Kagamin wrote:
 Don Wrote:

 (D has introduced ANOTHER instance of this with the ridiculous >>>
 operator.
 byte b = -1;
 byte c = b >>> 1;
 Guess what c is!
 )

:) Well, there was issue. Wasn't it fixed?

No. It's a design flaw, not a bug. I think it could only be fixed by disallowing that code, or creating a special rule to make that code do what you expect. A better solution would be to drop >>>.

I disagree. The flaw is whether x should be promoted to CommonType!(typeof(x), int), given that the range of typeof(x >>> y) should never exceed the range of typeof(x), no matter what value y is.

The range of typeof(x & y) can never exceed the range of typeof(x), no matter what value y is. Yet (byte & int) is promoted to int.

That's arguable. But (byte & int -> int) is meaningful because (&) is some what "symmetric" compared to (>>>).

See below. It's what C does that matters.
 Actually, what happens to x>>>y if y is negative?

x.d(6): Error: shift by -1 is outside the range 0..32

If y is a variable, it actually performs x >>> (y & 31), so it makes no sense for it to cast everything to int.
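A small illustration (values chosen for the example; the masking itself is what the x86 shift instruction does with a variable count, per the statement above, so it is shown explicitly rather than relied upon):

uint x = 0x8000_0000;
int  y = 33;                        // a run-time shift count
// The generated code behaves like x >>> (y & 31), i.e. a shift by 1 here:
assert((x >>> (y & 31)) == 0x4000_0000);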
 The current rule is:
 x OP y means
 cast(CommonType!(x,y))x OP cast(CommonType!(x,y))y

 for any binary operation OP.
 How can we fix >>> without adding an extra rule?

There's already an extra rule for >>>. ubyte a = 1; writeln(typeof(a >>> a).stringof); // prints "int". Similarly, (^^), (==), etc do not obey this "rule".

The logical operators aren't relevant. They all return bool. ^^ obeys the rule: typeof(a^^b) is typeof(a*b), in all cases.
 IMO, for ShiftExpression ((>>), (<<), (>>>)) the return type should be 
 typeof(lhs).

I agree that would be better, but it would be a silent change from the C behaviour. So it's not possible.
Jun 17 2010
parent KennyTM~ <kennytm gmail.com> writes:
On Jun 17, 10 23:50, Don wrote:
 KennyTM~ wrote:
 On Jun 17, 10 21:04, Don wrote:
 KennyTM~ wrote:
 On Jun 17, 10 18:59, Don wrote:
 Kagamin wrote:
 Don Wrote:

 (D has introduced ANOTHER instance of this with the ridiculous >>>
 operator.
 byte b = -1;
 byte c = b >>> 1;
 Guess what c is!
 )

:) Well, there was issue. Wasn't it fixed?

No. It's a design flaw, not a bug. I think it could only be fixed by disallowing that code, or creating a special rule to make that code do what you expect. A better solution would be to drop >>>.

I disagree. The flaw is whether x should be promoted to CommonType!(typeof(x), int), given that the range of typeof(x >>> y) should never exceed the range of typeof(x), no matter what value y is.

The range of typeof(x & y) can never exceed the range of typeof(x), no matter what value y is. Yet (byte & int) is promoted to int.

That's arguable. But (byte & int -> int) is meaningful because (&) is some what "symmetric" compared to (>>>).

See below. It's what C does that matters.
 Actually, what happens to x>>>y if y is negative?

x.d(6): Error: shift by -1 is outside the range 0..32

If y is a variable, it actually performs x >>> (y&31); So it actually makes no sense for it to cast everything to int.
 The current rule is:
 x OP y means
 cast(CommonType!(x,y))x OP cast(CommonType!(x,y))y

 for any binary operation OP.
 How can we fix >>> without adding an extra rule?

There's already an extra rule for >>>. ubyte a = 1; writeln(typeof(a >>> a).stringof); // prints "int". Similarly, (^^), (==), etc do not obey this "rule".

The logical operators aren't relevant. They all return bool. ^^ obeys the rule: typeof(a^^b) is typeof(a*b), in all cases.
 IMO, for ShiftExpression ((>>), (<<), (>>>)) the return type should be
 typeof(lhs).

I agree that would be better, but it would be a silent change from the C behaviour. So it's not possible.

Too bad.
Jun 17 2010
prev sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Don wrote:
 KennyTM~ wrote:
 On Jun 17, 10 18:59, Don wrote:
 Kagamin wrote:
 Don Wrote:

 (D has introduced ANOTHER instance of this with the ridiculous >>>
 operator.
 byte b = -1;
 byte c = b >>> 1;
 Guess what c is!
 )

:) Well, there was issue. Wasn't it fixed?

No. It's a design flaw, not a bug. I think it could only be fixed by disallowing that code, or creating a special rule to make that code do what you expect. A better solution would be to drop >>>.

I disagree. The flaw is whether x should be promoted to CommonType!(typeof(x), int), given that the range of typeof(x >>> y) should never exceed the range of typeof(x), no matter what value y is.

The range of typeof(x & y) can never exceed the range of typeof(x), no matter what value y is. Yet (byte & int) is promoted to int. Actually, what happens to x>>>y if y is negative? The current rule is: x OP y means cast(CommonType!(x,y))x OP cast(CommonType!(x,y))y for any binary operation OP. How can we fix >>> without adding an extra rule?

Wait a minute. D should never allow an implicit narrowing conversion. It doesn't for other cases, so isn't this a simple bug? Andrei
Jun 17 2010
parent reply Don <nospam nospam.com> writes:
Andrei Alexandrescu wrote:
 Don wrote:
 KennyTM~ wrote:
 On Jun 17, 10 18:59, Don wrote:
 Kagamin wrote:
 Don Wrote:

 (D has introduced ANOTHER instance of this with the ridiculous >>>
 operator.
 byte b = -1;
 byte c = b >>> 1;
 Guess what c is!
 )

:) Well, there was issue. Wasn't it fixed?

No. It's a design flaw, not a bug. I think it could only be fixed by disallowing that code, or creating a special rule to make that code do what you expect. A better solution would be to drop >>>.

I disagree. The flaw is whether x should be promoted to CommonType!(typeof(x), int), given that the range of typeof(x >>> y) should never exceed the range of typeof(x), no matter what value y is.

The range of typeof(x & y) can never exceed the range of typeof(x), no matter what value y is. Yet (byte & int) is promoted to int. Actually, what happens to x>>>y if y is negative? The current rule is: x OP y means cast(CommonType!(x,y))x OP cast(CommonType!(x,y))y for any binary operation OP. How can we fix >>> without adding an extra rule?

Wait a minute. D should never allow an implicit narrowing conversion. It doesn't for other cases, so isn't this a simple bug?

It'll make it illegal, but it won't make it usable. I think the effect of full range propagation will be that >>> will become illegal for anything other than int and long, unless it is provably identical to >>. Unless you do the hideous

b >>> cast(typeof(b))1;

I think every D style guide will include the recommendation, "never use >>>".



A question I have though is, Java has >>>. Does Java have these problems too?
Jun 17 2010
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Don wrote:
 Andrei Alexandrescu wrote:
 Don wrote:
 KennyTM~ wrote:
 On Jun 17, 10 18:59, Don wrote:
 Kagamin wrote:
 Don Wrote:

 (D has introduced ANOTHER instance of this with the ridiculous >>>
 operator.
 byte b = -1;
 byte c = b >>> 1;
 Guess what c is!
 )

:) Well, there was issue. Wasn't it fixed?

No. It's a design flaw, not a bug. I think it could only be fixed by disallowing that code, or creating a special rule to make that code do what you expect. A better solution would be to drop >>>.

I disagree. The flaw is whether x should be promoted to CommonType!(typeof(x), int), given that the range of typeof(x >>> y) should never exceed the range of typeof(x), no matter what value y is.

The range of typeof(x & y) can never exceed the range of typeof(x), no matter what value y is. Yet (byte & int) is promoted to int. Actually, what happens to x>>>y if y is negative? The current rule is: x OP y means cast(CommonType!(x,y))x OP cast(CommonType!(x,y))y for any binary operation OP. How can we fix >>> without adding an extra rule?

Wait a minute. D should never allow an implicit narrowing conversion. It doesn't for other cases, so isn't this a simple bug?

It'll make it illegal, but it won't make it usable. I think the effect of full range propagation will be that >>> will become illegal for anything other than int and long, unless it is provably identical to >>. Unless you do the hideous b >>> cast(typeof(b))1; I think every D style guide will include the recommendation, "never use >>>".

Three times. Three times I tried to convince Walter to remove that crap from D - one for each '>'. Last time was as the manuscript was going out the door, and I was willing to take the flak from the copyeditors for the changes in pagination. Just like with non-null references, Walter has framed the matter in a way that makes convincing him extremely difficult. That would be great if he were right.
 A question I have though is, Java has >>>. Does Java have these problems 
 too?

Java is much more conservative with implicit conversions, so they wouldn't allow the assignment without a cast. Beyond that, yes, the issues are the same. Andrei
Jun 17 2010
parent bearophile <bearophileHUGS lycos.com> writes:
Andrei Alexandrescu:
 Just like with non-null references, Walter has 
 framed the matter in a way that makes convincing extremely difficult. 
 That would be great if he were right.

I know this is off-topic in this thread. I remember the long thread about this. Making all D references non-null by default requires a significant change in both the language and the way objects are used in D, so I can understand that Walter has refused this idea; maybe he is right. But something more moderate can be done: keep references nullable by default, and invent a symbol (like ) that can be added as a suffix to a class reference type or pointer type to denote that it is non-null (and the type system can enforce it at the call site too; it's part of the function signature or variable type, so it's more than just syntax sugar for a null test inside the function!). I believe this reduced idea can be enough to avoid many null-derived bugs. It's different from the situation with Java's checked exceptions, it's less viral: if you write a 100-line D program, or a long C-style D program, you are probably free to never use this feature.

void foo(int* ptr, Bar b) {...}

void main() {
    int* p = ensureNonull(cast(int*)malloc(int.sizeof));
    Bar b = ensureNonull(new Bar());
    foo(p, b);
}

Something (badly named) like ensureNonull() changes the input type into a non-null type and performs a run-time test for null-ness :-) Surely this idea has some holes, but they can probably be fixed.

Bye,
bearophile
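For what it's worth, a rough sketch of what an ensureNonull-style helper could look like as a pure library wrapper (hypothetical names, no new syntax; just to make the idea concrete):

import core.stdc.stdlib : malloc;

struct NonNull(T) if (is(T == class) || is(T : U*, U))
{
    T payload;
    this(T value)
    {
        assert(value !is null, "null passed where a non-null was required");
        payload = value;
    }
    alias payload this;   // usable wherever a plain T is expected
    // (default construction still leaves payload null - one of the "holes" mentioned)
}

NonNull!T ensureNonull(T)(T value)
{
    return NonNull!T(value);
}

class Bar {}

void foo(NonNull!(int*) ptr, NonNull!Bar b) { /* ... */ }

void main()
{
    auto p = ensureNonull(cast(int*) malloc(int.sizeof));
    auto b = ensureNonull(new Bar);
    foo(p, b);
}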
Jun 17 2010
prev sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
Steven Schveighoffer wrote:
 On Thu, 17 Jun 2010 15:24:52 -0400, Don <nospam nospam.com> wrote:
 
 A question I have though is, Java has >>>. Does Java have these 
 problems too?

Java doesn't have unsigned values, so it's necessary to use regular int's as bitmasks, hence the extra operator.

The reason D has >>> is to cause an unsigned right shift to be generated without needing to resort to casts as one has to in C. The problem with such casts is they wreck generic code.
Jun 17 2010
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Walter Bright wrote:
 Steven Schveighoffer wrote:
 On Thu, 17 Jun 2010 15:24:52 -0400, Don <nospam nospam.com> wrote:

 A question I have though is, Java has >>>. Does Java have these 
 problems too?

Java doesn't have unsigned values, so it's necessary to use regular int's as bitmasks, hence the extra operator.

The reason D has >>> is to cause an unsigned right shift to be generated without needing to resort to casts as one has to in C. The problem with such casts is they wreck generic code.

No.

http://www.digitalmars.com/d/2.0/phobos/std_traits.html#Unsigned

void fun(T)(T num) if (isIntegral!T)
{
    auto x = cast(Unsigned!T) num;
    ...
}

Andrei
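And since the cast only has to be written once, generic code can hide it behind a trivial wrapper (a sketch with a hypothetical name):

import std.traits : isIntegral, Unsigned;

// Logical (unsigned) right shift for any built-in integer, at its own width.
T lshr(T)(T value, uint amount) if (isIntegral!T)
{
    return cast(T)(cast(Unsigned!T) value >> amount);
}

unittest
{
    byte b = -1;
    assert(lshr(b, 1) == 127);            // shifted at byte width, zeros shifted in
    assert(lshr(-1, 1) == 0x7FFF_FFFF);   // plain int case
}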
Jun 17 2010
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Andrei Alexandrescu wrote:
 Walter Bright wrote:
 Steven Schveighoffer wrote:
 On Thu, 17 Jun 2010 15:24:52 -0400, Don <nospam nospam.com> wrote:

 A question I have though is, Java has >>>. Does Java have these 
 problems too?

Java doesn't have unsigned values, so it's necessary to use regular int's as bitmasks, hence the extra operator.

The reason D has >>> is to cause an unsigned right shift to be generated without needing to resort to casts as one has to in C. The problem with such casts is they wreck generic code.

No. http://www.digitalmars.com/d/2.0/phobos/std_traits.html#Unsigned void fun(T)(T num) if (isIntegral!T) { auto x = cast(Unsigned!T) num; ... }

It's not a perfect replacement, in that if T is a custom integer type, you have to extend the template to support it. Furthermore, now your BigInt custom type also has to support a cast to unsigned just so it can right shift. Also, T may not be readily identifiable, so you'd have to write:

    cast(Unsigned!(typeof(expr))) expr;
Jun 17 2010
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Walter Bright wrote:
 Andrei Alexandrescu wrote:
 Walter Bright wrote:
 Steven Schveighoffer wrote:
 On Thu, 17 Jun 2010 15:24:52 -0400, Don <nospam nospam.com> wrote:

 A question I have though is, Java has >>>. Does Java have these 
 problems too?

Java doesn't have unsigned values, so it's necessary to use regular int's as bitmasks, hence the extra operator.

The reason D has >>> is to cause an unsigned right shift to be generated without needing to resort to casts as one has to in C. The problem with such casts is they wreck generic code.

No. http://www.digitalmars.com/d/2.0/phobos/std_traits.html#Unsigned void fun(T)(T num) if (isIntegral!T) { auto x = cast(Unsigned!T) num; ... }

It's not a perfect replacement, as in if T is a custom integer type, you have to extend the template to support it.

Let me think when I wanted an unsigned shift against an arbitrarily-sized integer. Um... never?
 Furthermore, now your BigInt 
 custom type also has to support a cast to unsigned just so it can right 
 shift.

BigInt is a superficial argument. Unless you're willing to flesh it out much better, it can be safely dropped.
 Also, T may not be readily identifiable, so you'd have to write:
 
     cast(Unsigned!(typeof(expr))) expr;

It's not like shift occurs often enough to make that an issue. Note that your argument is predicated on using signed types instead of unsigned types in the first place, and tacitly assumes the issue is frequent enough to *add a new operator*. Yet unsigned shifts correlate naturally with unsigned numbers. So what exactly is it that is valuable in >>> and makes its presence in the language justifiable? Andrei
Jun 17 2010
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Andrei Alexandrescu wrote:
 Note that your argument is predicated on using signed types instead of 
 unsigned types in the first place, and tacitly assumes the issue is 
 frequent enough to *add a new operator*. Yet unsigned shifts correlate 
 naturally with unsigned numbers.
 
 So what is exactly that is valuable in >>> that makes its presence in 
 the language justifiable?

Generally the irritation I feel whenever I right shift and have to go back through and either check the type or just cast it to unsigned to be sure there is no latent bug. For example, the optlink asm code does quite a lot of unsigned right shifts. I have to be very careful about the typing to ensure a matching unsigned shift, since I have little idea what the range of values the variable can have.
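A minimal illustration of the latent bug in question (made-up values, not the optlink code): the same shift expression gives different results depending on whether the operand's declared type happens to be signed:

int  si = int.min;        // bit pattern 0x8000_0000
uint ui = 0x8000_0000;
assert(si >> 4 == -0x0800_0000);  // arithmetic shift: sign bits shifted in
assert(ui >> 4 ==  0x0800_0000);  // logical shift: zeros shifted in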
Jun 17 2010
next sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Walter Bright wrote:
 Andrei Alexandrescu wrote:
 Note that your argument is predicated on using signed types instead of 
 unsigned types in the first place, and tacitly assumes the issue is 
 frequent enough to *add a new operator*. Yet unsigned shifts correlate 
 naturally with unsigned numbers.

 So what is exactly that is valuable in >>> that makes its presence in 
 the language justifiable?

Generally the irritation I feel whenever I right shift and have to go back through and either check the type or just cast it to unsigned to be sure there is no latent bug. For example, the optlink asm code does quite a lot of unsigned right shifts. I have to be very careful about the typing to ensure a matching unsigned shift, since I have little idea what the range of values the variable can have.

I'm sure all linker asm writers will be happy about that feature :o}. Andrei
Jun 17 2010
prev sibling parent reply Don <nospam nospam.com> writes:
Walter Bright wrote:
 Andrei Alexandrescu wrote:
 Note that your argument is predicated on using signed types instead of 
 unsigned types in the first place, and tacitly assumes the issue is 
 frequent enough to *add a new operator*. Yet unsigned shifts correlate 
 naturally with unsigned numbers.

 So what is exactly that is valuable in >>> that makes its presence in 
 the language justifiable?

Generally the irritation I feel whenever I right shift and have to go back through and either check the type or just cast it to unsigned to be sure there is no latent bug.

But x >>> 1 doesn't work for shorts and bytes.
 For example, the optlink asm code does quite a lot of unsigned right 
 shifts. I have to be very careful about the typing to ensure a matching 
 unsigned shift, since I have little idea what the range of values the 
 variable can have.

I've read the OMF spec, and I know it includes shorts and bytes. So I really don't think >>> solves even this use case.
Jun 18 2010
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Don wrote:
 Walter Bright wrote:
 Andrei Alexandrescu wrote:
 Note that your argument is predicated on using signed types instead 
 of unsigned types in the first place, and tacitly assumes the issue 
 is frequent enough to *add a new operator*. Yet unsigned shifts 
 correlate naturally with unsigned numbers.

 So what is exactly that is valuable in >>> that makes its presence in 
 the language justifiable?

Generally the irritation I feel whenever I right shift and have to go back through and either check the type or just cast it to unsigned to be sure there is no latent bug.

But x >>> 1 doesn't work for shorts and bytes.

I know. That's ill thought out.
 
 For example, the optlink asm code does quite a lot of unsigned right 
 shifts. I have to be very careful about the typing to ensure a 
 matching unsigned shift, since I have little idea what the range of 
 values the variable can have.

I've read the OMF spec, and I know it includes shorts and bytes. So I really don't think >>> solves even this use case.

I can send you the source if you like <g>.
Jun 18 2010
parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Walter Bright wrote:
 Don wrote:
 Walter Bright wrote:
 Andrei Alexandrescu wrote:
 Note that your argument is predicated on using signed types instead 
 of unsigned types in the first place, and tacitly assumes the issue 
 is frequent enough to *add a new operator*. Yet unsigned shifts 
 correlate naturally with unsigned numbers.

 So what is exactly that is valuable in >>> that makes its presence 
 in the language justifiable?

Generally the irritation I feel whenever I right shift and have to go back through and either check the type or just cast it to unsigned to be sure there is no latent bug.

But x >>> 1 doesn't work for shorts and bytes.

I know. That's ill thought out.

Then please rule it out of the language. Andrei
Jun 18 2010
prev sibling parent reply Don <nospam nospam.com> writes:
Walter Bright wrote:
 Steven Schveighoffer wrote:
 On Thu, 17 Jun 2010 15:24:52 -0400, Don <nospam nospam.com> wrote:

 A question I have though is, Java has >>>. Does Java have these 
 problems too?

Java doesn't have unsigned values, so it's necessary to use regular int's as bitmasks, hence the extra operator.

The reason D has >>> is to cause an unsigned right shift to be generated without needing to resort to casts as one has to in C.

Unfortunately it doesn't work. You still can't do an unsigned right shift of a signed byte by 1, without resorting to a cast.
 The problem with such casts is they wreck generic code.

It's C's cavalier approach to implicit conversions that wrecks generic code. And it makes such a pig's breakfast of it that >>> doesn't quite work.
Jun 17 2010
next sibling parent reply BCS <none anon.com> writes:
Hello Don,

 It's C's cavalier approach to implicit conversions that wrecks generic
 code. And it makes such a pigs breakfast of it that >>> doesn't quite
 work.
 

I still haven't seen anyone address how typeof(a>>>b) == typeof(a) breaks C code when a>>>b isn't legal C to begin with. (Note, I'm not saying do the same with >> or <<, because I see why that can't be done.) -- ... <IXOYE><
Jun 18 2010
parent reply Don <nospam nospam.com> writes:
Simen kjaeraas wrote:
 BCS <none anon.com> wrote:
 
 I still haven't seen anyone address how typeof(a>>>b) == typeof(a) 
 breaks c code when a>>>b isn't legal c to begin with.

It doesn't, of course. However, it is desirable to have similar rules for similar operations, like >> and >>>.

without making a special-case rule for it.
Jun 20 2010
parent BCS <none anon.com> writes:
Hello Don,

 Simen kjaeraas wrote:
 
 BCS <none anon.com> wrote:
 
 I still haven't seen anyone address how typeof(a>>>b) == typeof(a)
 breaks c code when a>>>b isn't legal c to begin with.
 

for similar operations, like >> and >>>.

without making a special-case rule for it.

At least for me, I find the current situation more surprising than the alternative. For that matter, if >>> worked differently than >>, I think I would have (the first time I ran across it) thought the >> case was the odd one. -- ... <IXOYE><
Jun 20 2010
prev sibling parent "Simen kjaeraas" <simen.kjaras gmail.com> writes:
BCS <none anon.com> wrote:

 I still haven't seen anyone address how typeof(a>>>b) == typeof(a)  
 breaks c code when a>>>b isn't legal c to begin with.

It doesn't, of course. However, it is desirable to have similar rules for similar operations, like >> and >>>. -- Simen
Jun 19 2010
prev sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Don wrote:
 Kagamin wrote:
 Don Wrote:

 (D has introduced ANOTHER instance of this with the ridiculous >>> 
 operator.
 byte b = -1;
 byte c = b >>> 1;
 Guess what c is!
 )

:) Well, there was issue. Wasn't it fixed?

No. It's a design flaw, not a bug. I think it could only be fixed by disallowing that code, or creating a special rule to make that code do what you expect. A better solution would be to drop >>>.

I agree. But even within the current language, value range propagation (VRP) should disallow this case without a problem. There's been a long discussion about computing the bounds of a & b and a | b given the bounds of a and b. The current VRP code for those operations is broken, and I suspect the VRP code for a >>> b is broken too. Andrei
Jun 17 2010
prev sibling parent reply Kagamin <spam here.lot> writes:
Walter Bright Wrote:

 Easy. offset should be a size_t, not an unsigned.

I've hit this bug using a size_t (an array length) on the right side of a += -b; it was just a long (a file offset) on the left side. Such code should actually work on a 64-bit system, and it fails on 32-bit. The MS compiler reports such portability issues with a warning, I believe.
Jun 16 2010
parent reply Justin Spahr-Summers <Justin.SpahrSummers gmail.com> writes:
On Thu, 17 Jun 2010 02:46:13 -0400, Kagamin <spam here.lot> wrote:
 
 Walter Bright Wrote:
 
 Easy. offset should be a size_t, not an unsigned.

I've hit the bug using size_t at the right side of a+=-b (array length). It's just a long was at the left side (file offset). Such code should actually work in 64bit system and it fails in 32bit. MS compiler reports such portability issues with a warning, I believe.

This sounds more like an issue with file offsets being longs, ironically. Using longs to represent zero-based locations in a file is extremely unsafe. Such usages should really be restricted to short-range offsets from the current file position, and fpos_t used for everything else (which is presumably available in std.c.stdio).
Jun 16 2010
parent reply Kagamin <spam here.lot> writes:
Justin Spahr-Summers Wrote:

 This sounds more like an issue with file offsets being longs, 
 ironically. Using longs to represent zero-based locations in a file is 
 extremely unsafe. Such usages should really be restricted to short-range 
 offsets from the current file position, and fpos_t used for everything 
 else (which is assumably available in std.c.stdio).

1. Ironically, the issue is not in the file offset's signedness. You still hit the bug with a ulong offset.

2. A signed offset is two times safer than an unsigned one, as you can detect an underflow bug (and, maybe, overflow). With an unsigned offset you get an exception only if the filesystem doesn't support sparse files, so Linux will keep silent.

3. A signed offset is consistent/type-safe in the case of the seek function, as it doesn't arbitrarily mutate between signed and unsigned.

4. Choosing unsigned for a file offset is not dictated by safety, but by stupidity: "hey, I lose my bit!"

I AM an optimization zealot, but unsigned offsets are plain dead freaking stupid.
Jun 17 2010
parent reply Justin Spahr-Summers <Justin.SpahrSummers gmail.com> writes:
On Thu, 17 Jun 2010 03:27:59 -0400, Kagamin <spam here.lot> wrote:
 
 Justin Spahr-Summers Wrote:
 
 This sounds more like an issue with file offsets being longs, 
 ironically. Using longs to represent zero-based locations in a file is 
 extremely unsafe. Such usages should really be restricted to short-range 
 offsets from the current file position, and fpos_t used for everything 
 else (which is assumably available in std.c.stdio).

1. Ironically the issue is not in file offset's signedness. You still hit the bug with ulong offset.

How so? Subtracting a size_t from a ulong offset will only cause problems if the size_t value is larger than the offset. If that's the case, then the issue remains even with a signed offset.
 2. Signed offset is two times safer than unsigned as you can detect
 underflow bug (and, maybe, overflow).

The solution with unsigned values is to make sure that they won't underflow *before* performing the arithmetic - and that's really the proper solution anyways.
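That is, something along these lines (a sketch with hypothetical names):

void seekBack(ref ulong offset, size_t amount)
{
    // Check before subtracting; an unsigned subtraction can't signal the error itself.
    if (amount > offset)
        throw new Exception("attempt to seek before the start of the file");
    offset -= amount;
}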
 With unsigned offset you get exception if the filesystem doesn't
 support sparse files, so the linux will keep silence.

I'm not sure what this means. Can you explain?
 3. Signed offset is consistent/type-safe in the case of the seek function as
it doesn't arbitrarily mutate between signed and unsigned.

My point was about signed values being used to represent zero-based indices. Obviously there are applications for a signed offset *from the current position*. It's seeking to a signed offset *from the start of the file* that's unsafe.
 4. Choosing unsigned for file offset is not dictated by safety, but by
stupidity: "hey, I lose my bit!"

You referred to 32-bit systems, correct? I'm sure there are 32-bit systems out there that need to be able to access files larger than two gigabytes.
 I AM an optimization zealot, but unsigned offsets are plain dead
 freaking stupid.

It's not an optimization. Unsigned values logically correspond to disk and memory locations.
Jun 17 2010
parent reply Kagamin <spam here.lot> writes:
Justin Spahr-Summers Wrote:

 1. Ironically the issue is not in file offset's signedness. You still hit the
bug with ulong offset.

How so? Subtracting a size_t from a ulong offset will only cause problems if the size_t value is larger than the offset. If that's the case, then the issue remains even with a signed offset.

Maybe you didn't see the testcase.

ulong a;
ubyte[] b;
a += -b.length; // go a little backwards

or

seek(-b.length, SEEK_CUR, file);
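To spell out the arithmetic (a sketch using uint to stand in for a 32-bit size_t, so it shows the 32-bit behaviour on any host):

ulong a = 1000;
uint len = 10;               // plays the role of b.length on a 32-bit target
a += -len;                   // -len is uint: 2^32 - 10 == 4_294_967_286
assert(a == 4_294_968_286);  // jumped ~4 GiB forward instead of 10 bytes back
// With a 64-bit size_t, -len wraps modulo 2^64 instead, and a ends up at 990.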
 2. Signed offset is two times safer than unsigned as you can detect
 underflow bug (and, maybe, overflow).

The solution with unsigned values is to make sure that they won't underflow *before* performing the arithmetic - and that's really the proper solution anyways.

If you rely on client code to be correct, you get a security issue. And the client doesn't necessarily use your language or your compiler. Or he can turn off overflow checks for performance. Or he can use the same unsigned variable for both signed and unsigned offsets, so checks for underflow become useless.
 With unsigned offset you get exception if the filesystem doesn't
 support sparse files, so the linux will keep silence.

I'm not sure what this means. Can you explain?

This means that you have a subtle bug.
 3. Signed offset is consistent/type-safe in the case of the seek function as
it doesn't arbitrarily mutate between signed and unsigned.

My point was about signed values being used to represent zero-based indices. Obviously there are applications for a signed offset *from the current position*. It's seeking to a signed offset *from the start of the file* that's unsafe.

To catch this in the case of a signed offset you need only one check. In the case of unsigned offsets you have to watch for underflows in the entire application code even if it's not related to file seeks - just in order to fix an issue that can be fixed separately.
 4. Choosing unsigned for file offset is not dictated by safety, but by
stupidity: "hey, I lose my bit!"

You referred to 32-bit systems, correct? I'm sure there are 32-bit systems out there that need to be able to access files larger than two gigabytes.

I'm talking about 64-bit file offsets, which are 64-bit on 32-bit systems too. As to file size limitations, there's no difference between signed and unsigned lengths. File sizes have no tendency to stick to the 4 gig value. If you need to handle files larger than 2 gigs, you also need to handle files larger than 4 gigs.
 I AM an optimization zealot, but unsigned offsets are plain dead
 freaking stupid.

It's not an optimization. Unsigned values logically correspond to disk and memory locations.

They don't. Memory locations are a *subset* of the size_t value range. That's why you have bounds checks. And the problem is the usage of these locations: the memory bus doesn't perform computations on the addresses, the application does - it adds, subtracts, mixes signed with unsigned, has various type system holes or kludges, library design issues, good practices used or ignored, etc. In other words, it gets a little more complex than just locations.
Jun 17 2010
parent Justin Spahr-Summers <Justin.SpahrSummers gmail.com> writes:
On Thu, 17 Jun 2010 06:41:33 -0400, Kagamin <spam here.lot> wrote:
 
 Justin Spahr-Summers Wrote:
 
 1. Ironically the issue is not in file offset's signedness. You still hit the
bug with ulong offset.

How so? Subtracting a size_t from a ulong offset will only cause problems if the size_t value is larger than the offset. If that's the case, then the issue remains even with a signed offset.

May be, you didn't see the testcase. ulong a; ubyte[] b; a+=-b.length; // go a little backwards

I did see that, but that's erroneous code. Maybe the compiler could warn about unary minus on an unsigned type, but I find such problems rare as long as everyone working on the code understands signedness.
 or
 
 seek(-b.length, SEEK_CUR, file);

I wouldn't call it a failure of unsigned types that this causes problems. Like I suggested above, the situation could possibly be alleviated if the compiler just warned about unary minus no-ops. Like a couple others pointed out, this is just a lack of understanding of unsigned types and modular arithmetic. I'd say that any programmer should have such an understanding, regardless of whether their programming language of choice supports unsigned types or not.
 2. Signed offset is two times safer than unsigned as you can detect
 underflow bug (and, maybe, overflow).

The solution with unsigned values is to make sure that they won't underflow *before* performing the arithmetic - and that's really the proper solution anyways.

If you rely on client code to be correct, you get security issue. And client doesn't necessarily use your language or your compiler. Or he can turn off overflow checks for performance. Or he can use the same unsigned variable for both signed and unsigned offsets, so checks for underflow become useless.

What kind of client are we talking about? If you're referring to contract programming, then it's the client's own fault if they fiddle around with the code and end up breaking it or violating its conventions.
 With unsigned offset you get exception if the filesystem doesn't
 support sparse files, so the linux will keep silence.

I'm not sure what this means. Can you explain?

This means that you have subtle bug.
 3. Signed offset is consistent/type-safe in the case of the seek function as
it doesn't arbitrarily mutate between signed and unsigned.

My point was about signed values being used to represent zero-based indices. Obviously there are applications for a signed offset *from the current position*. It's seeking to a signed offset *from the start of the file* that's unsafe.

To catch this is the case of signed offset you need only one check. In the case of unsigned offsets you have to watch underflows in the entire application code even if it's not related to file seeks - just in order to fix issue that can be fixed separately.

Signed offsets can (truly) underflow as well. I don't see how the issue is any different.
 
 4. Choosing unsigned for file offset is not dictated by safety, but by
stupidity: "hey, I lose my bit!"

You referred to 32-bit systems, correct? I'm sure there are 32-bit systems out there that need to be able to access files larger than two gigabytes.

I'm talking about 64-bit file offsets which are 64-bit on 32-bit systems too.

In D's provided interface, this is true, but fseek() from C uses C's long data type, which is *not* 64-bit on 32-bit systems, and this is (I assume) what std.stdio uses under-the-hood, making it doubly unsafe.
 As to file size limitations there's no difference between signed and
 unsigned lenghts. File sizes have no tendency stick to 4 gig value. If
 you need to handle files larger that 2 gigs, you also need to handle
 files larger than 4 gigs.

Of course. But why restrict oneself to half the available space unnecessarily?
 I AM an optimization zealot, but unsigned offsets are plain dead
 freaking stupid.

It's not an optimization. Unsigned values logically correspond to disk and memory locations.

They don't. Memory locations are a *subset* of size_t values range. That's why you have bound checks. And the problem is usage of these locations: memory bus doesn't perform computations on the addresses, application does - it adds, subtracts, mixes signeds with unsigneds, has various type system holes or kludges, library design issues, used good practices etc. In other words, it gets a little bit complex than just locations.

Bounds checking does alleviate the issue somewhat, I'll grant you that. But as far as address computation goes, even if your application does none, the operating system still will, in order to map logical addresses, which start at 0, to physical addresses, which also start at 0. And the memory bus absolutely requires unsigned values even if it needs to perform no computation itself.
Jun 17 2010
prev sibling next sibling parent Adam Ruppe <destructionator gmail.com> writes:
On 6/15/10, Walter Bright <newshound1 digitalmars.com> wrote:
 4.3 Labels as Values: that's computed gotos, they can be useful if you
 write
 an interpreter or you implement some kind of state machine.

They are useful in some circumstances, but are hardly necessary.

Can't you accomplish the same thing with some minor sprinkling of inline assembly anyway?
Jun 15 2010
prev sibling parent "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Thu, 17 Jun 2010 15:24:52 -0400, Don <nospam nospam.com> wrote:

 A question I have though is, Java has >>>. Does Java have these problems  
 too?

Java doesn't have unsigned values, so it's necessary to use regular int's as bitmasks, hence the extra operator. -Steve
Jun 17 2010
prev sibling parent reply Kagamin <spam here.lot> writes:
bearophile Wrote:

 and using signed words (I think C# uses ints for that purpose)
 

Signed ints FTW!!!
Jun 14 2010
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Kagamin wrote:
 bearophile Wrote:
 
 and using signed words (I think C# uses ints for that purpose)

Signed ints FTW!!!

CLS = ? Andrei
Jun 14 2010
next sibling parent reply bearophile <bearophileHUGS lycos.com> writes:
Andrei Alexandrescu:
 Signed ints FTW!!!

CLS = ?

I think he means "Common Language Specification": http://msdn.microsoft.com/en-us/library/12a7a7h3.aspx Bye, bearophile
Jun 15 2010
parent Kagamin <spam here.lot> writes:
bearophile Wrote:

 Andrei Alexandrescu:
 Signed ints FTW!!!

CLS = ?

I think he means "Common Language Specification": http://msdn.microsoft.com/en-us/library/12a7a7h3.aspx

Jun 15 2010
prev sibling parent Justin Johansson <no spam.com> writes:
Andrei Alexandrescu wrote:
 Kagamin wrote:
 bearophile Wrote:

 and using signed words (I think C# uses ints for that purpose)

Signed ints FTW!!!

CLS = ? Andrei

"Clear Screen" Justin
Jun 15 2010