
digitalmars.D - [OT] Sharp Regrets: Top 10 Worst C# Features

reply "Adam D. Ruppe" <destructionator gmail.com> writes:
I just saw this link come by my desktop and I thought it was an 
interesting read because D does a lot of these things too, and 
avoids some of them:

http://www.informit.com/articles/article.aspx?p=2425867

I don't agree they are all mistakes, but it is a pretty quick and 
interesting read.
Aug 18 2015
next sibling parent reply "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Wed, Aug 19, 2015 at 01:12:33AM +0000, Adam D. Ruppe via Digitalmars-d wrote:
 I just saw this link come by my desktop and I thought it was an
 interesting read because D does a lot of these things too, and avoids
 some of them:
 
 http://www.informit.com/articles/article.aspx?p=2425867
 
 I don't agree they are all mistakes, but it is a pretty quick and
 interesting read.
Interesting; I never really thought about it before.

It's a bad idea to make <, <=, ==, >, >= individually overloadable (ahem, C++), 'cos it's a lot of redundant typing (lots of room for typos and bugs) and most combinations don't make sense anyway. D did the right thing by consolidating <, <=, >, >= into opCmp. However, D still differentiates between opCmp and opEquals, and if those two are inconsistent, strange things will happen. Andrei's argument is that we want to support general partial orders, not just linear orders, but IMO this falls into the "too much flexibility for only marginal benefit" trap. I mean, when was the last time you badly needed a partial order to be expressed by *built-in* comparison operators, as opposed to dedicated member functions? When people see <, <=, >, >= in your code, they generally expect the usual linear order of numerical types, not something else. This causes confusion and suffers from the same problems as using + for string concatenation.

Then there's the still unresolved controversy surrounding the behaviour of >> vs. >>> (sorry, forgot the bug number, but it's in bugzilla). IMO, we should ditch these operators and use int intrinsics for the assembly instructions instead. What's the use of built-in operators that are only occasionally used in system code? Something like 1.shiftLeft(2) would work just fine in expressions, and simplify the lexer by having fewer token types. I'm not sure about making a separate type for ints-as-bits, though. That seems a bit extreme, and would almost imply that non-bitarray numbers would have to be BigInt by default.

The remark about breaking code NOW, rather than having to live with a flaw for the foreseeable future of the language, surely applies to D. Sure, nobody likes their code broken, but if it means breaking a small number of projects today for a much better language in the future, vs. not breaking anything today and having to live with the flaw in a huge number of projects later, I think breaking code is more worth it.

In D, the current schizophrenic split between attributes on the left vs. attributes on the right is another example of needlessly convoluted syntax. It has been brought up before, but it was decided it wasn't worth changing. Still, it represents a hole in the language that could have been done better.

As for ++/--: I don't see what's the big deal of having them.

With structs in D you tread a veritable minefield of surprising behaviour, counterintuitive semantics, unexpected GC interactions, and compiler bugs. Especially when you start sticking dtors on structs, which are supposed to be freely-copyable int-like values, which breaks a lot of assumptions in generic template code and just causes general nuisance.

D avoids this thanks to classes being reference types by design.


T

-- 
Why are you blatanly misspelling "blatant"? -- Branden Robinson
Aug 18 2015
next sibling parent "jmh530" <john.michael.hall gmail.com> writes:
On Wednesday, 19 August 2015 at 02:08:08 UTC, H. S. Teoh wrote:

 the still unresolved controversy surrounding the behaviour of >>
 vs. >>> (sorry, forgot the bug number, but it's in bugzilla). IMO,
 we should ditch these operators and use int intrinsics for the
 assembly instructions instead. What's the use of built-in operators
 that are only occasionally used in system code? Something like
 1.shiftLeft(2) would work just fine in expressions, and simplify
 the lexer by having fewer token types. I'm not sure about making a
 separate type for ints-as-bits, though. That seems a bit extreme,
 and would almost imply that non-bitarray numbers would have to be
 BigInt by default.
I like this point. I rarely see those operators, but it's usually in some kind of highly optimized code that is impossible to follow (maybe the compiler uses these more?). Also, every time I see a >> I always think "much bigger than", which of course doesn't make any sense. 1.shiftLeft(2) is actually much more intuitive to me.
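For illustration, such named shifts can already be sketched with UFCS today; shiftLeft/shiftRight below are made-up helper names, not anything in Phobos:

import std.stdio;

// Hypothetical named shifts via UFCS -- purely illustrative.
T shiftLeft(T)(T value, uint amount)  { return cast(T)(value << amount); }
T shiftRight(T)(T value, uint amount) { return cast(T)(value >> amount); }

void main()
{
    writeln(1.shiftLeft(2));    // 4, same as 1 << 2
    writeln(255.shiftRight(4)); // 15, same as 255 >> 4
}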

 D, the current schizophrenic split between attributes on the 
 left vs. attributes on the right is another example of 
 needlessly convoluted syntax.
I agree with both points. Nevertheless, it does seem like a lot of newer languages have adopted this style. Rust and Go both have the type follow the name of the variable. I could add Julia and Python's mypy to that as well. I had downloaded some introduction to homotopy type theory, which I did not understand more than a few pages of, and it also used the name : type approach, so I imagine it is being used more broadly in mathematical type theories. Anyway, I can't imagine this would ever be adopted for D. It conflicts with too much stuff, most notably inheritance and template specialization, at this point.
Aug 18 2015
prev sibling next sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 8/18/2015 7:04 PM, H. S. Teoh via Digitalmars-d wrote:

 thought about it before.
D doesn't allow while(e);{s}
Aug 18 2015
prev sibling next sibling parent reply "Jonathan M Davis" <jmdavisProg gmx.com> writes:
On Wednesday, 19 August 2015 at 02:08:08 UTC, H. S. Teoh wrote:

 never really
 thought about it before.
I think that D pretty much solved this by enforcing {} rather than ; for empty if statement and loop bodies. So it's not particularly error-prone in D: when an empty body is intended, {} is required, and AFAIK debuggers won't treat {} as a statement, unlike ;, so I don't think that you can set a breakpoint on {}, whereas you can on ;. So, I really think that D's approach is the right one.
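For illustration, a minimal example of the rule (the exact wording of the compiler error may differ between DMD versions):

import std.stdio;

void main()
{
    int[] data = [1, 2, 3];
    size_t i;

    // Rejected by DMD -- a bare ';' cannot be the body of a loop or if;
    // the compiler asks for '{ }' instead:
    //     while (i < data.length && data[i++] != 2);

    // An intentionally empty body has to be written out explicitly:
    while (i < data.length && data[i++] != 2) { }
    writeln("stopped with i = ", i);   // 2
}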

 bad idea to make <, <=, ==, >, >= individually overloadable 
 (ahem, C++), 'cos it's a lot of redundant typing (lots of room 
 for typos and bugs) and most combinations don't make sense 
 anyway. D did the right thing by consolidating <, <=, >, >= 
 into opCmp. However, D still differentiates between opCmp and 
 opEquals, and if those two are inconsistent, strange things 
 will happen. Andrei's argument is that we want to support 
 general partial orders, not just linear orders, but IMO this 
 falls in the "too much flexibility for only marginal benefit" 
 trap. I mean, when was the last time you badly needed a partial 
 order to be expressed by *built-in* comparison operators, as 
 opposed to dedicated member functions?  When people see <, <=,
 >, >= in your code, they generally expect the usual linear
 order of numerical types, not something else. This causes confusion
 and suffers from the same problems as using + for string
 concatenation.
The main reason to have opEquals and opCmp separate is because there are many types for which equality makes sense but ordering doesn't. The other is that opEquals is almost certainly going to implement == more efficiently than opCmp could. And given how common the == and != operations are, I wouldn't want to see opCmp used in place of opEquals when opCmp is defined.

It's not that hard to get the comparison operators right, especially when you only have two functions to worry about. Yes, you potentially still have a chance of having a consistency problem, but I think that what D has strikes a good balance.

But I do think that that whole deal about partial ordering is just bizarre. If == is true, <= and >= should always be true and vice versa. Similarly, if any of them are false, all three of them should be false. If you want to do something else, then don't use the built-in operators.
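For reference, a minimal sketch of the opEquals/opCmp pair on a value type (SemVer is just an illustrative name; the toHash is there because a custom opEquals usually wants a matching hash for AA keys):

struct SemVer
{
    int major, minor;

    // == and != both route through this one function.
    bool opEquals(const SemVer rhs) const
    {
        return major == rhs.major && minor == rhs.minor;
    }

    // <, <=, > and >= all route through this one function.
    int opCmp(const SemVer rhs) const
    {
        if (major != rhs.major) return major < rhs.major ? -1 : 1;
        if (minor != rhs.minor) return minor < rhs.minor ? -1 : 1;
        return 0;
    }

    size_t toHash() const nothrow @safe
    {
        return cast(size_t)(major * 31 + minor);
    }
}

unittest
{
    assert(SemVer(1, 2) < SemVer(1, 10));
    assert(SemVer(2, 0) == SemVer(2, 0));
    assert(SemVer(2, 0) >= SemVer(1, 9));
}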

 the still unresolved controversy surrounding the behaviour of >>
 vs. >>> (sorry, forgot the bug number, but it's in bugzilla). IMO,
 we should ditch these operators and use int intrinsics for the
 assembly instructions instead. What's the use of built-in operators
 that are only occasionally used in system code? Something like
 1.shiftLeft(2) would work just fine in expressions, and simplify
 the lexer by having fewer token types.
I don't see much point in not having << and >> and having a named function instead. I don't see how it would gain anything at all, particularly given that << and >> are already well-known, and everyone who was looking to bitshift would just get annoyed and confused if they weren't there.


- Jonathan M Davis
Aug 19 2015
next sibling parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 08/19/2015 09:40 AM, Jonathan M Davis wrote:
 But I do think that that whole deal about partial ordering is just
 bizarre. If == is true, <= and >= should always be true and vice versa.
 Similarly, if any of them are false, all three of them should be false.
 If you want to do something else, then don't use the built-in operators.
Like the built-in types do? :o)
Aug 19 2015
next sibling parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 08/19/2015 02:01 PM, Timon Gehr wrote:
 On 08/19/2015 09:40 AM, Jonathan M Davis wrote:
If == is true, <= and >= should always be true and vice versa.
 Similarly, if any of them are false, all three of them should be false.
Missed this. No, that is not how it should work, but I guess it's clear what you meant to say.
Aug 19 2015
parent "Jonathan M Davis" <jmdavisProg gmx.com> writes:
On Wednesday, 19 August 2015 at 12:03:08 UTC, Timon Gehr wrote:
 On 08/19/2015 02:01 PM, Timon Gehr wrote:
 On 08/19/2015 09:40 AM, Jonathan M Davis wrote:
If == is true, <= and >= should always be true and vice versa.
 Similarly, if any of them are false, all three of them should 
 be false.
Missed this. No, that is not how it should work, but I guess it's clear what you meant to say.
Yeah. Rereading that. I didn't get it quite right. Posting when tired can be dangerous... - Jonathan M Davis
Aug 19 2015
prev sibling parent reply "Jonathan M Davis" <jmdavisProg gmx.com> writes:
On Wednesday, 19 August 2015 at 12:01:35 UTC, Timon Gehr wrote:
 On 08/19/2015 09:40 AM, Jonathan M Davis wrote:
 But I do think that that whole deal about partial ordering is 
 just
 bizarre. If == is true, <= and >= should always be true and 
 vice versa.
 Similarly, if any of them are false, all three of them should 
 be false.
 If you want to do something else, then don't use the built-in 
 operators.
Like the built-in types do? :o)
Yeah. I guess that the floating point stuff doesn't quite work that way thanks to NaN. *sigh* I hate floating point numbers. Sometimes, you have no choice other than using them, but man are they annoying. - Jonathan M Davis
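Concretely, this is the case where the built-in floating point types already break that rule:

import std.math : isNaN;
import std.stdio;

void main()
{
    double n = double.nan;

    // Every ordering comparison involving NaN is false at the same time,
    // so "== true implies <= and >= true" cannot hold for doubles.
    writeln(n == n);   // false
    writeln(n <= n);   // false
    writeln(n >= n);   // false
    writeln(n != n);   // true
    writeln(isNaN(n)); // true -- the reliable test
}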
Aug 19 2015
parent reply "renoX" <renozyx gmail.com> writes:
On Wednesday, 19 August 2015 at 14:01:34 UTC, Jonathan M Davis 
wrote:
 Yeah. I guess that the floating point stuff doesn't quite work 
 that way thanks to NaN. *sigh* I hate floating point numbers. 
 Sometimes, you have no choice other than using them, but man 
 are they annoying.

 - Jonathan M Davis
No, IMHO it's not really the fault of floating point numbers, it's the languages' fault: the floating point standard contains the 'signaling NaN', and if the languages used it by default then many of the silent-NaN issues would never happen. Silent NaNs are an optimisation which is quite useful in some cases, but unfortunately the use of silent NaN by default in many languages makes it a premature optimisation pushed by the language designers onto the poor unsuspecting programmers :-(

renoX
Aug 20 2015
next sibling parent reply "Jonathan M Davis" <jmdavisProg gmx.com> writes:
On Thursday, 20 August 2015 at 14:52:53 UTC, renoX wrote:
 On Wednesday, 19 August 2015 at 14:01:34 UTC, Jonathan M Davis 
 wrote:
 Yeah. I guess that the floating point stuff doesn't quite work 
 that way thanks to NaN. *sigh* I hate floating point numbers. 
 Sometimes, you have no choice other than using them, but man 
 are they annoying.

 - Jonathan M Davis
 No, IMHO it's not really the fault of floating point numbers, it's the
 languages' fault: the floating point standard contains the 'signaling NaN',
 and if the languages used it by default then many of the silent-NaN issues
 would never happen. Silent NaNs are an optimisation which is quite useful
 in some cases, but unfortunately the use of silent NaN by default in many
 languages makes it a premature optimisation pushed by the language
 designers onto the poor unsuspecting programmers :-(
I really don't mind NaN. It really doesn't cause problems normally. The problem with floating point values is floating point values themselves. They're so painfully inexact. Even without NaN, you can't use == with them and expect it to work. Compared to that, how NaN is dealt with is a total non-issue. Floating points themselves just plain suck. They're sometimes necessary, but they suck. - Jonathan M Davis
Aug 20 2015
next sibling parent reply "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Thu, Aug 20, 2015 at 04:22:20PM +0000, Jonathan M Davis via Digitalmars-d
wrote:
[...]
 I really don't mind NaN. It really doesn't cause problems normally.
 The problem with floating point values is floating point values
 themselves.  They're so painfully inexact. Even without NaN, you can't
 use == with them and expect it to work. Compared to that, how NaN is
 dealt with is a total non-issue. Floating points themselves just plain
 suck. They're sometimes necessary, but they suck.
[...] But how would you work around the inherent inexactness? In spite of all its warts, IEEE floating point is at least a usable compromise between not having any representation for reals at all, and having exact reals that are impractically slow in real-world applications. T -- I am a consultant. My job is to make your job redundant. -- Mr Tom
Aug 20 2015
parent reply "Jonathan M Davis" <jmdavisProg gmx.com> writes:
On Thursday, 20 August 2015 at 16:44:44 UTC, H. S. Teoh wrote:
 On Thu, Aug 20, 2015 at 04:22:20PM +0000, Jonathan M Davis via 
 Digitalmars-d wrote: [...]
 I really don't mind NaN. It really doesn't cause problems 
 normally. The problem with floating point values is floating 
 point values themselves.  They're so painfully inexact. Even 
 without NaN, you can't use == with them and expect it to work. 
 Compared to that, how NaN is dealt with is a total non-issue. 
 Floating points themselves just plain suck. They're sometimes 
 necessary, but they suck.
 [...] But how would you work around the inherent inexactness? In spite of
 all its warts, IEEE floating point is at least a usable compromise between
 not having any representation for reals at all, and having exact reals that
 are impractically slow in real-world applications.
I don't know that there _is_ a good solution, and IEEE floating point may realistically be as good as it gets given the various pros and cons, but they're still annoying and IMHO really shouldn't be used unless you actually need them. - Jonathan M Davis
Aug 20 2015
parent "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Thu, Aug 20, 2015 at 04:56:15PM +0000, Jonathan M Davis via Digitalmars-d
wrote:
 On Thursday, 20 August 2015 at 16:44:44 UTC, H. S. Teoh wrote:
On Thu, Aug 20, 2015 at 04:22:20PM +0000, Jonathan M Davis via
Digitalmars-d wrote: [...]
I really don't mind NaN. It really doesn't cause problems normally.
The problem with floating point values is floating point values
themselves.  They're so painfully inexact. Even without NaN, you
can't use == with them and expect it to work. Compared to that, how
NaN is dealt with is a total non-issue. Floating points themselves
just plain suck. They're sometimes necessary, but they suck.
[...] But how would you work around the inherent inexactness? In spite of all its warts, IEEE floating point is at least a usable compromise between not having any representation for reals at all, and having exact reals that are impractically slow in real-world applications.
I don't know that there _is_ a good solution, and IEEE floating point may realistically be as good as it gets given the various pros and cons, but they're still annoying and IMHO really shouldn't be used unless you actually need them.
[...] Well, as with any computer representation of a numerical type, one ought to understand what exactly the representation is capable of, and not expect more than that. :-) It sounds obvious, but how many programs, for example, check for integer overflow when performing complex integer arithmetic? It just so happens that most applications only work with relatively small magnitudes, so integer overflows are unlikely, but they do still happen (sometimes opening up security holes), just as using floating-point numbers without understanding their limitations will sometimes yield surprising results. It's just that with integers, the problematic areas are generally rarely encountered, whereas with floating point you bump into them quite often (e.g., 1/3 + 1/3 + 1/3 may not exactly equal 1). (Though with *unsigned* integers, you do often see buggy code when values near 0 are used, or when subtraction is involved.) At the root of it is the fallacious assumption that int == mathematical integer or float == mathematical real. It may appear to work at first, but eventually it will cause something to go wrong somewhere. T -- Food and laptops don't mix.
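A couple of small D examples of those problematic areas:

import std.math : approxEqual;
import std.stdio;

void main()
{
    // float != mathematical real:
    double third = 1.0 / 3.0;
    writeln(third + third + third == 1.0);  // not guaranteed to be true
    double a = 0.1, b = 0.2;
    writeln(a + b == 0.3);                  // false -- neither side is exact in binary
    writeln(approxEqual(a + b, 0.3));       // true -- compare with a tolerance instead

    // int != mathematical integer: unsigned values near 0 wrap around.
    uint x = 2, y = 3;
    writeln(x - y);      // 4294967295, not -1
    writeln(x - y < x);  // false, which surprises code written with signed math in mind
}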
Aug 20 2015
prev sibling parent "renoX" <renozyx gmail.com> writes:
On Thursday, 20 August 2015 at 16:22:22 UTC, Jonathan M Davis 
wrote:
 I really don't mind NaN.
Well, with silent NaN you have 'x == x' being false, which means all the generic algorithms (silently) fail.
 It really doesn't cause problems normally. The problem with 
 floating point values is floating  > point values themselves. 
 They're so painfully inexact. Even without NaN, you can't use 
 == with them and expect it to work. Compared to that, how NaN 
 is dealt with is a total non-issue. Floating points themselves 
 just plain suck. They're sometimes necessary, but they suck.

 - Jonathan M Davis
I think that at Sun some pushed for interval arithmetic support, but 1) Oracle has bought Sun, and 2) ranges have their own set of problems and they are more expensive to compute. I'm still a bit sad that modern CPUs seem to spend their humongous amount of transistors on dubious features, yet they don't try to support important basic things such as 'trap on overflow' integer computations or interval arithmetic.
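As a library-level stopgap for the 'trap on overflow' part, D's runtime does ship overflow-reporting primitives in core.checkedint (no hardware trap, just a flag you have to check yourself); a minimal sketch, assuming a druntime recent enough to have that module:

import core.checkedint : adds, muls;
import std.stdio;

void main()
{
    bool overflow;

    int a = adds(int.max, 1, overflow);  // wraps, but raises the flag
    writeln(a, "  overflow=", overflow); // overflow=true

    overflow = false;
    int b = muls(1000, 1000, overflow);  // fits comfortably
    writeln(b, "  overflow=", overflow); // 1000000  overflow=false
}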
Aug 24 2015
prev sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 8/20/2015 7:52 AM, renoX wrote:
 No, IMHO it's not really the fault of floating point numbers, it's the
 languages' fault: the floating point standard contains the 'signaling NaN',
It has nothing to do with signalling nan, it has to do with nan.
Aug 20 2015
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 8/19/2015 12:40 AM, Jonathan M Davis wrote:

As I recall, I posted a survey of syntax from maybe a dozen languages, and the community picked the one they liked the best.
Aug 20 2015
parent Jacob Carlborg <doob me.com> writes:
On 2015-08-20 23:06, Walter Bright wrote:

 As I recall, I posted a survey of syntax from maybe a dozen languages,
 and the community picked the one they liked the best.
Yeah, I remember that for the lambda syntax. Not sure about when the delegate syntax was introduced. That was present when I started with D1. -- /Jacob Carlborg
Aug 20 2015
prev sibling next sibling parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Wednesday, 19 August 2015 at 02:08:08 UTC, H. S. Teoh wrote:

 bad idea to make <, <=, ==, >, >= individually overloadable 
 (ahem, C++), 'cos it's a lot of redundant typing (lots of room 
 for typos and bugs) and most combinations don't make sense 
 anyway. D did the right thing by consolidating <, <=, >, >= 
 into opCmp.
I see your point, but it isn't so clear cut. When you are doing high level APIs, like an ORM, you might want to enforce having a field on the left and a number on the right and return a query-building type, so you can type "db.query(ClassA.birth < somedate)". What would be better is to have defaults.

 remark about breaking code NOW rather than having to live with 
 a flaw for the foreseeable future of the language, surely 
 applies to D. Sure, nobody likes their code broken, but if it 
 means breaking a small number of projects today for a much 
 better language in the future, vs. not breaking anything today 
 and having to live with a huge number of projects later, I 
 think breaking code is more worth it.
Yes, if you do it in a planned manner, provide an upgrade tool, document changes in a "tutorial like" way and version it. E.g. D3, D4, D5…

 reference types by design.
I think that is the wrong argument, because surely D lifted the struct/class distinction from C#. It would be better to have a reference assignment operator like Simula's. In C everything is either a pointer or a value, but there is still visual confusion. In D it gets worse by having a proliferation of builtin types without enough visual cues. E.g.

a = b; // what does this do in C and D? Values, pointers, arrays…?

More clear:

a = b   // constant definition
a := b  // value assignment (change fields)
a :- b  // reference assignment (change pointer)

If you want references, then it makes sense to have everything be a reference conceptually (in terms of syntactical consistency, not semantics), but some are not value assignable, some are not reference assignable and some are both. In general I never want to confuse value assignment with reference assignment, so they should be visually distinct.
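For comparison, a minimal D sketch of how many different things '=' already means today, depending on the type:

import std.stdio;

struct SPoint { int x; }
class  CPoint { int x; this(int x) { this.x = x; } }

void main()
{
    // Structs: '=' copies the value.
    SPoint s1 = SPoint(1);
    SPoint s2 = s1;
    s2.x = 99;
    writeln(s1.x);   // 1 -- s1 untouched

    // Classes: '=' copies the reference; both names see the same object.
    CPoint c1 = new CPoint(1);
    CPoint c2 = c1;
    c2.x = 99;
    writeln(c1.x);   // 99

    // Dynamic arrays: '=' copies the slice (pointer + length),
    // so the elements are shared until one side reallocates.
    int[] a1 = [1, 2, 3];
    int[] a2 = a1;
    a2[0] = 99;
    writeln(a1[0]);  // 99
}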
Aug 19 2015
parent Walter Bright <newshound2 digitalmars.com> writes:
On 8/19/2015 1:47 AM, Ola Fosheim Grøstad
<ola.fosheim.grostad+dlang gmail.com> wrote:
 I see your point, but it isn't so clear cut. When you are doing a high level
 APIs, like an ORM you might want to enforce having a field on the left and a
 number on the right and return a query building type. So you can type
 "db.query(ClassA.birth < somedate)"
That's exactly the kind of thing D's operator overloading is designed to discourage.

 design.
 I think that is the wrong argument, because surely D lifted the struct/class
 distinction from C#.
Surely you're wrong. I originally proposed that to Bjarne back in the 1980's.
It's much older than that.
Aug 20 2015
prev sibling parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 08/19/2015 04:04 AM, H. S. Teoh via Digitalmars-d wrote:
 On Wed, Aug 19, 2015 at 01:12:33AM +0000, Adam D. Ruppe via Digitalmars-d
wrote:
 I just saw this link come by my desktop and I thought it was an
 interesting read because D does a lot of these things too, and avoids
 some of them:

 http://www.informit.com/articles/article.aspx?p=2425867

 I don't agree they are all mistakes, but it is a pretty quick and
 interesting read.
 never really thought about it before.

 It's a bad idea to make <, <=, ==, >, >= individually overloadable (ahem,
 C++), 'cos it's a lot of redundant typing (lots of room for typos and bugs)
 and most combinations don't make sense anyway. D did the right thing by
 consolidating <, <=, >, >= into opCmp. However, D still differentiates
 between opCmp and opEquals, and if those two are inconsistent, strange
 things will happen. Andrei's argument is that we want to support general
 partial orders, not just linear orders, but IMO this falls into the "too
 much flexibility for only marginal benefit" trap.
What benefit can having separate opCmp and opEquals possibly have for defining partial orders? (opCmp can do it on its own). Hasn't the idea that x.opCmp(y)==0 should mean that x and y are not ordered been debunked?
 I mean, when was the
 last time you badly needed a partial order to be expressed by *built-in*
 comparison operators, as opposed to dedicated member functions?  When
 people see <, <=, >, >= in your code, they generally expect the usual
 linear order of numerical types, not something else. This causes
 confusion and suffers from the same problems as using + for string
 concatenation.
 ...
The built-in numerical types are not all ordered linearly.

 unresolved controversy surrounding the behaviour of >> vs. >>> (sorry,
 forgot the bug number, but it's in bugzilla).  IMO, we should ditch
 these operators and use int intrinsics for the assembly instructions
 instead. What's the use of built-in operators that are only occasionally
 used in system code?  Something like 1.shiftLeft(2) would work just fine
 in expressions, and simplify the lexer by having less token types.
 ...
(This is really not a simplification worth mentioning.)
 I'm not sure about making a separate type for ints-as-bits, though. That
 seems a bit extreme, and would almost imply that non-bitarray numbers
 would have to be BigInt by default.


 ...
 ...


 current schizophrenic split between attributes on the left vs.
 attributes on the right is another example of needlessly convoluted
 syntax.
 ...
Type on the right makes it more natural to leave off the type. Consider the confusion about what 'auto' actually means. It's even in the change log, and probably made it into the documentation in some places. E.g. "auto return type". 'auto' isn't a type.
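A small illustration of that point: in D, 'auto' is merely a storage class, and any storage class lets the type be inferred:

void main()
{
    auto a = 42;          // 'auto' is not a type; int is inferred
    const b = 3.14;       // any storage class triggers inference, no 'auto' needed
    immutable c = "hi";   // same here
    static assert(is(typeof(a) == int));
    static assert(is(typeof(b) == const(double)));
}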
 ...


 tread a veritable minefield of surprising behaviour, counterintuitive
 semantics, unexpected GC interactions, and compiler bugs. Especially
 when you start sticking dtors on structs, which are supposed to be
 freely-copyable int-like values, which breaks a lot of assumptions in
 generic template code and just cause general nuisance.


... anymore, it did in 2012 and in D1. (But there wasn't even a runtime check, it was just a lack of type safety. There are still a few cases of type system unsoundness even today.)
 thanks to classes being reference types
 by design.
Aug 19 2015
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 8/19/2015 5:00 AM, Timon Gehr wrote:

It's likely much, much older than that.
Aug 22 2015
parent "Jonathan M Davis" <jmdavisProg gmx.com> writes:
On Sunday, 23 August 2015 at 02:39:03 UTC, Walter Bright wrote:
 On 8/19/2015 5:00 AM, Timon Gehr wrote:

it's likely much, much older than that.
Without looking it up, I would have assumed that any language that's heavily OO like Smalltalk would have classes be reference types, simply because OO with value types really doesn't make sense. And C++ has issues due to the fact that it allows OO types to be used as value types (i.e. object slicing). So, I would expect that while C++ is what made OO big, it's probably the oddball language with regards to how it allows OO types to live on the stack instead of the heap. LOL, but of course, we'd have to actually do some research on older OO languages to be sure of that.

Still, the nature of OO screams reference types, so it's pretty natural for classes to be reference types. You just lose out on the nicety of being able to create a polymorphic type on the stack for the cases where you don't actually need to use it polymorphically, but at least in theory, std.typecons.scoped solves that problem. And for the most part, separating types that are polymorphic from those that aren't seems to be a big win. It forces people to actually think about polymorphism instead of just making everything polymorphic, so you tend to end up with cleaner, more efficient code. At least, from what I've seen with D, that's how it seems to be, and I'm very glad for its separation of structs and classes.

- Jonathan M Davis
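A minimal sketch of the std.typecons.scoped escape hatch mentioned above (the class and its members are made up for the example):

import std.typecons : scoped;

class Greeter
{
    string name;
    this(string name) { this.name = name; }
    string greet() { return "hello, " ~ name; }
}

void main()
{
    // scoped!T constructs the class instance in place in stack storage
    // instead of on the GC heap, and destroys it deterministically at the
    // end of the scope.
    auto g = scoped!Greeter("world");
    assert(g.greet() == "hello, world");
    // g must not escape this scope -- its storage dies with the stack frame.
}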
Aug 23 2015
prev sibling next sibling parent "deadalnix" <deadalnix gmail.com> writes:
On Wednesday, 19 August 2015 at 01:12:36 UTC, Adam D. Ruppe wrote:
 I just saw this link come by my desktop and I thought it was an 
 interesting read because D does a lot of these things too, and 
 avoids some of them:

 http://www.informit.com/articles/article.aspx?p=2425867

 I don't agree they are all mistakes, but it is a pretty quick 
 and interesting read.
Aug 18 2015
prev sibling next sibling parent Martin Drašar via Digitalmars-d writes:
Dne 19.8.2015 v 3:12 Adam D. Ruppe via Digitalmars-d napsal(a):
 I just saw this link come by my desktop and I thought it was an
 interesting read because D does a lot of these things too, and avoids
 some of them:

 http://www.informit.com/articles/article.aspx?p=2425867

 I don't agree they are all mistakes, but it is a pretty quick and
 interesting read.
I really liked this bit that reminded me of endless discussions in this mailing list:
 The moral is simple: You can't see the future, and you can't break
 backward compatibility once you get to the future. You make rational
 decisions that reach reasonable compromises, and you'll still get it
 wrong when requirements change unexpectedly. The hardest thing about
 designing a successful language is balancing simplicity, clarity,
 generality, flexibility, performance, and so on.
Aug 18 2015
prev sibling next sibling parent "Kagamin" <spam here.lot> writes:
On Wednesday, 19 August 2015 at 01:12:36 UTC, Adam D. Ruppe wrote:
 I just saw this link come by my desktop and I thought it was an 
 interesting read because D does a lot of these things too, and 
 avoids some of them:

 http://www.informit.com/articles/article.aspx?p=2425867

 I don't agree they are all mistakes, but it is a pretty quick 
 and interesting read.
10. Huh, never seen bugs caused by syntax. Maybe our codemonkeys are not braindead enough? But if you want to solve this for good, having no empty statement at all is the way; use a nop() function instead.

7. Delegate syntax allows you to skip argument declarations, which is quite handy when you don't need them.

6. Heh, didn't know about HasFlag.

5. If this was true, .net programmers would use vb.net and C programmers would use pascal.

2. Yes, this proves to be hard for C++ programmers to grok. A for loop is useful when you want to start from an arbitrary index.

From the comments:
Events are null, but you can add a subscriber to them. And you 
can't trigger a null one. Which leads to the same convoluted 
code around event trigger.
Similarly, there's no reason why "foreach" couldn't treat a null 
list as an empty one - saving me from having to worry about 
which methods return nulls list objects and which ones return 
empty ones...
Yep, nulls should be equivalent to empty collections.
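For what it's worth, D's dynamic arrays already behave that way -- a null slice is just an empty one:

import std.stdio;

void main()
{
    int[] xs = null;     // a null slice in D
    writeln(xs.length);  // 0
    foreach (x; xs)      // iterates zero times; no null check needed
        writeln(x);
    writeln("done");
}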
Aug 19 2015
prev sibling parent reply "Chris" <wendlec tcd.ie> writes:
On Wednesday, 19 August 2015 at 01:12:36 UTC, Adam D. Ruppe wrote:
 I just saw this link come by my desktop and I thought it was an 
 interesting read because D does a lot of these things too, and 
 avoids some of them:

 http://www.informit.com/articles/article.aspx?p=2425867

 I don't agree they are all mistakes, but it is a pretty quick 
 and interesting read.
I find the type-to-the-left syntax pretty handy, because in my part of the world we read from left to right, and thus I know immediately what a function returns (unless it says `auto` :-)) or what a variable is supposed to be, which is the most important bit of information for me. To be honest, variables in code are so common that they don't need to be "announced" with `var x : number`; I find it rather annoying.

Plus, this argument doesn't hold, imho: "From both programming and mathematics, we have the convention that the result of the computation is notated to the right, so it's weird that in C-like languages the type is on the left." A convention, that's right. But who said it's a good thing just because it's a convention? And see, here's the contradiction: "The lesson: When you're designing a new language, don't slavishly follow the bizarre conventions of predecessor languages." Well, maybe that's exactly what the designers of C did: they didn't slavishly follow the convention that the result of the computation is notated to the right. Maybe they thought, 'Uh, actually, wouldn't it be handier to see immediately what type it is?'. Has the argument that type-to-the-right is easier for beginners ever been proven?

As for `++`, I still think it's a very handy shorthand for the cumbersome `x = x + 1` or even `x += 1`. And no, it's not confusing, because it is well defined as incrementing the value by 1. In fact, I don't like Python's patronizing insistence on having to write `x = x + 1`.

And hey, it's just conventions. As long as the meaning is well defined, there's no problem. It's like spelling "colour" or "color", it doesn't really matter.
Aug 19 2015
next sibling parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Wednesday, 19 August 2015 at 10:09:33 UTC, Chris wrote:
 Well, maybe that's exactly what the designers of C did, they 
 didn't slavishly follow the convention that the result of the 
 computation is notated to the right. Maybe they thought, 'Uh, 
 actually, wouldn't it be handier to see immediately what type 
 it is?'.
Algol has "INTEGER A". Simula has "INTEGER A". The Simula successor BETA has "A : integer". C has "int a". The C successor Go has "a int".
 Has the argument that tpye-to-the-right is easier for beginners 
 has ever been proven?
It is much easier to read when you have longer types. Old languages tended to have not so long types (libraries and programs were smaller). If you support templates it is a pain to have types on the left. It also makes it much more natural/consistent when you use type deduction: just omit the type, no silly "auto".

still think it's a very handy shorthand for cumbersome `x = x + 1` or even `x += 1`. And no, it's not confusing, because it is well defined as incrementing the value by 1. In fact, I don't like Python's patronizing insistence in having to write `x = x + 1`.
Python supports "+=".
 defined, there's no problem. It's like spelling "colour" or 
 "color", it doesn't really matter.
Jeg skriver "farge"… ;-)
Aug 19 2015
next sibling parent reply "Kagamin" <spam here.lot> writes:
On Wednesday, 19 August 2015 at 11:42:54 UTC, Ola Fosheim Grøstad 
wrote:
 Has the argument that tpye-to-the-right is easier for 
 beginners has ever been proven?
It is much easier to read when you have longer types. Old languages tended to have not so long types (libraries and programs were smaller). If you support templates it is pain to have types on the left.
Just switch your editor to RTL mode, haha.
Aug 19 2015
next sibling parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Wednesday, 19 August 2015 at 12:01:41 UTC, Kagamin wrote:
 Just switch your editor to RTL mode, haha.
Indeed. Except the variable name is caught in the middle of the type and the assignment. I've started to carefully align my variable names at ~ column 40 in my C++ code:

type<tab><tab><tab><tab><tab>name = some.initializer.expression;
Aug 19 2015
parent "Kagamin" <spam here.lot> writes:
On Wednesday, 19 August 2015 at 12:32:32 UTC, Ola Fosheim Grøstad 
wrote:
 On Wednesday, 19 August 2015 at 12:01:41 UTC, Kagamin wrote:
 Just switch your editor to RTL mode, haha.
Indeed. Except the variable name is caught in the middle of the type and the assignment.
Well, if you have three things in line, something will end up in the middle, don't take go as a standard, they reinvented RTL, only worse.
Aug 19 2015
prev sibling parent Shachar Shemesh <shachar weka.io> writes:
On 19/08/15 15:01, Kagamin wrote:

 Just switch your editor to RTL mode, haha.
OT: (so this is an off topic reply to an off topic thread....) I actually tried to write a good RTL text editor (you can see the half baked result at http://bidiedit.lingnu.com). I know your comment was meant as a joke, but the truth is that mixing RTL text with code does not work well in any editor I have ever encountered. The BiDi engine sits too low in the stack to incorporate the syntax analysis required to do it properly (and, in some cases, it is not clear what "properly" even means). I usually end up using VIM, because it does no reordering at all (i.e. all text, regardless of language, is shown from left to right), which means that I can separate syntax from content.

Shachar
Aug 24 2015
prev sibling parent reply "Chris" <wendlec tcd.ie> writes:
On Wednesday, 19 August 2015 at 11:42:54 UTC, Ola Fosheim Grøstad 
wrote:
 On Wednesday, 19 August 2015 at 10:09:33 UTC, Chris wrote:
 Well, maybe that's exactly what the designers of C did, they 
 didn't slavishly follow the convention that the result of the 
 computation is notated to the right. Maybe they thought, 'Uh, 
 actually, wouldn't it be handier to see immediately what type 
 it is?'.
Algol has "INTEGER A". Simula has "INTEGER A". The Simula successor BETA has "A : integer". C has "int a". The C successor Go has "a int".
 Has the argument that tpye-to-the-right is easier for 
 beginners has ever been proven?
It is much easier to read when you have longer types. Old languages tended to have not so long types (libraries and programs were smaller). If you support templates it is pain to have types on the left. It also makes it much more natural/consistent when you use type deduction. Just omit the type, no silly "auto".

 I
still think it's a very handy shorthand for cumbersome `x = x + 1` or even `x += 1`. And no, it's not confusing, because it is well defined as incrementing the value by 1. In fact, I don't like Python's patronizing insistence in having to write `x = x + 1`.
Python supports "+=".
Yes, I forgot, it does. But why not `x++`? I never understood why. As if most people were too stoooopid to grasp the concept that `x++` is the same as `x += 1` (which is intellectually as 'challenging' as `x++`, by the way).
 defined, there's no problem. It's like spelling "colour" or 
 "color", it doesn't really matter.
Jeg skriver "farge"… ;-)
farge / farve / färg / Farbe - still the same thing ;)
Aug 19 2015
next sibling parent "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Wednesday, 19 August 2015 at 13:02:50 UTC, Chris wrote:
 Yes, I forgot, it does. But why not `x++`? I never understood 
 why. As if most people were too stoooopid to grasp the concept 
 that `x++` is the same as `x += 1` (which is intellectually as 
 'challenging' as `x++`, by the way).
I don't know, and I seldom miss it, but I guess one reason could be that you would have to define what "1" is for non-real types. x--/x++ should be more like pred(), succ() on enumerations? E.g.:

month = JAN;
month++; // FEB
 farge / farve / färg / Farbe - still the same thing ;)
:)
Aug 19 2015
prev sibling parent reply Tobias Müller <troplin bluewin.ch> writes:
"Chris" <wendlec tcd.ie> wrote:
  [...]
 As if most people were too stoooopid to grasp the concept that `x++` is
 the same as `x += 1` (which is intellectually as 'challenging' as `x++`, by
the way).
Because it's not. ++x is the same as x+=1, not x++. Tobi
Aug 21 2015
parent "Chris" <wendlec tcd.ie> writes:
On Friday, 21 August 2015 at 19:58:04 UTC, Tobias Müller wrote:
 "Chris" <wendlec tcd.ie> wrote:
  [...]
 As if most people were too stoooopid to grasp the concept that 
 `x++` is
 the same as `x += 1` (which is intellectually as 'challenging' 
 as `x++`, by the way).
Because it's not. ++x is the same as x+=1, not x++. Tobi
This distinction is only relevant if there are two variables involved (i.e. assignment): `y = x++;` and `y = ++x;` do in fact yield different results (the latter most likely being the desired one). If you work on the same variable, `x++;` is fine. Thus, I don't agree with Python's philosophy.
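A tiny example of the distinction, and of why it disappears when the expression stands on its own:

import std.stdio;

void main()
{
    int x = 5;
    int y = x++;          // post-increment: y gets the old value of x
    writeln(y, " ", x);   // 5 6

    x = 5;
    y = ++x;              // pre-increment: y gets the incremented value
    writeln(y, " ", x);   // 6 6

    // As standalone statements the distinction vanishes:
    x++;
    ++x;                  // both just add 1 to x
    writeln(x);           // 8
}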
Aug 24 2015
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 8/19/2015 3:09 AM, Chris wrote:

 I still think it's a very handy shorthand for cumbersome `x = x + 1` or
 even `x += 1`. And no, it's not confusing, because it is well defined as
 incrementing the value by 1. In fact, I don't like Python's patronizing
 insistence in having to write `x = x + 1`.
x=x+1 has further problems if x has side effects.
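A small illustration of that point, using a made-up ref-returning helper pick() to stand in for an lvalue with side effects:

import std.stdio;

int[] data = [10, 20, 30];
int calls;

// Contrived accessor with a visible side effect: it counts its own calls.
ref int pick() { ++calls; return data[0]; }

void main()
{
    pick() += 1;            // the lvalue is evaluated once
    writeln(data[0], " after ", calls, " call(s)");  // 11 after 1 call(s)

    pick() = pick() + 1;    // spelled out, pick() runs twice
    writeln(data[0], " after ", calls, " call(s)");  // 12 after 3 call(s)
}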
Aug 20 2015
parent reply "Chris" <wendlec tcd.ie> writes:
On Thursday, 20 August 2015 at 21:16:40 UTC, Walter Bright wrote:
 On 8/19/2015 3:09 AM, Chris wrote:

 I still think it's
 a very handy shorthand for cumbersome `x = x + 1` or even `x 
 += 1`. And no, it's
 not confusing, because it is well defined as incrementing the 
 value by 1. In
 fact, I don't like Python's patronizing insistence in having 
 to write `x = x + 1`.
x=x+1 has further problems if x has side effects.
The only time I had problems with `x++` was when I was trying to be smart and have it in a debug statement like `writefln("count = %d", cnt++);` or when accessing an array like `auto value = myArray[cnt++];`, which is clearly not the language's fault, it's rather my trying to be a super cool coding cowboy. 'Yeah Ma'am, that's how we do it in the Wild West!'. The whole article, imo, is like saying that when dealing with programming there are problems, difficulties and outright contradictions (like in maths or any other logical system the human mind has come up with), but language designers should make all these evil things go away! Imagine you had to attach warnings to a programming language, like the labels on microwaves "Don't put your pets in it [you stupid *****]!". Warning: `++` may increment a value by one!
Aug 21 2015
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 8/21/2015 3:59 AM, Chris wrote:
 The whole article, imo, is like saying that when dealing with programming there
 are problems, difficulties and outright contradictions (like in maths or any
 other logical system the human mind has come up with), but language designers
 should make all these evil things go away! Imagine you had to attach warnings
 to a programming language, like the labels on microwaves "Don't put your pets
 in it [you stupid *****]!".
We have conversations like that around here often. Some people want to hide what a CPU normally does. It's a fine sentiment, but a systems programming language should expose what a CPU does so it can be exploited for efficient programming.
Aug 22 2015
parent reply "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Sat, Aug 22, 2015 at 08:19:26PM -0700, Walter Bright via Digitalmars-d wrote:
 On 8/21/2015 3:59 AM, Chris wrote:
The whole article, imo, is like saying that when dealing with
programming there are problems, difficulties and outright
contradictions (like in maths or any other logical system the human
mind has come up with), but language designers should make all these
evil things go away! Imagine you had to attach warnings to a
programming language, like the labels on microwaves "Don't put your
pets in it [you stupid *****]!".
We have conversations like that around here often. Some people want to hide what a CPU normally does. It's a fine sentiment, but a systems programming language should expose what a CPU does so it can be exploited for efficient programming.
Yes! People who are more than casually interested in computers should have at least some idea of what the underlying hardware is like. Otherwise the programs they write will be pretty weird. -- D. Knuth T -- Klein bottle for rent ... inquire within. -- Stephen Mulraney
Aug 22 2015
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 8/22/2015 8:32 PM, H. S. Teoh via Digitalmars-d wrote:
 	People who are more than casually interested in computers should
 	have at least some idea of what the underlying hardware is like.
 	Otherwise the programs they write will be pretty weird.
 	-- D. Knuth
A good friend of mine in college decided to learn Fortran, having never programmed before. Being a practical sort, he got a copy of the Fortran-10 reference manual, read it, and wrote a program. Being an amazingly smart man, his program worked. But it ran awfully slowly. He was quite mystified, and finally asked for help from someone who knew Fortran. It was quickly discovered that the program wrote a file by opening the file, appending a character, then closing the file, for each byte in the file. (You can imagine how slow that is!) My friend defended himself with the fact that the Fortran reference manual made no mention about how to do file I/O for performance - knowledge of this sort of thing was just assumed. He was quite right.
Aug 22 2015
parent "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Sat, Aug 22, 2015 at 10:25:11PM -0700, Walter Bright via Digitalmars-d wrote:
 On 8/22/2015 8:32 PM, H. S. Teoh via Digitalmars-d wrote:
	People who are more than casually interested in computers should
	have at least some idea of what the underlying hardware is like.
	Otherwise the programs they write will be pretty weird.
	-- D. Knuth
A good friend of mine in college decided to learn Fortran, having never programmed before. Being a practical sort, he got a copy of the Fortran-10 reference manual, read it, and wrote a program. Being an amazingly smart man, his program worked. But it ran awfully slowly. He was quite mystified, and finally asked for help from someone who knew Fortran. It was quickly discovered that the program wrote a file by opening the file, appending a character, then closing the file, for each byte in the file. (You can imagine how slow that is!) My friend defended himself with the fact that the Fortran reference manual made no mention about how to do file I/O for performance - knowledge of this sort of thing was just assumed. He was quite right.
Reminds me of my first job where somebody wrote a report generation script in bash, using awk, grep, cut, etc., to extract fields from the input file. It worked, but was painfully slow on large input files. Once, my manager ran it on a particularly large input file, and after 2 *days* it was still running. I rewrote the script in Perl, and it finished in less than 2 minutes. :-P (Had I known D in those days, I might've been able to write a D program that finished in 2 seconds. Perhaps. :-P) T -- "I suspect the best way to deal with procrastination is to put off the procrastination itself until later. I've been meaning to try this, but haven't gotten around to it yet. " -- swr
Aug 23 2015