
digitalmars.D - Why don't other programming languages have ranges?

reply Justin Johansson <no spam.com> writes:
It sounds like the D PL has invented the range idiom unlike any other PL.

Since the dawn of PL's, which must be about 50 years now since Lisp
for example, it is hard to imagine a new PL inventing a completely
new idiom as "ranges" seem to purport. Given the many academic
arguments for ranges as opposed to iterators, why is it that the
D PL seems to be the major if not only sponsor of the range idiom?

Is D really taking the lead here and is it likely that other PL's
will eventually take D's lead?

Cheers
Justin Johansson
Jul 24 2010
next sibling parent "Michal Minich" <michal.minich gmail.com> writes:
maybe this will give some pointers

http://lambda-the-ultimate.org/node/3520 
Jul 24 2010
prev sibling next sibling parent Jonathan M Davis <jmdavisprog gmail.com> writes:
On Saturday 24 July 2010 06:36:34 Justin Johansson wrote:
 It sounds like the D PL has invented the range idiom unlike any other PL.
 
 Since the dawn of PL's, which must be about 50 years now since Lisp
 for example, it is hard to imagine a new PL inventing a completely
 new idiom as "ranges" seem to purport. Given the many academic
 arguments for ranges as opposed to iterators, why is it that the
 D PL seems to be the major if not only sponsor of the range idiom?
 
 Is D really taking the lead here and is it likely that other PL's
 will eventually take D's lead?
 
 Cheers
 Justin Johansson

Part of it probably comes from how limited most languages' use of iterators is. Most languages only use them for iteration and perhaps to aid in insertion or erasure from a container. They don't generally pass them around for algorithms and such. C++ and D are the only ones that I'm aware of which have done so. There are, of course, problems with using iterators in that fashion. Java's and C#'s solution was to put them on the list of C++ features that they didn't use, and they didn't make them powerful enough to really be used in algorithms. With D, it was decided to keep that power, and ranges were the solution to the problems with C++ iterators.

Regardless, there are a number of ideas floating out there which never make their way into a programming language - particularly serious and well-used programming languages. That doesn't necessarily mean that they're bad ideas, but the creators of the major programming languages either weren't aware of them, didn't like them for some reason, or couldn't fit them into their language for one reason or another. It doesn't help that programming languages aren't really invented all that often - or at least it's not all that often that new ones come along which actually become heavily used. Also, most languages which are heavily used are 10+ years old if not 20+, so many newer ideas came along too late to be incorporated into most of the major programming languages or their standard libraries and philosophies.

As far as I can tell, ranges have not been a big thing in any way, shape, or form. Andrei didn't invent them, but he appears to be the first to put them into major use - at least in a manner that would get other programmers to use them rather than them just being used for a particular project or company. So, they don't appear to be a particularly well-known idea at this point - certainly not a heavily used one.

- Jonathan M Davis
Jul 24 2010
prev sibling next sibling parent Leandro Lucarella <luca llucax.com.ar> writes:
Justin Johansson, el 24 de julio a las 23:06 me escribiste:
 It sounds like the D PL has invented the range idiom unlike any other PL.
 
 Since the dawn of PL's, which must be about 50 years now since Lisp
 for example, it is hard to imagine a new PL inventing a completely
 new idiom as "ranges" seem to purport. Given the many academic
 arguments for ranges as opposed to iterators, why is it that the
 D PL seems to be the major if not only sponsor of the range idiom?
 
 Is D really taking the lead here and is it likely that other PL's
 will eventually take D's lead?

http://www.boost.org/doc/libs/1_42_0/libs/range/index.html

-- 
Leandro Lucarella (AKA luca)                     http://llucax.com.ar/
----------------------------------------------------------------------
GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145 104C 949E BFB6 5F5A 8D05)
----------------------------------------------------------------------
This is what you get, when you mess with us.
Jul 24 2010
prev sibling next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
Justin Johansson wrote:
 It sounds like the D PL has invented the range idiom unlike any other PL.

Pointer programming is deeply embedded into the C++ culture, and iterators segue nicely into that culture. For D, however, programming revolves around arrays, and ranges fit naturally into that. It'll take years, but I'll be very surprised if ranges don't filter into many major languages, as well as other ideas that D has proven to be solid.
Jul 24 2010
parent reply levenshtein <distance fun.org> writes:
Walter Bright Wrote:

 Justin Johansson wrote:
 It sounds like the D PL has invented the range idiom unlike any other PL.

Pointer programming is deeply embedded into the C++ culture, and iterators segue nicely into that culture. For D, however, programming revolves around arrays, and ranges fit naturally into that. It'll take years, but I'll be very surprised if ranges don't filter into many major languages, as well as other ideas that D has proven to be solid.

At least the C++ fellows already stole your 'auto' type inference and the new template functionality. C# stole your delegate system. They even use the same terms. The world dominance already started.
Jul 25 2010
parent reply Peter Alexander <peter.alexander.au gmail.com> writes:
On 25/07/10 12:11 PM, levenshtein wrote:
 Walter Bright Wrote:

 Justin Johansson wrote:
 It sounds like the D PL has invented the range idiom unlike any other PL.

Pointer programming is deeply embedded into the C++ culture, and iterators segue nicely into that culture. For D, however, programming revolves around arrays, and ranges fit naturally into that. It'll take years, but I'll be very surprised if ranges don't filter into many major languages, as well as other ideas that D has proven to be solid.

At least the C++ fellows already stole your 'auto' type inference and the new template functionality. C# stole your delegate system. They even use the same terms. The world dominance already started.

Not to belittle D, but type inference was around long before D came on the scene, and I don't think they got the use of the auto keyword from D (auto was already an (essentially unused) keyword in C++).
Jul 25 2010
parent reply levenshtein <distance fun.org> writes:
Peter Alexander Wrote:

 On 25/07/10 12:11 PM, levenshtein wrote:
 Walter Bright Wrote:

 Justin Johansson wrote:
 It sounds like the D PL has invented the range idiom unlike any other PL.

Pointer programming is deeply embedded into the C++ culture, and iterators segue nicely into that culture. For D, however, programming revolves around arrays, and ranges fit naturally into that. It'll take years, but I'll be very surprised if ranges don't filter into many major languages, as well as other ideas that D has proven to be solid.

At least the C++ fellows already stole your 'auto' type inference and the new template functionality. C# stole your delegate system. They even use the same terms. The world dominance already started.

Not to belittle D, but type inference was around long before D came on the scene, and I don't think they got the use of the auto keyword from D (auto was already an (essentially unused) keyword in C++).

Type inference might have been around, but I believe it was A. Alexandrescu's influence that made C++0x adopt the same 'auto' keyword for type inference. You can see here: http://en.wikipedia.org/wiki/C++0x#Type_inference It's essentially copied from D. Seems funny, but the mighty C++ committee is actually listening to us.
Jul 25 2010
parent reply Don <nospam nospam.com> writes:
levenshtein wrote:
 Peter Alexander Wrote:
 
 On 25/07/10 12:11 PM, levenshtein wrote:
 Walter Bright Wrote:

 Justin Johansson wrote:
 It sounds like the D PL has invented the range idiom unlike any other PL.

 Pointer programming is deeply embedded into the C++ culture, and iterators segue nicely into that culture. For D, however, programming revolves around arrays, and ranges fit naturally into that. It'll take years, but I'll be very surprised if ranges don't filter into many major languages, as well as other ideas that D has proven to be solid.

 At least the C++ fellows already stole your 'auto' type inference and the new template functionality. C# stole your delegate system. They even use the same terms. The world dominance already started.

 Not to belittle D, but type inference was around long before D came on the scene, and I don't think they got the use of the auto keyword from D (auto was already an (essentially unused) keyword in C++).

Type inference might have been around, but I believe it was A. Alexandrescu's influence that made C++0x adopt the same 'auto' keyword for type inference. You can see here: http://en.wikipedia.org/wiki/C++0x#Type_inference It's essentially copied from D. Seems funny, but the mighty C++ committee is actually listening to us.

C++0x announced that it was going to use the 'auto' keyword, so D copied it from C++0x. But D implemented it before C++ did. <g>.
Jul 25 2010
parent reply dsimcha <dsimcha yahoo.com> writes:
== Quote from Don (nospam nospam.com)'s article
 levenshtein wrote:
 Type inference might have been around, but I believe it was A. Alexandrescu's influence that made C++0x adopt the same 'auto' keyword for type inference. You can see here: http://en.wikipedia.org/wiki/C++0x#Type_inference It's essentially copied from D. Seems funny, but the mighty C++ committee is actually listening to us.

 C++0x announced that it was going to use the 'auto' keyword, so D copied
 it from C++0x. But D implemented it before C++ did. <g>.

This is my general impression of D as a language. It's not a very innovative language in the sense that everything in it has been done somewhere, in some context, and it breaks no completely new ground. However, its contribution to the programming world is extremely important nonetheless in that it takes the best features from lots of different languages, including some researchy languages that almost no one uses in production, makes them work (more or less) well together and makes them all usable from one **practical, production-oriented** language.

Basically, my take as a practical programmer rather than a theoretical comp-sci researcher is "Who cares if it's been done before if it's not implemented in any practical language?".
Jul 25 2010
parent Walter Bright <newshound2 digitalmars.com> writes:
dsimcha wrote:
 Basically, my take as a practical programmer rather than a theoretical comp-sci
 researcher is "Who cares if it's been done before if it's not implemented in
any
 practical language?".

There's rarely anything truly new in programming. I agree with you that turning an idea into something useful is where it's at, and how it's done. For example, there's the idea of putting windows in a house, and then there's the design, size, and placement of those windows that make it matter.

This is hardly unique to programming. For example, the Vikings 'discovered' America centuries before Columbus. But they did no followup, forgot about it, and it had no lasting impact.

Many argue that others invented the airplane before the Wright brothers. But none of those claims can be properly documented, none of them did any followup, and none of them had any impact on the evolution of airplane design.

Led Zeppelin based several of its major hits on obscure blues songs. Zep should have given credit at the time, but also, if Zep hadn't reworked those songs into hits, those songs would be quite forgotten.
Jul 25 2010
prev sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 07/24/2010 08:36 AM, Justin Johansson wrote:
 It sounds like the D PL has invented the range idiom unlike any other PL.

 Since the dawn of PL's, which must be about 50 years now since Lisp
 for example, it is hard to imagine a new PL inventing a completely
 new idiom as "ranges" seem to purport. Given the many academic
 arguments for ranges as opposed to iterators, why is it that the
 D PL seems to be the major if not only sponsor of the range idiom?

 Is D really taking the lead here and is it likely that other PL's
 will eventually take D's lead?

 Cheers
 Justin Johansson

I thought of this for a couple of days. Clearly as the person using ranges in D consistently I'm very much at risk for accepting theories that make me look and feel good, so probably even if I'm trying to stay aware of that risk, I'm too biased for my answer to be useful. At the same time there are a few lucid points that I feel compelled to note.

First, I agree with Walter that D's arrays have been a very strong influence on ranges. Pointers pervade C and iterators were a natural generalization of them; slices pervade D and I have been very explicitly preoccupied for a long time to find a formalism that legitimizes and formalizes slices. Though it was clear that slices do offer access to their underlying pointers (s.ptr and s.ptr + s.length), it was also clear that that's an unclean mechanism, one of last resort that should not be part of the formalism.

Second, clearly ranges are derivative of STL iterators, and STL iterators are a very solid new idea. That also invalidates this argument:
 Since the dawn of PL's, which must be about 50 years now since Lisp
 for example, it is hard to imagine a new PL inventing a completely
 new idiom as "ranges" seem to purport.

Just wind the clock back by 16 years. We're now in 1994, and there had been 34 years since Lisp. However Stepanov did come up with a revolutionary new idea.

Personally I'm not as much amazed about the time taken, as about the fact that many people _today_ don't figure what's good about the STL. My theory (which I expanded in my article On Iteration) is that STL requires some unusual and borderline obscure PL features, and because of that it could only be implemented in C++ using means that were at the time highly non-idiomatic. The net result was that STL's beautiful design was difficult to distinguish from the wall of syntax that surrounded it. As such, most people who weren't C++ programmers didn't "get" it and only some C++ programmers "got" it (even Bjarne took some time). In my humble opinion, the design of Java, C#, and Go is proof that their authors didn't get the STL. Otherwise, those languages would be very different.

I strongly believe Walter got the STL and generic programming in general. He might be fuzzy about some minor details, but he is plenty good at plenty other things and always had a good listening ear for the importance of genericity.

Third, ranges were "in the air" already at the time I formalized them. Boost and Adobe had notions of "range", even though all their primitives were to expose begin() and end(), so they were essentially lackeys of the STL iterator abstraction. People were talking about "range" whenever they discussed two iterators delimiting a portion of a container. It was only a matter of time until someone said, hey, let's make range a first-class abstraction.

Andrei
Jul 25 2010
next sibling parent dsimcha <dsimcha yahoo.com> writes:
== Quote from Andrei Alexandrescu (SeeWebsiteForEmail erdani.org)'s article
 Personally I'm not as much amazed about the time taken, as about the
 fact that many people _today_ don't figure what's good about the STL. My
 theory (which I expanded in my article On Iteration) is that STL
 requires some unusual and borderline obscure PL features, and because of
 that it could only be implemented in C++ using means that were at the
 time highly non-idiomatic. The net result was that STL's beautiful
 design was difficult to distinguish from the wall of syntax that
 surrounded it. As such, most people who weren't C++ programmers didn't
 "get" it and only some C++ programmers "got" it (even Bjarne took some
 time). In my humble opinion, the design of Java, C#, and Go is proof
 that their authors didn't get the STL. Otherwise, those languages would
 be very different.

I see at least a few reasons why people (including me before ranges came along and made me see in hindsight what STL was trying to do, but didn't quite get right) miss the underlying beauty of STL:

1. Despite being extremely generic and good at making complicated things possible, STL sucks compared to both std.range/std.algorithm and Python/Ruby/Perl/PHP duck typing at making simple things simple.

a. Iterators come in pairs (both horribly verbose and statically uncheckable). Yes, this enables a few things that wouldn't otherwise be possible, but at the expense of making the other ~98% of use cases much more awkward and dangerous. IMHO this is a very bad tradeoff.

b. Calling the iteration syntax horrible would be an understatement. Iterating over a container (via iterators/ranges) is something that's done so often that even small improvements in syntactic clarity and terseness are worth reaching for. C++'s way of writing a loop to iterate over a container via iterators looks about the same before and after RSA encryption, and the encrypted message looks really long.

2. Lack of variadic templates. This is a key thing that really makes std.range/std.algorithm tick. Without them, you can't have chain or zip, and map and reduce are severely neutered. Breaking outside of standard lib stuff into userland, I've found taking a variable number of generic ranges to be unbelievably useful in my dstats library, and I use it for everything from mutual information to regression to ANOVA to partial correlations.

3. Lack of powerful compile-time reflection/introspection. Pretty much all compile time reflection/introspection in C++ is a massive kludge, which makes it virtually impossible for mere mortals to write their own generic algorithms and containers in all but the simplest cases. They can still use ones written by gurus, but it's a lot easier to appreciate stuff like this when you can actually make your own.
Jul 25 2010
prev sibling next sibling parent Don <nospam nospam.com> writes:
Andrei Alexandrescu wrote:
 On 07/24/2010 08:36 AM, Justin Johansson wrote:
 It sounds like the D PL has invented the range idiom unlike any other PL.

 Since the dawn of PL's, which must be about 50 years now since Lisp
 for example, it is hard to imagine a new PL inventing a completely
 new idiom as "ranges" seem to purport. Given the many academic
 arguments for ranges as opposed to iterators, why is it that the
 D PL seems to be the major if not only sponsor of the range idiom?

 Is D really taking the lead here and is it likely that other PL's
 will eventually take D's lead?


 Third, ranges were "in the air" already at the time I formalized them. 
 Boost and Adobe had notions of "range", even though all their primitives 
 were to expose begin() and end(), so they were essentially lackeys of 
 the STL iterator abstraction. People were talking about "range" whenever 
 they discussed two iterators delimiting a portion of a container. It was 
 only a matter of time until someone said, hey, let's make range a 
 first-class abstraction.

It's also worth noting that one of the primary advocates of ranges in C++ was Matthew Wilson, who was hugely influential in the early years of D. Even the limited ranges that exist in Boost are somewhat influenced by D.
Jul 25 2010
prev sibling next sibling parent Walter Bright <newshound2 digitalmars.com> writes:
Andrei Alexandrescu wrote:
 I strongly believe Walter got the STL and generic programming in 
 general. He might be fuzzy about some minor details, but he is plenty 
 good at plenty other things and always had a good listening ear for the 
 importance of genericity.

To be fair, it took me many years to get it. STL's brilliance was nearly completely obscured by the syntax of it, and I was thoroughly misled by that.

Bartosz Milewski once gave a fantastic talk where he showed some type metaprogramming in Haskell. I don't know Haskell, but the examples were one liners and easily understood. Then he showed the equivalent using C++ template metaprogramming, and it was many lines of complex syntax. Then, the brilliant part was he highlighted the Haskell bits that were embedded in the C++ template syntax. It was one of those "ahaa!" moments where suddenly it made sense.
 Third, ranges were "in the air" already at the time I formalized them. 
 Boost and Adobe had notions of "range", even though all their primitives 
 were to expose begin() and end(), so they were essentially lackeys of 
 the STL iterator abstraction. People were talking about "range" whenever 
 they discussed two iterators delimiting a portion of a container. It was 
 only a matter of time until someone said, hey, let's make range a 
 first-class abstraction.

In the early days of D, we talked about using arrays as the basis for the "D Template Library" rather than pointers. I can't find the thread about it, though.
Jul 25 2010
prev sibling next sibling parent reply bearophile <bearophileHUGS lycos.com> writes:
Andrei Alexandrescu:
 In my humble opinion, the design of Java, C#, and Go is proof 
 that their authors didn't get the STL. Otherwise, those languages would 
 be very different.

I don't believe you. Among the designers of Java, C# and Go there are people that are both experts and smart. C# designers have shown to be sometimes smarter than D designers. So I think some of them 'get the STL' and understand its advantages, but despite this in the universe they have found some other reasons to refuse it. I think they were unwilling to pay the large price in language complexity, bug-prone-ness, code bloat and compilation speed that C++ and D are willing to pay.

Here you can find why C# designers have refused C++-style templates & STL and chosen the generics instead:
http://msdn.microsoft.com/it-it/magazine/cc163683%28en-us%29.aspx

One important problem of C# generics can be solved adding IArithmetic<T>:
http://www.osnews.com/story/7930

I like D templates and I agree that adding them to D1 was a good idea (mostly because D1 is designed to be similar to C++) but you must accept that some language designers can understand STL and still refuse to add all the features necessary to implement it. Maybe there is a way to build something like STL without C++/D-style templates :-)

Bye,
bearophile
Jul 25 2010
next sibling parent Don <nospam nospam.com> writes:
bearophile wrote:
 Andrei Alexandrescu:
 In my humble opinion, the design of Java, C#, and Go is proof 
 that their authors didn't get the STL. Otherwise, those languages would 
 be very different.

I don't believe you. Among the designers of Java, C# and Go there are people that are both experts and smart. C# designers have shown to be sometimes smarter than D designers. So I think some of them 'get the STL' and understand its advantages, but despite this in the universe they have found some other reasons to refuse it. I think they were unwilling to pay the large price in language complexity, bug-prone-ness, code bloat and compilation speed that C++ and D are willing to pay.

 Here you can find why C# designers have refused C++-style templates & STL and
chosen the generics instead:
 http://msdn.microsoft.com/it-it/magazine/cc163683%28en-us%29.aspx

IMHO, that link strongly suggests that they didn't grok the STL. The pathetic arguments they give look to me like an attempt to rationalize a decision they'd already made.
Jul 25 2010
prev sibling next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
bearophile wrote:
 Andrei Alexandrescu:
 In my humble opinion, the design of Java, C#, and Go is proof that their
 authors didn't get the STL. Otherwise, those languages would be very
 different.

I don't believe you. Among the designers of Java, C# and Go there are people that are both experts and smart.

Of course they are experts and smart. That doesn't mean they won't make mistakes. Both Java and C# have added generic support only after several cycles. We'll see how long Go manages to resist adding it. My prediction is Go will not become a mainstream language without it.
 C# designers have shown to be sometimes smarter than D designers.

And D sometimes has smarter decisions than C#. I think the statement is meaningless.
 So I think some of them 'get the STL' and
 understand its advantages, but despite this in the universe they have found
 some other reasons to refuse it. I think they were unwilling to pay the large
 price in language complexity, bug-prone-ness, code bloat and compilation
 speed that C++ and D are willing to pay.

I think that's projecting a lot of the weaknesses of the C++ implementation onto D, and I don't think that is justified. For example, D simply doesn't have the compilation speed problems C++ has. The code bloat issue is resolvable with implementation effort, and neither C# nor Java have ever had any shortage of resources thrown into their development. The same goes for bug-prone-ness.

D has gone a long way towards reducing the language complexity issue. D templates are *far* simpler than C++ ones, yet more powerful. I also don't see how C# generics are simpler (for the user) than D templates, and C# generics are quite complex under the hood.
 Here you can find why C# designers have refused C++-style templates & STL and
 chosen the generics instead: 
 http://msdn.microsoft.com/it-it/magazine/cc163683%28en-us%29.aspx One
 important problem of C# generics can be solved adding IArithmetic<T>: 
 http://www.osnews.com/story/7930

I think a better article is here: http://windowsdevcenter.com/pub/a/oreilly/windows/news/hejlsberg_0800.html The msdn article makes some misinformed statements.
 I like D templates and I agree that adding them to D1 was a good idea (mostly
 because D1 is designed to be similar to C++) but you must accept that some
 language designers can understand STL and still refuse to add all the
 features necessary to implement it.

The computer language space is far, far too large for any one programmer (no matter how expert & smart) to be experienced enough in all the varied paradigms to make informed tradeoffs.
Jul 25 2010
next sibling parent reply bearophile <bearophileHUGS lycos.com> writes:
Walter Bright:
 I think a better article is here: 
 http://windowsdevcenter.com/pub/a/oreilly/windows/news/hejlsberg_0800.html

Thank you for the link. I see they talk about the importance of 'events', and later they say:
Indeed, if Foo is a reference type, and if we do the design right, we can share
the instantiation for all reference types.<

Bye, bearophile
Jul 26 2010
parent Walter Bright <newshound2 digitalmars.com> writes:
bearophile wrote:
 Walter Bright:
 I think a better article is here: 
 http://windowsdevcenter.com/pub/a/oreilly/windows/news/hejlsberg_0800.html

Thank you for the link.

You're welcome, it took me a fair bit of work to find. It's an important historical document.
Jul 26 2010
prev sibling parent reply levenshtein <distance fun.org> writes:
Walter Bright Wrote:

 bearophile wrote:
 Andrei Alexandrescu:
 In my humble opinion, the design of Java, C#, and Go is proof that their
 authors didn't get the STL. Otherwise, those languages would be very
 different.

I don't believe you. Among the designers of Java, C# and Go there are people that are both experts and smart.

Of course they are experts and smart. That doesn't mean they won't make mistakes. Both Java and C# have added generic support only after several cycles. We'll see how long Go manages to resist adding it. My prediction is Go will not become a mainstream language without it.

At least the hype around Go shows that the language's authors enjoy a huge respect. Some people believe that because of the divine effort from Rob Pike, no matter how bad Go looks now, it will replace ALL other languages very soon. Go is going to be used in kernel development, in systems programming, in application programming, as a low-level scripting language. It will replace everything.
 C# designers have shown to be sometimes smarter than D designers.

And D sometimes has smarter decisions than C#. I think the statement is meaningless.

C# is already a legacy language. Now we have Clojure and Go.
Jul 26 2010
parent Walter Bright <newshound2 digitalmars.com> writes:
levenshtein wrote:
 At least the hype around Go shows that the language's authors enjoy a huge
 respect.

Not only the authors, but the respect for Google itself. It gives Go an enormous and enviable head start. But, eventually, Go will have to deliver.
Jul 26 2010
prev sibling next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 07/25/2010 04:54 PM, bearophile wrote:
 Andrei Alexandrescu:
 In my humble opinion, the design of Java, C#, and Go is proof that
 their authors didn't get the STL. Otherwise, those languages would
 be very different.

I don't believe you. Among the designers of Java, C# and Go there are people that are both experts and smart. C# designers have shown to be sometimes smarter than D designers. So I think some of them 'get the STL' and understand its advantages, but despite this in the universe they have found some other reasons to refuse it. I think they were unwilling to pay the large price in language complexity, bug-prone-ness, code bloat and compilation speed that C++ and D are willing to pay.

But then their containers and algorithms are markedly inferior to STL's. They are toxic to the programmers who have only been exposed to them. If Java and C# had gotten the STL and then decided not to go with it, I'm sure they would have at least taken a few good bits from it. But no - the entire e.g. Java container library is parallel to everything STL stands for.
 Here you can find why C# designers have refused C++-style templates&
 STL and chosen the generics instead:
 http://msdn.microsoft.com/it-it/magazine/cc163683%28en-us%29.aspx One
 important problem of C# generics can be solved adding
 IArithmetic<T>: http://www.osnews.com/story/7930

 I like D templates and I agree that adding them to D1 was a good idea
 (mostly because D1 is designed to be similar to C++) but you must
 accept that some language designers can understand STL and still
 refuse to add all the features necessary to implement it.

 Maybe there is a way to build something like STL without C++/D-style
 templates :-)

I'm sure there is, and Java and C# could have come a long way. But taking their libraries at face value, they are simply ignorant of STL's values - to their detriment. Andrei
Jul 25 2010
next sibling parent %u <e sp.am> writes:
 But then their containers and algorithms are markedly inferior to STL's.
 They are toxic to the programmers who have only been exposed to them. So
 Java and C# got STL and decided to not go with it, I'm sure they would
 have at least gotten a few good bits from it. But no - the entire e.g.
 Java container library is parallel to everything STL stands for.

If you are talking about the original System.Collections you are right. But System.Collections.Generic (.NET 2.0) and LINQ (3.0) have come a long way since then and C# has gained much of the expressiveness that D is just reaching with std.range and std.algorithm. In fact you can see many similarities:

.Where() => filter()
.Select() => map()
.Aggregate() => reduce()
.Take() => take()
...
Jul 26 2010
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
retard wrote:
 I think the Java/C# developers gave up X % of the execution speed to 
 avoid hard crashes (exceptions instead of segfaults)

1. segfaults *are* exceptions.

2. D offers a memory safe subset, and D's ranges and algorithms are memory safe.
Jul 26 2010
next sibling parent reply bearophile <bearophileHUGS lycos.com> writes:
Walter Bright:
 1. segfaults *are* exceptions.

Aren't exceptions objects?

Bye,
bearophile
Jul 26 2010
next sibling parent Walter Bright <newshound2 digitalmars.com> writes:
bearophile wrote:
 Walter Bright:
 1. segfaults *are* exceptions.

Aren't exceptions objects?

That's an implementation detail.
Jul 26 2010
prev sibling parent "Jim Balter" <Jim Balter.name> writes:
"bearophile" <bearophileHUGS lycos.com> wrote in message 
news:i2knnc$1ftr$1 digitalmars.com...
 Walter Bright:
 1. segfaults *are* exceptions.

 Aren't exceptions objects?
 Bye,
 bearophile

Not at all -- exceptions are system-generated events that are implemented in modern languages by the equivalent of a throw statement, and Exception objects are used to communicate information about either system- or user-generated throws.

And Walter's point is getting lost -- 'retard' misidentified the issue as giving up speed by going with "exceptions" to avoid "hard crashes" such as segfaults, but of course there is no loss of speed for catching a segfault signal -- an asynchronous event -- and throwing an Exception object rather than calling a C signal catcher. The speed cost doesn't come from using Exceptions, it comes from doing array bounds checking (which, of course, results in an Exception object being thrown if violated).

Of course, in D, if you're desperate for it, you can regain the speed by giving up the memory safety and using pointers, in which case an array bounds violation might overwrite something -- and if the thing overwritten is a pointer, that is likely to result in a segfault -- which is an exception, which in D will cause an Exception object to be thrown.

Of course, in languages that have them, objects are used to communicate the information about exceptions, but even in C there are (non-standard) libraries available that provide a catch/throw/exception-passing mechanism using a stack of setjmp structs and a longjmp for the throw.
Jul 26 2010
prev sibling next sibling parent reply KennyTM~ <kennytm gmail.com> writes:
On Jul 27, 10 02:42, Walter Bright wrote:
 retard wrote:
 I think the Java/C# developers gave up X % of the execution speed to
 avoid hard crashes (exceptions instead of segfaults)

 1. segfaults *are* exceptions.
 2. D offers a memory safe subset, and D's ranges and algorithms are memory safe.

Catching exception is easy, but handling (segfault) signal is a mess.
Jul 26 2010
next sibling parent Walter Bright <newshound2 digitalmars.com> writes:
KennyTM~ wrote:
 On Jul 27, 10 02:42, Walter Bright wrote:
 retard wrote:
 I think the Java/C# developers gave up X % of the execution speed to
 avoid hard crashes (exceptions instead of segfaults)

 1. segfaults *are* exceptions.
 2. D offers a memory safe subset, and D's ranges and algorithms are memory safe.

Catching exception is easy, but handling (segfault) signal is a mess.

Exceptions thrown because of programming bugs are just as unrecoverable as seg faults are. Making them easier to catch has the unfortunate side effect of seducing programmers into thinking that such are recoverable.
Jul 26 2010
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
retard wrote:
 Tue, 27 Jul 2010 04:10:14 +0800, KennyTM~ wrote:
 
 On Jul 27, 10 02:42, Walter Bright wrote:
 retard wrote:
 I think the Java/C# developers gave up X % of the execution speed to
 avoid hard crashes (exceptions instead of segfaults)

2. D offers a memory safe subset, and D's ranges and algorithms are memory safe.


Indeed, I'd like to know how you recover from a segfault without help from an external process.

On Windows, it's fairly straightforward. I did it once as part of a gc implementation that would mark unmodified pages as hardware readonly. It would catch the seg fault from writes to it, log the page as modified, make the page writable, and restart the failed instruction.
 Sometimes you know that some routine might 
 fail once in a week, but the program MUST run non-stop for several 
 months. In Java you can achieve this with exceptions. And you can also 
 dynamically fix classes with the class loader.

Any program that attempts to achieve reliability by "recovering" from program bugs and continuing is an extremely badly designed one. http://www.drdobbs.com/blog/archives/2009/10/safe_systems_fr.html http://www.drdobbs.com/blog/archives/2009/11/designing_safe.html Sadly, it's a topic that has not penetrated software engineering instructional materials, and programmers have to learn it the hard way again and again.
Jul 26 2010
next sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
retard wrote:
 http://www.drdobbs.com/blog/archives/2009/10/safe_systems_fr.html

 http://www.drdobbs.com/blog/archives/2009/11/designing_safe.html

 Sadly, it's a topic that has not penetrated software engineering
 instructional materials, and programmers have to learn it the hard way
 again and again.

But these are your articles with no cited sources about the software methodologies. It seems like they're written afterwards to advertise the features implemented in D. The contract programming only covers a small runtime dynamic part of programming. There's no mention about automated theorem proving. No word about exceptions nor sandboxing with virtual machines. Why? Because these would make D look ridiculous.

Isn't it actually simpler to reason that Walter defined D according to his views and expertise? Andrei
Jul 27 2010
prev sibling next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
retard wrote:
 Mon, 26 Jul 2010 14:04:53 -0700, Walter Bright wrote:
 http://www.drdobbs.com/blog/archives/2009/10/safe_systems_fr.html

 http://www.drdobbs.com/blog/archives/2009/11/designing_safe.html

 Sadly, it's a topic that has not penetrated software engineering
 instructional materials, and programmers have to learn it the hard way
 again and again.

But these are your articles

Yes.
 with no cited sources about the software methodologies.

As I said, it hasn't penetrated the software literature. It's one of the reasons I wrote that; I feel it's a contribution, not a retread. I did not invent those ideas: they come from my time working at Boeing, learning how the aviation system builds reliable systems from unreliable parts. (Not just mechanical systems; Boeing applied those rules to its software subsystems.)
 It seems like they're written afterwards to advertise the 
 features implemented in D.

Or maybe it's the other way around, that the features in D were designed to support those ideas. Why would I waste my time implementing complex features if I had to after-the-fact come up with some justification for them? Please remember that nobody is paying me to do this, I'm not saying "Yes Sir! Three bags full, Sir!" to some middle-management phb. I'm often wrong, but I really do believe that D's feature set is a good and justifiable one.
 The contract programming only covers a small 
 runtime dynamic part of programming. There's no mention about automated 
 theorem proving. No word about exceptions nor sandboxing with virtual 
 machines. Why? Because these would make D look ridiculous.

That misses the point about reliability. Again, you're approaching from the point of view that you can make a program that cannot fail (i.e. prove it correct). That view is WRONG WRONG WRONG and you must NEVER NEVER NEVER rely on such for something important, like say your life. Software can (and will) fail even if you proved it correct, for example, what if a memory cell fails and flips a bit? Cosmic rays flip a bit? Are you willing to bet your life?
 How the web programming world works:
 
 ''I'm not a real programmer. I throw together things until it works then 
 I move on. The real programmers will say "yeah it works but you're 
 leaking memory everywhere. Perhaps we should fix that." I'll just restart 
 apache every 10 requests.'' -- Rasmus Lerdorf
 
 It it widely accepted that web applications fail often. It suffices if 
 the developers are fixing bad code eventually, but the remaining parts 
 should work reasonably well. Either a process is restarted or exceptions 
 are used to catch small error conditions so the server could serve 
 *something* . I'm talking about practical web applications, not X-ray 
 machines.

It's retarded (!) to pretend that this is how one makes reliable systems.
Jul 27 2010
next sibling parent reply bearophile <bearophileHUGS lycos.com> writes:
Walter Bright:
 That misses the point about reliability. Again, you're approaching from the
 point of view that you can make a program that cannot fail (i.e. prove it
 correct). That view is WRONG WRONG WRONG and you must NEVER NEVER NEVER rely
 on such for something important, like say your life. Software can (and will)
 fail even if you proved it correct, for example, what if a memory cell fails
 and flips a bit? Cosmic rays flip a bit?

 Are you willing to bet your life?

If you have a critical system, you use most or all the means you know of to make it work, so if you can you use theorem proving too. If you have the economic resources to use those higher means, then refusing to use them is not wise. And then you also use error-correction memory, 3-way or 6-way redundancy, plus watchdogs and more.

If it is a critical system you can even design it to fail gracefully even if zero software is running; see the recent designs of the concrete canal under the core of nuclear reactors, to let the melted core go where you want it to go (where it will be kept acceptably cool and safe, making accidents like Chernobyl very unlikely) :-)

Bye,
bearophile
Jul 27 2010
parent reply dsimcha <dsimcha yahoo.com> writes:
== Quote from bearophile (bearophileHUGS lycos.com)'s article
 Walter Bright:
 That misses the point about reliability. Again, you're approaching from the
 point of view that you can make a program that cannot fail (i.e. prove it
 correct). That view is WRONG WRONG WRONG and you must NEVER NEVER NEVER rely on
 such for something important, like say your life. Software can (and will) fail
 even if you proved it correct, for example, what if a memory cell fails and
 flips a bit? Cosmic rays flip a bit?

 Are you willing to bet your life?


 If you have a critical system, you use most or all the means you know of to
 make it work, so if you can you use theorem proving too. If you have the
 economic resources to use those higher means, then refusing to use them is
 not wise. And then you also use error-correction memory, 3-way or 6-way
 redundancy, plus watchdogs and more.
 If it is a critical system you can even design it to fail gracefully even if
 zero software is running; see the recent designs of the concrete canal under
 the core of nuclear reactors, to let the melted core go where you want it to
 go (where it will be kept acceptably cool and safe, making accidents like
 Chernobyl very unlikely) :-)
 Bye,
 bearophile

But the point is that redundancy is probably the **cheapest, most efficient** way to get ultra-high reliability. Yes, cost matters even when people's lives are at stake. If people accepted this more often, maybe the U.S. healthcare system wouldn't be completely bankrupt.

Anyhow, if you try to design one ultra-reliable system, you can't be stronger than your weakest link. If your system has N components, each with independent probability p_i of failure, your failure probability is:

1 - product_{i=1 to n}(1 - p_i),

i.e. 1 - the product of the probabilities that everything works. If p_i is large for any component, you're very likely to fail, and if the system has a lot of components, you're bound to have at least one oversight. In the limit where one link is a lot more prone to failure than any of the rest, you're basically as strong as your weakest link. For example, if all components except one are perfect and that one component has a 1% chance of failure, then the system as a whole has a 1% chance of failure.

If, on the other hand, you have redundancy, you're at least as strong as your strongest link because only one system needs to work. Assuming the systems were designed by independent teams and have completely different weak points, we can assume their failures are statistically independent. Assume we have m redundant systems, each with probability p_s of failing in some way or another. Then the probability of the whole thing failing is:

product_{i=1 to m}(p_s),

or the probability that ALL of your redundant systems fail. Assuming they're all decent and designed independently, with different weak points, they probably aren't going to fail at the same time. For example, if each redundant system really sucks and has a 5% chance of failure, then the probability that they both fail and you're up the creek is only 0.25%.
Jul 27 2010
parent Walter Bright <newshound2 digitalmars.com> writes:
dsimcha wrote:
 But the point is that redundancy is probably the **cheapest, most efficient**
 way to get ultra-high reliability.

It also works incredibly well. Airliners use a dual path system, which means that no single failure can bring it down. If it didn't work, the skies would be raining airplanes. Triple path systems add a great deal of cost, with only a negligible increase in safety.
 Yes, cost matters even when people's lives are at stake.

If the FAA required a triple path system in airliners, likely the airplane would be so heavy it could barely lift itself, let alone have a payload.
 If, on the other hand, you have redundancy, you're at least as strong as your
 strongest link because only one system needs to work. Assuming the systems
 were designed by independent teams and have completely different weak points,
 we can assume their failures are statistically independent. Assume we have m
 redundant systems each with probability p_s of failing in some way or
 another. Then the probability of the whole thing failing is:

 product_{i=1 to m}(p_s), or the probability that ALL of your redundant
 systems fail. Assuming they're all decent and designed independently, with
 different weak points, they probably aren't going to fail at the same time.

 For example, if each redundant system really sucks and has a 5% chance of
 failure, then the probability that they both fail and you're up the creek is
 only 0.25%.

Yup. The important thing to make this work is to avoid coupling, where a failure in one path causes the other path to fail as well. This can be tricky to get right. Coupling is why a process attempting to diagnose and fix its own bugs is doomed.
Jul 27 2010
prev sibling parent reply =?UTF-8?B?IkrDqXLDtG1lIE0uIEJlcmdlciI=?= <jeberger free.fr> writes:

Walter Bright wrote:
 retard wrote:
 The contract programming only covers a small runtime dynamic part of
 programming. There's no mention about automated theorem proving. No
 word about exceptions nor sandboxing with virtual machines. Why?
 Because these would make D look ridiculous.

 That misses the point about reliability. Again, you're approaching from
 the point of view that you can make a program that cannot fail (i.e.
 prove it correct). That view is WRONG WRONG WRONG and you must NEVER
 NEVER NEVER rely on such for something important, like say your life.
 Software can (and will) fail even if you proved it correct, for example,
 what if a memory cell fails and flips a bit? Cosmic rays flip a bit?

	Plus, how do you prove the proof? I know of at least two examples
of software that were proven formally and that AFAIK worked perfectly to
spec and yet failed spectacularly. One of them caused a Mars probe to
crash (it had been proven using measurements in feet, but the sensors
were reporting in kilometers IIRC) and the other caused the ground
control crew to activate the self destruct for the first Ariane 5 rocket
(it had been proven using data designed for Ariane 4 and no longer valid
for Ariane 5).

Jerome
--
mailto:jeberger free.fr
http://jeberger.free.fr
Jabber: jeberger jabber.fr
Jul 27 2010
parent reply bearophile <bearophileHUGS lycos.com> writes:
J. M. Berger:
 	Plus, how do you prove the proof? I know of at least two examples
 of software that were proven formally and that AFAIK worked
 perfectly to spec and yet failed spectacularly.

You have to think about proofs as another (costly) tool to avoid bugs/bangs, but not as the ultimate and only tool you have to use. (I think dsimcha was trying to say that there are more cost-effective tools. This can be true, but you can't be sure it holds in general.)

Bye,
bearophile
Jul 27 2010
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
bearophile wrote:
 You have to think about proofs as another (costly) tool to avoid bugs/bangs,
 but not as the ultimate and only tool you have to use (I think dsimcha was
 trying to say that there are more costly-effective tools. This can be true,
 but you can't be sure that is right in general).

I want to re-emphasize the point that keeps getting missed. Building reliable systems is not about trying to make components that cannot fail. It is about building a system that can TOLERATE failure of any of its components. It's how you build safe systems from UNRELIABLE parts. And all parts are unreliable. All of them. Really. All of them.
Jul 27 2010
next sibling parent reply bearophile <bearophileHUGS lycos.com> writes:
Walter Bright:
 I want to re-emphasize the point that keeps getting missed.
 
 Building reliable systems is not about trying to make components that cannot 
 fail. It is about building a system that can TOLERATE failure of any of its 
 components.
 
 It's how you build safe systems from UNRELIABLE parts. And all parts are 
 unreliable. All of them. Really. All of them.

Each of those parts must be pretty reliable if you want to design a globally reliable system. Space Shuttle control systems are redundant, as you say, and probably each single point of failure has a backup; but each software system is also pretty reliable by itself, and probably parts of each of the independently written redundant software systems have been proven. If your subsystems are crap, your overall system is crap, unless you have tons of them and they can all be used in "parallel" (with no single coordinator that can be a failure point).

Bye,
bearophile
Jul 27 2010
parent Walter Bright <newshound2 digitalmars.com> writes:
bearophile wrote:
 Each of those parts must be pretty reliable if you want to design a globally
 reliable system. Space Shuttle control systems are redundant as you say, and
 probably each single point of failure has a backup, but each software system
 is pretty reliable by itself, probably they have proved some of its parts for
 each of the independently written redundant software systems. If your
 subsystems are crap, your overall system is crap, unless you have tons of
 them and they can be all used in "parallel" (with no single coordinator that
 can be a failure point).

The space shuttle computer systems that are on the critical path have redundant backups that are written with different algorithms by different groups, and they try very hard to not have any coupling between them. Even so, there remain at least two obvious single points of failure. Also, of course, they still try to make each component as reliable as they can.
Jul 27 2010
prev sibling next sibling parent BCS <none anon.com> writes:
Hello Walter,

 bearophile wrote:
 
 You have to think about proofs as another (costly) tool to avoid
 bugs/bangs, but not as the ultimate and only tool you have to use (I
 think dsimcha was trying to say that there are more costly-effective
 tools. This can be true, but you can't be sure that is right in
 general).
 

Building reliable systems is not about trying to make components that cannot fail. It is about building a system that can TOLERATE failure of any of its components. It's how you build safe systems from UNRELIABLE parts. And all parts are unreliable. All of them. Really. All of them.

Agreed. You can make a system that tolerates failures (e.g. does not do something damaging), but that doesn't make it acceptable (e.g. it may no longer do what it's supposed to). Pure redundancy aside, if a part's failure didn't degrade the system, you would remove that part from the design. Making parts more reliable increases the amount of time the system spends in a non-degraded state.

--
... <IXOYE><
Jul 28 2010
prev sibling parent reply "Jim Balter" <Jim Balter.name> writes:
"Walter Bright" <newshound2 digitalmars.com> wrote in message 
news:i2nkto$8ug$1 digitalmars.com...
 bearophile wrote:
 You have to think about proofs as another (costly) tool to avoid 
 bugs/bangs,
 but not as the ultimate and only tool you have to use (I think dsimcha 
 was
 trying to say that there are more costly-effective tools. This can be 
 true,
 but you can't be sure that is right in general).

I want to re-emphasize the point that keeps getting missed. Building reliable systems is not about trying to make components that cannot fail. It is about building a system that can TOLERATE failure of any of its components. It's how you build safe systems from UNRELIABLE parts. And all parts are unreliable. All of them. Really. All of them.

You're being religious about this and arguing against a strawman. While all parts are unreliable, they aren't *equally* unreliable. Unit tests, contract programming, memory safe access, and other reliability techniques, *including correctness proofs*, all increase reliability. On the flip side, you can't guarantee reliability with simplistic rules like "no continuing after an exception". Numerous (relatively) reliable systems have demonstrated that religion to be a myth as well.
Jul 28 2010
next sibling parent Walter Bright <newshound2 digitalmars.com> writes:
Jim Balter wrote:
 You're being religious about this and arguing against a strawman. While 
 all parts are unreliable, they aren't *equally* unreliable.

They don't have to have equal reliability in order for redundancy to be very effective.
 Unit tests, 
 contract programming, memory safe access, and other reliability 
 techniques, *including correctness proofs*, all increase reliability.

True, but the problem is when one is seduced by that into thinking that redundancy is not necessary.
 On the flip side, you can't guarantee reliability with simplistic rules 
 like "no continuing after an exception".

Of course, but you can guarantee unreliability by thinking one can continue after an exception thrown by a programming error. (In engineering, one can never "guarantee" reliability anyway. What one does is set a maximum failure rate, and prove a design is more reliable than that.) Blindly applying rules without using one's brain is bad, and ignoring rules without thoroughly understanding their rationale is equally bad.
 Numerous (relatively) reliable systems have demonstrated that religion to be
 a myth as well.

If there's an interesting example here, please tell me about it! As for the religion aspect, please consider that I get this from my experience with how airliners are designed. I think there can be little doubt that these techniques (religion) are extremely effective in producing incredibly reliable and safe airline travel.
Jul 28 2010
prev sibling next sibling parent reply Don <nospam nospam.com> writes:
Jim Balter wrote:
 
 "Walter Bright" <newshound2 digitalmars.com> wrote in message 
 news:i2nkto$8ug$1 digitalmars.com...
 bearophile wrote:
 You have to think about proofs as another (costly) tool to avoid 
 bugs/bangs,
 but not as the ultimate and only tool you have to use (I think 
 dsimcha was
 trying to say that there are more costly-effective tools. This can be 
 true,
 but you can't be sure that is right in general).

I want to re-emphasize the point that keeps getting missed. Building reliable systems is not about trying to make components that cannot fail. It is about building a system that can TOLERATE failure of any of its components. It's how you build safe systems from UNRELIABLE parts. And all parts are unreliable. All of them. Really. All of them.

You're being religious about this and arguing against a strawman. While all parts are unreliable, they aren't *equally* unreliable. Unit tests, contract programming, memory safe access, and other reliability techniques, *including correctness proofs*, all increase reliability.

I have to disagree with that. "Correctness proofs" are based on a total fallacy. Attempting to prove that a program is correct (on a real machine, as opposed to a theoretical one) is utterly ridiculous. I'm genuinely astonished that such an absurd idea ever had any traction.
Jul 29 2010
next sibling parent reply dsimcha <dsimcha yahoo.com> writes:
== Quote from Don (nospam nospam.com)'s article
 Jim Balter wrote:
 "Walter Bright" <newshound2 digitalmars.com> wrote in message
 news:i2nkto$8ug$1 digitalmars.com...
 bearophile wrote:
 You have to think about proofs as another (costly) tool to avoid
 bugs/bangs,
 but not as the ultimate and only tool you have to use (I think
 dsimcha was
 trying to say that there are more costly-effective tools. This can be
 true,
 but you can't be sure that is right in general).

I want to re-emphasize the point that keeps getting missed. Building reliable systems is not about trying to make components that cannot fail. It is about building a system that can TOLERATE failure of any of its components. It's how you build safe systems from UNRELIABLE parts. And all parts are unreliable. All of them. Really. All of them.

You're being religious about this and arguing against a strawman. While all parts are unreliable, they aren't *equally* unreliable. Unit tests, contract programming, memory safe access, and other reliability techniques, *including correctness proofs*, all increase reliability.

I have to disagree with that. "Correctness proofs" are based on a total
fallacy. Attempting to prove that a program is correct (on a real machine, as
opposed to a theoretical one) is utterly ridiculous. I'm genuinely astonished
that such an absurd idea ever had any traction.

Yea, here's a laundry list of stuff that theory doesn't account for that can go wrong on real machines:

overflow
rounding error
compiler bugs
hardware bugs
OS bugs

I sincerely wish all my numerics code always worked if it was provably mathematically correct.
Jul 29 2010
next sibling parent reply bearophile <bearophileHUGS lycos.com> writes:
dsimcha:
 overflow

Good provers take integral overflows into account too.
 rounding error

Interval (floating point) arithmetic can be used to address a large part of this problem. I hope to see a good interval arithmetic lib for D in a few years.
 compiler bugs
 OS bugs

That software can be proven too :-) Microsoft is working on a provable OS kernel.
 hardware bugs

You can use RAM with error correction, redundancy, etc.

Proofs are one useful tool for high-reliability systems. They are not enough, of course.

"Beware of bugs in the above code; I have only proved it correct, not tried it." -- Knuth

Bye,
bearophile
Jul 29 2010
parent reply dsimcha <dsimcha yahoo.com> writes:
== Quote from bearophile (bearophileHUGS lycos.com)'s article
 dsimcha:
 overflow

 rounding error


 compiler bugs
 OS bugs

 hardware bugs

 Proofs are one useful tool for high-reliability systems. They are not enough,
 of course.
 "Beware of bugs in the above code; I have only proved it correct, not tried
 it." -- Knuth
 Bye,
 bearophile

Oh, I forgot to mention memory allocation issues:

heap fragmentation
stack overflows
just plain running out of memory
Jul 29 2010
parent BCS <none anon.com> writes:
Hello dsimcha,

 Oh, I forgot to mention memory allocation issues:

 stack overflows
 just plain running out of memory

Easy to account for
 heap fragmentation

Not so easy, but if you can show the maximum size of the allocated memory, you might be able to prove that no allocation of the n-1 parts leaves a hole too small for the nth part.

--
... <IXOYE><
Jul 31 2010
prev sibling next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
retard wrote:
 I really love digitalmars.D because this is one of the few places where 
 99% of the community has zero experience with other languages, other 
 paradigms (non-imperative), automatic theorem provers, or anything not 
 related to D. There's a whole choir against theorem proving now. The 
 funniest thing is that none of you seem to have any clue about how those 
 programs work. Has anyone except the almighty Andrei ever even downloaded 
 a theorem prover?

From the desk of: Almighty Andrei, New H(e)aven, CT

Almighty Andrei confirms hereby that he tried to use an automated theorem prover to simplify his Masters thesis work back in 2003. To his shame he even forgot its name (Coq?). Unfortunately he decided it was too difficult to set up and use, so he resorted to manual proofs. His MS thesis contains twenty-something theorems, all proven by hand.

Almighty Andrei wishes to state that his perception of automated theorem proving is that it can be useful for proving properties of systems as long as both the properties and the systems are confined enough. For interesting properties of large systems, theorem proving has notable scaling difficulties. Also, as Don mentioned, the models used by theorem provers often abstract away certain realities.

Andrei
Jul 29 2010
parent "Jim Balter" <Jim Balter.name> writes:
"Andrei Alexandrescu" <SeeWebsiteForEmail erdani.org> wrote in message 
news:i2snsk$1p93$1 digitalmars.com...
 retard wrote:
 I really love digitalmars.D because this is one of the few places where 
 99% of the community has zero experience with other languages, other 
 paradigms (non-imperative), automatic theorem provers, or anything not 
 related to D. There's a whole choir against theorem proving now. The 
 funniest thing is that none of you seem to have any clue about how those 
 programs work. Has anyone except the almighty Andrei ever even downloaded 
 a theorem prover?

 From the desk of: Almighty Andrei, New H(e)aven, CT
 Almighty Andrei confirms hereby that he tried to use an automated theorem
 prover to simplify his Masters thesis work back in 2003. To his shame he even
 forgot its name (Coq?). Unfortunately he decided it was too difficult to set
 up and use, so he resorted to manual proofs. His MS thesis contains
 twenty-something theorems, all proven by hand.
 Almighty Andrei wishes to state that his perception of automated theorem
 proving is that it can be useful for proving properties of systems as long as
 both the properties and the systems are confined enough. For interesting
 properties of large systems, theorem proving has notable scaling
 difficulties. Also, as Don mentioned, the models used by theorem provers
 often abstract away certain realities.

Which does not make the idea "absurd", or "astonishing that it ever had any traction". What Don said was stupid and ignorant, and you should be honest enough to say so.
 Andrei 

Aug 02 2010
prev sibling next sibling parent dsimcha <dsimcha gmail.com> writes:
== Quote from retard (re tard.com.invalid)'s article
 I really love digitalmars.D because this is one of the few places where
 99% of the community has zero experience with other languages, other
 paradigms (non-imperative), automatic theorem provers, or anything not
 related to D. There's a whole choir against theorem proving now. The
 funniest thing is that none of you seem to have any clue about how those
 programs work. Has anyone except the almighty Andrei ever even downloaded
 a theorem prover?

Does the Maxima computer algebra system count? It has a basic theorem prover built in, and I've played around with it a little. If this is how theorem provers work, then they have a long way to go.
Jul 29 2010
prev sibling next sibling parent Walter Bright <newshound2 digitalmars.com> writes:
retard wrote:
 Has anyone except the almighty Andrei ever even downloaded 
 a theorem prover?

That's *A*lmighty Andrei, note the caps. Please show due respect.
Jul 29 2010
prev sibling next sibling parent =?UTF-8?B?IkrDqXLDtG1lIE0uIEJlcmdlciI=?= <jeberger free.fr> writes:

retard wrote:
 I really love digitalmars.D because this is one of the few places where
 99% of the community has zero experience with other languages, other
 paradigms (non-imperative), automatic theorem provers, or anything not
 related to D. There's a whole choir against theorem proving now. The
 funniest thing is that none of you seem to have any clue about how those
 programs work. Has anyone except the almighty Andrei ever even downloaded
 a theorem prover?
I have lots of experience with python. I have tried several other languages including Haskell (hopelessly broken), caml (broken too but maybe not hopeless), sml, felix, nimrod, and cyclone, java, php, C and C++.

As far as formal proving of software is concerned, I have yet to find a *free* solution worth the trouble. OTOH, I have had some experience with Klocwork and my opinion is neither as strongly in favour as you seem to be, nor as strongly against as you imply we are. Basically:

- Even a tool like Klocwork *cannot* prove that your code is 100% correct and will never fail;
- However, it *will* find bugs in your code that you wouldn't have found any other way.

Whether that's worth the cost depends on how reliable you want your software to be and how confident you are of the quality of your devs.

	Jerome
--
mailto:jeberger free.fr
http://jeberger.free.fr
Jabber: jeberger jabber.fr
Jul 30 2010
prev sibling next sibling parent BCS <none anon.com> writes:
Hello dsimcha,

 Yea, here's a laundry list of stuff that theory doesn't account for
 that can go wrong on real machines:
 
 overflow

Theory can
 rounding error

Theory has: A mechanically checked proof of the AMD5K 86 floating-point division program: http://ieeexplore.ieee.org/Xplore/login.jsp?url=http://ieeexplore.ieee.org/iel4/12/15456/00713311.pdf%3Farnumber%3D713311&authDecision=-203 Any proof that fails to take those into account is flawed.
 compiler bugs
 hardware bugs
 OS bugs

OTOH A proof might be able to derive a unit test that will find most relevant bugs
 I sincerely wish all my numerics code always worked if it was provably
 mathematically correct.
 

... <IXOYE><
Jul 31 2010
prev sibling parent "Jim Balter" <Jim Balter.name> writes:
"dsimcha" <dsimcha yahoo.com> wrote in message 
news:i2rvar$6o0$1 digitalmars.com...
 == Quote from Don (nospam nospam.com)'s article
 Jim Balter wrote:
 "Walter Bright" <newshound2 digitalmars.com> wrote in message
 news:i2nkto$8ug$1 digitalmars.com...
 bearophile wrote:
 You have to think about proofs as another (costly) tool to avoid
 bugs/bangs,
 but not as the ultimate and only tool you have to use (I think
 dsimcha was
 trying to say that there are more costly-effective tools. This can be
 true,
 but you can't be sure that is right in general).

I want to re-emphasize the point that keeps getting missed. Building reliable systems is not about trying to make components that cannot fail. It is about building a system that can TOLERATE failure of any of its components. It's how you build safe systems from UNRELIABLE parts. And all parts are unreliable. All of them. Really. All of them.

You're being religious about this and arguing against a strawman. While all parts are unreliable, they aren't *equally* unreliable. Unit tests, contract programming, memory safe access, and other reliability techniques, *including correctness proofs*, all increase reliability.
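As a rough illustration of the contract-programming technique in that list: the sketch below is a made-up example (an `int_sqrt` invented for illustration, written in Python for brevity; D expresses the same idea natively with `in`/`out` blocks). The point is that explicit pre- and postconditions narrow where a fault can hide without promising perfection.

```python
def int_sqrt(n):
    """Integer square root with explicit contract-style checks."""
    # precondition: caller must supply a non-negative integer
    assert isinstance(n, int) and n >= 0, "precondition: n is a non-negative int"
    r = 0
    while (r + 1) * (r + 1) <= n:
        r += 1
    # postcondition: r is floor(sqrt(n)), checked before returning
    assert r * r <= n < (r + 1) * (r + 1), "postcondition violated"
    return r

print(int_sqrt(10))  # 3
```

A violated contract fails close to the defect, which is the sense in which each technique on the list "increases reliability" even though none eliminates failure.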

fallacy. Attempting to prove that a program is correct (on a real machine, as opposed to a theoretical one) is utterly ridiculous. I'm genuinely astonished that such an absurd idea ever had any traction.

Yea, here's a laundry list of stuff that theory doesn't account for that can go wrong on real machines:

overflow
rounding error
compiler bugs
hardware bugs
OS bugs

I sincerely wish all my numerics code always worked if it was provably mathematically correct.

I have no idea why any rational person would think that this shows that correctness proofs don't increase reliability.
Aug 02 2010
prev sibling next sibling parent reply Don <nospam nospam.com> writes:
retard wrote:
 Thu, 29 Jul 2010 12:11:21 +0200, Don wrote:
 
 Jim Balter wrote:
 "Walter Bright" <newshound2 digitalmars.com> wrote in message
 news:i2nkto$8ug$1 digitalmars.com...
 bearophile wrote:
 You have to think about proofs as another (costly) tool to avoid
 bugs/bangs,
 but not as the ultimate and only tool you have to use (I think
 dsimcha was
 trying to say that there are more costly-effective tools. This can be
 true,
 but you can't be sure that is right in general).

Building reliable systems is not about trying to make components that cannot fail. It is about building a system that can TOLERATE failure of any of its components. It's how you build safe systems from UNRELIABLE parts. And all parts are unreliable. All of them. Really. All of them.

all parts are unreliable, they aren't *equally* unreliable. Unit tests, contract programming, memory safe access, and other reliability techniques, *including correctness proofs*, all increase reliability.

fallacy. Attempting to prove that a program is correct (on a real machine, as opposed to a theoretical one) is utterly ridiculous. I'm genuinely astonished that such an absurd idea ever had any traction.

What's your favorite then? 100% unit test coverage?

A completely independent code reviewer helps, too. It really helps if you're aggressively trying to break the code.

I agree with Walter's statement that ALL of the components are unreliable, and I think it's important to realize that proofs are the same. Even in the case where the program perfectly implements the algorithm, there can be bugs in the proof. Very many mathematical proofs published in peer-reviewed journals have subsequently been found to have errors in them.

I think the value in 'correctness proofs' mainly comes from the fact that they have some kind of independence from the code. In practice, you run your code, and find bugs in the proof...

My feeling is that the value of any reliability technique depends on how independent it is from the code you're testing.
Jul 29 2010
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
Don wrote:
 I agree with Walter's statement that ALL of the components are 
 unreliable, and I think it's important to realize that proofs are the 
 same. Even in the case where the program perfectly implements the 
 algorithm, there can be bugs in the proof.

Also, the hardware running the correct program can fail.
Jul 29 2010
parent Bruno Medeiros <brunodomedeiros+spam com.gmail> writes:
On 29/07/2010 19:49, Walter Bright wrote:
 Don wrote:
 I agree with Walter's statement that ALL of the components are
 unreliable, and I think it's important to realize that proofs are the
 same. Even in the case where the program perfectly implements the
 algorithm, there can be bugs in the proof.

Also, the hardware running the correct program can fail.

Yes, but that's a different issue. It would still be of value to know that the program is correct.

For example, you could make a reliable system by having several different pieces of hardware run the same program and compare the results. This is similar to what you said before about achieving redundancy, but here you would not need other separate teams to write different programs to compute the same thing, which obviously would be a great saving in effort.

I'm not going to argue if it is possible, or practical, or whatever, to know for sure that your program is correct. My point is just that knowing with absolute certainty that a program is correct would still be quite valuable, regardless of the fact that hardware, or other programs, systems, etc., could fail.

--
Bruno Medeiros - Software Engineer
Aug 03 2010
prev sibling parent BCS <none anon.com> writes:
Hello Don,

 I agree with Walter's statement that ALL of the components are
 unreliable, and I think it's important to realize that proofs are the
 same.
 

That's where automatic proof checkers come in... -- ... <IXOYE><
Jul 30 2010
prev sibling next sibling parent reply =?UTF-8?B?IkrDqXLDtG1lIE0uIEJlcmdlciI=?= <jeberger free.fr> writes:

Don wrote:
 I have to disagree with that. "Correctness proofs" are based on a total
 fallacy. Attempting to prove that a program is correct (on a real
 machine, as opposed to a theoretical one) is utterly ridiculous.
 I'm genuinely astonished that such an absurd idea ever had any traction.
The idea isn't absurd, but you need to use it properly. Saying "I have run a correctness prover on my code so there aren't any bugs" is a fallacy, but so is "I have run unit tests with 100% coverage so there aren't any bugs".

The point of correctness proofs isn't that they will find *all* the bugs, but rather that they will find a completely different category of bugs than testing. So you shouldn't run a correctness prover on your code to prove that there aren't any bugs, but simply to find some of the bugs (and you can find a lot of bugs with such a tool).

	Jerome
--
mailto:jeberger free.fr
http://jeberger.free.fr
Jabber: jeberger jabber.fr
Jul 29 2010
next sibling parent reply Don <nospam nospam.com> writes:
Jérôme M. Berger wrote:
 Don wrote:
 I have to disagree with that. "Correctness proofs" are based on a total
 fallacy. Attempting to prove that a program is correct (on a real
 machine, as opposed to a theoretical one) is utterly ridiculous.
 I'm genuinely astonished that such an absurd idea ever had any traction.

The idea isn't absurd, but you need to use it properly. Saying "I have run a correctness prover on my code so there aren't any bugs" is a fallacy, but so is "I have run unit tests with 100% coverage so there aren't any bugs". The point of correctness proofs isn't that they will find *all* the bugs, but rather that they will find a completely different category of bugs than testing. So you shouldn't run a correctness prover on your code to prove that there aren't any bugs, but simply to find some of the bugs (and you can find a lot of bugs with such a tool). Jerome

You can certainly catch bugs with that technique, but the word "proof" has no business being there. It's like the "unsinkable" Titanic. (I think it's really similar, actually. Apparently the only reason the Titanic sank was that many of the rivets were defective.)

My recollection from university courses was not that "proofs will find bugs in your programs". Rather, it was that "proofs will ensure your program is correct".

The reason I think it's absurd is that (AFAIK) no other modern engineering discipline has attempted to rely on correctness proofs.
Jul 29 2010
next sibling parent =?UTF-8?B?IkrDqXLDtG1lIE0uIEJlcmdlciI=?= <jeberger free.fr> writes:

Don wrote:
 You can certainly catch bugs with that technique, but the word "proof"
 has no business being there. It's like the "unsinkable" Titanic.
 (I think it's really similar, actually. Apparently the only reason the
 Titanic sank was that many of the rivets were defective).

Actually, the word "proof" is perfectly appropriate. You just have to remember that *any* proof relies on some starting hypothesis.

What a theorem prover does is, given a certain set of starting conditions and a certain set of assertions to check, it will tell you:

- Either that the assertions will fail, in which case it will provide an example of valid starting conditions and an execution path that causes the failure;
- Or that the assertion will always pass. In which case, barring bugs in the theorem prover, you can be 100% sure that so long as the starting hypotheses hold then the assertion will pass (compiler and hardware reliability are always part of the starting hypothesis);
- Or that it cannot conclude.

What I just said refers to the *theorem* prover itself. A *software* prover (like Klocwork) contains some extra code to extract some starting hypotheses and some assertions from your source and give them to the theorem prover, then post-process the results to make them usable. This has several important implications:

- Usually, the software prover will not make the distinction between options 2 and 3: if an assertion cannot be proven to fail, then it will be silently assumed to pass;
- Everything depends on the assertions that the software prover uses. Some of them are easy to define (when you dereference a pointer then that pointer must be valid, or if the source code contains a call to "assert" then that "assert" must pass), but some are a lot less obvious (how do you define that "the output must make sense"?)
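The first two outcomes described above can be mimicked in miniature by exhaustively checking an assertion over a bounded domain. This is only a toy sketch (the function and property are invented for illustration; a real prover reasons symbolically rather than by enumeration): it either produces a concrete counterexample or reports that the assertion holds, but only under the stated hypothesis, here that inputs lie in the finite range supplied.

```python
def bounded_check(assertion, domain):
    """Return (False, counterexample) or (True, None) over a finite domain."""
    for x in domain:
        if not assertion(x):
            return (False, x)   # a failing "execution path" found
    return (True, None)         # holds for every value considered

# Claim: x*x >= x. Holds for all integers, but this "proof" is only
# as wide as the domain we feed in.
ok, cex = bounded_check(lambda x: x * x >= x, range(-1000, 1000))
print(ok, cex)    # True None

# Claim: x*x != 2*x. Fails, and we get a concrete counterexample back.
ok2, cex2 = bounded_check(lambda x: x * x != 2 * x, range(-10, 10))
print(ok2, cex2)  # False 0
```

The "cannot conclude" outcome has no analogue here; it arises precisely because real provers work over unbounded domains where enumeration is impossible.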
 My recollection from university courses was not that "proofs will find
 bugs in your programs". Rather, it was that "proofs will ensure your
 program is correct".

I don't know what you were taught about theorem provers, but if that's the approach your teachers took, then they were wrong (unfortunately, this is a kind of mistake teachers are very often prone to).

	Jerome
--
mailto:jeberger free.fr
http://jeberger.free.fr
Jabber: jeberger jabber.fr
Jul 30 2010
prev sibling parent reply BCS <none anon.com> writes:
Hello Don,

 Jérôme M. Berger wrote:
 
 Don wrote:
 
 I have to disagree with that. "Correctness proofs" are based on a
 total
 fallacy. Attempting to prove that a program is correct (on a real
 machine, as opposed to a theoretical one) is utterly ridiculous.
 I'm genuinely astonished that such an absurd idea ever had any
 traction.

have run a correctness prover on my code so there aren't any bugs" is a fallacy, but so is "I have run unit tests with 100% coverage so there aren't any bugs". The point of correctness proofs isn't that they will find *all* the bugs, but rather that they will find a completely different category of bugs than testing. So you shouldn't run a correctness prover on your code to prove that there aren't any bugs, but simply to find some of the bugs (and you can find a lot of bugs with such a tool). Jerome

has no business being there. It's like the "unsinkable" Titanic. (I think it's really similar, actually. Apparently the only reason the Titanic sank was that many of the rivets were defective). My recollection from university courses was not that "proofs will find bugs in your programs". Rather, it was that "proofs will ensure your program is correct".

I think the best a correctness proof can do is assert that, under a set of assumptions (including things like the machine working correctly), the program will do certain things and will not do certain things. E.g. all memory allocations have exactly one owner at all times and nothing is ever de-allocated that is an owner of anything.

There are several issues with this: first, it can't be proven that the things the program will and will not do are the correct things. Also, it is very costly ( >1 man-year per kLOC).
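The single-owner invariant in that example can at least be checked dynamically for one run. A toy sketch (the event names and trace format are invented for illustration): replay a trace of alloc/transfer/free events and assert that every live allocation has exactly one owner at all times. A prover would establish this for *all* traces; this checks just one.

```python
def check_single_owner(events):
    """Replay (op, alloc_id, actor) events, enforcing single ownership."""
    owner = {}                                  # allocation id -> current owner
    for op, alloc, who in events:
        if op == "alloc":
            assert alloc not in owner, "double allocation"
            owner[alloc] = who
        elif op == "transfer":
            assert alloc in owner, "transfer of unowned block"
            owner[alloc] = who                  # exactly one owner at all times
        elif op == "free":
            assert owner.get(alloc) == who, "free by non-owner"
            del owner[alloc]
    return owner                                # anything left over is a leak

trace = [("alloc", 1, "a"), ("transfer", 1, "b"), ("free", 1, "b")]
print(check_single_owner(trace))  # {} -> no leaks, invariant held
```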
 The reason I think it's absurd is that (AFAIK) no other modern
 engineering discipline has attempted to rely on correctness proofs.
 

Every engineering discipline I have any experience with gets a heck of a lot closer to producing formal proofs of correctness than programing. (BTW: I don't consider CS as generally practiced to be an engineering discipline.) -- ... <IXOYE><
Jul 31 2010
parent reply Walter Bright <newshound2 digitalmars.com> writes:
BCS wrote:
 Every engineering discipline I have any experience with gets a heck of a 
 lot closer to producing formal proofs of correctness than programing. 

Mechanical engineering designs also tend to be a lot simpler than programs, although the environment they work in is far more complex.

Modeling for the design analysis also takes a very simplified view of the actual design, justified by taking the worst case. For example, the strength calculations are done for the weakest cross section, and are not bothered with for the obviously stronger sections.

Furthermore, after a while a good mechanical engineer develops a "feel" for things that is pretty darned accurate. Going through the analysis is a backup for that and is used to fine tune the design.
Jul 31 2010
next sibling parent "Mike James" <foo bar.com> writes:
"Walter Bright" <newshound2 digitalmars.com> wrote in message 
news:i31qr6$4ma$1 digitalmars.com...
 BCS wrote:
 Every engineering discipline I have any experience with gets a heck of a 
 lot closer to producing formal proofs of correctness than programing.

Mechanical engineering designs also tend to be a lot simpler than programs, although the environment they work in is far more complex. Modeling for the design analysis also takes a very simplified view of the actual design, justified by taking the worst case. For example, the strength calculations are done for the weakest cross section, and are not bothered with for the obviously stronger sections. Furthermore, after a while a good mechanical engineer develops a "feel" for things that is pretty darned accurate. Going through the analysis is a backup for that and is used to fine tune the design.

Sometimes it gets complicated... :-) http://mysite.du.edu/~etuttle/rail/lock.htm -=mike=-
Jul 31 2010
prev sibling parent reply BCS <none anon.com> writes:
Hello Walter,

 BCS wrote:
 
 Every engineering discipline I have any experience with gets a heck
 of a lot closer to producing formal proofs of correctness than
 programing.
 

programs, although the environment they work in is far more complex. Modeling for the design analysis also takes a very simplified view of the actual design, justified by taking the worst case. For example, the strength calculations are done for the weakest cross section, and are not bothered with for the obviously stronger sections.

Nowadays they just jump to using finite elements and compute everything.
 
 Furthermore, after a while a good mechanical engineer develops a
 "feel" for things that is pretty darned accurate. Going through the
 analysis is a backup

No, the analysis is mandated, by code if not law. -- ... <IXOYE><
Jul 31 2010
parent reply Walter Bright <newshound2 digitalmars.com> writes:
BCS wrote:
 Hello Walter,
 
 BCS wrote:

 Every engineering discipline I have any experience with gets a heck
 of a lot closer to producing formal proofs of correctness than
 programing.

programs, although the environment they work in is far more complex. Modeling for the design analysis also takes a very simplified view of the actual design, justified by taking the worst case. For example, the strength calculations are done for the weakest cross section, and are not bothered with for the obviously stronger sections.

 Nowadays they just jump to using finite elements and compute everything.

I still see calcs submitted for approval that are done by hand on paper. If you want to see real seat-of-the-pants engineering, look at one of those hot rod shows like Musclecar. I don't think those guys have ever even seen a calculator.
 Furthermore, after a while a good mechanical engineer develops a
 "feel" for things that is pretty darned accurate. Going through the
 analysis is a backup

No, the analysis is mandated, by code if not law.

Not much. Even for buildings, only a few critical spots need checking. This is possible because building structures are usually way over-designed, because it's cheap and convenient to do so. Where every gram counts, like in a spacecraft, everything is analyzed.

I once had a fire hydrant installed on my property. The city required an engineering analysis, which ran to quite a stack of paper. After approval, the workers came by to install it. They never looked at the analysis, or even the drawings, they just dug up the water main and stuck a hydrant on it with a specialized tool they had. Done in an hour or so.

The "engineering analysis" was a freakin' joke.
Jul 31 2010
parent reply BCS <none anon.com> writes:
Hello Walter,

 BCS wrote:
 
 Hello Walter,
 
 BCS wrote:
 
 Every engineering discipline I have any experience with gets a heck
 of a lot closer to producing formal proofs of correctness than
 programing.
 

programs, although the environment they work in is far more complex. Modeling for the design analysis also takes a very simplified view of the actual design, justified by taking the worst case. For example, the strength calculations are done for the weakest cross section, and are not bothered with for the obviously stronger sections.

everything.

paper. If you want to see real seat of the pants engineering, look at one of those hot rod shows like Musclecar. I don't think those guys have ever even seen a calculator.

and anyone who knows what they are doing should be able to clean up... but where's the fun in that.
 Furthermore, after a while a good mechanical engineer develops a
 "feel" for things that is pretty darned accurate. Going through the
 analysis is a backup
 


This is possible because building structures are usually way over-designed, because it's cheap and convenient to do so. Where every gram counts, like in a spacecraft, everything is analyzed.

Mostly they avoid doing detailed analyses by reducing things to already solved problems: i.e. they do what the building code says, or look up the accepted values, or follow the best practices. These sources can be treated as theorems: under conditions X, Y and Z, if you satisfy constraints A, B and C, things don't break. Thus we have design by modus ponens.
 I once had a fire hydrant installed on my property. The city required
 an engineering analysis, which ran to quite a stack of paper. After
 approval, the workers came by to install it. They never looked at the
 analysis, or even the drawings, they just dug up the water main and
 stuck a hydrant on it with a specialized tool they had. Done in an
 hour or so.
 

I'd almost bet that buried somewhere in the fine print of the "engineering analysis" was the assertion "the standard way works" or the same things in 10 times the words. -- ... <IXOYE><
Jul 31 2010
parent reply Walter Bright <newshound2 digitalmars.com> writes:
BCS wrote:
 I once had a fire hydrant installed on my property. The city required
 an engineering analysis, which ran to quite a stack of paper. After
 approval, the workers came by to install it. They never looked at the
 analysis, or even the drawings, they just dug up the water main and
 stuck a hydrant on it with a specialized tool they had. Done in an
 hour or so.

"engineering analysis" was the assertion "the standard way works" or the same things in 10 times the words.

It was painfully obvious that this was nothing more than a money-making scheme for the water utility. It colluded with the city to get those regs written, so they could literally quintuple the cost of a hydrant install and one had no choice but pay.
Jul 31 2010
parent reply Jeff Nowakowski <jeff dilacero.org> writes:
On 08/01/2010 02:35 AM, Walter Bright wrote:
 BCS wrote:
 I once had a fire hydrant installed on my property. The city required
 an engineering analysis, which ran to quite a stack of paper. After
 approval, the workers came by to install it. They never looked at the
 analysis, or even the drawings, they just dug up the water main and
 stuck a hydrant on it with a specialized tool they had. Done in an
 hour or so.

"engineering analysis" was the assertion "the standard way works" or the same things in 10 times the words.

It was painfully obvious that this was nothing more than a money-making scheme for the water utility. It colluded with the city to get those regs written, so they could literally quintuple the cost of a hydrant install and one had no choice but pay.

It's possible that's the reason. Then again, regulations are often in response to an accident.

You also make a big deal about them not looking at the analysis. Couldn't they have already seen a copy before they went to the site? This is obviously planned work, so you'd think they'd have a meeting beforehand.

As BCS said, if the stack of paper is due diligence with the conclusion "the standard way works", I don't see how you can tell from your experience.
Aug 01 2010
parent Walter Bright <newshound2 digitalmars.com> writes:
Jeff Nowakowski wrote:
 On 08/01/2010 02:35 AM, Walter Bright wrote:
 BCS wrote:
 I once had a fire hydrant installed on my property. The city required
 an engineering analysis, which ran to quite a stack of paper. After
 approval, the workers came by to install it. They never looked at the
 analysis, or even the drawings, they just dug up the water main and
 stuck a hydrant on it with a specialized tool they had. Done in an
 hour or so.

"engineering analysis" was the assertion "the standard way works" or the same things in 10 times the words.

It was painfully obvious that this was nothing more than a money-making scheme for the water utility. It colluded with the city to get those regs written, so they could literally quintuple the cost of a hydrant install and one had no choice but pay.

It's possible that's the reason. Then again, regulations are often in response to an accident. You also make a big deal about them not looking at the analysis. Couldn't they have already seen a copy before they went to the site? This is obviously planned work, so you'd think they'd have a meeting beforehand.

I talked to them. They had no knowledge of the analysis. But they did know how to install fire hydrants.
 As BCS said, if the stack of paper is due diligence with the conclusion 
 "the standard way works", I don't see how you can tell from your 
 experience.

Bah, one glance at the location would show it would be a standard install and all it would have cost was the xerox copy.
Aug 01 2010
prev sibling parent reply Jonathan M Davis <jmdavisprog gmail.com> writes:
On Thursday, July 29, 2010 12:34:35 Don wrote:
 The reason I think it's absurd is that (AFAIK) no other modern
 engineering discipline has attempted to rely on correctness proofs.

Really, the reason that you even _can_ attempt such proofs is because computer science is effectively applied mathematics. No other engineering discipline is so purely mathematical. The others have to deal with real world physics, so while they could undoubtedly prove _some_ stuff about what they do, it's not like you could ever prove that your bridge will never collapse. Best case, you could show that it wouldn't collapse under certain types of stress. And since what you're doing is so thoroughly rooted in the physical world (with it being a physical structure built out of physical materials), it really doesn't make much sense to try and prove much about it, since that would all be rooted in theory.

Computer science, on the other hand, being thoroughly rooted in mathematics and generally having very little to do with physical reality, allows for such proofs - even begs for them in at least some cases, because that's the sort of thing that you do in math. However, even if you are able to correctly do such proofs, programs have to run on a physical machine at _some_ point in order to be worth anything, and then many of the assumptions of such proofs fall apart (due to hardware defects or limitations or whatnot). Also, it's not all that hard for a program to become complex enough that doing a proof for it is completely unreasonable if not outright infeasible.

I'd be tempted to at least consider proofs as _one_ of the steps in ensuring a correct and safe program for something like a plane or the space shuttle where the cost of failure is extremely high. But relying on such proofs would be unwise (at best, they're just one of the tools in your toolbox), and the cost is prohibitive enough that there's no point in considering it for most projects. Proofs are mainly used by academic types, and you're really not likely to see them much elsewhere. They're just too costly to do, and actually proving your program correct only gets you so far even if you can do it. That's probably a good chunk of the reason why the only time that you've really run into such proofs is with academic types interested in proving program correctness for the sake of correctness, rather than programmers looking to use them as a tool to help ensure that they have safe and properly debugged code.

So, personally, I don't think that they're absurd, but it is unwise to rely on them (particularly when it's so easy to get them wrong), and most of the time, there's no point in dealing with them.

- Jonathan M Davis
Jul 29 2010
parent BCS <none anon.com> writes:
Hello Jonathan,

 On Thursday, July 29, 2010 12:34:35 Don wrote:
 
 The reason I think it's absurd is that (AFAIK) no other modern
 engineering discipline has attempted to rely on correctness proofs.
 

computer science is effectively applied mathematics. No other engineering discipline is so purely mathematical. They have have to deal with real world physics, so while they could undoubtedly prove _some_ stuff about what they do, it's not like you could ever prove something like your bridge will never collapse. Best case, you could show that it wouldn't collapse under certain types of stress. And since what you're doing is so thoroughly rooted in the physical world (with it being a physical structure built out of physical materials), it really doesn't make much sense to try and prove much about since that would all be rooted in theory. Computer science, on the other hand, being thoroughly rooted in mathematics and generally having very little with physical reality allows for such proofs - even begs for them in at least some cases because that's the sort of thing that you do in math. However, even if you are able to correctly do such proofs, programs have to run on a physical machine at _some_ point in order to be worth anything, and then many of the assumptions of such proofs fall apart (due to hardware defects or limitations or whatnot). Also, it's not all that hard for a program to become complex enough that doing a proof for it is completely unreasonable if not outright infeasible.

I agree with you in theory but not in practice. In mechanical or civil engineering, yes, there are huge amounts of very ill-defined interactions and properties, and in CS there are well-defined abstractions that are very nearly ideal. However, most programs are written and debugged empirically; "if it runs all the test cases, it's correct."

In the engineering disciplines OTOH, it is more or less required by law that a "semi-formal proof" be done for virtually every aspect of a design (keep reading :). The procedure in a nutshell is: make a (mostly) formal statement of the success criteria, pick a bunch of (conservative) assumptions (generally from well vetted research), and then formally show that under those assumptions, you have satisfied the success criteria.

Now I will grant that (aside from the last step) this is a far cry from a formal math proof, but in CS the success criteria are rarely more than informal, most projects don't even bother listing their assumptions, and rarely is there any proof, formal or informal, that the success criteria were met. Ironic.

--
... <IXOYE><
Jul 31 2010
prev sibling next sibling parent reply "Jim Balter" <Jim Balter.name> writes:
"Don" <nospam nospam.com> wrote in message 
news:i2rk4b$2jet$1 digitalmars.com...
 Jim Balter wrote:
 "Walter Bright" <newshound2 digitalmars.com> wrote in message 
 news:i2nkto$8ug$1 digitalmars.com...
 bearophile wrote:
 You have to think about proofs as another (costly) tool to avoid 
 bugs/bangs,
 but not as the ultimate and only tool you have to use (I think dsimcha 
 was
 trying to say that there are more costly-effective tools. This can be 
 true,
 but you can't be sure that is right in general).

I want to re-emphasize the point that keeps getting missed. Building reliable systems is not about trying to make components that cannot fail. It is about building a system that can TOLERATE failure of any of its components. It's how you build safe systems from UNRELIABLE parts. And all parts are unreliable. All of them. Really. All of them.

You're being religious about this and arguing against a strawman. While all parts are unreliable, they aren't *equally* unreliable. Unit tests, contract programming, memory safe access, and other reliability techniques, *including correctness proofs*, all increase reliability.

I have to disagree with that. "Correctness proofs" are based on a total fallacy. Attempting to prove that a program is correct (on a real machine, as opposed to a theoretical one) is utterly ridiculous. I'm genuinely astonished that such an absurd idea ever had any traction.

I've also heard from people genuinely astonished that such an absurd idea as that humans evolved from worms ever had any traction. I'm equally impressed by both. For a more rational discussion, see http://c2.com/cgi/wiki?ProofOfCorrectness
Aug 02 2010
parent BCS <none anon.com> writes:
Hello Jim,

 "Don" <nospam nospam.com> wrote in message
 news:i2rk4b$2jet$1 digitalmars.com...
 
 Jim Balter wrote:
 
 "Walter Bright" <newshound2 digitalmars.com> wrote in message
 news:i2nkto$8ug$1 digitalmars.com...
 
 bearophile wrote:
 
 You have to think about proofs as another (costly) tool to avoid
 bugs/bangs,
 but not as the ultimate and only tool you have to use (I think
 dsimcha
 was
 trying to say that there are more costly-effective tools. This can
 be
 true,
 but you can't be sure that is right in general).

Building reliable systems is not about trying to make components that cannot fail. It is about building a system that can TOLERATE failure of any of its components. It's how you build safe systems from UNRELIABLE parts. And all parts are unreliable. All of them. Really. All of them.

While all parts are unreliable, they aren't *equally* unreliable. Unit tests, contract programming, memory safe access, and other reliability techniques, *including correctness proofs*, all increase reliability.

I have to disagree with that. "Correctness proofs" are based on a total fallacy. Attempting to prove that a program is correct (on a real machine, as opposed to a theoretical one) is utterly ridiculous. I'm genuinely astonished that such an absurd idea ever had any traction.

I've also heard from people genuinely astonished that such an absurd idea as that humans evolved from worms ever had any traction.

Well, it is kinda absurd (statistically) unless you assume the existence of "god" is absurd, at which point the anthropic principle more or less forces you into accepting it. So, I guess that, by dragging in theology, you are asserting by analogy that the idea of proving a program correct may or may not be absurd, depending on what axioms you start with? -- ... <IXOYE><
Aug 03 2010
prev sibling parent reply Don <nospam nospam.com> writes:
retard wrote:
 Tue, 03 Aug 2010 10:43:44 +0200, Don wrote:
 
 Jim Balter wrote:
 "Jonathan M Davis" <jmdavisprog gmail.com> wrote in message
 news:mailman.50.1280425209.13841.digitalmars-d puremagic.com...
 On Thursday, July 29, 2010 03:11:21 Don wrote:
 I have to disagree with that. "Correctness proofs" are based on a
 total fallacy. Attempting to prove that a program is correct (on a
 real machine, as opposed to a theoretical one) is utterly ridiculous.
 I'm genuinely astonished that such an absurd idea ever had any
 traction.

It's likely because the idea of being able to _prove_ that your program is correct is rather seductive. As it is, you never know for sure whether you found all of the bugs or not. So, I think that it's quite understandable that the academic types at the least have been interested in it. I'd also have expected folks like Boeing to have some interest in it. However, since it's so time consuming, error prone, and ultimately not enough even if you do it right, it just doesn't make sense to do it in most cases, and it's never a good idea to rely on it. Given how computer science is effectively applied mathematics, I don't think that it's at all surprising or ridiculous that correctness proofs have been tried (if anything, it would be totally natural and obvious to a lot of math folks), but I do think that it's fairly clear that they aren't really a solution for a whole host of reasons. - Jonathan M Davis

You've snipped the context; if you go back and look at what Don was disagreeing with, it was simply that correctness proofs are *one tool among many* for increasing program reliability -- no one claimed that they are "a solution". The claim that they are "based on a total fallacy" is grossly ignorant.

No, I was arguing against the idea that they are a total solution. They were presented as such in the mid-80's. The idea was that you developed the program and its proof simultaneously, "with the proof usually leading the way". If you have proved the program is correct, of course you don't need anything else to improve reliability. I see this as a very dangerous idea, which was I think Walter's original point: you cannot completely trust ANYTHING.

The issues Walter mentioned are almost orthogonal to traditional proofs. You can prove some upper bounds for resource usage or guarantee that the software doesn't crash or hang _on an ideal model of the computer_.

Exactly. If
 you have two memory modules and shoot one of them to pieces with a
 double-barreled shotgun or M16, proving the correctness of a quicksort doesn't 
 guarantee that your sorted data won't become corrupted. What you need is 
 fault tolerant hardware and redundant memory modules.
 
 The reason why DbC, invariants, unit tests, and automated theorem provers 
 exist is that code written by humans typically contains programming 
 errors. No matter how perfect the host hardware is, it won't save the day 
 if you cannot write correct code. Restarting the process or having 
 redundant code paths & processors won't help if every version of the code 
 contains bugs (for instance doesn't terminate in some rare cases).

Hence Walter's assertion, that the most effective way to deal with this is to have _independent_ redundant systems. Having two identical copies doesn't improve reliability much. If you have two buggy pieces of code, but they are completely independent, their bugs will manifest on different inputs. So you can achieve extremely high reliability even on code with a very high bug density. I find it particularly interesting that the Pentium FDIV flaw was detected by comparison of Pentium with 486 -- even though the comparison was made in order to detect software bugs. So it is a method of improving total system reliability. I've not heard this idea applied to software before. (Though I vaguely remember a spacecraft with 3 computers, where if one produced different results from the other two, it was ignored. Not sure if the processors used different algorithms, though).
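A toy sketch of that voting idea (Python for brevity; the three "independent implementations" and all names here are made up for illustration):

```python
def vote(implementations, x):
    """Run independent implementations and return the majority answer.

    A unit that disagrees with the other two is outvoted; if there is
    no majority at all, fail loudly instead of guessing.
    """
    results = [impl(x) for impl in implementations]
    for r in results:
        if results.count(r) >= 2:
            return r
    raise RuntimeError("no majority among results: %r" % (results,))

# Three "independently developed" ways to square a number; the third
# has an injected bug that only fires on one input.
impls = [
    lambda x: x * x,
    lambda x: x ** 2,
    lambda x: x * x + (1 if x == 3 else 0),  # buggy for x == 3
]

print(vote(impls, 3))  # the buggy unit is outvoted -> 9
```

The scheme only buys reliability if the implementations don't share the same bug, which is exactly why the independence matters.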
 I'm of course strongly in favour of static analysis tools.

These are specialized theorem provers. Even the static type checker solves a more or less simple constraint satisfaction problem. These use simple logic rules like the modus ponens.

Indeed.
Aug 03 2010
next sibling parent dsimcha <dsimcha yahoo.com> writes:
== Quote from Don (nospam nospam.com)'s article
 If you have two buggy pieces of code,
 but they are completely independent, their bugs will manifest on
 different inputs. So you can achieve extremely high reliability even on
 code with a very high bug density.
 I find it particularly interesting that the Pentium FDIV flaw was
 detected by comparison of Pentium with 486 -- even though the comparison
 was made in order to detect software bugs. So it is a method of
 improving total system reliability.
 I've not heard this idea applied to software before. (Though I vaguely
 remember a spacecraft with 3 computers, where if one produced different
 results from the other two, it was ignored. Not sure if the processors
 used different algorithms, though).

I often exploit this for code that's otherwise very hard to unit test. If I have a very complex algorithm with about ten zillion code paths to compute something, and manually coming up with test cases for all possible paths would be near impossible, I often devise a simpler algorithm that's either much slower, makes different tradeoffs or is only an approximation, and test both against each other on tons and tons of random data to make sure they get the same answer (or roughly the same in the case of the approximation algorithm). I've found tons of bugs this way that I probably would never have found any other way. For example, let's say you're writing a new hash table implementation with optimizations for some specific use case. One easy way to test it would be to feed it random data and feed the builtin AA the same random data, and see if they end up in the same state.
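In a sketch (Python for brevity; the "fast" and "slow" routines are made-up stand-ins for the optimized implementation and the trivially correct reference):

```python
import random

def running_max_fast(xs):
    """Implementation under test: single pass, O(n)."""
    out, best = [], float("-inf")
    for x in xs:
        best = x if x > best else best
        out.append(best)
    return out

def running_max_slow(xs):
    """Trivially correct reference: recompute the prefix max each step."""
    return [max(xs[:i + 1]) for i in range(len(xs))]

# Hammer both with random inputs and demand identical answers.
random.seed(0)
for _ in range(1000):
    data = [random.randint(-50, 50) for _ in range(random.randint(0, 30))]
    assert running_max_fast(data) == running_max_slow(data)
```

Any input where the two disagree is a ready-made minimal test case for debugging.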
Aug 03 2010
prev sibling parent Walter Bright <newshound2 digitalmars.com> writes:
Don wrote:
 Hence Walter's assertion, that the most effective way to deal with this 
 is to have _independent_ redundant systems. Having two identical copies 
 doesn't improve reliability much. If you have two buggy pieces of code, 
 but they are completely independent, their bugs will manifest on 
 different inputs. So you can achieve extremely high reliability even on 
 code with a very high bug density.

Yup, it's one of those things where the math works in your favor. Two systems, each with only 90% reliability, combine to produce 99% reliability (if they are decoupled and independent).
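Making that arithmetic explicit (a sketch that assumes the failures really are independent):

```python
def combined_reliability(parts):
    """Chance that at least one of several independent systems works.

    The combination fails only when every part fails, so the failure
    probabilities multiply.
    """
    p_all_fail = 1.0
    for r in parts:
        p_all_fail *= 1.0 - r
    return 1.0 - p_all_fail

# Two decoupled systems at 90% each fail together only 1% of the time.
print(combined_reliability([0.9, 0.9]))  # ~0.99
```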
 I've not heard this idea applied to software before. (Though I vaguely 
 remember a spacecraft with 3 computers, where if one produced different 
 results from the other two, it was ignored. Not sure if the processors 
 used different algorithms, though).

It's been done with software in avionics since at least the 70's. Different processors, different circuits, different algorithms, different programming languages, different teams, and another team to ensure the resulting designs really don't have any commonality. There's a hardware comparator that, if the outputs differ too much, will shut down both systems (by pulling the power to them) and inform the pilot that he's now doing it manually. The space shuttle I believe uses 5 computers. 4 are identical, that vote. There's a fifth that is independently written and has only a minimal subset of the functionality, good enough to hopefully get the machine home.
Aug 03 2010
prev sibling next sibling parent reply Jonathan M Davis <jmdavisprog gmail.com> writes:
On Thursday, July 29, 2010 03:11:21 Don wrote:
 I have to disagree with that. "Correctness proofs" are based on a total
 fallacy. Attempting to prove that a program is correct (on a real
 machine, as opposed to a theoretical one) is utterly ridiculous.
 I'm genuinely astonished that such an absurd idea ever had any traction.

It's likely because the idea of being able to _prove_ that your program is correct is rather seductive. As it is, you never know for sure whether you found all of the bugs or not. So, I think that it's quite understandable that the academic types at the least have been interested in it. I'd also have expected folks like Boeing to have some interest in it. However, since it's so time consuming, error prone, and ultimately not enough even if you do it right, it just doesn't make sense to do it in most cases, and it's never a good idea to rely on it. Given how computer science is effectively applied mathematics, I don't think that it's at all surprising or ridiculous that correctness proofs have been tried (if anything, it would be totally natural and obvious to a lot of math folks), but I do think that it's fairly clear that they aren't really a solution for a whole host of reasons. - Jonathan M Davis
Jul 29 2010
parent reply "Jim Balter" <Jim Balter.name> writes:
"Jonathan M Davis" <jmdavisprog gmail.com> wrote in message 
news:mailman.50.1280425209.13841.digitalmars-d puremagic.com...
 On Thursday, July 29, 2010 03:11:21 Don wrote:
 I have to disagree with that. "Correctness proofs" are based on a total
 fallacy. Attempting to prove that a program is correct (on a real
 machine, as opposed to a theoretical one) is utterly ridiculous.
 I'm genuinely astonished that such an absurd idea ever had any traction.

It's likely because the idea of being able to _prove_ that your program is correct is rather seductive. As it is, you never know for sure whether you found all of the bugs or not. So, I think that it's quite understandable that the academic types at the least have been interested in it. I'd also have expected folks like Boeing to have some interest in it. However, since it's so time consuming, error prone, and ultimately not enough even if you do it right, it just doesn't make sense to do it in most cases, and it's never a good idea to rely on it. Given how computer science is effectively applied mathematics, I don't think that it's at all surprising or ridiculous that correctness proofs have been tried (if anything, it would be totally natural and obvious to a lot of math folks), but I do think that it's fairly clear that they aren't really a solution for a whole host of reasons. - Jonathan M Davis

You've snipped the context; if you go back and look at what Don was disagreeing with, it was simply that correctness proofs are *one tool among many* for increasing program reliability -- no one claimed that they are "a solution". The claim that they are "based on a total fallacy" is grossly ignorant.
Aug 02 2010
parent Don <nospam nospam.com> writes:
Jim Balter wrote:
 "Jonathan M Davis" <jmdavisprog gmail.com> wrote in message 
 news:mailman.50.1280425209.13841.digitalmars-d puremagic.com...
 On Thursday, July 29, 2010 03:11:21 Don wrote:
 I have to disagree with that. "Correctness proofs" are based on a total
 fallacy. Attempting to prove that a program is correct (on a real
 machine, as opposed to a theoretical one) is utterly ridiculous.
 I'm genuinely astonished that such an absurd idea ever had any traction.

It's likely because the idea of being able to _prove_ that your program is correct is rather seductive. As it is, you never know for sure whether you found all of the bugs or not. So, I think that it's quite understandable that the academic types at the least have been interested in it. I'd also have expected folks like Boeing to have some interest in it. However, since it's so time consuming, error prone, and ultimately not enough even if you do it right, it just doesn't make sense to do it in most cases, and it's never a good idea to rely on it. Given how computer science is effectively applied mathematics, I don't think that it's at all surprising or ridiculous that correctness proofs have been tried (if anything, it would be totally natural and obvious to a lot of math folks), but I do think that it's fairly clear that they aren't really a solution for a whole host of reasons. - Jonathan M Davis

You've snipped the context; if you go back and look at what Don was disagreeing with, it was simply that correctness proofs are *one tool among many* for increasing program reliability -- no one claimed that they are "a solution". The claim that they are "based on a total fallacy" is grossly ignorant.

No, I was arguing against the idea that they are a total solution. They were presented as such in the mid-80's. The idea was that you developed the program and its proof simultaneously, "with the proof usually leading the way". If you have proved the program is correct, of course you don't need anything else to improve reliability. I see this as a very dangerous idea, which was I think Walter's original point: you cannot completely trust ANYTHING. I just have a big problem with the word "proof", because I feel it has connotations of infallibility. I also feel that of all the development methodology fads which arise periodically in computer science, this one did the least to improve program reliability throughout the industry. BTW the idea pre-dated theorem provers, which obviously improve reliability. I'm of course strongly in favour of static analysis tools.
Aug 03 2010
prev sibling parent retard <re tard.com.invalid> writes:
Tue, 03 Aug 2010 10:43:44 +0200, Don wrote:

 Jim Balter wrote:
 "Jonathan M Davis" <jmdavisprog gmail.com> wrote in message
 news:mailman.50.1280425209.13841.digitalmars-d puremagic.com...
 On Thursday, July 29, 2010 03:11:21 Don wrote:
 I have to disagree with that. "Correctness proofs" are based on a
 total fallacy. Attempting to prove that a program is correct (on a
 real machine, as opposed to a theoretical one) is utterly ridiculous.
 I'm genuinely astonished that such an absurd idea ever had any
 traction.

It's likely because the idea of being able to _prove_ that your program is correct is rather seductive. As it is, you never know for sure whether you found all of the bugs or not. So, I think that it's quite understandable that the academic types at the least have been interested in it. I'd also have expected folks like Boeing to have some interest in it. However, since it's so time consuming, error prone, and ultimately not enough even if you do it right, it just doesn't make sense to do it in most cases, and it's never a good idea to rely on it. Given how computer science is effectively applied mathematics, I don't think that it's at all surprising or ridiculous that correctness proofs have been tried (if anything, it would be totally natural and obvious to a lot of math folks), but I do think that it's fairly clear that they aren't really a solution for a whole host of reasons. - Jonathan M Davis

You've snipped the context; if you go back and look at what Don was disagreeing with, it was simply that correctness proofs are *one tool among many* for increasing program reliability -- no one claimed that they are "a solution". The claim that they are "based on a total fallacy" is grossly ignorant.

No, I was arguing against the idea that they are a total solution. They were presented as such in the mid-80's. The idea was that you developed the program and its proof simultaneously, "with the proof usually leading the way". If you have proved the program is correct, of course you don't need anything else to improve reliability. I see this as a very dangerous idea, which was I think Walter's original point: you cannot completely trust ANYTHING.

The issues Walter mentioned are almost orthogonal to traditional proofs. You can prove some upper bounds for resource usage or guarantee that the software doesn't crash or hang _on an ideal model of the computer_. If you have two memory modules and shoot one of them to pieces with a double-barreled shotgun or M16, proving the correctness of a quicksort doesn't guarantee that your sorted data won't become corrupted. What you need is fault tolerant hardware and redundant memory modules. The reason why DbC, invariants, unit tests, and automated theorem provers exist is that code written by humans typically contains programming errors. No matter how perfect the host hardware is, it won't save the day if you cannot write correct code. Restarting the process or having redundant code paths & processors won't help if every version of the code contains bugs (for instance doesn't terminate in some rare cases).
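For illustration, contract checks in miniature (a hypothetical example in Python's assert style, not D's in/out/invariant syntax):

```python
def int_sqrt(n):
    """Integer square root with explicit contract checks.

    Precondition:  n is a non-negative integer.
    Postcondition: r*r <= n < (r+1)*(r+1).
    """
    assert isinstance(n, int) and n >= 0, "precondition violated"
    r = 0
    while (r + 1) * (r + 1) <= n:
        r += 1
    assert r * r <= n < (r + 1) * (r + 1), "postcondition violated"
    return r

print(int_sqrt(10))  # -> 3
```

The point is the same as above: the hardware can be perfect and the contract will still fire when the human got the code wrong.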
 I'm of course strongly in favour of static analysis tools.

These are specialized theorem provers. Even the static type checker solves a more or less simple constraint satisfaction problem. These use simple logic rules like the modus ponens.
Aug 03 2010
prev sibling parent Jonathan M Davis <jmdavisprog gmail.com> writes:
On Tuesday, July 27, 2010 15:00:20 Walter Bright wrote:
 bearophile wrote:
 You have to think about proofs as another (costly) tool to avoid
 bugs/bangs, but not as the ultimate and only tool you have to use (I
 think dsimcha was trying to say that there are more costly-effective
 tools. This can be true, but you can't be sure that is right in
 general).

I want to re-emphasize the point that keeps getting missed. Building reliable systems is not about trying to make components that cannot fail. It is about building a system that can TOLERATE failure of any of its components. It's how you build safe systems from UNRELIABLE parts. And all parts are unreliable. All of them. Really. All of them.

Especially the programmer. ;) - Jonathan M Davis
Jul 27 2010
prev sibling parent Walter Bright <newshound2 digitalmars.com> writes:
Adam Ruppe wrote:
 Lerdorf's quote strikes me as actually being somewhat close to what
 Walter is talking about. Web applications don't focus on making a
 thing that never fails, but instead achieve reliability by having an
 external watchdog switch to backups - that is, a fresh copy of the
 program - when something goes wrong with the first one.
 
 Doing this gives them adequately good results at low cost.

That part is true, but the part about catching exceptions thrown by programming bugs and then continuing the same process runs completely counter to any sensible notion of reliability.
Jul 27 2010
prev sibling parent reply Michel Fortin <michel.fortin michelf.com> writes:
On 2010-07-26 14:42:54 -0400, Walter Bright <newshound2 digitalmars.com> said:

 1. segfaults *are* exceptions.

At the processor level, yes. They're exceptions at the language level only on Windows, which I'd consider 'implementation defined', so not exceptions as far as the language is concerned.
 2. D offers a memory safe subset, and D's ranges and algorithms are 
 memory safe.

That's more a wish than reality though. I have yet to see @safe applied on ranges and algorithms. I'm pretty sure it'll require a couple of adjustments to @safe. As it exists currently, @safe does not work very well with templates and delegate literals. -- Michel Fortin michel.fortin michelf.com http://michelf.com/
Jul 26 2010
parent Walter Bright <newshound2 digitalmars.com> writes:
Michel Fortin wrote:
 On 2010-07-26 14:42:54 -0400, Walter Bright <newshound2 digitalmars.com> 
 said:
 
 1. segfaults *are* exceptions.

At the processor level, yes. They're exceptions at the language level only on Windows, which I'd consider 'implementation defined', so not exceptions as far as the language is concerned.

They are exceptions in the sense that a condition required by the program has been violated.
 2. D offers a memory safe subset, and D's ranges and algorithms are 
 memory safe.

That's more a wish than reality though. I have yet to see @safe applied on ranges and algorithms. I'm pretty sure it'll require a couple of adjustments to @safe. As it exists currently, @safe does not work very well with templates and delegate literals.

While it's true that @safe hasn't been thoroughly debugged yet, the ranges and algorithms are still memory safe because they use the array paradigm rather than the unchecked pointer paradigm.
Jul 26 2010
prev sibling next sibling parent reply Sean Kelly <sean invisibleduck.org> writes:
bearophile Wrote:

 Andrei Alexandrescu:
 In my humble opinion, the design of Java, C#, and Go is proof 
 that their authors didn't get the STL. Otherwise, those languages would 
 be very different.

I don't believe you. Among the designers of Java, C# and Go there are people that are both experts and smart. C# designers have shown to be sometimes smarter than D designers. So I think some of them 'get the STL' and understand its advantages, but despite this in the universe they have found some other reasons to refuse it. I think they were unwilling to pay the large price in language complexity, bug-prone-ness, code bloat and compilation speed that C++ and D are willing to pay.

The reason behind the Java design is much more mundane: they chose binary backwards compatibility as a requirement for future language changes. So you can do ridiculous things like:

class A {
    void put(List x) {}
}

class B {
    void pass(List<Integer> x) {
        new A().put(x);
    }
}
 Here you can find why C# designers have refused C++-style templates & STL and
chosen the generics instead:
 http://msdn.microsoft.com/it-it/magazine/cc163683%28en-us%29.aspx

C# generics are a heck of a lot nicer than Java generics, but I think there were also other practical reasons for the decision that they didn't fully address. C# is effectively the native language for .NET, and so its libraries should be as useful as possible to other languages compiled to CLR bytecode. If C# used C++ style templates, C++ would integrate well with it, but no other languages really would. Try telling some Visual Basic programmer that they have to define a different container interface for each stored type and see if they use your library. The "Binary Compatibility" section mentions this, but only briefly.
 Maybe there is a way to build something like STL without C++/D-style templates
:-)

If you figure out how, I'd love to know it :-)
Jul 25 2010
next sibling parent Sean Kelly <sean invisibleduck.org> writes:
Sean Kelly Wrote:

 So you can do ridiculous things like:
 
 class A {
     void put(List x) {}
 }
 
 class B {
     void pass(List<Integer> x) {
         new A().put(x);
     }
 }

Change that to:

class A {
    void put(List<Integer> x) {}
}

class B {
    void pass(List x) {
        new A().put(x);
    }
}
Jul 25 2010
prev sibling parent Bruno Medeiros <brunodomedeiros+spam com.gmail> writes:
On 26/07/2010 05:32, Sean Kelly wrote:
 C# generics are a heck of a lot nicer than Java generics, but there also I
think there were other practical reasons for the decision that they didn't
fully address.  C# is effectively the native language for .NET, and so its
libraries should be as useful as possible to other languages compiled to CLR
bytecode.  If C# used C++ style templates, C++ would integrate well with it,
but no other languages really would.  Try telling some Visual Basic programmer
that they have to define a different container interface for each stored type
and see if they use your library.  The "Binary Compatibility" section mentions
this, but only briefly.

Why is that? (my C# knowledge is very rusty) Is it because they maintain some runtime information unlike Java generics which are completely erased? -- Bruno Medeiros - Software Engineer
Jul 26 2010
prev sibling parent "Nick Sabalausky" <a a.a> writes:
"bearophile" <bearophileHUGS lycos.com> wrote in message 
news:i2ibqm$2gsv$1 digitalmars.com...
One important problem of C# generics can be solved by adding IArithmetic<T>:
 http://www.osnews.com/story/7930

That's one of the reasons I've pretty much given up on C# despite having initially been relatively impressed with it. They've had that giant, gaping, and frankly quite easy to remedy, lack-of-IArithmetic hole since C# first got generics (ie, forever ago), and there's been no shortage of people complaining about it. And yet, last I checked (numerous versions and years after the hole was created in the first place), MS still had yet to even acknowledge the issue, much less remedy it. Not the best management situation.
Jul 26 2010
prev sibling next sibling parent retard <re tard.com.invalid> writes:
Sun, 25 Jul 2010 20:21:05 -0500, Andrei Alexandrescu wrote:

 On 07/25/2010 04:54 PM, bearophile wrote:
 Andrei Alexandrescu:
 In my humble opinion, the design of Java, C#, and Go is proof that
 their authors didn't get the STL. Otherwise, those languages would be
 very different.

I don't believe you. Among the designers of Java, C# and Go there are people that are both experts and smart. C# designers have shown to be sometimes smarter than D designers. So I think some of them 'get the STL' and understand its advantages, but despite this in the universe they have found some other reasons to refuse it. I think they were unwilling to pay the large price in language complexity, bug-prone-ness, code bloat and compilation speed that C++ and D are willing to pay.

But then their containers and algorithms are markedly inferior to STL's. They are toxic to the programmers who have only been exposed to them. Say Java and C# got STL and decided to not go with it, I'm sure they would have at least gotten a few good bits from it. But no - the entire e.g. Java container library is parallel to everything STL stands for.

I think the Java/C# developers gave up X % of the execution speed to avoid hard crashes (exceptions instead of segfaults) AND to make it possible for ordinary people to develop applications. Even Java was too complex so the less bright people made PHP for the idiot part of the community (I'm not saying all PHP users are idiots, but if you're an idiot, you can't develop in Java or C++, but you can fluently write PHP). It's a sad fact that not everyone is as intelligent as A. Alexandrescu or W. Bright. A PHP website for a local fishing club doesn't need super efficient code. Why? There are maybe 2..4 visitors daily and the only interactive part on the page is a feedback form. A typical C/C++/D program written by a mediocre developer crashes every 15..60 minutes because of bug prone low level code. That's the reason why there are "inferior" languages. PHP doesn't crash. You just get an interactive error message. And you can fix the bug in 1..5 minutes without restarting the server. A novice coder is willing to sacrifice 99.9% of performance to get this feature.
Jul 26 2010
prev sibling next sibling parent retard <re tard.com.invalid> writes:
Tue, 27 Jul 2010 04:10:14 +0800, KennyTM~ wrote:

 On Jul 27, 10 02:42, Walter Bright wrote:
 retard wrote:
 I think the Java/C# developers gave up X % of the execution speed to
 avoid hard crashes (exceptions instead of segfaults)

1. segfaults *are* exceptions. 2. D offers a memory safe subset, and D's ranges and algorithms are memory safe.

Catching an exception is easy, but handling a (segfault) signal is a mess.

Indeed, I'd like to know how you recover from a segfault without help from an external process. Sometimes you know that some routine might fail once a week, but the program MUST run non-stop for several months. In Java you can achieve this with exceptions. And you can also dynamically fix classes with the class loader.
Jul 26 2010
prev sibling next sibling parent "Simen kjaeraas" <simen.kjaras gmail.com> writes:
bearophile <bearophileHUGS lycos.com> wrote:

 Walter Bright:
 1. segfaults *are* exceptions.

Aren't exceptions objects?

That's Exception, not exception. The latter may be a hardware thing. -- Simen
Jul 26 2010
prev sibling next sibling parent retard <re tard.com.invalid> writes:
Mon, 26 Jul 2010 14:04:53 -0700, Walter Bright wrote:

 retard wrote:
 Tue, 27 Jul 2010 04:10:14 +0800, KennyTM~ wrote:
 
 On Jul 27, 10 02:42, Walter Bright wrote:
 retard wrote:
 I think the Java/C# developers gave up X % of the execution speed to
 avoid hard crashes (exceptions instead of segfaults)

2. D offers a memory safe subset, and D's ranges and algorithms are memory safe.


Indeed, I'd like to know how you recover from a segfault without help from an external processes.

On Windows, it's fairly straightforward. I did it once as part of a gc implementation that would mark unmodified pages as hardware readonly. It would catch the seg fault from writes to it, log the page as modified, make the page writable, and restart the failed instruction.
 Sometimes you know that some routine might fail once in a week, but the
 program MUST run non-stop for several months. In Java you can achieve
 this with exceptions. And you can also dynamically fix classes with the
 class loader.

Any program that attempts to achieve reliability by "recovering" from program bugs and continuing is an extremely badly designed one. http://www.drdobbs.com/blog/archives/2009/10/safe_systems_fr.html http://www.drdobbs.com/blog/archives/2009/11/designing_safe.html Sadly, it's a topic that has not penetrated software engineering instructional materials, and programmers have to learn it the hard way again and again.

But these are your articles with no cited sources about the software methodologies. It seems like they're written afterwards to advertise the features implemented in D. The contract programming only covers a small runtime dynamic part of programming. There's no mention about automated theorem proving. No word about exceptions nor sandboxing with virtual machines. Why? Because these would make D look ridiculous. How the web programming world works: ''I'm not a real programmer. I throw together things until it works then I move on. The real programmers will say "yeah it works but you're leaking memory everywhere. Perhaps we should fix that." I'll just restart apache every 10 requests.'' -- Rasmus Lerdorf It is widely accepted that web applications fail often. It suffices if the developers are fixing bad code eventually, but the remaining parts should work reasonably well. Either a process is restarted or exceptions are used to catch small error conditions so the server could serve *something*. I'm talking about practical web applications, not X-ray machines.
Jul 27 2010
prev sibling next sibling parent Adam Ruppe <destructionator gmail.com> writes:
Lerdorf's quote strikes me as actually being somewhat close to what
Walter is talking about. Web applications don't focus on making a
thing that never fails, but instead achieve reliability by having an
external watchdog switch to backups - that is, a fresh copy of the
program - when something goes wrong with the first one.

Doing this gives them adequately good results at low cost.
Jul 27 2010
prev sibling next sibling parent retard <re tard.com.invalid> writes:
Tue, 27 Jul 2010 12:02:36 -0700, Walter Bright wrote:

 retard wrote:
 Mon, 26 Jul 2010 14:04:53 -0700, Walter Bright wrote:


 That misses the point about reliability. Again, you're approaching from
 the point of view that you can make a program that cannot fail (i.e.
 prove it correct). That view is WRONG WRONG WRONG and you must NEVER
 NEVER NEVER rely on such for something important, like say your life.
 Software can (and will) fail even if you proved it correct, for example,
 what if a memory cell fails and flips a bit? Cosmic rays flip a bit?

ECC memory helps a bit, and so do RAID arrays. A server might also have two or more power supplies with batteries. On the software side you can spawn new processes and kill old ones. Typical low-end server hardware used to serve web pages doesn't have much more hardware support for reliability.
 How the web programming world works:
 
 ''I'm not a real programmer. I throw together things until it works
 then I move on. The real programmers will say "yeah it works but you're
 leaking memory everywhere. Perhaps we should fix that." I'll just
 restart apache every 10 requests.'' -- Rasmus Lerdorf
 
 It is widely accepted that web applications fail often. It suffices if
 the developers fix bad code eventually, but the remaining parts
 should work reasonably well. Either a process is restarted or
 exceptions are used to catch small error conditions so the server can
 serve *something*. I'm talking about practical web applications, not
 X-ray machines.

It's retarded (!) to pretend that this is how one makes reliable systems.

I understand you meant reliable systems. But my original post was about real-world web sites. I've been in a project where the manager told us to outsource development because a system written by the local workforce would have been *too reliable*. If you can make the system only 20% more unreliable by cutting 80% off the salaries, it's a big win. The idea is that since people accept a certain downtime and certain kinds of error states for typical web sites, it's ok to have those problems. You just need to keep the number of bugs low enough. 90..100% reliability isn't important.
Jul 27 2010
prev sibling next sibling parent retard <re tard.com.invalid> writes:
Wed, 28 Jul 2010 11:48:42 -0700, Walter Bright wrote:

 Jim Balter wrote:
 You're being religious about this and arguing against a strawman. While
 all parts are unreliable, they aren't *equally* unreliable.

They don't have to have equal reliability in order for redundancy to be very effective.
 Unit tests,
 contract programming, memory safe access, and other reliability
 techniques, *including correctness proofs*, all increase reliability.

True, but the problem is when one is seduced by that into thinking that redundancy is not necessary.

I'll give you one use case where I really appreciate exceptions instead of segfaults. I've just spent 8 hours encoding a video file on my home computer. The job gets done and my wife wants me to start playing it in 25 minutes (it takes 15 minutes to burn the video to a CD/DVD). 250 guests are waiting for me; I'm at my son's wedding. I have very little time to save the results. Now, the save file dialog has a bug when the target file system has too little space. I want it to save the work SOMEWHERE. It should NOT just crash! I'd donate my kidney to keep the software running without problems. People typically buy Apple's computers in these kinds of situations for a good reason. The PC is just too unreliable and the Windows/Linux developers have an unproductive mentality. I don't have time to encode it twice; redundancy makes no sense here.
Jul 28 2010
prev sibling next sibling parent retard <re tard.com.invalid> writes:
Thu, 29 Jul 2010 12:11:21 +0200, Don wrote:

 Jim Balter wrote:
 
 "Walter Bright" <newshound2 digitalmars.com> wrote in message
 news:i2nkto$8ug$1 digitalmars.com...
 bearophile wrote:
 You have to think about proofs as another (costly) tool to avoid
 bugs/bangs,
 but not as the ultimate and only tool you have to use (I think
 dsimcha was
 trying to say that there are more costly-effective tools. This can be
 true,
 but you can't be sure that is right in general).

I want to re-emphasize the point that keeps getting missed. Building reliable systems is not about trying to make components that cannot fail. It is about building a system that can TOLERATE failure of any of its components. It's how you build safe systems from UNRELIABLE parts. And all parts are unreliable. All of them. Really. All of them.

You're being religious about this and arguing against a strawman. While all parts are unreliable, they aren't *equally* unreliable. Unit tests, contract programming, memory safe access, and other reliability techniques, *including correctness proofs*, all increase reliability.

I have to disagree with that. "Correctness proofs" are based on a total fallacy. Attempting to prove that a program is correct (on a real machine, as opposed to a theoretical one) is utterly ridiculous. I'm genuinely astonished that such an absurd idea ever had any traction.

What's your favorite then? 100% unit test coverage?
Jul 29 2010
prev sibling parent reply retard <re tard.com.invalid> writes:
Thu, 29 Jul 2010 13:22:35 +0000, dsimcha wrote:

 == Quote from Don (nospam nospam.com)'s article
 Jim Balter wrote:
 "Walter Bright" <newshound2 digitalmars.com> wrote in message
 news:i2nkto$8ug$1 digitalmars.com...
 bearophile wrote:
 You have to think about proofs as another (costly) tool to avoid
 bugs/bangs,
 but not as the ultimate and only tool you have to use (I think
 dsimcha was
 trying to say that there are more costly-effective tools. This can
 be true,
 but you can't be sure that is right in general).

I want to re-emphasize the point that keeps getting missed. Building reliable systems is not about trying to make components that cannot fail. It is about building a system that can TOLERATE failure of any of its components. It's how you build safe systems from UNRELIABLE parts. And all parts are unreliable. All of them. Really. All of them.

You're being religious about this and arguing against a strawman. While all parts are unreliable, they aren't *equally* unreliable. Unit tests, contract programming, memory safe access, and other reliability techniques, *including correctness proofs*, all increase reliability.

 I have to disagree with that. "Correctness proofs" are based on a total
 fallacy. Attempting to prove that a program is correct (on a real
 machine, as opposed to a theoretical one) is utterly ridiculous. I'm
 genuinely astonished that such an absurd idea ever had any traction.

Yea, here's a laundry list of stuff that theory doesn't account for that can go wrong on real machines: overflow, rounding error

The proofs help here actually.
 compiler bugs

This is unfortunate, and in some cases there's nothing you can do: no matter what code you write, the compiler breaks it fatally.
 hardware bugs

In many cases there's nothing you can do; the hardware bug or failure is fatal. I'm talking about x86 microcomputers here, since D isn't yet well suited to other targets. A typical desktop PC has zero redundancy. A single error in any of the components will kill your program or at least some part of the user experience.
 OS bugs

If your numerics code is fucked up, the proofs really do help. I don't think your numerics code will use any (problematic, untested) syscalls. If the OS breaks independently of your code, there's nothing you can do. The best way to protect against random OS bugs is to not run any code on that OS.
 
 I sincerely wish all my numerics code always worked if it was provably
 mathematically correct.

I really love digitalmars.D because this is one of the few places where 99% of the community has zero experience with other languages, other paradigms (non-imperative), automatic theorem provers, or anything not related to D. There's a whole choir against theorem proving now. The funniest thing is that none of you seem to have any clue about how those programs work. Has anyone except the almighty Andrei ever even downloaded a theorem prover?
Jul 29 2010
next sibling parent reply BCS <none anon.com> writes:
Hello retard,
 Has anyone except the almighty
 Andrei ever even downloaded a theorem prover?
 

Yes, ACL2. http://www.dsource.org/projects/scrapple/browser/trunk/backmath Now I know why that sort of thing isn't done more often. -- ... <IXOYE><
Jul 31 2010
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 08/01/2010 05:44 AM, retard wrote:
 Sat, 31 Jul 2010 23:38:17 +0000, BCS wrote:

 Hello retard,
 Has anyone except the almighty
 Andrei ever even downloaded a theorem prover?

http://www.dsource.org/projects/scrapple/browser/trunk/backmath Now I know why that sort of thing isn't done more often.

Learning how to document your code might also help. It looks like a mess.

I think it takes some good amount of courage and confidence to put one's identity and code out there for everyone to criticize. Conversely, criticizing from the comfort of anonymity and without producing "do as I do, not only as I say" examples doesn't sit well. Andrei
Aug 01 2010
next sibling parent Justin Johansson <no spam.com> writes:
Andrei Alexandrescu wrote:
 On 08/01/2010 05:44 AM, retard wrote:
 Sat, 31 Jul 2010 23:38:17 +0000, BCS wrote:

 Hello retard,
 Has anyone except the almighty
 Andrei ever even downloaded a theorem prover?

http://www.dsource.org/projects/scrapple/browser/trunk/backmath Now I know why that sort of thing isn't done more often.

Learning how to document your code might also help. It looks like a mess.

I think it takes some good amount of courage and confidence to put one's identity and code out there for everyone to criticize. Conversely, criticizing from the comfort of anonymity and without producing "do as I do, not only as I say" examples doesn't sit well. Andrei

Well put. The same can be said about posting with anonymity on a newsgroup, period. I for one have made numerous newsgroup bloopers, whether an "academic" error or perhaps a socially tactless response. Whenever I realize one of my bloops, I often wish for retrospective anonymity. Justin ( no last name given :-) )
Aug 01 2010
prev sibling parent reply "Jim Balter" <Jim Balter.name> writes:
"Andrei Alexandrescu" <SeeWebsiteForEmail erdani.org> wrote in message 
news:i34op7$2hbj$1 digitalmars.com...
 On 08/01/2010 05:44 AM, retard wrote:
 Sat, 31 Jul 2010 23:38:17 +0000, BCS wrote:

 Hello retard,
 Has anyone except the almighty
 Andrei ever even downloaded a theorem prover?

http://www.dsource.org/projects/scrapple/browser/trunk/backmath Now I know why that sort of thing isn't done more often.

Learning how to document your code might also help. It looks like a mess.

I think it takes some good amount of courage and confidence to put one's identity and code out there for everyone to criticize. Conversely, criticizing from the comfort of anonymity and without producing "do as I do, not only as I say" examples doesn't sit well. Andrei

Ad hominem argument. If you disagree that it looks like a mess, you should argue that. If you concur, then it doesn't matter who pointed it out. And anonymity on the internet has been held up as a fundamental right for decades.
Aug 02 2010
parent reply Walter Bright <newshound2 digitalmars.com> writes:
Jim Balter wrote:
 Ad hominem argument. If you disagree that it looks like a mess, you 
 should argue that. If you concur, then it doesn't matter who pointed it 
 out.

Yes, and most of the time that's the case. But sometimes a person uses anonymity in order to engage in trolling, or so they can make careless statements.
 And anonymity on the internet has been held up as a fundamental right for
decades.

Of course. It's why I don't mind people posting here using pseudonyms, nor do I try to "unmask" them. On the other hand, people posting under pseudonyms have to work harder to earn respect, and that's fair, too.
Aug 03 2010
parent reply bearophile <bearophileHUGS lycos.com> writes:
Walter Bright:
 On the other hand, people posting under pseudonyms have to work harder to earn 
 respect, and that's fair, too.

Sometimes I am polemical, very often wrong, nearly always partially wrong, and often I don't have enough experience. But I hope to have gained a bit of your respect in the last three+ years :-) Bye, bearophile
Aug 03 2010
parent Walter Bright <newshound2 digitalmars.com> writes:
bearophile wrote:
 Walter Bright:
 On the other hand, people posting under pseudonyms have to work harder to
 earn respect, and that's fair, too.

Sometimes I am polemical, very often wrong, nearly always partially wrong, and often I don't have enough experience. But I hope to have gained a bit of your respect in the last three+ years :-)

I appreciate your honest and forthright style, and the breadth of knowledge that you bring.
Aug 03 2010
prev sibling next sibling parent reply retard <re tard.com.invalid> writes:
Sat, 31 Jul 2010 23:38:17 +0000, BCS wrote:

 Hello retard,
 Has anyone except the almighty
 Andrei ever even downloaded a theorem prover?
 
 

http://www.dsource.org/projects/scrapple/browser/trunk/backmath Now I know why that sort of thing isn't done more often.

Learning how to document your code might also help. It looks like a mess.
Aug 01 2010
parent BCS <none anon.com> writes:
Hello retard,

 Sat, 31 Jul 2010 23:38:17 +0000, BCS wrote:
 
 Hello retard,
 
 Has anyone except the almighty
 Andrei ever even downloaded a theorem prover?

http://www.dsource.org/projects/scrapple/browser/trunk/backmath Now I know why that sort of thing isn't done more often.

 Learning how to document your code might also help. It looks like a
 mess.

I wish that had been the main problem... -- ... <IXOYE><
Aug 01 2010
prev sibling next sibling parent "Jim Balter" <Jim Balter.name> writes:
"retard" <re tard.com.invalid> wrote in message 
news:i2smf3$2ltk$1 digitalmars.com...
 Thu, 29 Jul 2010 13:22:35 +0000, dsimcha wrote:

 == Quote from Don (nospam nospam.com)'s article
 Jim Balter wrote:
 "Walter Bright" <newshound2 digitalmars.com> wrote in message
 news:i2nkto$8ug$1 digitalmars.com...
 bearophile wrote:
 You have to think about proofs as another (costly) tool to avoid
 bugs/bangs,
 but not as the ultimate and only tool you have to use (I think
 dsimcha was
 trying to say that there are more costly-effective tools. This can
 be true,
 but you can't be sure that is right in general).

I want to re-emphasize the point that keeps getting missed. Building reliable systems is not about trying to make components that cannot fail. It is about building a system that can TOLERATE failure of any of its components. It's how you build safe systems from UNRELIABLE parts. And all parts are unreliable. All of them. Really. All of them.

You're being religious about this and arguing against a strawman. While all parts are unreliable, they aren't *equally* unreliable. Unit tests, contract programming, memory safe access, and other reliability techniques, *including correctness proofs*, all increase reliability.

 I have to disagree with that. "Correctness proofs" are based on a total
 fallacy. Attempting to prove that a program is correct (on a real
 machine, as opposed to a theoretical one) is utterly ridiculous. I'm
 genuinely astonished that such an absurd idea ever had any traction.

Yea, here's a laundry list of stuff that theory doesn't account for that can go wrong on real machines: overflow, rounding error

The proofs help here actually.
 compiler bugs

This is unfortunate, and in some cases there's nothing you can do: no matter what code you write, the compiler breaks it fatally.
 hardware bugs

In many cases there's nothing you can do; the hardware bug or failure is fatal. I'm talking about x86 microcomputers here, since D isn't yet well suited to other targets. A typical desktop PC has zero redundancy. A single error in any of the components will kill your program or at least some part of the user experience.
 OS bugs

If your numerics code is fucked up, the proofs really do help. I don't think your numerics code will use any (problematic, untested) syscalls. If the OS breaks independently of your code, there's nothing you can do. The best way to protect against random OS bugs is to not run any code on that OS.
 I sincerely wish all my numerics code always worked if it was provably
 mathematically correct.

I really love digitalmars.D because this is one of the few places

Nah, such appalling ignorance and arrogance is actually quite common.
 where
 99% of the community has zero experience with other languages, other
 paradigms (non-imperative), automatic theorem provers, or anything not
 related to D. There's a whole choir against theorem proving now. The
 funniest thing is that none of you seem to have any clue about how those
 programs work. Has anyone except the almighty Andrei ever even downloaded
 a theorem prover? 

Aug 02 2010
prev sibling parent retard <re tard.com.invalid> writes:
Mon, 02 Aug 2010 23:47:49 -0700, Jim Balter wrote:

 "Andrei Alexandrescu" <SeeWebsiteForEmail erdani.org> wrote in message
 news:i34op7$2hbj$1 digitalmars.com...
 On 08/01/2010 05:44 AM, retard wrote:
 Sat, 31 Jul 2010 23:38:17 +0000, BCS wrote:

 Hello retard,
 Has anyone except the almighty
 Andrei ever even downloaded a theorem prover?

http://www.dsource.org/projects/scrapple/browser/trunk/backmath Now I know why that sort of thing isn't done more often.

Learning how to document your code might also help. It looks like a mess.

I think it takes some good amount of courage and confidence to put one's identity and code out there for everyone to criticize. Conversely, criticizing from the comfort of anonymity and without producing "do as I do, not only as I say" examples doesn't sit well. Andrei

Ad hominem argument. If you disagree that it looks like a mess, you should argue that. If you concur, then it doesn't matter who pointed it out. And anonymity on the internet has been held up as a fundamental right for decades.

To be honest, I also questioned the expertise of some members of the community. The reason is, comparing complex software technologies is hard. Some examples: XML parsing with SAX vs DOM, Qt vs GTK+, F#'s type system vs Haskell's type system, vi(m) vs emacs. One really needs a thorough knowledge of all the technologies involved before drawing relevant conclusions. The conclusion also reflects one's personal values. I don't know many people who can give an authoritative and exhaustive summary of these things. A professor might understand the "ivory tower" side of the issue, and senior developers' vision is the "pragmatic" one, but even those together don't form the full truth. A project manager or a novice coder might value other aspects. My recommendation was just to study at least one other language or theorem prover before wasting time on useless flame wars.
Aug 03 2010