
D - Concepts, Techniques, and Models of Computer Programming

reply Mark Evans <Mark_member pathlink.com> writes:
This post is not just another language citation.  It's about language
fundamentals (although the interesting language Oz serves as a reference).

This new book is important for D.  The D newsgroup torrent of discussion has
little regard for fundamentals or cohesion of design from a computational
standpoint.  The criteria are vague, e.g. "must feel like C" and "must be easier
than C++" and "I'd like this feature."  Not that Walter isn't trying.  The poor
cohesion of C++ exacerbates the problem.  C++ itself is a mish-mash, yet serves
as the starting point for D.  For that matter, C is a mish-mash -- see previous
posts on IMP.  So we have a lot of mish-mash piled up.  A review of language
fundamentals may help D more clearly delineate a proper design point and track
it carefully.  (Walter, read: "make my life easier.")

This book demonstrates how languages and their "paradigms" boil down to certain
fundamentals (the "kernel language").  Adding just one feature to the kernel
enables an entirely new programming paradigm (e.g. class of languages).

My own feeling is that D should pay more attention to the functional paradigm
which is extremely powerful in a variety of applications.  Languages with
functional power (like OCaml and Mathematica) leave poor-cousin imitators like
C++ STL in the dust.  STL was in some respects a vain attempt to graft
functional programming onto C++.

I'm not sure what to make of Oz just yet, but it culminates years of research
along these lines.  The book is down to earth.  Before the D practicality police
shoot me down, here are some quotes for you:

"The number of different computation models that are known to be useful is much
smaller than the number of programming languages....The main criterium for
presenting a model is whether it is useful in practice."
"We find that a good programming style requires using programming concepts that
are usually associated with different computation models.  Languages that
implement just one computation model make this difficult."  (MAJOR POINT FOR D
TO CONSIDER.)
"Concurrency can be simple."

I would love to see D's "kernel language" written down.  The kernel language is
not a virtual machine, it is a semantic specification.

Enjoy,
Mark

Oz the language
http://www.mozart-oz.org/

PDF book draft:
http://www.info.ucl.ac.be/people/PVR/book.html
"Concepts, Techniques, and Models of Computer Programming"
by
PETER VAN ROY
SEIF HARIDI
(c) 2001-2003

"One approach to study computer programming is to study programming languages.
But there are a tremendously large number of languages, so large that it is
impractical to study them all. How can we tackle this immensity? We could pick a
small number of languages that are representative of different programming
paradigms. But this gives little insight into programming as a unified
discipline. This book uses another approach.

"We focus on programming concepts and the techniques to use them, not on
programming languages. The concepts are organized in terms of computation
models. A computation model is a formal system that defines how computations are
done. There are many ways to define computation models. Since this book is
intended to be practical, it is important that the computation model should be
directly useful to the programmer. We will therefore define it in terms of
concepts that are important to programmers: data types, operations, and a
programming language. The term computation model makes precise the imprecise
notion of 'programming paradigm'. The rest of the book talks about computation
models and not programming paradigms. Sometimes we will use the phrase
programming model. This refers to what the programmer needs: the programming
techniques and design principles made possible by the computation model.

"Each computation model has its own set of techniques for programming and
reasoning about programs. The number of different computation models that are
known to be useful is much smaller than the number of programming languages.
This book covers many well-known models as well as some less-known models. The
main criterium for presenting a model is whether it is useful in practice. Each
computation model is based on a simple core language called its kernel language.
The kernel languages are introduced in a progressive way, by adding concepts one
by one. This lets us show the deep relationships between the different models.
Often, just adding one new concept makes a world of difference in programming.
For example, adding destructive assignment (explicit state) to functional
programming allows to do [sic] object-oriented programming. When stepping from
one model to the next, how do we decide on what concepts to add? We will touch
on this question many times in the book. The main criterium is the creative
extension principle. Roughly, a new concept is added when programs become
complicated for technical reasons unrelated to the problem being solved. Adding
a concept to the kernel language can keep programs simple, if the concept is
chosen carefully. This is explained in Section 2.1.2 and Appendix E.

"A nice property of the kernel language approach is that it lets us use
different models together in the same program. This is usually called
multiparadigm programming. It is quite natural, since it means simply to use the
right concepts for the problem, independent of what computation model they
originate from. Multiparadigm programming is an old idea. For example, the
designers of Lisp and Scheme have long advocated a similar view. However, this
book applies it in a much broader and deeper way than was previously done."
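The creative-extension step quoted above -- adding explicit state (a mutable cell) to functional programming to obtain object-oriented programming -- can be sketched outside Oz too. The following is a hypothetical C++ rendering (not the book's code, which uses Oz): a single heap cell captured by a closure is enough to get an "object" with hidden, instance-local state.

```cpp
#include <functional>
#include <memory>

// Hypothetical sketch (not from the book): a functional-style program
// extended with exactly one new concept -- a mutable cell -- is enough
// to encode objects with encapsulated state.
std::function<int()> make_counter() {
    // the single added concept: explicit state in a heap cell
    auto cell = std::make_shared<int>(0);
    // the returned closure is the "object": behaviour closed over state
    return [cell] { return ++*cell; };
}
```

Each call to make_counter yields an independent instance; without the cell the closure could only compute, never remember -- which is exactly what the one extra kernel concept buys.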
Jan 20 2003
next sibling parent Mark Evans <Mark_member pathlink.com> writes:
http://www.ps.uni-sb.de/alice/manual/tour.html
http://www.ps.uni-sb.de/Papers/abstracts/Kornstaedt2001.html

This paper showcases an Oz-inspired language, the Alice variant of Standard ML.
A major difference between Alice and Oz is static vs. dynamic typing.  The
following quote is worth pondering in that regard, since D is statically typed
(and properly so given its intent -- but so is Standard ML):
"Its powerful static type system is one of the major pros of ML.  However, there
are programming tasks where it is not possible to  perform all typing
statically. For example, consider exchange of data structures between separate
processes. To accompany such tasks of open programming, Alice complements its
static type system with a controlled form of dynamic typing."
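The "controlled form of dynamic typing" quoted above can be imitated in other statically typed languages. As a hedged sketch (this is plain C++17 with std::any, not Alice ML's actual pickling API), a value crossing a process boundary arrives without a static type and gets it back through one checked cast:

```cpp
#include <any>
#include <stdexcept>

// Hypothetical boundary function (not Alice's API): data from another
// process has no static type, so it is modelled as std::any; one dynamic
// check at the boundary re-establishes static typing for the rest of
// the program.
int unpack_int(const std::any& wire_value) {
    if (auto p = std::any_cast<int>(&wire_value))
        return *p;  // statically typed from here on
    throw std::runtime_error("type mismatch at process boundary");
}
```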

The major point about Oz is not that it's another language with nice features
we should borrow.  Oz is merely a reference implementation of the kernel
language.  The kernel language is the big deal.  It serves to unify and
integrate language design.  So forget about performance and dynamic typing.  The
kernel language defines and classifies language semantics, one of the dangling,
unresolved issues in the whole D development.

I suspect that much debate on the D newsgroup would evaporate if D were
scrutinized along the lines of this book -- the closest thing to a Scientific
Method for programming languages that non-mathematicians can understand.

Mark
Jan 20 2003
prev sibling next sibling parent reply Antti Sykari <jsykari gamma.hut.fi> writes:
Mark Evans <Mark_member pathlink.com> writes:

 This new book is important for D.  The D newsgroup torrent of
 discussion has little regard for fundamentals or cohesion of design
 from a computational standpoint.  The criteria are vague, e.g. "must
 feel like C" and "must be easier than C++" and "I'd like this
 feature."  Not that Walter isn't trying.  The poor cohesion of C++
 exacerbates the problem.  C++ itself is a mish-mash, yet serves as the
 starting point for D.  For that matter, C is a mish-mash -- see
 previous posts on IMP.  So we have a lot of mish-mash piled up.  A

It's instructive to note that the contemporary form of C++, which is perceived to be a pile of mish-mash, was certainly not intended to become that. I guess it "just happened" when features were added.

But they were added with good intentions, and are even occasionally useful. The language might not be the simplest in the world, but when one understands why the features are there and the context in which they entered the language, suddenly C++ doesn't seem all that complex any more.

And the design decisions of C++ have, I trust, been backed up by well-defined principles. In "The Design and Evolution of C++", Stroustrup lists a set of rules, divided into categories of general rules, design support rules, language-technical rules and low-level programming support rules. I'll quote some of them here, since many of them are relevant to contemporary language design as well. Among the general rules were:

- C++'s evolution must be driven by real problems.
- Don't get involved in a sterile quest for perfection.
- C++ must be useful now.

These three rules imply that the purpose of C++ was to become a very practical language, driven by the real needs of real people. As is D, I assume... I'd imagine that D will eventually get features that it currently cannot even dream about, and that will probably make the language more complex. Not all useful language features, or even programming paradigms, have been invented yet -- and who knows if D will one day support one of them.

Another remarkable aspect of C++ is:

- All features must be affordable.
- What you don't use, you don't pay for.

C++ was designed from the beginning to be as efficient as possible, which is kind of reasonable because back then, in the evil mid-eighties, processor time was limited and memory was scarce.
(Not that it isn't today, at least for the hordes of game programmers who, for some reason, are found in large quantities in this newsgroup ;)

Which was kind of bad, at least since it caused the lack of garbage collection in C++. (And no portable way to get (or even print) a stack trace without a debugger, which would be nice. Java has this, and I don't know if it's a performance issue, since you can investigate the stack inside a debugger anyway.) And kind of good, too, since it showed that object-oriented programs can be efficient.

However, D has the same problem as C++ had in the eighties -- C++ also had the following rules, and for good reason:

- Use traditional (dumb) linkers.
- No gratuitous incompatibilities with C.

Without these, C++ would probably have been a much cleaner language, but on the other hand, it might not have existed at all... or at least not attracted the masses like it eventually did. Similarly, D is attempting to be attractive to the masses that already know C++ or Java. Which is kind of nice, since I like C-style syntax :) (Although I'm of the opinion that parts of C's declaration syntax, such as function pointers, could benefit from redesigning. As well as certain other parts.)

Finally, my favourite design rules, which I'd like to strive for myself (and which I'd like D -- or any language -- to develop towards) are:

- Don't try to force people.
- It is more important to allow a useful feature than to prevent every misuse.

C++ doesn't assume that the programmer is stupid; it does not try to prevent the misuse of pointers, manual memory allocation, silly casts or what-have-you. D seems to go in the same direction. Which is, again, nice. At least if the dangerous features are kept as difficult to misuse as possible (which might be hard).

On the other hand, C++ (and D) allows several different programming styles and, often, many ways to do the same thing. I had a class today, on a course called "Principles of Programming Languages".
One of the points presented was that a programming language should provide only one way to do a thing -- for example, in C, there are four ways to increment a variable (++x, x++, x = x + 1 and x += 1). But I would consider that a richness, and languages of a single philosophy, such as Pascal or Eiffel, I'd consider mostly just too restrictive.

Anyway, the fundamental design rules enumerated in chapter 4 of "The Design and Evolution of C++" make good reading for anyone, whether they are C++ or D programmers, language designers or just interested in the topic.

To the point, then:

The design of D seems to have taken the shopping-list approach: take the goodies of C++, Java and Eiffel and put them together; add some widely-used language-defined types (such as string and dynamic arrays), leave out the preprocessor (and a couple of other omissions, which can be seen at http://www.digitalmars.com/d/), and voilà. This seems like a less "scientific" method than the one that produced C++, and -- I might be wrong though -- there seems not to be a well-defined set of rules and principles to guide the design. Maybe it would help in determining the purpose and nature of D to write down and prioritize some rules that would be applied when new features are requested.

(Whoops, it appears that I got a bit sidetracked. Yet another off-topic post, then. Back to the topic:)

I didn't read the book mentioned in the subject yet, just browsed through the draft available on the web page and read a couple of pages from the start. But it seems like a very good book, one that could even become a classic. (Or then again, it might turn out to be crap and fade into the tombs of history. But you never know ;)

Oz the language, however, seems like it really doesn't have to care about commercial success, so it can do whatever it wants to. (Like adopt a non-C-like syntax, for instance. :)

-Antti
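The four increment forms mentioned in the post are easy to check; as statements they are interchangeable in C (and C++), differing only in their value as expressions (++x yields the new value, x++ the old one):

```cpp
// The four ways to increment a variable cited above.
int increment_four_ways() {
    int x = 0;
    ++x;        // pre-increment
    x++;        // post-increment
    x = x + 1;  // plain assignment
    x += 1;     // compound assignment
    return x;   // each form added exactly 1
}
```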
Jan 21 2003
next sibling parent Mark Evans <Mark_member pathlink.com> writes:
Bjarne Stroustrup has long lamented the end result of all those good intentions,
so I'm afraid even its father would not agree with you about C++.

The kernel language technique is a methodical, scientific way to define and
analyze language features which is far more practical than lambda-calculus but
still precisely defined -- unlike our endless newsgroup discussions.  The
authors show how minor changes to the kernel induce whole new paradigms of
programming.  When you analyze these problems you find, in the end, just a few
key concepts at the heart of everything.

It's something like a Turing machine equivalency demonstration but at a much
higher level that is practical for real-world language design.

Mark
Jan 21 2003
prev sibling parent reply Ilya Minkov <midiclub 8ung.at> writes:
Antti Sykari wrote:
 It's instructive to note that the contemporary form of C++, which is
 perceived to be a pile of mish-mash, was certainly not intended to
 become that.  I guess it "just happened" when features were added.
 But they were added with good intentions, and are even occasionally
 useful.  The language might not be the simplest in the world, but when
 one understands why the features are there and the context in which
 they entered the language, suddenly C++ doesn't seem all that complex
 any more.

Well, it's drifting towards Perl, in the sense that C++ is not too hard to programme in, but it's a "tower of babel":

"... go down, confuse their language, so they will not understand one another." Genesis 11:7.

It's the same language, it's simply confused. :> Very confused. :>
 And the design decisions of C++ have, I trust, been backed up by
 well-defined principles.  In "The Design and Evolution of C++",
 Stroustrup lists a set of rules, which are divided into categories of
 general rules, design support rules, language-technical rules and
 low-level programming support rules.  I'll quote some of them here,
 since many of them are relevant to contemporary language design as
 well.

It's not only him. These "committees" can spoil anything. I want this, I want that. There every voice counts, but here it is a fair dictatorship of intellect :> ... oh, I meant monarchy!
 These three rules imply that the purpose of C++ was to become a very
 practical language, driven by real needs of real people. =20

Which are (over-)represented by the committee :>
 However, D has the same problem as C++ had in the eighties -- C++ had
 also the following rules, and then for good reason:
 - Use traditional (dumb) linkers.
 - No gratuitous incompatibilities with C.
 Without these, C++ would've probably been a much cleaner language, but
 on the other hand, it might not have existed at all... or at least
 attracted the masses like it eventually did.  Similarly, D is
 attempting to be attractive for the masses that already know C++ or
 Java.  Which is kind of nice, since I like C-style syntax :) (Although
 I'm of the opinion that parts of C's declaration syntax, such as
 function pointers, could benefit from redesigning.  As well as certain
 other parts.)

Hey, there are tons of wonderful languages... Java is almost the worst of the best... I meant of the modern ones. Where are they? In universities? I can't bring my friends in to "help me out in OCaml", or "Sather", or anything, but in C, C++, D -- no problem.
 Finally, my favourite design rules, which I'd like to strive for
 myself (and which I'd like D - or any language - to develop towards)
 are:
 - Don't try to force people.

It is worse when people force something they don't 100% understand :> which wouldn't happen here.
 - It is more important to allow a useful feature than to prevent every
 misuse.

Hm... Right, most academics are *too* restrictive. But being not restrictive at all inadvertently results in a mess.
 On the other hand, C++ (and D) allows several different programming
 styles and, often, many ways to do the same thing.

Why do I read foreign Delphi code as if it were mine? A good example set by the library? The language is not really restrictive... like Eiffel is. True, it does have a bit less flexibility than C. But C is still not too hard to read, because most of the different ways fall into a very local scope. C++ goes global.

I guess the D compiler should restrict most things that "look like bugs" in C code by warnings, and provide ways to shut warnings off, one by one. A compiler should only report one error, or up to N warnings (N < 5), then break compilation to support a fast, pinpoint-correct debugging style. I'll make a testsuite and a list with explanations later.
 Anyway, the fundamental design rules enumerated in the chapter 4 of
 "The Design and Evolution of C++" make a good reading for anyone,
 whether they are C++ or D programmers, language designers or just
 interested in the topic.
 To the point, then:
 The design of D seems to be have taken the shopping-list approach:
 take the goodies of C++, Java and Eiffel and put them together; add
 some widely-used language-defined types (such as string and dynamic
 arrays), leave out the preprocessor, (a couple of other omissions,
 which can be seen at http://www.digitalmars.com/d/) and voilà.  This
 seems like a less "scientific" method than the one that produced C++,
 and -- I might be wrong though -- there seems not to be a well-defined
 set of rules and principles to guide the design.  Maybe it would help
 in determining the purpose and nature of D to write down and
 prioritize some rules that would be applied when new features are
 requested.

It is drifting in the right direction -- the Modula-3 direction, collecting only features which fit well. To an extent, it seems to have the same basic ideas as Modula-3, which is much *less* of a mess than C++. Although no one has formulated any loud rules, good only to be violated :> except that an OS had to be written with it, robust, fast and efficient, with all applications, in the shortest possible time.
 (Whoops, it appears that I got a bit sidetracked.  Yet another
 off-topic post, then. Back to the topic:)
 I didn't read the book mentioned in the subject yet, just browsed
 through the draft available on the web page and read a couple of pages
 from the start.  But it seems like a very good book, one that could
 even become a classic.  (Or then again, it might turn out to be crap
 and fade into the tombs of history.  But you never know ;)
 Oz the language, however, seems like it really doesn't have to care
 about commercial success, so it can do whatever it wants to.  (Like
 adopt a non-C-like syntax, for instance. :)
 -Antti

-i.
Jan 21 2003
parent reply Ilya Minkov <midiclub 8ung.at> writes:
Ilya Minkov wrote:
 Well, it's drifting towards Perl, in the sense that C++ is not too hard 
 to programme in, but it's a "tower of babel":
 
 "... go down, confuse their language, so they will not understand one 
 another."
 
 Genesis 11:7.
 
 It's the same language, it's simply confused. :>
 Very confused. :>
 

I think *someone over there* wanted to punish the silly people trying to do the impossible things in C++ :> Climb into the sky, attach a GC and contracts to C++... it's a striking similarity... I don't believe in *him*, but I'm literate enough to respect what he does :>

-i.
Jan 21 2003
parent Mark Evans <Mark_member pathlink.com> writes:
 collecting only features which fit well. 

The point of the Oz book is to deal with 'features' at the level of the kernel language. Then by definition, the new capability fits well into the design.
- What you don't use, you don't pay for.

Right, and the book shows the minimum kernel language required to support various high-level paradigms.
C++ was designed from the beginning to be as efficient as possible,

Efficiency is irrelevant to the kernel language specification. It's just a way to define language semantics, like a Turing machine. Nobody complains that Turing machines are inefficient, but in fact every conceivable computer program (in any language) can be stated as a Turing machine. The kernel language serves a similar purpose at a higher level which is more appropriate for language designers.
On the other hand, C++ (and D) allows several different programming
styles and, often, many ways to do the same thing.

Functional and logic programming is impossible in D and C++. Oz covers all paradigms including these, in a clean, coherent design. This language demonstrates that with proper kernel language considerations, one can support all known programming styles -- without the design-by-committee ugliness of C++.
- C++'s evolution must be driven by real problems.
- Don't get involved in a sterile quest for perfection.
- C++ must be useful now.

Oh please. I posted comments about the practicality police anticipating this kind of shootout. You missed. Please go back and read my police remarks, then read the book before wasting more ammo. You might find that you're shooting at the good guys.

If it makes you feel better, I think a lot of computer science is academic too, but not all of it fits that description, and you should be open-minded enough to consider such a possibility.

Mark
Jan 21 2003
prev sibling parent reply "Daniel Yokomiso" <daniel_yokomiso yahoo.com.br> writes:
"Mark Evans" <Mark_member pathlink.com> escreveu na mensagem
news:b0hpdj$1kib$1 digitaldaemon.com...
 This post is not just another language citation.  It's about language
 fundamentals (although the interesting language Oz serves as a reference).

[snip]

It's a very nice book. I read it when they released it for public review some time ago. Their approach of using kernel language features to analyse language expressiveness is very good, but their work has more theory than practice. It's like using lambda calculus: it's more expressive than a Turing machine, but it's impractical for everyday use. Oz is a well designed language, but it lacks practical use.

IMO language designers will use the ideas from this book in their languages, discarding some, and some years from now a practical language with a powerful set of kernel features will emerge. Pretty much like C (if compared to a plain Turing machine), Haskell (if compared to pure lambda calculus) or ??? (if compared to Lisp) ;-)

---
Outgoing mail is certified Virus Free.
Checked by AVG anti-virus system (http://www.grisoft.com).
Version: 6.0.443 / Virus Database: 248 - Release Date: 10/1/2003
Jan 21 2003
next sibling parent reply Mark Evans <Mark_member pathlink.com> writes:
Their approach of using kernel language features to analyse
language expressiveness is very good, but their work has more theory than
practice.

It's a 1,000 page book with the minimum theory required to perform the analysis, and presented in a practical style. The idea here is to provide some unifying perspective on language design to help Walter pick a sweet spot based on sound design principles.

I reject vague assertions of impracticality. The next person who makes that claim had better adduce some evidence. Oz is a research demonstration language and was intended as such, but the results are general.
 IMO language designers will use the
 ideas from this book in their languages, discarding some

Yes, and that's exactly the point. The book identifies the fundamental choices that can be made, and also how to make them. Glad you agree it's a very nice book with a very good approach.
It's like using lambda calculus

Not hardly! The authors state right up front that such techniques don't help real-world language designers and programmers. Mark
Jan 22 2003
parent reply "Daniel Yokomiso" <daniel_yokomiso yahoo.com.br> writes:
Hi,

    Comments embedded.

"Mark Evans" <Mark_member pathlink.com> escreveu na mensagem
news:b0msu1$1io1$1 digitaldaemon.com...
Their approach of using kernel language features to analyse
language expressiveness is very good, but their work has more theory than
practice.

It's a 1,000 page book with the minimum theory required to perform the analysis
 and presented in a practical style.  The idea here is to provide some unifying
 perspective on language design to help Walter pick a sweet spot based on sound
 design principles.

 I reject vague assertions of impracticality.  The next person who makes that
 claim had better adduce some evidence.  Oz is a research demonstration language
 and was intended as such, but the results are general.

If you re-read what I said you'll see that I never used the word "impracticality" when talking about Oz. I used it when talking about lambda calculus in everyday work. Pure lambda calculus looks wonderful, but it's not practical to use. Oz looks wonderful, but how many libraries were developed using it, by people that weren't involved with the language design? Compare it to Java. Java has lots of flaws, but is somewhat consistent (I don't claim full consistency, just some).

I like studying and researching new ideas, but when people start to make statements about generality or expressiveness I start reading with care. It's like saying Design by Contract improves software reliability, instead of promotes good design of method and class contracts. Some features look very good (e.g. multi-methods, Sather-like iterators, generics, tuples, etc.) but some of them will bite you back sometime. Whenever I add a new feature in my language, I test it by coding some basic classes, like collections, regex, numerics, report generation, xml, gui, etc., and checking how the design changed. Sometimes I discover that something looked very nice, but became ugly with usage.

I'm sure that three minutes before I release the first alpha compiler people will send me several posts complaining about unforeseen side-effects of some features, be it either syntax or semantics. Walter feels this everyday, as people here post problems with almost anything he puts in D. Oz is a very good language, but IMHO it still needs more usage to make evident what is good and what should be changed, like any other language.
 IMO language designers will use the
 ideas from this book in their languages, discarding some

Yes, and that's exactly the point.  The book identifies the fundamental choices
 that can be made, and also how to make them.  Glad you agree it's a very nice
 book with a very good approach.

It's like using lambda calculus

Not hardly!  The authors state right up front that such techniques don't help
 real-world language designers and programmers.

 Mark

Well, Erlang and Common Lisp are real-world languages that make heavy use of lambda calculus ;-) I'm just saying that some features may look nice (e.g. concurrency/parallelism) but can be very tricky if you don't add the correct amount of sugar. There's a lot of good computer scientists out there trying to define the "correct" semantics for concurrency, and yet there's no prevailing set of concurrency primitives.

Best regards,
Daniel Yokomiso.

"If you want to be happy, be."
Jan 22 2003
next sibling parent reply "Robert Medeiros" <robert.medeiros utoronto.ca> writes:
     I like studying and researching new ideas, but when people start to make
 statements about generality or expressiveness I start reading it with care.
 It's like saying, Design by Contract improves software reliability, instead
 of promotes good design of method and class contracts. Some features look
 very good (e.g. multi-methods, Sather-like iterators, generics, tuples,
 etc.) but some of them will bite you back sometime. Whenever I add a new
 [snip]
 syntax or semantics. Walter feel this everyday, as people here post problems
 with almost anything he puts in D.
A little philosophy goes a long way:

http://www.mcs.vuw.ac.nz/comp/Publications/CS-TR-02-9.abs.html
http://www.wall.org/~larry/pm.html

Rob
Jan 22 2003
next sibling parent reply "Daniel Yokomiso" <daniel_yokomiso yahoo.com.br> writes:
"Robert Medeiros" <robert.medeiros utoronto.ca> escreveu na mensagem
news:b0n6hl$1oi1$1 digitaldaemon.com...
      I like studying and researching new ideas, but when people start to make
  statements about generality or expressiveness I start reading it with care.
  It's like saying, Design by Contract improves software reliability, instead
  of promotes good design of method and class contracts. Some features look
  very good (e.g. multi-methods, Sather-like iterators, generics, tuples,
  etc.) but some of them will bite you back sometime. Whenever I add a new
  [snip]
  syntax or semantics. Walter feel this everyday, as people here post problems
  with almost anything he puts in D.

 A little philosophy goes a long way:

 http://www.mcs.vuw.ac.nz/comp/Publications/CS-TR-02-9.abs.html
 http://www.wall.org/~larry/pm.html

 Rob

Thanks, I'll read both.
Jan 22 2003
parent "Daniel Yokomiso" <daniel_yokomiso yahoo.com.br> writes:
"Daniel Yokomiso" <daniel_yokomiso yahoo.com.br> escreveu na mensagem
news:b0n9ur$1qjv$1 digitaldaemon.com...
 "Robert Medeiros" <robert.medeiros utoronto.ca> escreveu na mensagem
 news:b0n6hl$1oi1$1 digitaldaemon.com...
      I like studying and researching new ideas, but when people start to make
  statements about generality or expressiveness I start reading it with care.
  It's like saying, Design by Contract improves software reliability, instead
  of promotes good design of method and class contracts. Some features look
  very good (e.g. multi-methods, Sather-like iterators, generics, tuples,
  etc.) but some of them will bite you back sometime. Whenever I add a new
  [snip]
  syntax or semantics. Walter feel this everyday, as people here post problems
  with almost anything he puts in D.

 A little philosophy goes a long way:

 http://www.mcs.vuw.ac.nz/comp/Publications/CS-TR-02-9.abs.html
 http://www.wall.org/~larry/pm.html

 Rob

Thanks, I'll read both.

The first is a collection of thoughts about computer science and postmodernism. Sometimes interesting, but in general I don't think they grok the idea of postmodernism. There's a lot of "don't" and "not" in their text, and definition by negation is (somewhat) the antithesis of postmodernism.

The Larry Wall rant is very insightful, as is everything he writes. He has a quite unique POV on language design and it's highly educational to read his reasoning and opinions (the apocalypses series is very good). In the sense of his talk, D is a postmodern programming language.

Anyway, I think this thread is becoming OT right now. Interesting, but OT still.
Jan 22 2003
prev sibling parent reply "Walter" <walter digitalmars.com> writes:
"Robert Medeiros" <robert.medeiros utoronto.ca> wrote in message
news:b0n6hl$1oi1$1 digitaldaemon.com...
 A little philosophy goes a long way:

 http://www.mcs.vuw.ac.nz/comp/Publications/CS-TR-02-9.abs.html
 http://www.wall.org/~larry/pm.html

A fun read. Consider this: "The Cult of Originality shows up in computer science as well. For some reason, many languages that came out of academia suffer from this. Everything is reinvented from first principles (or in some cases, zeroeth principles), and nothing in the language resembles anything in any other language you've ever seen. And then the language designer wonders why the language never catches on."
Feb 27 2003
parent reply Mark Evans <Mark_member pathlink.com> writes:
The Oz book (of the subject line) agrees with that sentiment in a certain way,
though it would not apportion blame in that puerile, uninformed manner.
Nonacademics are at fault, and the book is an attempt to help them.  What the
book blames academics for, if anything, is their failure to communicate results
in a way 'the masses' can understand.

The book explicitly recognizes the constant reinvention of wheels by language
designers who ignore academic research.  To address that problem, it adopts a
'programmer-friendly' presentation of fundamental concepts.

So you have in Oz the opposite of 'nothing in the language resembles anything in
any other language you've ever seen.' In fact everything in the kernel language
resembles everything you've ever seen.  It shows exactly how many wheels have
been reinvented and names them all.  Next time some hacker trots out one of
these wheels, and says 'behold a new language,' you can evaluate his work
objectively.

The Cult of Originality is a hacker phenomenon.  Academic research is all about
classification, taxonomy, and discovery.  Academics face vicious peer review --
the ultimate Darwinian struggle?  Non-academics have cult followings and flame
wars, with very little technical substance.  Change the syntax a bit, and voila!
a new language is born.  Add a new feature, and voila! a new cult following.

No better example than Perl fits the reinvented-wheel category:  pure marketing
and hackerdom, nothing original in a language sense, and horrid syntax, possibly
the absolute worst on earth.  I hope for better things of D.  BTW I have plenty
of colleagues who use Perl at work, and their comments range from 'horrid' to
'write-once language'...so even those using it are not always fond of it.  My
impression of Perl wizards is that they have little exposure to better
languages, and therefore think Perl, and only Perl, offers what it offers.  One
way to achieve success is to put blinders on the customers so they can't see the
competition.

Mark


In article <b3lpcs$1kv2$1 digitaldaemon.com>, Walter says...
"Robert Medeiros" <robert.medeiros utoronto.ca> wrote in message
news:b0n6hl$1oi1$1 digitaldaemon.com...
 A little philosophy goes a long way:

 http://www.mcs.vuw.ac.nz/comp/Publications/CS-TR-02-9.abs.html
 http://www.wall.org/~larry/pm.html

A fun read. Consider this: "The Cult of Originality shows up in computer science as well. For some reason, many languages that came out of academia suffer from this. Everything is reinvented from first principles (or in some cases, zeroeth principles), and nothing in the language resembles anything in any other language you've ever seen. And then the language designer wonders why the language never catches on."

Feb 27 2003
next sibling parent "Walter" <walter digitalmars.com> writes:
"Mark Evans" <Mark_member pathlink.com> wrote in message
news:b3m3eo$1r1i$1 digitaldaemon.com...
 The Cult of Originality is a hacker phenomenon.  Academic research is all

 classification, taxonomy, and discovery.  Academics face vicious peer

 the ultimate Darwinian struggle?  Non-academics have cult followings and

 wars, with very little technical substance.  Change the syntax a bit, and

 a new language is born.  Add a new feature, and voila! a new cult

I don't claim that D is inventing much of anything original. What it does do is attempt to refactor C and C++ into something much easier to deal with. C++ has a terrible burden of having to support legacy C code. The compromises needed to do that permeate the design (look at the tag name space issue for an example). Java has a doubly onerous burden - legacy code, and inability to change the byte code vm.
 No better example than Perl fits the reinvented-wheel category: pure marketing and hackerdom, nothing original in a language sense, and horrid syntax, possibly the absolute worst on earth. I hope for better things of D. BTW I have plenty of colleagues who use Perl at work, and their comments range from 'horrid' to 'write-once language'...so even those using it are not always fond of it. My impression of Perl wizards is that they have little exposure to better languages, and therefore think Perl, and only Perl, offers what it offers. One way to achieve success is to put blinders on the customers so they can't see the competition.


I haven't worked enough with Perl to have much of an informed opinion about that.
Feb 27 2003
prev sibling next sibling parent "Mike Wynn" <mike.wynn l8night.co.uk> writes:
 No better example than Perl fits the reinvented-wheel category: pure marketing and hackerdom, nothing original in a language sense, and horrid syntax, possibly the absolute worst on earth. I hope for better things of D. BTW I have plenty of colleagues who use Perl at work, and their comments range from 'horrid' to 'write-once language'...so even those using it are not always fond of it. My impression of Perl wizards is that they have little exposure to better languages, and therefore think Perl, and only Perl, offers what it offers. One way to achieve success is to put blinders on the customers so they can't see the competition.


totally agree, and Perl is a hacker's lang, and great at it. You write a script and run it ... easy. It has all the power of C with a few extras, like subs (I think they are real closures) and objects. You even know the context within which your function was called, so you can return different things if evaluated as a scalar, an array, or an object. The optimiser and compiler are good enough that the performance is good enough to use for even quite large projects; mod_perl CGI is fantastically fast (the second time you access the script). It does all the things you might want if you've got to write a program in a hurry: built-in hashtables, objects, closures, fast regexp processing, and arrays/hashtables that can be passed by value or reference. Nice loop constructs (redo as well as continue and break (next/last)). The only feature Perl (5) lacks is a switch statement, but there are ways to get a switch-like layout which allow switching on regexps too.

The syntax is horrific (at first), beaten within an inch of its life with the ugly stick, BUT that is to allow easy, fast parsing. Like any lang, once you get used to Perl syntax it's not a write-only lang at all. And personally it's better than Pascal (pet hate of mine, as I keep ending up having to use Delphi).

One feature of Perl that I really like is the 'if', which is 'if' <cond> <block> ['elsif' <cond> <block>] ['else' <block>], or <statement> [('if' | 'unless') <cond>] ';'. It feels quite 'english' to write $foo = 10 if defined $bar; i.e. `if ( bar != null ) foo=10;`

Perl did do one thing that was original: it took all the useful features from other langs and bound them together. There are a lot of things you can write in 10 lines of Perl that border on 100 lines in any other lang. I also think the Perl wizards have given it a bad name by using the odd features to excess; you can write Perl that is as readable as C or any other Algol-based lang.
I don't see that the 'market' Perl occupies would overlap much with D; they are designed for different jobs. What do you think of PHP's syntax (which is a slightly cleaner Perl with features missing)? Again, it's that way so it's easy to parse.
Feb 27 2003
prev sibling parent reply Bill Cox <bill viasic.com> writes:
 The Oz book (of the subject line) agrees with that sentiment in a certain way,
 though it would not apportion blame in that puerile, uninformed manner.
 Nonacademics are at fault, and the book is an attempt to help them.  What the
 book blames academics for, if anything, is their failure to communicate results
 in a way 'the masses' can understand.

I find that much (not all) academic research has a research focus, rather than a pragmatic focus (which makes sense to me). The result tends to be slower languages that would hurt acceptance of my application, which gets benchmarked against the competition for every sale. Every new concept is naturally pushed by its inventor as hard as possible (publish or perish), regardless of its real value. There are exceptions, and I'll list Sather as one.

The other thing I find is that experience in both language design and application design are required to be able to put together a new language well. That's why Walter seems a natural for this. Frankly, D addresses needs that I only knew I had after years of intensive coding for high-performance commercial applications. For example, no pointers. It's counter-intuitive, but every C and C++ application I've worked on where programmers were encouraged to avoid them ran faster than competing tools. Programmers are required to typedef away pointers to classes, and many hacks that were used instead of proper modeling with classes go away.

The bar for commercial acceptance of a language feature is different than in academics. If an idea is too complicated, we can't use it in industry. There are many such features of good academic languages in this category, and some in C++ that hinder its use. Most C++ programmers I know don't know about the virtual function table pointer, and use, but don't write, templates. I was looking at some older C++ code yesterday, and the authors were hopelessly lost in a sea of features they didn't know how to use. It was a real mess.
 The book explicitly recognizes the constant reinvention of wheels by language
 designers who ignore academic research.  To address that problem, it adopts a
 'programmer-friendly' presentation of fundamental concepts.

Personally, I appreciate your efforts to enlighten us in this group of academic work, both good and bad. I've certainly learned a lot. I'd have to disagree that academics aren't re-inventing the wheel very often. For example, Sather's "include" construct solves the same problem as "virtual classes", "framework templates", and covariation. There must be 5 to 10 academic languages out there that re-invented this.
 So you have in Oz the opposite of 'nothing in the language resembles anything in any other language you've ever seen.' In fact everything in the kernel language resembles everything you've ever seen.  It shows exactly how many wheels have been reinvented and names them all.  Next time some hacker trots out one of these wheels, and says 'behold a new language,' you can evaluate his work objectively.


Ok... I'll read the book!
 The Cult of Originality is a hacker phenomenon.  Academic research is all about classification, taxonomy, and discovery.  Academics face vicious peer review -- the ultimate Darwinian struggle?  Non-academics have cult followings and flame wars, with very little technical substance.  Change the syntax a bit, and voila! a new language is born.  Add a new feature, and voila! a new cult following.

Hmmm... When I try to list languages with cult followings, I get some industry-born stuff, but mostly academic languages. I would put the following in this category, and this is just the tip of the iceberg: Lisp, Scheme, ML and variants, Prolog, APL, Nice, Kiev, Nickel, Eiffel... I probably have some errors here, but the list is very very long. These have one thing in common - they're not really quite right for developing most cpu intensive commercial applications, yet their backers keep pushing them as such. If I had a dollar for each time I've heard "just get a faster CPU if you need speed" ...

Industry came up with Forth, Perl, and Java. I hear game hackers came up with C. There are more, but not so many as from academics. The characteristic of these languages is that they are hammers for solving the immediate problems faced by programmers in industry. Forth is memory lean. Perl rocks for simple text manipulation. Java is portable and good for networking apps.
 No better example than Perl fits the reinvented-wheel category:  pure marketing and hackerdom, nothing original in a language sense, and horrid syntax, possibly the absolute worst on earth.  I hope for better things of D.  BTW I have plenty of colleagues who use Perl at work, and their comments range from 'horrid' to 'write-once language'...so even those using it are not always fond of it.

Agreed, at least as far as being a well-designed language. I cringe at not only the language, but at most of the code written in it. Perl seems to have at least two things going for it: 1) It focuses on a real problem where a solution was needed. 2) It offers the masses every simple feature that they could possibly want. There's nothing hard to understand in original Perl. Most features are aimed at saving a few keystrokes (like the <> variable). Most programmers love that. The excitement I've seen in programmers' eyes as they read the spec reminds me of my children at Christmas.
 My impression of Perl wizards is that they have little exposure to better languages, and therefore think Perl, and only Perl, offers what it offers.  One way to achieve success is to put blinders on the customers so they can't see the competition.

I'd have to guess you're still in your youthful 20's. I envy your idealistic enthusiasm for language design. The real world is Dilbert Land. Most Perl wizards aren't ignorant... They CHOSE to be Perl wizards, and would again knowing everything you know. Bill
Feb 28 2003
next sibling parent reply "Walter" <walter digitalmars.com> writes:
"Bill Cox" <bill viasic.com> wrote in message
news:3E5F4733.8070809 viasic.com...
 That's why Walter seems a natural for this.  Frankly, D addresses needs that I only knew I had after years of intensive coding for high-performance commercial applications.

D is simply the language I always wanted to use <g>.
 Personally, I appreciate your efforts to enlighten us in this group of academic work, both good and bad.  I've certainly learned a lot.

I agree.
 I hear game hackers came
 up with C.

One thing that is never mentioned about C is that I believe the PC is what pushed C into the mass accepted language that it is. The reasons are:

1) PCs were slow and small. Writing high performance, memory efficient apps was a requirement, not a luxury.
2) C was a good fit for the PC architecture.
3) C was the ONLY high level language that had a decent compiler for it in the early PC days. (The only other options were BASIC and assembler.)

Sure, there were shipping FORTRAN and Pascal compilers for the PC early on, but the implementations were so truly terrible they were useless (and I and my coworkers really did try to get them to work).
Feb 28 2003
parent reply "Walter" <walter digitalmars.com> writes:
"Antti Sykari" <jsykari gamma.hut.fi> wrote in message
news:87smu7b1sx.fsf hoastest1-8c.hoasnet.inet.fi...
 "Walter" <walter digitalmars.com> writes:
 One thing that is never mentioned about C is that I believe the PC is


 pushed C into the mass accepted language that it is. The reasons are:
 1) PCs were slow and small. Writing high performance, memory efficient


 was a requirement, not a luxury.
 2) C was a good fit for the PC architecture.
 3) C was the ONLY high level language that had a decent compiler for it


 the early PC days. (The only other options were BASIC and assembler.)


 there were shipping FORTRAN and Pascal compilers for the PC early on,


 the implementations were so truly terrible they were useless (and I and


 coworkers really did try to get them to work).

programming language textbooks from the late 1970's and the early 1980's. Many of them didn't even mention C and those that did, didn't usually consider it to be much of a language. Algol, Pascal, Cobol and Fortran were the languages of the day, with an occasional side note for Lisp, Prolog and Smalltalk. Although C had existed from around 1973 is was still a cryptical-looking, system-oriented niche language used only by some UNIX researchers and didn't seem to be of much interest.

You're right. I started programming in 1975, and had heard of all those languages (and many more like bliss, simula, APL) except for C, which I never heard of until 1983.
Mar 01 2003
parent Patrick Down <pat codemoon.com> writes:
"Walter" <walter digitalmars.com> wrote in
news:b3r0ls$2ai6$1 digitaldaemon.com: 

 
 "Antti Sykari" <jsykari gamma.hut.fi> wrote in message
 news:87smu7b1sx.fsf hoastest1-8c.hoasnet.inet.fi...
 "Walter" <walter digitalmars.com> writes:
 One thing that is never mentioned about C is that I believe the PC
 is 


 pushed C into the mass accepted language that it is. The reasons
 are: 1) PCs were slow and small. Writing high performance, memory
 efficient 


 was a requirement, not a luxury.
 2) C was a good fit for the PC architecture.
 3) C was the ONLY high level language that had a decent compiler
 for it 


 the early PC days. (The only other options were BASIC and
 assembler.) 


 there were shipping FORTRAN and Pascal compilers for the PC early
 on, 


 the implementations were so truly terrible they were useless (and I
 and 


 coworkers really did try to get them to work).

programming language textbooks from the late 1970's and the early 1980's. Many of them didn't even mention C and those that did, didn't usually consider it to be much of a language. Algol, Pascal, Cobol and Fortran were the languages of the day, with an occasional side note for Lisp, Prolog and Smalltalk. Although C had existed from around 1973 is was still a cryptical-looking, system-oriented niche language used only by some UNIX researchers and didn't seem to be of much interest.

You're right. I started programming in 1975, and had heard of all those languages (and many more like bliss, simula, APL) except for C, which I never heard of until 1983.

When I went to the university in 1984, they had just switched the computer science department's core teaching language from Pascal to C.
Mar 01 2003
prev sibling parent reply Mark Evans <Mark_member pathlink.com> writes:
Bill Cox says,

 I find that much (not all) academic research has a research focus, 
 rather than a pragmatic focus (which makes sense to me).

Could you supply 3 examples of each type? Otherwise I'll just dismiss the assertion as an unsupported, vague generalization. Industry funds much research, as does government. C and C++ came out of a lab from a Ph.D. in mathematics named Dennis M. Ritchie and a Ph.D. in computer science named Bjarne Stroustrup. You could hardly ask for a more academic birthplace. The dozens of research URLs recently supplied to the D project all have direct applicability.

The notion that computer scientists have no interest in practical results, no design sense, or somehow don't listen to industry, is astonishingly silly. I am worn out listening to this kind of thing from D folks. Computer scientists have common sense and sometimes even good taste, along with other skills.
 The result tends to be slower languages that would hurt acceptance
 of my application, which gets benchmarked against the competition
 for every sale.

The idea that every new language construct is automatically suspect as a threat to performance is another diatribe I tire of hearing. It ain't so. Any dynamically typed language will be slower than statically typed languages, but no one is proposing that D go dynamic.
 Every new concept is naturally pushed by it's inventer as hard as
 possible (publish or perish), regardless of its real value.

Again I would appreciate 3 specific examples. My impression of the D folks is that nobody reads any research material to speak of. Some languages are designed to explore limits of certain paradigms. Even then, you find that CS folks listen and adapt. The wonderful INRIA folks put OO features into SML, and bequeathed to us O'Caml -- a real screamer of a language, right up there with C. Compiled O'Caml gives about 50% the speed of raw C with probably 50 times the expressiveness and complete C/C++ interfacing.
 There are exceptions, and I'll list Sather as one.

Sather is no longer developed, sadly. (What that fact says about its exceptional status in your eyes, I do not know.)
 The other thing I find is that experience in both language design, and 
 application design are required to be able to put together a new 
 language well.  That's why Walter seems a natural for this.

Walter wrote games, but has he designed any languages beyond D? Todd Proebsting has worked on many languages -- his stuff is worth reading. See the Disruptive Languages presentation. He is quite interested in what makes a language "sell." You see, academics are not unconcerned about such things.
 Frankly, D addresses needs that I only [k]new I had after years of
 intensive coding for high-performance commercial applications.

You're not alone. C/C++ users know their flaws and agonies. That's why I'm here, too. That's also part of the motivation behind Java, C#, ...
 The bar for commercial acceptance of a language feature is different
 than in academics. If an idea is too complicated, we can't use it in
 industry.

The fallacy here is that a 'language feature' automatically increases complexity. Features increase expressiveness, or in other words, make programming simpler and less error-prone. True, they make writing a compiler more difficult, but I would rather have the complexity in the compiler than in my code.

The phrase 'commercial acceptance of a language feature' is strange. I picture a software manager talking to subordinates: 'No Bob, we won't let you use C++, because it has generics.' 'Sally, you can't use O'Caml, because it has recursion.' I don't see that scenario.

A car with five gear ratios and a V8 engine is more capable than one with three gear ratios and a 4-cylinder. Nonetheless you can drive it like the 4-cylinder. The point is to have the extra power on board when needed, and use it only under expert control. Either type of driving will get you home, but one is more pleasant, more fun, and faster. If you give the car to granny or junior, just tell them to keep it under 25 MPH.
 Personally, I apriciate you're [sic] efforts to enlighten us in this group
 of academic work, both good and bad. I've certainly learned a lot.

Thanks! Do you care to identify the 'bad' research?
 I'd have to disagree that academics aren't re-inventing the wheel very 
 often.  For example, Sather's "include" construct solves the same 
 problem as "virtual classes", "framework templates", and covariation. 
 There must be 5 to 10 academic languages out there that re-invented this.

The Oz book addresses academics too, you know. Given any problem domain, I expect that every conceivable solution has probably been explored by some academician -- somewhere. That is their job.
 Ok... I'll read the book!

Glad to get a commitment. Many D people have expressed opinions without reading it.
 Hmmm...  When I try to list languages with cult followings, I get some
 industry born stuff, but mostly academic languages.  I would put the 
 following in this category, and this is just the tip of the iceberg:
 Lisp, Scheme, ML and variants, Prolog, APL, Nice, Kiev, Nickel, 
 Eiffel...

Er, Nickel has a cult following? Are we certain it has *any* following? Do you have any citations, or is this more guesswork? If we're doing guesswork, then I'm as qualified as you, and will strikethrough all items on your list except perhaps Eiffel. When I call Perl a cult, I am just echoing its adherents. They admit it! "Because Perl enjoys this social club air, comprehension of the subject is sometimes informed by a certain sense of smugness." (etc.) http://63.236.73.146/Authoring/Languages/Perl/PerlfortheWeb/index22.html O'Caml and Alice ML are exceptionally pragmatic languages useful for production work. I don't see a cult around them. O'Caml folks use other languages liberally, and in fact that is one of its strengths, foreign function interfacing.
 I probably have some errors here, but the list is very very
 long.  These have one thing in common - they're not really quite right 
 for developing most cpu intensive commercial applications, yet their 
 backers keep pushing them as such.

I don't buy that. Don't most of these languages offer C interfacing of some kind? Why do you suppose they do that? Because they acknowledge the need for speed.
 If I had a dollar for each time I've
 heard "just get a faster CPU if you need speed" ...

Speed is great, really great, but D folks are too paranoid about the supposed costs of expressiveness. Let's push the envelope on performance, and even make it priority #1 -- fine. Now let's also push the expressiveness envelope as a close second priority. D should push both sides to their limits, maximizing the language power. My fear is that there is so much narrow attention on #1 that nobody is really worried about #2. In fact there is this paranoid knee-jerk reaction which kicks in, putting #2 in artificial competition with #1 when in fact they can often cohabitate.

We have in D something with better expressiveness than C++. D will always be better than C++. OK. But are we content with a language that is "just" better than C++? I want a language with maximum performance and maximum expressiveness, not just "better than C++" expressiveness. C++ is such an ugly kludge that almost any evolution is better (Java, C#, what have you).

Todd Proebsting's "Disruptive Languages" talk prophesies that the next disruptive language will be slower than its predecessor because what really counts is programmer productivity. Before someone whacks me on the head, I understand that D puts CPU cycles above expressiveness. Todd just happens to think that industry will value expressiveness more.

The main point I wish to make is that expressiveness should not be ignored or considered a threat. D has priorities, but should strive to max them all out. Just because D lets us go "under the hood" doesn't mean D should force us under the hood. That's what expressiveness is all about -- I express the logic in the most appropriate manner. The language gives me tradeoff choices. If I, the programmer, am willing to give up CPU cycles to finish my job faster, then D should permit that to the maximum extent of the law.
 Industry came up with Forth, Perl, and Java.  I hear game hackers came
 up with C.  There are more, but not so many as from academics.

No, Ph.D. academics working in a research lab came up with C.
 The characteristics of these langauges is that they are hammers for
 solving the immediate problems faced by programmers in industry.
 Forth is memory lean. Perl rocks for simple text manipulation. Java
 is portable and good for networking apps.

Academic projects are also started to solve specific problems. The industry workers you admire were trained in academia by the way. Mark
Feb 28 2003
next sibling parent "Walter" <walter digitalmars.com> writes:
"Mark Evans" <Mark_member pathlink.com> wrote in message
news:b3pg8m$19te$1 digitaldaemon.com...
 The other thing I find is that experience in both language design, and application design are required to be able to put together a new language well.  That's why Walter seems a natural for this.

 Walter wrote games, but has he designed any languages beyond D?

Yes, the ABEL (Advanced Boolean Expression Language). The language was a big commercial success for Data I/O, and has lasted for 15 years or so. It's now obsolete because the hardware chips it was targeted for are gone. My experiences, however, are more in implementing languages than designing them.
Feb 28 2003
prev sibling next sibling parent reply Farmer <itsFarmer. freenet.de> writes:
Mark Evans <Mark_member pathlink.com> wrote in
news:b3pg8m$19te$1 digitaldaemon.com: 

Hi,

in your last post you often mentioned the term "language expressiveness".

What is this exactly? 
Is there any academic definition for it (That is easily understood) ?
How can you measure this? [ the kernel language? :-) ]


Why focus on performance:

I think programmers focus so much on performance because it is easy to 
benchmark performance.
When I write some code that runs faster than the code of a fellow 
programmer, I can say: "My code is better than yours. It is faster." (Of 
course this is true only if performance is the number one goal for the 
application to be written.)
The fellow programmer would not be offended since the statement is 
objective. He could learn more about programming and algorithms, rewrite 
his code and beat my code.

When I write some code that I think is more expressive than the code of a 
fellow programmer, I'd better not say: "My code is better than yours. It 
is more expressive."
The fellow programmer would be offended: he would think - "Why should your 
code be more expressive than mine? He's doing a smear campaign against my 
coding practices!" The problem is that the two programmers would spend 
more time on arguing about what the most expressive code is than on doing 
their job.


For similar reasons, performance is often used for marketing campaigns. 
Performance usually is the trait of software that is easiest to measure and 
to compare. Price is much harder, due to the complex and different 
licensing practices <g>.


More comments are embedded.


Farmer

 The result tends to be slower languages that would hurt acceptance
 of my application, which gets benchmarked against the competition
 for every sale.

The idea that every new language construct is automatically suspect as a threat to performance is another diatribe I tire of hearing. It ain't so.

[...] higher abstraction instructions. For simple assembler-like languages, it is possible to code a reasonably fast implementation of an algorithm without the help of a profiler. Fancy new language features *may* have adverse effects on performance, since they are designed to appeal to man's brain instead of hardware.

So, when I look e.g. at virtual fields in Kiev, I reject this feature until I know their good points (better support for MI?) and their bad points (some CPU cycles are likely to be wasted, aren't they?). Could I use virtual fields in performance-critical code sections? Or should I use virtual fields only for prototypes/research work?

You may be tired of hearing that stuff. But programmers ask these questions for good reasons.
 Every new concept is naturally pushed by it's inventer as hard as
 possible (publish or perish), regardless of its real value.

Again I would appreciate 3 specific examples. My impression of the D folks is that nobody reads any research material to speak of.

I guess you are right on this one. But I also have the impression that researchers don't read any source code material to speak of. So researchers tend to come up with great solutions for problems that are of minor importance for mainstream applications.
 Some languages are designed to explore limits of certain paradigms.
 Even then, you find that CS folks listen and adapt. The wonderful
 INRIA folks put OO features into SML, and bequeathed to us O'Caml -- a
 real screamer of a language, right up there with C. Compiled O'Caml
 gives about 50% the speed of raw C with probably 50 times the
 expressiveness and complete C/C++ interfacing.

I wonder what O'Caml's figures for memory consumption are?
 The other thing I find is that experience in both language design,
 and application design are required to be able to put together a new 
 language well.  That's why Walter seems a natural for this.

Walter wrote games, but has he designed any languages beyond D? Todd Proebsting has worked on many languages -- his stuff is worth reading. See the Disruptive Languages presentation. He is quite interested in what makes a language "sell." You see, academics are not unconcerned about such things.

Todd Proebsting's presentation was a great read. I really enjoyed it.
 Walter wrote games, but has he designed any languages beyond D?

"What is an Architect? He designs a house for another to build and someone else to inhabit." Taken from "How Java’s Floating-Point Hurts Everyone Everywhere" (was posted in another thread). I'm glad that Walter designs a house for him to build and him and everybody else to inhabit. I think that designing games and designing languages have some similar traits, these days:
-It's NOT about how many FEATURES or ORIGINAL IDEAS you put in. The KEY IS that EVERYTHING fits WELL. Simple games can be fun. More feature-rich games can be even more fun (just my personal opinion), though just adding features to games does not make them fun.
-Every designer has access to the same set of features: generics, interfaces, inheritance, multimethods, static type system, easy interfacing to C, portability, etc. So by adding features to the language you cannot gain any significant advantage over your competitors.
-Market success is only loosely coupled with superiority of games: Even average games can outsell truly outstanding games. This happens because of designer/vendor's reputation, hype, mainstream conformance, or simply luck (appearance at the right time)
 The bar for commercial acceptance of a language feature is different
 than in academics. If an idea is too complicated, we can't use it in
 industry.

The fallacy here is that a 'language feature' automatically increases complexity. Features increase expressiveness, or in other words, make programming simpler and less error-prone. True, they make writing a compiler more difficult, but I would rather have the complexity in the compiler than in my code.

Adding language features does increase complexity. The language spec becomes bigger, book authors will have to write more pages about D, programmers will have to read and understand more pages about D, vendors of development tools (I do not mean compiler vendors here) are faced with more complexity. Simplicity is a language feature by itself. From doing Java programming I learnt that it is a major feature.
 
 The phrase 'commercial acceptance of a language feature' is strange. I
 picture a software manager talking to subordinates: 'No Bob we won't
 let you use C++, because it has generics.' 'Sally you can't use
 O'Caml, because it has recursion.' I don't see that scenario.

Just imagine: 'No Bob we won't let you use C++, because it has POINTERS.' 'Sally you can't use C#, because it supports GOTOs.' Actually I believe [But I can hardly remember, so things may be a bit different], that some embedded programming folks made a subset of C++ that banned templates. They said it would add too much complexity on compiler implementations. Stroustrup could not convince them that templates do not interfere with embedded programming. (you don't have to use them; many C++ compilers already support templates)
 A car with five gear ratios and a V8 engine is more capable than one
 with three gear ratios and a 4-cylinder. Nonetheless you can drive it
 like the 4-cylinder. 

A car with a V8 engine is more expensive: The engine is more expensive. Because of the car's extra power, extra weight and extra size, the car must also be improved on other parts, like brakes or tires. That makes the car even more costly to manufacture. I drive a car with 4 cylinders; I don't want to drive one with a V8 engine. It just consumes more fuel, but does not increase the car's usefulness. Though I drive in one of the very few countries that have no speed limit on autobahns, I could not save any time to speak of, if I drove a car with a V8 engine.
The point is to have the extra power on board
 when needed, and use it only under expert control.

 
 Either type of driving will get you home, but one is more pleasant,
 more fun, and faster. If you give the car to granny or junior, just
 tell them to keep it under 25 MPH.

Junior drives it into the tree. Junior is dead. Worse, your car is broken!
 
 Personally, I apriciate you're [sic] efforts to enlighten us in this
 group of academic work, both good and bad. I've certainly learned a
 lot. 

Thanks! Do you care to identify the 'bad' research?

The paper "A Critique of C++" from Ian Joyner that you had posted. It's like a sh*tty marketing paper that wears the clothes of science.
 I probably have some errors here, but the list is very very
 long.  These have one thing in common - they're not really quite
 right for developing most cpu intensive commercial applications, yet
 their backers keep pushing them as such.

I don't buy that. Don't most of these languages offer C interfacing of some kind? Why do you suppose they do that? Because they acknowledge the need for speed.

Having to interface with C for performance critical code is an ugly kludge. Why not put the ability for speed right into the language instead?
 
 If I had a dollar for each time I've
 heard "just get a faster CPU if you need speed" ...

Speed is great, really great, but D folks are too paranoid about the supposed costs of expressiveness. Let's push the envelope on performance, and even make it priority #1 -- fine. Now let's also push the expressiveness envelope as a close second priority.

I agree. Sometimes I wish that D would always make safe behaviour the default and fast but bug-prone ways of doing things merely possible.
 
 D should push both sides to their limits, maximizing the language
 power. My fear is that there is so much narrow attention on #1 that
 nobody is really worried about #2. In fact there is this paranoid
 knee-jerk reaction which kicks in, putting #2 in artificial
 competition with #1 when in fact they can often cohabitate.

D could increase performance AND (not or) expressiveness.
 
 We have in D something with better expressiveness than C++. D will
 always be better than C++. OK. But are we content with a language that
 is "just" better than C++? I want a language with maximum performance
 and maximum expressiveness, not just "better than C++" expressiveness.
 C++ is such an ugly kludge that almost any evolution is better (Java,
 C#, what have you).

Neither Java nor C# is better than C++. But for many/some/a few applications today, Java and C# are the better choice. Both languages were not designed to be a successor to C++.
 
 Todd Proebsting's "Disruptive Languages" talk prophecies that the next
 disruptive language will be slower than its predecessor because what
 really counts is programmer productivity. Before someone whacks me on
 the head, I understand that D puts CPU cycles above expressiveness.
 Todd just happens to think that industry will value expressiveness
 more. The main point I wish to make is that expressiveness should not
 be ignored or considered a threat. D has priorities, but should strive
 to max them all out.

Before I read Todd's presentation, I thought it would be better if D put less emphasis on performance. But now I think that D must put strong emphasis on performance to survive against a forthcoming language, created by MS, that focuses on programmer productivity. My own prophecy is that the next disruptive language will succeed because of a great library. The speed or expressiveness of the language itself will be of secondary importance. The library will have the greatest influence on programmer productivity and performance.
 Just because D lets us go "under the hood" doesn't mean D should force
 us under the hood. That's what expressiveness is all about -- I
 express the logic in the most appropriate manner. The language gives
 me tradeoff choices. If I, the programmer, am willing to give up CPU
 cycles to finish my job faster, then D should permit that to the
 maximum extent of the law.

Yes, it would be nice if D could be flexible enough to provide both.
 Mark
 
 

Mar 01 2003
next sibling parent "Walter" <walter digitalmars.com> writes:
"Farmer" <itsFarmer. freenet.de> wrote in message
news:Xns9331BF31FC6CAitsFarmer 63.105.9.61...
 But I also have the impression that researchers don't read any source code
 material to speak of. So researchers tend to come up with great solutions
 for problems that are of minor importance for mainstream applications.

One thing I do bring to the table is 18 years of doing tech support for compilers.
 I'm glad that Walter designs a house for him to build and him and everybody
 else to inhabit.

The interesting thing about my career is when I've built a product to the marketing department's specifications, or listened to what Bob tells me that Fred wants, the resulting product was invariably a failure. When I designed a product that *I* wanted to use, and put features in it that Bob told me that *Bob* needed, the products were a success.
 I think that designing games and designing languages have some similar
 traits, these days:
 -It's NOT about how many FEATURES or ORIGINAL IDEAS you put in. The KEY IS
 that EVERYTHING fits WELL. Simple games can be fun. More feature-rich games
 can be even more fun (just my personal opinion), though just adding
 features to games does not make them fun.

The interesting thing about my game Empire is it had very few features. You could show someone how to play it in a minute or so. I think that was a large factor in its success. Simple rules, but complex play resulting from those rules.
 Adding language features does increase complexity. The language spec
 becomes bigger, book authors will have to write more pages about D,
 programmers will have to read and understand more pages about D, vendors of
 development tools (I do not mean compiler vendors here) are faced with more
 complexity.
 Simplicity is a language feature by itself. From doing Java programming I
 learnt that it is a major feature.

You're right. But there is a downside to it - I think the way Java does inner classes and closures is too complicated. They just aren't a natural extension to the way Java does other things.
 Actually I believe [But I can hardly remember, so things may be a bit
 different], that some embedded programming folks made a subset of C++ that
 banned templates. They said it would add too much complexity on compiler
 implementations. Stroustrup could not convince them that templates do not
 interfere with embedded programming. (you don't have to use them; many C++
 compilers already support templates)

I remember it too, I think it was called "embedded C++".
 Having to interface with C for performance critical code is an ugly kludge.
 Why not put the ability for speed right into the language instead?

You're right. What those languages are in effect doing are saying "if you want to program in X, you must learn both X and C."
 Neither Java nor C# is better than C++. But for many/some/a few
 applications today, Java and C# is the better choice. Both languages were
 not designed to be a successor to C++.

Right. D has an entirely different purpose than C# or Java has.
 Before I read Todd's presentation, I thought it would be better if D put
 less emphasis on performance. But now I think that D must put strong
 emphasis on performance to survive against a forthcoming language, created
 by MS that focuses on programmer productivity.

There are a lot of other programming languages that de-emphasize performance to achieve some other attribute. What I'm trying to do with D is show that you can get substantial programmer productivity improvements over C++, without sacrificing performance. D even promises to be able to generate *faster* code than C++. That is the hook that makes D different and appealing to C++ programmers. They won't be giving up what attracted them to C++ in the first place, performance and control.
Mar 01 2003
prev sibling next sibling parent reply Ilya Minkov <midiclub 8ung.at> writes:
Farmer wrote:
 Mark Evans <Mark_member pathlink.com> wrote in
 news:b3pg8m$19te$1 digitaldaemon.com: 
 
 Hi,
 
 in your last post you often mentioned the term "languages expressiveness".
 
 What is this exactly? 
 Is there any academic definition for it (That is easily understood) ?
 How can you measure this? [ the kernel language? :-) ]

I think I can try to define it. A language being expressive means that many problems can be expressed directly and intuitively in its terms. Since "directly" and esp. "intuitively" cannot be measured well, it is and stays a very vague goal, which is nonetheless very well worth pursuing. It might affect the popularity of a language. Like: "perl is a very expressive shell scripting language" means that perl is felt to be easy to use for shell scripting tasks. "Python is a very expressive rapid prototyping language." And so on.

I also work with projects where performance is important, but safety even more so. Do you prefer a fast elevator or a safe one? Presumably both. But if it can't be both, it had better be safe, else you're risking your life.

And I beg you for more respect towards the researchers. They usually research problems which are of importance. However, since they don't strive to make a product, but insight, they may choose ways of implementation and tools which are not appropriate for commercial software development, because they might just be better suited for that particular purpose. Or it doesn't really matter why; it may be a personal preference as well. They give you knowledge, which is to your advantage only, and can be profitably used in an industrial product. Just usually not the tools they have used to prove their thesis. Having a "research focus" rather than a "pragmatic purpose" is a good goal, since insight is always helpful in a pragmatic sense.

-i.
Mar 03 2003
parent reply Farmer <itsFarmer. freenet.de> writes:
Hi,

thanks for giving me another explanation of "language expressiveness".

Well, "intuitively" is not a generally useful concept, the problem is e.g.:
What's intuitive to people who know how computers work, is often counter-
intuitive to people that don't know and the other way round. 
So, practically I cannot discuss inclusion/exclusion of features to D 
claiming that this change would make things more intuitive.

The other definition "problems can be expressed directly" was very 
enlightening for me. 
If you take the idea of expressiveness to the extreme (problems are 
expressed directly, the language is the solution), isn't it very likely 
that the language is bound to some (or even many) specific application 
domains? I think that having some level of indirection for expressing 
problems is natural for a general purpose language.

 I also work with projects where performance is important, but else 
 safety is. Do you prefer a fast elevator or a safe one? Presumably
 both. But if it can't be both, it'd rather be safe else you're risking
 your life. 

In practice it doesn't matter whether your program crashes or is too slow. Both can be simply unacceptable. Of course, I favour stability/maintainability whenever it is possible.
 
 And i beg you for more respect towards the researchers. They usually 
 research problems which are of importance. However, since they don't 
 strive to make a product, but insight, they may choose ways of 
 implementation and tools which are not appropriate for commercial 
 software development, because they might just be better suited for
 that particular purpose. Or doesn't really matter why, may be a
 personal preference as well. They give you knowledge, which is to your
 advantage only, and can be profitably used in an industrial product.
 Just usually not the tools they have used to prove their thesis.
 Having a "research focus" rather than a "pragmatic purpose" is a good
 goal, since insight is always helpful in pragmatic sense.
 
 -i.
 

Maybe you refer to my statement: "But I also have the impression that researchers don't read any source code material to speak of. So researchers tend to come up with great solutions for problems that are of minor importance for mainstream applications." [Quote myself here to please my ego]

I said: "So researchers tend to come up with great solutions for problems that are of minor importance for mainstream applications." I did not mean to say: "So researchers tend to come up with great solutions for problems that are of minor importance." Also, I did not mean to say: "So researchers tend to come up with great solutions for problems that are useless for mainstream applications."

Maybe you refer to my question: "Is there any academic definition for it (That is easily understood) ?" I guess that my statement has ambiguous meaning depending on your native language. So I refactor it a bit: "Is there any precise definition for it (That is easily understood) ?"

Farmer
Mar 04 2003
parent reply "Walter" <walter digitalmars.com> writes:
"Farmer" <itsFarmer. freenet.de> wrote in message
news:Xns9335CE7A4BC9itsFarmer 63.105.9.61...
 Well, "intuitively" is not a generally useful concept, the problem is e.g.:
 What's intuitive to people who know how computers work, is often counter-
 intuitive to people that don't know and the other way round.
 So, practically I cannot discuss inclusion/exclusion of features to D
 claiming that this change would make things more intuitive.

You're right. But in the context of D, I would give more weight to "what is intuitive to a programmer comfortable with C or C++" than what might be intuitive to a programmer coming from a very different language, such as Lisp or Prolog. I want the C/C++ programmer to feel like someone who has spent an hour driving down a dirt road and has now turned onto the main, freshly paved, macadam highway.
Mar 05 2003
parent reply Mark Evans <Mark_member pathlink.com> writes:
Walter wrote:
in the context of D, I would give more weight to "what is
intuitive to a programmer comfortable with C or C++" than what might be
intuitive to a programmer coming from a very different language, such as
Lisp or Prolog. I want the C/C++ programmer feel like someone who has spent
an hour driving down a dirt road and has now turned onto the main, freshly
paved, macadam highway.

Then you want high-level capability.  Compare

C-ish
----------------------------
int myfunction();
List mylist();
for(int i=0; i<mylist.len(); i++)
{ mylist[i] = myfunction(mylist[i]); }
----------------------------

Lisp-ish
----------------------------
int myfunction();
List mylist;
mylist = map(myfunction,mylist);
----------------------------

It's all about choices.  If a programmer wants to write C for-loops instead of using map, he can do that.  I see no conflict here.  In fact D has the opportunity to introduce to C folks choices that they never had before.  They might thank you for that.

Mark
Mar 05 2003
next sibling parent reply Bill Cox <bill viasic.com> writes:
 It's all about choices.  If a programmer wants to write C for-loops instead of
 using map, he can do that.  I see no conflict here.  In fact D has the
 opportunity to introduce to C folks choices that they never had before.  They
 might thank you for that.

Or they might run screaming and naked for the woods. Bill
Mar 05 2003
next sibling parent Mark Evans <Mark_member pathlink.com> writes:
Or they might run screaming and naked for the woods.

Bill

Talk about paranoia! M.
Mar 05 2003
prev sibling parent reply Dan Liebgold <Dan_member pathlink.com> writes:
In article <3E6648D5.3030002 viasic.com>, Bill Cox says...
 It's all about choices.  If a programmer wants to write C for-loops instead of
 using map, he can do that.  I see no conflict here.  In fact D has the
 opportunity to introduce to C folks choices that they never had before.  They
 might thank you for that.

Or they might run screaming and naked for the woods. Bill

Why naked, exactly? ;)

But back to Mark's point, if D did implement builtin support for constructs like "map" over lists, it would really only be a win. The old for-loop way of doing things is not going to be eliminated, and as new programmers catch on to the map (for example) idea, they will migrate over and become more productive.

Built in garbage collection throws open the doors to all sorts of more expressive ways of doing things, and meanwhile the C-ish constructs will always be available. D can really do everything right that the STL is attempting to do, without the negative impact to understandability and implementational complexity.

Dan
Mar 05 2003
next sibling parent "Walter" <walter digitalmars.com> writes:
"Dan Liebgold" <Dan_member pathlink.com> wrote in message
news:b45oa0$2858$1 digitaldaemon.com...
 But back to Mark's point, if D did implement builtin support for constructs
 like "map" over lists, it would really only be a win.  The old for-loop way
 of doing things is not going to be eliminated, and as new programmers catch
 on to the map (for example) idea, they will migrate over and become more
 productive.

Yes, you're right.
 Built in garbage collection throws open the doors to all sorts of more
 expressive ways of doing things, and meanwhile the C-ish constructs will
 always be available.  D can really do everything right that the STL is
 attempting to do, without the negative impact to understandability and
 implementational complexity.

It would be great to do STL better than STL.
Mar 05 2003
prev sibling next sibling parent Bill Cox <bill viasic.com> writes:
Dan Liebgold wrote:
 In article <3E6648D5.3030002 viasic.com>, Bill Cox says...
 
It's all about choices.  If a programmer wants to write C for-loops instead of
using map, he can do that.  I see no conflict here.  In fact D has the
opportunity to introduce to C folks choices that they never had before.  They
might thank you for that.

Or they might run screaming and naked for the woods. Bill

Why naked, exactly? ;)

It was just too fun not to say.
 But back to Mark's point, if D did implement builtin support for constructs
 like "map" over lists, it would really only be a win.  The old for-loop way
 of doing things is not going to be eliminated, and as new programmers catch
 on to the map (for example) idea, they will migrate over and become more
 productive.
 
 Built in garbage collection throws open the doors to all sorts of more
 expressive ways of doing things, and meanwhile the C-ish constructs will always
 be available.  D can really do everything right that the STL is attempting to
 do, without the negative impact to understandability and implementational
 complexity.

You and Mark are probably right. Don't tell anybody around here that I said that. Bill
Mar 05 2003
prev sibling parent reply Daniel Yokomiso <Daniel_member pathlink.com> writes:
In article <b45oa0$2858$1 digitaldaemon.com>, Dan Liebgold says...
In article <3E6648D5.3030002 viasic.com>, Bill Cox says...
 It's all about choices.  If a programmer wants to write C for-loops instead of
 using map, he can do that.  I see no conflict here.  In fact D has the
 opportunity to introduce to C folks choices that they never had before.  They
 might thank you for that.

Or they might run screaming and naked for the woods. Bill

Why naked, exactly? ;) But back to Mark's point, if D did implement builtin support for constructs like "map" over lists, it would really only be a win. The old for-loop way of doing things is not going to be eliminated, and as new programmers catch on to the map (for example) idea, whey will migrate over and become more productive.

D already has support for higher-order functions. The Deimos Template Library has such functions available for arrays. You can download it from www.minddrome.com/produtos/d under the Deimos link. Beware compiler version bugs, this version (0.0.1) works with dmd 0.50 (perhaps with some versions later). The next version will work with dmd 0.58 and new syntax for function types.
Built in garbage collection throws open the doors to all sorts of more
expressive ways of doing things, and meanwhile the C-ish constructs will always
be available.  D can really do everything right that the STL is attempting to
do, without the negative impact to understandability and implementational
complexity.

Dan

Mar 06 2003
next sibling parent reply Mark Evans <Mark_member pathlink.com> writes:
 But such code would never work, because how can you
 map over a function that takes nothing and give back ints? ;-)

My "-ish" qualifier disclaimed all accuracy. BTW one maps over a list, not a function.
D already has support for higher-order functions.

This dubious claim implies that D supports first-class functions (which are required for HOFs). Last I heard, Walter was only recently working on such matters: D/11250

And we had a whole thread about first-class functions, too. Nobody told me that D already supports them. I would have celebrated.

The FC++ papers are worth reading because they discuss the particular deficiencies of C++ which make the functional paradigm hard to support. In D we can address them directly instead of with FC++'s workarounds. It's also worth noting that FC++ does not claim to be complete, just as close as you can get in ISO C++:

"Despite FC++’s abilities, it is not a complete functional language with polymorphism and type inference. One of the main drawbacks is that variable types have to be declared explicitly. Although FC++ type inference eliminates the need for typing intermediate results, if the final result of an FC++ expression needs to be stored, the variable should be explicitly typed. This restriction will hopefully be removed with the addition of the typeof keyword in the next revision of the C++ standard."
Mar 06 2003
parent "Walter" <walter digitalmars.com> writes:
"Mark Evans" <Mark_member pathlink.com> wrote in message
news:b48bcv$m4p$1 digitaldaemon.com...
 This dubious claim implies that D supports first-class functions (which are
 required for HOFs).  Last I heard, Walter was only recently working on such
 matters:   D/11250

 And we had a whole thread about first-class functions, too.  Nobody told me
 that D already supports them.  I would have celebrated.

They are in the latest version. I'm interested in any deficiencies in it.
Mar 06 2003
prev sibling parent reply "Sean L. Palmer" <seanpalmer directvinternet.com> writes:
Do you really need the recursing boolean in the TRange type?  You'd be
better off just writing a more robust test in the Intersects method and not
having it recurse.  Or make a private Intersects_NoRecurse method;  just
don't saddle all instances with an extra bool.

Sean

"Daniel Yokomiso" <Daniel_member pathlink.com> wrote in message
news:b47srh$cn9$1 digitaldaemon.com...
 D already has support for higher-order functions. The Deimos Template
 Library has such functions available for arrays. You can download it from
 www.minddrome.com/produtos/d under the Deimos link. Beware compiler version
 bugs, this version (0.0.1) works with dmd 0.50 (perhaps with some versions
 later). The next version will work with dmd 0.58 and new syntax for
 function types.

Mar 06 2003
parent reply Daniel Yokomiso <Daniel_member pathlink.com> writes:
In article <b49hcj$1daf$1 digitaldaemon.com>, Sean L. Palmer says...
Do you really need the recursing boolean in the TRange type?  You'd be
better off just writing a more robust test in the Intersects method and not
having it recurse.  Or make a private Intersects_NoRecurse method;  just
don't saddle all instances with an extra bool.

Sean

[snip] It's just a hack to avoid recursion on contracts. It's used to enforce symmetrical properties of operations. In the Eiffel library there are lots of examples of such postconditions.
Mar 08 2003
parent reply "Sean L. Palmer" <seanpalmer directvinternet.com> writes:
Can it be done in a way that affects performance and memory usage less?

Sean

"Daniel Yokomiso" <Daniel_member pathlink.com> wrote in message
news:b4crih$am4$1 digitaldaemon.com...
 In article <b49hcj$1daf$1 digitaldaemon.com>, Sean L. Palmer says...
Do you really need the recursing boolean in the TRange type?  You'd be
better off just writing a more robust test in the Intersects method and not
having it recurse.  Or make a private Intersects_NoRecurse method;  just
don't saddle all instances with an extra bool.

Sean

 [snip] It's just a hack to avoid recursion on contracts. It's used to
 enforce symmetrical properties of operations. In the Eiffel library there
 are lots of examples of such postconditions.

Mar 09 2003
parent Burton Radons <loth users.sourceforge.net> writes:
Sean L. Palmer wrote:
 Can it be done in a way that affects performance and memory usage less?

    debug private boolean recursing = false;

in and out contracts, and the contents of assert, aren't run through the semantic phase when -release.
Mar 09 2003
prev sibling parent Daniel Yokomiso <Daniel_member pathlink.com> writes:
In article <b45gcj$22r6$1 digitaldaemon.com>, Mark Evans says...
[snip]
C-ish
----------------------------
int myfunction();
List mylist();
for(int i=0; i<mylist.len(); i++)
{ mylist[i] = myfunction(mylist[i]); }
----------------------------
Lisp-ish
----------------------------
int myfunction();
List mylist;
mylist = map(myfunction,mylist);
----------------------------

D-ish
----------------------------
alias List instance TList(int).List;
int myfunction();
List mylist;
mylist = mylist.map(myfunction);
----------------------------

But such code would never work, because how can you map over a function that takes nothing and give back ints? ;-) [snip]
Mar 06 2003
prev sibling parent Farmer <itsFarmer. freenet.de> writes:
1)
Your definition and presentation of language expressiveness as the level of 
possible abstractions with metalinguistic abstraction as the climax, is 
excellent.

Obviously with metalinguistic abstraction problems can be expressed more 
directly, which was Ilya's definition of expressiveness. Furthermore 
metalinguistic abstraction is a good example for the downsides of language 
expressiveness.


2)
 D is mostly much better in this regard (function literals, nested
 functions, garbage collection, etc.), but there's one area where it
 lags behind C++: templates.  For example, to simply declare a set of[...]

Very right. Recently, I did some D template programming. Definitely the cumbersome usage of templates would drive away programmers. I think, exploring possible applications of templates (e.g. a graph lib) will increase the usefulness of D templates over time. But C++ templates are not at their best, either. Walter could make using templates more terse and nicer at some later time, when people have more experience with D templates.

I agree, D has more abstractions than C++. I guess that compared with more research oriented languages it is worse [worse is better?] in at least these areas:
- procedural abstraction: think of inout parameters or function literals: "function (SubClass)" cannot be assigned to function literal of type "function (BaseClass)"
- Object abstraction: E.g.: Sather fixes the quirks of OOP much better.
- "Concurrent process": D's concurrent process model is like the simplistic one that Java(tm) uses.

3)
 Let's conclude with a nice little paper by MacLennan (1997) about the
 effect of aesthetics in the context of language design: [...]

I want to understand it this way: Do not hide language deficiencies with nice looking syntax sugar. Better improve the design. The design is best when it looks nice. Farmer.
http://www.info.ucl.ac.be/~pvr/
(Concepts, Techniques, and Models of Computer Programming, which was
already mentioned but is worth repeating over and over.  The kernel
language approach is something that would be healthy to be interested
in for most people that would like to call themselves computer
scientists.)

Huh, glad I prefer not calling myself a computer scientist; it saves me reading over 1000 pages ;-)
Mar 07 2003
prev sibling next sibling parent "Sean L. Palmer" <seanpalmer directvinternet.com> writes:
I certainly wouldn't want the compiler keeping profiling histories in my
code, at least not if they weren't used.  Perhaps the virtual properties
would exist only if you tried to use them.

Sean

"Antti Sykari" <jsykari gamma.hut.fi> wrote in message
news:87ptpac4bd.fsf hoastest1-8c.hoasnet.inet.fi...
 Mark Evans <Mark_member pathlink.com> writes:
 Todd Proebsting has worked on many languages -- his stuff is worth
 reading.

Indeed. I found this exploratory article quite interesting:

ftp://ftp.research.microsoft.com/pub/tr/tr-2000-54.ps

"We propose a new language feature, a program history, that significantly reduces bookkeeping code in imperative programs. A history represents previous program state that is not explicitly recorded by the programmer. By reducing bookkeeping, programs are more convenient to write and less error-prone. Example program histories include a list that represents all the values previously assigned to a given variable, an accumulator that represents the sum of values assigned to a given variable, and a counter that represents the number of times a given loop has iterated. Many program histories can be implemented with low overhead."

-Antti

Mar 01 2003
prev sibling parent reply Bill Cox <bill viasic.com> writes:
Hi, Mark.

Perhaps it would be more productive if we focused discussion on specific 
language constructs which improve expressiveness.  Here are some I would 
like to discuss.

My bent is always towards features that give us more power while helping 
us to write faster applications.  I've used most of these enough in 
high-performance applications to know that they can be very valuable.

Template frameworks (or equivalent, such as virtual classes):

This is much like copying and pasting code from an entire module into 
your module, with some modifications.  This would allow me to 
efficiently reuse code designed for mutually recursive data structures 
like graphs.  There seems to be no speed overhead for using this feature.

Dynamic Inheritance (dynamic extension?)

This is where you add class members on-the-fly.  The simple version is 
slow, but we've been using a method that has exactly 0 speed penalty 
(actually, there seems to be a speed-up due to cache efficiency).  The 
trick is using arrays of properties to represent members of classes, 
rather than structure-style layout with contiguous fields.  This allows 
us to write efficient code with a shared object database model where the 
database objects are shared between tools.  Without it, each tool has to 
come up with some hack to add its custom data to the objects that exist 
in the database.
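The "arrays of properties" trick can be sketched in Python. This is my own illustrative reconstruction, not Bill's actual implementation; the `PropertyTable` name and API are invented:

```python
# Sketch (assumed, not the poster's real code): store each class member as a
# parallel array ("column") indexed by object id, so a tool can bolt a new
# member onto existing database objects without changing their layout.

class PropertyTable:
    def __init__(self):
        self.columns = {}      # member name -> list of values, one per object
        self.count = 0         # number of objects allocated so far

    def new_object(self):
        oid = self.count
        self.count += 1
        for col in self.columns.values():
            col.append(None)   # grow every column to cover the new object
        return oid

    def add_member(self, name, default=None):
        # "dynamic inheritance": any tool may add a member after the fact
        self.columns[name] = [default] * self.count

    def get(self, oid, name):
        return self.columns[name][oid]

    def set(self, oid, name, value):
        self.columns[name][oid] = value


nodes = PropertyTable()
nodes.add_member("label", "")
a = nodes.new_object()
nodes.set(a, "label", "n0")

# A second tool later attaches its own data to the same objects:
nodes.add_member("visited", False)
nodes.set(a, "visited", True)
print(nodes.get(a, "label"), nodes.get(a, "visited"))  # n0 True
```

Each member access is one array index, which is why the struct-of-arrays layout can even beat contiguous fields on cache behaviour for column-wise scans.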

Extended support for relationships between classes.

Most languages that borrow heavily from C have some equivalent to 
pointers.  Even Java has object handles as a basic type.  Using these, 
we can write template libraries of container classes to implement 
relationships between classes such as linked lists, or hash tables. 
However, the simple model that a relationship is owned solely by the 
class containing the container class member is not very efficient, and 
leaves some tasks to the programmer to implement.  By having language 
constructs that allow container classes to add members to both the 
parent and child classes of a relationship, we gain both efficiency and 
safety.  For example, if class Graph owns a linked list of Nodes, and 
Nodes point back to Graph, we have to remember to NULL out the back 
pointer of a Node when we remove it from the linked list.  This leads to 
lots of errors.  Also, the next pointer isn't embedded in the child 
class, as would be the natural and efficient implementation if we were 
doing it by hand without the container class.
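A minimal sketch of the idea, with invented names: the relationship code owns both the parent's list head and the child's embedded next/back pointers, so a removal can never leave a stale back pointer behind:

```python
# Hypothetical sketch: the Graph/Node relationship manages the child's back
# pointer and embedded next pointer itself, so removing a Node cannot leave a
# dangling graph pointer (the error-prone step when coded by hand).

class Node:
    def __init__(self):
        self.graph = None   # back pointer, managed by the relationship
        self.next = None    # next pointer embedded in the child, as desired

class Graph:
    def __init__(self):
        self.first_node = None

    def append(self, node):
        node.graph = self
        node.next = self.first_node
        self.first_node = node

    def remove(self, node):
        prev, cur = None, self.first_node
        while cur is not node:
            prev, cur = cur, cur.next
        if prev is None:
            self.first_node = cur.next
        else:
            prev.next = cur.next
        node.next = None
        node.graph = None   # the NULL-ing out programmers forget to do

g = Graph()
n = Node()
g.append(n)
g.remove(n)
print(n.graph)  # None
```

In the language feature Bill describes, the container template would inject `next` and `graph` into Node itself, rather than the programmer writing them.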

Support for code generators.

Code generators add much power to a programmer's toolbox.  GUI 
generators are the obvious example.  Where I work, we even use data 
structure code generators.  Full recursive destructors are also 
generated, as is highly efficient memory management for each class. 
Adding simple support for code generators can be done by extending the 
language so that generators only have to write D code, not read it. 
This means that classes have to be able to be extended simply by 
creating new files with appropriate declarations.
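As a toy illustration of such a write-only generator (the emitted syntax and the helper are invented): it produces class boilerplate, including the recursive destructor mentioned above, as fresh source text, never needing to parse existing code:

```python
# Illustrative sketch: a code generator that only WRITES source, as proposed
# above. Output syntax is invented; the naive rule "types starting with an
# uppercase letter are objects" is an assumption for the example.

def generate_class(name, fields):
    lines = [f"class {name} {{"]
    for fname, ftype in fields:
        lines.append(f"    {ftype} {fname};")
    lines.append("    void destroy() {")
    for fname, ftype in fields:
        if ftype[0].isupper():          # destroy object-valued fields only
            lines.append(f"        {fname}.destroy();")
    lines.append("    }")
    lines.append("}")
    return "\n".join(lines)

src = generate_class("Node", [("id", "int"), ("owner", "Graph")])
print(src)
```

Because the generator emits a self-contained declaration, the language only has to allow classes to be extended from a separately generated file, exactly as the paragraph above asks.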

Compiler extensions

This is a tough feature, but it can be very powerful.  Basically, the 
compiler needs to be written in D, and end-users need to be able to make 
extensions directly to the compiler by writing new files in D.  A custom 
compiler gets compiled for each user application, so these new features 
can be included efficiently.  How much of the compiler gets exposed, and 
in what manner makes a big difference.  For example, if D supported 
descriptions of parsers in a yacc-like manner, we could add syntax to D 
with yacc like rules in our applications.  It seems possible to allow 
users to add new built-in types.  Complex numbers could be such a 
language extension.  A simple compiler I wrote allows users to write 
code generators that run off the compiler's representation of the user's 
code.  This is used, for example, to generate the container class code 
in both parent and child classes, as described above in support for 
relationships, as well as recursive destructors.  The possibilities here 
seem huge.

Bill
Mar 01 2003
parent reply Ilya Minkov <midiclub 8ung.at> writes:
Though it's not for me, I'll take the liberty to comment a bit.

Bill Cox wrote:
 Hi, Mark.
 
 Perhaps it would be more productive if we focused discussion on specific 
 language constructs which improve expressiveness.  Here are some I would 
 like to discuss.

Mark has given us an ocean of information which is mostly very interesting, but we can't cope with the vast amount. :) Thanks for trying to make him think in more pragmatic terms.
 Dynamic Inheritance (dynamic extension?)
 
 This is where you add class members on-the-fly.  The simple version is 
 slow, but we've been using a method that has exactly 0 speed penatly 
 (actually, there seems to be a speed-up due to cache efficiency).  The 
 trick is using arrays of properties to represent members of classes, 
 rather than structure style layout with contiguous fields.  This allows 
 us to write efficient code with a shared object database model where the 
 database objects are shared between tools.  Without it, each tool has to 
 come up with some hack to add it's custom data to the objects that exist 
 in the database.

Great idea. It'd also be very useful for scripting languages; thus a scripting language and D could communicate fairly directly. Not to mention that it could reduce executable size when large GUI frameworks are used, because of less "generality". (maybe, due to mix-ins?)
 Support for code generators.
 
 Code generators add much power to a programmer's toolbox.  GUI 
 generators are the obvious example.  Where I work, we even use data 
 structure code generators.  Full recursive destructors are also 
 generated, as is highly efficient memory management for each class. 
 Adding simple support for code generators can be done by extending the 
 language so that generators only have to write D code, not read it. This 
 means that classes have to be able to be extended simply by creating new 
 files with appropriate declarations.

 Compiler extensions
 
 This is a tough feature, but it can be very powerful.  Basically, the 
 compiler needs to be written in D, and end-users need to be able to make 
 extensions directly to the compiler by writing new files in D.  A custom 
 compiler gets compiled for each user application, so these new features 
 can be included efficiently.  How much of the compiler gets exposed, and 
 in what manner makes a big difference.  For example, if D supported 
 descriptions of parsers in a yacc-like manner, we could add syntax to D 
 with yacc like rules in our applications.  It seems possible to allow 
 users to add new built-in types.  Complex numbers could be such a 
 language extension.  A simple compiler I wrote allows users to write 
 code generators that run off the compiler's representation of the user's 
 code.  This is used, for example, to generate the container class code 
 in both parent and child classes, as described above in support for 
 relationships, as well as recursive destructors.  The possiblilties here 
 seem huge.

Right. This would impose that compilers have to have a similar internal structure, though.

I planned to write an external tool which does something like that and yields D source. Wonder if I'll ever come to it, but I value any ideas. Then, a back-end interface can be attached to it, for which I'd implement a compiler into C and possibly CIL/.NET. And other code generation tools (GUI constructor toolkits and the like) could use it somehow to parse D code with extensions and/or make additions to it. That is, its core functionality should be a library.

-i.
Mar 02 2003
parent reply "Sean L. Palmer" <seanpalmer directvinternet.com> writes:
Someone at my place of work says that the big thing that makes him think D
will not do well is the lack of an Eval() function.

Eval("D code snippet") or Eval("D function definitions", "D function call")
I suppose are what he's after.  Some other languages such as Lisp and Perl
have these.
It would require at least a Dscript interpreter, or a small compiler in the
runtime library.
I'm not sure everyone would benefit from it but occasionally the most
straightforward way to solve a task is to write a program that writes
programs based on runtime data.
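For comparison, Python's built-in `exec`/`eval` do roughly what is asked for here, in the two-argument shape Sean sketches (function definitions plus a function call, both as strings):

```python
# Python stand-in for the proposed Eval(): compile definitions from a string
# at runtime, then evaluate a call against them. The triple() function is an
# invented example.

defs = "def triple(x):\n    return 3 * x\n"
env = {}
exec(defs, env)                    # "D function definitions"
result = eval("triple(14)", env)   # "D function call"
print(result)  # 42
```

For D this would indeed require shipping an interpreter or a small compiler in the runtime library, which is the real cost of the feature.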

D has regexps already.

Another convenient feature that I see in ML and Haskell is a construct that
can execute some code based on the type of an object.  This is similar to
function overloading in utility but written with a more switch-like syntax.
It would also replace code that does a bunch of if's to decide if an object
is-a descendant of one of these certain classes.  I believe they call it
pattern matching and I think it has even more functionality than what I've
mentioned here.  (but I'm just learning Haskell and OCaml)

I think this is the main reason OCaml and Haskell are great languages with
which to write parsers.  If you make D into a language which makes it very
easy to write a D parser, you've made life easier for many many people.

Thought about bootstrapping DMD yet?  ;)

Sean

 Perhaps it would be more productive if we focused discussion on specific
 language constructs which improve expressiveness.  Here are some I would
 like to discuss.

Mar 02 2003
parent reply "Sean L. Palmer" <seanpalmer directvinternet.com> writes:
I believe Larry Wall said that languages need more pronouns.

I propose extending the with construct to allow multiple aliases within the
scope of the with body

instead of:

{
    int myverylongidentifier;
    int myaccessofdifficulttoaccessmember;
    {
        alias myverylongidentifier id;
        alias myaccessofdifficulttoaccessmember member;
        if (id > 5 && id < 10)
            member = member + id;
    }
}

we could write:

{
    int myverylongidentifier;
    int myaccessofdifficulttoaccessmember;
    with (myverylongidentifier id,  myaccessofdifficulttoaccessmember
member)
        if (id > 5 && id < 10)
            member = member + id;
}

This allows one to conveniently build one's own pronouns.

For those times when one isn't trying to bring names into scope so much as
build a convenient temporary name for some complex expression, I also
propose that an "it" keyword is added which is equivalent to the innermost
enclosing old-style with statement expression:

with (a * b + c)
    myvar = it;

Seems more convenient than:

{
    mytype it = a * b + c;
    myvar = it;
}

and also would replace this:

int member1;
struct mybigstruct { int member1, member2; }
mybigstruct mybigstructvar;
{
    alias mybigstructvar it;
    myvar = it.member1 + it.member2;
}

with this:

int member1;
struct mybigstruct { int member1, member2; }
mybigstruct mybigstructvar;
with (mybigstructvar)
    myvar = it.member1 + it.member2;

But use of "it" could in some cases be less ambiguous than without;  for
instance using the existing with syntax it is not clear:

int member1;
struct mybigstruct { int member1, member2; }
mybigstruct mybigstructvar;
with (mybigstructvar)
    myvar = member1 + member2;  // which member1 are we talking about here?

Essentially it's syntax sugar for alias.
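For comparison only: Python's assignment expression gives a similar pronoun-like binding at the point of use (hypothetical example, not D syntax):

```python
# The proposed "it" resembles binding a short name to a complex expression
# inline, then reusing it immediately.

a, b, c = 2, 3, 4
if (it := a * b + c) > 5:
    myvar = it
print(myvar)  # 10
```

The difference is that the proposed `with (expr)` would scope `it` over a whole statement block rather than one expression.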

The with statement should be able to handle any construct that alias
handles.

Thoughts?

Sean

 Perhaps it would be more productive if we focused discussion on specific
 language constructs which improve expressiveness.  Here are some I would
 like to discuss.

Mar 02 2003
parent reply "Dan Liebgold" <dliebgold yahoo.com> writes:
"Sean L. Palmer" <seanpalmer directvinternet.com> wrote in message
news:b3tu41$m74$1 digitaldaemon.com...
 I believe Larry Wall said that languages need more pronouns.

 I propose extending the with construct to allow multiple aliases within the
 scope of the with body

 we could write:

 {
     int myverylongidentifier;
     int myaccessofdifficulttoaccessmember;
     with (myverylongidentifier id,  myaccessofdifficulttoaccessmember
 member)
         if (id > 5 && id < 10)
             member = member + id;
 }

 This allows one to conveniently build one's own pronouns.

Ah, a Lisp aficionado? This is the "let" construct, which in Lisp is implemented with a macro.

Perhaps we ought to consider (again) a "real" preprocessing pass in the compiler... and not of the C ilk. This complicates things, of course, but imagine a pair of compiler passes running before the others: a parser generation pass and a parser application pass. Do something like this to implement your above "with" (again, I'm brainstorming kitchen sink ideas):

syntax with (decl_list) {body} is {
      replace decl_list is { var expr [, decl_list] } by {
         "\expr.type var = \expr;"
      }
      "{ \decl_list \body }"
}

So anything in quotes gets pasted into the output of this syntax transform. You can define 'replace' rules which transform expressions in the input. The rules will be applied recursively as necessary (through the []-delimited optional expansion). And, true to the C approach, use '\' to escape out of the quotes to paste in parameters to the syntax transform or replacement rule outputs. Escaped expressions will have properties like in the normal language, just a different set of properties; for example, above we write \expr.type to paste in the type of expr.

With the above transform in a library somewhere, you'll have your new "with" syntax, although I'd probably rename it "let" since there is precedent for that and "with" already does something different.

And to play off the "it" idea, do something like this:

syntax if_with (expr) {body} is {
    "{ \expr.type it = \expr; if (it) { \body }}"
}

"if_with" will now be a sort of lexical optimization, as it will save you declaring and setting the variable in order to check and use it.

 - Dan "Kitchen Sink" Liebgold
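Dan's `syntax` notation above is hypothetical. As a toy illustration of the kind of source-to-source pass he is describing, here is a regex-based expander for the `if_with` form; real macro systems operate on parse trees, not text, so this only shows the shape of the expansion:

```python
# Toy preprocessor pass: rewrite  if_with (expr) { body }  into plain code
# that binds "it" before the test. The expansion uses an invented "auto"
# placeholder where the real transform would paste \expr.type.

import re

PATTERN = re.compile(r"if_with\s*\((?P<expr>[^)]*)\)\s*\{(?P<body>[^}]*)\}")

def expand_if_with(source):
    def repl(m):
        return ("{ auto it = %s; if (it) { %s } }"
                % (m.group("expr"), m.group("body").strip()))
    return PATTERN.sub(repl, source)

print(expand_if_with("if_with (x + y) { use(it); }"))
# { auto it = x + y; if (it) { use(it); } }
```

Note the regex deliberately forbids nested parentheses and braces; handling those correctly is exactly why a real implementation needs a parser pass, as Dan says.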
Mar 02 2003
parent reply "Sean L. Palmer" <seanpalmer directvinternet.com> writes:
Aside from all the subtle gotchas you have to watch out for when doing
textual macro preprocessing, I really don't relish the idea of doing:

import my_convenient_macro_library;

my_convenient_macro_library.with (foo)
{
    // do something with foo
}

Yeah it's nice to be able to make your own syntax, but you can always use an
external preprocessor for that.  That's not the issue.  What the issue is,
is what level of functionality should be available in the core D spec that's
guaranteed to be in every implementation.  What convenience features can I
rely upon to be there all the time?

Things like this are core features, not library features.  And preferably
not user-defined features.  People will end up extending D, but the vanilla
baseline is what gets used for 99% of all work, since it's expected to be
fairly portable.

Sean

"Dan Liebgold" <dliebgold yahoo.com> wrote in message
news:b3u1ic$nsq$1 digitaldaemon.com...
 "Sean L. Palmer" <seanpalmer directvinternet.com> wrote in message
 news:b3tu41$m74$1 digitaldaemon.com...
 I believe Larry Wall said that languages need more pronouns.

 I propose extending the with construct to allow multiple aliases within the
 scope of the with body

 we could write:

 {
     int myverylongidentifier;
     int myaccessofdifficulttoaccessmember;
     with (myverylongidentifier id,  myaccessofdifficulttoaccessmember
 member)
         if (id > 5 && id < 10)
             member = member + id;
 }

 This allows one to conveniently build one's own pronouns.

Ah, a Lisp afficionado? This is the "let" construct, which in Lisp is implemented with a macro. Perhaps we ought to consider (again) a "real" preprocessing pass in the compiler... and not of the C ilk. This complicates things, of course, but imagine a pair of compiler passes running
 before the others, a parser generation pass and a parser application pass.
 Do something like this to implement your above "with"  (again I'm
 brainstorming kitchen sink ideas):

 syntax with (decl_list) {body} is {
       replace decl_list  is { var expr [, decl_list] } by {
          "\expr.type var = \expr;"
       }
       "{ \decl_list \body }"
 }

 So anything in quotes gets pasted into the output of this syntax transform.
 You can define 'replace' rules which transform expressions in the input. The
 rules will be applied recursively as necessary (through the []-delimited
 optional expansion).  And true to the C approach, use '\' to escape out of
 the quotes to paste in parameters to the syntax transform or replacement
 rule outputs. And escaped expressions will have properties like in the
 normal language, just a different set of properties. For example above we
 write \expr.type to paste in the type of expr.  And with the above transform
 in a library somewhere, you'll have your new "with" syntax, although I'd
 probably rename it "let" since there is precedence for that and "with"
 already does something different.

 And to play off the "it" idea, do something like this:

 syntax if_with (expr) {body} is {
     "{ \expr.type it = \expr; if (it) { \body }}"
 }

 "if_with" will now be a sort of lexical optimization, as it will save you
 declaring and setting the variable in order to check and use it.

  - Dan "Kitchen Sink" Liebgold

Mar 03 2003
parent reply Dan Liebgold <Dan_member pathlink.com> writes:
In article <b3v6q3$1bmn$1 digitaldaemon.com>, Sean L. Palmer says...
Aside from all the subtle gotchas you have to watch out for when doing
textual macro preprocessing, I really don't relish the idea of doing:

import my_convenient_macro_library;

my_convenient_macro_library.with (foo)
{
    // do something with foo
}

As Mark T quoted a few posts back: "Library design is language design". Something basic like the "with" macro would be part of the *standard* library, and would not need importation or any special specification. Thus, unlike in most languages, the implementation of many basic constructs would be available to expert D practitioners, and that is a good thing.

Syntax transforming macros are one of the basic Lisp ideas that Eric Raymond was implicating when he said "LISP is worth learning for a different reason - the profound enlightenment experience you will have when you finally get it. That experience will make you a better programmer for the rest of your days, even if you never actually use LISP itself a lot."

Dan
Mar 03 2003
parent "Sean L. Palmer" <seanpalmer directvinternet.com> writes:
The only argument for making it a macro instead of a language construct is
so that people can do this sort of thing as user libraries.

If it's possible in a user library, it gets pushed out of the standard
library into the hands of the users because there's no "need" for it to be
standard;  if people want that syntax they can make their own, and not
everybody needs that feature so why standardize it?

It adds a lot of complexity to the language to allow robust macros.  That's
one of the reasons D doesn't have them.

I'm asking for a better 'with' statement.  You're saying that macros can do
that so what should instead be added are preprocessor macros.  That doesn't
exactly solve my issue, and it entails lots of complications on its own.
But thanks for trying!

Sean

"Dan Liebgold" <Dan_member pathlink.com> wrote in message
news:b40a57$213p$1 digitaldaemon.com...
 In article <b3v6q3$1bmn$1 digitaldaemon.com>, Sean L. Palmer says...
Aside from all the subtle gotchas you have to watch out for when doing
textual macro preprocessing, I really don't relish the idea of doing:

import my_convenient_macro_library;

my_convenient_macro_library.with (foo)
{
    // do something with foo
}

 As Mark T quoted a few posts back: "Library design is language design".
 Something basic like the "with" macro would be part of the *standard* library,
 and would not need importation or any special specification. Thus, unlike in
 most languages, the implementation of many basic constructs would be available
 to expert D practitioners, and that is a good thing.

 Syntax transforming macros are one of the basic Lisp ideas that Eric Raymond
 was implicating when he said "LISP is worth learning for a different reason -
 the profound enlightenment experience you will have when you finally get it.
 That experience will make you a better programmer for the rest of your days,
 even if you never actually use LISP itself a lot."

 Dan

Mar 04 2003
prev sibling parent reply Mark Evans <Mark_member pathlink.com> writes:
Daniel,

1. Market success of Oz was never a goal and is not the same thing as research
success.  Oz was built to explore and demonstrate the kernel language design
approach and did so admirably.  It achieved its goal.

2. A taxonomy of computational models imposes no requirement to adopt any given
model(s).  The kernel language taxonomy merely serves as a tool with which to
make choices.  In itself, it imposes none.

3. Kernel language was not designed to compete with Java, but to compete with
lambda calculus (if anything).  It lives in the domain of language design, not
of language usage.  Oz was the demonstration project and nothing more.

4. The remark that "Java has lots of flaws, but is somewhat consistent" is not
pertinent to D, which is based on C++.  It also ignores some comments the
authors make about Java (next point).

5. The remark "Some features look very good ... but some of them will bite you
back sometime" is very true when languages are designed by gut instinct instead
of careful analysis.  That's why the kernel language was developed:  to clarify
orthogonal dimensions of the problem domain of language design.  The authors
discuss how (for example) Java and C++ programmers struggle endlessly with
concurrency issues that, in themselves, are not hard to build into a language
correctly (correct here meaning easy to use and stable).  As proof of which, Oz
handles thousands of fine-grained threads.  So in effect the authors demonstrate
that their technique could have made C++ or Java better languages.  Too late for
them, but not for D.

6. The kernel language approach is a deliberate and careful technique for
language design.  I cannot say the same for our humble newsgroup.  This is why I
chuckle at your assertion of carefulness.

7. The intent of my post was not to demand any features in D, or say that C++ is
bad (though it is), but to offer a method whereby the tradeoffs under
consideration can be analyzed in a more professional and systematic fashion.

8. (Rob:) Perl is not a well-designed language and Mr. Wall's philosophy
interests me very little.  I avoid Perl like the plague.

9. If you think this thread is OT (which I assume is a BT) feel free to stop
participating.
Jan 23 2003
parent reply "Robert Medeiros" <robert.medeiros utoronto.ca> writes:
 8. (Rob:) Perl is not a well-designed language and Mr. Wall's philosophy
 interests me very little.  I avoid Perl like the plague.

Perl is something of a dog's breakfast, I agree (I'm a Python fan :) but OTOH, given my druthers I'd write all my code using refinement in an interactive proof system, and anything less formal often seems "sloppy" given this preference. I'm hoping D will serve as a happy medium.

I think Wall's (or any other language designer's) philosophy is important to understand -- even when I'm liable to disagree -- since that philosophy serves to motivate each decision taken when presented with a tradeoff by a theoretical framework for language design such as you've described.

Rob
Jan 26 2003
parent "Walter" <walter digitalmars.com> writes:
"Robert Medeiros" <robert.medeiros utoronto.ca> wrote in message
news:b12oba$1mes$1 digitaldaemon.com...
 8. (Rob:) Perl is not a well-designed language and Mr. Wall's philosophy
 interests me very little.  I avoid Perl like the plague.

OTOH, given my druthers I'd write all my code using refinement in an interactive proof system, and anything less formal often seems "sloppy" given this preference. I'm hoping D will serve as a happy medium. I think Wall's (or any other language designers) philosophy is important to understand -- even when I'm liable to disagree -- since that philosophy serves to motivate each decision taken when presented with a tradeoff by a theoretical framework for language design such as you've described.

Perl is a successful language, which is an attribute that makes it worth examining to try and see why.
Feb 27 2003
prev sibling next sibling parent Antti Sykari <jsykari gamma.hut.fi> writes:
"Walter" <walter digitalmars.com> writes:
 "Bill Cox" <bill viasic.com> wrote in message
 news:3E5F4733.8070809 viasic.com...
 I hear game hackers came
 up with C.

One thing that is never mentioned about C is that I believe the PC is what pushed C into the mass-accepted language that it is. The reasons are:

1) PCs were slow and small. Writing high performance, memory efficient apps was a requirement, not a luxury.
2) C was a good fit for the PC architecture.
3) C was the ONLY high level language that had a decent compiler for it in the early PC days. (The only other options were BASIC and assembler.)

Sure, there were shipping FORTRAN and Pascal compilers for the PC early on, but the implementations were so truly terrible they were useless (and I and my coworkers really did try to get them to work).

I recently visited my university's library and examined some programming language textbooks from the late 1970's and the early 1980's. Many of them didn't even mention C, and those that did usually didn't consider it to be much of a language. Algol, Pascal, Cobol and Fortran were the languages of the day, with an occasional side note for Lisp, Prolog and Smalltalk. Although C had existed from around 1973, it was still a cryptic-looking, system-oriented niche language used only by some UNIX researchers and didn't seem to be of much interest.

-Antti
Mar 01 2003
prev sibling next sibling parent Antti Sykari <jsykari gamma.hut.fi> writes:
Mark Evans <Mark_member pathlink.com> writes:
 Todd Proebsting has worked on many languages -- his stuff is worth
 reading.

Indeed. I found this exploratory article quite interesting:

ftp://ftp.research.microsoft.com/pub/tr/tr-2000-54.ps

"We propose a new language feature, a program history, that significantly reduces bookkeeping code in imperative programs. A history represents previous program state that is not explicitly recorded by the programmer. By reducing bookkeeping, programs are more convenient to write and less error-prone. Example program histories include a list that represents all the values previously assigned to a given variable, an accumulator that represents the sum of values assigned to a given variable, and a counter that represents the number of times a given loop has iterated. Many program histories can be implemented with low overhead."

-Antti
Mar 01 2003
prev sibling next sibling parent Antti Sykari <jsykari gamma.hut.fi> writes:
Since the feature of program histories is local in nature (as local as
local variables are), it would be straightforward to make it so that
they would be calculated only when needed.
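A library-level approximation of the feature (names and API invented here) shows how little bookkeeping the programmer would have to write if the language tracked it instead:

```python
# Sketch of Proebsting's "program history" idea: a variable wrapper that
# derives the histories named in the abstract (all past values, their sum,
# the assignment count) from ordinary assignments.

class Historied:
    def __init__(self, value):
        self.values = []        # history: every value ever assigned
        self.set(value)

    def set(self, value):
        self.values.append(value)

    @property
    def value(self):
        return self.values[-1]  # the variable's current value

    @property
    def sum(self):              # accumulator history
        return sum(self.values)

    @property
    def count(self):            # counter history (number of assignments)
        return len(self.values)

x = Historied(0)
for i in (3, 5, 7):
    x.set(i)
print(x.value, x.sum, x.count)  # 7 15 4
```

With language support the compiler could, as suggested above, materialize each history lazily, only when the program actually reads it.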

The good side of the tangible program histories is that they can raise
the abstraction of the code and make some bugs obvious.  The bad side
is that it makes the structure ("what really happens and when") of the
code less obvious.  The interesting thing is that it's a declarative
feature understanding of which leads to a different way of thinking
than traditional imperative languages.

Might be that something similar will pop up some day in a mainstream
programming language...

-Antti

"Sean L. Palmer" <seanpalmer directvinternet.com> writes:

 I certainly wouldn't want the compiler keeping profiling histories in my
 code, at least not if they weren't used.  Perhaps the virtual properties
 would exist only if you tried to use them.

 Sean

 "Antti Sykari" <jsykari gamma.hut.fi> wrote in message
 news:87ptpac4bd.fsf hoastest1-8c.hoasnet.inet.fi...
 Mark Evans <Mark_member pathlink.com> writes:
 Todd Proebsting has worked on many languages -- his stuff is worth
 reading.

Indeed. I found this exploratory article quite interesting:

ftp://ftp.research.microsoft.com/pub/tr/tr-2000-54.ps

"We propose a new language feature, a program history, that significantly reduces bookkeeping code in imperative programs. A history represents previous program state that is not explicitly recorded by the programmer. By reducing bookkeeping, programs are more convenient to write and less error-prone. Example program histories include a list that represents all the values previously assigned to a given variable, an accumulator that represents the sum of values assigned to a given variable, and a counter that represents the number of times a given loop has iterated. Many program histories can be implemented with low overhead."

-Antti


Mar 02 2003
prev sibling parent Antti Sykari <jsykari gamma.hut.fi> writes:
Farmer <itsFarmer. freenet.de> writes:

 Mark Evans <Mark_member pathlink.com> wrote in
 news:b3pg8m$19te$1 digitaldaemon.com: 

 Hi,

 in your last post you often mentioned the term "languages expressiveness".

 What is this exactly? 
 Is there any academic definition for it (That is easily understood) ?
 How can you measure this? [ the kernel language? :-) ]

Not an academic definition but a definition all the same: IMHO, a language's expressiveness (and power) extends as far as the language's ability to define and use abstractions. As languages have evolved, their designers have sought possibilities for abstraction in different places:

- There is procedural abstraction, which enables the programmer to define a code unit that has inputs, outputs and side effects, give it a name and use that to perform the same task repeatedly.
- Data abstraction (structs) allows one to take several blocks of data, give them a name and use the same set of data in several places, denoting structured values.
- Object abstraction is yet another abstraction that combines a set of data, more or less hidden, that has an identity and operations that can be performed on it.
- "Concurrent process" abstraction is an object or a process that has an identity, a thread of control that can proceed conceptually simultaneously with others, and can communicate with them through different mechanisms.
- "Template" (or generic, or whatever you like to call it) abstraction defines common structure for data and/or code and lets you parameterize them with types or data.
- And much more. Anything which lets you define and name something, and define things in terms of that something, is an abstraction that gives rise to expressiveness.
- Metalinguistic abstraction is the most powerful form of abstraction: it lets one define and extend the language in its own terms. This allows the programmer to tailor the ready-made abstraction facilities or make customized abstractions if needed. Fear the LISP macro hackers, because having mastered this art they have incredible power at their fingertips.
  http://mitpress.mit.edu/sicp/full-text/book/book.html
  http://www.paulgraham.com/onlisp.html

It is the combination of different abstraction mechanisms that gives rise to the programming paradigms that are available to a programmer using a given language.
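To make the taxonomy concrete, here is one small task touched by three of the abstraction levels listed above (procedural, data, and generic), sketched in Python; all names are invented for the example:

```python
# procedural abstraction: name a reusable unit of code
def sum_squares(xs):
    total = 0
    for x in xs:
        total += x * x
    return total

# data abstraction: name a compound value
from dataclasses import dataclass

@dataclass
class Sample:
    label: str
    readings: list

# generic abstraction: parameterize code over the operation it applies
def fold(op, init, xs):
    acc = init
    for x in xs:
        acc = op(acc, x)
    return acc

s = Sample("run-1", [1, 2, 3])
print(sum_squares(s.readings))                      # 14
print(fold(lambda a, x: a + x * x, 0, s.readings))  # 14
```

Each level names something (a procedure, a record shape, a recursion scheme) and lets later code be defined in terms of that name, which is exactly the criterion given above.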
There have been attempts to unify the different paradigms and abstraction mechanisms, and they are a lively research topic:

http://www.ifs.uni-linz.ac.at/~ecoop/cd/papers/1850/18500001.pdf
(Was this paper mentioned earlier? At least it's about Beta and what it's all about.)

http://www.info.ucl.ac.be/~pvr/
(Concepts, Techniques, and Models of Computer Programming, which was already mentioned but is worth repeating over and over. The kernel language approach is something that most people who would like to call themselves computer scientists would do well to take an interest in.)

Having said that, I'd also like to express my opinion that *having* the abstractions is not enough. You also have to be able to *use* them, and using them should be easy and look nice. For example, C++ has plenty of abstraction facilities. It's a shame that you are punished for using them: to make a class (so that the representation is separated from the interface) you have to maintain two copies of the function signatures, and either wrap the class in a memory-managing smart pointer or use raw pointers and risk memory leaks. To make a function object, you have to declare a local object or a global function, or use the clumsy STL/boost combinators to combine them, which isn't always possible either. You can't easily make local functions that operate on local data, and so on.

D is mostly much better in this regard (function literals, nested functions, garbage collection, etc.), but there's one area where it lags behind C++: templates.
For example, to simply declare a list of integers, you have to instantiate the template manually (assume a template List(T) which contains the class List):

--D snippet--
instance List(int) intList;

void foo()
{
    intList.List list;
    // use list
}
--D snippet--

Compare this with:

--C++ snippet--
void foo()
{
    List<int> list;
    // use list
}
--C++ snippet--

This is similar to the situation in which I'm tempted not to use std::for_each for performing an action over a container. Using a for loop is easier. There must be reasons for this, I guess, but...

So, let's return to the concept of expressiveness. Basically you could define expressiveness as an (intuitive) measure of the abstraction facilities offered by a language and the ease of use of those facilities. You cannot determine expressiveness absolutely, but you could always ask a thousand programmers how easy they find abstraction mechanism X in language Y, and then add up the answers, or something like that. If you fancy things like that.

Expressiveness isn't everything, either. If a language allows me to build the most beautiful abstractions in a small amount of code, but it simply looks strange, difficult to read, or ugly in my opinion, I probably won't use it. If the syntax and abstractions are elegant, but the implementation is slow as hell and the standard library lacks critical features, I probably won't use it. If nobody else uses it (because of some of the features above), it's unlikely to have a decent standard library and support for the systems I use, so I probably won't use it. And so on.

Let's conclude with a nice little paper by MacLennan (1997) about the effect of aesthetics in the context of language design:

"The general principle is that designs that look good will also be good, and therefore the design process can be guided by aesthetics without extensive (but incomplete) mathematical analysis."
"The same applies to programming language design. By restricting our attention to designs in which the interaction of features is manifest - in which good interactions look good, and bad interactions look bad - we can let our aesthetic sense guide our design and we can be much more confident that we have a good design, without having to check all the possible interactions."

"We accomplish little by covering an unbalanced structure in a beautiful facade. When the bridge is unable to sustain the load for which it was designed, and collapses, it won't much matter that it was beautiful on the outside. So also in programming languages. If the elegance is only superficial, that is, if it is not the manifestation of a deep coherence in the design, then programmers will quickly see through the illusion and lose their (unwarranted) confidence."

It's available at: http://www.cs.utk.edu/~mclennan/anon-ftp/Elegance.html

-Antti
Mar 05 2003