
digitalmars.D - Re: dynamic classes and duck typing

reply Roman Ivanov <x y.z> writes:
Walter Bright Wrote:

 dsimcha wrote:
 Right, but sometimes (though certainly not always) it's better to provide a
 meta-feature that solves a whole bunch of problems (like better templates) and
 then solve the individual problems at the library level, rather than add a
 language feature specifically to address each need.

Yup. The hard part, though, is figuring out what the magic set of seminal features should be.
 One thing D does very well is
 allow you to do the same kind of metaprogramming solutions you would do in C++,
 except that the result doesn't suck.  For example, std.range implements
 functional-style lazy evaluation as a library, and does it well.  The point is
 that, if you can't deal with the complexity of having real templates, you better
 be prepared for the complexity created by not having them.

Right. A "simple" language pushes the complexity onto the programmer, so he has to write complicated code instead. D programs tend to be dramatically shorter than the equivalent C++ one.
 Having never done it before, I really cannot imagine how people get any work done
 in a language that doesn't have either duck typing or good templates.  It's just
 too rigid.  It seems like modern statically typed languages like Java and C# end
 up adding tons of ad-hoc workarounds for lacking either of these as
 well-integrated language features.  The best/worst example is auto-boxing.

I tried programming in Java. A friend of mine had an unexpected insight. He used Java a lot at a major corporation. He said an IDE was indispensable because with "one click" you could generate a "hundred lines of code". The light bulb came on. Java makes up for its lack of expressiveness by putting that expressiveness into the IDE! In D, you generate that hundred lines of code with templates and mixins.

I'm a Java programmer. IMO, the biggest problem with Java is not the language expressiveness, but poorly written APIs and badly selected abstractions. The reason I can't program in Java without an IDE is (usually) not because I need to generate tons of code, but because I'm constantly looking up new method/class names, looking up packages to import and refactoring.

A lot of things that require extensive code generation do so simply because they are badly designed. Web services (SOAP based) are a good example of that. In the end, it's just reading and writing text to a socket. It could be very simple, but it isn't.

An area where I find myself using code generation a lot is exception handling. I prefer to write my code without handling exceptions at all, and then let the IDE generate the try/catch blocks. I tweak them afterward. Thing is, Java supports runtime exceptions that don't cascade into kilobytes of mostly useless code. People just don't use them that often.

My point is, language is one thing, but "language culture" is another. For some reason Java bred a culture that encourages bloated, counter-intuitive, "enterprise" solutions. It's not inherent in the language. It has more to do with the companies that use it, core API design and design of popular libraries. At least that's the way I see it.
Nov 30 2009
next sibling parent Walter Bright <newshound1 digitalmars.com> writes:
Roman Ivanov wrote:
 My point is, language is one thing, but "language culture" is
 another. For some reason Java bred a culture that encourages bloated,
 counter-intuitive, "enterprise" solutions. It's not inherent in the
 language. It has more to do with the companies that use it, core API
 design and design of popular libraries. At least that's the way I see
 it.

I know what you mean. I even found the file I/O Java library routines to be impenetrable.
Nov 30 2009
prev sibling next sibling parent reply dsimcha <dsimcha yahoo.com> writes:
== Quote from Roman Ivanov (x y.z)'s article
 Walter Bright Wrote:
 dsimcha wrote:
 Right, but sometimes (though certainly not always) it's better to provide a
 meta-feature that solves a whole bunch of problems (like better templates) and
 then solve the individual problems at the library level, rather than add a
 language feature specifically to address each need.

Yup. The hard part, though, is figuring out what the magic set of seminal features should be.
 One thing D does very well is
 allow you to do the same kind of metaprogramming solutions you would do in C++,
 except that the result doesn't suck.  For example, std.range implements
 functional-style lazy evaluation as a library, and does it well.  The point is
 that, if you can't deal with the complexity of having real templates, you better
 be prepared for the complexity created by not having them.

Right. A "simple" language pushes the complexity onto the programmer, so he has to write complicated code instead. D programs tend to be dramatically shorter than the equivalent C++ one.
 Having never done it before, I really cannot imagine how people get any work done
 in a language that doesn't have either duck typing or good templates.  It's just
 too rigid.  It seems like modern statically typed languages like Java and C# end
 up adding tons of ad-hoc workarounds for lacking either of these as
 well-integrated language features.  The best/worst example is auto-boxing.

I tried programming in Java. A friend of mine had an unexpected insight. He used Java a lot at a major corporation. He said an IDE was indispensable because with "one click" you could generate a "hundred lines of code". The light bulb came on. Java makes up for its lack of expressiveness by putting that expressiveness into the IDE! In D, you generate that hundred lines of code with templates and mixins.


 The biggest problem with Java is not the language expressiveness, but poorly written APIs and badly selected abstractions. The reason I can't program in Java without an IDE is (usually) not because I need to generate tons of code, but because I'm constantly looking up new method/class names, looking up packages to import and refactoring.

 A lot of things that require extensive code generation do so simply because they are badly designed. Web services (SOAP based) are a good example of that. In the end, it's just reading and writing text to a socket. It could be very simple, but it isn't.
 An area where I find myself using code generation a lot is exception handling. I prefer to write my code without handling exceptions at all, and then let the IDE generate try/catch blocks. I tweak them afterward. Thing is, Java supports runtime exceptions that don't cascade into kilobytes of mostly useless code. People just don't use them that often.
 My point is, language is one thing, but "language culture" is another. For some reason Java bred a culture that encourages bloated, counter-intuitive, "enterprise" solutions. It's not inherent in the language. It has more to do with the companies that use it, core API design and design of popular libraries. At least that's the way I see it.

Yes, but in my (possibly somewhat uninformed) opinion, the root cause of this is that Java just doesn't provide many tools for managing complexity. Complexity has to go somewhere, and about the only tool Java provides for managing it is OO-style class hierarchies. I have nothing against OO, classes, interfaces, inheritance, etc. It's just that it's not the right tool for every job. If your problem doesn't fit neatly into an OO-style class hierarchy, it will be made to fit sloppily.

In a more multi-paradigm language, you might use templates, or duck typing, or higher-order functions, or closures, or eval statements, or mixins, or macros, or whatever complexity management system maps best to the problem you're trying to solve. In Java, by going overboard on making the core language simple, you end up pushing all the complexity into the APIs.
Nov 30 2009
next sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
dsimcha wrote:
 In Java, by going overboard on making the core language simple,
 you end up pushing all the complexity into the APIs.

Yup, and that's the underlying problem with "simple" languages. Complicated code.
Nov 30 2009
next sibling parent reply grauzone <none example.net> writes:
Walter Bright wrote:
 dsimcha wrote:
 In Java, by going overboard on making the core language simple,
 you end up pushing all the complexity into the APIs.

Yup, and that's the underlying problem with "simple" languages. Complicated code.

I think users of scripting languages would disagree with you.
Dec 01 2009
next sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
grauzone wrote:
 Walter Bright wrote:
 dsimcha wrote:
 In Java, by going overboard on making the core language simple,
 you end up pushing all the complexity into the APIs.

Yup, and that's the underlying problem with "simple" languages. Complicated code.

I think users of scripting languages would disagree with you.

PHP?
Dec 01 2009
next sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
retard wrote:
 Overall these simplifications don't remove any crucial high level 
 language features, in fact they make the code simpler and shorter. For 
 instance there isn't high level code that can only be written with 8-bit 
 byte primitives, static methods or closures, but not with 32-bit generic 
 ints, singletons, and generic higher order functions. The only thing you 
 lose is some type safety and efficiency.

I'm no expert on Python, but there are some things one gives up with it:

1. the ability to do functional style programming. The lack of immutability makes for very hard multithreaded programming.
2. as you mentioned, there's the performance problem. It's fine if you don't need performance, but once you do, the complexity abruptly goes way up.
3. no contract programming (it's very hard to emulate contract inheritance)
4. no metaprogramming
5. simple interfacing to C
6. scope guard (transactional processing); Python has the miserable try-catch-finally paradigm
7. static verification
8. RAII
9. versioning
10. ability to manage resources directly
11. inline assembler
12. constants
Dec 01 2009
next sibling parent =?UTF-8?B?UGVsbGUgTcOlbnNzb24=?= <pelle.mansson gmail.com> writes:
Walter Bright wrote:
 retard wrote:
 Overall these simplifications don't remove any crucial high level 
 language features, in fact they make the code simpler and shorter. For 
 instance there isn't high level code that can only be written with 
 8-bit byte primitives, static methods or closures, but not with 32-bit 
 generic ints, singletons, and generic higher order functions. The only 
 thing you lose is some type safety and efficiency.

I'm no expert on Python, but there are some things one gives up with it:

1. the ability to do functional style programming. The lack of immutability makes for very hard multithreaded programming.
2. as you mentioned, there's the performance problem. It's fine if you don't need performance, but once you do, the complexity abruptly goes way up.
3. no contract programming (it's very hard to emulate contract inheritance)
4. no metaprogramming
5. simple interfacing to C
6. scope guard (transactional processing); Python has the miserable try-catch-finally paradigm
7. static verification
8. RAII
9. versioning
10. ability to manage resources directly
11. inline assembler
12. constants

I mostly agree, but python actually has a rather elegant version of RAII.
Dec 01 2009
prev sibling next sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
Leandro Lucarella wrote:
 retard, el  1 de diciembre a las 11:42 me escribiste:
 Tue, 01 Dec 2009 03:13:28 -0800, Walter Bright wrote:

 retard wrote:
 Overall these simplifications don't remove any crucial high level
 language features, in fact they make the code simpler and shorter. For
 instance there isn't high level code that can only be written with
 8-bit byte primitives, static methods or closures, but not with 32-bit
 generic ints, singletons, and generic higher order functions. The only
 thing you lose is some type safety and efficiency.

1. the ability to do functional style programming. The lack of immutability makes for very hard multithreaded programming.

 You can still use immutable data types in a language without pure/const/final attributes.

And BTW, Python *does* have some built-in immutable types (strings, tuples, integers, floats, frozensets, and I don't remember if there is anything else). Python uses convention over hard discipline (no public/private, for example), so you can make your own immutable types: just don't add mutating methods and don't mess with them. I agree it's arguable, but people actually use these conventions (they are all consenting adults :), so things work.

I agree that statically enforced immutability is unnecessary if you are able to rigidly follow an immutability convention. C++ also has immutability by convention. People who work in large teams with programmers of all skill levels tell me, however, that having a convention and being sure it is followed 100% are two very different things.
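To make the two positions concrete, here is a minimal sketch of what "immutable by convention" looks like in Python (the Point type is an invented example): the built-in immutables really do refuse mutation, while a user-defined value type merely relies on nobody adding mutating code.

```python
from collections import namedtuple

# Built-in immutable types reject in-place mutation outright.
t = (1, 2, 3)
try:
    t[0] = 9
except TypeError:
    mutation_blocked = True  # tuples raise rather than mutate

# A user-defined value type, immutable by convention: namedtuple offers
# no mutating methods, so a "change" produces a new value instead.
Point = namedtuple("Point", "x y")
p = Point(1, 2)
moved = Point(p.x + 1, p.y)
```

Nothing in the language stops a maintainer from later swapping the namedtuple for a mutable class, which is exactly the gap between having a convention and knowing it is followed 100%.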
 I can only speak from experience, and my bug count in Python is extremely
 low, even when doing MT (the Queue module provides a very easy way to pass
 messages from one thread to another).

How about the GIL?
 I agree that, when you don't care much for performance, things are much
 easier :)

I would also agree that your bug count and complexity should be low as long as you're staying within the paradigms that Python (or any language) was designed to support.
 2. as you mentioned, there's the performance problem. It's fine if you
 don't need performance, but once you do, the complexity abruptly goes
 way up.

readability, but once you do, the efficiency abruptly goes way down.


That, I strongly disagree with.
 3. no contract programming (it's very hard to emulate contract
 inheritance)

languages than Eiffel or D that support this. I'm not sure how hard it would be to emulate this feature in languages where you can define your own class mechanism.


I suspect it is a very hard problem to do with just front end rewriting:

1. I've never seen anyone manage to do it
2. I had to adjust the code generator to make it work
 But I don't many people really wants DbC in Python, so I don't think it
 would be implemented.

That goes back to if you're staying inside the supported paradigms or not.
 4. no metaprogramming

lisp macros?

Exactly! You can even generate code dynamically! This is a very nice example:
http://code.activestate.com/recipes/362305/

It makes "self" implicit in *pure Python*. If you say dynamic languages don't have metaprogramming capabilities, you just don't have any idea of what a dynamic language really is.
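A tiny sketch of the sort of runtime class surgery being referred to (the names here are invented): classes in Python are ordinary objects, so they can be built and extended while the program runs.

```python
# Build a class at runtime with the three-argument type(), then graft a
# new method onto it afterwards; existing instances pick it up immediately.
Rect = type("Rect", (object,), {})

def area(self):
    return self.w * self.h

Rect.area = area  # attach a method to an already-existing class

r = Rect()
r.w, r.h = 3, 4
print(r.area())  # -> 12
```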

Ok, can you do Bill Baxter's swizzler? Can you do Don Clugston's FPU code generator?
 5. simple interfacing to C

 That has more to do with the execution model than language features. Most scripting languages are interpreted, and require some sort of assistance from the runtime system. If the language was compiled instead, they wouldn't necessarily need those.

In D you need interfacing code too, it can be a little simpler, that's true.

The interfacing in D is nothing more than providing a declaration. There is no code executed.
 6. scope guard (transactional processing); Python has the miserable
 try-catch-finally paradigm


 WRONG! See the with statement: http://www.python.org/dev/peps/pep-0343/

 with lock:
     some_non_mt_function()

 with transaction:
     some_queries()

 with file(fname) as f:
     x = f.read(10)
     f.write(x)

Looks like you're right, and it's a recently added new feature. I suggest it proves my point - Python had to add complexity to support another paradigm. Python's "with" doesn't look any simpler than scope guard.
 8. RAII

I think this could also be enforced dynamically.

Again, the with statement.

Yes, you can emulate RAII with the with statement, but with RAII (objects that destruct when they go out of scope) you can put this behavior in the object rather than explicitly in the code every time you use it. It's more complicated to have to remember to do it every time on use.
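The difference described here can be made concrete with a small sketch (resource and log are invented names): the context manager centralizes *what* the cleanup is, but every call site must still remember to write the "with".

```python
from contextlib import contextmanager

log = []

@contextmanager
def resource(name):
    log.append("acquire " + name)
    try:
        yield name
    finally:
        log.append("release " + name)  # runs even if the body throws

# Correct use: cleanup is guaranteed on scope exit...
with resource("db") as r:
    log.append("use " + r)

# ...but forgetting the 'with' at any use site silently skips the
# acquire/release pairing, which is the objection raised above.
```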
 9. versioning


It can, and it's pretty common, you can do things like this:

class A:
    if WHATEVER:
        def __init__(self):
            pass
    else:
        def __init__(self, x):
            pass
 10. ability to manage resources directly


What do you mean by resource?

Garbage collection isn't appropriate for managing every resource. Scarce ones need to be handled manually. Even large mallocs are often better done outside of the GC.
 11. inline assembler


You can do bytecode manipulation, which is the assembler of dynamic languages :)

That doesn't help if you really need to do a little assembler.
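For reference, the "bytecode manipulation" mentioned above is a real capability: CPython ships a standard `dis` module that exposes a function's compiled bytecode, though writing bytecode by hand is far less common than reading it. A small sketch:

```python
import dis

def add(a, b):
    return a + b

# List the opcode names of the compiled function.  The exact opcodes
# vary between CPython versions (BINARY_ADD in older releases,
# BINARY_OP in newer ones).
ops = [ins.opname for ins in dis.get_instructions(add)]
print(ops)
```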
 I really think the *only* *major* advantage of D over Python is speed.
 That's it.

I probably place a lot more importance on static verification rather than relying on convention and tons of unit tests.
Dec 01 2009
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Walter Bright wrote:
 Leandro Lucarella wrote:
 with file(fname) as f:
     x = f.read(10)
     f.write(x)

Looks like you're right, and it's a recently added new feature. I suggest it proves my point - Python had to add complexity to support another paradigm. Python's "with" doesn't look any simpler than scope guard.

Actually "with" is an awful abstraction as defined in Java (the new version), C#, and Python. Scheme also has am unwind-protect function. I strongly believe all of the above are hopelessly misguided. Scope guard is the right thing, and I am convinced it will prevail. Andrei
Dec 01 2009
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Leandro Lucarella wrote:
 Andrei Alexandrescu, el  1 de diciembre a las 11:07 me escribiste:
 Walter Bright wrote:
 Leandro Lucarella wrote:
 with file(fname) as f:
    x = f.read(10)
    f.write(x)

Looks like you're right, and it's a recently added new feature. I suggest it proves my point - Python had to add complexity to support another paradigm. Python's "with" doesn't look any simpler than scope guard.

Actually "with" is an awful abstraction as defined in Java (the new version), C#, and Python. Scheme also has an unwind-protect function. I strongly believe all of the above are hopelessly misguided. Scope guard is the right thing, and I am convinced it will prevail.

Good arguments!

Yah, point taken :o). I probably haven't clarified enough that I'm talking about a mere belief. Arguments have been discussed here in the past (e.g. scalability of the language construct with multiple transactions). Time will tell, but one indicating factor is that programs don't deal well with exceptions and scope guards help that massively, whereas "with" seems to help much less. Besides, anyone may be a nut about something, and scope guard is something I'm a nut about.

Andrei
Dec 01 2009
parent Walter Bright <newshound1 digitalmars.com> writes:
Andrei Alexandrescu wrote:
 Yah, point taken :o). I probably haven't clarified enough that I'm 
 talking about a mere belief. Arguments have been discussed here in the 
 past (e.g. scalability of the language construct with multiple 
 transactions). Time will tell, but one indicating factor is that 
 programs don't deal well with exceptions and scope guards help that 
 massively, whereas "with" seems to help much less. Besides, anyone may 
 be a nut about something, and scope guard is something I'm a nut about.

I didn't read the Python with carefully, but where does it fall down?
Dec 01 2009
prev sibling next sibling parent reply bearophile <bearophileHUGS lycos.com> writes:
I suggest Walter not try to say that D2 is "better" than Python; it's a waste of time and it means nothing.

Walter Bright:

can you do Bill Baxter's swizzler? Can you do Don Clugston's FPU code generator?<

Python is much more flexible. See the __getattr__ standard method:

class Reg4(object):
    ORDER = dict((c, i) for i, c in enumerate("wxyz"))

    def __init__(self, data=None):
        self.data = [None] * 4
        if data:
            for i, item in enumerate(data):
                self.data[i] = item

    def __getattr__(self, attr):
        assert sorted(list(attr)) == ['w', 'x', 'y', 'z']
        self.data[:] = (self.data[Reg4.ORDER[c]] for c in attr)

r = Reg4("ABCD")
print r.data
r.xyzw
print r.data

Output:

['A', 'B', 'C', 'D']
['B', 'C', 'D', 'A']

If you want the r.xyzw() syntax, that too can be done, creating new methods on the fly. In Python there's also __getattribute__, which is a little more powerful than __getattr__: http://pyref.infogami.com/__getattribute__
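The "creating new methods on the fly" remark can be sketched as follows (a modern-syntax variant of the same Reg4 idea, renamed to avoid confusion): __getattr__ simply manufactures and returns a fresh function.

```python
ORDER = {c: i for i, c in enumerate("wxyz")}

class Reg4b:
    def __init__(self, data):
        self.data = list(data)

    def __getattr__(self, attr):
        # Called only for names not found normally: build a method on the fly.
        if sorted(attr) != ["w", "x", "y", "z"]:
            raise AttributeError(attr)
        def swizzle():
            self.data[:] = (self.data[ORDER[c]] for c in attr)
            return self.data
        return swizzle

r2 = Reg4b("ABCD")
print(r2.xyzw())  # -> ['B', 'C', 'D', 'A']
```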
That doesn't help if you really need to do a little assembler.<

That Reg4() class can actually use true SSE registers, generating and running very efficient asm computational kernels using CorePy: http://www.corepy.org/

And you can also use GPUs with PyCuda and PyOpenCL in Python:
http://mathema.tician.de/software/pycuda
http://python-opencl.next-touch.com/

With Python + CorePy today you can write heavy numerical code that's faster than all D code.

Bye,
bearophile
Dec 01 2009
parent reply Walter Bright <newshound1 digitalmars.com> writes:
bearophile wrote:
 I suggest Walter not try to say that D2 is "better" than Python; it's a
 waste of time and it means nothing.

I meant it in the form of the simpler being better hypothesis. I am arguing that a simpler language often leads to complex code. CorePy, PyCuda, PyOpenCL, etc. are not part of Python. They are extensions, and are not written in Python. Heck, C++ Boost is listed as a prerequisite for PyCuda. The very existence of those shows that Python itself is not powerful enough. Secondly, use of them does not make Python a simple language. And thirdly, any language can have extension libraries and processors.
Dec 01 2009
parent reply bearophile <bearophileHUGS lycos.com> writes:
Walter Bright:

 I meant it in the form of the simpler being better hypothesis.

I see, I have missed that purpose of the discussion... I am sorry.
 The very existence of those shows that Python itself is not powerful enough.

Right. But what people care about in the end is programs that get the work done. If a mix of Python plus C/C++ libs is good enough and handy enough, then it gets used. For example, I am able to use the PIL Python lib to load, save and process jpeg images at high speed with few lines of handy code. So I don't care that PIL is written in C++: http://www.pythonware.com/products/pil/
 Secondly, use of them does not make Python a simple language.

Python is simpler than D2, but it's not a simple language; it has many features, etc. A simple language is Scheme :-)
 And thirdly, any language can have extension libraries and processors.

That's true, but in practice there's a difference between theory and practice :-)

- Are the libs you need to do X and Y and Z actually present, and are they working well? It's often possible to find every kind of binding for Python.
- Are those libs powerful? CorePy allows you to write the most efficient code that runs with the SSE extensions.
- Is using them handy, with a nice syntax and a nice try-test-debug cycle? Python allows for this too: it lets you write wrappers with a good syntax, etc. And the shell allows you to try code interactively.

Bye,
bearophile
Dec 01 2009
parent reply Walter Bright <newshound1 digitalmars.com> writes:
bearophile wrote:
 Right. But what people care in the end is programs that get the work
 done. If a mix of Python plus C/C++ libs are good enough and handy
 enough then they get used. For example I am able to use the PIL
 Python lib with Python to load, save and process jpeg images at
 high-speed with few lines of handy code. So I don't care if PIL is
 written in C++: http://www.pythonware.com/products/pil/

Sure, but that's not about the language. It's about the richness of the ecosystem that supports the language, and Python certainly has a rich one.
Dec 01 2009
next sibling parent =?UTF-8?B?UGVsbGUgTcOlbnNzb24=?= <pelle.mansson gmail.com> writes:
retard wrote:
 Tue, 01 Dec 2009 14:22:10 -0800, Walter Bright wrote:
 
 bearophile wrote:
 Right. But what people care in the end is programs that get the work
 done. If a mix of Python plus C/C++ libs are good enough and handy
 enough then they get used. For example I am able to use the PIL Python
 lib with Python to load, save and process jpeg images at high-speed
 with few lines of handy code. So I don't care if PIL is written in C++:
 http://www.pythonware.com/products/pil/

Sure, but that's not about the language. It's about the richness of the ecosystem that supports the language, and Python certainly has a rich one.

I thought D was supposed to be a practical language for real world problems. This 'D is good because everything can and must be written in D' is beginning to sound like a religion. To me it seems the Python way is more practical in all ways. Even novice programmers can produce efficient programs with it by using a mixture of low level C/C++ libs and high level Python scripts. I agree that Python isn't as fast as D and it lacks type safety and so on, but at the end of the day the Python coder gets the job done while the D coder still fights with inline assembler, compiler bugs, porting the app, and fighting the type system (mostly purity/constness issues). Python has more libs available, you need to write less code to implement the same functionality, and it's all less troublesome because of the lack of type annotations. So it's really understandable why a greater number of people favor Python.

I think D is a wonderful language to just do string-and-hashtable code in. All the other features are there to help bigger projects (contracts, yay!) or projects with special needs (I for one have never needed inline ASM).
Dec 01 2009
prev sibling parent reply dsimcha <dsimcha yahoo.com> writes:
== Quote from retard (re tard.com.invalid)'s article
 I thought D was supposed to be a practical language for real world
 problems. This 'D is good because everything can and must be written in
 D' is beginning to sound like a religion.

You're missing the point. Mixing languages always adds complexity. If you want the languages to talk to each other, the glue layer adds complexity that has nothing to do with the problem being solved. If you don't want the languages to talk to each other, then you're severely limited in terms of the granularity at which they can be mixed.

Furthermore, it's nice to be able to write generic code once and have it always "just be there". I get very annoyed with languages that target a small niche. For example, I do a lot of mathy stuff, but I hate Matlab and R because they're too domain-specific. Anytime I write more than 20 lines of code in either of these, I find that the lack of some general-purpose programming capability, or the awkwardness of using it, has just added a layer of complexity to my project. Even Python runs out of steam when you need more performance and you realize what a PITA it is to get all the glue working to rewrite parts of your code in C. Heck, even NumPy sometimes feels like a kludge because it reimplements basic things like arrays (with static typing, mind you) because Python's builtin arrays are too slow. Therefore, NumPy code is often not very Pythonic.

A practical language should have enough complexity management tools to handle basically any type of complexity you throw at it, whether it be a really complicated business model, insane performance requirements, the need to scale to massive datasets, or the sheer volume of code that needs to be written. Making more assumptions about what problems you want to solve is what libraries and applications are for. These complexity management tools should also stay the heck out of the way when you don't need them. If you can achieve this, your language will be good for almost anything.
Dec 02 2009
parent bearophile <bearophileHUGS lycos.com> writes:
dsimcha:

 Heck, even Numpy sometimes feels like a kludge because it reimplements basic things like arrays (with static typing, mind you) because Python's builtin arrays are too slow.<

Python lists are not badly implemented; it's the interpreter that's slow (*). Python built-in arrays (lists) are dynamically typed, so they are less efficient but more flexible. NumPy arrays are the opposite. So as usual with data structures, they are the result of compromises, chosen and optimized for your purposes.

(*) And the interpreter is slow because it's designed to be simple. Being simple, it's possible even for not very expert people, people that do it in their free time, to hack and fix the Python C source code. This allows CPython to keep enough developers, so the language keeps improving. In the Python design there are many lessons like this that D developers have still to learn.
 A practical language should have enough complexity management tools to handle
 basically any type of complexity you throw at it, [...]

In the world there's space for smaller and simpler languages too, like Lua, designed for more limited purposes. Not every language must become a universal ball of mud like C++.
 If you can achieve this, your language will be good for almost anything.

I will not believe in the single True Language, sorry, just like there isn't a single perfect way to implement dynamic arrays.

Bye,
bearophile
Dec 02 2009
prev sibling next sibling parent reply dsimcha <dsimcha yahoo.com> writes:
== Quote from retard (re tard.com.invalid)'s article
 Tue, 01 Dec 2009 10:46:11 -0800, Walter Bright wrote:
 Leandro Lucarella wrote:
 I really think the *only* *major* advantage of D over Python is speed.
 That's it.

I probably place a lot more importance on static verification rather than relying on convention and tons of unit tests.

less bullshit talk to their ears. Unit testing with large frameworks is the way to go. You even have lots of new paradigms to learn, e.g. TDD, BDD, ...

My biggest gripe about static verification is that it can't help you at all with high-level logic/algorithmic errors, only lower level coding errors. Good unit tests (and good asserts), on the other hand, are invaluable for finding and debugging high-level logic and algorithmic errors.
Dec 01 2009
next sibling parent bearophile <bearophileHUGS lycos.com> writes:
dsimcha:
 Good unit
 tests (and good asserts), on the other hand, are invaluable for finding and
 debugging high-level logic and algorithmic errors.

Contract programming can help too. For example, in the precondition of a binary search function you can test that the items are sorted. If you don't like that (because when such a contract is present it changes the computational complexity class of the function) you can even do a random sampling test :-)

In some cases it can be useful to split unittests and contracts into two groups (using a version()): a group of fast ones to be run all the time, and a group of slower ones to be run only once in a while to be safer.

What I'd like to know is why Andrei has asked for exceptions inside contracts too.

Bye,
bearophile
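A hedged sketch of the binary-search precondition idea above (the helper names are invented): the full check walks all items and so changes the complexity class, while the sampled variant trades certainty for constant cost.

```python
import random

def is_sorted(xs):
    # Full precondition: O(n), dominates the O(log n) search itself.
    return all(xs[i] <= xs[i + 1] for i in range(len(xs) - 1))

def spot_check_sorted(xs, trials=8):
    # Randomized sampling: check only a few adjacent pairs.
    if len(xs) < 2:
        return True
    pairs = (random.randrange(len(xs) - 1) for _ in range(trials))
    return all(xs[i] <= xs[i + 1] for i in pairs)

def bsearch(xs, key):
    assert is_sorted(xs)  # the contract; disabled in release builds
    lo, hi = 0, len(xs)
    while lo < hi:
        mid = (lo + hi) // 2
        if xs[mid] < key:
            lo = mid + 1
        else:
            hi = mid
    return lo if lo < len(xs) and xs[lo] == key else -1

print(bsearch([1, 3, 5, 7], 5))  # -> 2
```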
Dec 01 2009
prev sibling next sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
dsimcha wrote:
 My biggest gripe about static verification is that it can't help you at all with
 high-level logic/algorithmic errors, only lower level coding errors.  Good unit
 tests (and good asserts), on the other hand, are invaluable for finding and
 debugging high-level logic and algorithmic errors.

Unit tests have their limitations as well. Unit tests cannot prove a function is pure, for example. Both unit tests and static verification are needed.
Dec 01 2009
parent Walter Bright <newshound1 digitalmars.com> writes:
retard wrote:
 Tue, 01 Dec 2009 14:24:01 -0800, Walter Bright wrote:
 Unit tests have their limitations as well. Unit tests cannot prove a
 function is pure, for example.

Sure, unit tests can't prove that.
 Both unit tests and static verification are needed.

But it doesn't lead to this conclusion. Static verification is sometimes very expensive

Not if it's built in to the compiler. I aim to bring the cost of it down to zero.
 and real world business applications don't need those 
 guarantees that often.

Having your accounting software write checks in the wrong amount can be very very bad. And frankly, if you can afford your software unwittingly emitting garbage data, you don't need that software for your business apps.
 It's ok if a web site or game crashes every now 
 and then.

If Amazon's web site goes down, they likely lose millions of dollars a minute. Heck, I once lost a lot of business because the web site link to the credit card system went down. Few businesses can afford to have their ecommerce web sites down.
 If I need serious static verification, I would use tools like 
 Coq, not D..

There's a lot of useful stuff in between a total formal proof of correctness and nothing at all. D can offer proof of various characteristics that are valuable for eliminating bugs.
Dec 02 2009
prev sibling parent reply BCS <none anon.com> writes:
Hello dsimcha,

 My biggest gripe about static verification is that it can't help you
 at all with high-level logic/algorithmic errors, only lower level
 coding errors.  Good unit tests (and good asserts), on the other hand,
 are invaluable for finding and debugging high-level logic and
 algorithmic errors.
 

I don't have a link or anything, but I remember hearing about a study MS did about finding bugs, and what they found is that every reasonably effective tool they looked at found the same number of bugs (ok, within shouting distance, close enough that none of them could be said to be pointless) but different bugs. The way to find the most bugs is to attack it from many angles. If I can have a language that can totally prevent one class of bugs in vast swaths of code, that's a good thing, even if it does jack for another class of bugs.
Dec 02 2009
parent reply dsimcha <dsimcha yahoo.com> writes:
== Quote from BCS (none anon.com)'s article
 Hello dsimcha,
 My biggest gripe about static verification is that it can't help you
 at all with high-level logic/algorithmic errors, only lower level
 coding errors.  Good unit tests (and good asserts), on the other hand,
 are invaluable for finding and debugging high-level logic and
 algorithmic errors.

about finding bugs and what they found is that every reasonably effective tool they looked at found the same amount of bugs (ok, within shouting distance, close enough that none of them could be said to be pointless) but different bugs. The way to find the most bugs is to attack it from many angle. If I can have a language that can totally prevent one class of bugs in vast swaths of code, that's a good thing, even if it does jack for another class of bugs.

Right, but the point I was making is that you hit diminishing returns on static verification very quickly. If you have even very basic static verification, it will be enough to tilt the vast majority of your bugs towards high-level logic/algorithm bugs.
Dec 02 2009
parent reply BCS <none anon.com> writes:
Hello dsimcha,

 == Quote from BCS (none anon.com)'s article
 
 I don't have a link or anything but I remember hearing about a study
 MS did
 about finding bugs and what they found is that every reasonably
 effective
 tool they looked at found the same amount of bugs (ok, within
 shouting distance,
 close enough that none of them could be said to be pointless) but
 different
 bugs. The way to find the most bugs is to attack it from many angle.
 If I
 can have a language that can totally prevent one class of bugs in
 vast swaths
 of code, that's a good thing, even if it does jack for another class
 of bugs.

Right, but the point I was making is that you hit diminishing returns on static verification very quickly. If you have even very basic static verification, it will be enough to tilt the vast majority of your bugs towards high-level logic/algorithm bugs.

OTOH, if it's done well (doesn't get in my way) and it's built into the language, any static verification is free from the end user's standpoint. Heck, even if it gets in your way, but only for strange cases where you're hacking around, it's still useful because it tells you where the high-risk code is.
Dec 02 2009
parent Don <nospam nospam.com> writes:
BCS wrote:
 Hello dsimcha,
 
 == Quote from BCS (none anon.com)'s article

 I don't have a link or anything but I remember hearing about a study
 MS did
 about finding bugs and what they found is that every reasonably
 effective
 tool they looked at found the same amount of bugs (ok, within
 shouting distance,
 close enough that none of them could be said to be pointless) but
 different
 bugs. The way to find the most bugs is to attack it from many angle.
 If I
 can have a language that can totally prevent one class of bugs in
 vast swaths
 of code, that's a good thing, even if it does jack for another class
 of bugs.

Right, but the point I was making is that you hit diminishing returns on static verification very quickly. If you have even very basic static verification, it will be enough to tilt the vast majority of your bugs towards high-level logic/algorithm bugs.

OTOH, if it's done well (doesn't get in my way) and's built into the language, any static verification is free from the end users standpoint. Heck, even it it gets in your way but only for strange cases where your hacking around, it's still useful because it tells you where the high risk code is.

There's a really interesting synergy between pure and unit tests. It's much easier to test a function properly if it's pure -- you know that there are no globals anywhere which you have to worry about.
Dec 03 2009
prev sibling next sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
Leandro Lucarella wrote:
 5. simple interfacing to C

do with the execution model than language features. Most scripting languages are interpreted, and require some sort of assistance from the runtime system. If the language was compiled instead, they wouldn't necessarily need those.

true.

There is no code executed.

Unless you want to pass D strings to C, then you have to execute toStringz(), which is a really thin "wrapper", but it's a wrapper. Using C from D is (generally) error prone and painful, so I usually end up writing more D'ish wrappers to make the D coding more pleasant.

You can also simply use C strings in D, and pass them straight to C functions that take void*. No conversion necessary. It isn't any harder to ensure a 0 termination in D than it is in C, in fact, it's just the same. D string literals even helpfully already have a 0 at the end with this in mind!
 It's not safe, and of course, being a dynamic language, you can access
 C code at "compile time" (because there it no compile time), but you can
 interface with C very easily:
 
 import ctypes
 libc = ctypes.cdll.LoadLibrary("libc.so.6")
 libc.printf("hello world %i\n", 5)



Wow, that was hard! =)

Ok, does this work:

     p = libc.malloc(100);
     *p = 3;

? Or this:

     struct S { int a; char b; };
     S s;
     libc.fillInS(&s);
 It's simpler, because you only have one obvious way to do things,

No, Python has try/catch/finally as well.
 in D you
 can use a struct, a scope class or a scope statement to achieve the same.
 Of course that gives you more flexibility, but adds complexity to the
 language. I'm not complaining or saying that D is wrong, I'm just saying
 that Python is a very expressive language without much complexity. I think
 the tradeoff is the speed.

 Yes, you can emulate RAII with the with statement, but with RAII
 (objects that destruct when they go out of scope) you can put this
 behavior in the object rather than explicitly in the code every time
 you use it. It's more complicated to have to remember to do it every
 time on use.

Maybe you are right, but the with statement plays very well with the "explicit is better than implicit" of Python :) Again, is flexibility vs complexity.

Another principle is abstractions should be in the right place. When the abstraction leaks out into the use of the abstraction, it's user code complexity. This is a case of that, I believe.
 There are static analyzers for Python:
 http://www.logilab.org/857
 http://divmod.org/trac/wiki/DivmodPyflakes
 http://pychecker.sourceforge.net/

What's happening here is the complexity needed in the language is pushed off to third party tools. It didn't go away.
 And again, judging from experience, I don't know why, but I really have
 a very small bug count when using Python. I don't work with huge teams of
 crappy programmers (which I think is the scenario that D tries to cover),
 that can be a reason ;)

Part of that may be experience. The languages I use a lot, I tend to generate far fewer bugs with, because I've learned to avoid the common bugs. There have been very few coding errors in the C++ dialect I use in dmd; the errors have been logic ones. You're right that D has a lot that is intended more for large-scale projects with a diverse team than for one-man jobs. There is a lot to support enforced encapsulation, checking, and isolation, if that is desired. Purity, immutability, contracts, interfaces, etc., are not important for small programs.
Dec 01 2009
next sibling parent reply bearophile <bearophileHUGS lycos.com> writes:
Walter Bright:

 Ok, does this work:
 
      p = libc.malloc(100);
      *p = 3;
 
 ? Or this:
 
      struct S { int a; char b; };
      S s;
      libc.fillInS(&s);

The purpose of ctypes is to interface Python with C libs; it's quite a well-designed piece of software engineering. This is how you can do what you ask for:

from ctypes import POINTER, Structure, CDLL, byref, cdll, c_int, c_char

libc = cdll.msvcrt  # on Windows
# libc = CDLL("libc.so.6")  # on linux

malloc = libc.malloc
malloc.restype = POINTER(c_int)
p = malloc(100)
p[0] = 3

#-----------------

class S(Structure):
    _fields_ = [("a", c_int), ("b", c_char)]

s = S()
# AttributeError: function 'fillInS' not found
libc.fillInS(byref(s))

Bye,
bearophile
Dec 01 2009
parent Walter Bright <newshound1 digitalmars.com> writes:
bearophile wrote:
 Walter Bright:
 
 Ok, does this work:
 
 p = libc.malloc(100); *p = 3;
 
 ? Or this:
 
 struct S { int a; char b; }; S s; libc.fillInS(&s);

The purpose of ctypes is to interface Python with C libs, it's a quite well designed piece of software engineering. This is how you can do what you ask for: from ctypes import POINTER, Structure, cdll, c_int, c_char libc = cdll.msvcrt # on Windows # libc = CDLL("libc.so.6") # on linux malloc = libc.malloc malloc.restype = POINTER(c_int) p = malloc(100) p[0] = 3 #----------------- class S(Structure): _fields_ = [("a", c_int), ("b", c_char)] s = S() # AttributeError: function 'fillInS' not found libc.fillInS(byref(s)) Bye, bearophile

Doable, yes; simple, no. For example, it's clear it cannot be linked directly to C: the C code must be installed into a shared library first.
Dec 01 2009
prev sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
Leandro Lucarella wrote:
 It looks like you can (not as easily) according to bearophile example, but
 this is besides the point, you only want to use malloc() for performance
 reasons, and I already said that D is better than Python on that.
 I mentioned ctypes just for the point of easy C-interoperability.

To me C interoperability means being able to connect with any C function. That means handling pointers, structs, etc.
 It's simpler, because you only have one obvious way to do things,


errors, not doing RAII). Of course you can find convoluted ways to do anything in Python as with any other language.

try/catch/finally is usually used for handling RAII in languages that don't have RAII, so I don't think it's really justifiable to argue that Python only gives one obvious way to do it. D has three: RAII, scope guard, and try-catch-finally. As far as I'm concerned, the only reason t-c-f isn't taken out to the woodshed and shot is to make it easy to translate code from other languages to D.
 Maybe you are right, but the with statement plays very well with the
 "explicit is better than implicit" of Python :)

 Again, is flexibility vs complexity.

the abstraction leaks out into the use of the abstraction, it's user code complexity. This is a case of that, I believe.

Where is the code complexity here, I can't see it.

The code complexity is this: suppose I create a mutex object. Every time I get the mutex, I want the mutex to be released on all paths. With RAII, I build this into the mutex object itself. Without RAII, I have to add in the exception handling code EVERY place I use the object. If I change the abstraction, I have to go and change every use of it. To me, that's code complexity, not flexibility. A proper abstraction means that if I change the design, I only have to change it in one place, not everywhere it's used.
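The mutex example maps directly onto Python's context-manager protocol; a minimal sketch (the Guarded class is illustrative, not a real library type):

```python
import threading

class Guarded:
    """Release-on-all-paths is built into the object itself, RAII-style,
    instead of being repeated as try/finally at every use site."""
    def __init__(self):
        self._lock = threading.Lock()

    def __enter__(self):
        self._lock.acquire()
        return self

    def __exit__(self, exc_type, exc, tb):
        self._lock.release()   # runs on normal exit and on exceptions
        return False           # don't swallow exceptions

g = Guarded()
with g:   # released on every path out of this block
    pass
```

If the locking policy changes, only Guarded changes; the use sites stay as they are, which is the point about the abstraction living in one place.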
 The thing is, I never used them and never had the need to. Don't ask me
 why, I just have very few errors when coding in Python. So it's not really
 *needed*.

I agree that static analysis isn't needed. The better statement is is there a benefit to it that exceeds the cost?
Dec 01 2009
next sibling parent reply dsimcha <dsimcha yahoo.com> writes:
== Quote from Walter Bright (newshound1 digitalmars.com)'s article
 D has three: RAII, scope guard, and try-catch-finally. As far as I'm
 concerned, the only reason t-c-f isn't taken out to the woodshed and
 shot is to make it easy to translate code from other languages to D.

I've literally never written a finally block in my life in D because scope statements and RAII are just that good. Does anyone, other than a few beginners who were unaware of scope guards, use finally? I'm half-tempted to say we should just axe it. It's an error prone legacy feature that's completely useless for any purpose except writing Java or C# code in D. Since we're looking to lighten the spec, ditching finally would do so, and it would encourage converts from Java and C# to learn a better way of doing clean-up code.
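For readers coming from other languages, the nearest Python analogue to D's scope(exit) is contextlib.ExitStack.callback; a sketch (process() is an illustrative name):

```python
from contextlib import ExitStack

def process():
    log = []
    with ExitStack() as stack:
        log.append("open")
        # The registered cleanup runs on every exit path (return or
        # exception), much like D's scope(exit) -- no finally block needed.
        stack.callback(log.append, "close")
        log.append("work")
    return log

print(process())  # ['open', 'work', 'close']
```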
Dec 01 2009
parent Walter Bright <newshound1 digitalmars.com> writes:
dsimcha wrote:
 == Quote from Walter Bright (newshound1 digitalmars.com)'s article
 D has three: RAII, scope guard, and try-catch-finally. As far as I'm
 concerned, the only reason t-c-f isn't taken out to the woodshed and
 shot is to make it easy to translate code from other languages to D.

I've literally never written a finally block in my life in D because scope statements and RAII are just that good. Does anyone, other than a few beginners who were unaware of scope guards, use finally? I'm half-tempted to say we should just axe it. It's an error prone legacy feature that's completely useless for any purpose except writing Java or C# code in D. Since we're looking to lighten the spec, ditching finally would do so, and it would encourage converts from Java and C# to learn a better way of doing clean-up code.

I'm sympathetic to that point of view, but it is pure drudgery to unwind try-catch-finally into proper scope statements.
Dec 01 2009
prev sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
Leandro Lucarella wrote:
 I think code translation from other languages is not a good reason for
 adding complexity...

I think it is. We wouldn't have DWT otherwise, for example. Inner classes were added specifically in order to speed up the translation process.
 The code complexity is suppose I create a mutex object. Every time I
 get the mutex, I want the mutex to be released on all paths. With
 RAII, I build this into the mutex object itself.

But you can do that with the 'with' statement!

The with goes at the use end, not the object declaration end. Or I read the spec wrong.
Dec 01 2009
parent =?UTF-8?B?UGVsbGUgTcOlbnNzb24=?= <pelle.mansson gmail.com> writes:
Walter Bright wrote:
 But you can do that with the 'with' statement!

The with goes at the use end, not the object declaration end. Or I read the spec wrong.

the with-statement, only it does it in a more flexible and arguably sexier way.
Dec 01 2009
prev sibling next sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Adam D. Ruppe wrote:
 On Tue, Dec 01, 2009 at 09:17:44PM +0000, retard wrote:
 The lack of type annotations at least removes all typing bugs. 

Quite the contrary, leaving off the type annotation spawns bugs.

Yah, I was wondering about that! The hypothesis is there, but the conclusion was the negation of the correct conclusion. Andrei
Dec 01 2009
prev sibling next sibling parent Walter Bright <newshound1 digitalmars.com> writes:
Adam D. Ruppe wrote:
 You might say that I should have been more disciplined about [...]

That's the usual excuse for poor language design <g>. What I've been trying to do with D is enable more static verification, so that the project team can rely on enforced guarantees rather than discipline, education, convention, hope and prayer.
Dec 01 2009
prev sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
retard wrote:
 The thing is, nowadays when all development should follow the principles 
 of clean code (book), agile, and tdd/bdd, this cannot happen. You write 
 tests first, then the production code. They say that writing tests and 
 code takes less time than writing only the more or less buggy production 
 code. Not writing tests is a sign of a novice programmer and they 
 wouldn't hire you if you didn't advertise your TDD skills.

And therein lies the problem. You need the programmers to follow a certain discipline. I don't know if you've managed programmers before, but they don't always follow discipline, no matter how good they are. The root problem is there's no way to *verify* that they've followed the discipline, convention, procedure, whatever. But with mechanical checking, you can guarantee certain things. How are you going to guarantee each member of your team put all the unit tests in? Each time they change anything?
 In this particular case you use a dummy test db fixture system, write 
 tests for 'a is int' and 'b is int'. With these tests in place, the 
 functionality provided by D's type system is only a subset of the 
 coverage the tests provide. So D cannot offer any advantage anymore over 
 e.g. Python.

Where's the advantage of:

     assert(a is int)

over:

     int a;

? Especially if I have to follow the discipline and add them in everywhere?
Dec 02 2009
next sibling parent "Lars T. Kyllingstad" <public kyllingen.NOSPAMnet> writes:
retard wrote:
 Wed, 02 Dec 2009 03:16:58 -0800, Walter Bright wrote:
 
 retard wrote:
 The thing is, nowadays when all development should follow the
 principles of clean code (book), agile, and tdd/bdd, this cannot
 happen. You write tests first, then the production code. They say that
 writing tests and code takes less time than writing only the more or
 less buggy production code. Not writing tests is a sign of a novice
 programmer and they wouldn't hire you if you didn't advertise your TDD
 skills.

certain discipline. I don't know if you've managed programmers before, but they don't always follow discipline, no matter how good they are. The root problem is there's no way to *verify* that they've followed the discipline, convention, procedure, whatever. But with mechanical checking, you can guarantee certain things. How are you going to guarantee each member of your team put all the unit tests in? Each time they change anything?
 In this particular case you use a dummy test db fixture system, write
 tests for 'a is int' and 'b is int'. With these tests in place, the
 functionality provided by D's type system is only a subset of the
 coverage the tests provide. So D cannot offer any advantage anymore
 over e.g. Python.

assert(a is int) over: int a; ? Especially if I have to follow the discipline and add them in everywhere?

The case I commented on was about fetching values from a db IIRC. So the connection between an SQL database and D loses all type information unless you build some kind of high-level SQL interface which checks the types (note that up-to-date checking cannot be done with dmd unless it allows fetching stuff from the db at compile time, or you first dump the table parameters to some text file before compiling). You can't just write:

typedef string[] row;
row[] a = sql_engine.execute("select * from foobar;").result;
int b = (int)a[0][0];
string c = (string)a[0][1];

and somehow expect that the first column of row 0 is an integer and the next column a string. You still need to postpone the checking to runtime with some validation function:

typedef string[] row;
row[] a = sql_engine.execute("select * from foobar;").result;
void runtime_assert(T)(string s) { ... }
runtime_assert!(int)(a[0][0]);
int b = (int)a[0][0];
string c = a[0][1];

std.conv.to() to the rescue! :)

import std.conv;
...
row[] a = sql_engine.execute("select * from foobar;").result;
int b = to!int(a[0][0]);        // Throws if the conversion fails
string c = to!string(a[0][1]);

-Lars
Dec 02 2009
prev sibling next sibling parent reply bearophile <bearophileHUGS lycos.com> writes:
Walter Bright:
 But with mechanical checking, you can guarantee certain things.

Usually what mechanical checking guarantees is not even vaguely enough, and such guarantees aren't even about the most important parts :-) Unit tests are more important, because they cover things that matter more. Better to add many more unit tests to Phobos.
 Where's the advantage of:
      assert(a is int)
 over:
      int a;
 ? Especially if I have to follow the discipline and add them in everywhere?

Probably I have missed parts of this discussion, so what I write below may be useless. But in dynamic code you almost never assert that a variable is an int; you assert that 'a' is able to do its work where it's used. So 'a' can often be an int, a decimal, a multiprecision long, a GMP multiprecision, or maybe even a float. What you care about is not what 'a' is but whether it does what it has to, so you care if it quacks :-) That's duck typing. Bye, bearophile
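The point in executable form: one Python function accepts any type that "quacks" arithmetically (the average function below is just an illustration):

```python
from decimal import Decimal
from fractions import Fraction

def average(xs):
    # No type annotations or overloads: any type supporting '+' and '/'
    # works -- int, float, Decimal, Fraction, a GMP wrapper, ...
    total = xs[0]
    for x in xs[1:]:
        total = total + x
    return total / len(xs)

assert average([1, 2, 3]) == 2.0
assert average([Decimal("1.5"), Decimal("2.5")]) == 2
assert average([Fraction(1, 3), Fraction(2, 3)]) == Fraction(1, 2)
```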
Dec 02 2009
parent Michal Minich <michal.minich gmail.com> writes:
Hello bearophile,

 But in dynamic code you don't almost never assert that a variable is
 an int; you assert that 'a' is able to do its work where it's used. So
 'a' can often be an int, decimal, a multiprecision long, a GMP
 multiprecision, or maybe even a float. What you care of it not what a
 is but if does what it has to, so you care if it quacks :-) That's
 duck typing.

Yes, that's duck typing: "assert that 'a' is able to do its work where it's used" (a function with the required signature exists). Interfaces in OOP, or type classes in Haskell, are here to "assert that 'a' is intended to work where it's used" (the type is some implementation of the required concept (int/long/bigint)). Both have their place :) Note that duck typing need not be only dynamic; it can also happen at compile time. Ranges in D check whether certain functions are defined for an "object" at compile time.
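The "both have their place" observation also has a Python illustration: typing.Protocol gives structural (duck-style) checks that a static checker, or isinstance with runtime_checkable, can verify (Quacker/Duck/Brick are made-up names):

```python
from typing import Protocol, runtime_checkable

@runtime_checkable
class Quacker(Protocol):
    def quack(self) -> str: ...

class Duck:                 # never declares that it implements Quacker
    def quack(self) -> str:
        return "quack"

class Brick:
    pass

# Structural, not nominal: Duck matches by shape alone.
assert isinstance(Duck(), Quacker)
assert not isinstance(Brick(), Quacker)
```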
Dec 02 2009
prev sibling parent Walter Bright <newshound1 digitalmars.com> writes:
retard wrote:
 I agree some disciplines are hard to follow. For example ensuring 
 immutability in a inherently mutable language. But TDD is something a bit 
 easier - it's a lot higher level. It's easy to remember that you can't 
 write any code into production code folder unless there is already code 
 in test folder. You can verify with code coverage tools that you didn't 
 forget to write some tests. In TDD the whole code looks different. You 
 build it to be easily testable. It's provably a good way to write code - 
 almost every company nowadays uses TDD and agile methods such as Scrum.

I totally agree with the value of unittests. That's why D has them built in to the language, and even has a code coverage analyzer built in so you can see how good your unit tests are. Where you and I disagree is on the notion that unit tests are a good enough replacement for static verification. For me it's like using a sports car to tow a trailer.
Dec 02 2009
prev sibling next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Leandro Lucarella wrote:
 retard, el  1 de diciembre a las 11:42 me escribiste:
 Tue, 01 Dec 2009 03:13:28 -0800, Walter Bright wrote:

 retard wrote:
 Overall these simplifications don't remove any crucial high level
 language features, in fact they make the code simpler and shorter. For
 instance there isn't high level code that can only be written with
 8-bit byte primitives, static methods or closures, but not with 32-bit
 generic ints, singletons, and generic higher order functions. The only
 thing you lose is some type safety and efficiency.

1. the ability to do functional style programming. The lack of immutability makes for very hard multithreaded programming.

use immutable data types in a language without pure/const/final attributes.

And BTW, Python *have* some built-in immutable types (strings, tuples, integers, floats, frozensets, and I don't remember if there is anything else). Python uses convention over hard-discipline (no public/private for example), so you can make your own immutable types, just don't add mutating methods and don't mess with. I agree it's arguable, but people actually use this conventions (they are all consenting adults :), so things works. I can only speak from experience, and my bug count in Python is extremely low, even when doing MT (the Queue module provides a very easy way to pass messages from one thread to another).

But wait, my understanding is that threading in Python is a complete shame: one global lock. Is that correct? FWIW, that's such a bad design that _nobody_ I know ever brought it up except in jest.
 I agree that, when you don't care much for performance, things are much
 easier :)

I've hoped to leave my trace in history with a one-liner: "Inefficient abstractions are a dime a dozen". Didn't seem to catch on at all :o).
 I really think the *only* *major* advantage of D over Python is speed.
 That's it.

In wake of the above, it's actually huge. If you can provide comparable power for better speed, that's a very big deal. (Usually dynamic/scripting languages are significantly more powerful because they have fewer constraints.) Andrei
Dec 01 2009
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Leandro Lucarella wrote:
 I develop twice as fast in Python than in D. Of course this is only me,
 but that's where I think Python is better than D :)

If that is not just because you know the Python system far better than the D one, then yes indeed it is a win.
 I think only not having a compile cycle (no matter how fast compiling is)
 is a *huge* win. Having an interactive console (with embedded
 documentation) is another big win.

That makes sense.
Dec 01 2009
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Leandro Lucarella wrote:
 Walter Bright, el  1 de diciembre a las 13:45 me escribiste:
 Leandro Lucarella wrote:
 I develop twice as fast in Python than in D. Of course this is only me,
 but that's where I think Python is better than D :)

than the D one, then yes indeed it is a win.

And because you have less noise (and much more and better libraries I guess :) in Python, less complexity to care about. And don't get me wrong, I love D, because it's a very expressive language and when you need speed, you need static typing and all the low-level support. They are all necessary evil. All I'm saying is, when I don't need speed and I have to do something quickly, Python is still a far better language than D, because of they inherent differences.
 I think only not having a compile cycle (no matter how fast compiling is)
 is a *huge* win. Having an interactive console (with embedded
 documentation) is another big win.


I guess D can greatly benefit from a compiler that can compile and run a multiple-files program with one command (AFAIK rdmd only support one file programs, right?) and an interactive console that can get the ddoc documentation on the fly. But that's not very related to the language itself, I guess it's doable, the trickiest part is the interactive console, I guess...

I'm amazed that virtually nobody uses rdmd. I can hardly fathom how I managed to make-do without it. Andrei
Dec 01 2009
next sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Bill Baxter wrote:
 On Tue, Dec 1, 2009 at 4:37 PM, Andrei Alexandrescu
 <SeeWebsiteForEmail erdani.org> wrote:
 Leandro Lucarella wrote:
 Walter Bright, el  1 de diciembre a las 13:45 me escribiste:
 Leandro Lucarella wrote:
 I develop twice as fast in Python than in D. Of course this is only me,
 but that's where I think Python is better than D :)

than the D one, then yes indeed it is a win.

I guess :) in Python, less complexity to care about. And don't get me wrong, I love D, because it's a very expressive language and when you need speed, you need static typing and all the low-level support. They are all necessary evil. All I'm saying is, when I don't need speed and I have to do something quickly, Python is still a far better language than D, because of they inherent differences.
 I think only not having a compile cycle (no matter how fast compiling
 is)
 is a *huge* win. Having an interactive console (with embedded
 documentation) is another big win.


a multiple-files program with one command (AFAIK rdmd only support one file programs, right?) and an interactive console that can get the ddoc documentation on the fly. But that's not very related to the language itself, I guess it's doable, the trickiest part is the interactive console, I guess...

managed to make-do without it.

The web page[1] says it doesn't work on Windows. That'd be my excuse for not using it. [1] http://www.digitalmars.com/d/2.0/rdmd.html --bb

rdmd does work for Windows. What it does is to detect and cache dependencies such that you only need to specify your main file. Andrei
Dec 01 2009
prev sibling next sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Bill Baxter wrote:
 On Tue, Dec 1, 2009 at 5:08 PM, Bill Baxter <wbaxter gmail.com> wrote:
 On Tue, Dec 1, 2009 at 4:37 PM, Andrei Alexandrescu
 <SeeWebsiteForEmail erdani.org> wrote:
 Leandro Lucarella wrote:
 Walter Bright, el  1 de diciembre a las 13:45 me escribiste:
 Leandro Lucarella wrote:
 I develop twice as fast in Python than in D. Of course this is only me,
 but that's where I think Python is better than D :)

than the D one, then yes indeed it is a win.

I guess :) in Python, less complexity to care about. And don't get me wrong, I love D, because it's a very expressive language and when you need speed, you need static typing and all the low-level support. They are all necessary evil. All I'm saying is, when I don't need speed and I have to do something quickly, Python is still a far better language than D, because of they inherent differences.
 I think only not having a compile cycle (no matter how fast compiling
 is)
 is a *huge* win. Having an interactive console (with embedded
 documentation) is another big win.


a multiple-files program with one command (AFAIK rdmd only support one file programs, right?) and an interactive console that can get the ddoc documentation on the fly. But that's not very related to the language itself, I guess it's doable, the trickiest part is the interactive console, I guess...

managed to make-do without it.

for not using it. [1] http://www.digitalmars.com/d/2.0/rdmd.html

Seems like it does work, though. Good news! The web page should be updated. I will definitely use it now that I know it works.

It does seem to hang at the end of output waiting for an Enter from the console. And the Š in the --help message doesn't show properly on the console either (but actually it does work if I chcp 65001 first). And the --man browser thing doesn't work at all. I think you need to do some registry diving to find the browser under Windows. You can open a url in the default browser with this magic code:

import std.c.windows.windows;
extern(Windows)
{
    HINSTANCE ShellExecuteW(HWND, const LPWSTR, const LPWSTR,
        const LPWSTR, const LPWSTR, INT);
}

void main()
{
    HINSTANCE hr = ShellExecuteW(null, "open"w.ptr,
        "http://www.digitalmars.com/d"w.ptr, null, null, SW_SHOWNORMAL);
}

--bb

Thanks! Could you please submit that to bugzilla? Andrei
Dec 01 2009
prev sibling next sibling parent Lutger <lutger.blijdestijn gmail.com> writes:
Andrei Alexandrescu wrote:

 
 I'm amazed that virtually nobody uses rdmd. I can hardly fathom how I
 managed to make-do without it.
 
 Andrei

rdmd is a life saver, I use it all the time.
Dec 02 2009
prev sibling parent "Lars T. Kyllingstad" <public kyllingen.NOSPAMnet> writes:
Andrei Alexandrescu wrote:
 Leandro Lucarella wrote:
 Walter Bright, on December 1 at 13:45, you wrote to me:
 Leandro Lucarella wrote:
 I develop twice as fast in Python than in D. Of course this is only me,
 but that's where I think Python is better than D :)

If that is not just because you know the Python system far better than the D one, then yes indeed it is a win.

And because you have less noise (and much more and better libraries, I guess :) in Python, less complexity to care about. And don't get me wrong, I love D, because it's a very expressive language, and when you need speed, you need static typing and all the low-level support. They are all necessary evils. All I'm saying is, when I don't need speed and I have to do something quickly, Python is still a far better language than D, because of their inherent differences.
 I think only not having a compile cycle (no matter how fast 
 compiling is)
 is a *huge* win. Having an interactive console (with embedded
 documentation) is another big win.


I guess D can greatly benefit from a compiler that can compile and run a multiple-file program with one command (AFAIK rdmd only supports one-file programs, right?) and an interactive console that can get the ddoc documentation on the fly. But that's not very related to the language itself; I guess it's doable, the trickiest part being the interactive console...

I'm amazed that virtually nobody uses rdmd. I can hardly fathom how I managed to make do without it. Andrei

I use it almost exclusively, and find it an extremely useful and efficient tool. The only time I use DMD directly is when I'm done coding and testing, and want to compile the final library file or executable. For libraries, I define a unit.d file in the library root directory that looks something like this:

#!/usr/local/bin/rdmd --shebang -w -unittest
module unit;

import std.stdio;

// Import entire library.
import mylib.moduleA;
import mylib.moduleB;
...

void main()
{
    writeln("All unittests passed.");
}

Then I mark unit.d as executable, and run it whenever I want to test changes I've made to the library.

-Lars
Dec 02 2009
prev sibling next sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
Leandro Lucarella wrote:
 I guess D can greatly benefit from a compiler that can compile and run
 a multiple-file program with one command

dmd a b c -run args...
Dec 01 2009
parent reply =?UTF-8?B?UGVsbGUgTcOlbnNzb24=?= <pelle.mansson gmail.com> writes:
Walter Bright wrote:
 Leandro Lucarella wrote:
 I guess D can greatly benefit from a compiler that can compile and run
 a multiple-file program with one command

dmd a b c -run args...

Can we have dmd -resolve-deps-and-run main.d? I use rdmd when I can, but it doesn't manage to link C libs in properly.
Dec 01 2009
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Pelle Månsson wrote:
 Walter Bright wrote:
 Leandro Lucarella wrote:
 I guess D can greatly benefit from a compiler that can compile and run
 a multiple-file program with one command

dmd a b c -run args...

Can we have dmd -resolve-deps-and-run main.d? I use rdmd when I can, but it doesn't manage to link C libs in properly.

Could you please submit a sample to bugzilla? Andrei
Dec 02 2009
parent =?UTF-8?B?UGVsbGUgTcOlbnNzb24=?= <pelle.mansson gmail.com> writes:
Andrei Alexandrescu wrote:
 Pelle Månsson wrote:
 Walter Bright wrote:
 Leandro Lucarella wrote:
 I guess D can greatly benefit from a compiler that can compile and run
 a multiple-file program with one command

dmd a b c -run args...

Can we have dmd -resolve-deps-and-run main.d? I use rdmd when I can, but it doesn't manage to link C libs in properly.

Could you please submit a sample to bugzilla? Andrei

http://d.puremagic.com/issues/show_bug.cgi?id=3564 Thank you.
Dec 02 2009
prev sibling parent Lutger <lutger.blijdestijn gmail.com> writes:
Leandro Lucarella wrote:

 
 I guess D can greatly benefit from a compiler that can compile and run
 a multiple-file program with one command (AFAIK rdmd only supports
 one-file programs, right?) and an interactive console that can get the ddoc
 documentation on the fly. But that's not very related to the language
 itself, I guess it's doable, the trickiest part is the interactive
 console, I guess...

rdmd does compile in dependencies, or is that not what you mean? For the module you are working on, assuming you program with unit tests:

rdmd -unittest --main foo.d

When you don't have tons of dependencies, it is practically as fast as a scripting language.
Dec 02 2009
prev sibling next sibling parent Bill Baxter <wbaxter gmail.com> writes:
On Tue, Dec 1, 2009 at 5:08 PM, Bill Baxter <wbaxter gmail.com> wrote:
 On Tue, Dec 1, 2009 at 4:37 PM, Andrei Alexandrescu
 <SeeWebsiteForEmail erdani.org> wrote:
 Leandro Lucarella wrote:
 Walter Bright, on December 1 at 13:45, you wrote to me:
 Leandro Lucarella wrote:
 I develop twice as fast in Python than in D. Of course this is only me,
 but that's where I think Python is better than D :)

If that is not just because you know the Python system far better than the D one, then yes indeed it is a win.

And because you have less noise (and much more and better libraries, I guess :) in Python, less complexity to care about. And don't get me wrong, I love D, because it's a very expressive language, and when you need speed, you need static typing and all the low-level support. They are all necessary evils. All I'm saying is, when I don't need speed and I have to do something quickly, Python is still a far better language than D, because of their inherent differences.

 I think only not having a compile cycle (no matter how fast compiling
 is)
 is a *huge* win. Having an interactive console (with embedded
 documentation) is another big win.

That makes sense.

I guess D can greatly benefit from a compiler that can compile and run a multiple-file program with one command (AFAIK rdmd only supports one-file programs, right?) and an interactive console that can get the ddoc documentation on the fly. But that's not very related to the language itself; I guess it's doable, the trickiest part being the interactive console...

I'm amazed that virtually nobody uses rdmd. I can hardly fathom how I managed to make do without it.

The web page[1] says it doesn't work on Windows. That'd be my excuse for not using it. [1] http://www.digitalmars.com/d/2.0/rdmd.html

Seems like it does work, though. Good news! The web page should be updated. I will definitely use it now that I know it works. It does seem to hang at the end of output waiting for an Enter from the console. And the á in the --help message doesn't show properly on the console either (but it actually does work if I chcp 65001 first). And the --man browser thing doesn't work at all. I think you need to do some registry diving to find the browser under Windows. You can open a URL in the default browser with this magic code:

import std.c.windows.windows;

extern(Windows) {
    HINSTANCE ShellExecuteW(HWND, const LPWSTR, const LPWSTR,
                            const LPWSTR, const LPWSTR, INT);
}

void main() {
    HINSTANCE hr = ShellExecuteW(null, "open"w.ptr,
        "http://www.digitalmars.com/d"w.ptr, null, null, SW_SHOWNORMAL);
}

--bb
Dec 01 2009
prev sibling parent "Adam D. Ruppe" <destructionator gmail.com> writes:
On Wed, Dec 02, 2009 at 11:50:23AM +0000, retard wrote:
 The case I commented on was about fetching values from a db IIRC.

What happened to me was the value got returned as the incorrect type, stored, and used later where it threw the exception. Conceptual code here:

===
def getPermission(userid)
  return $db.query("select whatever from table where id = ?", userid)
end

def getPermissionNeeded(operation)
  return $db.query("select whatever from table where id = ?", operation.id)
end
===

The most common way the code used it was like this:

if(getPermission(user) == getPermissionNeeded(op) || user == User.ROOT)
    op.run; // works - the db functions return equal strings in both cases

The bug was here:

if(getPermission(user) >= getPermissionNeeded(op)) // this throws at runtime
    op.run; // never reached, users complain

If the functions were defined like they would be in D:

int getPermission(int user) { return db.query(...); }

the real source of the bug - that the database query didn't give me the expected type - would have been located in the fraction of a second it takes for the compiler to run its most trivial checks. That really is similar to putting in an out contract:

assert(getPermission is int);

or probably better:

assert(getPermission >= 0);

But it is a) required, so I'm not allowed to get lazy about it, and b) just plain easier, so laziness won't affect it anyway. (Or hell, if it was PHP, the weak typing would have converted both to integers at that line and it would have worked. But weak typing comes with its own slippery bugs.) Thanks to the dynamic duck typing, the code worked most of the time, but failed miserably where I deviated ever so slightly. The fix in the Ruby was easy enough, once the bug was found:

return db.query(...).to_i

The same thing dmd would have forced me to do to make it compile, but the important difference is that dmd would have found the bug for me, not an end user. It just goes to show that you can't simply forget about types in your code just because it is dynamic.

--
Adam D. Ruppe
http://arsdnet.net
Dec 03 2009
prev sibling parent Leandro Lucarella <llucax gmail.com> writes:
Walter Bright, on December 1 at 17:31, you wrote to me:
 Leandro Lucarella wrote:
It looks like you can (though not as easily), according to bearophile's example, but this is beside the point; you only want to use malloc() for performance reasons, and I already said that D is better than Python on that.
I mentioned ctypes just for the point of easy C-interoperability.

To me C interoperability means being able to connect with any C function. That means handling pointers, structs, etc.

Well, you can. It's a *little* more verbose than D, but since you almost *never* need to interoperate with C in Python, it's not so bad.
It's simpler, because you only have one obvious way to do things,


I said *obvious*. try/catch/finally is there for another reason (managing errors, not doing RAII). Of course you can find convoluted ways to do anything in Python as with any other language.

try/catch/finally is usually used for handling RAII in languages that don't have RAII, so I don't think it's really justifiable to argue that Python only gives one obvious way to do it.

It's obvious when you code in Python.
 D has three: RAII, scope guard, and try-catch-finally. As far as I'm
 concerned, the only reason t-c-f isn't taken out to the woodshed and
 shot is to make it easy to translate code from other languages to D.

I think code translation from other languages is not a good reason for adding complexity...
Maybe you are right, but the with statement plays very well with the
"explicit is better than implicit" of Python :)

Again, is flexibility vs complexity.

Another principle is that abstractions should be in the right place. When the abstraction leaks out into the use of the abstraction, it's user code complexity. This is a case of that, I believe.

Where is the code complexity here? I can't see it.

The code complexity is this: suppose I create a mutex object. Every time I get the mutex, I want the mutex to be released on all paths. With RAII, I build this into the mutex object itself.

But you can do that with the 'with' statement!
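In fairness, Python's context-manager protocol can bake the release into the mutex abstraction itself, much as RAII does - a minimal sketch (the GuardedMutex class here is hypothetical, not from either language's standard library):

```python
import threading

class GuardedMutex:
    """Hypothetical mutex whose release is built into the abstraction:
    acquired on entry to a 'with' block, released on ANY exit path."""

    def __init__(self):
        self._lock = threading.Lock()

    def __enter__(self):
        self._lock.acquire()
        return self

    def __exit__(self, exc_type, exc, tb):
        self._lock.release()
        return False  # don't swallow exceptions

m = GuardedMutex()
try:
    with m:  # the lock is released even though the body raises
        raise RuntimeError("boom")
except RuntimeError:
    pass

assert not m._lock.locked()  # released on the exception path
```

Like scope guard, the cleanup runs on every exit path, and the abstraction lives in one place rather than being repeated at each use site.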
 Without RAII, I have to add in the exception handling code EVERY place
 I use the object. If I change the abstraction, I have to go and change
 every use of it. To me, that's code complexity, not flexibility.
 
 A proper abstraction means that if I change the design, I only have
 to change it in one place. Not everywhere its used.

We agree completely :)
The thing is, I never used them and never had the need to. Don't ask me
why, I just have very few errors when coding in Python. So it's not really
*needed*.

I agree that static analysis isn't needed. The better question is whether there is a benefit to it that exceeds the cost.

Maybe in very big projects with a heterogeneous team, I don't know.

--
Leandro Lucarella (AKA luca)                     http://llucax.com.ar/
----------------------------------------------------------------------
GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145 104C 949E BFB6 5F5A 8D05)
----------------------------------------------------------------------
If you think the soul cannot be seen,
the soul can indeed be seen in the eyes
Dec 01 2009
prev sibling next sibling parent Lutger <lutger.blijdestijn gmail.com> writes:
grauzone wrote:

 Walter Bright wrote:
 dsimcha wrote:
 In Java, by going overboard on making the core language simple,
 you end up pushing all the complexity into the APIs.

Yup, and that's the underlying problem with "simple" languages. Complicated code.

I think users of scripting languages would disagree with you.

Do you mean scripting languages such as Lua, or Ruby and Python? The latter two are by no means simple languages; they pack tons of features.
Dec 01 2009
prev sibling next sibling parent Walter Bright <newshound1 digitalmars.com> writes:
grauzone wrote:
 Walter Bright wrote:
 dsimcha wrote:
 In Java, by going overboard on making the core language simple,
 you end up pushing all the complexity into the APIs.

Yup, and that's the underlying problem with "simple" languages. Complicated code.

I think users of scripting languages would disagree with you.

Looks like even simple Javascript is getting a major complexity upgrade: http://arstechnica.com/web/news/2009/12/commonjs-effort-sets-javascript-on-path-for-world-domination.ars
Dec 01 2009
prev sibling next sibling parent "Adam D. Ruppe" <destructionator gmail.com> writes:
On Tue, Dec 01, 2009 at 09:17:44PM +0000, retard wrote:
 The lack of type annotations at least removes all typing bugs. 

Quite the contrary: leaving off the type annotation spawns bugs. I had to write a web app in Ruby last year, and well remember the little things that slipped past tests, pissing off end users.

"Why can't I access this obscure page?"

Because a != b, since for some reason the database returned a as a string, and b was assigned by an integer literal.

In D, that would have been an instant compile-time error. In Ruby, it was a runtime error on a page obscure enough that it slipped past testing into the real world. You might say that I should have been more disciplined about my testing, or maybe the company should have hired a dedicated tester, but the fact remains that it simply wouldn't have happened in D at all. (Even if I left off the types and used 'auto' everywhere, the compiler would still see the mismatch.)

Until now :P I'm fairly certain that with std.variant and some opDispatch magic, we can recreate the dynamic system wholesale, so you could, if you really wanted to, just use var for all types. The only thing left to make it happen in the language is probably either opImplicitCast or global assignment operator overloads, and even they aren't strictly necessary for a lot of programs.

--
Adam D. Ruppe
http://arsdnet.net
Dec 01 2009
prev sibling next sibling parent retard <re tard.com.invalid> writes:
Tue, 01 Dec 2009 16:58:32 -0500, Adam D. Ruppe wrote:

 On Tue, Dec 01, 2009 at 09:17:44PM +0000, retard wrote:
 The lack of type annotations at least removes all typing bugs.

Quite the contrary, leaving off the type annotation spawns bugs.

It spawns new bugs, for sure, but it removes all static typing bugs, because those aren't checked anymore and cannot exist under that category!
 I had
 to write a web app in Ruby last year, and well remember the little
 things that slipped past tests, pissing off end users.
 
 "Why can't I access this obscure page?"
 
 Because a != b since for some reason, the database returned a as a
 string, and b was assigned by an integer literal.
 
 In D, that would have been an instant compile time error. In Ruby, it
 was a runtime error on a page obscure enough that it slipped past
 testing into the real world.

The thing is, nowadays, when all development should follow the principles of Clean Code (the book), agile, and TDD/BDD, this cannot happen. You write tests first, then the production code. They say that writing tests and code takes less time than writing only the more or less buggy production code. Not writing tests is a sign of a novice programmer, and they wouldn't hire you if you didn't advertise your TDD skills. In this particular case you use a dummy test db fixture system and write tests for 'a is int' and 'b is int'. With those tests in place, the functionality provided by D's type system is only a subset of the coverage the tests provide. So D cannot offer any advantage anymore over e.g. Python.
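For what it's worth, the hand-written type tests retard describes might look like this in Python (the fake_db fixture and the permission functions are invented for illustration):

```python
# Stand-in for a test db fixture: a driver that (correctly) returns
# integer permission levels. All names here are made up for the example.
fake_db = {"user:42": 3, "op:7": 2}

def get_permission(user_id):
    return fake_db["user:%d" % user_id]

def get_permission_needed(op_id):
    return fake_db["op:%d" % op_id]

# Hand-rolled versions of the checks a static type system does for free:
# 'a is int' and 'b is int'.
def test_permission_is_int():
    assert isinstance(get_permission(42), int)

def test_permission_needed_is_int():
    assert isinstance(get_permission_needed(7), int)

# Only once the type tests pass is a >= comparison known to be meaningful.
def test_comparison_is_meaningful():
    assert get_permission(42) >= get_permission_needed(7)

for test in (test_permission_is_int,
             test_permission_needed_is_int,
             test_comparison_is_meaningful):
    test()
print("all type tests passed")
```

Whether maintaining such tests beats writing `int a;` once is, of course, the crux of the disagreement in this thread.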
Dec 01 2009
prev sibling next sibling parent retard <re tard.com.invalid> writes:
Wed, 02 Dec 2009 03:16:58 -0800, Walter Bright wrote:

 retard wrote:
 The thing is, nowadays when all development should follow the
 principles of clean code (book), agile, and tdd/bdd, this cannot
 happen. You write tests first, then the production code. They say that
 writing tests and code takes less time than writing only the more or
 less buggy production code. Not writing tests is a sign of a novice
 programmer and they wouldn't hire you if you didn't advertise your TDD
 skills.

And therein lies the problem. You need the programmers to follow a certain discipline. I don't know if you've managed programmers before, but they don't always follow discipline, no matter how good they are. The root problem is there's no way to *verify* that they've followed the discipline, convention, procedure, whatever. But with mechanical checking, you can guarantee certain things. How are you going to guarantee each member of your team put all the unit tests in? Each time they change anything?
 In this particular case you use a dummy test db fixture system, write
 tests for 'a is int' and 'b is int'. With these tests in place, the
 functionality provided by D's type system is only a subset of the
 coverage the tests provide. So D cannot offer any advantage anymore
 over e.g. Python.

Where's the advantage of:

    assert(a is int);

over:

    int a;

? Especially if I have to follow the discipline and add them in everywhere?

The case I commented on was about fetching values from a db, IIRC. So the connection between the SQL database and D loses all type information unless you build some kind of high-level SQL interface which checks the types (note that up-to-date checking cannot be done with dmd unless it allows fetching stuff from the db at compile time, or you first dump the table parameters to some text file before compiling). You can't just write:

typedef string[] row;

row[] a = sql_engine.execute("select * from foobar;").result;
int b = cast(int)a[0][0];
string c = a[0][1];

and somehow expect that the first column of row 0 is an integer and the next column a string. You still need to postpone the checking to runtime with some validation function:

typedef string[] row;

void runtime_assert(T)(string s) { ... }

row[] a = sql_engine.execute("select * from foobar;").result;

runtime_assert!(int)(a[0][0]);
int b = cast(int)a[0][0];
string c = a[0][1];

I agree some disciplines are hard to follow. For example, ensuring immutability in an inherently mutable language. But TDD is something a bit easier - it's a lot higher level. It's easy to remember that you can't write any code into the production code folder unless there is already code in the test folder. You can verify with code coverage tools that you didn't forget to write some tests. In TDD the whole code looks different: you build it to be easily testable. It's provably a good way to write code - almost every company nowadays uses TDD and agile methods such as Scrum.
Dec 02 2009
prev sibling parent retard <re tard.com.invalid> writes:
Wed, 02 Dec 2009 13:12:58 +0100, Lars T. Kyllingstad wrote:

 std.conv.to() to the rescue! :)
 
    import std.conv;
    ...
 
    row[] a = sql_engine.execute("select * from foobar;").result;
 
    int b = to!int(a[0][0]);          // Throws if conversions fail
    string c = to!string(a[0][1]);
 
 -Lars

You also seem to miss the point. The topic of this conversation (I think?) was about static verification. to! throws at runtime.
Dec 02 2009
prev sibling next sibling parent Leandro Lucarella <llucax gmail.com> writes:
Walter Bright, on December 1 at 13:43, you wrote to me:
 Leandro Lucarella wrote:
5. simple interfacing to C

In case you mean no unnecessary wrappers etc., this has more to do with the execution model than language features. Most scripting languages are interpreted, and require some sort of assistance from the runtime system. If the language was compiled instead, they wouldn't necessarily need those.

true.

There is no code executed.

Unless you want to pass D strings to C; then you have to call toStringz(), which is a really thin "wrapper", but it's a wrapper. Using C from D is (generally) error-prone and painful, so I usually end up writing more D'ish wrappers to make the D coding more pleasant.

You can also simply use C strings in D, and pass them straight to C functions that take void*. No conversion necessary. It isn't any harder to ensure a 0 termination in D than it is in C, in fact, it's just the same. D string literals even helpfully already have a 0 at the end with this in mind!

Yes, I know you can use bare C strings, but when I use D, I want to code in D, not in C =)
It's not safe, and of course, being a dynamic language, you can access
C code at "compile time" (because there it no compile time), but you can
interface with C very easily:

import ctypes
libc = ctypes.cdll.LoadLibrary("libc.so.6")
libc.printf("hello world %i\n", 5)



Wow, that was hard! =)

Ok, does this work: p = libc.malloc(100); *p = 3;

It looks like you can (though not as easily), according to bearophile's example, but this is beside the point; you only want to use malloc() for performance reasons, and I already said that D is better than Python on that. I mentioned ctypes just for the point of easy C-interoperability.
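For completeness, here is a guess at how Walter's malloc question could be answered with ctypes (this is my own sketch, not bearophile's actual code, and it assumes a Linux libc named "libc.so.6"):

```python
import ctypes

libc = ctypes.CDLL("libc.so.6")

# Declare the C signatures first; without a restype, ctypes would
# truncate the returned pointer to a C int on 64-bit systems.
libc.malloc.restype = ctypes.c_void_p
libc.malloc.argtypes = [ctypes.c_size_t]
libc.free.argtypes = [ctypes.c_void_p]

p = libc.malloc(100)                                # p = malloc(100);
ip = ctypes.cast(p, ctypes.POINTER(ctypes.c_int))
ip[0] = 3                                           # *p = 3;
assert ip[0] == 3
libc.free(p)
```

So pointer-poking is possible, just noticeably wordier than the C or D equivalent - which is consistent with the point above that D wins when you actually need this.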
It's simpler, because you only have one obvious way to do things,

No, Python has try/catch/finally as well.

I said *obvious*. try/catch/finally is there for another reason (managing errors, not doing RAII). Of course you can find convoluted ways to do anything in Python as with any other language.
Maybe you are right, but the with statement plays very well with the
"explicit is better than implicit" of Python :)

Again, is flexibility vs complexity.

Another principle is that abstractions should be in the right place. When the abstraction leaks out into the use of the abstraction, it's user code complexity. This is a case of that, I believe.

Where is the code complexity here? I can't see it.
There are static analyzers for Python:
http://www.logilab.org/857
http://divmod.org/trac/wiki/DivmodPyflakes
http://pychecker.sourceforge.net/

What's happening here is the complexity needed in the language is pushed off to third party tools. It didn't go away.

The thing is, I never used them and never had the need to. Don't ask me why, I just have very few errors when coding in Python. So it's not really *needed*.
And again, judging from experience, I don't know why, but I really have
a very small bug count when using Python. I don't work with huge teams of
crappy programmers (which I think is the scenario that D tries to cover),
that can be a reason ;)

Part of that may be experience. With the languages I use a lot, I tend to generate far fewer bugs, because I've learned to avoid the common ones. There have been very few coding errors in the C++ dialect I use in dmd; the errors have been logic ones.

You're probably right, but I think Python's simplicity really helps in reducing the bug count. When the language doesn't get in the way, it's much harder to introduce bugs, because you can focus on what's important; there is no noise distracting you :)
 You're right that D has a lot that is intended more for large scale
 projects with a diverse team than one man jobs. There is a lot to
 support enforced encapsulation, checking, and isolation, if that is
 desired. Purity, immutability, contracts, interfaces, etc., are not
 important for small programs.

Agreed.

--
Leandro Lucarella (AKA luca)                     http://llucax.com.ar/
----------------------------------------------------------------------
GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145 104C 949E BFB6 5F5A 8D05)
----------------------------------------------------------------------
I am so psychosomatic it makes me sick just thinking about it!
	-- George Costanza
Dec 01 2009
prev sibling next sibling parent Leandro Lucarella <llucax gmail.com> writes:
Walter Bright, on December 1 at 13:45, you wrote to me:
 Leandro Lucarella wrote:
I develop twice as fast in Python than in D. Of course this is only me,
but that's where I think Python is better than D :)

If that is not just because you know the Python system far better than the D one, then yes indeed it is a win.

And because you have less noise (and much more and better libraries, I guess :) in Python, less complexity to care about. And don't get me wrong, I love D, because it's a very expressive language, and when you need speed, you need static typing and all the low-level support. They are all necessary evils. All I'm saying is, when I don't need speed and I have to do something quickly, Python is still a far better language than D, because of their inherent differences.
I think only not having a compile cycle (no matter how fast compiling is)
is a *huge* win. Having an interactive console (with embedded
documentation) is another big win.

That makes sense.

I guess D can greatly benefit from a compiler that can compile and run a multiple-file program with one command (AFAIK rdmd only supports one-file programs, right?) and an interactive console that can get the ddoc documentation on the fly. But that's not very related to the language itself; I guess it's doable, the trickiest part being the interactive console...

--
Leandro Lucarella (AKA luca)                     http://llucax.com.ar/
----------------------------------------------------------------------
GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145 104C 949E BFB6 5F5A 8D05)
----------------------------------------------------------------------
Therapy is useless: it's much better to pay to act out your perversions
than to pay to recount them.
	-- Alberto Giordano (stylist philosopher)
Dec 01 2009
prev sibling parent Bill Baxter <wbaxter gmail.com> writes:
On Tue, Dec 1, 2009 at 4:37 PM, Andrei Alexandrescu
<SeeWebsiteForEmail erdani.org> wrote:
 Leandro Lucarella wrote:
 Walter Bright, on December 1 at 13:45, you wrote to me:
 Leandro Lucarella wrote:
 I develop twice as fast in Python than in D. Of course this is only me,
 but that's where I think Python is better than D :)

If that is not just because you know the Python system far better than the D one, then yes indeed it is a win.

And because you have less noise (and much more and better libraries, I guess :) in Python, less complexity to care about. And don't get me wrong, I love D, because it's a very expressive language, and when you need speed, you need static typing and all the low-level support. They are all necessary evils. All I'm saying is, when I don't need speed and I have to do something quickly, Python is still a far better language than D, because of their inherent differences.

 I think only not having a compile cycle (no matter how fast compiling
 is)
 is a *huge* win. Having an interactive console (with embedded
 documentation) is another big win.

That makes sense.

I guess D can greatly benefit from a compiler that can compile and run a multiple-file program with one command (AFAIK rdmd only supports one-file programs, right?) and an interactive console that can get the ddoc documentation on the fly. But that's not very related to the language itself; I guess it's doable, the trickiest part being the interactive console...

I'm amazed that virtually nobody uses rdmd. I can hardly fathom how I managed to make do without it.

The web page[1] says it doesn't work on Windows. That'd be my excuse for not using it. [1] http://www.digitalmars.com/d/2.0/rdmd.html --bb
Dec 01 2009
prev sibling next sibling parent retard <re tard.com.invalid> writes:
Tue, 01 Dec 2009 01:08:11 -0800, Walter Bright wrote:

 grauzone wrote:
 Walter Bright wrote:
 dsimcha wrote:
 In Java, by going overboard on making the core language simple, you
 end up pushing all the complexity into the APIs.

Yup, and that's the underlying problem with "simple" languages. Complicated code.

I think users of scripting languages would disagree with you.

PHP?

PHP is a terrible joke built by a novice (http://tnx.nl/php.html). Fans of e.g. Ruby, Python etc. could argue that their language has fewer corner cases and more uniform features, which makes development more of a joy and code less verbose. Instead of two variations, in many cases there is only one choice, e.g.:

- type known at runtime / compile time -> known at runtime
- generic type / ordinary type -> runtime dynamic type
- primitives / objects -> everything is an object
- special set of built-in operators / normal methods -> everything is a message
- static classes / objects -> objects (some are singletons but can inherit from interfaces etc., unlike statics in D)
- free functions / methods / static methods -> methods (the modules are singleton objects -> free functions are module methods)
- functions / delegates -> functions
- special set of built-in control structures -> simple primitives (e.g. recursion & library-defined structures)
- statements / expressions -> everything is an expression (this unifies e.g. if-then-else and a ? b : c)
- built-in AA, array, list etc. -> library-defined collections
- dozens of primitive number types -> fixed-size int & float (e.g. 32-bit int and 64-bit float), arbitrary-precision int & float (rational type)

Overall these simplifications don't remove any crucial high-level language features; in fact they make the code simpler and shorter. For instance, there isn't high-level code that can only be written with 8-bit byte primitives, static methods, or closures, but not with 32-bit generic ints, singletons, and generic higher-order functions. The only thing you lose is some type safety and efficiency.
Dec 01 2009
prev sibling next sibling parent retard <re tard.com.invalid> writes:
Tue, 01 Dec 2009 03:13:28 -0800, Walter Bright wrote:

 retard wrote:
 Overall these simplifications don't remove any crucial high level
 language features, in fact they make the code simpler and shorter. For
 instance there isn't high level code that can only be written with
 8-bit byte primitives, static methods or closures, but not with 32-bit
 generic ints, singletons, and generic higher order functions. The only
 thing you lose is some type safety and efficiency.

I'm no expert on Python, but there are some things one gives up with it: 1. the ability to do functional style programming. The lack of immutability makes for very hard multithreaded programming.

Even if the language doesn't enforce immutability, it's indeed possible to use immutable data types in a language without pure/const/final attributes. Python et al. support functional-style programming. The fact that PHP doesn't only proves that its author had no idea what he was doing. Early PHP versions even had a limitation on recursion, 50 levels or something like that. They still have those limitations, somewhat relaxed. Probably no other language performs as poorly with functional code as PHP.
 
 2. as you mentioned, there's the performance problem. It's fine if you
 don't need performance, but once you do, the complexity abruptly goes
 way up.

In D, there's the simplicity problem. It's fine if you don't need readability, but once you do, the efficiency abruptly goes way down.
 
 3. no contract programming (it's very hard to emulate contract
 inheritance)

True, this is a commonly overlooked feature. I don't know of any other languages than Eiffel or D that support this. I'm not sure how hard it would be to emulate this feature in languages where you can define your own class mechanism.
 
 4. no metaprogramming

Dynamic languages support dynamic metaprogramming. Ever heard of e.g. lisp macros?
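For example, a small runtime-metaprogramming sketch in Python (illustrative, not from any library): a class is built out of data at run time with the three-argument form of type(), no source text involved.

```python
# Runtime metaprogramming: build a class from a field list with type().
# Imagine 'fields' came from a schema or config file at run time.
fields = ['host', 'port', 'user']

def make_record(name, fields):
    def __init__(self, **kwargs):
        for f in fields:
            setattr(self, f, kwargs.get(f))
    def __repr__(self):
        vals = ', '.join('%s=%r' % (f, getattr(self, f)) for f in fields)
        return '%s(%s)' % (name, vals)
    # type(name, bases, dict) creates a new class object dynamically
    return type(name, (object,), {'__init__': __init__, '__repr__': __repr__})

Server = make_record('Server', fields)
s = Server(host='localhost', port=8080)
```

This is weaker than a macro system, of course, but it is metaprogramming: the program manufactures parts of itself.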
 
 5. simple interfacing to C

In case you mean no unnecessary wrappers etc., this has more to do with the execution model than language features. Most scripting languages are interpreted and require some sort of assistance from the runtime system. If the language were compiled instead, they wouldn't necessarily need those.
 
 6. scope guard (transactional processing); Python has the miserable
 try-catch-finally paradigm

Ok. On the other hand, I don't know why this can't be done with runtime metaprogramming features.
 
 7. static verification

Dynamic language users argue that since the language is much simpler, you don't need to verify anything. And you still have unit test frameworks.
 
 8. RAII

Ok. I think this could also be enforced dynamically.
 
 9. versioning

I don't know why this can't be done dynamically.
 10. ability to manage resources directly

Ok.
 
 11. inline assembler

Ok. Note that I wrote
 Overall these simplifications don't remove any crucial ___high level___
 language features, 


 
 12. constants

I don't know why this can't be done dynamically with wrapper objects.
Dec 01 2009
prev sibling next sibling parent Leandro Lucarella <llucax gmail.com> writes:
Andrei Alexandrescu, el  1 de diciembre a las 11:07 me escribiste:
 Walter Bright wrote:
Leandro Lucarella wrote:
with file(fname) as f:
    x = f.read(10)
    f.write(x)

Looks like you're right, and it's a recently added new feature. I suggest it proves my point - Python had to add complexity to support another paradigm. Python's "with" doesn't look any simpler than scope guard.

Actually "with" is an awful abstraction as defined in Java (the new version), C#, and Python. Scheme also has an unwind-protect function. I strongly believe all of the above are hopelessly misguided. Scope guard is the right thing, and I am convinced it will prevail.
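For what it's worth, scope-guard-style cleanup can be approximated in Python on top of contextlib.ExitStack. A hedged sketch (ScopeGuards/on_exit are invented names): guards are registered mid-block as resources appear, and run in reverse order on unwind, without the nesting that try/finally or stacked with-blocks force on you.

```python
import contextlib

# Rough emulation of D's scope(exit) on top of context managers:
# guards registered mid-block fire LIFO when the block unwinds.
class ScopeGuards(contextlib.ExitStack):
    def on_exit(self, fn):
        self.callback(fn)   # always runs on exit, like scope(exit)

log = []
with ScopeGuards() as scope:
    log.append('open A')
    scope.on_exit(lambda: log.append('close A'))
    log.append('open B')
    scope.on_exit(lambda: log.append('close B'))
    log.append('work')
# guards fire in LIFO order: B is closed before A
```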

Good arguments! -- Leandro Lucarella (AKA luca) http://llucax.com.ar/ ---------------------------------------------------------------------- GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145 104C 949E BFB6 5F5A 8D05) ---------------------------------------------------------------------- Los pobres buscan su destino. Ac√° est√°; ¬Ņno lo ven? -- Emilio Vaporeso. Marzo de 1914
Dec 01 2009
prev sibling next sibling parent retard <re tard.com.invalid> writes:
Tue, 01 Dec 2009 10:46:11 -0800, Walter Bright wrote:

 Leandro Lucarella wrote:

 I really think the *only* *major* advantage of D over Python is speed.
 That's it.

I probably place a lot more importance on static verification rather than relying on convention and tons of unit tests.

In many places if you apply for a job, static verification is more or less bullshit talk to their ears. Unit testing with large frameworks is the way to go. You even have lots of new paradigms to learn, e.g. TDD, BDD, ...
Dec 01 2009
prev sibling next sibling parent retard <re tard.com.invalid> writes:
Tue, 01 Dec 2009 17:11:26 -0300, Leandro Lucarella wrote:

 And again, judging from experience, I don't know why, but I really have
 a very small bug count when using Python. I don't work with huge teams
 of crappy programmers (which I think is the scenario that D tries to
 cover), that can be a reason ;)

The lack of type annotations at least removes all typing bugs. Your brain has more processing power for the task at hand since you don't need to concentrate on trivial type issues. Testing the code and writing prototypes in the repl basically eliminates all bugs. At least so they say.
Dec 01 2009
prev sibling next sibling parent retard <re tard.com.invalid> writes:
Tue, 01 Dec 2009 13:15:53 -0800, Walter Bright wrote:

 grauzone wrote:
 Walter Bright wrote:
 dsimcha wrote:
 In Java, by going overboard on making the core language simple, you
 end up pushing all the complexity into the APIs.

Yup, and that's the underlying problem with "simple" languages. Complicated code.

I think users of scripting languages would disagree with you.

Looks like even simple Javascript is getting a major complexity upgrade: http://arstechnica.com/web/news/2009/12/commonjs-effort-sets-javascript-

All languages seem to add more features during their lifetime. I've never heard of a language whose feature count somehow decreases with later versions. If you're happy with the previous version, why upgrade? E.g. the existence of Java 5+ or D 2.0 doesn't mean developing code with Java 1.4 or D 1.x is illegal.
Dec 01 2009
prev sibling next sibling parent retard <re tard.com.invalid> writes:
Tue, 01 Dec 2009 14:22:10 -0800, Walter Bright wrote:

 bearophile wrote:
 Right. But what people care in the end is programs that get the work
 done. If a mix of Python plus C/C++ libs are good enough and handy
 enough then they get used. For example I am able to use the PIL Python
 lib with Python to load, save and process jpeg images at high-speed
 with few lines of handy code. So I don't care if PIL is written in C++:
 http://www.pythonware.com/products/pil/

Sure, but that's not about the language. It's about the richness of the ecosystem that supports the language, and Python certainly has a rich one.

I thought D was supposed to be a practical language for real world problems. This 'D is good because everything can and must be written in D' is beginning to sound like a religion. To me it seems the Python way is more practical in all ways. Even novice programmers can produce efficient programs with it by using a mixture of low level C/C++ libs and high level python scripts. I agree that Python isn't as fast as D and it lacks type safety features and so on, but at the end of the day the Python coder gets the job done while the D coder still fights with inline assembler, compiler bugs, porting the app, and fighting the type system (mostly purity/constness issues). Python has more libs available, you need to write less code to implement the same functionality, and it's all less troublesome because of the lack of type annotations. So it's really understandable why a greater number of people favor Python.
Dec 01 2009
prev sibling parent reply retard <re tard.com.invalid> writes:
Tue, 01 Dec 2009 14:24:01 -0800, Walter Bright wrote:

 dsimcha wrote:
 My biggest gripe about static verification is that it can't help you at
 all with high-level logic/algorithmic errors, only lower level coding
 errors.  Good unit tests (and good asserts), on the other hand, are
 invaluable for finding and debugging high-level logic and algorithmic
 errors.

Unit tests have their limitations as well. Unit tests cannot prove a function is pure, for example.

Sure, unit tests can't prove that.
 Both unit tests and static verification are needed.

But it doesn't lead to this conclusion. Static verification is sometimes very expensive, and real world business applications don't need those guarantees that often. It's ok if a web site or game crashes every now and then. If I need serious static verification, I would use tools like Coq, not D.
Dec 01 2009
parent Michal Minich <michal.minich gmail.com> writes:
Hello retard,

 Tue, 01 Dec 2009 14:24:01 -0800, Walter Bright wrote:
 
 dsimcha wrote:
 
 My biggest gripe about static verification is that it can't help you
 at all with high-level logic/algorithmic errors, only lower level
 coding errors.  Good unit tests (and good asserts), on the other
 hand, are invaluable for finding and debugging high-level logic and
 algorithmic errors.
 

function is pure, for example.

 Both unit tests and static verification are needed.
 

sometimes very expensive and real world business applications don't need those guarantees that often. It's ok if a web site or game crashes every now and then. If I need serious static verification, I would use tools like Coq, not D..

Static verification in Coq is very expensive, but who really does that for real world programs? I think we are talking about automatic static verification with no or minimal programmer assistance - it will get you assurances for a larger project with multiple programmers - that various parts plug in correctly (typecheck) and that they do not affect other parts of the program in unexpected ways (const/pure/safe) - then you are on good ground to verify your program logic by yourself (debugging/pre(post)conditions/unittests/asserts/invariants).
Dec 02 2009
prev sibling next sibling parent reply Leandro Lucarella <llucax gmail.com> writes:
retard, el  1 de diciembre a las 11:42 me escribiste:
 Tue, 01 Dec 2009 03:13:28 -0800, Walter Bright wrote:
 
 retard wrote:
 Overall these simplifications don't remove any crucial high level
 language features, in fact they make the code simpler and shorter. For
 instance there isn't high level code that can only be written with
 8-bit byte primitives, static methods or closures, but not with 32-bit
 generic ints, singletons, and generic higher order functions. The only
 thing you lose is some type safety and efficiency.

I'm no expert on Python, but there are some things one gives up with it: 1. the ability to do functional style programming. The lack of immutability makes for very hard multithreaded programming.

Even if the language doesn't enforce immutability it's indeed possible to use immutable data types in a language without pure/const/final attributes.

And BTW, Python *has* some built-in immutable types (strings, tuples, integers, floats, frozensets, and I don't remember if there is anything else). Python uses convention over hard discipline (no public/private, for example), so you can make your own immutable types: just don't add mutating methods and don't mess with the internals. I agree it's arguable, but people actually use these conventions (they are all consenting adults :), so things work. I can only speak from experience, and my bug count in Python is extremely low, even when doing MT (the Queue module provides a very easy way to pass messages from one thread to another). I agree that, when you don't care much for performance, things are much easier :)
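A minimal sketch of the Queue-based message passing mentioned above (Python 3 spells the module `queue`): the queue does all the locking internally, so neither thread touches shared mutable state directly.

```python
import queue
import threading

q = queue.Queue()
results = []

def worker():
    # consume messages until the None sentinel arrives
    while True:
        item = q.get()
        if item is None:
            break
        results.append(item * 2)

t = threading.Thread(target=worker)
t.start()
for i in range(5):
    q.put(i)          # hand work to the other thread
q.put(None)           # tell it to stop
t.join()
```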
 2. as you mentioned, there's the performance problem. It's fine if you
 don't need performance, but once you do, the complexity abruptly goes
 way up.

In D, there's the simplicity problem. It's fine if you don't need readability, but once you do, the efficiency abruptly goes way down.
 
 3. no contract programming (it's very hard to emulate contract
 inheritance)

True, this is a commonly overlooked feature. I don't know any other languages than Eiffel or D that support this. I'm not sure how hard it would be to emulate this feature in languages where you can define your own class mechanism.

There are libraries to do contracts in Python: http://www.wayforward.net/pycontract/ http://blitiri.com.ar/git/?p=pymisc;a=blob;f=contract.py;h=0d78aa3dc9f3af5336c8d34ce521815ebd7d5ea0;hb=HEAD I don't know if they handle contract inheritance though. There is a PEP for that too: http://www.python.org/dev/peps/pep-0316/ But I don't think many people really want DbC in Python, so I don't think it will be implemented.
 4. no metaprogramming

Dynamic languages support dynamic metaprogramming. Ever heard of e.g. lisp macros?

Exactly! You can even generate code dynamically! This is a very nice example: http://code.activestate.com/recipes/362305/ It makes "self" implicit in *pure Python*. If you say dynamic languages don't have metaprogramming capabilities, you just don't have any idea of what a dynamic language really is.
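In the same spirit, a tiny sketch of generating code at run time in pure Python (make_getter is an invented name, not from the recipe): build source text, compile it, and exec it into a namespace.

```python
# Runtime code generation: manufacture an accessor function from a string.
def make_getter(field):
    src = 'def get_%s(obj):\n    return obj["%s"]\n' % (field, field)
    ns = {}
    exec(compile(src, '<generated>', 'exec'), ns)
    return ns['get_%s' % field]

get_name = make_getter('name')
```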
 5. simple interfacing to C

In case you mean no unnecessary wrappers etc., this has more to do with the execution model than language features. Most scripting languages are interpreted, and require some sort of assistance from the runtime system. If the language was compiled instead, they wouldn't necessarily need those.

In D you need interfacing code too, it can be a little simpler, that's true.
 6. scope guard (transactional processing); Python has the miserable
 try-catch-finally paradigm


WRONG! See the with statement: http://www.python.org/dev/peps/pep-0343/

with lock:
    some_non_mt_function()

with transaction:
    some_queries()

with file(fname) as f:
    x = f.read(10)
    f.write(x)
 8. RAII

Ok. I think this could also be enforced dynamically.

Again, the with statement.
 
 9. versioning

I don't know why this can't be done dynamically.

It can, and it's pretty common, you can do things like this:

class A:
    if WHATEVER:
        def __init__(self):
            pass
    else:
        def __init__(self, x):
            pass
 10. ability to manage resources directly


What do you mean by resource?
 11. inline assembler


You can do bytecode manipulation, which is the assembler of dynamic languages :) I really think the *only* *major* advantage of D over Python is speed. That's it. -- Leandro Lucarella (AKA luca) http://llucax.com.ar/ ---------------------------------------------------------------------- GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145 104C 949E BFB6 5F5A 8D05) ---------------------------------------------------------------------- Cuando el M√°rtir estaba siendo perseguido y aglutinado por los citronetos, aquellos perversos que pretendian, en su maldad, piononizar las ense√Īanzas de Peperino. -- Peperino P√≥moro
Dec 01 2009
next sibling parent reply BCS <none anon.com> writes:
Hello Leandro,

 
 If you say dynamic languages don't have metaprogramming capabilities,
 you just don't have any idea of what a dynamic language really is.
 

If you say you can do metaprogramming at runtime you just don't have any idea of what I want to do with metaprogramming. For example:

Unit carrying types: check for unit errors (adding feet to seconds) at compile time. I can be sure there are no unit errors without knowing if I've executed every possible code path.

Domain specific compile time optimizations: evaluate an O(n^3) function so I can generate O(n) code rather than write O(n^2) code. If you do that at runtime, things get slower, not faster.

Any language that doesn't have a "compile time" that is evaluated only once for all code and before the product ships, can't do these.
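For contrast, here is roughly what the runtime-only alternative looks like in a dynamic language (a toy Python sketch, not a real units library): the unit error is caught, but only if the offending path actually executes, which is exactly the limitation described above.

```python
# Toy runtime unit checking: quantities carry a unit tag and addition
# verifies it. The check costs cycles and only fires on paths that run.
class Quantity:
    def __init__(self, value, unit):
        self.value = value
        self.unit = unit
    def __add__(self, other):
        if self.unit != other.unit:
            raise TypeError('cannot add %s to %s' % (other.unit, self.unit))
        return Quantity(self.value + other.value, self.unit)

total = Quantity(3, 'ft') + Quantity(4, 'ft')   # fine
# Quantity(3, 'ft') + Quantity(1, 's') would raise, but only when reached
```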
Dec 02 2009
parent reply Don <nospam nospam.com> writes:
Leandro Lucarella wrote:
 BCS, el  2 de diciembre a las 17:37 me escribiste:
 Hello Leandro,


 If you say dynamic languages don't have metaprogramming capabilities,
 you just don't have any idea of what a dynamic language really is.

any idea of what I want to do with metaprogramming. For example:

What you say next is not metaprogramming per se; those are performance issues (that you resolve using compile-time metaprogramming).

They are metaprogramming tasks. Dynamic languages can do some metaprogramming tasks. They can't do those ones.
 You are right, but if you *don't* need *speed*, you don't need all that
 stuff, that's not metaprogramming to fix a "logic" problem, they are all
 optimization tricks, if you don't need speed, you don't need optimization
 tricks.

"you don't need speed" is a pretty glib statement. I think the reality is that you don't care about constant factors in speed, even if they are large (say 200 times slower is OK). But bubble-sort is probably still not acceptable. Metaprogramming can be used to reduce big-O complexity rather than just constant-factor improvement. Lumping that in with "optimisation" is highly misleading.
 The kind of metaprogramming I'm talking about is, for example, generating
 boring, repetitive boilerplate code.

Dec 02 2009
parent Walter Bright <newshound1 digitalmars.com> writes:
Leandro Lucarella wrote:
 Bubble sort is perfeclty acceptable for, say, a 100 elements array.

 It always depends on the context, of course, but when doing programs that
 deals with small data sets and are mostly IO bounded, you *really* can
 care less about performance and big-O.

The thing about writing code that will be used by others is that they are not going to restrict themselves to small data sets. For example, bubble sort. Putting that in a library is a disaster. You can't just write in the documentation that it is usable only for less than 100 elements. One really does have to worry about big O performance, unless it is a throwaway program.
Dec 02 2009
prev sibling next sibling parent reply Leandro Lucarella <llucax gmail.com> writes:
BCS, el  2 de diciembre a las 17:37 me escribiste:
 Hello Leandro,
 
 
If you say dynamic languages don't have metaprogramming capabilities,
you just don't have any idea of what a dynamic language really is.

If you say you can do metaprogramming at runtime you just don't have any idea of what I want to do with metaprogramming. For example:

What you say next is not metaprogramming per se; those are performance issues (that you resolve using compile-time metaprogramming). You're missing the point.
 unit carrying types: check for unit errors (adding feet to seconds)
 at compile time. I can be sure there are no unit error without
 knowing if I've executed every possible code path.

There is no compile time metaprogramming in dynamic languages; you just can't verify anything at compile time, of course you can't do that! Again, you are talking about performance issues. That's doable in a dynamic language, the checks are just run at run time.
 Domain specific compile time optimizations: Evaluate a O(n^3)
 function so I can generate O(n) code rather than write O(n^2) code.
 If you do that at runtime, things get slower, not faster.

Again *optimization*. How many times should I say that I agree that D is better than almost every dynamic language if you need speed?
 Any language that doesn't have a "compile time" that is evaluated
 only once for all code and before the product ships, can't do these.

You are right, but if you *don't* need *speed*, you don't need all that stuff, that's not metaprogramming to fix a "logic" problem, they are all optimization tricks, if you don't need speed, you don't need optimization tricks. The kind of metaprogramming I'm talking about is, for example, generating boring, repetitive boilerplate code. -- Leandro Lucarella (AKA luca) http://llucax.com.ar/ ---------------------------------------------------------------------- GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145 104C 949E BFB6 5F5A 8D05) ---------------------------------------------------------------------- Es mejor probar el sabor de sapo y darse cuenta que es feo, antes que no hacerlo y creer que es una gran gomita de pera. -- Dr Ricardo Vaporesso, Malta 1951
Dec 02 2009
next sibling parent reply BCS <none anon.com> writes:
Hello Leandro,

 BCS, el  2 de diciembre a las 17:37 me escribiste:
 
 Hello Leandro,
 
 If you say dynamic languages don't have metaprogramming
 capabilities, you just don't have any idea of what a dynamic
 language really is.
 

any idea of what I want to do with metaprogramming. For example:

issues (that you resolve using compile-time metaprogramming). You're missing the point.

No, you're missing MY point. I was very careful to add "what I want to do with" to my statement. It might not be true for you, but what I assert is true for me. Most of the things *I* want from metaprogramming must be done as compile time metaprogramming. Saying "dynamic languages can do something at run time" doesn't imply that there is nothing more to be had by doing it at compile time.
 unit carrying types: check for unit errors (adding feet to seconds)
 at compile time. I can be sure there are no unit error without
 knowing if I've executed every possible code path.
 

can't verify anything at compile time, of course you can't do that! Again, you are talking about performance issues; that's doable in a dynamic language, the checks are just run at run time.

The reason for doing the checks at compile time are not performance but correctness. I want to know a priori that the code is correct rather than wait till runtime.
 Domain specific compile time optimizations: Evaluate a O(n^3)
 function so I can generate O(n) code rather than write O(n^2) code.
 If you do that at runtime, things get slower, not faster.
 

is better than almost every dynamic languages if you need speed?

I'm not arguing on that point. What I'm arguing is that (at least for me) the primary advantages of metaprogramming are static checks (for non-perf benefits) and performance. Both of these must be done at compile time. Runtime metaprogramming just seems pointless *to me.*
 Any language that doesn't have a "compile time" that is evaluated
 only once for all code and before the product ships, can't do these.
 

that stuff, that's not metaprogramming to fix a "logic" problem, they are all optimization tricks, if you don't need speed, you don't need optimization tricks.

Personally, I'd rather use non-metaprogramming solutions where runtime solutions are viable. They are generally easier to work with (from the lib author's standpoint) and should be just as powerful. The API might be a little messier, but you should be able to get just as much done with it.
 
 The kind of metaprogramming I'm talking about is, for example,
 generating boring, repetitive boilerplate code.

For that kind of thing, if I had a choice between compile time meta, run time meta and non meta, the last one I'd use is run-time meta.
Dec 02 2009
next sibling parent reply Sergey Gromov <snake.scaly gmail.com> writes:
BCS wrote:
 I'm not arguing on that point. What I'm arguing is that (at least for 
 me) the primary advantages of metaprogramming are static checks (for 
 non-perf benefits) and performance. Both of these must be done at 
 compile time. Runtime metaprogramming just seems pointless *to me.*

One of the important applications of metaprogramming is generating code that would be too tedious or bug-prone to write and maintain manually. Dynamic languages can definitely provide for that.
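A small sketch of that kind of boilerplate generation in Python (all names illustrative): one forwarding method per name in a list, instead of writing each wrapper out by hand.

```python
# Generate forwarding boilerplate instead of writing it by hand:
# a wrapper that logs every delegated call.
class Logged:
    def __init__(self, inner):
        self._inner = inner
        self.calls = []

def forward(name):
    def method(self, *args):
        self.calls.append(name)
        return getattr(self._inner, name)(*args)
    return method

# the tedious part, automated: one method per forwarded name
for name in ['append', 'pop', 'sort']:
    setattr(Logged, name, forward(name))

wrapped = Logged([3, 1, 2])
wrapped.append(0)
wrapped.sort()
```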
Dec 02 2009
next sibling parent BCS <none anon.com> writes:
Hello Sergey,

 BCS wrote:
 
 I'm not arguing on that point. What I'm arguing is that (at least for
 me) the primary advantages of metaprogramming are static checks (for
 non-perf benefits) and performance. Both of these must be done at
 compile time. Runtime metaprogramming just seems pointless *to me.*
 

which would be too tedious or bug-prone to generate and support manually. Dynamic languages can definitely provide for that.

They can, but I question if it's the best way to do it in those languages. Generating code and running it at runtime seems pointless. Why have the intermediate step with the code? I have something I want to do, so I encode it as one abstraction (a DSL), translate it into another (the host language) and then compute it in a third (the runtime). If it's all at runtime anyway, why not just use the runtime to evaluate/interpret the DSL directly?
Dec 02 2009
prev sibling parent reply Bill Baxter <wbaxter gmail.com> writes:
On Wed, Dec 2, 2009 at 3:26 PM, BCS <none anon.com> wrote:
 Hello Sergey,

 BCS wrote:

 I'm not arguing on that point. What I'm arguing is that (at least for
 me) the primary advantages of metaprogramming are static checks (for
 non-perf benefits) and performance. Both of these must be done at
 compile time. Runtime metaprogramming just seems pointless *to me.*

which would be too tedious or bug-prone to generate and support manually. Dynamic languages can definitely provide for that.

 They can, but I question if it's the best way to do it in those languages.
 Generating code and running it at runtime seems to be pointless. Why have
 the intermediate step with the code? I have something I want to do, so I
 encode it as one abstraction (a DSL), translate it into another (the host
 language) and then compute it in a third (the runtime). If it's all at
 runtime anyway, why not just use the runtime to evaluate/interpret the DSL
 directly.

You may be able to memoize the generated code so you only have to generate it once per run, but use it many times. Probably performance is the reason you wouldn't want to reinterpret the DSL from scratch every use. Even dynamic language users have their limits on how long they're willing to wait for something to finish. --bb
Dec 02 2009
parent BCS <none anon.com> writes:
Hello Bill,

 On Wed, Dec 2, 2009 at 3:26 PM, BCS <none anon.com> wrote:
 
 Hello Sergey,
 
 They can, but I question if it's the best way to do it in those
 languages. Generating code and running it at runtime seems to be
 pointless. Why have the intermediate step with the code? I have
 something I want to do, so I use encode it as one abstraction (a
 DSL), translate it into another (the host language) and then compute
 it in a third (the runtime). If it's all at runtime anyway, why not
 just use the runtime to evaluate/interpret the DSL directly.
 

generate it once per run, but use it many times. Probably performance is the reason you wouldn't want to reinterpret the DSL from scratch every use. Even dynamic language users have their limits on how long they're willing to wait for something to finish. --bb

Yes, some of the performance issues (that I didn't bring up) can be addressed. But what about the points I did bring up? Like added conceptual complexity and another degree of separation between what you want and what you get?
Dec 03 2009
prev sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
retard wrote:
 Wed, 02 Dec 2009 21:16:28 +0000, BCS wrote:
 
 Hello Leandro,

 Again *optimization*. How many times should I say that I agree that D
 is better than almost every dynamic languages if you need speed?

me) the primary advantages of metaprogramming are static checks (for non-perf benefits) and performance. Both of these must be done at compile time. Runtime metaprogramming just seems pointless *to me.*

Both the language used to represent D metaprograms and D are suboptimal for many kinds of DSLs. A dynamic language can provide better control over these issues without resorting to manual string parsing. If the DSL is closer to the problem domain, it can have a great effect on program correctness. For instance, you could define natural language like statements in your DSL with functional composition. In D you basically have to write all metaprograms inside strings and parse them with CTFE functions. In e.g. lisp or io the DSL is on the same abstraction level as the main language. These are of course slow, but in some environments you need to be able to provide non-developers an intuitive interface for writing business logic. Even the runtime metaprogramming system can provide optimizations after the DSL has been processed. I understand your logic. It's very simple. You use metaprogramming to improve performance.

Static dimensional analysis doesn't improve performance, and I recall he mentioned that. Andrei
Dec 02 2009
parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
retard wrote:
 Wed, 02 Dec 2009 16:00:50 -0800, Andrei Alexandrescu wrote:
 
 retard wrote:
 Wed, 02 Dec 2009 21:16:28 +0000, BCS wrote:

 Hello Leandro,
 Again *optimization*. How many times should I say that I agree that D
 is better than almost every dynamic languages if you need speed?

me) the primary advantages of metaprogramming are static checks (for non-perf benefits) and performance. Both of these must be done at compile time. Runtime metaprogramming just seems pointless *to me.*

for many kinds of DSLs. A dynamic language can provide better control over these issues without resorting to manual string parsing. If the DSL is closer to the problem domain, it can have a great effect on program correctness. For instance, you could define natural language like statements in your DSL with functional composition. In D you basically have to write all metaprograms inside strings and parse them with CTFE functions. In e.g. lisp or io the DSL is on the same abstraction level as the main language. These are of course slow, but in some environments you need to be able to provide non-developers an intuitive interface for writing business logic. Even the runtime metaprogramming system can provide optimizations after the DSL has been processed. I understand your logic. It's very simple. You use metaprogramming to improve performance.

mentioned that.

Why not? I agree it also does static checking of type compatibility, but when done at runtime, computing the associated runtime type tag and comparing them also requires cpu cycles. If the analysis is done at compile time, the computational problem degenerates to operations on scalars, and types can be erased at runtime if they are not used for anything else.

Doing it at runtime... see Don's VW metaphor. The whole point is to make it impossible to write incorrect programs, not to detect those that are incorrect. There's a huge difference. Andrei
Dec 02 2009
prev sibling next sibling parent reply retard <re tard.com.invalid> writes:
Wed, 02 Dec 2009 21:16:28 +0000, BCS wrote:

 Hello Leandro,

 Again *optimization*. How many times should I say that I agree that D
 is better than almost every dynamic languages if you need speed?

I'm not arguing on that point. What I'm arguing is that (at least for me) the primary advantages of metaprogramming are static checks (for non-perf benefits) and performance. Both of these must be done at compile time. Runtime metaprogramming just seems pointless *to me.*

Both the language used to represent D metaprograms and D are suboptimal for many kinds of DSLs. A dynamic language can provide better control over these issues without resorting to manual string parsing. If the DSL is closer to the problem domain, it can have a great effect on program correctness. For instance, you could define natural language like statements in your DSL with functional composition. In D you basically have to write all metaprograms inside strings and parse them with CTFE functions. In e.g. lisp or io the DSL is on the same abstraction level as the main language. These are of course slow, but in some environments you need to be able to provide non-developers an intuitive interface for writing business logic. Even the runtime metaprogramming system can provide optimizations after the DSL has been processed. I understand your logic. It's very simple. You use metaprogramming to improve performance. That's also the reason you use D - it's the language that can provide the greatest performance once the compiler has matured a bit. To me program inefficiency is a rather small problem today. Most programs perform fast enough. But they crash way too often and leak memory. The fact that Walter actually favors segfaults won't fix the #1 problem. The fact that D has a conservative GC won't fix the #2. Other problems we face today are e.g. vendor lock-in in forms of tivoization, closed binaries, and cloud computing. D doesn't help here either. It doesn't enforce copyleft (e.g. AGPL), and features like inline assembler encourage the use of DRM systems.
Dec 02 2009
parent reply BCS <none anon.com> writes:
Hello retard,

 Wed, 02 Dec 2009 21:16:28 +0000, BCS wrote:
 
 Hello Leandro,
 
 Again *optimization*. How many times should I say that I agree that
 D is better than almost every dynamic languages if you need speed?
 

me) the primary advantages of metaprogramming are static checks (for non-perf benefits) and performance. Both of these must be done at compile time. Runtime metaprogramming just seems pointless *to me.*

suboptimal for many kinds of DSLs. A dynamic language can provide better control over these issues without resorting to manual string parsing. If the DSL is closer to the problem domain, it can have a great effect on program correctness.

I rather like doing metaprogramming, and I've only done one program that uses string parsing. Aside from that one, the two or three most complicated libs I've done work 100% within the normal D grammar. Show me ONE thing that can be done using run time metaprogramming that can't be done as well or better with run time, non-dynamic, non-meta and/or compile time meta. Unless I'm totally clueless as to what people are talking about when they say runtime meta, I don't think you will be able to. Anything that amounts to making the syntax look nicer can be done as compile time meta, and anything else can be done with data structure walking and interpretation. All of that is available in non dynamic languages. I guess I should concede the eval function, but if you don't like CTFE+mixin...
 
 For instance, you could define natural language like statements in
 your DSL with functional composition. In D you basically have to write
 all metaprograms inside strings and parse them with CTFE functions.

I dispute that claim.
 In
 e.g. lisp or io the DSL is on the same abstraction level as the main
 language. These are of course slow, but in some environments you need
 to be able to provide non-developers an intuitive interface for
 writing business logic. Even the runtime metaprogramming system can
 provide optimizations after the DSL has been processed.
 
 I understand your logic. It's very simple. You use metaprogramming to
 improve performance.

No, that is a flawed statement. ONE of the things I use metaprogramming for is to improve performance. Look at my parser generator, my equation solver and my units library. None of these have performance as a main driving motivation. For most of the stuff I've done where perf is even considered, it's not a matter of "let's make this faster by doing it meta", it's a matter of "if this solution weren't done meta, it wouldn't be viable and a more conventional solution would be used". But even that isn't the norm.
 That's also the reason you use D - it's the
 language that can provide greatest performance once the compiler has
 matured a bit. To me program inefficiency is a rather small problem
 today. Most programs perform fast enough. But they crash way too often
 and leak memory. The fact that Walter actually favors segfaults won't
 fix the #1 problem. The fact that D has a conservative GC won't fix
 the #2. Other problems we face today are e.g. vendor lock-in in forms
 of tivoization,

 closed binaries,

Why is that a problem?
 and cloud computing. 

I've never liked the cloud model, but not from the lock-in issues.
 D doesn't help here either. It doesn't enforce copyleft (e.g. AGPL)

And I think it shouldn't.
 and features like inline assembler encourage the use of drm systems.

How does that follow?
Dec 03 2009
parent reply dsimcha <dsimcha yahoo.com> writes:
== Quote from BCS (none anon.com)'s article
 Show me ONE thing that can be done using run time meta programming that can't
 be done as well or better with run time, non-dynamic, non-meta and/or compile
 time meta. Unless I'm totally clueless as to what people are talking about
 when they say runtime meta, I don't think you will be able to. Anything that
 amounts to making the syntax look nicer can be done as compile time meta
 and anything else can be done with data structure walking and interpretation.
 All of that is available in non dynamic languages.
 I guess I should concede the eval function but if you don't like CTFE+mixin...

Oh come on. I'm as much a fan of D metaprogramming as anyone, but even I admit that there are certain things that static languages just suck at. One day I got really addicted to std.algorithm and decided I wanted similar functionality for text filters from a command line, so I wrote map, filter and count scripts that take predicates specified at the command line.

filter.py:

    import sys
    pred = eval('lambda line: ' + sys.argv[2])
    for line in open(sys.argv[1]):
        if pred(line): print line.strip()

Usage: filter.py foo.txt "float(line.split()[1]) < 5.0"

Metaprogramming isn't very rigorously defined, but this has to qualify. Try writing something similar in D.
Dec 03 2009
next sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
dsimcha wrote:
 == Quote from BCS (none anon.com)'s article
 Show me ONE thing that can be done using run time meta programming that can't
 be done as well or better with run time, non-dynamic, non-meta and/or compile
 time meta. Unless I'm totally clueless as to what people are talking about
 when they say runtime meta, I don't think you will be able to. Anything that
 amounts to making the syntax look nicer can be done as compile time meta
 and anything else can be done with data structure walking and interpretation.
 All of that is available in non dynamic languages.
 I guess I should concede the eval function but if you don't like CTFE+mixin...

Oh come on. I'm as much a fan of D metaprogramming as anyone, but even I admit that there are certain things that static languages just suck at. One day I got really addicted to std.algorithm and decided I wanted similar functionality for text filters from a command line, so I wrote map, filter and count scripts that take predicates specified at the command line.

filter.py:

    import sys
    pred = eval('lambda line: ' + sys.argv[2])
    for line in open(sys.argv[1]):
        if pred(line): print line.strip()

Usage: filter.py foo.txt "float(line.split()[1]) < 5.0"

Metaprogramming isn't very rigorously defined, but this has to qualify. Try writing something similar in D.

eval rocks. Andrei
Dec 03 2009
prev sibling next sibling parent BCS <none anon.com> writes:
Hello dsimcha,

 == Quote from BCS (none anon.com)'s article
 
 Show me ONE thing that can be done using run time meta programming
 that can't
 be done as well or better with run time, non-dynamic, non-meta and/or
 compile
 time meta. Unless I'm totally clueless as to what people are talking
 about
 when they say runtime meta, I don't think you will be able to.
 Anything that
 amounts to making the syntax look nicer can be done as compile time
 meta
 and anything else can be done with data structure walking and
 interpretation.
 All of that is available in non dynamic languages.
 I guess I should concede the eval function but if you don't like
 CTFE+mixin...

even I admit that there are certain things that static languages just suck at. One day I got really addicted to std.algorithm and decided I wanted similar functionality for text filters from a command line, so I wrote map, filter and count scripts that take predicates specified at the command line.

filter.py:

    import sys
    pred = eval('lambda line: ' + sys.argv[2])
    for line in open(sys.argv[1]):
        if pred(line): print line.strip()

Usage: filter.py foo.txt "float(line.split()[1]) < 5.0"

Metaprogramming isn't very rigorously defined, but this has to qualify. Try writing something similar in D.

Yup, eval is the one thing that dynamic *really* has over static.
Dec 03 2009
prev sibling parent retard <re tard.com.invalid> writes:
Thu, 03 Dec 2009 21:35:14 +0000, BCS wrote:

 Hello dsimcha,
 
 == Quote from BCS (none anon.com)'s article
 
 Show me ONE thing that can be done using run time meta programming
 that can't
 be done as well or better with run time, non-dynamic, non-meta and/or
 compile
 time meta. Unless I'm totally clueless as to what people are talking
 about
 when they say runtime meta, I don't think you will be able to.
 Anything that
 amounts to making the syntax look nicer can be done as compile time
 meta
 and anything else can be done with data structure walking and
 interpretation.
 All of that is available in non dynamic languages. I guess I should
 concede the eval function but if you don't like CTFE+mixin...

I admit that there are certain things that static languages just suck at. One day I got really addicted to std.algorithm and decided I wanted similar functionality for text filters from a command line, so I wrote map, filter and count scripts that take predicates specified at the command line.

filter.py:

    import sys
    pred = eval('lambda line: ' + sys.argv[2])
    for line in open(sys.argv[1]):
        if pred(line): print line.strip()

Usage: filter.py foo.txt "float(line.split()[1]) < 5.0"

Metaprogramming isn't very rigorously defined, but this has to qualify. Try writing something similar in D.


You can even send the runtime-generated string via network to some other process that runs on a completely different CPU architecture and still compute the result. You can do this with D too, but you need to write the interpreter or JIT yourself. Dynamic languages provide this as a built-in feature. Guess why D or C++ isn't used much in client-side web site code :)
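A minimal sketch of that point: a predicate arrives as a plain string (imagine it came off a socket from a machine with a different CPU architecture) and becomes a callable at runtime; the sample data here is invented. A static language would need to ship an interpreter or JIT to do the same.

```python
# The "wire format" is just source text; eval compiles it on arrival.
received = "float(line.split()[1]) < 5.0"   # pretend this came off the wire
pred = eval("lambda line: " + received)

lines = ["a 2.5 x", "b 7.0 y"]
print([ln for ln in lines if pred(ln)])     # ['a 2.5 x']
```

(The usual caveat applies: eval-ing untrusted network input is a security hole; in a real system the string would come from a trusted peer or a sandboxed mini-language.)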
Dec 03 2009
prev sibling next sibling parent Leandro Lucarella <llucax gmail.com> writes:
BCS, on December 2 at 21:16, you wrote:
 Hello Leandro,
 
BCS, on December 2 at 17:37, you wrote:

Hello Leandro,

If you say dynamic languages don't have metaprogramming
capabilities, you just don't have any idea of what a dynamic
language really is.

any idea of what I want to do with metaprogramming. For example:

What you say next is not metaprogramming per se; they are performance issues (that you resolve using compile-time metaprogramming). You're missing the point.

No, you're missing MY point. I was very careful to add "what I want to do with" to my statement. It might not be true for you, but what I asserted is true for me. Most of the things *I* want from metaprogramming must be done as compile-time metaprogramming. Saying "dynamic languages can do something at run time" doesn't imply that there is nothing more to be had by doing it at compile time.

Well, I will have to do like Monty Python then. This thread is getting too silly, so I'll have to end it. http://www.youtube.com/watch?v=yTQrCjP14tA

-- 
Leandro Lucarella (AKA luca)                     http://llucax.com.ar/
----------------------------------------------------------------------
GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145 104C 949E BFB6 5F5A 8D05)
----------------------------------------------------------------------
How are you all? How do you feel today, the 29th of -member of 1961, the day
on which we commemorate the ninety-seventh nebulization of the martyr
Peperino Pómoro next to the Rolo Bridge in the city of Jadad?
	-- Peperino Pómoro
Dec 02 2009
prev sibling parent retard <re tard.com.invalid> writes:
Wed, 02 Dec 2009 16:00:50 -0800, Andrei Alexandrescu wrote:

 retard wrote:
 Wed, 02 Dec 2009 21:16:28 +0000, BCS wrote:
 
 Hello Leandro,

 Again *optimization*. How many times should I say that I agree that D
 is better than almost every dynamic languages if you need speed?

For me, the primary advantages of metaprogramming are static checks (for non-perf benefits) and performance. Both of these must be done at compile time. Runtime metaprogramming just seems pointless *to me.*

Both the language used to represent D metaprograms and D itself are suboptimal for many kinds of DSLs. A dynamic language can provide better control over these issues without resorting to manual string parsing. If the DSL is closer to the problem domain, it can have a great effect on program correctness. For instance, you could define natural-language-like statements in your DSL with functional composition. In D you basically have to write all metaprograms inside strings and parse them with CTFE functions. In e.g. Lisp or Io the DSL is on the same abstraction level as the main language. These are of course slow, but in some environments you need to be able to provide non-developers an intuitive interface for writing business logic. Even a runtime metaprogramming system can provide optimizations after the DSL has been processed.

I understand your logic. It's very simple. You use metaprogramming to improve performance.

Static dimensional analysis doesn't improve performance, and I recall he mentioned that.

Why not? I agree it also does static checking of type compatibility, but when done at runtime, computing the associated runtime type tag and comparing tags also requires CPU cycles. If the analysis is done at compile time, the computational problem degenerates to operations on scalars, and types can be erased at runtime if they are not used for anything else.
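The runtime-tag cost described here can be sketched as follows; `Quantity` is a made-up class for illustration, not a real units library:

```python
# Runtime dimensional analysis: every value carries a unit tag, and the
# tags are compared on EVERY operation -- cycles that a compile-time
# system (like a D units library) spends once and then erases.

class Quantity:
    def __init__(self, value, unit):
        self.value = value
        self.unit = unit                 # the runtime "type tag"

    def __add__(self, other):
        if self.unit != other.unit:      # checked on every single add
            raise TypeError("unit mismatch: %s vs %s"
                            % (self.unit, other.unit))
        return Quantity(self.value + other.value, self.unit)

d = Quantity(3.0, "m") + Quantity(4.0, "m")
print(d.value, d.unit)                   # 7.0 m

try:
    Quantity(3.0, "m") + Quantity(4.0, "s")
except TypeError as e:
    print(e)                             # unit mismatch: m vs s
```

Both approaches catch the error; the difference is whether the check (and the tag storage) survives into the hot loop.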
Dec 02 2009
prev sibling parent reply Leandro Lucarella <llucax gmail.com> writes:
Don, on December 2 at 22:20, you wrote:
 Leandro Lucarella wrote:
BCS, on December 2 at 17:37, you wrote:
Hello Leandro,


If you say dynamic languages don't have metaprogramming capabilities,
you just don't have any idea of what a dynamic language really is.

any idea of what I want to do with metaprogramming. For example:

What you say next, is not metaprogramming per se, they are performance issues (that you resolve using compile-time metaprogramming).

They are metaprogramming tasks. Dynamic languages can do some metaprogramming tasks. They can't do those ones.

Because they make no sense, I really don't know how to put it. If you need speed, you code in C/C++/D, whatever. It's like saying that you can't fly with a car and that's a problem. It's not; cars are not supposed to fly. If you need to fly, go buy a plane or a helicopter. Of course it's much cooler to fly than to drive a car, but if you need to go just a couple of miles, flying gets really annoying, and it would take you more time, money and effort than using your car.
You are right, but if you *don't* need *speed*, you don't need all that
stuff, that's not metaprogramming to fix a "logic" problem, they are all
optimization tricks, if you don't need speed, you don't need optimization
tricks.

"you don't need speed" is a pretty glib statement. I think the

I don't know what that means...
 reality is that you don't care about constant factors in speed, even
 if they are large (say 200 times slower is OK). But bubble-sort is
 probably still not acceptable.

Bubble sort is perfectly acceptable for, say, a 100-element array.
 Metaprogramming can be used to reduce big-O complexity rather than
 just constant-factor improvement. Lumping that in with
 "optimisation" is highly misleading.

It always depends on the context, of course, but when doing programs that deal with small data sets and are mostly IO bounded, you *really* can care less about performance and big-O.

-- 
Leandro Lucarella (AKA luca)                     http://llucax.com.ar/
----------------------------------------------------------------------
GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145 104C 949E BFB6 5F5A 8D05)
----------------------------------------------------------------------
"CIRILO" AND "SIRACUSA" FROM "SEÑORITA MAESTRA": ONE DEAD AND THE OTHER IN JAIL
	-- Crónica TV
Dec 02 2009
parent BCS <none anon.com> writes:
Hello Leandro,

Don, on December 2 at 22:20, you wrote:
 
 They are metaprogramming tasks. Dynamic languages can do some
 metaprogramming tasks. They can't do those ones.
 

Because they make no sense, I really don't know how to put it. If you need speed, you code in C/C++/D, whatever. It's like saying that you can't fly with a car and that's a problem. It's not; cars are not supposed to fly. If you need to fly, go buy a plane or a helicopter. Of course it's much cooler to fly than to drive a car, but if you need to go just a couple of miles, flying gets really annoying, and it would take you more time, money and effort than using your car.

Saying "you can do that at runtime" re dynamic languages and D's metaprogramming is like saying a VW Bug can carry rock when someone's looking for a pickup to move gravel in. Yes, it's technically correct, but there are many things a pickup can do that the Bug can't (and the same the other way). The same thing is true of the topic at hand; dynamic languages can do /some/ of what D's meta stuff can do, but not all of it. And I'll point out yet again: not all of the extra things D does are perf related. And before you say it: yes, there are some things dynamic languages beat D at. All I'm saying is that meta isn't one of them.
 Metaprogramming can be used to reduce big-O complexity rather than
 just constant-factor improvement. Lumping that in with "optimisation"
 is highly misleading.
 

It always depends on the context, of course, but when doing programs that deal with small data sets and are mostly IO bounded, you *really* can care less about performance and big-O.

1) If I can write a lib that gives me a better O() for the same effort (once the lib is written), I *always* care about O().
2) For all programs, either the program is irrelevant or someone will throw more input at it than you ever expected.
3) The phrase is "can't care less" (sorry, nitpicking).
Dec 02 2009
prev sibling next sibling parent Leandro Lucarella <llucax gmail.com> writes:
Walter Bright, on December 1 at 10:46, you wrote:
And BTW, Python *has* some built-in immutable types (strings, tuples,
integers, floats, frozensets, and I don't remember if there is anything
else). Python uses convention over hard discipline (no public/private, for
example), so you can make your own immutable types: just don't add
mutating methods and don't mess with the internals. I agree it's arguable,
but people actually use these conventions (they are all consenting
adults :), so things work.

I agree that statically enforced immutability is unnecessary if you are able to rigidly follow an immutability convention. C++ also has immutability by convention. People who work in large teams with programmers of all skill levels tell me, however, that having a convention and being sure it is followed 100% are two very different things.

Yes, I know, probably Python (and most dynamic languages) and Java are the two extremes in this regard.
I can only speak from experience, and my bug count in Python is extremely
low, even when doing MT (the Queue module provides a very easy way to pass
messages from one thread to another).

How about the GIL?

The GIL is a performance issue. As I said, that's the only point where D is stronger than Python (and maybe other dynamic languages, I mention Python because is the language I use the most).
I agree that, when you don't care much for performance, things are much
easier :)

I would also agree that your bug count and complexity should be low as long as you're staying within the paradigms that Python (or any language) was designed to support.

Of course. But Python is a very flexible language (or I use too few paradigms when programming ;).
4. no metaprogramming

e.g. lisp macros?

Exactly! You can even generate code dynamically! This is a very nice example: http://code.activestate.com/recipes/362305/ It makes "self" implicit in *pure Python*. If you say dynamic languages don't have metaprogramming capabilities, you just don't have any idea of what a dynamic language really is.

Ok, can you do Bill Baxter's swizzler? Can you do Don Clugston's FPU code generator?

I don't know either of those things, but I know Python has very good metaprogramming capabilities (decorators and metaclasses probably being the two biggest features in this regard).
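A minimal sketch of the two features named here; `logged`, `UpperAttrs` and `Config` are invented for illustration:

```python
# A decorator wraps a function at definition time; a metaclass rewrites a
# class as it is created. Both are runtime metaprogramming in plain Python.

def logged(fn):
    def wrapper(*args):
        print("calling", fn.__name__)
        return fn(*args)
    return wrapper

@logged
def add(a, b):
    return a + b

class UpperAttrs(type):
    # metaclass: uppercase every non-dunder attribute name at class creation
    def __new__(mcs, name, bases, ns):
        new_ns = {(k if k.startswith("__") else k.upper()): v
                  for k, v in ns.items()}
        return super().__new__(mcs, name, bases, new_ns)

class Config(metaclass=UpperAttrs):
    host = "localhost"

print(add(2, 3))       # prints "calling add", then 5
print(Config.HOST)     # localhost
```

The ActiveState recipe linked above uses the same metaclass hook to rewrite method signatures, which is how it makes `self` implicit.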
5. simple interfacing to C

I think that has more to do with the execution model than language features. Most scripting languages are interpreted, and require some sort of assistance from the runtime system. If the language were compiled instead, they wouldn't necessarily need those.

In D you need interfacing code too, it can be a little simpler, that's true.

The interfacing in D is nothing more than providing a declaration. There is no code executed.

Unless you want to pass D strings to C; then you have to execute toStringz(), which is a really thin "wrapper", but it's a wrapper. Using C from D is (generally) error prone and painful, so I usually end up writing more D'ish wrappers to make the D coding more pleasant. And BTW, you can access C dynamic libraries in Python via the ctypes module: http://docs.python.org/library/ctypes.html It's not safe, and of course, being a dynamic language, you access C code at runtime (because there is no compile time), but you can interface with C very easily:
 import ctypes
 libc = ctypes.cdll.LoadLibrary("libc.so.6")
 libc.printf("hello world %i\n", 5)



Wow, that was hard! =)
6. scope guard (transactional processing); Python has the miserable
try-catch-finally paradigm


WRONG! See the with statement: http://www.python.org/dev/peps/pep-0343/

    with lock:
        some_non_mt_function()

    with transaction:
        some_queries()

    with file(fname) as f:
        x = f.read(10)
        f.write(x)

Looks like you're right, and it's a recently added feature. I suggest it proves my point - Python had to add complexity to support another paradigm. Python's "with" doesn't look any simpler than scope guard.

It's simpler, because you have only one obvious way to do things; in D you can use a struct, a scope class or a scope statement to achieve the same. Of course that gives you more flexibility, but it adds complexity to the language. I'm not complaining or saying that D is wrong, I'm just saying that Python is a very expressive language without much complexity. I think the tradeoff is speed.
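For comparison, a scope-guard-like helper can be hand-rolled in a few lines with contextlib; `transaction` here is a made-up stand-in, not a real API from the thread:

```python
# contextlib.contextmanager turns a generator into a "with" helper whose
# code after the yield plays the role of D's scope(success)/scope(failure).
from contextlib import contextmanager

@contextmanager
def transaction(log):
    log.append("begin")
    try:
        yield
        log.append("commit")      # like scope(success)
    except Exception:
        log.append("rollback")    # like scope(failure)
        raise

log = []
with transaction(log):
    pass
print(log)    # ['begin', 'commit']

log = []
try:
    with transaction(log):
        raise ValueError("boom")
except ValueError:
    pass
print(log)    # ['begin', 'rollback']
```

The difference Walter points at stands, though: the cleanup lives in the helper and must be invoked at every use site, whereas RAII attaches it to the object itself.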
8. RAII

I think this could also be enforced dynamically.

Again, the with statement.

Yes, you can emulate RAII with the with statement, but with RAII (objects that destruct when they go out of scope) you can put this behavior in the object rather than explicitly in the code every time you use it. It's more complicated to have to remember to do it on every use.

Maybe you are right, but the with statement plays very well with the "explicit is better than implicit" of Python :) Again, it's flexibility vs complexity.
10. ability to manage resources directly


What do you mean by resource?

Garbage collection isn't appropriate for managing every resource. Scarce ones need manual handling. Even large mallocs are often better done outside of the GC.

We are talking about performance again. If you need speed, I agree Python is worse than D.
11. inline assembler


You can do bytecode manipulation, which is the assembler of dynamic languages :)

That doesn't help if you really need to do a little assembler.

Right, but I don't think anyone uses assembler just for fun; you use it either for optimization (where I already said D is better than Python) or for doing some low-level stuff (where Python clearly is not a viable option).
I really think the *only* *major* advantage of D over Python is speed.
That's it.

I probably place a lot more importance on static verification rather than relying on convention and tons of unit tests.

There are static analyzers for Python:

http://www.logilab.org/857
http://divmod.org/trac/wiki/DivmodPyflakes
http://pychecker.sourceforge.net/

And again, judging from experience, I don't know why, but I really have a very small bug count when using Python. I don't work with huge teams of crappy programmers (which I think is the scenario that D tries to cover); that can be a reason ;)

-- 
Leandro Lucarella (AKA luca)                     http://llucax.com.ar/
----------------------------------------------------------------------
GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145 104C 949E BFB6 5F5A 8D05)
----------------------------------------------------------------------
Creativity is great but plagiarism is faster
Dec 01 2009
prev sibling parent Leandro Lucarella <llucax gmail.com> writes:
Andrei Alexandrescu, on December 1 at 10:58, you wrote:
I really think the *only* *major* advantage of D over Python is speed.
That's it.

In wake of the above, it's actually huge. If you can provide comparable power for better speed, that's a very big deal. (Usually dynamic/scripting languages are significantly more powerful because they have fewer constraints.)

I develop twice as fast in Python as in D. Of course this is only me, but that's where I think Python is better than D :)

I think just not having a compile cycle (no matter how fast compiling is) is a *huge* win. Having an interactive console (with embedded documentation) is another big win.

-- 
Leandro Lucarella (AKA luca)                     http://llucax.com.ar/
----------------------------------------------------------------------
GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145 104C 949E BFB6 5F5A 8D05)
----------------------------------------------------------------------
When I was a child I had a fever
My hands felt just like two balloons.
Now I've got that feeling once again
I can't explain, you would not understand
This is not how I am.
I have become comfortably numb.
Dec 01 2009