
digitalmars.D - request: python style decorators / aspect orientation

reply Nicolai Waniek <no.spam thank.you> writes:
Hi everyone,

Hopefully a few of you know Python-style decorators (Python's version of
aspect orientation):


def aspectFunc(f):
    def wrapper():
        print "log before function call"
        f()
        print "log after function call"

    return wrapper

@aspectFunc   # applies the decorator: myfunction = aspectFunc(myfunction)
def myfunction():
    print "hello world"


Would it be possible to have something like this in D? IMHO it would make code
clearer. For example, it could look like this when decorating a function:


version (debug) {
     @logFunc
}
void myFunction(int param0)
{
    // do something here
}


instead of:

void myFunction(int param0)
{
    version (debug) {
        logthis("blabla");
    }
    // do something here
    version (debug) {
        logthis("finally we reached an end here");
    }
}

I think it would take all the bloat out of functions that doesn't really belong
to the function. I don't know how much work it would be to implement such a
thing, but I think there would be many cases where this could be useful. If this
is already possible in a sane way, please let me know, as this is one of the
features I like most in Python and would like to have in D.

Best regards,
Nicolai
May 09 2007
next sibling parent reply Daniel Keep <daniel.keep.lists gmail.com> writes:
Nicolai Waniek wrote:
 [...]
This is just a quick hack, but it does work.  The main problem is that you
have to put the logFunc alias *after* the function is defined, or you get
forward-reference errors.

Apart from that, it should do what you want.

	-- Daniel

-----

module fnwrap;

import std.stdio;
import std.traits;

void logthis(char[] msg)
{
    writefln("LOG - %s", msg);
}

// Wraps fn: logs entry and exit in debug builds, then forwards the call.
ReturnType!(typeof(fn)) logFunc(alias fn)(ParameterTypeTuple!(typeof(fn)) args)
{
    alias ReturnType!(typeof(fn)) returnT;
    debug
    {
        logthis("ENTER - "~(&fn).stringof);
        scope(exit) logthis("EXIT  - "~(&fn).stringof);
    }
    // Can't `return` a void call and declare it uniformly, so branch here.
    static if( is( returnT == void ) )
        fn(args);
    else
        return fn(args);
}

void myFunction_(int param0)
{
    writefln("Do something with %s...", param0);
}
alias logFunc!(myFunction_) myFunction;

void main()
{
    myFunction(42);
}

-- 
int getRandomNumber()
{
    return 4; // chosen by fair dice roll.
              // guaranteed to be random.
}

http://xkcd.com/

v2sw5+8Yhw5ln4+5pr6OFPma8u6+7Lw4Tm6+7l6+7D
i28a2Xs3MSr2e4/6+7t4TNSMb6HTOp5en5g6RAHCP
http://hackerkey.com/
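To see the non-void branch exercised as well, something like this could be
appended to the module above (a sketch only; twice_ is a made-up example,
untested on 2007-era DMD):

int twice_(int x)
{
    return 2 * x;   // non-void, so logFunc takes the `return fn(args);` branch
}
alias logFunc!(twice_) twice;

// e.g. in main():  writefln("%s", twice(21));
// prints 42, and in debug builds logs the ENTER/EXIT lines around the call.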
May 09 2007
next sibling parent reply Chris Nicholson-Sauls <ibisbasenji gmail.com> writes:
Daniel Keep wrote:
 
 [...]

     static if( is( returnT == void ) )
         fn(args);
     else
         return fn(args);
Actually you can leave this check out, as I recall. Returning values in void-return functions is allowed, and I would infer (haven't tested, however) that returning a void from a void() would also be fine. Could be wrong.
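For instance, this sort of thing is accepted (a minimal sketch of the point;
whether 2007-era DMD takes it is an assumption here):

void inner() {}

void outer()
{
    // returning a void expression from a void function is legal D
    return inner();
}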
 }
 
 void myFunction_(int param0)
 {
     writefln("Do something with %s...", param0);
 }
 alias logFunc!(myFunction_) myFunction;
 
 void main()
 {
     myFunction(42);
 }
 
Overall a neat and useful trick.

-- Chris Nicholson-Sauls
May 10 2007
parent reply Daniel Keep <daniel.keep.lists gmail.com> writes:
Chris Nicholson-Sauls wrote:
 Daniel Keep wrote:
 [...]

 [...]
     static if( is( returnT == void ) )
         fn(args);
     else
         return fn(args);
Actually you can leave this check out, as I recall. Returning values in void-return functions is allowed, and I would infer (haven't tested, however) that returning a void from a void() would also be fine. Could be wrong.
Hmm; you're right.  I think this is one of those things that doesn't always
work, so I just headed off any problems at the pass.

Actually, this template is something of a re-write of my glSafe template,
which does *roughly* the same thing.  The difference there is that I need to
store the return value while I check for errors, and you can't declare a void
variable.

So yeah, more copy+paste than thinking on that one :P
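The shape of that pattern is roughly the following sketch; checkGLError is a
made-up stand-in here, not the real glSafe internals:

import std.traits;

void checkGLError() { /* query the error state and throw on failure */ }

ReturnType!(typeof(fn)) safeCall(alias fn)(ParameterTypeTuple!(typeof(fn)) args)
{
    alias ReturnType!(typeof(fn)) returnT;
    static if( is( returnT == void ) )
    {
        fn(args);
        checkGLError();
    }
    else
    {
        // can't declare a void variable, so the non-void case stores
        // the result, checks for errors, then returns it
        returnT result = fn(args);
        checkGLError();
        return result;
    }
}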
 [...]

 Overall a neat and useful trick.

 -- Chris Nicholson-Sauls
Like I said, I use this sort of thing in GL programming to check for errors.
So:

    glMatrixMode(GL_PROJECTION)

becomes

    glSafe!(glMatrixMode)(GL_PROJECTION)

which automatically throws an exception containing the human-readable name of
the error condition if something goes wrong.  The other neat side-effect is
that I can throw a version switch and have my program log every GL call made,
which makes tracing problems a lot easier.

Tell you what, though; I'd *kill* to be able to do this:
 module mygl;
 static import derelict.gl.gl;
 foreach( symbol ; derelict.gl.gl.symbols )
     static if( startswith(symbol.stringof, "gl") )
         mixin(`alias glSafe!(derelict.gl.gl.`~symbol.stringof~`) `
            ~symbol.stringof~`;`);
Which is, incidentally, a trick I used for some Python GL code.  It'd make my
D code a lot shorter :)

On a side note; is this all Aspect-oriented programming is?  Everything I
read about it basically amounted to "AOP is the Second Coming!!!!! Also, Java
is the best language EVAR!!!!!!" and never really said what it was or why it
was useful.

If all it is is glorified function wrapping, I'm going to be somewhat
disappointed :P

	-- Daniel

-- 
int getRandomNumber()
{
    return 4; // chosen by fair dice roll.
              // guaranteed to be random.
}

http://xkcd.com/

v2sw5+8Yhw5ln4+5pr6OFPma8u6+7Lw4Tm6+7l6+7D
i28a2Xs3MSr2e4/6+7t4TNSMb6HTOp5en5g6RAHCP
http://hackerkey.com/
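Absent that kind of symbol iteration, the 2007-era fallback is to write the
aliases out by hand (a sketch; the module path and the particular function
names are illustrative, and glSafe is the wrapper described above):

module mygl;

import gl = derelict.gl.gl;   // exact module path is a guess

// One alias per wrapped entry point, spelled out manually:
alias glSafe!(gl.glMatrixMode)   glMatrixMode;
alias glSafe!(gl.glLoadIdentity) glLoadIdentity;
alias glSafe!(gl.glBegin)        glBegin;
// ...and so on for every gl* function of interest.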
May 10 2007
next sibling parent reply Georg Wrede <georg nospam.org> writes:
Daniel Keep wrote:
 Chris Nicholson-Sauls wrote:
Overall a neat and useful trick.
Absolutely!
 Like I said, I use this sort of thing in GL programming to check for
 errors.  So:
 
 glMatrixMode(GL_PROJECTION)
 
 becomes
 
 glSafe!(glMatrixMode)(GL_PROJECTION)
 
 Which automatically throws an exception containing the human-readable
 name of the error condition if something goes wrong.  The other neat
 side-effect is that I can throw a version switch and have my program log
 every GL call made, which makes tracing problems a lot easier.
Looks like an idiom to remember!
 Tell you what, though; I'd *kill* to be able to do this:
 
 
module mygl;
static import derelict.gl.gl;
foreach( symbol ; derelict.gl.gl.symbols )
    static if( startswith(symbol.stringof, "gl") )
        mixin(`alias glSafe!(derelict.gl.gl.`~symbol.stringof~`) `
            ~symbolstringof~`;`);
Which is, incidentally, a trick I used for some Python GL code. Make my D code a lot shorter :)
I've sometimes used preprocessing (activated by makefiles) to achieve something like this. But it should of course be in the language itself.
 On a side note; is this all Aspect-oriented programming is?  Everything
 I read about it basically amounted to "AOP is the Second Coming!!!!!
 Also, Java is the best language EVAR!!!!!!" and never really said what
 it was or why it was useful.
 
 If all it is is glorified function wrapping, I'm going to be somewhat
 disappointed :P
My exact feelings when OOP was all the rage. People dancing on rooftops
hailing the Object. And all it was, was simply structs, and functions that
pretended to be inside their scope. And folks saying "saving the objects"
when they should have said "writing the data in some of the fields of some of
the struct instances to disk".

For a long time I thought I was stupid because "I didn't get it". Turned out
there wasn't anything to "get". Or rather, the thing to get was the previous
sentence.
May 10 2007
parent reply Bill Baxter <dnewsgroup billbaxter.com> writes:
Georg Wrede wrote:
 Daniel Keep wrote:
 
 My exact feelings when OOP was all the rage. People dancing on rooftops 
 hailing the Object. And all it was, was simply structs, and functions 
 that pretended to be inside their scope. And folks saying "saving the 
 objects" when they should have said "writing the data in some of the 
 fields of some of the struct instances to disk".
No no no.  You mean "object PERSISTENCE".  Sounds a lot fancier.  (But also
just means "saving some objects", which means just "writing some data from
some structs to disk")

:-)
 For a long time I thought I was stupid because "I didn't get it". Turned 
 out there wasn't anything to "get". Or rather, the thing to get was the 
 previous sentence.
Maybe it seems like a big deal if you grew up programming Cobol or something.
I never did really get the OO craze either.  I remember at one point thinking
"I must be missing something big here", so I bought and read Timothy Budd's
book "Object Oriented Programming".  I got some exposure to Smalltalk from
that, which was nice, but other than that it was pretty much a
disappointment.

--bb
May 10 2007
parent reply Georg Wrede <georg nospam.org> writes:
Bill Baxter wrote:
 Georg Wrede wrote:
 My exact feelings when OOP was all the rage. People dancing on 
 rooftops hailing the Object. And all it was, was simply structs, and 
 functions that pretended to be inside their scope. And folks saying 
 "saving the objects" when they should have said "writing the data in 
 some of the fields of some of the struct instances to disk".
No no no. You mean "object PERSISTENCE". Sounds a lot fancier. (But also just
means "saving some objects", which means just "writing some data from some
structs to disk") :-)
LOL! Right.
 For a long time I thought I was stupid because "I didn't get it". 
 Turned out there wasn't anything to "get". Or rather, the thing to get 
 was the previous sentence.
 [...]
Of course many a consultant, guest lecturer, and downright charlatan made a
living on it. And they just pretended to be explaining the thing, while
making sure that folks didn't really see how simple and mundane the whole
thing was. Grand visions of the future where everything is an Object, and
where those Objects simply and easily float across computers and the net
(entirely disregarding different OSs or CPU architectures, of course!),
gather information and come back giving you info and flowers from Jane.

The worst thing was that many books on OO did the same. But I guess that's
life. You can't sell millions of a book that confesses up front that this is
something explained in 5 pages, and that there's nothing more to it. The more
people go into FUW (fear, uncertainty and worship), the more money for you.

My bet is that we'll see this all over again. Within a couple of years a new
paradigm is going to go through the community like a forest fire, until again
folks get disillusioned and "get" it. Too bad.

Oh, and incidentally, why does quantum computing come to my mind? For
example, http://en.wikipedia.org/wiki/Qubits has a nice picture and some
esoteric rambling. I'm not saying it's not for real, but I'd sure be amazed
if ten years from now we have any real-world practical stuff coming out of
it. I once read an article on how you could use a glass of milk and its
quantum states to compute (I forget what, but it was pretty damn near the
Meaning of Life) amazing and otherwise impossible stuff.

Maybe I should start selling a black box called OD (it's a secret what it
stands for, but for you guys, if you don't tell, it stands for the Oracle of
Delphi). Basically it's just the radioactive grain from a regular household
fire alarm and a coil of copper wire and a magnet. But that's a secret, and
the whole thing is cast in epoxy to hide it. Then there's an earplug socket
which you connect to Line In on your computer, and with this amazing software
driver (/dev/od) you now get a stream of fresh entangled qubits.

Dunno what to do with them? Well, for $10k a head, send your programmers to a
ski resort in arctic Finland, and we'll enlighten them. We also have them
sleep with an OD box next to their head, and the combination of aurora
borealis radiation and the OD box will help them assimilate our
ahead-of-civilization programming paradigms. And when they come back, we'll
monitor your company's progress (for an amazingly reasonable $100k/week) for
the next six months. If no progress is evident, then we'll take your middle
management for the same treatment (at $200k/head). We guarantee results, or
your money back. (Except that by the time you get disillusioned you can't
afford to sue us anymore. And if you don't get disillusioned we'll keep at it
till you're dry.)

Oh, L. Ron Hubbard was our first customer, and he sure died rich and with a
smile on his face.

---

Man, I'm in the wrong business. I should drop D and start making those OD
boxes.

ps, a hint to those of you who plan on boringly staying with D. Maybe you can
get rich without leaving D. Check out the word "Qudit" on the same page.

---

Oh, my! Now that I think about this, I have to confess I've already done it
for real. In the nineties I was working in a consultancy, and we were running
out of money. After some serious brainstorming we got the fast buck idea that
we'll gather gullible cubicle programmers from large companies and drag them
to Lapland for an Extreme Java seminar. I organised the thing, got a few
lecturers and off we went.

The seminar was basically about pouring a list of "believe it or not" stuff
on them, giving them nightly assignments so they don't have time to drink
beer or sleep, and then giving them a nice certificate of participation to
hang on their cubicle wall. Amazing stuff like "you can write an entire web
server in Java on a single page", "with multithreading you can have several
threads visit the same object at the same time", and the like.

By the time they got home they were so confused and in awe, that the first
week at work they walked around like zombies mumbling incomprehensible stuff
to the cleaning woman and cafeteria waiters. The word got around. Six months
later we were booked solid. Our consultancy had a reputation of being a bunch
of larger than life gurus on Advanced Topics.

We put a clear plastic box three feet across right in the middle of the
entrance hall to our office. There was a hidden blower inside, and pillow
feathers floating and dancing around in it. No customer ever dared to ask
what this contraption was.

Gee, I guess writing my memoirs would be even more fun than the D book.
May 11 2007
next sibling parent reply Daniel Keep <daniel.keep.lists gmail.com> writes:
Georg Wrede wrote:
 [...]
 Of course many a consultant, guest lecturer, and downright charlatan
 made a living on it. And they just pretended to be explaining the thing,
 while making sure that folks didn't really see how simple and mundane
 the whole thing was. Grand visions of the future where everything is an
 Object, and where those Objects simply and easily float across computers
 and the net (entirely disregarding different OSs or CPU architectures,
 of course!), gather information and come back giving you info and
 flowers from Jane.
Why does the phrase "intelligent agents" suddenly spring to mind? :P
 [...]
 
 Oh, and incidentally, why does quantum computing come to my mind? For
 example, http://en.wikipedia.org/wiki/Qubits has a nice picture and some
 esoteric rambling. I'm not saying it's not for real, but I'd sure be
 amazed if ten years from now we have any real-world practical stuff
 coming out of it. I once read an article on how you could use a glass of
 milk and its quantum states to compute (I forget what, but it was pretty
 damn near the Meaning of Life) amazing and otherwise impossible stuff.
I actually had to do research on this and give a seminar about it at
university, so I can now ruin the mystique of it for you :)

Basically quantum computing is nothing more than massively parallel
probabilistic brute force. For example, if you want an answer to the
travelling salesman problem, you'd just throw it at a quantum computer which
will go off and work out, say, a few thousand possible answers. You then
measure these answers and pick the most statistically significant answer.

QC is all about stacking the odds so that given a particular number of
samples, you have a reasonable chance of getting the right answer. This is
how things like Shor's algorithm for factoring primes in polynomial time work
(and yes; it *does* actually work. People have actually used it to factor
small primes).

The interesting thing is the hardware. You mentioned a glass of milk; while I
haven't heard of that, I do know that a few guys used a thimbleful of
chloroform to factor a smallish prime. Hell, you can build a quantum computer
out of almost any molecule you can suspend in water; each atom's spin is one
qubit, and you use RF to alter the spins (different atoms react to different
frequencies).

It's all very interesting, but it's fundamentally just picking random answers
and hoping you get the right one. The difference is that QC does this very,
very fast. It's like the difference in the old games that had both software
and hardware renderers. They did exactly the same calculations, it's just
the hardware renderers were faster and could do higher resolutions.

Tangentially, this is also how DNA computing works; throw a few million
strands of DNA at the problem, and you're just *bound* to get the right
answer!
 Maybe I should start selling a black box called OD (it's a secret what
 it stands for, but for you guys, if you don't tell, it stands for the
 Oracle of Delphi). [...] Oh, L.
 Ron Hubbard was our first customer, and he sure died rich and with a
 smile on his face.
LOL.
 [...]
 
 Oh, my! Now that I think about this, I have to confess I've already done
 it for real. [...]
 
 Gee, I guess writing my memoirs would be even more fun than the D book.
That's hilarious.

I think people are just gullible by nature :P

	-- Daniel

-- 
int getRandomNumber()
{
    return 4; // chosen by fair dice roll.
              // guaranteed to be random.
}

http://xkcd.com/

v2sw5+8Yhw5ln4+5pr6OFPma8u6+7Lw4Tm6+7l6+7D
i28a2Xs3MSr2e4/6+7t4TNSMb6HTOp5en5g6RAHCP
http://hackerkey.com/
May 11 2007
next sibling parent reply Oskar Linde <oskar.lindeREM OVEgmail.com> writes:
Daniel Keep skrev:

 QC is all about stacking the odds so that given a particular number of
 samples, you have a reasonable chance of getting the right answer.  This
 is how things like Shor's algorithm for factoring primes in polynomial
 time work (and yes; it *does* actually work.  People have actually used
 it to factor small primes).
I find that very hard to believe... :)

(Sorry)

/Oskar
May 11 2007
parent reply Bill Baxter <dnewsgroup billbaxter.com> writes:
Oskar Linde wrote:
 Daniel Keep skrev:
 
 QC is all about stacking the odds so that given a particular number of
 samples, you have a reasonable chance of getting the right answer.  This
 is how things like Shor's algorithm for factoring primes in polynomial
 time work (and yes; it *does* actually work.  People have actually used
 it to factor small primes).
That's nothing. I can factor large primes in my head. Try me! The bigger the prime the better!
 I find that very hard to believe... :)
I was thinking you were serious, but on second thought maybe we're just
pointing out the same thing here...

--bb
May 11 2007
parent Oskar Linde <oskar.lindeREM OVEgmail.com> writes:
Bill Baxter skrev:
 Oskar Linde wrote:
 Daniel Keep skrev:

 QC is all about stacking the odds so that given a particular number of
 samples, you have a reasonable chance of getting the right answer.  This
 is how things like Shor's algorithm for factoring primes in polynomial
 time work (and yes; it *does* actually work.  People have actually used
 it to factor small primes).
That's nothing. I can factor large primes in my head. Try me! The bigger the prime the better!
 I find that very hard to believe... :)
I was thinking you were serious, but on second thought maybe we're just pointing out the same thing here...
Yeah, we said the same thing... But one better than the other. :p

/Oskar
May 16 2007
prev sibling parent reply Georg Wrede <georg nospam.org> writes:
Daniel Keep wrote:
 Georg Wrede wrote:
 
and where those Objects simply and easily float across computers
and the net (entirely disregarding different OSs or CPU architectures,
of course!), gather information and come back giving you info and
flowers from Jane.
Why does the phrase "intelligent agents" suddenly spring to mind? :P
Yes, that's the word for it! Intelligent, my foot!
Oh, and incidentally, why does quantum computing come to my mind?
 I actually had to do research on this and give a seminar about it at
 university, so I can now ruin the mystique of it for you :)
Oh no, please, not... I'll pay you for not doing that!
 Basically quantum computing is ...
 QC is all about stacking the odds ...
 The interesting thing is the hardware ...
 ... but it's fundamentally just picking ...
 Tangentially, this is also how DNA computing ...
Deja vu, all over again. Lisp machines, AI, neural networks, OO, you name it.
Maybe I should start selling a black box called OD ...
LOL.
 Now that I think about this, I have to confess I've already done
 it for real. [...]
That's hilarious. I think people are just gullible by nature :P
Well, folks today think they know everything, so they become unfamiliar with
confronting stuff they don't understand. Then half the guys pretend to
understand (thus adding to the confusion) and the rest don't have the mental
tools to handle and digest the stuff. What can I say, either one dismisses
the thing without proper grounds (risking being called conservative, thick,
stupid or ignorant), or when one doesn't, he'll end up considered gullible
after the fact.

The old story about the emperor and his (lack of) clothes was originally a
vehicle for adult education, disguised as a children's story. But when people
tell it as a bedtime story, they never stop to ponder about it enough to
understand that it's all about something that's never going to go away! One
century it's clothes, the next it's OO or Saddam's WMDs or citizen freedom in
the Free World, the next it's QC. But the story itself stays the same. And it
will not change before man becomes extinct.

The remedy against it is utter sarcasm and pathological disillusionment, but
they are otherwise a bit expensive for the bearer. :-) The other way to avoid
it is to actually know better, but who can be a jack of all trades.

It's like the big bang theory of the universe. Today we KNOW it started at
zero size. Yeah, and we used to KNOW the earth was flat and the sun orbited
us. I'll laugh my head off when they "discover" that the universe has always
been and not just created at time T=0. (Not that I anymore bother to actually
figure out a stance on this, even for myself.)

Case in point, there are cosmologists and nuclear physicists who believe in
god. Who am I to say he doesn't exist, but those guys sure ought to know
better. ((This is not intended as offense against those in this newsgroup who
believe. My apologies in advance.))

Heh, and don't get me started on biology. Or psychology. In some ways we're
still not far ahead of Darwin, and psychology is about on the level that
chemistry was in the Dark Ages. Pathetic.

Uh-oh, turns out I'm the pathologically disillusioned one. :-)
May 11 2007
parent Georg Wrede <georg nospam.org> writes:
Georg Wrede wrote:

 Uh-oh, turns out I'm the pathologically disillusioned one. :-)
Or was it pathetically disillusioned? 8-P
May 11 2007
prev sibling parent reply david <ta-nospam-zz gmx.at> writes:
Georg Wrede wrote:
 [...]
<rant>
Picking up the topic of quantum computing, I broaden it to quantum mechanics
- et voilà!

It's just that from time to time I come across an article that states
something (about e.g. the future in general, philosophy, psychology, ...),
and when it comes to the point where it says that in the end we don't really
know, finishes with "uncertainty, just like in quantum mechanics!". And when
you don't know better, you believe that (whatever it is that you try to
connect with it) and are impressed! (At least, some of the friends I asked
about it were.)

Some people just tend to learn a few terms only to impress laymen - and when
you're an "insider", it's *so* obvious...
</rant>

Georg Wrede wrote:
 Today we KNOW it started at zero size. Yeah, and we used to KNOW
 the earth was flat and the sun orbited us. I'll laugh my head off
 when they "discover" that the universe has always been and not just
 created at time T=0.
Actually, I was lucky enough to listen to a talk by Prof. Joseph Weizenbaum
yesterday evening, and he mentioned about the same - the sun rotating around
the earth vs. the earth around the sun, and that we just pick the simplest
explanation that is still true and *dismiss* the other possible
solutions/realities. His initial question to this was: "Who believes the
earth rotates around the sun and *not* the other way around?" - it's all just
a matter of point of view, but the equations are _so_ much easier...

That said, quantum computers still make for a _very_ interesting field of
research. Ever wondered how you could cut down the search time for a special
dataset in a random database? Grover's algorithm changes the average N/2
tries down to O(sqrt(N)), and for the special case of 4 elements (which was
actually done as an experiment by one of my profs), the classic 2.25 tries on
average (worst case 3) become 1 (!).

It's definitely _very_ interesting (at least for me), but so totally in its
infancy that we're still _far far_ away from any applications other than toy
problems (like determining the prime factors of 15).

david
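For the record, both numbers check out (a quick back-of-the-envelope
calculation, not from the post itself):

% Classical search among N = 4 slots, stopping early (three failures
% already identify the fourth slot):
E[\text{tries}] = 1\cdot\tfrac{1}{4} + 2\cdot\tfrac{1}{4} + 3\cdot\tfrac{2}{4}
                = \tfrac{9}{4} = 2.25, \qquad \text{worst case } 3.

% Grover: with \theta = \arcsin(1/\sqrt{N}), the success amplitude after
% k iterations is \sin((2k+1)\theta); for N = 4, \theta = \pi/6, so
\sin\bigl((2\cdot 1 + 1)\,\tfrac{\pi}{6}\bigr) = \sin\tfrac{\pi}{2} = 1,
% i.e. a single quantum query succeeds with certainty.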
May 11 2007
parent reply Georg Wrede <georg nospam.org> writes:
david wrote:
 [...] Some people just tend to learn a few terms only to impress laymen -
 and when you're an "insider", it's *so* obvious...
Most of the time the article writers don't really understand the issue. They
ask experts, try to read advanced literature, but the /real understanding/
isn't there. That's fine for the "average reader", I guess. But the astute
reader, or those who know at least something about the topic, often get
frustrated.

Academic texts would be better, but then they're not for the general public,
so they usually are intractable. It's such a shame. And there seem to exist
so few people who can (or bother to) explain forefront stuff to laymen. (With
a very few but notable exceptions.)
 That said, quantum computers still make for a _very_ interesting
 field of research. Ever wondered how you could cut down the search
 time for a special dataset in a random database?
I wonder if I'm different than others. I can read things "on a belief basis",
but in my head they get tagged as "not fact", no matter how big an authority
says it. Especially at school I had to do this a lot. But other things I
read, I could tag as "hard fact", or "understood" right away. And those I
seem to remember much better.

Also it often happened (and actually still does) that things come together
and click, and now I understand something new. All those things I do remember
for the rest of my life.

Sadly, quantum mechanics, string theory, and some other things I simply have
to read on this "belief basis". I have to memorize that this guy said this
and that guy said that, and the majority seem to think this about that, etc.
That's a waste of time and mental energy, when articles really could and
should be written so that already when you are reading them, you can go "ah,
yes, of course", or "hmm, this would imply that, wonder if he's going to
comment on it", or "awww, this is total crap".

Take for instance the jet engine. Already at school I could draw a cross
section of it and explain every detail. But I felt that I didn't /understand/
why it works. No teacher or adult could explain the /essence/ of it, they all
just told me what I already knew. Then one day in adulthood it just went
click. And the jet engine got tagged as understood.

Or the airplane wing. Ask around and you get a few explanations on why it
keeps the plane up in the air. Go home and write down each explanation. Then
try this: imagine the same wing, except that the leading edge is made sharp.
Now, which of the explanations are good enough that you can predict what that
does to the lift? (Even without understanding any aerodynamics, one can be
pretty sure that the blunt leading edge is somehow better, or else all wings
would have a sharp leading edge, since it is "obvious" that the blunt edge
creates more drag. But don't mention the leading edge to the explainers.)

And if you understand how something works (as opposed to learning the manual
by heart), you can apply the thing in ways never imagined by the designer.
And you will succeed, and you won't break the thing misusing it.

For example, my understanding of quantum entanglement is at the "trying to
read the manual by heart" level. And I'm not happy about it.
May 11 2007
parent david <ta-nospam-zz gmx.at> writes:
Georg Wrede wrote:
 Most of the time the article writers don't really understand the issue. 
 They ask experts, try to read advanced literature, but the /real 
 understanding/ isn't there. That's fine for the "average reader", i 
 guess. But the astute reader, or those who know at least something about 
 the topic, often get frustrated.
I was more referring to the excessive use of certain words in a context that
has rather less to do with them ("Everyone uses it and it sounds cool - so
I'll use it as well!").
 Academic texts would be better, but then they're not for the general 
 public, so they usually are intractable. It's such a shame. And there 
 seem to exist so few people who can (or bother to) explain forefront 
 stuff to laymen. (With a very few but notable exceptions.)
That is certainly true, as we see (e.g. in physics) that people like Richard
Feynman are revered for their ability to explain complex relations in a way
even John Doe can understand - simply because there are so few of them.
 Also it often happened (and actually still does) that things come 
 together and click, now I understand something new. All those things I 
 do remember for the rest of my life.
... and if it happens in your field of expertise, you suddenly know that it's
all worth the effort you put in it every day :-)

(Actually, I'm thinking about some time ago, when I was about 1-1.5 years
into university, and after reading, listening and calculating with them, one
afternoon I just realized what Maxwell's equations really meant, what it was
all about. Suddenly I had these weird pictures in my head of reality
superposed by vector and scalar fields - it was really cool *g*)

david
May 11 2007
prev sibling next sibling parent Alix Pexton <_a_l_i_x_._p_e_x_t_o_n_ _g_m_a_i_l_._c_o_m_> writes:
Daniel Keep wrote:
 On a side note; is this all Aspect-oriented programming is?  Everything
 I read about it basically amounted to "AOP is the Second Coming!!!!!
 Also, Java is the best language EVAR!!!!!!" and never really said what
 it was or why it was useful.
 
 If all it is is glorified function wrapping, I'm going to be somewhat
 disappointed :P
I had a similar experience the first time I tried to learn about AOP, and on
that occasion I just gave up on it.

The second time I tried to understand what it was all about, I started with
the Wikipedia article and read a few of the linked references. There was a
lot of stuff about cross-cutting code: things that you do in many places that
aren't really related to what you are doing, but are a requirement of the
platform you are working with.

In the end, I came to the conclusion that AOP was a hack for people who are
creating software rapidly, with little or no design process prior to
implementation. I am of the opinion that if you design your software and
build upon levels of abstraction, like a good little software engineer, then
the problem that AOP is supposed to solve never crops up.

All that said, I'm not entirely sure that I understood fully everything that
I read about AOP; it didn't have my full attention, due mostly to the
frustration with trying to pin down good reference material. If someone can
give a good clear example of exactly what AOP is, what the benefits are and
why it is the best solution, then I'm still prepared to be convinced :)

A...
May 10 2007
prev sibling parent Pragma <ericanderton yahoo.removeme.com> writes:
Daniel Keep wrote:
 
 
 On a side note; is this all Aspect-oriented programming is?  Everything
 I read about it basically amounted to "AOP is the Second Coming!!!!!
 Also, Java is the best language EVAR!!!!!!" and never really said what
 it was or why it was useful.
While I've never done a single keystroke of AOP myself, this seems to be what it is (by design anyway). Sure, there's a wide variety of strategies for wrapping a function (compile-time, runtime, prefix, postfix, etc) but ultimately, that's it.
 If all it is is glorified function wrapping, I'm going to be somewhat
 disappointed :P
I think that it becomes more transformative in use than in theory, so the
glorification is somewhat warranted. As this thread has shown (along with the
previous CallConstraints thread), a well-crafted solution would allow the
coder to say very expressive things with very little code. IMO, that deserves
/some/ praise after all. ;)

-- 
- EricAnderton at yahoo
May 10 2007
prev sibling parent Nicolai Waniek <no.spam thank.you> writes:
Daniel Keep wrote:
 
 [...]
That's really nice, I think I can live with that :-)

Best regards
May 10 2007
prev sibling parent Howard Berkey <howard well.com> writes:
 If all it is is glorified function wrapping, I'm going to be somewhat
 disappointed :P
 [...]
It's actually pretty useful for boiling out common boilerplate code.

For example, in Python the Django framework uses it for things that have the
same common usage pattern across lots of different operations. An example is
user login; 99% of the time what you want to do is check if a user is logged
in, and if not, redirect them to a login page. Using this decorator syntax,
all you need to do is put @login_required above any of your functions that
require the user to be logged in, and it boils out the boilerplate for you.

It's syntactic sugar, of course. You can do it in every other language in
other ways. But it is nice syntactic sugar.
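In the D of this thread, the same idea drops straight out of Daniel's logFunc
pattern; here is a sketch, where isLoggedIn and redirectToLogin are made-up
stand-ins rather than anything Django actually provides:

import std.traits;

// Hypothetical session helpers -- placeholders, not a real API:
bool isLoggedIn() { return false; }
void redirectToLogin() { /* issue a redirect to the login page */ }

ReturnType!(typeof(fn)) loginRequired(alias fn)(ParameterTypeTuple!(typeof(fn)) args)
{
    alias ReturnType!(typeof(fn)) returnT;
    if( !isLoggedIn() )
    {
        redirectToLogin();
        static if( is( returnT == void ) )
            return;
        else
            return returnT.init;   // bail out with a default value
    }
    static if( is( returnT == void ) )
        fn(args);
    else
        return fn(args);
}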
May 11 2007