
digitalmars.D - [OT] What should be in a programming language?

reply Jason House <jason.james.house gmail.com> writes:
I've been thinking lately about what my ideal programming language would look
like and what I might write if I tried to create one. Below is a short list of
big things I'd do. What do you think of my ideas? What would you do?

I'd look to D and Scala for inspiration, but want to allow better reasoning
about code.

I'd use D's transitive const and use transitive pure instead of transitive
immutable.
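
For reference, a minimal sketch of those two existing D building blocks as they work in current D (the function names are made up):

// transitive const: constness follows the reference down to the elements
int firstElement(const(int)[] a)
{
    // a[0] = 1;   // error: cannot modify const data
    return a.length ? a[0] : 0;
}

// pure: the result depends only on its arguments; no global state is touched
pure int doubled(int x)
{
    return 2 * x;
}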

Functions can't affect global variables unless marked as such. I'll have to
learn what a monad is so that some side effects can be hidden?

By default, a function can not keep its variables past function scope. If a
parameter or local variable is to remain for longer it must be marked as such
in its type.

Similarly, all function arguments will default to (transitive) const. Any
mutable arguments must be marked as such. I'm unsure if ref arguments should
come in a final and non-final form. One nice side effect of this is that value
types vs. reference types don't require special consideration.  

I'd probably do away with out arguments in favor of return tuples.
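
A quick sketch of what that looks like with current D's std.typecons (divmod is just an illustrative name):

import std.typecons : tuple, Tuple;

// instead of: void divmod(int a, int b, out int q, out int r)
Tuple!(int, int) divmod(int a, int b)
{
    return tuple(a / b, a % b);
}

// auto qr = divmod(7, 2);   // qr[0] == 3, qr[1] == 1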

All types are non-nullable by default. T and T? seem reasonable enough. I'd
probably twist that a bit and use a beefed-up variable declaration similar to
Scala's: "pure? x: T = ..." would be a nullable immutable variable of type T.

Compile time capabilities would exceed D's. I'm thinking of a simple marker to
make anything compile time (i.e. #). #if would be like static if. #pure would
be a manifest constant. An array with some compile time values would be easy
[foo(7), #bar(8)]. This also means #switch and #foreach would exist (among
other things). Generics with compile time arguments become templates.
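
For contrast, here's roughly what D already gives you for each of those pieces today, as a sketch (square is a made-up function):

enum answer = 42;                   // manifest constant (what #pure would be)

static if (answer > 0)              // compile-time branch (what #if would be)
    pragma(msg, "answer is positive");

int square(int x) { return x * x; }
enum sq = square(7);                // CTFE: an ordinary function evaluated at compile time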

I would have a type type, usable for both compile time and run time reflection.
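
Today D splits that between compile-time reflection (is(), .stringof) and run-time reflection (typeid/TypeInfo); a small sketch of the two halves (describe is a made-up name):

import std.stdio : writeln;

void describe(T)(T value)
{
    static if (is(T : long))                    // compile-time: inspect the type itself
        writeln(T.stringof, " implicitly converts to long");
    writeln("run-time type: ", typeid(value));  // run-time: a TypeInfo object
}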

I've probably forgotten a number of basic things, but that should be enough for
now.
Oct 23 2009
next sibling parent reply "Denis Koroskin" <2korden gmail.com> writes:
On Fri, 23 Oct 2009 16:42:31 +0400, Jason House  
<jason.james.house gmail.com> wrote:

 I've been thinking lately about what my ideal programming language would  
 look like and what I might write if I tried to create one. Below is a  
 short list of big things I'd do. What do you think of my ideas? What  
 would you do?

 I'd look to D and Scala for inspiration, but want to allow better  
 reasoning about code.

 I'd use D's transitive const and use transitive pure instead of  
 transitive immutable.

 Functions can't affect global variables unless marked as such. I'll have
 to learn what a monad is so that some side effects can be hidden?

Agreed.
 By default, a function can not keep its variables past function scope.  
 If a parameter or local variable is to remain for longer it must be  
 marked as such in its type.

A good idea probably.
 Similarly, all function arguments will default to (transitive) const.  
 Any mutable arguments must be marked as such. I'm unsure if ref  
 arguments should come in a final and non-final form. One nice side  
 effect of this is that value types vs. reference types don't require  
 special consideration.

 I'd probably do away with out arguments in favor of return tuples.

Yes, it's reasonable.
 All types are not nullable by default. T and T? seem reasonable enough.  
 I'd probably twist that a bit and use a beefed up variable declaration  
 similar to scala. "pure? x: T = ..." would be a nullable immutable  
 variable of type T.

Yup!
 Compile time capabilities would exceed D's. I'm thinking of a simple  
 marker to make anything compile time (i.e. #). #if would be like static  
 if. #pure would be a manifest constant. An array with some compile time  
 values would be easy [foo(7), #bar(8)]. This also means #switch and  
 #foreach would exist (among other things). Generics with compile time  
 arguments become templates.

Nice idea that solves some of the D issues quite gracefully.
 I would have a type type, usable for both compile time and run time  
 reflection.

I'd separate that into built-in "type" and library "Type" types.
 I've probably forgotten a number of basic things, but that should be  
 enough for now.

I believe templates are better written in imperative style. That's why a built-in "type" type is needed.

Great list. I believe with the upcoming finalization of D2 some of us are already looking into the future and D3, so keep thinking. While you get your inspiration from D, D could also adopt some of your suggestions.
Oct 23 2009
next sibling parent Jason House <jason.james.house gmail.com> writes:
Denis Koroskin Wrote:

 On Fri, 23 Oct 2009 16:42:31 +0400, Jason House  
 <jason.james.house gmail.com> wrote:
 [snip]  
 I would have a type type, usable for both compile time and run time  
 reflection.

I'd separate that into built-in "type" and library "Type" types.

I don't really understand how you're envisioning use of types. Can you elaborate a bit more? I'd normally assume the library Type class would be the same as used by the compiler. That would seem to keep things clean/complete, but there's no reason someone couldn't make an enhanced wrapper which would also be usable at compile time.
 I've probably forgotten a number of basic things, but that should be  
 enough for now.

I believe templates are better written in imperative style. That's why a built-in "type" type is needed.

Absolutely! Do you have any inspirational examples?
 Great list. 

Thanks
 I believe with upcoming finalization of D2 some of us are  
 already looking into future and D3 so keep thinking. While you get your  
 inspiration from D, D could also adopt some of your suggestions.

Maybe. Somehow I think most of my list is too extreme. D has even moved away from some of the stuff in my list (for example, "in" used to be scope const).
Oct 23 2009
prev sibling parent BCS <none anon.com> writes:
Hello Denis,


 I believe templates are better written in imperative style. That's
 why  a built-in "type" type is needed.

The problem there is that real templates (ones that, at compile time, generate a different instance per type) are fundamentally declarative rather than imperative. What might work best is having templates be declarative but providing powerful enough compile-time imperative constructs to make functional programming (e.g. C++ style meta programming) more or less pointless. The thought is that rather than mixing compile-time and runtime imperative constructs, the template can /declare/ "there is a ___" but can access pure compile-time constructs to do whatever processing/computation is needed.
Oct 27 2009
prev sibling parent reply Yigal Chripun <yigal100 gmail.com> writes:
On 23/10/2009 14:42, Jason House wrote:
 I've been thinking lately about what my ideal programming language
 would look like and what I might write if I tried to create one.
 Below is a short list of big things I'd do. What do you think of my
 ideas? What would you do?

 I'd look to D and Scala for inspiration, but want to allow better
 reasoning about code.

 I'd use D's transitive const and use transitive pure instead of
 transitive immutable.

 Functions can't affect global variables unless marked as such. I'll
 have to learn what a monad is so that some side effects can be
 hidden?

 By default, a function can not keep its variables past function
 scope. If a parameter or local variable is to remain for longer it
 must be marked as such in its type.

 Similarly, all function arguments will default to (transitive) const.
 Any mutable arguments must be marked as such. I'm unsure if ref
 arguments should come in a final and non-final form. One nice side
 effect of this is that value types vs. reference types don't require
 special consideration.

 I'd probably do away with out arguments in favor of return tuples.

 All types are not nullable by default. T and T? seem reasonable
 enough. I'd probably twist that a bit and use a beefed up variable
 declaration similar to scala. "pure? x: T = ..." would be a nullable
 immutable variable of type T.

 Compile time capabilities would exceed D's. I'm thinking of a simple
 marker to make anything compile time (i.e. #). #if would be like
 static if. #pure would be a manifest constant. An array with some
 compile time values would be easy [foo(7), #bar(8)]. This also means
 #switch and #foreach would exist (among other things). Generics with
 compile time arguments become templates.

 I would have a type type, usable for both compile time and run time
 reflection.

 I've probably forgotten a number of basic things, but that should be
 enough for now.

my list would be similar:
non-nullable references by default,
concurrency semantics (I liked Bartosz' design),
transitive const and everything is const by default,
immutable as part of the concurrency ownership design,
functions would be defined as in ML: they take one tuple argument and return one tuple argument

i don't like the #if, etc idea and would prefer a proper AST macro system with proper hygiene

I'd remove static from the language completely and instead would have metaclasses.

other OOP changes - better separation between sub-typing and sub-classing:

type interfaceA extends interfaceB, interfaceC {}
implementation implA extends implB, implC implements interfaceA, interfaceD {}

auto obj = interfaceA.new(); // returns an implementation, e.g. implA

instead of the classic:

class A {}
class B: A {}

we'll have:

type A {}
type B extends A {}
implementation A implements A {}
implementation B extends implementation A implements B {}

everything is an object and no special treatment for specific types
Oct 23 2009
parent reply Jason House <jason.james.house gmail.com> writes:
Yigal Chripun Wrote:

 On 23/10/2009 14:42, Jason House wrote:
 I've been thinking lately about what my ideal programming language
 would look like and what I might write if I tried to create one.
 Below is a short list of big things I'd do. What do you think of my
 ideas? What would you do?

 I'd look to D and Scala for inspiration, but want to allow better
 reasoning about code.

 I'd use D's transitive const and use transitive pure instead of
 transitive immutable.

 Functions can't affect global variables unless marked as such. I'll
 have to learn what a monad is so that some side effects can be
 hidden?

 By default, a function can not keep its variables past function
 scope. If a parameter or local variable is to remain for longer it
 must be marked as such in its type.

 Similarly, all function arguments will default to (transitive) const.
 Any mutable arguments must be marked as such. I'm unsure if ref
 arguments should come in a final and non-final form. One nice side
 effect of this is that value types vs. reference types don't require
 special consideration.

 I'd probably do away with out arguments in favor of return tuples.

 All types are not nullable by default. T and T? seem reasonable
 enough. I'd probably twist that a bit and use a beefed up variable
 declaration similar to scala. "pure? x: T = ..." would be a nullable
 immutable variable of type T.

 Compile time capabilities would exceed D's. I'm thinking of a simple
 marker to make anything compile time (i.e. #). #if would be like
 static if. #pure would be a manifest constant. An array with some
 compile time values would be easy [foo(7), #bar(8)]. This also means
 #switch and #foreach would exist (among other things). Generics with
 compile time arguments become templates.

 I would have a type type, usable for both compile time and run time
 reflection.

 I've probably forgotten a number of basic things, but that should be
 enough for now.

my list would be similar: non-nullable references by default, concurrency semantics (I liked Bartosz' design),

That's an important one that I forgot to list: an ownership scheme similar to Bartosz's (I'd favor a slight reduction in annotation, but the same basic thing)
 transitive const and everything is const by default, 
 immutable as part of the concurrency ownership design,
 functions would be defined as in ML: they take one tuple argument and 
 return one tuple argument

I like Scala's concept of multiple input tuples, and the freedom to use () or {} when specifying a tuple. I might be slightly stricter with how each tuple is used such that the library writer says which form is allowed. This has the awesome side effect of making libraries look like part of the language.
 
 i don't like the #if, etc idea and would prefer a proper AST macro 
 system with proper hygiene

The AST macro stuff predates my participation on this list. Have any good links explaining how it works? I've been thinking of allowing expressions, statements (and friends) as object types and then allowing mixing in of those objects... #var declareX = Statement ("int x=3"); #mixin declareX; ... or something along those lines...
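
D's string mixins already do part of this today; a minimal sketch of the same idea:

void main()
{
    enum declareX = "int x = 3;";   // a statement held as a compile-time string
    mixin(declareX);                // pasted in right here
    assert(x == 3);
}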
 
 I'd remove static from the language completely and instead would have 
 metaclasses.

I agree with no statics. What are metaclasses? I like how Scala defines singletons for statics (object in a Scala type declaration).
 other OOP changes - better separation between sub-typing and sub-classing:
 
 type interfaceA extends interfaceB, interfaceC {}
 implementation implA extends implB, implC implements interfaceA, 
 interfaceD{}
 
 auto obj = interfaceA.new(); // returns an implementation, e.g. implA
 
 instead of the classic:
 class A {}
 class B: A {}
 
 we'll have:
 type A {}
 type B extends A {}
 implementation A implements A{}
 implementation B extends implementation A implements B {}

There seems to be redundancy here. Why would you/others want that? Super verbose code makes it tough to attract users.
 everything is an object and no special treatment for specific types

I don't think "everything is an object" works under the hood, but I do like making that transparent. A lot of the items allow transparent use of reference and value types (scope by default unless explicitly marked/allocated, and transitive const by default unless passed by ref)
Oct 23 2009
parent reply Yigal Chripun <yigal100 gmail.com> writes:
On 23/10/2009 19:50, Jason House wrote:
 Yigal Chripun Wrote:

 transitive const and everything is const by default, immutable as
 part of the concurrency ownership design, functions would be
 defined as in ML: they take one tuple argument and return one tuple
 argument

I like scala's concept of multiple input tuples, and the freedom to use () or {} when specifying a tuple. I might be slightly stricter with how each tuple is used such that the library writer says which form is allowed. This has the awesome side effects of making libraries look like part of the language.

In D you can pass a (type) tuple to a function and it is auto flattened to the arg list, i.e.

void foo(int, int);
Tuple!(int, int) a = ...;
foo(a);

in ML, the arg list *is* a tuple so there's no need for an auto-flatten. you can always nest tuples so it's trivial to have multiple tuples as arguments.

example (not tested):

fun foo (a, (b, c)) = a*b + a*c
let bar = foo(3, (4, 5))

foo's signature would be: ('a, ('a, 'a)) -> ('a)
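
(Side note: with the std.typecons value tuple the expansion is spelled out explicitly in current Phobos; a small sketch:)

import std.typecons : tuple, Tuple;

void foo(int a, int b) { }

void main()
{
    Tuple!(int, int) t = tuple(4, 5);
    foo(t.expand);   // spreads the value tuple into foo's argument list
}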
 i don't like the #if, etc idea and would prefer a proper AST macro
 system with proper hygiene

The AST macro stuff predates my participation on this list. Have any good links explaining how it works? I've been thinking of allowing expressions, statements (and friends) as object types and then allowing mixing in of those objects... #var declareX = Statement ("int x=3"); #mixin declareX; ... or something along those lines...

http://en.wikipedia.org/wiki/Hygienic_macro

here's a quick summary of how it should work/look like: macros are written in the same language, no #if, no static if. you write regular looking functions in plain code. those macros are compiled separately into loadable libs that you can specify for the compiler to load. the language has syntax to [de]compose AST.
 I'd remove static from the language completely and instead would
 have metaclasses.

I agree with no statics. What are metaclasses? I like how scala defines singletons for statics (object in scala type declaration)

there are different models for this and this also relates to the macro system above. here's one model (used in Smalltalk):

class Foo {
int a;
void foo();
}

the compiler will generate for Foo a singleton of the type 'Foo which contains all the information about Foo. for example, it'll contain the list of functions for Foo instances. this is the same as in D - in D we have TypeInfo structs that contain vtables.

class Bar {
int a;
static void foo();
}

in compiled languages (C++/D) this is done statically (foo is a global function in the assembly). in Smalltalk the previous mechanism is [re]used:

we have class Bar which defines its instances
we have class 'Bar that defines Bar
we have class ''Bar that defines 'Bar
we have class Class that defines ''Bar

a total of 5 levels, which are required to have class-shared functions/state.
 other OOP changes - better separation between sub-typing and
 sub-classing:

 type interfaceA extends interfaceB, interfaceC {} implementation
 implA extends implB, implC implements interfaceA, interfaceD{}

 auto obj = interfaceA.new(); // returns an implementation, e.g.
 implA

 instead of the classic: class A {} class B: A {}

 we'll have: type A {} type B extends A {} implementation A
 implements A{} implementation B extends implementation A implements
 B {}

There seems to be redundancy here. Why would you/others want that? Super verbose code makes it tough to attract users.

Deriving a class from a base class is a poor way to design code, and I want to separate two orthogonal issues: a. subtyping and polymorphism, b. reuse of code/implementation.
 everything is an object and no special treatment for specific
 types

I don't think "everything is an object" works under the hood, but I do like making that transparent. A lot of the items allow transparent use of reference and value types (scope by default unless explicitly marked/allocated, and transitive const by default unless passed by ref)

C-style built-in types expose an implementation detail that should be encapsulated and hidden under the hood, as you say. There should be no syntactic difference between an int and a user-defined type. For example, I should be able to do: struct foo : int {}
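
the closest current D idiom I know of is alias this forwarding; a rough sketch (Meters is a made-up name):

struct Meters
{
    int value;
    alias value this;   // a Meters mostly behaves like its underlying int
}

void main()
{
    auto m = Meters(3);
    int x = m + 4;      // 7: m converts to int through alias this
}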
Oct 23 2009
parent reply Jason House <jason.james.house gmail.com> writes:
Yigal Chripun Wrote:

 On 23/10/2009 19:50, Jason House wrote:
 Yigal Chripun Wrote:

 transitive const and everything is const by default, immutable as
 part of the concurrency ownership design, functions would be
 defined as in ML: they take one tuple argument and return one tuple
 argument

I like scala's concept of multiple input tuples, and the freedom to use () or {} when specifying a tuple. I might be slightly stricter with how each tuple is used such that the library writer says which form is allowed. This has the awesome side effects of making libraries look like part of the language.


My web search and some PDFs didn't turn up a handy example. You can do things in Scala like define your own foreach loop. If foreach had the form foreach(x){y} then x would be one set of arguments and y would be another set. It makes for pretty use of library functions. They look built in!
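
The nearest D analogue I can think of is opApply, which lets a library type drive foreach, though the call site still has to say foreach; a sketch (UpTo is a made-up name):

struct UpTo
{
    int limit;

    // opApply: foreach hands its loop body to the type as a delegate
    int opApply(int delegate(ref int) dg)
    {
        for (int i = 0; i < limit; i++)
            if (auto r = dg(i))
                return r;
        return 0;
    }
}

void main()
{
    foreach (i; UpTo(3)) { }   // 0, 1, 2 -- reads almost like a built-in loop
}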
 in D you can pass a 
 (type) tuple to a function and it is auto flattened to the arg list.
 i.e.
 void foo(int, int);
 Tuple!(int, int) a = ...;
 foo(a);
 in ML, the arg list *is* a tuple so there's no need for an auto-flatten. 
 you can always nest tuples so it's trivial to have multiple tuples as 
 arguments.
 
 example (not tested):
 
 fun foo (a, (b, c)) = a*b + a*c
 let bar = foo(3, (4, 5))
 
 foo's signature would be: ('a, ('a, 'a)) -> ('a)

Sounds reasonable
 i don't like the #if, etc idea and would prefer a proper AST macro
 system with proper hygiene

The AST macro stuff predates my participation on this list. Have any good links explaining how it works? I've been thinking of allowing expressions, statements (and friends) as object types and then allowing mixing in of those objects... #var declareX = Statement ("int x=3"); #mixin declareX; ... or something along those lines...

http://en.wikipedia.org/wiki/Hygienic_macro here's a quick summary of how it should work/look like: macros are written in the same language, no #if, no static if. you write regular looking functions in plain code. those macros are compiled separately into loadable libs that you can specify for the compiler to load. the language has syntax to [de]compose AST.

I looked over the links (quickly). I must admit I don't get it yet. It takes me a while to digest lisp fragments... Can you give a D-ish example of what it'd look like?
 I'd remove static from the language completely and instead would
 have metaclasses.

I agree with no statics. What are metaclasses? I like how scala defines singletons for statics (object in scala type declaration)

there are different models for this and this also relates to the macro system above. here's one model (used in smalltalk) class Foo { int a; void foo(); } the compiler will generate for Foo a singleton of the type 'Foo which contains all the information about Foo. for example, it'll contain the list of functions for Foo instances. this is the same as in D - in D we have Typeinfo structs that contain vtables. class Bar { int a; static void foo(); } in compiled languages (c++/d) this is done statically (foo is a global function in the assembly) in smalltalk the previous mechanism is [re]used: we have class Bar which defines it's instances we have class 'Bar that defines Bar we have class ''Bar that defines 'Bar we have class Class that defines ''Bar total of 5 levels which are required to have class shared functions/state

That seems strange. I'm also missing something important :(
 other OOP changes - better separation between sub-typing and
 sub-classing:

 type interfaceA extends interfaceB, interfaceC {} implementation
 implA extends implB, implC implements interfaceA, interfaceD{}

 auto obj = interfaceA.new(); // returns an implementation, e.g.
 implA

 instead of the classic: class A {} class B: A {}

 we'll have: type A {} type B extends A {} implementation A
 implements A{} implementation B extends implementation A implements
 B {}

There seems to be redundancy here. Why would you/others want that? Super verbose code makes it tough to attract users.

Deriving a class from a base class is a poor way to design code and I want to separate two orthogonal issues - a. subtyping and polymorphism, b. reuse of code/implementation.

That's a good goal. What should it look like in code?
 everything is an object and no special treatment for specific
 types

I don't think "everything is an object" works under the hood, but I do like making that transparent. A lot of the items allow transparent use of reference and value types (scope by default unless explicitly marked/allocated, and transitive const by default unless passed by ref)

c style built in types expose an implementation detail that should be encapsulated and hidden under the hood as you say. there should be no syntactic difference between an int and a user defined type. for example I should be able to do: struct foo : int {}

I agree
Oct 24 2009
parent reply Yigal Chripun <yigal100 gmail.com> writes:
On 25/10/2009 06:26, Jason House wrote:

 My web search and some PDF's didn't turn up a handy example. You can
 do things in scala like define your own foreach loop. If foreach had
 the form foreach(x){y} then x would be one set of arguments and
 y would be another set. It makes for pretty use of library functions.
 They look built in!

isn't that similar in concept to code blocks?
 i don't like the #if, etc idea and would prefer a proper AST
 macro system with proper hygiene

The AST macro stuff predates my participation on this list. Have any good links explaining how it works? I've been thinking of allowing expressions, statements (and friends) as object types and then allowing mixing in of those objects... #var declareX = Statement ("int x=3"); #mixin declareX; ... or something along those lines...

http://en.wikipedia.org/wiki/Hygienic_macro here's a quick summary of how it should work/look like: macros are written in the same language, no #if, no static if. you write regular looking functions in plain code. those macros are compiled separately into loadable libs that you can specify for the compiler to load. the language has syntax to [de]compose AST.

I looked over the links (quickly). I must admit I don't get it yet. It takes me a while to digest lisp fragments... Can you give a D-ish example of what it'd look like?

macro PrintStage() {
System.Console.WriteLine("This is executed during compilation");
<[ System.Console.WriteLine("This is executed at run time") ]>
}

the first WriteLine is executed during compilation, and the macro returns the AST for the second WriteLine which will be executed at run time when this macro is called.

think of it like:

// hypothetical D syntax
macro print() {
Stdout("Hello compile time world").newline;
return q{ Stdout("Hello run time world").newline };
}

one important design goal is to clearly separate the stages, so this will go to a separate .d file and will be compiled into a lib. to use this macro you simply specify:

compiler --load-macro=myMacro sources.d

in user code you just use "print();"
 I'd remove static from the language completely and instead
 would have metaclasses.

I agree with no statics. What are metaclasses? I like how scala defines singletons for statics (object in scala type declaration)

there are different models for this and this also relates to the macro system above. here's one model (used in smalltalk) class Foo { int a; void foo(); } the compiler will generate for Foo a singleton of the type 'Foo which contains all the information about Foo. for example, it'll contain the list of functions for Foo instances. this is the same as in D - in D we have Typeinfo structs that contain vtables. class Bar { int a; static void foo(); } in compiled languages (c++/d) this is done statically (foo is a global function in the assembly) in smalltalk the previous mechanism is [re]used: we have class Bar which defines it's instances we have class 'Bar that defines Bar we have class ''Bar that defines 'Bar we have class Class that defines ''Bar total of 5 levels which are required to have class shared functions/state

That seems strange. I'm also missing something important :(

OK, here's an example:

class Foo {
int a;
void bar();
}

auto obj = new Foo;
obj.a = 42; // obj contains a
obj.bar();  // calls 'Foo.vtbl.bar

remember that 'Foo is the classinfo singleton for Foo

class Foo {
static a;
static void bar();
}

Foo.a = 42; // 'Foo contains a
Foo.bar(); // calls ''Foo.vtbl.bar

''Foo is the classinfo singleton for 'Foo

we get the following chain ("-->" means instance of)
obj --> Foo --> MetaFoo --> MetaClass --> Class

compared with C++/D/Java/etc:
obj --> Foo --> Class
 other OOP changes - better separation between sub-typing and
 sub-classing:

 type interfaceA extends interfaceB, interfaceC {}
 implementation implA extends implB, implC implements
 interfaceA, interfaceD{}

 auto obj = interfaceA.new(); // returns an implementation,
 e.g. implA

 instead of the classic: class A {} class B: A {}

 we'll have: type A {} type B extends A {} implementation A
 implements A{} implementation B extends implementation A
 implements B {}

There seems to be redundancy here. Why would you/others want that? Super verbose code makes it tough to attract users.

deriving a class from a base class is a poor way to design code and I want to separate two orthogonal issues - a. subtyping and polymorphism, b. reuse of code/implementation

That's a good goal. What should it look like in code?

that's a good question. I don't know yet.
Oct 25 2009
parent reply Jason House <jason.james.house gmail.com> writes:
Yigal Chripun Wrote:

 On 25/10/2009 06:26, Jason House wrote:
 
 My web search and some PDF's didn't turn up a handy example. You can
 do things in scala like define your own foreach loop. If foreach had
 the form foreach(x){y} then x would be one set of arguments and
 y would be another set. It makes for pretty use of library functions.
 They look built in!

isn't that similar in concept to code blocks?

I'm not familiar enough with code blocks to say for sure. From what I saw in blogs, they are not. Either way, D can't make things look built in like scala can. IMHO, it's a great programming language feature.
 I looked over the links (quickly). I must admit I don't get it yet.
 It takes me a while to digest lisp fragments... Can you give a D-ish
 example of what it'd look like?

macro PrintStage() { System.Console.WriteLine("This is executed during compilation"); <[ System.Console.WriteLine("This is executed at run time") ]> } the first WriteLine is executed during compilation, and the macro returns the AST for the second WriteLine which will be executed at run time when this macro is called.

How is that different from a normal function definition that includes some compile-time calls? I agree that compile-time code should look and feel like normal code. It seems you use a macro to switch to compile-time by default and runtime when explicitly marked? Having both defaults (compile time or run time) makes sense.
 one important design goal is to clearly separate the stages, so this 
 will go to a separate .d file and will be compiled into a lib.
 to use this macro you simply specify
 compiler --load-macro=myMacro sources.d
 
 in user code you just use "print();"

I disagree with this. The code that uses the macros should declare what it uses.
 
 OK, here's an example:
 
 class Foo {
 int a;
 void bar();
 }
 
 auto obj = new Foo;
 obj.a = 42; // obj contains a
 obj.bar();  // calls 'Foo.vtbl.bar
 
 remember that 'Foo is the classinfo singleton for Foo
 
 class Foo {
 static a;
 static void bar();
 }
 
 Foo.a = 42; // 'Foo contains a
 Foo.bar(); // calls ''Foo.vtbl.bar
 
 ''Foo is the classinfo singleton for 'Foo
 
 we get the following chain ("-->" means instance of)
 obj --> Foo --> MetaFoo --> MetaClass --> Class
 
 compared with C++/D/Java/etc:
 obj --> Foo --> Class

Ok. That makes sense. It can be simplified when statics are removed.
Oct 26 2009
parent reply Yigal Chripun <yigal100 gmail.com> writes:
Jason House Wrote:

 How is that different from a normal function definition that includes some
compile-time calls? I agree that compile-time code should look and feel like
normal code. It seems you use macro to switch to compile-time by default and
runtime when explicitly marked? Having both defaults (compile time or run time)
makes sense.
 

The way it's implemented in Nemerle, a macro is actually a class. the above is not how it works. the code inside a macro is regular run-time code. it is compiled into a lib and loaded by the compiler as a plugin. the code is run at run-time, but run-time here means run-time of the compiler since it's a plugin of the compiler. in Nemerle (like in FP) the last value in a function is what the function returns. so that macro *returns* an AST representation of what's inside. you can use the quotation operator (<[ ... ]>) to de/compose AST.
 
 
 one important design goal is to clearly separate the stages, so this 
 will go to a separate .d file and will be compiled into a lib.
 to use this macro you simply specify
 compiler --load-macro=myMacro sources.d
 
 in user code you just use "print();"

I disagree with this. The code that uses the macros should declare what it uses.

I meant from a syntax POV - calling a macro is the same as calling a function. no template syntax. importing the namespace is still required IIRC.
 
 
 
 OK, here's an example:
 
 class Foo {
 int a;
 void bar();
 }
 
 auto obj = new Foo;
 obj.a = 42; // obj contains a
 obj.bar();  // calls 'Foo.vtbl.bar
 
 remember that 'Foo is the classinfo singleton for Foo
 
 class Foo {
 static a;
 static void bar();
 }
 
 Foo.a = 42; // 'Foo contains a
 Foo.bar(); // calls ''Foo.vtbl.bar
 
 ''Foo is the classinfo singleton for 'Foo
 
 we get the following chain ("-->" means instance of)
 obj --> Foo --> MetaFoo --> MetaClass --> Class
 
 compared with C++/D/Java/etc:
 obj --> Foo --> Class

Ok. That makes sense. It can be simplified when statics are removed.

I don't understand this. How does removal of statics simplify this? I think that having class-shared functions/data should still be possible, but implemented as above instead of in static memory as in C++/D.

class Foo {
static int value;
}

this still works as in D, but value is a member of the singleton object that represents Foo at runtime instead of being stored in static memory. those singletons need to be concurrency friendly, unlike the static memory design, which definitely is not.

btw, in dynamic languages like Smalltalk/Ruby those meta classes are mutable so you can for example add methods at run-time. I don't know if this should be allowed in a compiled language.
Oct 26 2009
parent reply Jason House <jason.james.house gmail.com> writes:
Yigal Chripun Wrote:

 Jason House Wrote:
 
 How is that different from a normal function definition that includes some
compile-time calls? I agree that compile-time code should look and feel like
normal code. It seems you use macro to switch to compile-time by default and
runtime when explicitly marked? Having both defaults (compile time or run time)
makes sense.
 

The way it's implemented in Nemerle, a macro is actually a class. the above is not how it works. the code inside a macro is regular run-time code. it is compiled into a lib and loaded by the compiler as a plugin. the code is run at run-time but run-time here means run-time of the compiler since it's a plugin of the compiler. in nemerle (like in FP) the last value in a function is what the function returns. so that macro *returns* an AST representation of what's inside. you can use this operator to de/compose AST.

Your examples in Nemerle or D-ish looked like they are returning strings. I'm still not seeing the magic of AST macros.
 


 OK, here's an example:
 
 class Foo {
 int a;
 void bar();
 }
 
 auto obj = new Foo;
 obj.a = 42; // obj contains a
 obj.bar();  // calls 'Foo.vtbl.bar
 
 remember that 'Foo is the classinfo singleton for Foo
 
 class Foo {
 static a;
 static void bar();
 }
 
 Foo.a = 42; // 'Foo contains a
 Foo.bar(); // calls ''Foo.vtbl.bar
 
 ''Foo is the classinfo singleton for 'Foo
 
 we get the following chain ("-->" means instance of)
 obj --> Foo --> MetaFoo --> MetaClass --> Class
 
 compared with C++/D/Java/etc:
 obj --> Foo --> Class

Ok. That makes sense. It can be simplified when statics are removed.

I don't understand this. How removal of statics simplifies this?

As I understood it 'Foo contains the static data and class info for Foo, and ''Foo contains class info for 'Foo. Without statics, ''Foo is unnecessary. I'm sure I've misinterpreted what you're saying ;)
 I think that having class shared functions/data should still be possible but
implemented as above instead of static memory as in c++/D. 
 class Foo {
 static int value;
 }
 
 this still works as in D but value is a member of the singleton object that
represents Foo at runtime instead of stored in static memory.

The singleton object should be in static memory... I don't really see the distinction since the finer storage details don't affect the programmer.
 
 those singletons need to be concurrency friendly unlike the static memory
design that is definitely is not. 
 
 btw, in dynamic languages like smalltalk/ruby those meta classes are mutable
so you can for example add methods at run-time. I don't know if this should be
allowed in a compiled language. 

Oct 26 2009
parent Yigal Chripun <yigal100 gmail.com> writes:
On 26/10/2009 20:30, Jason House wrote:
 Your examples in Nemerle or D-ish looked like they are returning
 strings. I'm still not seeing the magic of AST macros.

When we want to decompose some large code (or more precisely, its syntax tree), we must bind its smaller parts to variables. Then we can process them recursively or just use them in an arbitrary way to construct the result. We can operate on entire subexpressions by writing $( ... ) or $ID inside the quotation operator <[ ... ]>. This means binding the value of ID or the interior of the parenthesized expression to the part of the syntax tree described by the corresponding quotation.

macro for (init, cond, change, body) {
<[
$init;
def loop () : void {
if ($cond) { $body; $change; loop() } else ()
};
loop ()
]>
}

The above macro defines function for, which is similar to the loop known from C. It can be used like this:

for (mutable i = 0, i < 10, i++, printf ("%d", i))

/quote

the above is taken from the macros_tutorial page of nemerle.org. unfortunately the site is down so I'm using Google's cache instead. there are a few more related topics: Constructs with variable number of elements, hygiene, ...
Oct 26 2009