
digitalmars.D - [std.database] at compile time

reply foobar <foo bar.com> writes:
Has anyone looked at Nemerle's design for this? 
They have an SQL macro which allows writing SQL such as:

var employName = "FooBar"
SQL (DBconn, "select * from employees where name = $employName");

What that is supposed to do is bind the variable(s); it also validates the SQL
query against the database. This is all done at compile time. 

My understanding is that D's compile-time features are powerful enough to
implement this. 
Oct 14 2011
parent reply Jacob Carlborg <doob me.com> writes:
On 2011-10-14 12:19, foobar wrote:
 Has anyone looked at Nemerle's design for this?
 They have an SQL macro which allows to write SQL such as:

 var employName = "FooBar"
 SQL (DBconn, "select * from employees where name = $employName");

 what that supposed to do is bind the variable(s) and it also validates the sql
query with the database. This is all done at compile-time.

 My understanding is that D's compile-time features are powerful enough to
implement this.

You cannot connect to a database in D at compile time. You could do some form of validation and escape the query without connecting to the database.

-- 
/Jacob Carlborg
Oct 14 2011
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 10/14/11 6:08 AM, Jacob Carlborg wrote:
 On 2011-10-14 12:19, foobar wrote:
 Has anyone looked at Nemerle's design for this?
 They have an SQL macro which allows to write SQL such as:

 var employName = "FooBar"
 SQL (DBconn, "select * from employees where name = $employName");

 what that supposed to do is bind the variable(s) and it also validates
 the sql query with the database. This is all done at compile-time.

 My understanding is that D's compile-time features are powerful enough
 to implement this.

You cannot connect to a database in D at compile time. You could some form of validation and escape the query without connecting to the database.

A little SQL interpreter can be written that figures out e.g. the names of the columns involved.

Andrei
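A minimal sketch of such an interpreter, runnable through CTFE, might look like this (it assumes a flat, lowercase "select a, b from ..." query and is nothing like a real SQL parser):

```d
import std.array : split;
import std.string : indexOf, strip;

// Extract the selected column names from a flat lowercase query.
// Sketch only: no expressions, aliases, "*", or nested selects.
string[] selectedColumns(string sql)
{
    auto start = sql.indexOf("select ") + "select ".length;
    auto end   = sql.indexOf(" from ");
    string[] cols;
    foreach (part; sql[start .. end].split(","))
        cols ~= part.strip();
    return cols;
}

// A static assert forces the call to run through CTFE:
static assert(selectedColumns("select name, salary from employees")
              == ["name", "salary"]);
```

Because the call sits in a static assert, the extraction provably runs at compile time; a real version would also have to handle expressions, aliases and nested selects.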
Oct 14 2011
next sibling parent reply foobar <foo bar.com> writes:
Andrei Alexandrescu Wrote:

 On 10/14/11 6:08 AM, Jacob Carlborg wrote:
 On 2011-10-14 12:19, foobar wrote:
 Has anyone looked at Nemerle's design for this?
 They have an SQL macro which allows to write SQL such as:

 var employName = "FooBar"
 SQL (DBconn, "select * from employees where name = $employName");

 what that supposed to do is bind the variable(s) and it also validates
 the sql query with the database. This is all done at compile-time.

 My understanding is that D's compile-time features are powerful enough
 to implement this.

You cannot connect to a database in D at compile time. You could some form of validation and escape the query without connecting to the database.

A little SQL interpreter can be written that figures out e.g. the names of the columns involved. Andrei

The downsides of writing a separate SQL interpreter are:

a) No connection to the DB means no way to validate the schema, e.g. the db might not even have a 'name' column in the employees table.
b) No way to validate the SQL against the exact version the DB uses, e.g. LIMIT vs. TOP, and also DB-vendor-specific extensions to SQL syntax.
c) NIH - implementing your own SQL interpreter when the DB vendor already provides one.

Oh well, perhaps it would be possible with D3 once it supports proper macros. In any case, such a macro would probably be built atop the DB API currently being discussed.
Oct 14 2011
parent reply Dmitry Olshansky <dmitry.olsh gmail.com> writes:
On 14.10.2011 19:13, foobar wrote:
 Andrei Alexandrescu Wrote:

 On 10/14/11 6:08 AM, Jacob Carlborg wrote:
 On 2011-10-14 12:19, foobar wrote:
 Has anyone looked at Nemerle's design for this?
 They have an SQL macro which allows to write SQL such as:

 var employName = "FooBar"
 SQL (DBconn, "select * from employees where name = $employName");

 what that supposed to do is bind the variable(s) and it also validates
 the sql query with the database. This is all done at compile-time.

 My understanding is that D's compile-time features are powerful enough
 to implement this.

You cannot connect to a database in D at compile time. You could some form of validation and escape the query without connecting to the database.

A little SQL interpreter can be written that figures out e.g. the names of the columns involved. Andrei

The downsides with writing a separate SQL interpreter are: a) No connection to the DB means no way to validate the schema, e.g. the db might not even have a 'name' column in the employees table.

The upside is that you can at least rebuild your app when the database is down, or compile it on a separate machine.
 b) No way to validate the SQL per the exact version the DB uses. E.g. LIMIT
vs. TOP and also DB vendor specific extensions to SQL syntax.
 c) NIH - implementing your own SQL interpreter when the DB vendor already
provides it.

 oh, well, perhaps it would be possible with D3 once it supports proper macros.
In any case, such a macro probably would be built atop the DB API currently
being discussed.

-- Dmitry Olshansky
Oct 14 2011
parent foobar <foo bar.com> writes:
Dmitry Olshansky Wrote:

 On 14.10.2011 19:13, foobar wrote:
 Andrei Alexandrescu Wrote:

 On 10/14/11 6:08 AM, Jacob Carlborg wrote:
 On 2011-10-14 12:19, foobar wrote:
 Has anyone looked at Nemerle's design for this?
 They have an SQL macro which allows to write SQL such as:

 var employName = "FooBar"
 SQL (DBconn, "select * from employees where name = $employName");

 what that supposed to do is bind the variable(s) and it also validates
 the sql query with the database. This is all done at compile-time.

 My understanding is that D's compile-time features are powerful enough
 to implement this.

You cannot connect to a database in D at compile time. You could some form of validation and escape the query without connecting to the database.

A little SQL interpreter can be written that figures out e.g. the names of the columns involved. Andrei

The downsides with writing a separate SQL interpreter are: a) No connection to the DB means no way to validate the schema, e.g. the db might not even have a 'name' column in the employees table.

The upside is that you can at least rebuild your app when database is down or compile it on a separate machine.
 b) No way to validate the SQL per the exact version the DB uses. E.g. LIMIT
vs. TOP and also DB vendor specific extensions to SQL syntax.
 c) NIH - implementing your own SQL interpreter when the DB vendor already
provides it.

 oh, well, perhaps it would be possible with D3 once it supports proper macros.
In any case, such a macro probably would be built atop the DB API currently
being discussed.

-- Dmitry Olshansky

That's just silly: why would you use a validating SQL macro if you don't want to validate the SQL (or don't have the environment set up for it)?
Oct 14 2011
prev sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2011-10-14 15:26, Andrei Alexandrescu wrote:
 On 10/14/11 6:08 AM, Jacob Carlborg wrote:
 On 2011-10-14 12:19, foobar wrote:
 Has anyone looked at Nemerle's design for this?
 They have an SQL macro which allows to write SQL such as:

 var employName = "FooBar"
 SQL (DBconn, "select * from employees where name = $employName");

 what that supposed to do is bind the variable(s) and it also validates
 the sql query with the database. This is all done at compile-time.

 My understanding is that D's compile-time features are powerful enough
 to implement this.

You cannot connect to a database in D at compile time. You could some form of validation and escape the query without connecting to the database.

A little SQL interpreter can be written that figures out e.g. the names of the columns involved. Andrei

But you still won't be able to verify the columns against the actual database schema?

-- 
/Jacob Carlborg
Oct 14 2011
next sibling parent reply Justin Whear <justin economicmodeling.com> writes:
Graham Fawcett wrote:
 
 But you still won't be able to verify the columns to the actual database
 scheme?

One approach would be to write a separate tool that connects to the database and writes out a representation of the schema to a source file. At compile time, the representation is statically imported, and used to verify the data model. If we had preprocessor support, the tool could be run as such, checking the model just before passing the source to the compiler. Graham

This is actually possible now. I wrote a little CTFE parser for CREATE TABLE... statements, used mysql to dump a db, then used import() to include the dump and parse it at compile-time. I was actually using it to generate classes which mapped to the contents of each table (i.e. a "things" table would result in a "Thing" class which mapped the fields as properties). Obviously you need to keep the db dump up to date via an external tool.
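A stripped-down sketch of that pipeline (the schema is inlined here instead of string-imported with -J, and the table and column names are made up; the real parser handled far more of the CREATE TABLE grammar):

```d
import std.string : indexOf, splitLines, strip;

// Normally this would be: enum schemaDump = import("school_system.sql");
enum schemaDump = q{
CREATE TABLE students (
  student_name VARCHAR(64),
  term INT
);
};

// Collect the column names of one table from the dump, at compile time.
string[] columnsOf(string dump, string table)
{
    auto start = dump.indexOf("CREATE TABLE " ~ table);
    auto open  = dump.indexOf("(", start);
    auto close = dump.indexOf(");", open);
    string[] cols;
    foreach (line; dump[open + 1 .. close].splitLines())
    {
        auto s = line.strip();
        if (s.length)
            cols ~= s[0 .. s.indexOf(" ")]; // first token is the column name
    }
    return cols;
}

// Verified against the dump while compiling:
static assert(columnsOf(schemaDump, "students") == ["student_name", "term"]);
```

From a list like this, generating the mapped class is a string mixin away; the point is that a stale dump fails the build rather than the deployment.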
Oct 14 2011
parent reply bls <bizprac orange.fr> writes:
Am 14.10.2011 22:29, schrieb Justin Whear:
 This is actually possible now. I wrote a little CTFE parser for CREATE
 TABLE... statements, used mysql to dump a db, then used import() to include
 the dump and parse it at compile-time. I was actually using it to generate
 classes which mapped to the contents of each table (i.e. a "things" table
 would result in a "Thing" class which mapped the fields as properties).
 Obviously you need to keep the db dump up to date via an external tool.

Cool! Care to share the code?

I think we should have a two-way tool:

1) Dump the database structure and generate ORM classes.
2) ORM classes should validate against the database (classname == tablename, or f.i. class customer { string tablename = "contact"; }). In case the db-table does not exist, create it. In case the ORM class is not in sync with the db-table, a) alter the db-table or b) throw an exception.

Priority: datetimestamp. Tools: std.traits and the orange serialisation lib.

my 2 cents
Oct 15 2011
parent reply Justin Whear <justin economicmodeling.com> writes:
Sure, here it is: http://pastebin.com/KSyuk8HN

Usage would be something like:

import std.stdio;
import ccb.io.automap_table;

mixin( MapTables!(import("school_system.sql"), true) );

void main()
{
    Student a = new Student;
    a.student_name = "John Smith";
    a.term = 1;
    writeln(a.student_name, ", ", a.term);
    students.insert(new Student());
    students.commit();
}

The sql file can be generated like so:
$ mysqldump -d -h dbhost -u username -ppassword database_name


Note that it's super hacky, literally written on the spur of the moment. I 
have a much cleaner, more robust version, but it currently sends the 
compiler into an infinite loop and I haven't touched it in a couple months.

Justin


 Am 14.10.2011 22:29, schrieb Justin Whear:
 This is actually possible now. I wrote a little CTFE parser for CREATE
 TABLE... statements, used mysql to dump a db, then used import() to
 include the dump and parse it at compile-time. I was actually using it to
 generate classes which mapped to the contents of each table (i.e. a
 "things" table would result in a "Thing" class which mapped the fields as
 properties). Obviously you need to keep the db dump up to date via an
 external tool.

Cool! Hesitate to share the code?

Oct 17 2011
parent Adam D. Ruppe <destructionator gmail.com> writes:
Justin Whear:

That's very similar to what I wrote in my database.d

https://github.com/adamdruppe/misc-stuff-including-D-programming-language-web-stuff

Mine is super hacky, but hey, it works for me. I write up my tables
as .sql files anyway so it's no added hassle.

Then the mysql.queryDataObject function can create a dynamic
object for a query. It works reasonably well.
Oct 17 2011
prev sibling next sibling parent reply dennis luehring <dl.soluz gmx.net> writes:
 One approach would be to write a separate tool that connects to the
 database and writes out a representation of the schema to a source
 file. At compile time, the representation is statically imported, and
 used to verify the data model.

What about data/fields that don't come out of the model? For example:

select name_length_add_1, count(*) as counted
from (
    select (len(p.firstname) + 1) as name_length_add_1, p.*
    from persons as p
    inner join members as m on p.firstname = m.firstname
) as blub
group by name_length_add_1

How is the "counted" field or "name_length_add_1" statically verified? People tend to think that SQL is just simple table-querying stuff - but it's not, not in the real world.
Oct 14 2011
parent dennis luehring <dl.soluz gmx.net> writes:
 Good point. I wasn't thinking of a tool to validate arbitrary SQL
 statements, but one that could validate a D model's correspondence with
 the database schema. Presumably the queries would be generated by the
 library.

What I'd like to see here are both parts: statically checked (if possible) for faster development, but also runtime checked (maybe in the form of a check-mode). For example: you've got a varchar(12) field - how many times have you seen a check that the input doesn't exceed 12 chars? Or datetime, etc.: not all databases have the same range for special type values, but something like that could be too database/version specific.
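A sketch of that runtime check-mode, with hypothetical names - enforcing a varchar(12) limit before the value ever reaches the database:

```d
import std.format : format;

// Check-mode sketch: validate a value against a column's declared
// varchar limit. Column name and limit would come from the schema dump.
void checkVarchar(string column, string value, size_t limit)
{
    if (value.length > limit)
        throw new Exception(format("%s exceeds varchar(%s): %s chars",
                                   column, limit, value.length));
}
```

A generated ORM class could call this in its property setters, so the check lives with the mapping rather than being scattered across application code.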
Oct 14 2011
prev sibling parent reply Ary Manzana <ary esperanto.org.ar> writes:
On 10/14/11 5:16 PM, Graham Fawcett wrote:
 On Fri, 14 Oct 2011 21:10:29 +0200, Jacob Carlborg wrote:

 On 2011-10-14 15:26, Andrei Alexandrescu wrote:
 On 10/14/11 6:08 AM, Jacob Carlborg wrote:
 On 2011-10-14 12:19, foobar wrote:
 Has anyone looked at Nemerle's design for this? They have an SQL
 macro which allows to write SQL such as:

 var employName = "FooBar"
 SQL (DBconn, "select * from employees where name = $employName");

 what that supposed to do is bind the variable(s) and it also
 validates the sql query with the database. This is all done at
 compile-time.

 My understanding is that D's compile-time features are powerful
 enough to implement this.

You cannot connect to a database in D at compile time. You could some form of validation and escape the query without connecting to the database.

A little SQL interpreter can be written that figures out e.g. the names of the columns involved. Andrei

But you still won't be able to verify the columns to the actual database scheme?

One approach would be to write a separate tool that connects to the database and writes out a representation of the schema to a source file. At compile time, the representation is statically imported, and used to verify the data model. If we had preprocessor support, the tool could be run as such, checking the model just before passing the source to the compiler.

Yeah, but you need a separate tool. In Nemerle it seems you can do everything just in Nemerle...

It would be awesome if CTFE were implemented by JITting functions, not by reinventing the wheel and implementing a handcrafted interpreter...
Oct 15 2011
next sibling parent reply Don <nospam nospam.com> writes:
On 15.10.2011 22:00, Marco Leise wrote:
 Am 15.10.2011, 18:24 Uhr, schrieb Ary Manzana <ary esperanto.org.ar>:

 On 10/14/11 5:16 PM, Graham Fawcett wrote:
 On Fri, 14 Oct 2011 21:10:29 +0200, Jacob Carlborg wrote:

 On 2011-10-14 15:26, Andrei Alexandrescu wrote:
 On 10/14/11 6:08 AM, Jacob Carlborg wrote:
 On 2011-10-14 12:19, foobar wrote:
 Has anyone looked at Nemerle's design for this? They have an SQL
 macro which allows to write SQL such as:

 var employName = "FooBar"
 SQL (DBconn, "select * from employees where name = $employName");

 what that supposed to do is bind the variable(s) and it also
 validates the sql query with the database. This is all done at
 compile-time.

 My understanding is that D's compile-time features are powerful
 enough to implement this.

You cannot connect to a database in D at compile time. You could some form of validation and escape the query without connecting to the database.

A little SQL interpreter can be written that figures out e.g. the names of the columns involved. Andrei

But you still won't be able to verify the columns to the actual database scheme?

One approach would be to write a separate tool that connects to the database and writes out a representation of the schema to a source file. At compile time, the representation is statically imported, and used to verify the data model. If we had preprocessor support, the tool could be run as such, checking the model just before passing the source to the compiler.

Yeah, but you need a separate tool. In Nemerle it seems you can do everything just in Nemerle... It would be awesome if CTFE would be implemented by JITting functions, not by reinventing the wheel and implementing a handcrafted interpreter...

I wonder if that would work well with cross-compiling. If you blindly JIT functions, they may end up using structs of the wrong size, or integers with different endianness. Compile for 64-bit on a 32-bit machine. What size is size_t during CTFE?

Exactly. I don't see how that could work. Cross-compilation is a critical feature for a systems language. It's a totally different situation from Nemerle, which has the luxury of restricting itself to systems with a .NET JIT compiler.
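Don's point in miniature - nothing here is Nemerle-specific, it only shows why CTFE must evaluate with the target's type sizes rather than the host's:

```d
// During CTFE the compiler must use the TARGET's type sizes. A natively
// JITted function running on a 32-bit host would compute 4-byte words
// here even while compiling for a 64-bit target.
size_t targetWords(size_t bytes)
{
    // round up to whole machine words of the target
    return (bytes + size_t.sizeof - 1) / size_t.sizeof;
}

// Evaluated at compile time with the target's size_t:
enum words = targetWords(12); // 2 on 64-bit targets, 3 on 32-bit ones
```

An interpreter gets this right for free, because it carries the target's type sizes as data instead of inheriting the host's.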
Oct 15 2011
parent bls <bizprac orange.fr> writes:
Am 16.10.2011 00:09, schrieb Don:
 It's a totally different situation from Nemerle, which has the luxury of
 restricting itself to systems with a .net JIT compiler.

What about compiler hooks (enabling LanguageXYZ-to-D translation)? Sure, a strange idea, but IMHO worth a second look. Consider that the language _SQL_ will be translated into D at compile time:

void GetCustomersInParis(ref RowSet lhs_rs)
{
    // Compiler hook, calls SQL2DTranslator.
    SQL {
        <DVALUE> lhs_rs = SELECT * FROM CUSTOMER WHERE CITY = "Paris";
    }
}

// Generates:
// lhs_rs = db.executeSQL("SELECT * FROM CUSTOMER WHERE CITY = 'Paris'");

assert(SQL2DTranslator(`lhs_rs = SELECT * FROM CUSTOMER WHERE CITY = "Paris"`)
    == `lhs_rs = db.executeSQL("SELECT * FROM CUSTOMER WHERE CITY = 'Paris'");`);

well, it is late, maybe I need some sleep...
Oct 15 2011
prev sibling parent reply Ary Manzana <ary esperanto.org.ar> writes:
On 10/15/11 5:00 PM, Marco Leise wrote:
 Am 15.10.2011, 18:24 Uhr, schrieb Ary Manzana <ary esperanto.org.ar>:

 On 10/14/11 5:16 PM, Graham Fawcett wrote:
 On Fri, 14 Oct 2011 21:10:29 +0200, Jacob Carlborg wrote:

 On 2011-10-14 15:26, Andrei Alexandrescu wrote:
 On 10/14/11 6:08 AM, Jacob Carlborg wrote:
 On 2011-10-14 12:19, foobar wrote:
 Has anyone looked at Nemerle's design for this? They have an SQL
 macro which allows to write SQL such as:

 var employName = "FooBar"
 SQL (DBconn, "select * from employees where name = $employName");

 what that supposed to do is bind the variable(s) and it also
 validates the sql query with the database. This is all done at
 compile-time.

 My understanding is that D's compile-time features are powerful
 enough to implement this.

You cannot connect to a database in D at compile time. You could some form of validation and escape the query without connecting to the database.

A little SQL interpreter can be written that figures out e.g. the names of the columns involved. Andrei

But you still won't be able to verify the columns to the actual database scheme?

One approach would be to write a separate tool that connects to the database and writes out a representation of the schema to a source file. At compile time, the representation is statically imported, and used to verify the data model. If we had preprocessor support, the tool could be run as such, checking the model just before passing the source to the compiler.

Yeah, but you need a separate tool. In Nemerle it seems you can do everything just in Nemerle... It would be awesome if CTFE would be implemented by JITting functions, not by reinventing the wheel and implementing a handcrafted interpreter...

I wonder if that would work well with cross-compiling. If you blindly JIT functions, they may end up using structs of the wrong size, or integers with different endianness. Compile for 64-bit on a 32-bit machine. What size is size_t during CTFE?

I don't understand this quite well. I want JITted functions to just generate code that ultimately will be compiled. It's like what CTFE is doing now, except that instead of interpreting every bit and spec of the language you would compile the function, run it to generate code, and then compile the generated code for the target machine. An example:

enum host = "localhost";
enum port = 3306;
enum database = "foo";
enum password = "whatever";

string code_db(string host, int port, string database, string password)
{
    auto db = new Database(host, port, database, password);
    auto str = "";
    foreach (table; db)
    {
        str ~= "class " ~ table.name ~ " : Table {";
        foreach (column; table.columns)
        {
            // Well, you get the idea...
        }
        str ~= "}";
    }
    return str;
}

// Now code_db must be executed at compile time because it's assigned to
// an enum. Oh, but currently CTFE wouldn't be able to open a connection
// to the database. Well, it could, if you'd JIT it and then execute it.
// code_db just generates a string containing the table classes... so
// why, oh why, does the endianness of size_t matter?

enum db = code_db(host, port, database, password);
mixin(db); // I want to paste the string into the code, not sure this is the syntax

Maybe I'm not taking something into account... what is it?

Thanks,
Ary
Oct 15 2011
parent reply Don <nospam nospam.com> writes:
On 16.10.2011 04:16, Ary Manzana wrote:
 On 10/15/11 5:00 PM, Marco Leise wrote:
 Am 15.10.2011, 18:24 Uhr, schrieb Ary Manzana <ary esperanto.org.ar>:

 On 10/14/11 5:16 PM, Graham Fawcett wrote:
 On Fri, 14 Oct 2011 21:10:29 +0200, Jacob Carlborg wrote:

 On 2011-10-14 15:26, Andrei Alexandrescu wrote:
 On 10/14/11 6:08 AM, Jacob Carlborg wrote:
 On 2011-10-14 12:19, foobar wrote:
 Has anyone looked at Nemerle's design for this? They have an SQL
 macro which allows to write SQL such as:

 var employName = "FooBar"
 SQL (DBconn, "select * from employees where name = $employName");

 what that supposed to do is bind the variable(s) and it also
 validates the sql query with the database. This is all done at
 compile-time.

 My understanding is that D's compile-time features are powerful
 enough to implement this.

You cannot connect to a database in D at compile time. You could some form of validation and escape the query without connecting to the database.

A little SQL interpreter can be written that figures out e.g. the names of the columns involved. Andrei

But you still won't be able to verify the columns to the actual database scheme?

One approach would be to write a separate tool that connects to the database and writes out a representation of the schema to a source file. At compile time, the representation is statically imported, and used to verify the data model. If we had preprocessor support, the tool could be run as such, checking the model just before passing the source to the compiler.

Yeah, but you need a separate tool. In Nemerle it seems you can do everything just in Nemerle... It would be awesome if CTFE would be implemented by JITting functions, not by reinventing the wheel and implementing a handcrafted interpreter...

I wonder if that would work well with cross-compiling. If you blindly JIT functions, they may end up using structs of the wrong size, or integers with different endianness. Compile for 64-bit on a 32-bit machine. What size is size_t during CTFE?

I don't understand this quite well. I want JITted functions to just generate code that ultimately will be compiled. It's like what CTFE is doing now, except that instead of doing it by interpreting every bit and spec of the language you would compile the function, run it to generate code, and then compile the code for the target machine.

 Maybe I'm not taking something into account... what is it?

You're assuming that the compiler can run the code it's generating. This isn't true in general. Suppose you're on x86, compiling for ARM. You can't run the ARM code from the compiler.
Oct 15 2011
next sibling parent reply foobar <foo bar.com> writes:
Don Wrote:

 On 16.10.2011 04:16, Ary Manzana wrote:
 On 10/15/11 5:00 PM, Marco Leise wrote:
 Am 15.10.2011, 18:24 Uhr, schrieb Ary Manzana <ary esperanto.org.ar>:

 On 10/14/11 5:16 PM, Graham Fawcett wrote:
 On Fri, 14 Oct 2011 21:10:29 +0200, Jacob Carlborg wrote:

 On 2011-10-14 15:26, Andrei Alexandrescu wrote:
 On 10/14/11 6:08 AM, Jacob Carlborg wrote:
 On 2011-10-14 12:19, foobar wrote:
 Has anyone looked at Nemerle's design for this? They have an SQL
 macro which allows to write SQL such as:

 var employName = "FooBar"
 SQL (DBconn, "select * from employees where name = $employName");

 what that supposed to do is bind the variable(s) and it also
 validates the sql query with the database. This is all done at
 compile-time.

 My understanding is that D's compile-time features are powerful
 enough to implement this.

You cannot connect to a database in D at compile time. You could some form of validation and escape the query without connecting to the database.

A little SQL interpreter can be written that figures out e.g. the names of the columns involved. Andrei

But you still won't be able to verify the columns to the actual database scheme?

One approach would be to write a separate tool that connects to the database and writes out a representation of the schema to a source file. At compile time, the representation is statically imported, and used to verify the data model. If we had preprocessor support, the tool could be run as such, checking the model just before passing the source to the compiler.

Yeah, but you need a separate tool. In Nemerle it seems you can do everything just in Nemerle... It would be awesome if CTFE would be implemented by JITting functions, not by reinventing the wheel and implementing a handcrafted interpreter...

I wonder if that would work well with cross-compiling. If you blindly JIT functions, they may end up using structs of the wrong size, or integers with different endianness. Compile for 64-bit on a 32-bit machine. What size is size_t during CTFE?

I don't understand this quite well. I want JITted functions to just generate code that ultimately will be compiled. It's like what CTFE is doing now, except that instead of doing it by interpreting every bit and spec of the language you would compile the function, run it to generate code, and then compile the code for the target machine.

 Maybe I'm not taking something into account... what is it?

You're assuming that the compiler can run the code it's generating. This isn't true in general. Suppose you're on x86, compiling for ARM. You can't run the ARM code from the compiler.

This is quite possible in Nemerle's model of compilation. It's the same concept as XSLT - a macro is a high-level transform from D code to D code:

1. Compile the macro ahead of time into a loadable compiler module/plugin. The plugin is compiled for the HOST machine (x86), either by a separate compiler or by a cross-compiler that can also compile to its HOST target.
2. The cross-compiler loads the plugin and uses it during compilation of your code for the TARGET machine (ARM). E.g. the SQL plugin will connect to the db, verify your query against the db schema, and *generate* D code to query the db. The generated D code is compiled for the TARGET machine.

The JIT to .NET is merely an implementation detail for Nemerle which makes it much easier to implement. I don't see how endianness or word size affects this model in any way.
Oct 16 2011
parent reply Ary Manzana <ary esperanto.org.ar> writes:
On 10/16/11 4:56 AM, foobar wrote:
 Don Wrote:

 On 16.10.2011 04:16, Ary Manzana wrote:
 On 10/15/11 5:00 PM, Marco Leise wrote:
 Am 15.10.2011, 18:24 Uhr, schrieb Ary Manzana<ary esperanto.org.ar>:

 On 10/14/11 5:16 PM, Graham Fawcett wrote:
 On Fri, 14 Oct 2011 21:10:29 +0200, Jacob Carlborg wrote:

 On 2011-10-14 15:26, Andrei Alexandrescu wrote:
 On 10/14/11 6:08 AM, Jacob Carlborg wrote:
 On 2011-10-14 12:19, foobar wrote:
 Has anyone looked at Nemerle's design for this? They have an SQL
 macro which allows to write SQL such as:

 var employName = "FooBar"
 SQL (DBconn, "select * from employees where name = $employName");

 what that supposed to do is bind the variable(s) and it also
 validates the sql query with the database. This is all done at
 compile-time.

 My understanding is that D's compile-time features are powerful
 enough to implement this.

You cannot connect to a database in D at compile time. You could some form of validation and escape the query without connecting to the database.

A little SQL interpreter can be written that figures out e.g. the names of the columns involved. Andrei

But you still won't be able to verify the columns to the actual database scheme?

One approach would be to write a separate tool that connects to the database and writes out a representation of the schema to a source file. At compile time, the representation is statically imported, and used to verify the data model. If we had preprocessor support, the tool could be run as such, checking the model just before passing the source to the compiler.

Yeah, but you need a separate tool. In Nemerle it seems you can do everything just in Nemerle... It would be awesome if CTFE would be implemented by JITting functions, not by reinventing the wheel and implementing a handcrafted interpreter...

I wonder if that would work well with cross-compiling. If you blindly JIT functions, they may end up using structs of the wrong size, or integers with different endianness. Compile for 64-bit on a 32-bit machine. What size is size_t during CTFE?

I don't understand this quite well. I want JITted functions to just generate code that ultimately will be compiled. It's like what CTFE is doing now, except that instead of doing it by interpreting every bit and spec of the language you would compile the function, run it to generate code, and then compile the code for the target machine.

 Maybe I'm not taking something into account... what is it?

You're assuming that the compiler can run the code it's generating. This isn't true in general. Suppose you're on x86, compiling for ARM. You can't run the ARM code from the compiler.

This is quite possible in Nemerle's model of compilation. This is the same concept as XLST - a macro is a high level transform from D code to D code. 1. compile the macro ahead of time into a loadable compiler module/plugin. the plugin is compiled for the HOST machine (x86) either by a separate compiler or by a cross-compiler that can also compile to its HOST target. 2. cross-compiler loads the plugin and uses it during compilation of your code to the TARGET machine (arm). E.g. the SQL plugin will connect to db and verify your query against the db schema and will *generate* D code to query the db. the generated D code is compiled to the TARGET machine. The JIT to .net is merely an implementation detail for Nemerle which makes it much easier to implement. I don't see how Endianess or word size affects this model in any way.

Exactly!! :-)
Oct 16 2011
parent reply Don <nospam nospam.com> writes:
On 16.10.2011 17:39, Ary Manzana wrote:
 On 10/16/11 4:56 AM, foobar wrote:
 Don Wrote:

 On 16.10.2011 04:16, Ary Manzana wrote:
 On 10/15/11 5:00 PM, Marco Leise wrote:
 Am 15.10.2011, 18:24 Uhr, schrieb Ary Manzana<ary esperanto.org.ar>:

 On 10/14/11 5:16 PM, Graham Fawcett wrote:
 On Fri, 14 Oct 2011 21:10:29 +0200, Jacob Carlborg wrote:

 On 2011-10-14 15:26, Andrei Alexandrescu wrote:
 On 10/14/11 6:08 AM, Jacob Carlborg wrote:
 On 2011-10-14 12:19, foobar wrote:
 Has anyone looked at Nemerle's design for this? They have an SQL
 macro which allows to write SQL such as:

 var employName = "FooBar"
 SQL (DBconn, "select * from employees where name =
 $employName");

 what that supposed to do is bind the variable(s) and it also
 validates the sql query with the database. This is all done at
 compile-time.

 My understanding is that D's compile-time features are powerful
 enough to implement this.

You cannot connect to a database in D at compile time. You could do some form of validation and escape the query without connecting to the database.

A little SQL interpreter can be written that figures out e.g. the names of the columns involved. Andrei
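A trivial version of such an interpreter is expressible with today's CTFE. The sketch below (an editor's illustration, not code from the thread; `columnNames` is a hypothetical helper) handles only a flat `select a, b from t` query:

```d
import std.algorithm.iteration : map, splitter;
import std.array : array;
import std.string : indexOf, strip, toLower;

// Naive compile-time "SQL interpreter": extracts the column names of a
// flat "select a, b from t" query. Everything here is CTFE-able because
// it only slices and copies strings.
string[] columnNames(string query)
{
    auto q = query.toLower;
    auto start = q.indexOf("select") + cast(ptrdiff_t) "select".length;
    auto end = q.indexOf("from");
    assert(start > 0 && end > start, "not a simple select query");
    return query[start .. end]
        .splitter(',')
        .map!(c => c.strip)
        .array;
}

// Evaluated entirely at compile time:
enum cols = columnNames("select name, salary from employees");
static assert(cols == ["name", "salary"]);
```

This gets you column names, but (as noted below) not validation against the actual database schema.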

But you still won't be able to verify the columns against the actual database schema?

One approach would be to write a separate tool that connects to the database and writes out a representation of the schema to a source file. At compile time, the representation is statically imported, and used to verify the data model. If we had preprocessor support, the tool could be run as such, checking the model just before passing the source to the compiler.

Yeah, but you need a separate tool. In Nemerle it seems you can do everything just in Nemerle... It would be awesome if CTFE would be implemented by JITting functions, not by reinventing the wheel and implementing a handcrafted interpreter...

I wonder if that would work well with cross-compiling. If you blindly JIT functions, they may end up using structs of the wrong size, or integers with different endianness. Compile for 64-bit on a 32-bit machine. What size is size_t during CTFE?

I don't understand this quite well. I want JITted functions to just generate code that ultimately will be compiled. It's like what CTFE is doing now, except that instead of doing it by interpreting every bit and spec of the language you would compile the function, run it to generate code, and then compile the code for the target machine.

 Maybe I'm not taking something into account... what is it?

You're assuming that the compiler can run the code it's generating. This isn't true in general. Suppose you're on x86, compiling for ARM. You can't run the ARM code from the compiler.

This is quite possible in Nemerle's model of compilation. This is the same concept as XSLT - a macro is a high-level transform from D code to D code. 1. Compile the macro ahead of time into a loadable compiler module/plugin. The plugin is compiled for the HOST machine (x86) either by a separate compiler or by a cross-compiler that can also compile to its HOST target.


YES!!! This is the whole point. That model requires TWO backends. One for the host, one for the target. That is, it requires an entire backend PURELY FOR CTFE. Yes, of course it is POSSIBLE, but it is an incredible burden to place on a compiler vendor.
Oct 16 2011
parent reply foobar <foo bar.com> writes:
Don Wrote:

 On 16.10.2011 17:39, Ary Manzana wrote:
 On 10/16/11 4:56 AM, foobar wrote:
 Don Wrote:

 On 16.10.2011 04:16, Ary Manzana wrote:
 On 10/15/11 5:00 PM, Marco Leise wrote:
 Am 15.10.2011, 18:24 Uhr, schrieb Ary Manzana<ary esperanto.org.ar>:

 On 10/14/11 5:16 PM, Graham Fawcett wrote:
 On Fri, 14 Oct 2011 21:10:29 +0200, Jacob Carlborg wrote:

 On 2011-10-14 15:26, Andrei Alexandrescu wrote:
 On 10/14/11 6:08 AM, Jacob Carlborg wrote:
 On 2011-10-14 12:19, foobar wrote:
 Has anyone looked at Nemerle's design for this? They have an SQL
 macro which allows to write SQL such as:

 var employName = "FooBar"
 SQL (DBconn, "select * from employees where name =
 $employName");

 what that's supposed to do is bind the variable(s) and it also
 validates the sql query with the database. This is all done at
 compile-time.

 My understanding is that D's compile-time features are powerful
 enough to implement this.

You cannot connect to a database in D at compile time. You could do some form of validation and escape the query without connecting to the database.

A little SQL interpreter can be written that figures out e.g. the names of the columns involved. Andrei

But you still won't be able to verify the columns against the actual database schema?

One approach would be to write a separate tool that connects to the database and writes out a representation of the schema to a source file. At compile time, the representation is statically imported, and used to verify the data model. If we had preprocessor support, the tool could be run as such, checking the model just before passing the source to the compiler.

Yeah, but you need a separate tool. In Nemerle it seems you can do everything just in Nemerle... It would be awesome if CTFE would be implemented by JITting functions, not by reinventing the wheel and implementing a handcrafted interpreter...

I wonder if that would work well with cross-compiling. If you blindly JIT functions, they may end up using structs of the wrong size, or integers with different endianness. Compile for 64-bit on a 32-bit machine. What size is size_t during CTFE?

I don't understand this quite well. I want JITted functions to just generate code that ultimately will be compiled. It's like what CTFE is doing now, except that instead of doing it by interpreting every bit and spec of the language you would compile the function, run it to generate code, and then compile the code for the target machine.

 Maybe I'm not taking something into account... what is it?

You're assuming that the compiler can run the code it's generating. This isn't true in general. Suppose you're on x86, compiling for ARM. You can't run the ARM code from the compiler.

This is quite possible in Nemerle's model of compilation. This is the same concept as XSLT - a macro is a high-level transform from D code to D code. 1. Compile the macro ahead of time into a loadable compiler module/plugin. The plugin is compiled for the HOST machine (x86) either by a separate compiler or by a cross-compiler that can also compile to its HOST target.


YES!!! This is the whole point. That model requires TWO backends. One for the host, one for the target. That is, it requires an entire backend PURELY FOR CTFE. Yes, of course it is POSSIBLE, but it is an incredible burden to place on a compiler vendor.

How does that differ from the current situation? We already have a separate implementation of a D interpreter for CTFE. I disagree with the second point as well - nothing forces the SAME compiler to contain two separate implementations as is now the case. In fact you could compile the macros with a compiler of a different vendor. After all, that's the purpose of an ABI, isn't it? In fact it makes the burden on the vendor much smaller, since you remove the need for a separate interpreter.
Oct 16 2011
parent reply foobar <foo bar.com> writes:
foobar Wrote:

 Don Wrote:
 You're assuming that the compiler can run the code it's generating. This
 isn't true in general. Suppose you're on x86, compiling for ARM. You
 can't run the ARM code from the compiler.

This is quite possible in Nemerle's model of compilation. This is the same concept as XSLT - a macro is a high-level transform from D code to D code. 1. Compile the macro ahead of time into a loadable compiler module/plugin. The plugin is compiled for the HOST machine (x86) either by a separate compiler or by a cross-compiler that can also compile to its HOST target.


YES!!! This is the whole point. That model requires TWO backends. One for the host, one for the target. That is, it requires an entire backend PURELY FOR CTFE. Yes, of course it is POSSIBLE, but it is an incredible burden to place on a compiler vendor.

How does that differ from the current situation? We already have a separate implementation of a D interpreter for CTFE. I disagree with the second point as well - nothing forces the SAME compiler to contain two separate implementations as is now the case. In fact you could compile the macros with a compiler of a different vendor. After all, that's the purpose of an ABI, isn't it? In fact it makes the burden on the vendor much smaller, since you remove the need for a separate interpreter.

I forgot to mention an additional aspect of this design - it greatly simplifies the language, which also reduces the burden on the compiler vendor. You no longer need to support static constructs like "static if", CTFE, is(), pragma, etc. You also gain more capabilities with less effort, e.g. you could connect to a DB at compile time to validate a SQL query or use regular IO functions from phobos.
Oct 16 2011
parent reply Don <nospam nospam.com> writes:
On 17.10.2011 01:43, foobar wrote:
 foobar Wrote:

 Don Wrote:
 You're assuming that the compiler can run the code it's generating. This
 isn't true in general. Suppose you're on x86, compiling for ARM. You
 can't run the ARM code from the compiler.

This is quite possible in Nemerle's model of compilation. This is the same concept as XSLT - a macro is a high-level transform from D code to D code. 1. Compile the macro ahead of time into a loadable compiler module/plugin. The plugin is compiled for the HOST machine (x86) either by a separate compiler or by a cross-compiler that can also compile to its HOST target.


YES!!! This is the whole point. That model requires TWO backends. One for the host, one for the target. That is, it requires an entire backend PURELY FOR CTFE. Yes, of course it is POSSIBLE, but it is an incredible burden to place on a compiler vendor.

How does that differ from the current situation? We already have a separate implementation of a D interpreter for CTFE.


That's true, but it's quite different from a separate backend. The CTFE interpreter doesn't have much in common with a backend. It's more like a glue layer. Most of what's in there at the moment, is doing marshalling and error checking. Suppose instead, you made calls into a real backend. You'd still need a marshalling layer, first to get the syntax trees into native types for the backend, and secondly to convert the native types back into syntax trees. The thing is, you can be changing only part of a syntax tree (just one element of an array, for example) so if you used a native backend, the majority of the code in the CTFE interpreter would remain. Yes, there is an actual CTFE backend in there, but it's tiny.
 I disagree with the second point as well - Nothing forces the SAME compiler to
contain two separate implementations as is now the case.
 In fact you could compile the macros with a compiler of a different vendor.
After all that's the purpose of an ABI, isn't it?
 In fact it makes the burden on the vendor much smaller since you remove the
need for a separate interpreter.


 I forgot to mention an additional aspect of this design - it greatly
simplifies the language which also reduces the burden on the compiler vendor.
 You no longer need to support static constructs like "static if", CTFE,  is(),
pragma, etc. You also gain more capabilities with less effort,
 e.g you could connect to a DB at compile time to validate a SQL query or use
regular IO functions from phobos.

Statements of the form "XXX would make the compiler simpler" seem to come up quite often, and I can't remember a single one which was made with much knowledge of where the complexities are! For example, the most complex thing you listed is is(), because is(typeof()) accepts all kinds of things which normally wouldn't compile. This has implications for the whole compiler, and no changes in how CTFE is done would make it any simpler.
 e.g you could connect to a DB at compile time to validate a SQL query 

You know, I'm yet to be convinced that it's really desirable. The ability to put all your source in a repository and say, "the executable depends on this and nothing else", is IMHO of enormous value. Once you allow it to depend on the result of external function calls, it depends on all kinds of hidden variables, which are ephemeral, and it seems to me much better to completely separate that "information gathering" step from the compilation step. Note that since it's a snapshot, you *don't* have a guarantee that your SQL query is still valid by the time the code runs.

BTW there's nothing in the present design which prevents CTFE from being implemented by doing a JIT to native code. I expect most implementations will go that way, but they'll be motivated by speed.

Also I'm confused by this term "macros" that keeps popping up. I don't know why it's being used, and I don't know what it means.
Oct 18 2011
parent reply foobar <foo bar.com> writes:
Don Wrote:

 On 17.10.2011 01:43, foobar wrote:
 foobar Wrote:

 Don Wrote:
 You're assuming that the compiler can run the code it's generating. This
 isn't true in general. Suppose you're on x86, compiling for ARM. You
 can't run the ARM code from the compiler.

This is quite possible in Nemerle's model of compilation. This is the same concept as XSLT - a macro is a high-level transform from D code to D code. 1. Compile the macro ahead of time into a loadable compiler module/plugin. The plugin is compiled for the HOST machine (x86) either by a separate compiler or by a cross-compiler that can also compile to its HOST target.


YES!!! This is the whole point. That model requires TWO backends. One for the host, one for the target. That is, it requires an entire backend PURELY FOR CTFE. Yes, of course it is POSSIBLE, but it is an incredible burden to place on a compiler vendor.

How does that differ from the current situation? We already have a separate implementation of a D interpreter for CTFE.


That's true, but it's quite different from a separate backend. The CTFE interpreter doesn't have much in common with a backend. It's more like a glue layer. Most of what's in there at the moment, is doing marshalling and error checking. Suppose instead, you made calls into a real backend. You'd still need a marshalling layer, first to get the syntax trees into native types for the backend, and secondly to convert the native types back into syntax trees. The thing is, you can be changing only part of a syntax tree (just one element of an array, for example) so if you used a native backend, the majority of the code in the CTFE interpreter would remain. Yes, there is an actual CTFE backend in there, but it's tiny.

I'm not sure I follow you here. Macros as defined in Nemerle are simply compiler plugins that use hooks in the compiler front-end. Nemerle supports hooks at several levels, mostly at the parsing and semantic analysis passes, but I think also during lexing in order to allow syntax extensions. For instance, at the parsing stage the macro's input and output would be token streams. I don't understand how that requires a separate backend. It does require APIs/hooks into the compiler passes. The macro itself is a regular program - implemented in Nemerle and compiled into a DLL.
 I disagree with the second point as well - Nothing forces the SAME compiler to
contain two separate implementations as is now the case.
 In fact you could compile the macros with a compiler of a different vendor.
After all that's the purpose of an ABI, isn't it?
 In fact it makes the burden on the vendor much smaller since you remove the
need for a separate interpreter.


 I forgot to mention an additional aspect of this design - it greatly
simplifies the language which also reduces the burden on the compiler vendor.
 You no longer need to support static constructs like "static if", CTFE,  is(),
pragma, etc. You also gain more capabilities with less effort,
 e.g you could connect to a DB at compile time to validate a SQL query or use
regular IO functions from phobos.

Statements of the form "XXX would make the compiler simpler" seem to come up quite often, and I can't remember a single one which was made with much knowledge of where the complexities are! For example, the most complex thing you listed is is(), because is(typeof()) accepts all kinds of things which normally wouldn't compile. This has implications for the whole compiler, and no changes in how CTFE is done would make it any simpler.

I actually said it would make the _language_ simpler. It makes the language more regular in that you use the same syntax to implement the macro itself; no need to maintain an additional set of compile-time syntax. This in turn would simplify the implementation of the compiler. The above list should contain all compile-time features (I didn't mention templates and traits). Since a macro has hooks into the semantic pass it can do all sorts of transforms directly on the abstract syntax tree - modifying the data structures in the compiler's memory. This means you use _regular_ code in combination with a compiler API instead of special syntax. E.g. traits and is(typeof()) would be regular functions (part of the API) instead of language features.
  > e.g you could connect to a DB at compile time to validate a SQL query 
 or use regular IO functions from phobos.
 
 You know, I'm yet to be convinced that it's really desirable. The 
 ability to put all your source in a repository and say, "the executable 
 depends on this and nothing else", is IMHO of enormous value. Once you 
 allow it to depend on the result of external function calls, it depends 
 on all kinds of hidden variables, which are ephemeral, and it seems to 
 me much better to completely separate that "information gathering" step 
 from the compilation step.
 Note that since it's a snapshot, you *don't* have a guarantee that your 
 SQL query is still valid by the time the code runs.
 

I agree this adds dependencies to the compilation process. However, IIUC, D already allows adding vendor extensions with pragma. At the moment this requires changing the compiler's source; macros simply extend this notion with shared objects. Look for example at the pragma for printing at compile time: instead of implementing it as part of the compiler, you could simply use phobos' writeln.
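For comparison, the user-facing side of the built-in being discussed is just this (standard D, shown here as an editor's reference):

```d
// Today's built-in compile-time printing in D: pragma(msg, ...) is
// evaluated and printed by the compiler, not at run time.
enum greeting = "hello";
pragma(msg, "compiling with front-end version ", __VERSION__);
pragma(msg, greeting);
```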
 BTW there's nothing in the present design which prevents CTFE from being 
 implemented by doing a JIT to native code. I expect most implementations 
 will go that way, but they'll be motivated by speed.
 
 Also I'm confused by this term "macros" that keeps popping up. I don't 
 know why it's being used, and I don't know what it means.

The term comes from Lisp.
Oct 18 2011
parent Don <nospam nospam.com> writes:
On 19.10.2011 01:52, foobar wrote:
 Don Wrote:

 On 17.10.2011 01:43, foobar wrote:
 foobar Wrote:

 Don Wrote:
 You're assuming that the compiler can run the code it's generating. This
 isn't true in general. Suppose you're on x86, compiling for ARM. You
 can't run the ARM code from the compiler.

This is quite possible in Nemerle's model of compilation. This is the same concept as XSLT - a macro is a high-level transform from D code to D code. 1. Compile the macro ahead of time into a loadable compiler module/plugin. The plugin is compiled for the HOST machine (x86) either by a separate compiler or by a cross-compiler that can also compile to its HOST target.


YES!!! This is the whole point. That model requires TWO backends. One for the host, one for the target. That is, it requires an entire backend PURELY FOR CTFE. Yes, of course it is POSSIBLE, but it is an incredible burden to place on a compiler vendor.

How does that differ from the current situation? We already have a separate implementation of a D interpreter for CTFE.


That's true, but it's quite different from a separate backend. The CTFE interpreter doesn't have much in common with a backend. It's more like a glue layer. Most of what's in there at the moment, is doing marshalling and error checking. Suppose instead, you made calls into a real backend. You'd still need a marshalling layer, first to get the syntax trees into native types for the backend, and secondly to convert the native types back into syntax trees. The thing is, you can be changing only part of a syntax tree (just one element of an array, for example) so if you used a native backend, the majority of the code in the CTFE interpreter would remain. Yes, there is an actual CTFE backend in there, but it's tiny.

I'm not sure I follow you here. Macros as defined in Nemerle are simply compiler plugins that use hooks in the compiler front-end. Nemerle supports hooks at several levels, mostly at the parsing and semantic analysis passes, but I think also during lexing in order to allow syntax extensions. For instance, at the parsing stage the macro's input and output would be token streams. I don't understand how that requires a separate backend.

 It does require APIs/hooks into the compiler passes.

Whatever code the compiler calls (compiler plugin, or JITted code), the compiler needs to be able to pass parameters to it, and return them. How can it do that? For example, all extant D compilers are written in C++ and they can't call D code directly.
 The macro itself is a regular program - implemented in Nemerle and compiled
into a dll.

 I disagree with the second point as well - Nothing forces the SAME compiler to
contain two separate implementations as is now the case.
 In fact you could compile the macros with a compiler of a different vendor.
After all that's the purpose of an ABI, isn't it?
 In fact it makes the burden on the vendor much smaller since you remove the
need for a separate interpreter.


 I forgot to mention an additional aspect of this design - it greatly
simplifies the language which also reduces the burden on the compiler vendor.
 You no longer need to support static constructs like "static if", CTFE,  is(),
pragma, etc. You also gain more capabilities with less effort,
 e.g you could connect to a DB at compile time to validate a SQL query or use
regular IO functions from phobos.

Statements of the form "XXX would make the compiler simpler" seem to come up quite often, and I can't remember a single one which was made with much knowledge of where the complexities are! For example, the most complex thing you listed is is(), because is(typeof()) accepts all kinds of things which normally wouldn't compile. This has implications for the whole compiler, and no changes in how CTFE is done would make it any simpler.

I actually said it would make the _language_ simpler. It makes the language more regular in that you use the same syntax to implement the macro itself. No need to maintain an additional set of compile-time syntax.

I don't understand. D has almost no compile-time syntax.
This in turn would simplify the implementation of the compiler.

Syntax is trivial. It's a negligible fraction of the compiler (provided you retain D's rigid separation between the parsing and semantic passes).
 the above list should contain all compile-time features (I didn't mention
templates and traits)
 Since a macro has hooks into the semantic pass it can do all sorts of
transforms directly on the abstract syntax tree - modifying the data structures
in the compiler's memory.
 this means you use _regular_ code in combination with a compiler API instead
of special syntax.
 E.g. traits and is(typeof()) would be regular functions (part of the API)
instead of language features.

I don't have a clue how you would do that. Most of the simple traits could be implemented in D in the existing compiler, but is(typeof()) is a different story. It practically runs the entire compiler with errors suppressed. The complexity isn't in is(typeof()) itself, but rather in the fact that the whole compiler has to cope with errors being suppressed.
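A short example of the behaviour in question (an editor's illustration in plain present-day D, no hypothetical APIs): `is(typeof(X))` asks "would X compile?", and a failing X is silently swallowed rather than reported.

```d
// is(typeof(X)) runs semantic analysis on X with errors suppressed:
// a failure doesn't abort compilation, it just yields `false`.
struct S { int x; }

static assert( is(typeof(S.init.x + 1))); // valid expression -> true
static assert(!is(typeof(S.init.y)));     // no member `y`: the error is
                                          // suppressed, the test is false

// This is how libraries probe capabilities at compile time:
enum bool hasX(T) = is(typeof((T t) => t.x));
static assert( hasX!S);
static assert(!hasX!int);
```

It's exactly this "compile it, but be prepared for it to fail quietly" behaviour that forces error suppression through the whole compiler.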
   >  e.g you could connect to a DB at compile time to validate a SQL query
 or use regular IO functions from phobos.

 You know, I'm yet to be convinced that that it's really desirable. The
 ability to put all your source in a repository and say, "the executable
 depends on this and nothing else", is IMHO of enormous value. Once you
 allow it to depend on the result of external function calls, it depends
 on all kinds of hidden variables, which are ephemeral, and it seems to
 me much better to completely separate that "information gathering" step
 from the compilation step.
 Note that since it's a snapshot, you *don't* have a guarantee that your
 SQL query is still valid by the time the code runs.

I agree this adds dependencies to the compilation process. However, IIUC, D already allows adding vendor extensions with pragma.

To date, no vendor extensions introduce additional dependencies.
 At the moment this requires changing the compiler's source. Macros simply
extend this notion with shared objects.
 Look for example at the pragma for printing at compile time: instead of
implementing it as part of the compiler, you could simply use phobos'
writeln.

"simply"? Let me show you the code for pragma(msg). The whole thing, in all its complexity: ------------------- Statement *PragmaStatement::semantic(Scope *sc) { if (ident == Id::msg) { if (args) { for (size_t i = 0; i < args->dim; i++) { Expression *e = args->tdata()[i]; e = e->semantic(sc); e = e->optimize(WANTvalue | WANTinterpret); StringExp *se = e->toString(); if (se) fprintf(stdmsg, "%.*s", (int)se->len, (char *)se->string); else fprintf(stdmsg, "%s", e->toChars()); } fprintf(stdmsg, "\n"); } } -------------------
 BTW there's nothing in the present design which prevents CTFE from being
 implemented by doing a JIT to native code. I expect most implementations
 will go that way, but they'll be motivated by speed.

 Also I'm confused by this term "macros" that keeps popping up. I don't
 know why it's being used, and I don't know what it means.

The term comes from Lisp.

The word is much older than Lisp (macro assemblers are ancient; CAR and CDR were asm macros). The uses of the word on these forums are clearly influenced by Lisp macros, but the mapping of 'macro' from Lisp to the usage here is by no means clear. I feel that a lot of extra assumptions are being smuggled in through the word. Most commonly, it seems to mean "some compile-time programming system, which is completely undescribed except for the fact that it's almost perfect in every way" <g>. Sometimes it seems to mean "macros as implemented by Nemerle". Even then, it's unclear which aspects of Nemerle macros are considered to be important. Occasionally, macro means "any compile-time programming system", including what we have in D at the moment.
Oct 19 2011
prev sibling parent Ary Manzana <ary esperanto.org.ar> writes:
On 10/16/11 2:35 AM, Don wrote:
 On 16.10.2011 04:16, Ary Manzana wrote:
 On 10/15/11 5:00 PM, Marco Leise wrote:
 Am 15.10.2011, 18:24 Uhr, schrieb Ary Manzana <ary esperanto.org.ar>:

 On 10/14/11 5:16 PM, Graham Fawcett wrote:
 On Fri, 14 Oct 2011 21:10:29 +0200, Jacob Carlborg wrote:

 On 2011-10-14 15:26, Andrei Alexandrescu wrote:
 On 10/14/11 6:08 AM, Jacob Carlborg wrote:
 On 2011-10-14 12:19, foobar wrote:
 Has anyone looked at Nemerle's design for this? They have an SQL
 macro which allows to write SQL such as:

 var employName = "FooBar"
 SQL (DBconn, "select * from employees where name = $employName");

 what that's supposed to do is bind the variable(s) and it also
 validates the sql query with the database. This is all done at
 compile-time.

 My understanding is that D's compile-time features are powerful
 enough to implement this.

You cannot connect to a database in D at compile time. You could do some form of validation and escape the query without connecting to the database.

A little SQL interpreter can be written that figures out e.g. the names of the columns involved. Andrei

But you still won't be able to verify the columns against the actual database schema?

One approach would be to write a separate tool that connects to the database and writes out a representation of the schema to a source file. At compile time, the representation is statically imported, and used to verify the data model. If we had preprocessor support, the tool could be run as such, checking the model just before passing the source to the compiler.

Yeah, but you need a separate tool. In Nemerle it seems you can do everything just in Nemerle... It would be awesome if CTFE would be implemented by JITting functions, not by reinventing the wheel and implementing a handcrafted interpreter...

I wonder if that would work well with cross-compiling. If you blindly JIT functions, they may end up using structs of the wrong size, or integers with different endianness. Compile for 64-bit on a 32-bit machine. What size is size_t during CTFE?

I don't understand this quite well. I want JITted functions to just generate code that ultimately will be compiled. It's like what CTFE is doing now, except that instead of doing it by interpreting every bit and spec of the language you would compile the function, run it to generate code, and then compile the code for the target machine.

 Maybe I'm not taking something into account... what is it?

You're assuming that the compiler can run the code it's generating. This isn't true in general. Suppose you're on x86, compiling for ARM. You can't run the ARM code from the compiler.

Compile the function to be compile-time evaluated for x86. Compile the function that will go to the obj files/executables for ARM. (You'd eventually compile the first ones for ARM if they are used at run time.) What's bad about that?
Oct 16 2011
prev sibling next sibling parent Graham Fawcett <fawcett uwindsor.ca> writes:
On Fri, 14 Oct 2011 21:10:29 +0200, Jacob Carlborg wrote:

 On 2011-10-14 15:26, Andrei Alexandrescu wrote:
 On 10/14/11 6:08 AM, Jacob Carlborg wrote:
 On 2011-10-14 12:19, foobar wrote:
 Has anyone looked at Nemerle's design for this? They have an SQL
 macro which allows to write SQL such as:

 var employName = "FooBar"
 SQL (DBconn, "select * from employees where name = $employName");

 what that's supposed to do is bind the variable(s) and it also
 validates the sql query with the database. This is all done at
 compile-time.

 My understanding is that D's compile-time features are powerful
 enough to implement this.

You cannot connect to a database in D at compile time. You could do some form of validation and escape the query without connecting to the database.

A little SQL interpreter can be written that figures out e.g. the names of the columns involved. Andrei

But you still won't be able to verify the columns against the actual database schema?

One approach would be to write a separate tool that connects to the database and writes out a representation of the schema to a source file. At compile time, the representation is statically imported, and used to verify the data model. If we had preprocessor support, the tool could be run as such, checking the model just before passing the source to the compiler. Graham
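A minimal sketch of that two-step approach (an editor's illustration; the names and the generated-file layout are hypothetical). The external tool would emit a constant like `tables` below into a generated module, say `schema.d`; it is inlined here so the sketch is self-contained:

```d
import std.algorithm.searching : canFind;

// In a real setup this snapshot would live in a generated module
// (e.g. schema.d) written out by the external schema-dumping tool.
enum string[][string] tables = [
    "employees": ["id", "name", "salary"],
];

// Compile-time check of a model against the schema snapshot.
bool validColumns(string table, string[] cols)
{
    auto known = table in tables;
    if (known is null)
        return false;
    foreach (c; cols)
        if (!(*known).canFind(c))
            return false;
    return true;
}

// Fails to compile if the model drifts from the snapshot:
static assert( validColumns("employees", ["name", "salary"]));
static assert(!validColumns("employees", ["nmae"]));
```

Note that, as Don points out later in the thread, this only checks against a snapshot: the schema can still change between compilation and run time.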
Oct 14 2011
prev sibling next sibling parent Graham Fawcett <fawcett uwindsor.ca> writes:
On Fri, 14 Oct 2011 23:58:45 +0200, dennis luehring wrote:

 One approach would be to write a separate tool that connects to the
 database and writes out a representation of the schema to a source
 file. At compile time, the representation is statically imported, and
 used to verify the data model.

what about data/fields that don't come out of the model? for example:

select name_length_add_1, count(*) as counted
from (
    select (len(p.firstname) + 1) as name_length_add_1, p.*
    from persons as p
    inner join members as m on p.firstname = m.firstname
) as blub
group by name_length_add_1

how is the "counted" field or "name_length_add_1" statically verified? people tend to think that SQL is just simple table-querying stuff - but that's not the case, not in the real world

Good point. I wasn't thinking of a tool to validate arbitrary SQL statements, but one that could validate a D model's correspondence with the database schema. Presumably the queries would be generated by the library. Best, Graham
Oct 14 2011
prev sibling parent reply "Marco Leise" <Marco.Leise gmx.de> writes:
Am 15.10.2011, 18:24 Uhr, schrieb Ary Manzana <ary esperanto.org.ar>:

 On 10/14/11 5:16 PM, Graham Fawcett wrote:
 On Fri, 14 Oct 2011 21:10:29 +0200, Jacob Carlborg wrote:

 On 2011-10-14 15:26, Andrei Alexandrescu wrote:
 On 10/14/11 6:08 AM, Jacob Carlborg wrote:
 On 2011-10-14 12:19, foobar wrote:
 Has anyone looked at Nemerle's design for this? They have an SQL
 macro which allows to write SQL such as:

 var employName = "FooBar"
 SQL (DBconn, "select * from employees where name = $employName");

 what that supposed to do is bind the variable(s) and it also
 validates the sql query with the database. This is all done at
 compile-time.

 My understanding is that D's compile-time features are powerful
 enough to implement this.

You cannot connect to a database in D at compile time. You could do some form of validation and escape the query without connecting to the database.

A little SQL interpreter can be written that figures out e.g. the names of the columns involved.

Andrei

But you still won't be able to verify the columns against the actual database schema?

One approach would be to write a separate tool that connects to the database and writes out a representation of the schema to a source file. At compile time, the representation is statically imported, and used to verify the data model. If we had preprocessor support, the tool could be run as such, checking the model just before passing the source to the compiler.

Yeah, but you need a separate tool. In Nemerle it seems you can do everything just in Nemerle... It would be awesome if CTFE would be implemented by JITting functions, not by reinventing the wheel and implementing a handcrafted interpreter...

I wonder if that would work well with cross-compiling. If you blindly JIT functions, they may end up using structs of the wrong size or integers with a different endianness. Say you compile for 64-bit on a 32-bit machine: what size is size_t during CTFE?
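For what it's worth, CTFE in D is meant to evaluate with the target's semantics, so size_t during CTFE has the target's size, not the host's; a naive host-side JIT would indeed have to be careful to preserve that. A tiny illustration of the property at stake:

```d
// size_t at compile time and at run time must agree, even when the
// compiler itself runs on a host of a different word size.
enum ctfeWordSize = size_t.sizeof;  // computed during compilation

static assert(ctfeWordSize == size_t.sizeof);

void main()
{
    // On common targets the word size also matches the pointer size.
    assert(ctfeWordSize == (void*).sizeof);
}
```

A JIT-based CTFE would have to emulate the target's type sizes and endianness rather than simply reuse the host's, which is exactly the concern raised here.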
Oct 15 2011
parent foobar <foo bar.com> writes:
Marco Leise Wrote:

 Am 15.10.2011, 18:24 Uhr, schrieb Ary Manzana <ary esperanto.org.ar>:
 
 On 10/14/11 5:16 PM, Graham Fawcett wrote:
 On Fri, 14 Oct 2011 21:10:29 +0200, Jacob Carlborg wrote:

 On 2011-10-14 15:26, Andrei Alexandrescu wrote:
 On 10/14/11 6:08 AM, Jacob Carlborg wrote:
 On 2011-10-14 12:19, foobar wrote:
 Has anyone looked at Nemerle's design for this? They have an SQL
 macro which allows to write SQL such as:

 var employName = "FooBar"
 SQL (DBconn, "select * from employees where name = $employName");

 what that supposed to do is bind the variable(s) and it also
 validates the sql query with the database. This is all done at
 compile-time.

 My understanding is that D's compile-time features are powerful
 enough to implement this.

You cannot connect to a database in D at compile time. You could do some form of validation and escape the query without connecting to the database.

A little SQL interpreter can be written that figures out e.g. the names of the columns involved.

Andrei

But you still won't be able to verify the columns against the actual database schema?

One approach would be to write a separate tool that connects to the database and writes out a representation of the schema to a source file. At compile time, the representation is statically imported, and used to verify the data model. If we had preprocessor support, the tool could be run as such, checking the model just before passing the source to the compiler.

Yeah, but you need a separate tool. In Nemerle it seems you can do everything just in Nemerle... It would be awesome if CTFE would be implemented by JITting functions, not by reinventing the wheel and implementing a handcrafted interpreter...

I wonder if that would work well with cross-compiling. If you blindly JIT functions, they may end up using structs of the wrong size or integers with a different endianness. Say you compile for 64-bit on a 32-bit machine: what size is size_t during CTFE?

That is a good question. In Nemerle's case, the language compiles to .NET bytecode. In addition, Nemerle plugins are essentially compiler plugins which are compiled separately and loaded during compilation of your actual program. In D's case, if we chose the Nemerle design of separate compilation, the macro would be compiled for the host machine (32 bits in your example).
Oct 15 2011