
digitalmars.D.announce - preparing for const, final, and invariant

reply Walter Bright <newshound1 digitalmars.com> writes:
This is coming for the D 2.0 beta, and it will need some source code 
changes. Specifically, for function parameters that are arrays or 
pointers, start using 'in' for them.

'in' will mean 'scope const final', which means:

final - the parameter will not be reassigned within the function
const - the function will not attempt to change the contents of what is 
referred to
scope - the function will not keep a reference to the parameter's data 
that will persist beyond the scope of the function

For example:

int[] g;

void foo(in int[] a)
{
     a = [1,2];	// error, a is final
     a[1] = 2;   // error, a is const
     g = a;	// error, a is scope
}

Do not use 'in' if you wish to do any of these operations on a 
parameter. Using 'in' has no useful effect on D 1.0 code, so it'll be 
backwards compatible.

Adding in all those 'in's is tedious, as I'm finding out :-(, but I 
think the results will be worth the effort.
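For instance, a function that needs to re-bind its parameter would simply omit 'in'. A minimal sketch, assuming the semantics described above ('sum' and its loop are illustrative, not from the announcement):

```d
// Sketch: omit 'in' when the function must rebind its parameter.
int sum(int[] a)        // no 'in': 'a' may be rebound
{
    int total = 0;
    while (a.length > 0)
    {
        total += a[0];
        a = a[1 .. $];  // rebinding 'a': would be an error under 'in'
    }
    return total;
}
```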
May 17 2007
next sibling parent reply Frank Benoit <keinfarbton googlemail.com> writes:
int[] g;
void foo(in int[] a){
    g = a.dup;    // allowed?
}
May 17 2007
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Frank Benoit wrote:
 int[] g;
 void foo(in int[] a){
     g = a.dup;    // allowed?
 }

Yes.
May 17 2007
parent reply Manfred Nowak <svv1999 hotmail.com> writes:
Walter Bright wrote

 Frank Benoit wrote:
 int[] g;
 void foo(in int[] a){
     g = a.dup;    // allowed?
 }

Yes.

Is `.dup'ing an array where a cell contains a reference to the array allowed too?

alias void* T;
T[] a;
void main(){
    T[] g;
    g.length= 1;
    g[0]= &g;
    void f( in T[] p){
        a= p.dup;
    }
}

That would contradict the given rule.

-manfred
May 17 2007
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Manfred Nowak wrote:
 Walter Bright wrote
 
 Frank Benoit wrote:
 int[] g;
 void foo(in int[] a){
     g = a.dup;    // allowed?
 }

Yes.

Is `.dup'ing an array where a cell contains a reference to the array allowed too?

alias void* T;
T[] a;
void main(){
    T[] g;
    g.length= 1;
    g[0]= &g;
    void f( in T[] p){
        a= p.dup;
    }
}

That would contradict the given rule.

I don't understand. What rule is being violated, and how?
May 17 2007
parent reply =?ISO-8859-1?Q?Manuel_K=F6nig?= <ManuelK89 gmx.net> writes:
Walter Bright wrote:
 Manfred Nowak wrote:
 Walter Bright wrote

 Frank Benoit wrote:
 int[] g;
 void foo(in int[] a){
     g = a.dup;    // allowed?
 }

Yes.

Is `.dup'ing an array where a cell contains a reference to the array allowed too?

alias void* T;
T[] a;
void main(){
    T[] g;
    g.length= 1;
    g[0]= &g;
    void f( in T[] p){
        a= p.dup;
    }
}

That would contradict the given rule.

I don't understand. What rule is being violated, and how?

I think he is referring to the scope rule. When 'f' gets called with 'g' as its argument, 'a' implicitly holds a reference to 'g', namely in a[0]. This would be a violation of the scope rule. -- Regards Manuel
May 18 2007
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Manuel König wrote:
 I think he is referring to the scope rule. When 'f' gets called with 
 'g' as its argument, 'a' implicitly holds a reference to 'g', namely in 
 a[0]. This would be a violation of the scope rule.

Passing things through void* is a way of escaping the type checking, and if you break the rules by doing so, your program might break.
May 18 2007
parent reply Manfred Nowak <svv1999 hotmail.com> writes:
Walter Bright wrote

 Passing things through void* is a way of escaping the type
 checking, and if you break the rules by doing so, your program
 might break. 

The compiler seems to follow the specs, which say:

| A pointer T* can be implicitly converted to one of the following:
|
| void*

There is no hint in the specs that even this _implicit_ conversion breaks the security of the type system. But let me assume for now that this implicit conversion is an exception and that "breaks type system" will be included in the specs.

Now how about using a circular list, i.e. avoiding the use of `void *':

struct T{
    T* next;
}
T a;
void main(){
    T g;
    g.next= &g;
    void f( in T p){
        a= *(p.next);
    }
    f( g);    // breaking of scope rule?
}

-manfred
May 18 2007
parent Walter Bright <newshound1 digitalmars.com> writes:
Manfred Nowak wrote:
 Walter Bright wrote
 
 Passing things through void* is a way of escaping the type
 checking, and if you break the rules by doing so, your program
 might break. 

The compiler seems to follow the specs, which say:

| A pointer T* can be implicitly converted to one of the following:
|
| void*

There is no hint in the specs that even this _implicit_ conversion breaks the security of the type system. But let me assume for now that this implicit conversion is an exception and that "breaks type system" will be included in the specs.

Now how about using a circular list, i.e. avoiding the use of `void *':

struct T{
    T* next;
}
T a;
void main(){
    T g;
    g.next= &g;
    void f( in T p){
        a= *(p.next);
    }
    f( g);    // breaking of scope rule?
}

Yes, you're breaking it.
May 18 2007
prev sibling next sibling parent reply Bill Baxter <dnewsgroup billbaxter.com> writes:
Walter Bright wrote:
 This is coming for the D 2.0 beta, and it will need some source code 
 changes. Specifically, for function parameters that are arrays or 
 pointers, start using 'in' for them.
 
 'in' will mean 'scope const final', which means:
 
 final - the parameter will not be reassigned within the function
 const - the function will not attempt to change the contents of what is 
 referred to
 scope - the function will not keep a reference to the parameter's data 
 that will persist beyond the scope of the function
 
 For example:
 
 int[] g;
 
 void foo(in int[] a)
 {
     a = [1,2];    // error, a is final
     a[1] = 2;   // error, a is const
     g = a;    // error, a is scope
 }
 
 Do not use 'in' if you wish to do any of these operations on a 
 parameter. Using 'in' has no useful effect on D 1.0 code, so it'll be 
 backwards compatible.
 
 Adding in all those 'in's is tedious, as I'm finding out :-(, but I 
 think the results will be worth the effort.

So if you don't use 'in' then the behavior will be the same as not using anything (or using 'in') in D1.0? --bb
May 17 2007
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Bill Baxter wrote:
 So if you don't use 'in' then the behavior will be the same as not 
 using anything (or using 'in') in D1.0?

Right - except that you won't be able to pass string literals to them (as string literals will be const).
May 17 2007
parent reply Bill Baxter <dnewsgroup billbaxter.com> writes:
Walter Bright wrote:
 Bill Baxter wrote:
 So if you don't use 'in' then the behavior will be the same as not 
 using anything (or using 'in') in D1.0?

Right - except that you won't be able to pass string literals to them (as string literals will be const).

Ok. Well that is actually a little nicer than C++, where every reference param you don't intend to modify needs to be marked 'const'. Nicer in the sense that 'in' is shorter to type, at least, and in that it won't make Don Clugston cringe every time he has to type it.

What about method signatures that want 'this' to be an 'in' param? Trailing 'in' like C++?

void aMethod() in
{
    writefln(x, toString);
}

Seems a little strange but I'm sure I'd get used to it. I guess const would mean the same thing, though, since 'this' is already final and scope doesn't really apply.

--bb
May 17 2007
parent Chris Nicholson-Sauls <ibisbasenji gmail.com> writes:
Bill Baxter wrote:
 Walter Bright wrote:
 Bill Baxter wrote:
 So if you don't use 'in' then the behavior will be the same as not 
 using anything (or using 'in') in D1.0?

Right - except that you won't be able to pass string literals to them (as string literals will be const).

Ok. Well that is actually a little nicer than C++, where every reference param you don't intend to modify needs to be marked 'const'. Nicer in the sense that 'in' is shorter to type, at least, and in that it won't make Don Clugston cringe every time he has to type it.

What about method signatures that want 'this' to be an 'in' param? Trailing 'in' like C++?

void aMethod() in
{
    writefln(x, toString);
}

Assuming they haven't changed, this would break pre-conditions.
 Seems a little strange but I'm sure I'd get used to it.
 I guess const would mean the same thing, though, since 'this' is already 
 final and scope doesn't really apply.
 
 --bb

Since this may have its uses, and so long as the meaning is very clear, I could live with 'const' in that position. -- Chris Nicholson-Sauls
May 17 2007
prev sibling next sibling parent Chris Nicholson-Sauls <ibisbasenji gmail.com> writes:
Walter Bright wrote:
 This is coming for the D 2.0 beta, and it will need some source code 
 changes. Specifically, for function parameters that are arrays or 
 pointers, start using 'in' for them.
 
 'in' will mean 'scope const final', which means:
 
 final - the parameter will not be reassigned within the function
 const - the function will not attempt to change the contents of what is 
 referred to
 scope - the function will not keep a reference to the parameter's data 
 that will persist beyond the scope of the function
 
 For example:
 
 int[] g;
 
 void foo(in int[] a)
 {
     a = [1,2];    // error, a is final
     a[1] = 2;   // error, a is const
     g = a;    // error, a is scope
 }
 
 Do not use 'in' if you wish to do any of these operations on a 
 parameter. Using 'in' has no useful effect on D 1.0 code, so it'll be 
 backwards compatible.
 
 Adding in all those 'in's is tedious, as I'm finding out :-(, but I 
 think the results will be worth the effort.

Sounds good to me. How soon can we expect 2.0beta? -- Chris Nicholson-Sauls
May 17 2007
prev sibling next sibling parent reply Manfred Nowak <svv1999 hotmail.com> writes:
Walter Bright wrote

 scope - the function will not keep a reference to the parameter's
 data that will persist beyond the scope of the function

     g = a;	// error, a is scope

In this example the function assigns to a variable living outside of the scope of the function. Therefore the function itself does not keep that reference---and the rule should not fire.

In addition: if the rule's wording is changed to match that example, then it becomes unclear whether `writefln( &a)' is allowed, because `writefln' might store that reference somewhere---especially via the OS in a file, which might be read in later by the calling process.

In case of disallowance, that rule would disallow printing too.

-manfred
May 17 2007
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Manfred Nowak wrote:
 Walter Bright wrote
 
 scope - the function will not keep a reference to the parameter's
 data that will persist beyond the scope of the function

     g = a;	// error, a is scope

In this example the function assigns to a variable living outside of the scope of the function. Therefore the function does not keep that reference itself---and that rule should not fire.

Since it's storing a reference that will "persist beyond the scope of the function", it's illegal.
 In addition: if the rule's wording is changed to match that example, 
 then it becomes unclear, whether `writefln( &a)' is allowed because 
 `writefln' might store that reference somewhere---especially via OS in 
 a file, which might be read in later by the calling process.
 
 In case of disallowance that rule would disallow printing too.

Making copies is allowed.
May 17 2007
parent reply Manfred Nowak <svv1999 hotmail.com> writes:
Walter Bright wrote
 Making copies is allowed.

A copy of a reference is not the reference itself? Then `g = a;' does not make a copy of the address contained in `a'? What does it do then? -manfred
May 18 2007
parent reply torhu <fake address.dude> writes:
Manfred Nowak wrote:
 Walter Bright wrote
 Making copies is allowed.

A copy of a reference is not the reference itself? Then `g = a;' does not make a copy of the address contained in `a'?

It does make a copy, but since a is 'scope', you're only allowed to keep local copies of a. This would be fine:

int[] b = a;
 
 What does it do then?
 
 -manfred

May 18 2007
parent reply Manfred Nowak <svv1999 hotmail.com> writes:
torhu wrote

 It does make a copy, but if g is 'scope', you're only allowed to
 have local copies of a.
 
 This would be fine:
 
 int[] b = a;

I understand this exactly as you say it: only local copies are allowed!

But `writef( &b);' as well as `writef( &a);' may create global copies, i.e. at least not local copies. If this holds, then the scope rule forbids printing out.

But Walter responds with "Making copies is allowed.", which seems to mean that the global copies made by printing out are allowed---or his remark has nothing to do with the problem stated.

How to resolve this contradiction?

-manfred
May 18 2007
parent reply torhu <fake address.dude> writes:
Manfred Nowak wrote:
 torhu wrote
 
 It does make a copy, but if g is 'scope', you're only allowed to
 have local copies of a.
 
 This would be fine:
 
 int[] b = a;

I understand this exactly as you say it: only local copies are allowed!

But `writef( &b);' as well as `writef( &a);' may create global copies, i.e. at least not local copies. If this holds, then the scope rule forbids printing out.

But Walter responds with "Making copies is allowed.", which seems to mean that the global copies made by printing out are allowed---or his remark has nothing to do with the problem stated.

How to resolve this contradiction?

It's not a contradiction. Only references that are actually inside your application are relevant for this rule.
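To illustrate the distinction (a sketch assuming the 'in' semantics described earlier in the thread, not verified against a compiler):

```d
// Copies of the data may escape; the reference itself may not.
int[] g;

void f(in int[] a)
{
    int[] local = a;   // ok: this reference copy dies with the scope
    g = a.dup;         // ok: a copy of the data escapes, not the reference
    // g = a;          // error: the reference would persist beyond the call
}
```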
May 18 2007
parent Manfred Nowak <svv1999 hotmail.com> writes:
torhu wrote

 It's not a contradiction.  Only references that are actually
 inside your application are relevant for this rule.

Very true. If one restricts the application to local copies, then the rule cannot be violated---but then the rule is ineffective also. This seems to shout for a formalization of the scope rule. -manfred
May 20 2007
prev sibling next sibling parent reply Daniel Keep <daniel.keep.lists gmail.com> writes:
Note: I've read the reply where you clarified that this is to allow
static things like string literals to be passed in.

Walter Bright wrote:
 This is coming for the D 2.0 beta, and it will need some source code
 changes. Specifically, for function parameters that are arrays or
 pointers, start using 'in' for them.
 
 'in' will mean 'scope const final', which means:
 
 final - the parameter will not be reassigned within the function
 const - the function will not attempt to change the contents of what is
 referred to
 scope - the function will not keep a reference to the parameter's data
 that will persist beyond the scope of the function
 
 For example:
 
 int[] g;
 
 void foo(in int[] a)
 {
     a = [1,2];    // error, a is final
     a[1] = 2;   // error, a is const
     g = a;    // error, a is scope
 }
 Do not use 'in' if you wish to do any of these operations on a
 parameter. Using 'in' has no useful effect on D 1.0 code, so it'll be
 backwards compatible.

Is final really necessary? I can understand const and scope; but is it really a problem if the function rebinds the argument? I mean, that shouldn't affect the calling code in the slightest, should it?

void foo(const scope int[] a)
{
    a = [1,2];
}

static bar = [3,4];
foo(bar);
assert( bar == [3,4] ); // This should still hold, right?

There are a few functions I've written which re-bind the argument as they run; the simplest examples being functions that process strings (effectively, they just loop until there's an empty slice left).

Apart from that, I don't think there are any problems with doing this. Oh well; I'll just have to declare an extra argument. :P
 Adding in all those 'in's is tedious, as I'm finding out :-(, but I
 think the results will be worth the effort.

Actually, I was thinking of doing this anyway so that my argument lists are nice and symmetric (qualifiers on all of them, instead of just ref and out). Now I have an excellent reason to do so other than my pedantic-ness. :)

One question: is there a keyword for "normal" arguments--for instance, I know that most variables are "auto" if you don't specify the storage class explicitly. I can't imagine where it would be useful; just curious.

At any rate, looks spiffy.

-- Daniel

--
int getRandomNumber()
{
    return 4; // chosen by fair dice roll.
              // guaranteed to be random.
}
http://xkcd.com/

v2sw5+8Yhw5ln4+5pr6OFPma8u6+7Lw4Tm6+7l6+7D i28a2Xs3MSr2e4/6+7t4TNSMb6HTOp5en5g6RAHCP http://hackerkey.com/
May 18 2007
parent Walter Bright <newshound1 digitalmars.com> writes:
Daniel Keep wrote:
 Is final really necessary?

No. It's there because nearly all the time, one won't be rebinding a parameter, so making it easier to be final seems like a good idea.
 I can understand const and scope; but is it
 really a problem if the function rebinds the argument?  I mean, that
 shouldn't affect the calling code in the slightest, should it?

No, it does not affect the caller. Only the callee.
 void foo(const scope int[] a)
 {
     a = [1,2];
 }
 
 static bar = [3,4];
 foo(bar);
 assert( bar == [3,4] ); // This should still hold, right?

Yes.
 There are a few functions I've written which re-bind the argument as
 they run; the simplest examples being functions that process strings
 (effectively, it just loops until there's just an empty slice left).
 
 Apart from that, I don't think there's any problems with doing this.  Oh
 well; I'll just have to declare an extra argument.  :P
 
 Adding in all those 'in's is tedious, as I'm finding out :-(, but I
 think the results will be worth the effort.

Actually, I was thinking of doing this anyway so that my argument lists are nice and symmetric (qualifiers on all of them, instead of just ref and out). Now I have an excellent reason to do so other than my pedantic-ness. :) One question: is there a keyword for "normal" arguments--

No. We could use 'auto' for that, but what's the point <g>.
 for instance, I
 know that most variables are "auto" if you don't specify the storage
 class explicitly.  I can't imagine where it would be useful; just curious.
 
 At any rate, looks spiffy.
 
 	-- Daniel
 

May 18 2007
prev sibling next sibling parent reply Aarti_pl <aarti interia.pl> writes:
Walter Bright pisze:
 This is coming for the D 2.0 beta, and it will need some source code 
 changes. Specifically, for function parameters that are arrays or 
 pointers, start using 'in' for them.
 
 'in' will mean 'scope const final', which means:
 
 final - the parameter will not be reassigned within the function
 const - the function will not attempt to change the contents of what is 
 referred to
 scope - the function will not keep a reference to the parameter's data 
 that will persist beyond the scope of the function
 
 For example:
 
 int[] g;
 
 void foo(in int[] a)
 {
     a = [1,2];    // error, a is final
     a[1] = 2;   // error, a is const
     g = a;    // error, a is scope
 }
 
 Do not use 'in' if you wish to do any of these operations on a 
 parameter. Using 'in' has no useful effect on D 1.0 code, so it'll be 
 backwards compatible.
 
 Adding in all those 'in's is tedious, as I'm finding out :-(, but I 
 think the results will be worth the effort.

I want to ask about something opposite to the above example...

Will it be possible to pass ownership of an object? I mean that when you pass an object by reference to a function/class/member variable, you can still modify this object from outside of the function/class. It breaks encapsulation in a program. Example:

import std.stdio;

public class TestX {
    char[] value;
    char[] toString() {
        return value;
    }
}

public class TestMain {
    TestX x;
}

void main(char[][] args) {
    TestX parameter = new TestX();
    parameter.value = "First assignment";

    TestMain tmain = new TestMain();
    tmain.x = parameter;

    parameter.value = "Second assignment";
    writefln(tmain.x);
}

Notice that tmain.x's value has changed, although I would like it to be set just once and have the second assignment to parameter be illegal... When using setters and getters the problem is even more visible...

How to achieve proper behavior with the new final/const/invariant/scope?

Regards
Marcin Kuszczak
(aarti_pl)
May 18 2007
parent reply Daniel Keep <daniel.keep.lists gmail.com> writes:
Aarti_pl wrote:
 I want to ask about something opposite to above example...
 
 Will it be possible to pass ownership of object? I mean that when you
 pass object by reference to function/class/member variable you can still
 modify this object from outside of function/class. It breaks
 encapsulation in program.
 
 Example:
 
 import std.stdio;
 public class TestX {
    char[] value;
    char[] toString() {
        return value;
    }
 }
 public class TestMain {
    TestX x;
 }
 
 void main(char[][] args) {
       TestX parameter = new TestX();
       parameter.value = "First assignment";
 
       TestMain tmain = new TestMain();
       tmain.x = parameter;
 
       parameter.value = "Second assignment";
       writefln(tmain.x);
 }
 
 Notice that tmain.x value has changed although I would like just to set
 once, and have second assignment to parameter illegal... When using
 setters and getters problem is even more visible....
 
 How to achieve proper behavior with new final/const/invariant/scope??
 
 Regards
 Marcin Kuszczak
 (aarti_pl)

I don't think the new const, final & invariant are going to help you any. Basically, you seem to be complaining that reference semantics are... well, reference semantics. That's like complaining that water is wet :P

There's a few things I can think of to get the behaviour you want.

1. Use a write-once setter for 'value'. You can either create a nullable template, or use a flag to ensure external code can only set it once.

1.a. A "nicer" approach would be to set it in the constructor, and then either mark it "final" (with the new final), or only write a public getter function.

2. Use a struct instead; no references, no indirect changes.

3. Take a private copy of the object by writing a .dup method.

-- Daniel
May 18 2007
parent reply Aarti_pl <aarti interia.pl> writes:
Daniel Keep pisze:
 
 Aarti_pl wrote:
 I want to ask about something opposite to above example...

 Will it be possible to pass ownership of object? I mean that when you
 pass object by reference to function/class/member variable you can still
 modify this object from outside of function/class. It breaks
 encapsulation in program.

 Example:

 import std.stdio;
 public class TestX {
    char[] value;
    char[] toString() {
        return value;
    }
 }
 public class TestMain {
    TestX x;
 }

 void main(char[][] args) {
       TestX parameter = new TestX();
       parameter.value = "First assignment";

       TestMain tmain = new TestMain();
       tmain.x = parameter;

       parameter.value = "Second assignment";
       writefln(tmain.x);
 }

 Notice that tmain.x value has changed although I would like just to set
 once, and have second assignment to parameter illegal... When using
 setters and getters problem is even more visible....

 How to achieve proper behavior with new final/const/invariant/scope??

 Regards
 Marcin Kuszczak
 (aarti_pl)

I don't think the new const, final & invariant are going to help you any. Basically, you seem to be complaining that reference semantics are... well, reference semantics. That's like complaining that water is wet :P

The problem here is that the current behavior breaks encapsulation - you can change an already-passed value from outside of the object, when the contract is that you can set it only with the setter. Imagine the consequences in a multithreaded application, when in the middle of a function your data suddenly change... But also in a single-threaded application it can be a real problem when you change a referenced object by mistake. So it is more like complaining that water is dry when it should in fact be wet :-)

I know that other languages also have the same problem, but I think that D can do better.
 There's a few things I can think of to get the behaviour you want.
 
 1. Use a write-once setter for 'value'.  You can either create a
 nullable template, or use a flag to ensure external code can only set it
 once.
 

Could you please give example? I don't know how to achieve this behavior with this method...
 1.a. A "nicer" approach would be to set it in the constructor, and then
 either mark it "final" (with the new final), 
 or only write a public getter function.

I think rather invariant? Final will not disallow changing the referenced object. And I am afraid that it won't help anyway; it would still be possible to change the value from outside... Using only a getter and passing the reference in the constructor also doesn't help. You can still modify the variable from outside...
 
 2. Use a struct instead; no references, no indirect changes.
 

Ok. I didn't think about that. But it is basically the same as the option below, so please see my comment there. You probably also run into problems when the struct has references inside...
 3. Take a private copy of the object by writing a .dup method.
 

Yes, that is a possible solution, but the program would be much faster (no unnecessary copies) with another solution...
 	-- Daniel
 

BR Marcin Kuszczak (aarti_pl)
May 18 2007
parent reply Daniel Keep <daniel.keep.lists gmail.com> writes:
Aarti_pl wrote:
 Daniel Keep pisze:
 Aarti_pl wrote:
 I want to ask about something opposite to above example...

 Will it be possible to pass ownership of object? I mean that when you
 pass object by reference to function/class/member variable you can still
 modify this object from outside of function/class. It breaks
 encapsulation in program.

 Example:

 import std.stdio;
 public class TestX {
    char[] value;
    char[] toString() {
        return value;
    }
 }
 public class TestMain {
    TestX x;
 }

 void main(char[][] args) {
       TestX parameter = new TestX();
       parameter.value = "First assignment";

       TestMain tmain = new TestMain();
       tmain.x = parameter;

       parameter.value = "Second assignment";
       writefln(tmain.x);
 }

 Notice that tmain.x value has changed although I would like just to set
 once, and have second assignment to parameter illegal... When using
 setters and getters problem is even more visible....

 How to achieve proper behavior with new final/const/invariant/scope??

 Regards
 Marcin Kuszczak
 (aarti_pl)

I don't think the new const, final & invariant are going to help you any. Basically, you seem to be complaining that reference semantics are... well, reference semantics. That's like complaining that water is wet :P

The problem here is that current behavior breaks encapsulation - you can change already passed value from outside of object, when contract is that you can set it only with setter. Imagine consequences in multithreaded application, when in the middle of function your data suddenly change... But also with single threaded application it can be real problem when you change referenced object by mistake. So it is more like complaining that water is dry when it should be wet in fact :-) I know that other languages also have same problem, but I think that D can do better.

Ok: so the problem is that because D's class system doesn't have a concept of ownership, encapsulation can be violated. That said, I've never seen an OO language that *did* have a concept of ownership, so I don't know what we could do to "fix" it :P
 There's a few things I can think of to get the behaviour you want.

 1. Use a write-once setter for 'value'.  You can either create a
 nullable template, or use a flag to ensure external code can only set it
 once.

Could you please give example? I don't know how to achieve this behavior with this method...

class TestX
{
    private
    {
        char[] _value;
        bool _value_set = false;
    }

    char[] value(char[] v)
    {
        if( _value_set )
            assert(false, "cannot set value more than once!");
        _value = v;
        _value_set = true;
        return v;
    }

    char[] value()
    {
        return _value;
    }
}
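Hypothetical usage of the write-once setter (names per the sketch above; the second assignment trips the assert in the setter):

```d
auto t = new TestX;
t.value = "first".dup;    // ok: first assignment succeeds
t.value = "second".dup;   // asserts: "cannot set value more than once!"
```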
 1.a. A "nicer" approach would be to set it in the constructor, and then
 either mark it "final" (with the new final), or only write a public
 getter function.

I think that rather invariant? Final will not disallow changing of referenced object. And I am afraid that it won't help anyway, it would be still possible to change value from outside... Using only getter and passing reference in constructor also doesn't help. You can still also modify variable from outside...

Since D2.0 hasn't been released yet, I don't know for certain, but I would hope it would be possible to create a final invariant member. In this case, you can assign to it *only* during the constructor, and you can only assign something to it which will never change. But then, I'm not sure if that would really work or not.
 2. Use a struct instead; no references, no indirect changes.

Ok. I didn't think about it. But it is basically same as below, so please see comment below. Probably you have also problem when struct has references inside...

Mmm.
 3. Take a private copy of the object by writing a .dup method.

Yes that is possible solution, but program would be much faster (no unnecessary copies) with other solution...
     -- Daniel

BR Marcin Kuszczak (aarti_pl)

Here's another idea I had. This *might* work come D2.0 (but again, I don't know for certain). The idea here is that if we are passed a possibly mutable value, we take a private copy of it, since this is the only way we can ensure no one else can mutate it. BUT, if we are passed an invariant string, we just take a reference, since the compiler is effectively guaranteeing that its contents will never, ever change. Incidentally, I'm also assuming the whole "casting to const" thing works.

class TestX
{
    private
    {
        char[] _value;
    }

    // Copy v: no external mutations!
    char[] value(char[] v)
    {
        _value = v.dup;
        return v;
    }

    // Invariant, so a reference to it is fine
    char[] value(invariant char[] v)
    {
        _value = v;
        return v;
    }

    // Return const: no one calling this can modify it, even though they
    // have a reference!
    const char[] value()
    {
        return cast(const char[])_value;
    }
}

So maybe you will be able to get what you want. Will be interesting to find out :)

-- Daniel
May 18 2007
parent reply Frits van Bommel <fvbommel REMwOVExCAPSs.nl> writes:
Daniel Keep wrote:
     // Invariant, so a reference to it is fine
     char[] value(invariant char[] v)
     {
         _value = v;
         return v;
     }

But a mutable reference to it shouldn't be fine. So IMHO this should fail because you're trying to return an invariant char[] as a mutable char[]...
     // Return const: no one calling this can modify it, even though they
     // have a reference!
     const char[] value()
     {
         return cast(const char[])_value;
     }

I think that explicit cast should be unneeded. Mutable to const should be possible through an implicit cast.
May 18 2007
parent Daniel Keep <daniel.keep.lists gmail.com> writes:
Frits van Bommel wrote:
 Daniel Keep wrote:
     // Invariant, so a reference to it is fine
     char[] value(invariant char[] v)
     {
         _value = v;
         return v;
     }

But a mutable reference to it shouldn't be fine. So IMHO this should fail because you're trying to return an invariant char[] as a mutable char[]...

*grumbles* Fine, Mr. Pick On My Mistakes...
 invariant char[] value(invariant char[] v)
 {
     _value = v;
     return v;
 }

There, all better now? :)

-- Daniel
May 18 2007
prev sibling next sibling parent reply Don Clugston <dac nospam.com.au> writes:
Walter Bright wrote:
 This is coming for the D 2.0 beta, and it will need some source code 
 changes. Specifically, for function parameters that are arrays or 
 pointers, start using 'in' for them.
 
 'in' will mean 'scope const final', which means:
 
 final - the parameter will not be reassigned within the function
 const - the function will not attempt to change the contents of what is 
 referred to
 scope - the function will not keep a reference to the parameter's data 
 that will persist beyond the scope of the function

Looks great, although somewhat overwhelming for a newcomer. Will functions be overloadable on all of these? Anyway, it sounds as though we'll see 2.0 beta 1 before DMD 1.15?
May 18 2007
parent Walter Bright <newshound1 digitalmars.com> writes:
Don Clugston wrote:
 Walter Bright wrote:
 final - the parameter will not be reassigned within the function
 const - the function will not attempt to change the contents of what 
 is referred to
 scope - the function will not keep a reference to the parameter's data 
 that will persist beyond the scope of the function

Looks great, although somewhat overwhelming for a newcomer. Will functions be overloadable on all of these?

Just for const.
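A sketch of how const-only overloading might look under the proposed D 2.0 rules (speculative syntax, since the const regime was not final at this point; the `Buffer` and `put` names are illustrative only, not from the thread):

```d
// Speculative D 2.0 sketch -- not valid D 1.0.
// Overload resolution may distinguish const from mutable parameters;
// overloading on 'final' or 'scope' alone would not be allowed.
class Buffer
{
    private char[] data;

    // Chosen for mutable arguments.
    void put(char[] s)
    {
        data ~= s;
    }

    // Chosen for const arguments: read-only view, so copy before storing.
    void put(const char[] s)
    {
        data ~= s.dup;  // .dup yields a fresh mutable copy
    }
}
```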
 Anyway, it sounds as though we'll see 2.0 beta 1 before DMD 1.15 ?

Probably <g>.
May 18 2007
prev sibling next sibling parent reply bobef <ads aad.asd> writes:
Without any disrespect to the great work you are doing, let me put it in
another way: In order to make a good, *usable* application, is this "const
final scope" thing more important or having a good GUI library? There are not
many console applications these days, you know. And they are not very *usable*.
And, as we all know, choosing a GUI for D is not like choosing a GUI for C++.
So why instead of adding nerd stuff to the language, make it sooooo much more
*usable*, by fixing the reflection stuff that Tioport needs to produce
libraries which are not 10mb or 20mb? Or... it kind of sucks when half of my
code won't compile, or compiles in specific order only, because of some forward
reference errors. I don't even know what forward reference is, but I know that
using (simple) consts and aliases is something perfectly legal. I don't know if
this second example is more usable than the final const thing, just because I
can't think of any use of it, but this is because I rarely use fancy stuff that
breaks my code in the next version of DMD... But I am making this example to
show that D (or DMD) still have so many things to fix the way it is now, we
don't need some new fancy stuff before the old things work....


Walter Bright Wrote:

 This is coming for the D 2.0 beta, and it will need some source code 
 changes. Specifically, for function parameters that are arrays or 
 pointers, start using 'in' for them.
 
 'in' will mean 'scope const final', which means:
 
 final - the parameter will not be reassigned within the function
 const - the function will not attempt to change the contents of what is 
 referred to
 scope - the function will not keep a reference to the parameter's data 
 that will persist beyond the scope of the function
 
 For example:
 
 int[] g;
 
 void foo(in int[] a)
 {
      a = [1,2];	// error, a is final
      a[1] = 2;   // error, a is const
      g = a;	// error, a is scope
 }
 
 Do not use 'in' if you wish to do any of these operations on a 
 parameter. Using 'in' has no useful effect on D 1.0 code, so it'll be 
 backwards compatible.
 
 Adding in all those 'in's is tedious, as I'm finding out :-(, but I 
 think the results will be worth the effort.

May 19 2007
next sibling parent reply Bill Baxter <dnewsgroup billbaxter.com> writes:
To me const is not "fancy stuff".  It's a big hole that's long needed 
some implementation in D.  Right now we have two choices for passing big 
objects/structs to functions - pass it by value (SLOOW but SAFE) or pass 
it by reference (FAST but UNSAFE).  Neither is particularly attractive. 
  Const will make it possible to write functions that are both FAST and 
SAFE.  Coming from C++, the lack of const in D seems like a joke. 
Painful though it may be, it's great that Walter is doing something 
about it.
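That FAST-and-SAFE combination can be sketched with the proposed 'in' storage class (a speculative D 2.0 sketch based on the semantics described at the top of this thread; the function names are illustrative only):

```d
// SAFE but SLOW today: take a defensive copy so the callee can't
// be surprised by (or cause) later mutations.
int sumCopy(int[] a)
{
    a = a.dup;          // pay for a full copy just to be safe
    int total = 0;
    foreach (x; a)
        total += x;
    return total;
}

// FAST and SAFE under the proposal: no copy, and the compiler
// rejects any attempt to reassign, mutate, or retain 'a'.
int sumIn(in int[] a)
{
    int total = 0;
    foreach (x; a)
        total += x;
    return total;
}
```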

--bb

bobef wrote:
 Without any disrespect to the great work you are doing, let me put it in
another way: In order to make a good, *usable* application, is this "const
final scope" thing more important or having a good GUI library? There are not
many console applications these days, you know. And they are not very *usable*.
And, as we all know, choosing a GUI for D is not like choosing a GUI for C++.
So why instead of adding nerd stuff to the language, make it sooooo much more
*usable*, by fixing the reflection stuff that Tioport needs to produce
libraries which are not 10mb or 20mb? Or... it kind of sucks when half of my
code won't compile, or compiles in specific order only, because of some forward
reference errors. I don't even know what forward reference is, but I know that
using (simple) consts and aliases is something perfectly legal. I don't know if
this second example is more usable than the final const thing, just because I
can't think of any use of it, but this is because I rarely use fancy stuff
that breaks my code in the next version of DMD... But I am making this
example to show that D (or DMD) still have so many things to fix the way it
is now, we don't need some new fancy stuff before the old things work....
 
 
 Walter Bright Wrote:
 
 This is coming for the D 2.0 beta, and it will need some source code 
 changes. Specifically, for function parameters that are arrays or 
 pointers, start using 'in' for them.

 'in' will mean 'scope const final', which means:

 final - the parameter will not be reassigned within the function
 const - the function will not attempt to change the contents of what is 
 referred to
 scope - the function will not keep a reference to the parameter's data 
 that will persist beyond the scope of the function

 For example:

 int[] g;

 void foo(in int[] a)
 {
      a = [1,2];	// error, a is final
      a[1] = 2;   // error, a is const
      g = a;	// error, a is scope
 }

 Do not use 'in' if you wish to do any of these operations on a 
 parameter. Using 'in' has no useful effect on D 1.0 code, so it'll be 
 backwards compatible.

 Adding in all those 'in's is tedious, as I'm finding out :-(, but I 
 think the results will be worth the effort.


May 19 2007
parent reply bobef <asd asd.com> writes:
Do you think that any user cares about if you use consts in your code or not?
This is what I mean when I talk about usability. We write applications not
code. We should focus on the usability. If it is easier to write - good, but
the application is what is important, not the code (yes, of course it matters
too). Plus what consts are you talking about in C++? Just cast them to void*
and back to the type without const... If you want to modify it nothing is
stopping you, if not just don't do it :)


Bill Baxter Wrote:

 To me const is not "fancy stuff".  It's a big hole that's long needed 
 some implementation in D.  Right now we have two choices for passing big 
 objects/structs to functions - pass it by value (SLOOW but SAFE) or pass 
 it by reference (FAST but UNSAFE).  Neither is particularly attractive. 
   Const will make it possible to write functions that are both FAST and 
 SAFE.  Coming from C++, the lack of const in D seems like a joke. 
 Painful though it may be, it's great that Walter is doing something 
 about it.
 
 --bb
 
 bobef wrote:
 Without any disrespect to the great work you are doing, let me put it in
another way: In order to make a good, *usable* application, is this "const
final scope" thing more important or having a good GUI library? There are not
many console applications these days, you know. And they are not very *usable*.
And, as we all know, choosing a GUI for D is not like choosing a GUI for C++.
So why instead of adding nerd stuff to the language, make it sooooo much more
*usable*, by fixing the reflection stuff that Tioport needs to produce
libraries which are not 10mb or 20mb? Or... it kind of sucks when half of my
code won't compile, or compiles in specific order only, because of some forward
reference errors. I don't even know what forward reference is, but I know that
using (simple) consts and aliases is something perfectly legal. I don't know if
this second example is more usable than the final const thing, just because I
can't think of any use of it, but this is because I rarely use fancy stuff
that breaks my code in the next version of DMD... But I am making this
example to show that D (or DMD) still have so many things to fix the way it
is now, we don't need some new fancy stuff before the old things work....
 
 
 Walter Bright Wrote:
 
 This is coming for the D 2.0 beta, and it will need some source code 
 changes. Specifically, for function parameters that are arrays or 
 pointers, start using 'in' for them.

 'in' will mean 'scope const final', which means:

 final - the parameter will not be reassigned within the function
 const - the function will not attempt to change the contents of what is 
 referred to
 scope - the function will not keep a reference to the parameter's data 
 that will persist beyond the scope of the function

 For example:

 int[] g;

 void foo(in int[] a)
 {
      a = [1,2];	// error, a is final
      a[1] = 2;   // error, a is const
      g = a;	// error, a is scope
 }

 Do not use 'in' if you wish to do any of these operations on a 
 parameter. Using 'in' has no useful effect on D 1.0 code, so it'll be 
 backwards compatible.

 Adding in all those 'in's is tedious, as I'm finding out :-(, but I 
 think the results will be worth the effort.



May 19 2007
next sibling parent "Anders Bergh" <anders andersman.org> writes:
On 5/19/07, bobef <asd asd.com> wrote:
 Do you think that any user cares about if you use consts in your code or not?
This is what I mean when I talk about usability. We write applications not
code. We should focus on the usability. If it is easier to write - good, but
the application is what is important, not the code (yes, of course it matters
too). Plus what consts are you talking about in C++? Just cast them to void*
and back to the type without const... If you want to modify it nothing is
stopping you, if not just don't do it :)

I think users want their applications to be safe and stable, and consts help with that. Of course, const is not a magical way to make code safer... but it helps.

-- 
Anders
May 19 2007
prev sibling parent reply Chris Nicholson-Sauls <ibisbasenji gmail.com> writes:
bobef wrote:
 Do you think that any user cares about if you use consts in your code or not?
This is what I mean when I talk about usability. We write applications not
code. We should focus on the usability. If it is easier to write - good, but
the application is what is important, not the code (yes, of course it matters
too). Plus what consts are you talking about in C++? Just cast them to void*
and back to the type without const... If you want to modify it nothing is
stopping you, if not just don't do it :)
 

No, I don't think users care if I use const. I do think they care if the program runs quickly and is stable -- both of which const contributes to, by avoiding unnecessary copies of data and pre-empting bugs before they happen. Yes I can use trickery and break the type system to get around it -- but if I'm doing that, there's probably something wrong with the design to begin with. This /is/ aiding library writers, and app writers. Once this is done, hopefully the major issues will be the next to get attention.

I really don't think the console is going away anytime soon. A friend recently needed a new log parsing utility. The one we tossed together in an afternoon in D, on the console, finished within minutes -- compared to the old (GUI) app he'd been using that took hours. GUI isn't always a blessing. (All this program even needed as input was a filename. Adding GUI to something like that is merely bloat and slowdown.)

Usability is important, I agree. But software that's quick and easy to write, is also quicker and easier to /make/ usable. And GUI isn't always the answer to usability.

That's my stance in a tiny overly restrictive nutshell. (Walnut? Maybe pecan...)

-- Chris Nicholson-Sauls
May 19 2007
parent antonio <antonio abrevia.net> writes:
Chris Nicholson-Sauls wrote:
 bobef wrote:
 Do you think that any user cares about if you use consts in your code 
 or not? This is what I mean when I talk about usability. We write 
 applications not code. We should focus on the usability. If it is 
 easier to write - good, but the application is what is important, not 
 the code (yes, of course it matters too). Plus what consts are you 
 talking about in C++? Just cast them to void* and back to the type 
 without const... If you want to modify it nothing is stopping you, if 
 not just don't do it :)

No, I don't think users care if I use const. I do think they care if the program runs quickly and is stable -- both of which const contributes to, by avoiding unnecessary copies of data and pre-empting bugs before they happen. Yes I can use trickery and break the type system to get around it -- but if I'm doing that, there's probably something wrong with the design to begin with. This /is/ aiding library writers, and app writers. Once this is done, hopefully the major issues will be the next to get attention. I really don't think the console is going away anytime soon. A friend recently needed a new log parsing utility. The one we tossed together in an afternoon in D, on the console, finished within minutes -- compared to the old (GUI) app he'd been using that took hours. GUI isn't always a blessing. (All this program even need as input was a filename. Adding GUI to something like that is merely bloat and slowdown.) Usability is important, I agree. But software that's quick and easy to write, is also quicker and easier to /make/ usable. And GUI isn't always the answer to usability. That's my stance in a tiny overly restrictive nutshell. (Walnut? Maybe pecan...) -- Chris Nicholson-Sauls

Maybe Walter could focus on solving bugs and work closely with debugger
development (ddbg?). Anyway, "in" is a good high-level programming feature
for people who use D as a high-level language (it is expressive of an
intention).
May 22 2007
prev sibling next sibling parent reply Chris Nicholson-Sauls <ibisbasenji gmail.com> writes:
bobef wrote:
 Without any disrespect to the great work you are doing, let me put it in
another way: In order to make a good, *usable* application, is this "const
final scope" thing more important or having a good GUI library? There are not
many console applications these days, you know. And they are not very *usable*.
And, as we all know, choosing a GUI for D is not like choosing a GUI for C++.
So why instead of adding nerd stuff to the language, make it sooooo much more
*usable*, by fixing the reflection stuff that Tioport needs to produce
libraries which are not 10mb or 20mb? Or... it kind of sucks when half of my
code won't compile, or compiles in specific order only, because of some forward
reference errors. I don't even know what forward reference is, but I know that
using (simple) consts and aliases is something perfectly legal. I don't know if
this second example is more usable than the final const thing, just because I
can't think of any use of it, but this is because I rarely use fancy stuff
that breaks my code in the next version of DMD... But I am making this
example to show that D (or DMD) still have so many things to fix the way it
is now, we don't need some new fancy stuff before the old things work....
 

Well here's something: I don't /want/ Walter working on a GUI lib. That's already being taken care of, with DFL, Tioport/SWT, Harmonia, and gtkD. (Probably a few others.) Choice is good, and there it is. What I /do/ want is Walter working on the things that we've all been asking for all this long while -- and that seems to be precisely what he's doing.

Some of us do still write console applications, and daemons with no direct user input at all. (I look at the loooong list of software on our Linux boxes and remark that out of all of these, precious few have any GUI at all. Mainly, Kate and KDE.) If D were to go in a direction that only focused on application development, and only on graphical apps... I'd probably have a good cry, give a salute, and just start using Ruby for everything.

Meanwhile, ask me sometime about the dozens of bits of at least one of my projects the new const'ness concepts will allow me to actually /write/ in a sane way in the first place. The project is on hold, has been for quite some while, all just waiting for something like this.

-- Chris Nicholson-Sauls

PS - Forward reference is using a symbol before it is defined. Most cases of forward reference in D are accounted for and resolved by the compiler, but there are a handful that it still can't handle. Pray some day it can handle them all -- here's hoping.
May 19 2007
next sibling parent "Jarrett Billingsley" <kb3ctd2 yahoo.com> writes:
"Chris Nicholson-Sauls" <ibisbasenji gmail.com> wrote in message 
news:f2mu3j$cek$1 digitalmars.com...
 PS - Forward reference is using a symbol before it is defined.  Most cases 
 of forward reference in D are accounted for and resolved by the compiler, 
 but there are a handful that it still can't handle.  Pray some day it can 
 handle them all -- here's hoping.

We shouldn't _have_ to hope. This is something that should have been fixed ages ago.
May 19 2007
prev sibling parent reply bobef <asd asd.com> writes:
I am not asking Walter to Write GUI libraries, but to fix what is preventing
other people from doing so.

And of course there are many console applications and daemons, but let's get
down to earth. In many cases GUI applications are easier to use than console ones.
So many Linux console apps are getting GUI front ends. Although many geeks
won't believe it, times are changing and the console interface has no future.


Chris Nicholson-Sauls Wrote:

 bobef wrote:
 Without any disrespect to the great work you are doing, let me put it in
another way: In order to make a good, *usable* application, is this "const
final scope" thing more important or having a good GUI library? There are not
many console applications these days, you know. And they are not very *usable*.
And, as we all know, choosing a GUI for D is not like choosing a GUI for C++.
So why instead of adding nerd stuff to the language, make it sooooo much more
*usable*, by fixing the reflection stuff that Tioport needs to produce
libraries which are not 10mb or 20mb? Or... it kind of sucks when half of my
code won't compile, or compiles in specific order only, because of some forward
reference errors. I don't even know what forward reference is, but I know that
using (simple) consts and aliases is something perfectly legal. I don't know if
this second example is more usable than the final const thing, just because I
can't think of any use of it, but this is because I rarely use fancy stuff
that breaks my code in the next version of DMD... But I am making this
example to show that D (or DMD) still have so many things to fix the way it
is now, we don't need some new fancy stuff before the old things work....
 

Well here's something: I don't /want/ Walter working on a GUI lib. That's already being taken care of, with DFL, Tioport/SWT, Harmonia, and gtkD. (Probably a few others.) Choice is good, and there it is. What I /do/ want is Walter working on the things that we've all been asking for all this long while -- and that seems to be precisely what he's doing. Some of us do still write console applications, and daemons with no direct user input at all. (I look at the loooong list of software on our Linux boxes and remark that out of all of these, precious few have any GUI at all. Mainly, Kate and KDE.) If D were to go in a direction that only focused on application development, and only on graphical apps... I'd probably have a good cry, give a salute, and just start using Ruby for everything. Meanwhile, ask me sometime about the dozens of bits of at least one of my projects the new const'ness concepts will allow me to actually /write/ in a sane way in the first place. The project is on hold, has been for quite some while, all just waiting for something like this. -- Chris Nicholson-Sauls PS - Forward reference is using a symbol before it is defined. Most cases of forward reference in D are accounted for and resolved by the compiler, but there are a handful that it still can't handle. Pray some day it can handle them all -- here's hoping.

May 19 2007
parent reply Georg Wrede <georg nospam.org> writes:
bobef wrote:
 Although many geeks won't believe it times are changing
 and the console interface has no future.

Hmm. That's as smart as saying that typewriters will obsolete the pen. They are both needed. Neither will completely replace the other.
May 19 2007
parent reply bobef <asd asd.com> writes:
I use pen twice a year. People have all this pocket PCs with digital pens and
stuff. Voice recognition is evolving too. Who knows in few years someone may
come up with a mind reading machine... I know it is hard to change one's habits
but look at things from another viewpoint. Even if you and I don't want to
give up the console (yes, I am still using it too), we are aging and the newer
people don't have such habits like consoles and pens... So these tools are
indeed fading away... Nothing is permanent, you know...

Georg Wrede Wrote:

 bobef wrote:
 Although many geeks won't believe it times are changing
 and the console interface has no future.

Hmm. That's as smart as saying that typewriters will obsolete the pen. They are both needed. Neither will completely replace the other.

May 19 2007
parent reply "Aziz K." <aziz.kerim gmail.com> writes:
On Sun, 20 May 2007 00:21:40 +0200, bobef <asd asd.com> wrote:
 bobef wrote:
 Although many geeks won't believe it times are changing
 and the console interface has no future.

Hmm. That's as smart as saying that typewriters will obsolete the pen. They are both needed. Neither will completely replace the other.


Even if every PC had voice recognition, I don't think it would be perfect, because even humans often misunderstand what another fellow meant to say. Language is pretty complex, and there can be a lot of ambiguity even in simple sentences. I wouldn't feel very comfortable about telling my PC to delete a certain folder, while it could very well destroy the wrong one. Maybe the computer could ask you to confirm the command every time before it is executed, or even allow you to edit the command in a pop-up text box before execution. I'm not sure though where the added benefit of a CPU-intensive voice-recognition program is, whilst it should be more effective to directly type the command that is on your mind in the console.

So I don't think that the console is unnecessary or that it will be replaced by anything else anytime soon. It would be better to enhance it rather than get rid of it. To my mind the console is absolutely indispensable for certain tasks, and it is often much easier to type in a series of commands than to accomplish the same with a GUI driven program. I don't think the console and the GUI are in conflict, but they very much complement each other.

When I switched to Linux I really began to appreciate how powerful a console can be, that's why I have to cringe every time I have to use the Windows abomination of a console :-)
May 20 2007
parent reply Don Clugston <dac nospam.com.au> writes:
Aziz K. wrote:
 On Sun, 20 May 2007 00:21:40 +0200, bobef <asd asd.com> wrote:
 bobef wrote:
 Although many geeks won't believe it times are changing
 and the console interface has no future.

Hmm. That's as smart as saying that typewriters will obsolete the pen. They are both needed. Neither will completely replace the other.


Even if every PC had voice recognition, I don't think it would be perfect, because even humans often misunderstand what another fellow meant to say. Language is pretty complex, and there can be a lot of ambiguity even in simple sentences. I wouldn't feel very comfortable about telling my PC to delete a certain folder, while it could very well destroy the wrong one. Maybe the computer could ask you to confirm the command every time before it is executed, or even allow you to edit the command in a pop-up text box before execution. I'm not sure though where the added benefit of a CPU-intensive voice-recognition program is, whilst it should be more effective to directly type the command that is on your mind in the console. So I don't think that the console is unnecessary or that it will be replaced by anything else anytime soon. It would be better to enhance it rather than get rid of it. To my mind the console is absolutely indispensible for certain tasks, and it is often much easier to type in a series of commands than to accomplish the same with a GUI driven program. I don't think the console and the GUI are in conflict, but they very much complement each other. When I switched to Linux I really began to appreciate how powerful a console can be, that's why I have to cringe every time I have to use the Windows abomination of a console :-)

Agreed. My experience of D has really reinforced to me that GUIs are evil for development. When developing logic or algorithms, it's a hundred times easier doing it with a console app. Run your GUI for testing and developing your GUI, and for *nothing else*.
May 20 2007
parent Sean Kelly <sean f4.ca> writes:
Don Clugston wrote:
 
 Agreed. My experience of D has really reinforced to me that GUIs are 
 evil for development. When developing logic or algorithms, it's a 
 hundred times easier doing it with a console app. Run your GUI for 
 testing and developing your GUI, and for *nothing else*.

Many of the C/C++ examples in MSDN are terrible for exactly this reason. There are perhaps 5-10 lines of useful information buried in pages of useless stuff.

What I tend to do is develop algorithms and such separately in a console app and only move it into the real app once I've got it working properly.

Sean
May 20 2007
prev sibling next sibling parent Ary Manzana <ary esperanto.org.ar> writes:
I agree with you (although I don't like your tone, but it's 
understandable).

I think D has in mind this sentence: "D is a language you should feel 
very comfortable in when writing applications, and that lets the user 
focus on his problem instead of fighting with the compiler". (I think I 
can recall a sentence like this one in a video about D given by Walter).

Well... with the problem of forward references this is not true. You 
focus on your problem until the compiler can't compile your code because 
you are using forward references. Then you have to fight with the 
compiler, try to hack it and trick it, to get your way. The const, 
final, scope keywords will surely help make code safer, but their lack 
doesn't prevent users from compiling their programs.

bobef escribió:
 Without any disrespect to the great work you are doing, let me put it in
another way: In order to make a good, *usable* application, is this "const
final scope" thing more important or having a good GUI library? There are not
many console applications these days, you know. And they are not very *usable*.
And, as we all know, choosing a GUI for D is not like choosing a GUI for C++.
So why instead of adding nerd stuff to the language, make it sooooo much more
*usable*, by fixing the reflection stuff that Tioport needs to produce
libraries which are not 10mb or 20mb? Or... it kind of sucks when half of my
code won't compile, or compiles in specific order only, because of some forward
reference errors. I don't even know what forward reference is, but I know that
using (simple) consts and aliases is something perfectly legal. I don't know if
this second example is more usable than the final const thing, just because I
can't think of any use of it, but this is because I rarely use fancy stuff
that breaks my code in the next version of DMD... But I am making this
example to show that D (or DMD) still have so many things to fix the way it
is now, we don't need some new fancy stuff before the old things work....
 
 
 Walter Bright Wrote:
 
 This is coming for the D 2.0 beta, and it will need some source code 
 changes. Specifically, for function parameters that are arrays or 
 pointers, start using 'in' for them.

 'in' will mean 'scope const final', which means:

 final - the parameter will not be reassigned within the function
 const - the function will not attempt to change the contents of what is 
 referred to
 scope - the function will not keep a reference to the parameter's data 
 that will persist beyond the scope of the function

 For example:

 int[] g;

 void foo(in int[] a)
 {
      a = [1,2];	// error, a is final
      a[1] = 2;   // error, a is const
      g = a;	// error, a is scope
 }

 Do not use 'in' if you wish to do any of these operations on a 
 parameter. Using 'in' has no useful effect on D 1.0 code, so it'll be 
 backwards compatible.

 Adding in all those 'in's is tedious, as I'm finding out :-(, but I 
 think the results will be worth the effort.


May 19 2007
prev sibling parent reply BLS <nanali yyy.fr> writes:
I agree. 
(and I am afraid that reflection will take a long, long time)
Bjoern

bobef Wrote:

 Without any disrespect to the great work you are doing, let me put it in
another way: In order to make a good, *usable* application, is this "const
final scope" thing more important or having a good GUI library? There are not
many console applications these days, you know. And they are not very *usable*.
And, as we all know, choosing a GUI for D is not like choosing a GUI for C++.
So why instead of adding nerd stuff to the language, make it sooooo much more
*usable*, by fixing the reflection stuff that Tioport needs to produce
libraries which are not 10mb or 20mb? Or... it kind of sucks when half of my
code won't compile, or compiles in specific order only, because of some forward
reference errors. I don't even know what forward reference is, but I know that
using (simple) consts and aliases is something perfectly legal. I don't know if
this second example is more usable than the final const thing, just because I
can't think of any use of it, but this is because I rarely use fancy stuff that
breaks my code in the next version of DMD... But I am making this example to
show that D (or DMD) still have so many things to fix the way it is now, we
don't need some new fancy stuff before the old things work....
 

May 19 2007
parent reply Brad Roberts <braddr puremagic.com> writes:
Not that this debate has any value, since const, et al are currently 
being implemented and that's not remotely likely to change, but here's 
my opinion, for what it's worth:

For any non-trivial application, const (and its stronger brother 
invariant) is a must to allow me and the people I work with to both 
specify the behavior of routines in their signature, and to actually 
provide enforcement.  I can't consider a language usable unless it can 
make these sorts of guarantees without stupid workarounds like copying 
data left and right.  It's an interesting toy, but it 
can't graduate to anything besides toy apps.  I recognize that non-toy 
apps have been written with D, and that's great.  So, for me, const is 
an 'every app' layer feature, or at least any library / application of 
significant size.  There's no way in hell that 'const' can be called a 
nerd feature.

Reflection, while a very very powerful feature, is something that's 
useful in a subset of applications.  Additionally, it's not like 
reflection has been totally ignored.  The features of it have _also_ 
been increasing over time.  D might not match Java yet, but few features 
of this weight are black and white.  Between CTFE, Mixins, and the 
compile time reflection that's already available, I suspect it's 
possible to write a reflection library that covers a high percentage of 
java's functionality.  I highly encourage anyone who is in serious need 
of runtime reflection take a stab at it and report back a prioritized 
list of what's missing.

Later,
Brad

BLS wrote:
 I agree. (and I am afraid that reflection will take a long, long time) 
 Bjoern
 
 bobef Wrote:
 
 Without any disrespect to the great work you are doing, let me put
 it in another way: In order to make a good, *usable* application,
 is this "const final scope" thing more important or having a good
 GUI library? There are not many console applications these days,
 you know. And they are not very *usable*. And, as we all know,
 choosing a GUI for D is not like choosing a GUI for C++. So why
 instead of adding nerd stuff to the language, make it sooooo much
 more *usable*, by fixing the reflection stuff that Tioport needs to
 produce libraries which are not 10mb or 20mb? Or... it kind of
 sucks when half of my code won't compile, or compiles in specific
 order only, because of some forward reference errors. I don't even
 know what forward reference is, but I know that using (simple)
 consts and aliases is something perfectly legal. I don't know if
 this second example is more usable than the final const thing, just
 because I can't think of any use of it, but this is because I
 rarely use fancy stuff that breaks my code in the next version of
 DMD... But I am making this example to show that D (or DMD) still
 have so many things to fix the way it is now, we don't need some
 new fancy stuff before the old things work....
 


May 19 2007
parent Walter Bright <newshound1 digitalmars.com> writes:
Brad Roberts wrote:
 For any non-trivial application, const (and it's stronger brother 
 invariant) is a must to allow me and the people I work with to both 
 specify the behavior of routines in their signature, and to actually 
 provide enforcement.

The combination of const, invariant, final, and scope will allow one to be much more expressive about the intended usage of a variable than C++ allows, along with enforcement of it. One downside is that there's no dipping one's toe into const-correctness; it's got to be done whole hog.
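As a rough sketch of that extra expressiveness (hypothetical D 2.0 syntax as proposed in this thread, not valid D 1.0; C++ can mark contents const, but has no way to say a reference won't be retained):

```d
int[] g;

void foo(in int[] a)  // in = scope const final
{
    // a = [1, 2];    // error: final, the parameter can't be rebound
    // a[1] = 2;      // error: const, the contents can't be changed
    // g = a;         // error: scope, the reference can't escape
    g = a.dup;        // ok: a copy may escape
}
```

The last line reflects Walter's earlier answer to Frank Benoit: copying the data out is allowed; only keeping the original reference is not.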
May 19 2007
prev sibling next sibling parent reply torhu <fake address.dude> writes:
Walter Bright wrote:
 This is coming for the D 2.0 beta, and it will need some source code 
 changes. Specifically, for function parameters that are arrays or 
 pointers, start using 'in' for them.
 
 'in' will mean 'scope const final', which means:
 
 final - the parameter will not be reassigned within the function
 const - the function will not attempt to change the contents of what is 
 referred to
 scope - the function will not keep a reference to the parameter's data 
 that will persist beyond the scope of the function
 
 For example:
 
 int[] g;
 
 void foo(in int[] a)
 {
      a = [1,2];	// error, a is final
      a[1] = 2;   // error, a is const
      g = a;	// error, a is scope
 }
 
 Do not use 'in' if you wish to do any of these operations on a 
 parameter. Using 'in' has no useful effect on D 1.0 code, so it'll be 
 backwards compatible.
 
 Adding in all those 'in's is tedious, as I'm finding out :-(, but I 
 think the results will be worth the effort.

If you've got a C library with a header file containing this:

// C header file
void f(const char* p);

Is there any reason why you should think twice before turning it into this D code, and linking it with the C library, not knowing anything about the implementation of f?

// D import module
extern (C) void f(in char* p);

Without reading the docs for f, would it be better to just go with 'const'? In that case it won't be backwards compatible with D 1.0 anymore. If I get the meaning of 'scope' correctly, that's the one that can cause problems here.
May 20 2007
parent Walter Bright <newshound1 digitalmars.com> writes:
torhu wrote:
 If you've got a C library with a header file containing this:
 
 // C header file
 void f(const char* p);
 
 Is there any reason why you should think twice before turning it into 
 this D code, and link it with the C library, not knowing anything about 
 the implementation of f?
 
 // D import module
 extern (C) void f(in char* p);
 
 
 Without reading the docs for f, would it be better to just go with 
 'const'?  In that case it won't be backwards compatible with D 1.0 
 anymore.  If I get the meaning of 'scope' correctly, that's the one that 
 can cause problems here.

I'd go with:

f(const char* p);

because that expresses the C semantics. Using 'in' will cause problems if f() stores p somewhere.
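A sketch of the hazard (the C function name here is hypothetical; the point is only that the C side may retain the pointer):

```d
// C side, for illustration: stores 'name' in a global for later use.
//   static const char* g_name;
//   void set_app_name(const char* name) { g_name = name; }

extern (C) void set_app_name(const char* name); // matches the C contract

// Declaring it instead as:
//   extern (C) void set_app_name(in char* name);
// would add 'scope', promising the pointer does not outlive the call,
// a promise the C implementation above silently breaks.
```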
May 20 2007
prev sibling next sibling parent reply Regan Heath <regan netmail.co.nz> writes:
Walter Bright Wrote:
 Do not use 'in' if you wish to do any of these operations on a 
 parameter. Using 'in' has no useful effect on D 1.0 code, so it'll be 
 backwards compatible.
 
 Adding in all those 'in's is tedious, as I'm finding out :-(, but I 
 think the results will be worth the effort.

Perhaps I have missed the discussion (being away for the last 7 months) of why we don't want 'scope const final' applied to implicit 'in' parameters, as opposed to requiring the explicit 'in' which Walter is proposing.

To explain... currently a parameter is implicitly 'in', i.e.

void foo(int a) {} //a is 'in'

Why not make this implicit 'in' parameter 'scope const final', avoiding the need to explicitly say 'in' everywhere?

My reasoning is that I think 'scope const final' should be the default for parameters, as it's the most commonly used and safest option. People should use it by default/accident and should have to explicitly opt out in cases where it makes sense. These cases would then be clearly, visibly marked with 'out', 'ref', etc.

From what I can see, we currently have 2 reasons to require explicit 'in' (from Walter's post and others in this thread):

1. Avoid breaking backward compatibility with D 1.0.
2. Parameters will all have parameter specifiers, and pedantic people (no offence intended) will enjoy fully specified function parameters all the time.

If so, isn't part of the point of making this change in a beta that we can ignore reason #1? Granted, if the feature was integrated into the mainstream version it would then break backward compatibility, but imagine the situation: old code would cease to compile and would need modification, but the modifications required would for the most part simply be the addition of 'out', 'ref', or an added .dup or copy, which, in actual fact, should have been there in the first place. Meaning the existing code is unsafe, and the resulting code after these changes would be much safer. In other words, applying 'scope const final' to implicit 'in' will catch existing bugs, but requiring explicit 'in' will not, right?

As for reason #2, I think it's largely aesthetic. Applying it to implicit 'in' is less typing for those non-pedantic programmers who can live without all parameters having a specifier. I personally don't think the presence of an explicit 'in' results in clearer code, as it is clear to me that unspecified parameters are 'in' (and with these changes would be 'scope const final' too).

Regan Heath
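As a sketch of the proposal (hypothetical syntax based on this thread, not valid D 1.0; the exact opt-out keywords are whatever the thread settles on):

```d
// Proposed: an unannotated parameter defaults to scope const final.
void print(char[] s)      // read-only use: nothing extra to write
{
    // s = null;          // would be an error: final
    // s[0] = 'x';        // would be an error: const
}

// Opting out is explicit, so mutation is visible in the signature:
void clear(ref char[] s)  // caller's variable is rebound
{
    s = null;
}
```

The safe behaviour costs no keystrokes; the unsafe behaviour is the one that has to be spelled out.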
May 20 2007
next sibling parent Frank Benoit <keinfarbton googlemail.com> writes:
I second that.
Make the safe thing the default, and the unsafe the explicit case.
May 20 2007
prev sibling next sibling parent Bill Baxter <dnewsgroup billbaxter.com> writes:
Regan Heath wrote:
 Walter Bright Wrote:
 Perhaps I have missed the discussion (being away for the last 7 months) which
discussed why we don't want 'scope const final' applied to implicit 'in'
parameters?

Makes sense to me too. I'd at least like to try it out for a few months and see how the shoe fits.

I found this big thread about "const by default":
http://lists.puremagic.com/pipermail/digitalmars-d/2006-July/005626.html

--bb
May 20 2007
prev sibling next sibling parent reply =?ISO-8859-1?Q?Manuel_K=F6nig?= <ManuelK89 gmx.net> writes:
I second this.

Doing it this way 'in' also keeps its expressive character of saying 
"Hey, I am only the input and not that bunch of scope const final!", 
which especially makes sense when compared to 'out' in terms of data 
flow. And dismissing all of 'scope const final' just requires you to 
declare your params as 'in', which will rarely be the case.
May 20 2007
next sibling parent reply Bill Baxter <dnewsgroup billbaxter.com> writes:
Manuel König wrote:
 I second this.
 
 Doing it this way 'in' also keeps its expressive character of saying 
 "Hey, I am only the input and not that bunch of scope const final!", 
 which especially makes sense when compared to 'out' in terms of data 
 flow. And dismissing all of 'scope const final' just requires you to 
 declare your params as 'in', which will rarely be the case.

Does nobody quote any more? What are you seconding? --bb
May 20 2007
next sibling parent =?ISO-8859-1?Q?Manuel_K=F6nig?= <ManuelK89 gmx.net> writes:
Bill Baxter wrote:
 Manuel König wrote:
 I second this.

 Doing it this way 'in' also keeps its expressive character of saying 
 "Hey, I am only the input and not that bunch of scope const final!", 
 which especially makes sense when compared to 'out' in terms of data 
 flow. And dismissing all of 'scope const final' just requires you to 
 declare your params as 'in', which will rarely be the case.

Does nobody quote any more? What are you seconding? --bb

I'm seconding the whole proposal. Quoting something didn't come to mind because I'm not sticking to anything in particular, but to the whole thing :P Anyhow, are there thoughts or comments from anyone who does/does not like the behaviour of an omitted 'in' meaning 'scope const final'? Otherwise the 'in' behaviour proposed by Regan should really be part of the language, IMHO. greetings, manuel
May 20 2007
prev sibling parent reply Regan Heath <regan netmail.co.nz> writes:
Bill Baxter Wrote:
 Manuel König wrote:
 I second this.
 
 Doing it this way 'in' also keeps its expressive character of saying 
 "Hey, I am only the input and not that bunch of scope const final!", 
 which especially makes sense when compared to 'out' in terms of data 
 flow. And dismissing all of 'scope const final' just requires you to 
 declare your params as 'in', which will rarely be the case.

Does nobody quote any more? What are you seconding?

Are you using a news reader which displays posts in threads? When you're not, it can be annoying to find a post with no quotation, I agree. The web interface (which I am using until I have my own PC) seems to have trouble correctly threading all the posts too. Opera didn't seem to have any trouble when I last used it. I suspect either some of our newsreaders/posters do not correctly format the headers in replies, and/or the web interface isn't looking for some headers which Opera did, and/or there is some clever trick Opera used to thread them correctly. Anyway, enough rambling. ;) Regan Heath
May 20 2007
parent Bill Baxter <dnewsgroup billbaxter.com> writes:
Regan Heath wrote:
 Bill Baxter Wrote:
 Manuel König wrote:
 I second this.

 Doing it this way 'in' also keeps its expressive character of saying 
 "Hey, I am only the input and not that bunch of scope const final!", 
 which especially makes sense when compared to 'out' in terms of data 
 flow. And dismissing all of 'scope const final' just requires you to 
 declare your params as 'in', which will rarely be the case.

Does nobody quote any more? What are you seconding?

Are you using a news reader which displays posts in threads? When you're not it can be annoying to find a post with no quotation, I agree.

I use Thunderbird, but since the D groups have so much traffic, and since I read it from multiple different computers, I end up reading in sort-by-date mode so that all the recent stuff is at the bottom and easy to find. Oh for the day when I can store my Thunderbird settings on the net. I wonder if there's a Google Thunderbird Sync plugin on the way... --bb
May 20 2007
prev sibling parent reply Regan Heath <regan netmail.co.nz> writes:
Manuel König Wrote:
 Doing it this way 'in' also keeps its expressive character of saying 
 "Hey, I am only the input and not that bunch of scope const final!", 
 which especially makes sense when compared to 'out' in terms of data 
 flow. And dismissing all of 'scope const final' just requires you to 
 declare your params as 'in', which will rarely be the case.

To clarify, I was actually proposing that 'in' would be 'scope const final' and there would be no difference between explicit and implicit 'in'. I think it's a bad idea to have implicit and explicit 'in' mean different things; it would just be confusing to me.

That said, you could decide/declare that:

void foo(int a) {}

was actually equivalent to:

void foo(scope const final int a) {}

and that:

void foo(in int a) {}

was something entirely different.

In that case, what do you want 'in' to mean? I get the impression you'd like it to behave as the current implicit/explicit 'in' does, right?

My only question is: why do you want the current behaviour?

I'm guessing you want to be able to do the things that 'scope const final' will protect against, but surely those things are dangerous and shouldn't be done?

Is there a specific case you are thinking of where you need to do these things? One where there is no decent work-around in the presence of 'scope const final' as default?

Regan Heath
May 20 2007
parent =?ISO-8859-1?Q?Manuel_K=F6nig?= <ManuelK89 gmx.net> writes:
Regan Heath wrote:
 Manuel König Wrote:
 Doing it this way 'in' also keeps its expressive character of saying 
 "Hey, I am only the input and not that bunch of scope const final!", 
 which especially makes sense when compared to 'out' in terms of data 
 flow. And dismissing all of 'scope const final' just requires you to 
 declare your params as 'in', which will rarely be the case.

To clarify, I was actually proposing that 'in' would be 'scope const final' and there would be no difference between explicit and implicit 'in'. I think it's a bad idea to have implicit and explicit 'in' mean different things, it would just be confusing to me.

Yes, the difference between implicit and explicit 'in' can be confusing. But that's only because we're used to data 'flowing in' to a function, and marking it 'out' if that's not the case. In fact, either 'in' or 'out' is obligatory for every parameter (when in/out is considered as data flow). So you could say it is only by accident that 'in' became the default when not explicitly specified.
 That said, you could decide/declare that:
 
   void foo(int a) {}
 
 was actually equivalent to:
 
   void foo(scope const final int a) {}
 
 and that:
 
   void foo(in int a) {}
 
 was something entirely different.

True.
 
 In that case what do you want 'in' it to mean?  I get the impression you'd
like it to behave as the current implicit/explicit 'in' does, right?
 

Yes, I "like" it. But I would put it more like 'in' and 'out' are two complementary attributes where one has to be present.
 My only question is, why do you want the current behaviour?  
 
 I'm guessing you want to be able to do the things that 'scope const final'
will protect against, but surely those things are dangerous and shouldn't be
done?
 

Yes, that's exactly what I want. I think everyone can agree that 'scope' and 'const' are not always appreciated. So the one left is the 'final' attribute. Ok, I think I really can live with declaring my params 'final' when I don't want all the other things. But sometimes it would be nice if the parameter could be reassigned to something, like this:

void log(LogObject msg)
{
    msg.isLogged = true;   // => no const
    lastLoggedMsg = msg;   // => no scope

    // preprocessing => no final
    if (systemIsServer)
    {
        msg = "Server-log: " ~ msg;
    }
    else
    {
        msg = "Client-log: " ~ msg;
    }

    /* actually doing something with msg */
    writef(logfile, msg);
}

Here msg gets preprocessed first, and then used by whatever you want (in this case it gets logged). If you could not rebind 'msg' to another symbol, you would have to declare a new variable, think of a new name for it, e.g. 'msg_ex', and finally assign the preprocessing results to it. But that seems to me like a workaround for not being able to just reassign 'msg', and it also bloats your code.

Not much of an argument? Declaring new variables really isn't that bad compared to all the trouble you run into when code gets rewritten and suddenly something does not work, because you assign to a renamed parameter that was a local variable before? Ok, I hear you. Maybe 'in' being 'scope final const' isn't that bad at all :). I'm just stuck on 'in' meaning more than just the data flow.

In the end I would prefer 'in' being only 'in' a little bit over 'scope final const (in)', because there seems to be no reason to use 'in' when I have the opportunity to just write nothing. And when I would use 'in', I would seriously know what I'm typing there! But that's only my personal opinion. I would be totally fine with 'in' meaning 'scope const final'.
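The new-variable work-around could be sketched like this (illustrative only, using a plain char[] in place of LogObject; names are hypothetical):

```d
char[] lastLoggedMsg;

void log(char[] msg)  // under the proposal: scope const final by default
{
    lastLoggedMsg = msg.dup;  // keeping a copy is fine; the reference isn't

    // 'final' forbids rebinding 'msg', so build the result in a local:
    char[] fullMsg = "Server-log: " ~ msg;

    // ... write fullMsg to the log file ...
}
```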
 Is there a specific case you are thinking of where you need to do these
things?  One where there is no decent work-around in the presense of 'scope
const final' as default?
 

Look above. (ok, there actually IS a decent work-around... :) )
 Regan Heath

May 20 2007
prev sibling next sibling parent Johan Granberg <lijat.meREM OVEgmail.com> writes:
Regan Heath wrote:

 Walter Bright Wrote:
 Do not use 'in' if you wish to do any of these operations on a
 parameter. Using 'in' has no useful effect on D 1.0 code, so it'll be
 backwards compatible.
 
 Adding in all those 'in's is tedious, as I'm finding out :-(, but I
 think the results will be worth the effort.

Perhaps I have missed the discussion (being away for the last 7 months) of why we don't want 'scope const final' applied to implicit 'in' parameters, as opposed to requiring the explicit 'in' which Walter is proposing.

To explain... currently a parameter is implicitly 'in', i.e.

void foo(int a) {} //a is 'in'

Why not make this implicit 'in' parameter 'scope const final', avoiding the need to explicitly say 'in' everywhere?

My reasoning is that I think 'scope const final' should be the default for parameters, as it's the most commonly used and safest option. People should use it by default/accident and should have to explicitly opt out in cases where it makes sense. These cases would then be clearly, visibly marked with 'out', 'ref', etc.

From what I can see, we currently have 2 reasons to require explicit 'in' (from Walter's post and others in this thread):

1. Avoid breaking backward compatibility with D 1.0.
2. Parameters will all have parameter specifiers, and pedantic people (no offence intended) will enjoy fully specified function parameters all the time.

If so, isn't part of the point of making this change in a beta that we can ignore reason #1? Granted, if the feature was integrated into the mainstream version it would then break backward compatibility, but imagine the situation: old code would cease to compile and would need modification, but the modifications required would for the most part simply be the addition of 'out', 'ref', or an added .dup or copy, which, in actual fact, should have been there in the first place. Meaning the existing code is unsafe, and the resulting code after these changes would be much safer. In other words, applying 'scope const final' to implicit 'in' will catch existing bugs, but requiring explicit 'in' will not, right?

As for reason #2, I think it's largely aesthetic. Applying it to implicit 'in' is less typing for those non-pedantic programmers who can live without all parameters having a specifier. I personally don't think the presence of an explicit 'in' results in clearer code, as it is clear to me that unspecified parameters are 'in' (and with these changes would be 'scope const final' too).

Regan Heath

I agree with this, const by default is a real good idea (TM)
May 20 2007
prev sibling next sibling parent reply Derek Parnell <derek nomail.afraid.org> writes:
On Sun, 20 May 2007 14:53:54 -0400, Regan Heath wrote:

 Walter Bright Wrote:
 Do not use 'in' if you wish to do any of these operations on a 
 parameter. Using 'in' has no useful effect on D 1.0 code, so it'll be 
 backwards compatible.
 
 Adding in all those 'in's is tedious, as I'm finding out :-(, but I 
 think the results will be worth the effort.


...
 Why not make this implicit 'in' parameter 'scope const final' avoiding
 the need to explicity say 'in' everywhere?
 
 My reasoning is that I think 'scope const final' should be the default
 for parameters as it's the most commonly used and safest option for
 parameters.  People should use it by default/accident and should have
 to explicitly opt out in cases where it makes sense.  These cases
 would then be clearly, visibly marked with 'out', 'ref' etc

Thanks Regan, your proposal sits very comfortably with me. I have no problems with adding the 'ref' (whatever) keyword in those cases where an implicit 'in' (a.k.a. 'scope const final') causes the compiler to complain. -- Derek (skype: derek.j.parnell) Melbourne, Australia "Justice for David Hicks!" 21/05/2007 9:59:51 AM
May 20 2007
parent reply Daniel Keep <daniel.keep.lists gmail.com> writes:
Derek Parnell wrote:
 On Sun, 20 May 2007 14:53:54 -0400, Regan Heath wrote:
 
 Walter Bright Wrote:
 Do not use 'in' if you wish to do any of these operations on a 
 parameter. Using 'in' has no useful effect on D 1.0 code, so it'll be 
 backwards compatible.

 Adding in all those 'in's is tedious, as I'm finding out :-(, but I 
 think the results will be worth the effort.


....
 Why not make this implicit 'in' parameter 'scope const final' avoiding
 the need to explicity say 'in' everywhere?

 My reasoning is that I think 'scope const final' should be the default
 for parameters as it's the most commonly used and safest option for
 parameters.  People should use it by default/accident and should have
 to explicitly opt out in cases where it makes sense.  These cases
 would then be clearly, visibly marked with 'out', 'ref' etc

Thanks Regan, your proposal sits very comfortably with me. I have no problems with adding the 'ref' (whatever) keyword in those cases where an implicit 'in' (a.k.a. 'scope const final') causes the compiler to complain.

The only thing I'm concerned about is having a way of specifying "not scope const final". I'm happy to have "safe-by-default", but there should be a way to escape it.

Maybe lack of type annotations on an argument could be taken as "scope const final"; adding any annotations manually disables this (so if you use "const int foo", then it really means "const int foo" and not "scope const final const int foo").

Then, we can use "in" to mean "just in; nothing else."

-- 

int getRandomNumber()
{
    return 4; // chosen by fair dice roll.
              // guaranteed to be random.
}

http://xkcd.com/

v2sw5+8Yhw5ln4+5pr6OFPma8u6+7Lw4Tm6+7l6+7D i28a2Xs3MSr2e4/6+7t4TNSMb6HTOp5en5g6RAHCP http://hackerkey.com/
May 20 2007
parent reply Bill Baxter <dnewsgroup billbaxter.com> writes:
Daniel Keep wrote:
 
 Derek Parnell wrote:
 On Sun, 20 May 2007 14:53:54 -0400, Regan Heath wrote:

 Walter Bright Wrote:
 Do not use 'in' if you wish to do any of these operations on a 
 parameter. Using 'in' has no useful effect on D 1.0 code, so it'll be 
 backwards compatible.

 Adding in all those 'in's is tedious, as I'm finding out :-(, but I 
 think the results will be worth the effort.


....
 Why not make this implicit 'in' parameter 'scope const final' avoiding
 the need to explicity say 'in' everywhere?

 My reasoning is that I think 'scope const final' should be the default
 for parameters as it's the most commonly used and safest option for
 parameters.  People should use it by default/accident and should have
 to explicitly opt out in cases where it makes sense.  These cases
 would then be clearly, visibly marked with 'out', 'ref' etc

Thanks Regan, your proposal sits very comfortably with me. I have no problems with adding the 'ref' (whatever) keyword in those cases where an implicit 'in' (a.k.a. 'scope const final') causes the compiler to complain.

The only thing I'm concerned about is having a way of specifying "not scope const final". I'm happy to have "safe-by-default", but there should be a way to escape it. Maybe lack of type annotations on an argument could be taken as "scope const final"; adding any annotations manually disables this (so if you use "const int foo", then it really means "const int foo" and not "scope const final const int foo"). Then, we can use "in" to mean "just in; nothing else."

Ideally, starting from a blank slate, I'd say 'inout' should remove the const/final/scope-ness, instead of what it does now, which is to make the *pointer* to the object itself modifiable. Then 'ref' could be used to mean 'I want to pass the pointer by reference'. So....

void foobulate(MyObject o)
{
    o = new MyObject(); // bad
    o.member = 42;      // bad
}

void foobulate(inout MyObject o)
{
    o = new MyObject(); // bad!
    o.member = 42;      // ok!
}

void foobulate(ref MyObject o)
{
    o = new MyObject(); // ok!
    o.member = 42;      // ok!
}

But that's just my top-of-the-head reaction. I'm sure there are ramifications that I haven't considered.

I just wouldn't like to see 'in' take on a meaning other than "this parameter is being passed _in_ and you shouldn't expect to be able to get any information _out_ using it". If it has a meaning of "it's ok to modify the thing being passed in", then pretty much the only time it will be used is when you are interested in the modification, so basically 'in' would mean "we want to get something out", which is nonsensical.

--bb
May 20 2007
parent Walter Bright <newshound1 digitalmars.com> writes:
Bill Baxter wrote:
 I just wouldn't like to see 'in' take on a meaning other than "this 
 parameter is being passed _in_ and you shouldn't expect to be able to 
 get any information _out_ using it"

I also think that would be a disastrously confusing change from what people having been currently using 'in' for. It'd be like wearing those funny glasses that turn the world upside down.
May 20 2007
prev sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
Regan Heath wrote:
 Why not make this implicit 'in' parameter 'scope const final' avoiding the
need to explicity say 'in' everywhere?

It's a good idea, but then there needs to be some way to pass a mutable reference. Such as:

class C { int x; }
void foo(C c)
{
    c.x = 3;
}

That doesn't work if 'const' is the default. Using out or inout doesn't work either, as those have slightly different semantics (an extra level of indirection).
May 20 2007
next sibling parent reply Derek Parnell <derek nomail.afraid.org> writes:
On Sun, 20 May 2007 20:10:44 -0700, Walter Bright wrote:

 Regan Heath wrote:
 Why not make this implicit 'in' parameter 'scope const final' avoiding the
need to explicity say 'in' everywhere?

It's a good idea, but then there needs to be some way to pass a mutable reference. Such as:

class C { int x; }
void foo(C c)
{
    c.x = 3;
}

That doesn't work if 'const' is the default. Using out or inout doesn't work either, as those have slightly different semantics (an extra level of indirection).

What about 'const ref' meaning that a reference is being passed, and that reference is constant but not the data being referred to...?

class C { int x; }
void foo(const ref C c)
{
    c.x = 3;   // okay
    c = new C; // fail
}

I know I wouldn't mind changing my code to do this. It is just a lot safer and I'd rather "get it right" now than some years down the road with D.

-- 
Derek
(skype: derek.j.parnell)
Melbourne, Australia
"Justice for David Hicks!"
21/05/2007 1:53:21 PM
May 20 2007
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Derek Parnell wrote:
 What about 'const ref' meaning that a reference is being passed and that
 reference is constant but not the data being referred to...?
 
  class C { int x; }
  void foo(const ref C c)
  {
       c.x = 3; // okay
       c = new C; // fail
  }

The trouble is that ref adds an extra level of indirection, and it would be confusing to say that in this case it didn't. Another option is to reuse 'inout' to mean 'mutable', since 'inout' is replaced by 'ref'.
May 20 2007
parent reply Bill Baxter <dnewsgroup billbaxter.com> writes:
Walter Bright wrote:
 Derek Parnell wrote:
 What about 'const ref' meaning that a reference is being passed and that
 reference is constant but not the data being referred to...?

  class C { int x; }
  void foo(const ref C c)
  {
       c.x = 3; // okay
       c = new C; // fail
  }

The trouble is that ref adds an extra level of indirection, and it would be confusing to say that in this case it didn't. Another option is to reuse 'inout' to mean 'mutable', since 'inout' is replaced by 'ref'.

...which is what my last message was suggesting. Any reason why that wouldn't work? There is the question of what would happen to "out" and how you'd get out behavior applied to the pointer rather than the value. And while "mutable" is on the table, is D going to have a story for private mutable members that don't affect the interface? Like the classic private mutable cache member in C++. --bb
May 20 2007
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Bill Baxter wrote:
 Walter Bright wrote:
 Another option is to reuse 'inout' to mean 'mutable', since 'inout' is 
 replaced by 'ref'.

...which is what my last message was suggesting.

You're right, I read your posting too quickly.
 Any reason why that wouldn't work?

Breaking existing code.
 There is the question of what would happen to "out" and 
 how you'd get out behavior applied to the pointer rather than the value.

I'd leave out as it is.
 And while "mutable" is on the table, is D going to have a story for 
 private mutable members that don't affect the interface?  Like the 
 classic private mutable cache member in C++.

Ah, the "logical constness" design pattern. I personally loathe that <g>. Const but mutable data just smacks of being far too clever.
May 21 2007
parent reply Regan Heath <regan netmail.co.nz> writes:
Walter Bright Wrote:
 Bill Baxter wrote:
 Walter Bright wrote:
 Another option is to reuse 'inout' to mean 'mutable', since 'inout' is 
 replaced by 'ref'.

...which is what my last message was suggesting.

You're right, I read your posting too quickly.
 Any reason why that wouldn't work?

Breaking existing code.
 There is the question of what would happen to "out" and 
 how you'd get out behavior applied to the pointer rather than the value.

I'd leave out as it is.
 And while "mutable" is on the table, is D going to have a story for 
 private mutable members that don't affect the interface?  Like the 
 classic private mutable cache member in C++.

Ah, the "logical constness" design pattern. I personally loathe that <g>. Const but mutable data just smacks of being far too clever.

My feeling is that if we have 'scope const final' as the implicit default, then we do need some way to escape it, as we've all suggested. I think the best way is as Daniel suggested: any explicit keyword overrides the implicit/default ones, so:

void foo(int i) {}       // scope, const, final
void foo(const int i) {} // just const

..etc.. So, that just leaves the problem you (Walter) proposed of:
class C { int x; }
void foo(C c)
{
     c.x = 3;
}

and being able to pass a mutable reference.

Would this reference be 'scope' or 'final'? My understanding is that 'final' means the reference itself could not be changed, and 'scope' means an outside reference cannot be given its value. It seems to me you want both of these ('scope' because the reference will persist outside the function and 'final' because the very point of 'ref' is to be able to modify the reference) except in cases where you pass it by 'ref', in which case you want neither.

Assuming we want 'scope' and 'final' applied to these mutable references, then I dislike re-using 'inout' because (and perhaps this is ingrained thinking due to having used inout and out) the 'out' part of the keyword doesn't immediately appear to be happening. We're not setting the reference to something which is then used 'out'side the function; instead (as Bill mentioned) we're changing them internally, and only in this way is it reflected outside the function. I think I'd prefer to use a new keyword like 'mutable', which in our case would be a shortcut for 'scope final'.

In a general sense it seems we have 2 classes of keyword here, the base ones:

const
final
scope
ref

and these handy, shortcut, combination ones:

in (default) = const, scope, final
mutable = scope, final

The question I think we need to ask before we decide what keyword to use is: do we want/need to have opposites for all the base keywords? Or do we want to use !<keyword>? Or do we want something else? I dislike !<keyword> purely for aesthetic reasons; to me it looks *ick*. So, if we had opposites, what would they be?

const - mutable?
scope - global?
final - mutable?

I seem to have hit a little wall here: we can't use mutable for both the opposite of const and of final, and then also for the combination of 'scope final', can we?

It seems I have asked more questions than given answers; hopefully someone else can come up with a few solutions :)

Regan Heath
May 21 2007
next sibling parent reply Regan Heath <regan netmail.co.nz> writes:
Regan Heath Wrote:
 It seems to me you want both of these ('scope' because the reference will
persist outside the function and 'final' because the very point of 'ref' is to
be able to modify the reference) except in cases where you pass it by 'ref', in
which case you want neither.  

Re-reading this it appears I have made a mistake and worded it terribly to boot.

To clarify, what I was trying to say is twofold:

1. Because we have 'ref' as an option, in the cases where we do not use 'ref' we do not need to modify the reference and therefore it should be 'final'.

2. Because the reference is not passed by 'ref' it is a copy and will not persist outside the function and therefore is 'scope'.

In short, unless you use 'ref' you want 'scope final' applied to these references.

Fingers crossed I haven't made any more mistakes there.

Regan Heath
May 21 2007
parent reply gareis <dhasenan gmail.com> writes:
== Quote from Regan Heath (regan netmail.co.nz)'s article
 Regan Heath Wrote:
 It seems to me you want both of these ('scope' because the reference will


persist outside the function and 'final' because the very point of 'ref' is to be able to modify the reference) except in cases where you pass it by 'ref', in which case you want neither.
 Re-reading this it appears I have made a mistake and worded it terribly to
boot.

To clarify...
 What I was trying to say is twofold:
 1. Because we have 'ref' as an option then in the cases where we do not use

'ref' we do not need to modify the reference and therefore it should be 'final'.
 2. Because the reference is not passed by 'ref' it is a copy and will not

persist outside the function and therefore is 'scope'
 In short, unless you use 'ref' you want 'scope final' applied to these
references.
 Fingers crossed I haven't made any more mistakes there.
 Regan Heath

So wait...if I have a ref parameter, can I change the value of the reference locally without global changes?

I like passing mutable copies of references. It's simple and expected behavior that I can count on.

So will there be syntax that, for example, would give me the following?

---
void func(char[] a) {
   a = a[1..$]; // good
   a[1] = 'f';  // error
}
---

For that, I'd just use final, correct?
May 24 2007
parent Regan Heath <regan netmail.co.nz> writes:
gareis Wrote:
 == Quote from Regan Heath (regan netmail.co.nz)'s article
 Regan Heath Wrote:
 It seems to me you want both of these ('scope' because the reference will


persist outside the function and 'final' because the very point of 'ref' is to be able to modify the reference) except in cases where you pass it by 'ref', in which case you want neither.
 Re-reading this it appears I have made a mistake and worded it terribly to
boot.

To clarify...
 What I was trying to say is twofold:
 1. Because we have 'ref' as an option then in the cases where we do not use

'ref' we do not need to modify the reference and therefore it should be 'final'.
 2. Because the reference is not passed by 'ref' it is a copy and will not

persist outside the function and therefore is 'scope'
 In short, unless you use 'ref' you want 'scope final' applied to these
references.
 Fingers crossed I haven't made any more mistakes there.
 Regan Heath

So wait...if I have a ref parameter, can I change the value of the reference locally without global changes?

No, as that's the point of the 'ref' (the new name for 'inout') keyword. To explain:

void foo(ref char[] a)
{
    a = "1,2,3";
}

void main()
{
    char[] b = "testing";
    foo(b);
    writefln(b);
}

In the above, 'b' is passed by reference to 'foo' (not a copy of 'b'), which changes the value of the reference itself. This change can be seen when foo returns and 'b' is written to the console, resulting in "1,2,3" instead of "testing". Remove 'ref' and you see "testing" on the console, as "a = .." only modifies the copy of the original reference.

In comparison, in Walter's new scheme, assuming implicit 'in' meaning 'final const scope', eg.

void foo(char[] a)
{
    a = "1,2,3";
}

void main()
{
    char[] b = "testing";
    foo(b);
    writefln(b);
}

you would get an error as the "a = .." line would violate the 'final' protection.
 I like passing mutable copies of references. It's simple and expected behavior
 that I can count on.
 
 So will there be syntax that, for example, would give me the following?
 ---
 void func(char[] a) {
    a = a[1..$]; // good
    a[1] = 'f';  // error
 }
 ---
 
 For that, I'd just use final, correct?

No, I think you'd use 'const scope'. In this thread we talked about having a new 'mutable' keyword which would mean 'const scope', eg.

//these would be identical declarations
void func(mutable char[] a)
void func(const scope char[] a)

My understanding, and I could be wrong here, is that 'final' protects the reference and 'const' protects the thing to which it refers.

In the case of arrays:

char[] aa; //global

void func(char[] a)
{
    a = a[1..$]; // violates final
    a[1] = 'f';  // violates const
    aa = a;      // violates scope
}

In the case of classes:

class A { int b; }
A aa; //global

void foo(A a)
{
    a.b = 1;     // violates const
    a = new A(); // violates final
    aa = a;      // violates scope
}

Someone please correct me if I have this wrong/backward.

Regan
May 24 2007
prev sibling parent reply Bill Baxter <dnewsgroup billbaxter.com> writes:
Regan Heath wrote:
 Walter Bright Wrote:
 Bill Baxter wrote:
 Walter Bright wrote:
 Another option is to reuse 'inout' to mean 'mutable', since 'inout' is 
 replaced by 'ref'.

...which is what my last message was suggesting.

You're right, I read your posting too quickly.
 Any reason why that wouldn't work?

Breaking existing code.
 There is the question of what would happen to "out" and 
 how you'd get out behavior applied to the pointer rather than the value.

I'd leave out as it is.
 And while "mutable" is on the table, is D going to have a story for 
 private mutable members that don't affect the interface?  Like the 
 classic private mutable cache member in C++.

Ah, the "logical constness" design pattern. I personally loathe that <g>. Const but mutable data just smacks of being far too clever.

My feeling is that if we have 'scope const final' as default and implicit then we do need some way to escape it, as we've all suggested. I think the best way is as Daniel suggested: any keyword will override the implicit/default ones, so:

void foo(int i) {}       //scope, const, final
void foo(const int i) {} //just const
..etc..

That makes sense to me too. If you don't say anything it's 'scope const final'. But if you do specify something then it's only that.

I'm not wild about the aesthetics of !const for parameters, and even less wild about the possibility that !const could become common idiom for modifiable parameters. If it's a common way to pass a parameter, then there should be a way to express the attribute positively (like "mutable" or "variable" or "inout") in terms of what it does do, rather than what it doesn't.

--bb
May 21 2007
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Bill Baxter wrote:
 That makes sense to me too.  If you don't say anything it's 'scope const 
 final'.  But if you do specify something then it's only that.

Right. There are too many qualifiers to do otherwise.
 I'm not 
 wild about the aesthetics of !const for parameters, and even less wild 
 about the possibility that !const could become common idiom for 
 modifiable parameters.  If it's a common way to pass a parameter, then 
 there should be a way to express the attribute positively (like 
 "mutable" or "variable" or "inout") in terms of what it does do, rather 
 than what it doesn't.

Uh, I think you put a finger on just where I was getting a bad feeling about !const. It's generally confusing to use negatives as attributes; i.e., having state variables named "notFull" is a bad idea.

I'm at the moment thinking we should just bite the bullet and introduce 'mutable' as a keyword.
May 21 2007
next sibling parent reply Derek Parnell <derek nomail.afraid.org> writes:
On Mon, 21 May 2007 19:11:01 -0700, Walter Bright wrote:

 I'm at the moment thinking we should just bite the bullet and introduce 
 'mutable' as a keyword.

Excellent! And could the opposite of 'scope' be 'persist', maybe?

-- 
Derek
(skype: derek.j.parnell)
Melbourne, Australia
"Justice for David Hicks!"
22/05/2007 3:37:15 PM
May 21 2007
parent Walter Bright <newshound1 digitalmars.com> writes:
Derek Parnell wrote:
 On Mon, 21 May 2007 19:11:01 -0700, Walter Bright wrote:
 
 I'm at the moment thinking we should just bite the bullet and introduce 
 'mutable' as a keyword.

Excellent! And could the opposite of 'scope' be 'persist', maybe?

No, the opposite of scope would just be - nothing.
May 21 2007
prev sibling parent Dave <Dave_member pathlink.com> writes:
Walter Bright wrote:
 Bill Baxter wrote:
 That makes sense to me too.  If you don't say anything it's 'scope 
 const final'.  But if you do specify something then it's only that.


So the new 'in' would be the default if not specified, right?
 Right. There are too many qualifies to do otherwise.
 
 I'm not wild about the aesthetics of !const for parameters, and even 
 less wild about the possibility that !const could become common idiom 
 for modifiable parameters.  If it's a common way to pass a parameter, 
 then there should be a way to express the attribute positively (like 
 "mutable" or "variable" or "inout") in terms of what it does do, 
 rather than what it doesn't.

Uh, I think you put a finger on just where I was getting a bad feeling about !const. It's generally confusing to use negatives as attributes, i.e., having state variables named: "notFull" is a bad idea. I'm at the moment thinking we should just bite the bullet and introduce 'mutable' as a keyword.

A little late and FWIW, but this all sounds great to me!
May 25 2007
prev sibling parent reply Reiner Pope <some address.com> writes:
Walter Bright wrote:
 Regan Heath wrote:
 Why not make this implicit 'in' parameter 'scope const final' avoiding 
 the need to explicity say 'in' everywhere?

It's a good idea, but then there needs to be some way to pass a mutable reference. Such as:

class C { int x; }

void foo(C c)
{
    c.x = 3;
}

That doesn't work if 'const' is the default. Using out or inout doesn't work either, as those have slightly different semantics (an extra level of indirection).

At the risk of aggravating more people by adding Yet Another Keyword, how about 'const scope final' by default, with some negative keywords to undo it. Like, say, unconst and unscope:

void foo(unconst C c)
{
    c.x = 3;
}

or why not use the ! operator, which already means 'not':

void foo(!const C c)
{
    c.x = 3;
}

Cheers,
Reiner
May 20 2007
next sibling parent Reiner Pope <some address.com> writes:
Reiner Pope wrote:
 void foo(!const C c)
 {
     c.x = 3;
 }
 

Although, on second thoughts, the ! there looks quite invisible...
May 20 2007
prev sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
Reiner Pope wrote:
 or why not use the ! operator, which already means 'not':
 
 void foo(!const C c)
 {
     c.x = 3;
 }

That's Andrei's suggestion, too. It would take some getting used to.
May 21 2007
parent Deewiant <deewiant.doesnotlike.spam gmail.com> writes:
Walter Bright wrote:
 Reiner Pope wrote:
 or why not use the ! operator, which already means 'not':

 void foo(!const C c)
 {
     c.x = 3;
 }

That's Andrei's suggestion, too. It would take some getting used to.

I'd prefer it, though. That, or some other mechanism for const by default, but I like this syntax because no new keywords are needed and it's not overly verbose. Reiner's thought that the ! looks invisible doesn't matter, in my opinion, because you wouldn't ever write "const" without the ! for a function parameter.

A 2.0 release is the time to break existing code, and I don't see why you shouldn't do so. Existing projects can go on with only 1.0 support or convert to 2.0 as they will.
May 21 2007
prev sibling next sibling parent janderson <askme me.com> writes:
Perhaps we need a D2.0.learn newsgroup?  Would we also need to make 2.0 
of the others?

-Joel
May 20 2007
prev sibling next sibling parent reply Bruno Medeiros <brunodomedeiros+spam com.gmail> writes:
I've only now been able to read this thread, so, just some simple questions:

Walter Bright wrote:
 This is coming for the D 2.0 beta, and it will need some source code 
 changes. Specifically, for function parameters that are arrays or 
 pointers, start using 'in' for them.
 
 'in' will mean 'scope const final', which means:
 
 final - the parameter will not be reassigned within the function
 const - the function will not attempt to change the contents of what is 
 referred to
 scope - the function will not keep a reference to the parameter's data 
 that will persist beyond the scope of the function
 

Does this mean you have finished a working design for const/final/invariant/etc.? If so, then what is the type of fooptr here?

final foo;
auto fooptr = &foo;
 For example:
 
 int[] g;
 
 void foo(in int[] a)
 {
     a = [1,2];    // error, a is final
     a[1] = 2;   // error, a is const
     g = a;    // error, a is scope
 }
 
 Do not use 'in' if you wish to do any of these operations on a 
 parameter. Using 'in' has no useful effect on D 1.0 code, so it'll be 
 backwards compatible.
 
 Adding in all those 'in's is tedious, as I'm finding out :-(, but I 
 think the results will be worth the effort.

Whoa there, that idea of 'scope' being the default together with 'const' and 'final', where did that come from?

I understand why (and agree) that 'final' and 'const' should be the default type modifiers for function parameters, but why 'scope' as well?

-- 
Bruno Medeiros - MSc in CS/E student
http://www.prowiki.org/wiki4d/wiki.cgi?BrunoMedeiros#D
May 26 2007
parent Walter Bright <newshound1 digitalmars.com> writes:
Bruno Medeiros wrote:
 Whoa there, that idea of 'scope' being the default together with 'const' 
 and 'final', where did that come from? I understand why (and agree) that 
 'final' and 'const' should be the default type modifiers for function 
 parameters, but why 'scope' as well?

Because most function parameters are scope already.
Jun 04 2007
prev sibling parent reply Charlie <charlie.fats gmail.com> writes:
I'm appalled, both that this is pretty much assured to be in D , and 
that the community seems to be behind it.  I thought that the reason 
Walter didn't want const was because of its added complexity , so 
instead he introduces _3_ new keywords ?  Does no one else feel like 
this is using a machine gun to kill a fly ?

I understand the need for immutable data when writing libraries, but 
'scope const final MyClass myInstance' ?!?  There's got to be a better way.

I know this sounds over-dramatic, but if this is the direction D is 
headed, then count me out.  I loved D because of its elegant and 
powerful simplicity; I think D has strayed way too far from its 
original goal.

If anyone feels like _this_ implementation for const ( not the 
usefulness of const mind you ) is not for D, then please speak up or we 
all might end up losing our favorite language.

Charlie

Walter Bright wrote:
 This is coming for the D 2.0 beta, and it will need some source code 
 changes. Specifically, for function parameters that are arrays or 
 pointers, start using 'in' for them.
 
 'in' will mean 'scope const final', which means:
 
 final - the parameter will not be reassigned within the function
 const - the function will not attempt to change the contents of what is 
 referred to
 scope - the function will not keep a reference to the parameter's data 
 that will persist beyond the scope of the function
 
 For example:
 
 int[] g;
 
 void foo(in int[] a)
 {
     a = [1,2];    // error, a is final
     a[1] = 2;   // error, a is const
     g = a;    // error, a is scope
 }
 
 Do not use 'in' if you wish to do any of these operations on a 
 parameter. Using 'in' has no useful effect on D 1.0 code, so it'll be 
 backwards compatible.
 
 Adding in all those 'in's is tedious, as I'm finding out :-(, but I 
 think the results will be worth the effort.

Jun 04 2007
next sibling parent reply "Jarrett Billingsley" <kb3ctd2 yahoo.com> writes:
"Charlie" <charlie.fats gmail.com> wrote in message 
news:46649DD9.1010801 gmail.com...
 I'm appalled, both that this is pretty much assured to be in D , and that 
 the community seems to be behind it.  I thought that the reason Walter 
 didn't want const was because of its added complexity , so instead he 
 introduces _3_ new keywords ?  Does no one else feel like this is using a 
 machine gun to kill a fly ?

 I understand the need for immutable data when writing libraries, but 
 'scope const final MyClass myInstance' ?!?  There's got to be a better way.

 I know this sounds over-dramatic, but if this is the direction D is 
 headed, then count me out.  I loved D because of its elegant and powerful 
 simplicity; I think D has strayed way too far from its original goal.

 If anyone feels like _this_ implementation for const ( not the usefulness 
 of const mind you ) is not for D, then please speak up or we all might end 
 up losing our favorite language.

I was beginning to think I was the only one. It doesn't seem any easier than the C++ style const-ness at all. If anything it's more complex. Instead of "here a const, there a const, everywhere a const * const" it seems like it'll be "here a const, there a final, everywhere an invariant scope int[new]" :P
Jun 04 2007
parent reply Bill Baxter <dnewsgroup billbaxter.com> writes:
Jarrett Billingsley wrote:
 "Charlie" <charlie.fats gmail.com> wrote in message 
 news:46649DD9.1010801 gmail.com...
 I'm appalled, both that this is pretty much assured to be in D , and that 
 the community seems to be behind it.  I thought that the reason Walter 
 didn't want const was because of its added complexity , so instead he 
 introduces _3_ new keywords ?  Does no one else feel like this is using a 
 machine gun to kill a fly ?

 I understand the need for immutable data when writing libraries, but 
 'scope const final MyClass myInstance' ?!?  There's got to be a better way.

 I know this sounds over-dramatic, but if this is the direction D is 
 headed, then count me out.  I loved D because of its elegant and powerful 
 simplicity; I think D has strayed way too far from its original goal.

 If anyone feels like _this_ implementation for const ( not the usefulness 
 of const mind you ) is not for D, then please speak up or we all might end 
 up losing our favorite language.

I was beginning to think I was the only one. It doesn't seem any easier than the C++ style const-ness at all. If anything it's more complex. Instead of "here a const, there a const, everywhere a const * const" it seems like it'll be "here a const, there a final, everywhere an invariant scope int[new]" :P

I think we should wait and see how it comes out. Of course expressing your doubts and misgivings is a good thing too, but, yeh, let's try not to be over-dramatic.

Naturally the discussion here tends to revolve around the cases that aren't obvious or straightforward, because the obvious, easy cases need no discussion. So it's natural that it ends up sounding like "everywhere an invariant scope int[new]", but I suspect in typical code such things will be uncommon.

I'm not sure about this int[*] thing though. That will probably require a lot of changes whether int[*] ends up meaning resizeable or not. But come on, you have to admit that slices are a little dicey, and giving the compiler a way to detect bogus usage of a slice will be good. They're supposed to be a safer alternative to naked pointers, but instead they introduce their own equivalently dangerous set of gotchas due to the ambiguity of ownership.

Other than that, I think pretty much all of the changes Walter has mentioned will be ignorable.

--bb
Jun 04 2007
parent Dave <Dave_member pathlink.com> writes:
Bill Baxter wrote:
 Jarrett Billingsley wrote:
 "Charlie" <charlie.fats gmail.com> wrote in message 
 news:46649DD9.1010801 gmail.com...
 I'm appalled, both that this is pretty much assured to be in D , and 
 that the community seems to be behind it.  I thought that the reason 
 Walter didn't want const was because of its added complexity , so 
 instead he introduces _3_ new keywords ?  Does no one else feel like 
 this is using a machine gun to kill a fly ?

 I understand the need for immutable data when writing libraries, but 
 'scope const final MyClass myInstance' ?!?  There's got to be a better 
 way.

 I know this sounds over-dramatic, but if this is the direction D is 
 headed, then count me out.  I loved D because of its elegant and 
 powerful simplicity; I think D has strayed way too far from its 
 original goal.

 If anyone feels like _this_ implementation for const ( not the 
 usefulness of const mind you ) is not for D, then please speak up or 
 we all might end up losing our favorite language.

I was beginning to think I was the only one. It doesn't seem any easier than the C++ style const-ness at all. If anything it's more complex. Instead of "here a const, there a const, everywhere a const * const" it seems like it'll be "here a const, there a final, everywhere an invariant scope int[new]" :P

I think we should wait and see how it comes out. Of course expressing your doubts and misgivings is a good thing too, but, yeh, let's try not to be over-dramatic.

Naturally the discussion here tends to revolve around the cases that aren't obvious or straightforward, because the obvious, easy cases need no discussion. So it's natural that it ends up sounding like "everywhere an invariant scope int[new]", but I suspect in typical code such things will be uncommon.

Recalling the resistance that Walter originally had to adding const (the 'loose' semantics of C++ const and the concern about "littering" D code w/ const), I too think it will be wise to see what Walter has come up with. I'm pretty confident it will be both better and less onerous than C++ const for a majority of situations given the quality of design (of D) in other areas.
 I'm not sure about this int[*] thing though. That will probably require 
 a lot of changes whether int[*] ends up meaning resizeable or not.  But 
 come on, you have to admit that slices are a little dicey, and giving 
 the compiler a way to detect bogus usage of a slice will be good. 
 They're supposed to be a safer alternative to naked pointers, but 
 instead they introduce their own equivalently dangerous set of gotchas 
 due to the ambiguity of ownership.
 
 Other than that, I think pretty much all of the changes Walter has 
 mentioned will be ignorable.
 
 --bb

Jun 04 2007
prev sibling parent Walter Bright <newshound1 digitalmars.com> writes:
Charlie wrote:
 I'm appalled, both that this is pretty much assured to be in D , and 
 that the community seems to be behind it.  I thought that the reason 
 Walter didn't want const was because of its added complexity , so 
 instead he introduces _3_ new keywords ?  Does no one else feel like 
 this is using a machine gun to kill a fly ?
 
 I understand the need for immutable data when writing libraries, but 
 'scope const final MyClass myInstance' ?!?  Theres got to be a better way.
 
 I know this sounds over-dramatic, but if this is the direction D is 
 headed, then count me out.  I loved D because if its elegant and 
 powerful simplicity, I think D has strayed way to far from its original 
 goal.
 
 If anyone feels like _this_ implementation for const ( not the 
 usefulness of const mind you ) is not for D, then please speak up or we 
 all might end up losing our favorite language.

Actually, I quite empathize with your viewpoint. I worry that the final, const, invariant thing is too complicated. But there are some mitigating factors:

1) Just as in C++, you can pretty much ignore final, const, and invariant if they don't appeal to you. I don't bother using const in my C++ code.

2) The transitive nature of const means that there are a lot fewer const's you have to write.

3) Using D's type inference capability, a lot fewer types (and their attendant const's) need to be written.

4) It provides information that is actually useful to the compiler.

5) Scope has the promise of enabling reference counting to be far more efficient than is possible in C++.

6) Working together, these features move us towards supporting the functional programming paradigm better. FP is very important for the future, as it is very adaptable to parallel programming.

7) They make interfaces much more self-documenting.

8) They can make automated code analysis tools more effective. Automated code analysis is big business and is getting to be far more important as security consultants are brought in to analyze code for correctness, security, etc. Wall Street in particular is very interested in this stuff.

9) So far, in my work to make Phobos const-correct, it hasn't been the annoyance I thought it would be.
Jun 04 2007