
digitalmars.D - Automatic typing

reply "JS" <js.mdnq gmail.com> writes:
Would it be possible for a language (specifically D) to have the 
ability to automatically type a variable by looking at its use 
cases, without adding too much complexity? It seems to me that 
most compilers can already infer type mismatches, which would 
allow them to handle stuff like:

main()
{
    auto x;
    auto y;
    x = 3;   // x is an int, same as auto x = 3;
    y = f(); // y is the same type as what f() returns
    x = 3.9; // x is really a float, no mismatch with previous 
type(int)
}

in this case x and y's type is inferred from future use. The 
compiler essentially just lazily infers the variable type. 
Obviously ambiguity will generate an error.
Jun 27 2013
next sibling parent "Adam D. Ruppe" <destructionator gmail.com> writes:
I believe it would be possible. D does something similar for auto 
return values on functions already.  Might be a bit of work in 
the compiler though.
Jun 27 2013
prev sibling next sibling parent "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Thu, 27 Jun 2013 20:34:53 -0400, JS <js.mdnq gmail.com> wrote:

 Would it be possible for a language(specifically d) to have the ability  
 to automatically type a variable by looking at its use cases without  
 adding too much complexity? It seems to me that most compilers already  
 can infer type mismatchs which would allow them to handle stuff like:

 main()
 {
     auto x;
     auto y;
     x = 3;   // x is an int, same as auto x = 3;
     y = f(); // y is the same type as what f() returns
     x = 3.9; // x is really a float, no mismatch with previous type(int)
 }

 in this case x and y's type is inferred from future use. The compiler  
 essentially just lazily infers the variable type. Obviously ambiguity  
 will generate an error.

There are very good reasons not to do this, even if possible. 
Especially if the type can change. Consider this case:

void foo(int);
void foo(double);

void main()
{
    auto x;
    x = 5;
    foo(x);
    ....
    // way later down in main
    x = 6.0;
}

What version of foo should be called? By your logic, it should be 
the double version, but looking at the code, I can't reason about 
it. I have to read the whole function, and look at every usage of 
x. auto then becomes a liability, and not a benefit.

Coupling the type of a variable with sparse usages is going to be 
extremely confusing and problematic. You are better off declaring 
the variable as a variant.

-Steve
Jun 27 2013
prev sibling next sibling parent "bearophile" <bearophileHUGS lycos.com> writes:
JS:

 in this case x and y's type is inferred from future use. The 
 compiler essentially just lazily infers the variable type. 
 Obviously ambiguity will generate an error.

Do you mean the flow-sensitive typing of the Whiley language?
http://whiley.org/guide/typing/flow-typing/

It's surely neat. But it needs flow analysis.

Bye,
bearophile
Jun 27 2013
prev sibling next sibling parent "JS" <js.mdnq gmail.com> writes:
On Friday, 28 June 2013 at 00:48:23 UTC, Steven Schveighoffer 
wrote:
 On Thu, 27 Jun 2013 20:34:53 -0400, JS <js.mdnq gmail.com> 
 wrote:

 Would it be possible for a language(specifically d) to have 
 the ability to automatically type a variable by looking at its 
 use cases without adding too much complexity? It seems to me 
 that most compilers already can infer type mismatchs which 
 would allow them to handle stuff like:

 main()
 {
    auto x;
    auto y;
    x = 3;   // x is an int, same as auto x = 3;
    y = f(); // y is the same type as what f() returns
    x = 3.9; // x is really a float, no mismatch with previous 
 type(int)
 }

 in this case x and y's type is inferred from future use. The 
 compiler essentially just lazily infers the variable type. 
 Obviously ambiguity will generate an error.

 There are very good reasons not to do this, even if possible. 
 Especially if the type can change. Consider this case:

 void foo(int);
 void foo(double);

 void main()
 {
     auto x;
     x = 5;
     foo(x);
     ....
     // way later down in main
     x = 6.0;
 }

 What version of foo should be called? By your logic, it should 
 be the double version, but looking at the code, I can't reason 
 about it. I have to read the whole function, and look at every 
 usage of x. auto then becomes a liability, and not a benefit.

Says who? No one is forcing you to use it with an immediate 
inference. If you get easily confused then simply declare x as a 
double in the first place!

Most of the time a variable's type is well known by the 
programmer. That is, the programmer has some idea of the type a 
variable is to take on. Having the compiler infer the type is 
tantamount to figuring out what the programmer had in mind; in 
most cases this is rather easy to do... in any ambiguous case an 
error can be thrown.
 Coupling the type of a variable with sparse usages is going to 
 be extremely confusing and problematic.  You are better off 
 declaring the variable as a variant.

If you are confused by the usage then don't use it. Just because 
it is bad for some programmers in some cases does not mean that 
it can't be useful to other programmers in other cases.

Some programmers want to dumb down the compiler because they 
themselves want to limit all potential risk... What's amazing is 
that many times the features they are against do not have to be 
used in the first place. If you devise an extremely convoluted 
example then simply use a unit test or define the type 
explicitly. I don't think limiting the compiler feature set to 
the lowest common denominator is a way to develop a powerful 
language.

You say using a variant type is better, but how? What is the 
difference besides performance? An auto type without immediate 
type inference offers all the benefits of static typing with 
some of those from a variant type... Since it seems you are not 
against variant, why would you be against a static version, 
since it actually offers more safety?

In fact, my suggestion could simply be seen as an optimization 
of a variant type. e.g.,

variant x;
x = 3;

the compiler realizes that x can be reduced to an int type and 
sees the code as

int x;
x = 3;

Hence, unless you are against variants and think they are evil 
(which contradicts your suggestion to use it), your argument 
fails.
Jun 27 2013
prev sibling next sibling parent Jonathan M Davis <jmdavisProg gmx.com> writes:
On Friday, June 28, 2013 02:34:53 JS wrote:
 Would it be possible for a language(specifically d) to have the
 ability to automatically type a variable by looking at its use
 cases without adding too much complexity? It seems to me that
 most compilers already can infer type mismatchs which would allow
 them to handle stuff like:
 
 main()
 {
     auto x;
     auto y;
     x = 3;   // x is an int, same as auto x = 3;
     y = f(); // y is the same type as what f() returns
     x = 3.9; // x is really a float, no mismatch with previous
 type(int)
 }
 
 in this case x and y's type is inferred from future use. The
 compiler essentially just lazily infers the variable type.
 Obviously ambiguity will generate an error.

Regardless of whether such a feature would be of value (and 
honestly, I'm inclined to believe that it would do more harm than 
good), Walter would never go for it, because it would require 
code flow analysis, and he pretty much refuses to have that in 
the compiler or to have any feature which would require it in the 
language. So, while it may be technically feasible, it'll never 
happen.

- Jonathan M Davis
Jun 28 2013
prev sibling next sibling parent reply "ponce" <contact gam3sfrommars.fr> writes:
On Friday, 28 June 2013 at 00:34:54 UTC, JS wrote:
 Would it be possible for a language(specifically d) to have the 
 ability to automatically type a variable by looking at its use 
 cases without adding too much complexity?

Well, OCaml has it 
(https://en.wikipedia.org/wiki/Hindley%E2%80%93Milner) and it's 
not all that positive, at least in that language.

Combined with parametric polymorphism it's nice and sound, except 
it has the potential to hide a simple typo a lot further from 
where it is (think mixing up integer operators and FP operators). 
As it breaks overloading on "leaves", it is the reason OCaml has 
"print_string" and "print_int", which is quite frankly ugly.

Once you have type inference, the first thing you do to make your 
code readable is to add back type annotations.
Jun 28 2013
parent Timon Gehr <timon.gehr gmx.ch> writes:
On 06/28/2013 03:44 PM, ponce wrote:
 On Friday, 28 June 2013 at 00:34:54 UTC, JS wrote:
 Would it be possible for a language(specifically d) to have the
 ability to automatically type a variable by looking at its use cases
 without adding too much complexity?

 Well ocaml has it 
 (https://en.wikipedia.org/wiki/Hindley%E2%80%93Milner) and well 
 it's not all that positive, at least in that language. Combined 
 with parametric polymorphism it's nice and sound, except it has 
 the potential to hide a simple typo a lot further from where it 
 is (think mixing up integer operators and FP operators). As it 
 break overloading on "leaves", it is the reason ocaml has 
 "print_string" and "print_int" which is quite frankly ugly.

The type system might be too primitive in that regard. Haskell has 'print'.
 Once you have type inference, the first thing you do to make your code
 readable is to add back type annotations.

... at a few places.
Jun 28 2013
prev sibling next sibling parent "deadalnix" <deadalnix gmail.com> writes:
On Friday, 28 June 2013 at 07:04:12 UTC, Jonathan M Davis wrote:
 On Friday, June 28, 2013 02:34:53 JS wrote:
 Would it be possible for a language(specifically d) to have the
 ability to automatically type a variable by looking at its use
 cases without adding too much complexity? It seems to me that
 most compilers already can infer type mismatchs which would 
 allow
 them to handle stuff like:
 
 main()
 {
     auto x;
     auto y;
     x = 3;   // x is an int, same as auto x = 3;
     y = f(); // y is the same type as what f() returns
     x = 3.9; // x is really a float, no mismatch with previous
 type(int)
 }
 
 in this case x and y's type is inferred from future use. The
 compiler essentially just lazily infers the variable type.
 Obviously ambiguity will generate an error.

 Regardless of whether such a feature would be of value (and 
 honestly, I'm inclined to believe that it would do more harm 
 than good), Walter would never go for it, because it would 
 require code flow analysis, and he pretty much refuses to have 
 that in the compiler or to have any feature which would require 
 it in the language. So, while it may be technically feasible, 
 it'll never happen.

Code flow analysis is required for @disable this(). And this is 
the very reason why @disable this() is full of holes right now.
Jun 28 2013
prev sibling next sibling parent "bearophile" <bearophileHUGS lycos.com> writes:
ponce:

 Well ocaml has it 
 (https://en.wikipedia.org/wiki/Hindley%E2%80%93Milner) and well 
 it's not all that positive, at least in that language.

I think a H-M global type inferencer is not needed for what the 
OP is asking for, as it's limited _inside_ functions, so it's not 
global. See the flow-typing link I have shown above (the links I 
add to threads aren't just for show).

Bye,
bearophile
Jun 28 2013
prev sibling next sibling parent "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Fri, 28 Jun 2013 02:51:39 -0400, JS <js.mdnq gmail.com> wrote:

 On Friday, 28 June 2013 at 00:48:23 UTC, Steven Schveighoffer wrote:
 On Thu, 27 Jun 2013 20:34:53 -0400, JS <js.mdnq gmail.com> wrote:

 Would it be possible for a language(specifically d) to have the  
 ability to automatically type a variable by looking at its use cases  
 without adding too much complexity? It seems to me that most compilers  
 already can infer type mismatchs which would allow them to handle  
 stuff like:

 main()
 {
    auto x;
    auto y;
    x = 3;   // x is an int, same as auto x = 3;
    y = f(); // y is the same type as what f() returns
    x = 3.9; // x is really a float, no mismatch with previous type(int)
 }

 in this case x and y's type is inferred from future use. The compiler  
 essentially just lazily infers the variable type. Obviously ambiguity  
 will generate an error.

 There are very good reasons not to do this, even if possible. 
 Especially if the type can change. Consider this case:

 void foo(int);
 void foo(double);

 void main()
 {
     auto x;
     x = 5;
     foo(x);
     ....
     // way later down in main
     x = 6.0;
 }

 What version of foo should be called? By your logic, it should 
 be the double version, but looking at the code, I can't reason 
 about it. I have to read the whole function, and look at every 
 usage of x. auto then becomes a liability, and not a benefit.

 says who? No one is forcing you to use it with an immediate 
 inference. If you get easily confused then simply declare x as 
 a double in the first place!

This is already defined:

auto x = 5;

To change the meaning of that would be unnecessarily confusing.
 Most of the time a variable's type is well know by the programmer. That  
 is, the programmer has some idea of the type a variable is to take on.  
 Having the compiler infer the type is tantamount to figuring out what  
 the programmer had in mind, in most cases this is rather easy to do...  
 in any ambiguous case an error can be thrown.

It's not the programmer I'm worried about. It's the maintainer/reviewer.
 Coupling the type of a variable with sparse usages is going to be  
 extremely confusing and problematic.  You are better off declaring the  
 variable as a variant.

If you are confused by the usage then don't use it. Just because for some programmers in some cases it is bad does not mean that it can't be useful to some programmers in some cases.

But I use auto all the time, and I don't want its meaning to change.
 Some programmers what to dumb down the compiler because they themselves  
 want to limit all potential risk... What's amazing is that many times  
 the features they are against does not have to be used in the first  
 place.

As you have the compiler infer more and more, you lose its 
ability to statically detect errors. This is the point of having 
a statically typed language. If you want loosey goosey 
semantics, you can use php.
 If you devise an extremely convoluted example then simply use a unit  
 test or define the type explicitly. I don't think limiting the compiler  
 feature set for the lowest common denominator is a way to develop a  
 powerful language.

"simply" is an inaccurate description. Maintain any large project for some time in php and you will know what I mean.
 You say using a variant type is better off, how? What is the difference  
 besides performance? An auto type without immediate type inference  
 offers all the benefits of static typing with some of those from a  
 variant type...

It declares up front "this can change type mid-function". I 
don't have to read the whole function to guess its type; it's a 
variant.
 Since it seems you are not against variant then why would you be against  
 a static version, since it actually offers more safety?

Variant is reasonable. It allows you to specify that you don't 
care about the type. And anything that takes variant can do the 
same. But in your scheme, the type is NOT "I don't care", but 
basically defined by the compiler. Good luck discovering what it 
is. I see a lot of pragma(msg, typeof(x)) going to be put in 
that code.
 In fact, my suggestion could simply be seen as an optimization of a  
 variant type.

 e.g.,

 variant x;
 x = 3;


 the compiler realizes that x can be reduced to an int type and sees the  
 code as

 int x;
 x = 3;

 Hence, unless you are against variants and think they are evil(which  
 contradicts your suggestion to use it), your argument fails.

My argument is that auto should be left the way it is. I don't 
want it to change. And variant already does what you want with 
less confusing semantics; no reason to add another feature.

-Steve
Jun 28 2013
prev sibling next sibling parent "Brian Rogoff" <brogoff gmail.com> writes:
On Friday, 28 June 2013 at 13:44:19 UTC, ponce wrote:
 On Friday, 28 June 2013 at 00:34:54 UTC, JS wrote:
 Would it be possible for a language(specifically d) to have 
 the ability to automatically type a variable by looking at its 
 use cases without adding too much complexity?

Well ocaml has it (https://en.wikipedia.org/wiki/Hindley%E2%80%93Milner) and well it's not all that positive, at least in that language.

No, OCaml doesn't quite do what the OP is asking for. In 
particular, the part where x is assigned an int and subsequently 
assigned a float can not be modeled directly. Even the separate 
'auto x' and subsequent use of x is different from how OCaml's 
let polymorphism works. To model that, you could declare 
everything as 'a option refs and assign them later, like

let main () =
  let x = ref None in
  let y = ref None in
  begin
    x := Some 1;
    y := Some (f());
  end

but if you want to use x as another type later you'll have to 
shadow the declaration in a new scope.
 Combined with parametric polymorphism it's nice and sound, 
 except it has the potential to hide a simple typo a lot further 
 from where it is (think mixing up integer operators and FP 
 operators).
 As it break overloading on "leaves", it is the reason ocaml has 
 "print_string" and "print_int" which is quite frankly ugly.

Yes, and the +., -., *., /. operators are ugly too, as is the requirement that record field labels in the same scope be distinct. Overloading can be dangerous, but no overloading is a PITA. Combining type inference and overloading is problematic.
 Once you have type inference, the first thing you do to make 
 your code readable is to add back type annotations.

There should always be a .mli file with exported decls. I wish 
OCaml had some ability to have separate type decls from the 
value, like Haskell. Type inference is most useful inside a 
function, IMO.

I think that even if the feature being requested were feasible, 
it would be awful for D.

-- Brian
Jun 28 2013
prev sibling next sibling parent "bearophile" <bearophileHUGS lycos.com> writes:
Brian Rogoff:

 No, OCaml doesn't quite do what the OP is asking for. In 
 particular, the part where x is assigned an int and 
 subsequently assigned a float can not be modeled directly.

I see, then this is not related to the flow typing I have linked 
to, sorry.

Bye,
bearophile
Jun 28 2013
prev sibling next sibling parent "JS" <js.mdnq gmail.com> writes:
On Friday, 28 June 2013 at 14:02:07 UTC, Steven Schveighoffer 
wrote:
 On Fri, 28 Jun 2013 02:51:39 -0400, JS <js.mdnq gmail.com> My 
 argument is that auto should be left the way it is.  I don't 
 want it to change.  And variant already does what you want with 
 less confusing semantics, no reason to add another feature.

 -Steve

Using the auto keyword was just an example. My argument does not 
depend on the specific keyword used. My "idea" is simply 
generalizing auto to use forward inferencing.

variant is NOT what I am talking about. It is not a performant 
type but a union of types. I am talking about the compiler 
finding the best choice for the type by looking ahead of the 
definition of the variable. I think variant would be a good 
fallback, as my suggestion is a sort of optimization of variant, 
i.e., attempt to find the appropriate type at compile time; if 
not, use a variant (or throw an error).
Jun 28 2013
prev sibling next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 6/27/2013 5:34 PM, JS wrote:
 Would it be possible for a language(specifically d) to have the ability to
 automatically type a variable by looking at its use cases without adding too
 much complexity? It seems to me that most compilers already can infer type
 mismatchs which would allow them to handle stuff like:

 main()
 {
     auto x;
     auto y;
     x = 3;   // x is an int, same as auto x = 3;
     y = f(); // y is the same type as what f() returns
     x = 3.9; // x is really a float, no mismatch with previous type(int)
 }

 in this case x and y's type is inferred from future use. The compiler
 essentially just lazily infers the variable type. Obviously ambiguity will
 generate an error.

I don't see a compelling use case for this proposal, or even any use case. There'd have to be some serious advantage to it to justify its complexity.
Jun 28 2013
next sibling parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 06/29/2013 12:29 AM, Walter Bright wrote:
 On 6/27/2013 5:34 PM, JS wrote:
 Would it be possible for a language(specifically d) to have the
 ability to
 automatically type a variable by looking at its use cases without
 adding too
 much complexity? It seems to me that most compilers already can infer
 type
 mismatchs which would allow them to handle stuff like:

 main()
 {
     auto x;
     auto y;
     x = 3;   // x is an int, same as auto x = 3;
     y = f(); // y is the same type as what f() returns
     x = 3.9; // x is really a float, no mismatch with previous type(int)
 }

 in this case x and y's type is inferred from future use. The compiler
 essentially just lazily infers the variable type. Obviously ambiguity
 will
 generate an error.

I don't see a compelling use case for this proposal, or even any use case. There'd have to be some serious advantage to it to justify its complexity.

Eg:

auto a;
if(x in cache) a=cache[x];
else cache[x]=a=new AnnoyingToSpellOutBeforeTheIf!"!"();

Using the type of the lexically first assignment would often be 
good enough, where reading the variable is disallowed prior to 
this first assignment. A little better (and still decidable) 
would be using the common type of all branches' first 
assignments not preceded by a read.
Jun 28 2013
parent Walter Bright <newshound2 digitalmars.com> writes:
On 6/28/2013 4:42 PM, Timon Gehr wrote:
 On 06/29/2013 12:29 AM, Walter Bright wrote:
 I don't see a compelling use case for this proposal, or even any use
 case. There'd have to be some serious advantage to it to justify its
 complexity.

Eg: auto a;

typeof(cache[x]) a; // (1)
 if(x in cache) a=cache[x];
 else cache[x]=a=new AnnoyingToSpellOutBeforeTheIf!"!"();

Or:

auto a = (x in cache) ? cache[x]
    : (cache[x]=new AnnoyingToSpellOutBeforeTheIf!"!"()); // (2)
 Using the type of the lexically first assignment would often be good enough,
 where reading the variable is disallowed prior to this first assignment.

So far, the use case is not compelling.
 A little better (and still decidable) would be using the common type of all
 branches' first assignments not preceded by a read.

(1) handles that. Not every workaround needs a language feature.
Jun 28 2013
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 6/28/2013 5:00 PM, JS wrote:
 Is variant useful? If not then you have a point. I'm not proposing anything
that
 variant can't already do except add compile time performance. I do not think
the
 complexity is much more than what is already done.

 D already checks for time mismatch. With such a variant or auto the check
simply
 is more intelligent.

 e.g.,

 auto x;  // x's type is undefined or possibly variant.
 x = 3;   // x's type is set temporarily to an int
 ...
 x = 3.0; // ****


 at **** we have several possibilities.

     1. Throw an error, this makes auto more useful and avoids many pitfalls.
     2. Set x's type to a variant. [possibly goto 3 if castable to new type]
     3. Set x's type to a double.

 Both 2 and 3 require an extra pass to convert the code because it uses forward
 inference.

 auto looks only at the immediate assignment expression to determine the type. I
 am talking about generalizing it to look in the scope with possible fallback to
 a variant type with optional warning, an error, or type enlargement.

Again, I need a compelling use case. It's not enough to say it's 
the same as variant, and it's not enough to say it can be 
implemented.

A compelling use case would be a pattern that is commonplace, and 
for which the workarounds are ugly, unsafe, error prone, 
unportable, etc.
Jun 28 2013
parent Walter Bright <newshound2 digitalmars.com> writes:
On 6/28/2013 6:48 PM, JS wrote:
 What was the use case for auto that got in into the language?

In code where the type of the initializer would change, and if 
the type of the variable was fixed, then there would be an 
unintended implicit conversion. The other use case was voldemort 
types. Such patterns are commonplace, and a source of error and 
inconvenience without auto. auto appears in various forms in 
other languages, and is almost universally lauded as worthwhile.

These cases don't apply to this proposal, nor do I know of its 
successful adoption in another language.

It's not a matter of finding reasons not to implement it. It's 
finding reasons TO implement it. Language features do not have 
zero cost - some benefit has to offset it.
Jun 28 2013
prev sibling next sibling parent "John Colvin" <john.loughran.colvin gmail.com> writes:
On Friday, 28 June 2013 at 22:29:21 UTC, Walter Bright wrote:
 On 6/27/2013 5:34 PM, JS wrote:
 Would it be possible for a language(specifically d) to have 
 the ability to
 automatically type a variable by looking at its use cases 
 without adding too
 much complexity? It seems to me that most compilers already 
 can infer type
 mismatchs which would allow them to handle stuff like:

 main()
 {
    auto x;
    auto y;
    x = 3;   // x is an int, same as auto x = 3;
    y = f(); // y is the same type as what f() returns
    x = 3.9; // x is really a float, no mismatch with previous 
 type(int)
 }

 in this case x and y's type is inferred from future use. The 
 compiler
 essentially just lazily infers the variable type. Obviously 
 ambiguity will
 generate an error.

I don't see a compelling use case for this proposal, or even any use case. There'd have to be some serious advantage to it to justify its complexity.

My thoughts too. It's a lot of work for a very minor (and perhaps rather unwise) convenience.
Jun 28 2013
prev sibling next sibling parent "JS" <js.mdnq gmail.com> writes:
On Friday, 28 June 2013 at 22:29:21 UTC, Walter Bright wrote:
 On 6/27/2013 5:34 PM, JS wrote:
 Would it be possible for a language(specifically d) to have 
 the ability to
 automatically type a variable by looking at its use cases 
 without adding too
 much complexity? It seems to me that most compilers already 
 can infer type
 mismatchs which would allow them to handle stuff like:

 main()
 {
    auto x;
    auto y;
    x = 3;   // x is an int, same as auto x = 3;
    y = f(); // y is the same type as what f() returns
    x = 3.9; // x is really a float, no mismatch with previous 
 type(int)
 }

 in this case x and y's type is inferred from future use. The 
 compiler
 essentially just lazily infers the variable type. Obviously 
 ambiguity will
 generate an error.

I don't see a compelling use case for this proposal, or even any use case. There'd have to be some serious advantage to it to justify its complexity.

Is variant useful? If not then you have a point. I'm not 
proposing anything that variant can't already do except add 
compile time performance. I do not think the complexity is much 
more than what is already done.

D already checks for type mismatches. With such a variant or 
auto the check simply is more intelligent.

e.g.,

auto x;  // x's type is undefined or possibly variant.
x = 3;   // x's type is set temporarily to an int
...
x = 3.0; // ****

at **** we have several possibilities:

    1. Throw an error; this makes auto more useful and avoids 
       many pitfalls.
    2. Set x's type to a variant. [possibly goto 3 if castable 
       to new type]
    3. Set x's type to a double.

Both 2 and 3 require an extra pass to convert the code because 
it uses forward inference.

auto looks only at the immediate assignment expression to 
determine the type. I am talking about generalizing it to look 
in the scope, with possible fallback to a variant type with 
optional warning, an error, or type enlargement.
Jun 28 2013
prev sibling next sibling parent "JS" <js.mdnq gmail.com> writes:
On Saturday, 29 June 2013 at 01:31:13 UTC, Walter Bright wrote:
 On 6/28/2013 5:00 PM, JS wrote:
 Is variant useful? If not then you have a point. I'm not 
 proposing anything that
 variant can't already do except add compile time performance. 
 I do not think the
 complexity is much more than what is already done.

 D already checks for time mismatch. With such a variant or 
 auto the check simply
 is more intelligent.

 e.g.,

 auto x;  // x's type is undefined or possibly variant.
 x = 3;   // x's type is set temporarily to an int
 ...
 x = 3.0; // ****


 at **** we have several possibilities.

    1. Throw an error, this makes auto more useful and avoids 
 many pitfalls.
    2. Set x's type to a variant. [possibly goto 3 if castable 
 to new type]
    3. Set x's type to a double.

 Both 2 and 3 require an extra pass to convert the code because 
 it uses forward
 inference.

 auto looks only at the immediate assignment expression to 
 determine the type. I
 am talking about generalizing it to look in the scope with 
 possible fallback to
 a variant type with optional warning, an error, or type 
 enlargement.

Again, I need a compelling use case. It's not enough to say it's the same as variant, and it's not enough to say it can be implemented. A compelling use case would be a pattern that is commonplace, and for which the workarounds are ugly, unsafe, error prone, unportable, etc.

What was the use case for auto that got in into the language?
Jun 28 2013
prev sibling next sibling parent "John Colvin" <john.loughran.colvin gmail.com> writes:
On Saturday, 29 June 2013 at 01:49:01 UTC, JS wrote:
 On Saturday, 29 June 2013 at 01:31:13 UTC, Walter Bright wrote:
 On 6/28/2013 5:00 PM, JS wrote:
 Is variant useful? If not then you have a point. I'm not 
 proposing anything that
 variant can't already do except add compile time performance. 
 I do not think the
 complexity is much more than what is already done.

 D already checks for time mismatch. With such a variant or 
 auto the check simply
 is more intelligent.

 e.g.,

 auto x;  // x's type is undefined or possibly variant.
 x = 3;   // x's type is set temporarily to an int
 ...
 x = 3.0; // ****


 at **** we have several possibilities.

   1. Throw an error, this makes auto more useful and avoids 
 many pitfalls.
   2. Set x's type to a variant. [possibly goto 3 if castable 
 to new type]
   3. Set x's type to a double.

 Both 2 and 3 require an extra pass to convert the code 
 because it uses forward
 inference.

 auto looks only at the immediate assignment expression to 
 determine the type. I
 am talking about generalizing it to look in the scope with 
 possible fallback to
 a variant type with optional warning, an error, or type 
 enlargement.

Again, I need a compelling use case. It's not enough to say it's the same as variant, and it's not enough to say it can be implemented. A compelling use case would be a pattern that is commonplace, and for which the workarounds are ugly, unsafe, error prone, unportable, etc.

What was the use case for auto that got in into the language?

A quick look at any heavily templated c++ (pre c++11) is all you need to justify it. auto cuts code clutter by huge amounts all over the place.
Jun 28 2013
prev sibling next sibling parent "JS" <js.mdnq gmail.com> writes:
On Saturday, 29 June 2013 at 02:05:35 UTC, Walter Bright wrote:
 On 6/28/2013 6:48 PM, JS wrote:
 What was the use case for auto that got in into the language?

In code where the type of the initializer would change, and if the type of the variable was fixed, then there would be an unintended implicit conversion. The other use case was voldemort types. Such patterns are commonplace, and a source of error and inconvenience without auto. auto appears in various forms in other languages, and is almost universally lauded as worthwhile. These cases don't apply to this proposal, nor do I know of its successful adoption in another language. It's not a matter of finding reasons not to implement it. It's finding reasons TO implement it. Language features do not have zero cost - some benefit has to offset it.

I don't disagree with you and I'm not saying auto is not useful. IMO though, auto is almost all convenience and has very little to do with solving errors. A very simple use case is:

auto x = 0;
...
x = complex(1, 1) + x;

which obviously is an error. By having auto use forward inferencing we can avoid such errors. It's obvious that x was intended to be a complex variable. Having auto look forward (or using a different keyword) reduces code refactoring and improves on the power of auto.

For example, suppose we use instead

auto x = pow(2, 10);

where pow returns a real. In this case x is still the wrong type. So we always have to know the "final" supertype anyways... I'm just proposing we let the compiler figure it out for us if we want. But because some types are supertypes of others, it is entirely logical that they be extended when it is obvious they are being used as such.

To see why this is even more useful, suppose we had such code above but now want to refactor to use quaternions. In this case, the line x = complex(1, 1) + x becomes invalid too and requires fixing (or the auto x line has to be fixed). If we allowed auto x; to easily see that it is supposed to be a quaternion, then that line does not have to be fixed and everything would work as expected.

So the real question is: is it error prone to allow the compiler to deduce what supertype a variable really is? I do not think it is a problem, because ambiguity results in an error, and warnings could be given when a variable's type was upgraded to a supertype. At some point, if flow analysis is ever added, auto would be a natural bridge to it.

Another use case is

auto someflag;
static if (__cpu == 3)
    someflag = "amdx64";
else
    someflag = false;

which allows a sort of static variant type. (This could be gotten around if a static conditional expression existed, e.g., static auto someflag = (__cpu == 3) ?? "amdx64" : false;) The above may make it easier in some cases for configuration code.
The biggest benefit I can immediately see comes from using mixins. In this case we can always have a variable select the appropriate type, e.g.,

mixin template Foo()
{
    ((...) ?? int : float) func() { }
}
// func is of type int or float depending on the condition

...

class Bar
{
    auto x; // possibly auto x = default(typeof(func)); but possibly
            // error prone due to refactoring if line order matters
    mixin Foo;
    static this() { x = default(typeof(func)); }
}

Note that x's type is deduced at compile time to be the appropriate type corresponding to the mixin used. Because x's type does not have to be immediately known, we can specify it implicitly later on without ever having to know the return types of the mixins. In this case Bar acts like a templated class but isn't (or is a sort of statically templated class, so to speak).

Such classes would be useful for versioning, where the user of the class is oblivious to the types used in the class and does not need to specify a type parameter. E.g., the Bar class above could use int for a more performant version of the program and float for a more precise version. The use of auto would completely or near completely eliminate having to deal with which one is being used.

In any case I can't specify any super duper use case because I don't know any.
Jun 28 2013
prev sibling next sibling parent "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Fri, 28 Jun 2013 18:00:54 -0400, JS <js.mdnq gmail.com> wrote:

 On Friday, 28 June 2013 at 14:02:07 UTC, Steven Schveighoffer wrote:
 On Fri, 28 Jun 2013 02:51:39 -0400, JS <js.mdnq gmail.com> My argument  
 is that auto should be left the way it is.  I don't want it to change.   
 And variant already does what you want with less confusing semantics,  
 no reason to add another feature.

 -Steve

Using the auto keyword was just an example. My argument does not depend on the specific keyword used. My "idea" is simply generalizing auto to use forward inferencing.

variant is NOT what I am talking about. It is not a performant type but a union of types. I am talking about the compiler finding the best choice for the type by looking ahead of the declaration of the variable.

There is a possible way to solve this -- auto return types. If you can fit your initialization of the variable into a function (even an inner function) that returns auto, then the compiler should be able to figure out the best type. Example:

import std.stdio;

void main(string[] args)
{
    auto foo()
    {
        if(args.length > 1 && args[1] == "1")
            return 1;
        else
            return 2.5;
    }
    auto x = foo();
    writeln(typeof(x).stringof);
}

this outputs "double".

Granted, it doesn't solve the "general" case, where you want to use x before it's initialized a second time, but I think that really is just a programming error -- use a different variable.

-Steve
Jun 28 2013
prev sibling next sibling parent reply Ary Borenszweig <ary esperanto.org.ar> writes:
On 6/27/13 9:34 PM, JS wrote:
 Would it be possible for a language(specifically d) to have the ability
 to automatically type a variable by looking at its use cases without
 adding too much complexity? It seems to me that most compilers already
 can infer type mismatchs which would allow them to handle stuff like:

 main()
 {
     auto x;
     auto y;
     x = 3;   // x is an int, same as auto x = 3;
     y = f(); // y is the same type as what f() returns
     x = 3.9; // x is really a float, no mismatch with previous type(int)
 }

 in this case x and y's type is inferred from future use. The compiler
 essentially just lazily infers the variable type. Obviously ambiguity
 will generate an error.

What you are asking is essentially what Crystal does for all variables (and types):

https://github.com/manastech/crystal/wiki/Introduction#type-inference

Your example would be written like this:

x = 3
y = f()
x = 3.9

But since Crystal transforms your code to SSA (http://en.wikipedia.org/wiki/Static_single_assignment_form) you actually have *two* "x" variables in your code. The first one is of type Int32, the second of type Float64.

The above solves the problem mentioned by Steven Schveighoffer, where you didn't know which overloaded version you were calling:

x = 3
f(x) # always calls f(Int32), because at run-time
     # x will always be an Int32 at this point
x = 3.9

But to have this in a language you need some things:

1. Don't have a different syntax for declaring and updating variables
2. Transform your code to SSA

(maybe more?)

So this is not possible in D right now, and I don't think it will ever be, because it requires a huge change to the whole language.
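The renaming can be pictured in plain D. This is only an illustrative sketch of what the SSA split amounts to; the names x_1 and x_2 are invented here, since the whole point is that Crystal introduces the second variable automatically:

```d
import std.stdio;

void main()
{
    // Each type-changing assignment starts a fresh variable
    // that merely reuses the name "x".
    int x_1 = 3;       // first "x": an int (Int32 in Crystal)
    double x_2 = 3.9;  // second "x": a double (Float64 in Crystal)

    // Any use between the two assignments sees x_1; any use after
    // the reassignment sees x_2, so overload resolution stays local.
    writeln(typeof(x_1).stringof, " ", typeof(x_2).stringof); // prints: int double
}
```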
Jun 29 2013
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 6/29/2013 12:18 PM, Ary Borenszweig wrote:
 What you are asking is essentially what Crystal does for all variables (and
types):

 https://github.com/manastech/crystal/wiki/Introduction#type-inference

 Your example would be written like this:

 x = 3
 y = f()
 x = 3.9

 But since Crystal transforms your code to SSA
 (http://en.wikipedia.org/wiki/Static_single_assignment_form) you actually have
 *two* "x" variables in your code. The first one is of type Int32, the second of
 type Float64.

Sorry, but that seems like a solution in search of a problem. And besides, yuk. Imagine the bugs caused by "hey, it doesn't implicitly convert, so instead of letting the user know he goofed, let's just silently create a new variable!"
Jun 29 2013
parent reply Ary Borenszweig <ary esperanto.org.ar> writes:
On 6/29/13 6:01 PM, Walter Bright wrote:
 On 6/29/2013 12:18 PM, Ary Borenszweig wrote:
 What you are asking is essentially what Crystal does for all variables
 (and types):

 https://github.com/manastech/crystal/wiki/Introduction#type-inference

 Your example would be written like this:

 x = 3
 y = f()
 x = 3.9

 But since Crystal transforms your code to SSA
 (http://en.wikipedia.org/wiki/Static_single_assignment_form) you
 actually have
 *two* "x" variables in your code. The first one is of type Int32, the
 second of
 type Float64.

Sorry, but that seems like a solution in search of a problem. And besides, yuk. Imagine the bugs caused by "hey, it doesn't implicitly convert, so instead of letting the user know he goofed, let's just silently create a new variable!"

Sorry, but I can't imagine those bugs. Can you show me an example?
Jun 29 2013
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 6/29/2013 2:53 PM, Ary Borenszweig wrote:
 On 6/29/13 6:01 PM, Walter Bright wrote:
 On 6/29/2013 12:18 PM, Ary Borenszweig wrote:
 What you are asking is essentially what Crystal does for all variables
 (and types):

 https://github.com/manastech/crystal/wiki/Introduction#type-inference

 Your example would be written like this:

 x = 3
 y = f()
 x = 3.9

 But since Crystal transforms your code to SSA
 (http://en.wikipedia.org/wiki/Static_single_assignment_form) you
 actually have
 *two* "x" variables in your code. The first one is of type Int32, the
 second of
 type Float64.

Sorry, but that seems like a solution in search of a problem. And besides, yuk. Imagine the bugs caused by "hey, it doesn't implicitly convert, so instead of letting the user know he goofed, let's just silently create a new variable!"

Sorry, but I can't imagine those bugs. Can you show me an example?

Sure:

x = 3
px = &x
y = f()
x = 3.9   // uh-oh, *px points to a different x, and wasn't updated!
printf("%d\n", x);  // uh-oh, I thought x was an int!
Jun 29 2013
parent reply Ary Borenszweig <ary esperanto.org.ar> writes:
On 6/29/13 7:30 PM, Walter Bright wrote:
 On 6/29/2013 2:53 PM, Ary Borenszweig wrote:
 On 6/29/13 6:01 PM, Walter Bright wrote:
 On 6/29/2013 12:18 PM, Ary Borenszweig wrote:
 What you are asking is essentially what Crystal does for all variables
 (and types):

 https://github.com/manastech/crystal/wiki/Introduction#type-inference

 Your example would be written like this:

 x = 3
 y = f()
 x = 3.9

 But since Crystal transforms your code to SSA
 (http://en.wikipedia.org/wiki/Static_single_assignment_form) you
 actually have
 *two* "x" variables in your code. The first one is of type Int32, the
 second of
 type Float64.

Sorry, but that seems like a solution in search of a problem. And besides, yuk. Imagine the bugs caused by "hey, it doesn't implicitly convert, so instead of letting the user know he goofed, let's just silently create a new variable!"

Sorry, but I can't imagine those bugs. Can you show me an example?

 Sure:

 x = 3
 px = &x
 y = f()
 x = 3.9   // uh-oh, *px points to a different x, and wasn't updated!
 printf("%d\n", x);  // uh-oh, I thought x was an int!

If the last statements were:

x = 4
printf("%d\n", *px);

I can see where the problem is (you would expect that to print 4, right?). That can be easily fixed by not transforming the last x to SSA if its address is taken.

That's a really good example you gave :-)
Jun 29 2013
parent Walter Bright <newshound2 digitalmars.com> writes:
On 6/29/2013 4:08 PM, Ary Borenszweig wrote:
 That's a really good example you gave :-)

Thanks. I remember seeing it somewhere before, but can't recall just where.
Jun 29 2013
prev sibling next sibling parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 07/01/2013 03:08 AM, Kenji Hara wrote:
 2013/7/1 JS <js.mdnq gmail.com <mailto:js.mdnq gmail.com>>

     I am simply talking about having the compiler enlarge the type if
     needed. (this is mainly for built in types since the type hierarchy
     is explicitly known)


 Just a simple matter, it would *drastically* increase compilation time.

 void foo()
 {
      auto elem;
      auto arr = [elem];

      elem = 1;
      ....
      elem = 2.0;
      // typeof(elem) change should modify the result of typeof(arr)
 }

 Such type dependencies between multiple variables are common in the
 realistic program.

 When `elem = 2.0;` is found, compiler should run semantic analysis of
 the whole function body of foo _once again_, because the setting type of
 elem ignites the change of typeof(arr), and it would affect the code
 meaning.

 If another variable type would be modified, it also ignites the whole
 function body semantic again.

 After all, semantic analysis repetition would drastically increase.

 I can easily imagine that the compilation cost would not be worth the
 small benefits.

 Kenji Hara

The described strategy can easily result in non-termination, and which template instantiations it performs can be non-obvious.

auto foo(T)(T arg){
    static if(is(T==int)) return 1.0;
    else return 1;
}

void main(){
    auto x;
    x = 1;
    x = foo(x);
}
Jun 30 2013
next sibling parent Timon Gehr <timon.gehr gmx.ch> writes:
On 07/01/2013 05:44 AM, JS wrote:
 On Monday, 1 July 2013 at 01:56:22 UTC, Timon Gehr wrote:
 ...
 The described strategy can easily result in non-termination, and which
 template instantiations it performs can be non-obvious.

 auto foo(T)(T arg){
     static if(is(T==int)) return 1.0;
     else return 1;
 }

 void main(){
     auto x;
     x = 1;
     x = foo(x);
 }

Sorry,

That's fine.
 it only results in non-termination if you don't check all return
 types out of a function.

Why is this relevant? I was specifically responding to the method outlined in the post I was answering. There have not been any other attempts to formalize the proposal so far.
 It is a rather easy case to handle by just
 following all the return types and choosing the largest one.

That neither handles the above case in a sensible way nor is it a solution for the general issue. (Hint: D's type system is Turing complete.)
 No big deal...  any other tries?

That's not how it goes. The proposed inference method has to be completely specified for all instances, not only for those instances that I can be bothered to provide to you as counterexamples.
Jun 30 2013
prev sibling parent reply Ary Borenszweig <ary esperanto.org.ar> writes:
On 6/30/13 10:56 PM, Timon Gehr wrote:
 On 07/01/2013 03:08 AM, Kenji Hara wrote:
 2013/7/1 JS <js.mdnq gmail.com <mailto:js.mdnq gmail.com>>

     I am simply talking about having the compiler enlarge the type if
     needed. (this is mainly for built in types since the type hierarchy
     is explicitly known)


 Just a simple matter, it would *drastically* increase compilation time.

 void foo()
 {
      auto elem;
      auto arr = [elem];

      elem = 1;
      ....
      elem = 2.0;
      // typeof(elem) change should modify the result of typeof(arr)
 }

 Such type dependencies between multiple variables are common in the
 realistic program.

 When `elem = 2.0;` is found, compiler should run semantic analysis of
 the whole function body of foo _once again_, because the setting type of
 elem ignites the change of typeof(arr), and it would affect the code
 meaning.

 If another variable type would be modified, it also ignites the whole
 function body semantic again.

 After all, semantic analysis repetition would drastically increase.

 I can easily imagine that the compilation cost would not be worth the
 small benefits.

 Kenji Hara

 The described strategy can easily result in non-termination, and which template instantiations it performs can be non-obvious.

 auto foo(T)(T arg){
     static if(is(T==int)) return 1.0;
     else return 1;
 }

 void main(){
     auto x;
     x = 1;
     x = foo(x);
 }

Just tried it in Crystal and it ends alright. It works like this:

1. x is an Int
2. you call foo(x), it returns a float so x is now a float (right now in Crystal that's a union of int and float, but that will soon change).
3. Since x is a float, foo returns an int, but assigning it to x, which is already a float, gives back a float.
4. No type changed, so we end.

Crystal also supports recursive and mutually recursive functions. The compiler is always guaranteed to finish.

(I'm just using Crystal as an example to have a proof that it can be done)
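For the foo example above, the four steps can be written out as ordinary D with the type at each iteration made explicit. This is only a hand-expanded sketch: fooInt and fooDouble are invented names standing in for the two template instantiations.

```d
// Hand-expanded instantiations of foo from Timon's example:
double fooInt(int arg)       { return 1.0; } // foo!int takes the static-if branch
int    fooDouble(double arg) { return 1; }   // foo!double takes the else branch

void main()
{
    int    x1 = 1;             // step 1: x starts out as an int
    double x2 = fooInt(x1);    // step 2: foo(int) yields a double
    double x3 = fooDouble(x2); // step 3: the int result converts back to double
    // step 4: the type no longer changes, so the iteration stops here
}
```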
Jul 01 2013
parent Timon Gehr <timon.gehr gmx.ch> writes:
On 07/01/2013 03:44 PM, Ary Borenszweig wrote:
 On 6/30/13 10:56 PM, Timon Gehr wrote:
 On 07/01/2013 03:08 AM, Kenji Hara wrote:
 2013/7/1 JS <js.mdnq gmail.com <mailto:js.mdnq gmail.com>>

     I am simply talking about having the compiler enlarge the type if
     needed. (this is mainly for built in types since the type hierarchy
     is explicitly known)


 Just a simple matter, it would *drastically* increase compilation time.

 void foo()
 {
      auto elem;
      auto arr = [elem];

      elem = 1;
      ....
      elem = 2.0;
      // typeof(elem) change should modify the result of typeof(arr)
 }

 Such type dependencies between multiple variables are common in the
 realistic program.

 When `elem = 2.0;` is found, compiler should run semantic analysis of
 the whole function body of foo _once again_, because the setting type of
 elem ignites the change of typeof(arr), and it would affect the code
 meaning.

 If another variable type would be modified, it also ignites the whole
 function body semantic again.

 After all, semantic analysis repetition would drastically increase.

 I can easily imagine that the compilation cost would not be worth the
 small benefits.

 Kenji Hara

 The described strategy can easily result in non-termination, and which template instantiations it performs can be non-obvious.

 auto foo(T)(T arg){
     static if(is(T==int)) return 1.0;
     else return 1;
 }

 void main(){
     auto x;
     x = 1;
     x = foo(x);
 }

Just tried it in Crystal

Using overloaded functions, I guess? It is not really the same thing, because those need to be type checked in any case.
 and it ends alright.

(Note that I was specifically addressing the method Kenji Hara outlined, which appears to completely restart type checking every time a type changes.)
 It works like this:

 1. x is an Int
 2. you call foo(x), it returns a float so x is now a float (right now in
 Crystal that's a union of int and float, but that will soon change).
 3. Since x is a float, foo returns an int, but assigning it to x, which
 is already a float, gives back a float.
 4. No type changed, so we end.
...

This kind of fixed-point iteration will terminate in D in most relevant cases (it is possible to create an infinitely ascending chain of types, but then, type checking failing implicit conversions won't terminate anyway). But note that now x is a double even though it is only assigned ints. Furthermore, this approach still implicitly instantiates template versions that are not referred to in the final type checked code.
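One way to picture the "infinitely ascending chain of types" mentioned above is a template that wraps its argument's type one level deeper on every call. The Wrap/grow names are invented for this sketch; under the proposed inference, repeatedly reassigning x through grow would never reach a fixed point:

```d
// A chain that ascends forever: each call produces a strictly "larger" type.
struct Wrap(T) { T value; }

Wrap!T grow(T)(T arg) { return Wrap!T(arg); }

void main()
{
    auto x = 1; // x : int
    // Under the proposal, each of these would bump x up the chain:
    //   x = grow(x); // x : Wrap!int
    //   x = grow(x); // x : Wrap!(Wrap!int)
    //   ...          // never terminates
    // (Written with a fixed type, D of course rejects the reassignments.)
}
```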
Jul 01 2013
prev sibling next sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 6/30/2013 6:08 PM, Kenji Hara wrote:
 I can easily imagine that the compilation cost would not be worth the small
 benefits.

There are arguably not even small benefits.
Jun 30 2013
prev sibling next sibling parent Ary Borenszweig <ary esperanto.org.ar> writes:
On 6/30/13 10:30 PM, JS wrote:
 On Monday, 1 July 2013 at 01:08:49 UTC, Kenji Hara wrote:
 2013/7/1 JS <js.mdnq gmail.com>

 I am simply talking about having the compiler enlarge the type if
 needed.
 (this is mainly for built in types since the type hierarchy is
 explicitly
 known)

 Just a simple matter, it would *drastically* increase compilation time.

 void foo()
 {
      auto elem;
      auto arr = [elem];

      elem = 1;
      ....
      elem = 2.0;
      // typeof(elem) change should modify the result of typeof(arr)
 }

 Such type dependencies between multiple variables are common in the realistic program.

 When `elem = 2.0;` is found, compiler should run semantic analysis of the whole function body of foo _once again_, because the setting type of elem ignites the change of typeof(arr), and it would affect the code meaning.

 If another variable type would be modified, it also ignites the whole function body semantic again.

 After all, semantic analysis repetition would drastically increase.

 I can easily imagine that the compilation cost would not be worth the small benefits.

 Kenji Hara

No, this would be a brute force approach. Only one "preprocessing pass" of (#lines) would be required. Since parsing statement by statement already takes place, it should be an insignificant cost.

Believe me, it's not. Look at this:

---
int foo(int elem) { return 1; }
char foo(float elem) { return 'a'; }

auto elem;
elem = 1;
auto other = foo(elem);
elem = other + 2.5;
---

Explain to me how the compiler would work in this case, step by step.
Jul 01 2013
prev sibling parent reply Ary Borenszweig <ary esperanto.org.ar> writes:
On 6/30/13 7:39 PM, JS wrote:
 On Saturday, 29 June 2013 at 19:18:13 UTC, Ary Borenszweig wrote:
 On 6/27/13 9:34 PM, JS wrote:
 Would it be possible for a language(specifically d) to have the ability
 to automatically type a variable by looking at its use cases without
 adding too much complexity? It seems to me that most compilers already
 can infer type mismatchs which would allow them to handle stuff like:

 main()
 {
    auto x;
    auto y;
    x = 3;   // x is an int, same as auto x = 3;
    y = f(); // y is the same type as what f() returns
    x = 3.9; // x is really a float, no mismatch with previous type(int)
 }

 in this case x and y's type is inferred from future use. The compiler
 essentially just lazily infers the variable type. Obviously ambiguity
 will generate an error.

 What you are asking is essentially what Crystal does for all variables (and types):

 https://github.com/manastech/crystal/wiki/Introduction#type-inference

 Your example would be written like this:

 x = 3
 y = f()
 x = 3.9

 But since Crystal transforms your code to SSA (http://en.wikipedia.org/wiki/Static_single_assignment_form) you actually have *two* "x" variables in your code. The first one is of type Int32, the second of type Float64.

 The above solves the problem mentioned by Steven Schveighoffer, where you didn't know which overloaded version you were calling:

 x = 3
 f(x) # always calls f(Int32), because at run-time
      # x will always be an Int32 at this point
 x = 3.9

 But to have this in a language you need some things:

 1. Don't have a different syntax for declaring and updating variables
 2. Transform your code to SSA

 (maybe more?)

 So this is not possible in D right now, and I don't think it will ever be, because it requires a huge change to the whole language.

This is not what I am talking about and it seems quite dangerous to have one variable name masquerade as multiple variables.

Why dangerous? I've been programming in Ruby for quite a while and never found it to be a problem, but an advantage. Now I'm programming in Crystal and it's the same, but the compiler can catch some errors too.

Show me an example where this is dangerous (the pointer example given by Walter is not valid anymore since it has a fix).
Jul 01 2013
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 7/1/2013 6:39 AM, Ary Borenszweig wrote:
 This is not what I am talking about and it seems quite dangerous to have
 one variable name masquerade as multiple variables.

Why dangerous?

D already disallows:

int x;
{ float x; }

as an error-prone construct, so why should it allow:

int x;
float x;

?
 I've been programming in Ruby for quite a time and never found it
 to be a problem, but an advantage.

What advantage? Does Ruby have a shortage of names for variables? (Early versions of BASIC only allowed variable names with one letter, leading to some pretty awful workarounds.)
 Show me an example where this is dangerous (the pointer example gave by Walter
 is not valid anymore since it has a fix).

I'm actually rather sure that one can come up with rule after rule to 'fix' each issue, but good luck with trying to fit all those rules into some sort of comprehensible framework.

Consider what happened to C++ when they tried to fit new things into the overloading rules - the rules now span multiple pages of pretty arbitrary rules, and practically nobody understands the whole. What people do is randomly try things until it seems to do the right thing for them, and then they move on.

And all this for what - nobody has come up with a significant reason to support this proposal. I see post after post in this thread trying to make it work, and nothing about what problem it solves.
Jul 01 2013
next sibling parent reply Ary Borenszweig <ary esperanto.org.ar> writes:
On 7/1/13 1:15 PM, Walter Bright wrote:
 On 7/1/2013 6:39 AM, Ary Borenszweig wrote:
 This is not what I am talking about and it seems quite dangerous to have
 one variable name masquerade as multiple variables.

Why dangerous?

 D already disallows:

 int x;
 { float x; }

 as an error-prone construct, so why should it allow:

 int x;
 float x;

 ?

Well, those constructs don't even make sense because in the examples I gave I never say what type I want my variables to be. I let the compiler figure it out.
 I've been programming in Ruby for quite a time and never found it
 to be a problem, but an advantage.

What advantage? Does Ruby have a shortage of names for variables? (Early versions of BASIC only allowed variable names with one letter, leading to some pretty awful workarounds.)

I'll give you an example:

# var can be an Int or String
def foo(var)
  var = var.to_s
  # do something with var, which is now guaranteed to be a string
end

I can call it like this:

foo(1)
foo("hello")

If I had to put types, I would end up doing one of these:

1.

void foo(int var) {
  foo(to!string(var));
}

void foo(string var) {
  // do something with var
}

2.

void foo(T)(T var) {
  string myVar;
  static if (is(T == string)) {
    myVar = var;
  } else static if (is(T == int)) {
    myVar = to!string(var);
  } else {
    static assert(false);
  }
  // do something with myVar
}

Both examples are ugly and verbose (or, at least, the example in Ruby/Crystal is much shorter and cleaner).

The example I give is very simple: I can reuse a var which *has the same meaning* for me when I'm coding, and I don't need to come up with a new name. It's not that Ruby has a shortage of names. It's just that I don't want to spend time thinking of new, similar names just to satisfy the compiler.

(And if you are worried about efficiency, the method "#to_s" of String just returns itself, so in the end it compiles to the same code you could have written manually like I showed in D)
 Show me an example where this is dangerous (the pointer example gave
 by Walter
 is not valid anymore since it has a fix).

I'm actually rather sure that one can come up with rule after rule to 'fix' each issue, but good luck with trying to fit all those rules into some sort of comprehensible framework. Consider what happened to C++ when they tried to fit new things into the overloading rules - the rules now span multiple pages of pretty arbitrary rules, and practically nobody understands the whole. What people do is randomly try things until it seems to do the right thing for them, and then they move on. And all this for what - nobody has come up with a significant reason to support this proposal. I see post after post in this thread trying to make it work, and nothing about what problem it solves.

Exactly, because in D you need to specify the types of variables. And that goes against inferring a variable's type from its usage (which is different from inferring it from its initializer). I'm also against this proposal. I'm just saying that in D it's not feasible, and if you want to make it work you'll have to change so many things that you'll end up with a different language.
Jul 01 2013
parent reply Ary Borenszweig <ary esperanto.org.ar> writes:
On 7/1/13 1:45 PM, John Colvin wrote:
 void foo(T)(T var)
 {
      auto myVar = var.to!string;
      //do something with myVar string
 }

Ah, that's also ok. But then you have to remember to use myVar instead of var.
Jul 01 2013
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 7/1/2013 9:46 AM, Ary Borenszweig wrote:
 On 7/1/13 1:45 PM, John Colvin wrote:
 void foo(T)(T var)
 {
      auto myVar = var.to!string;
      //do something with myVar string
 }

Ah, that's also ok. But then you have to remember to use myVar instead of var.

Heck, why bother with different variable names at all? We can just use x for all variables, t for all types, and f for all functions!

t f(t x)
{
    t x;
    t x = x & 0xFF;
    if (x == x)
        x = 0;
    else if ((x & 0xFFFD00) == 0x0F3800)
        x = x[(x >> 8) & 0xFF];
    else if ((x & 0xFF00) == 0x0F00)
        x = x[x];
    else
        x = x[x];
    return x & x;
}

Sorry for the sarcasm, but I just don't get the notion that it's a burden to use a different name for a variable that has a different type and a different purpose. I'd go further and say it is a bad practice to use the same name for such.
Jul 01 2013
parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 7/1/13 9:59 AM, Walter Bright wrote:
 Sorry for the sarcasm, but I just don't get the notion that it's a
 burden to use a different name for a variable that has a different type
 and a different purpose. I'd go further and say it is a bad practice to
 use the same name for such.

Reducing the number of names seems worthless, but increasing it can be quite annoying. Andrei
Jul 01 2013
prev sibling next sibling parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 07/01/2013 06:15 PM, Walter Bright wrote:
 On 7/1/2013 6:39 AM, Ary Borenszweig wrote:
 This is not what I am talking about and it seems quite dangerous to have
 one variable name masquerade as multiple variables.

Why dangerous?

D already disallows: int x; { float x; } as an error-prone construct, ...

module b;
int x;

module a;

void main(){
    int x;
    {
        import b;
        x = 2;
    }
    import std.stdio;
    writeln(x); // prints 0
}
Jul 01 2013
parent Walter Bright <newshound2 digitalmars.com> writes:
On 7/1/2013 10:07 AM, Timon Gehr wrote:
 module b;
 int x;

 module a;

 void main(){
      int x;
      {
          import b;
          x = 2;

I'd encourage you to submit an enhancement request that would produce the message:

Error: import b.x hides local declaration of x
      }
      import std.stdio;
      writeln(x); // prints 0
 }

Jul 01 2013
prev sibling next sibling parent Timon Gehr <timon.gehr gmx.ch> writes:
Don't feed the troll.
Jul 01 2013
prev sibling next sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 7/1/2013 9:57 AM, JS wrote:
 The problem is when such a "idea" is present you get people who are
 automatically against it for various irrational fears and they won't take any
 serious look at it to see if it has any merit...  If you jump to the conclusion
 that something is useless without any real thought on it then it obviously
is...
 but the same type of mentality has been used to "prove" just about anything was
 useless at one time or another.

It's up to you to demonstrate your idea has merit. Throwing ideas out and asking others to find the merit for you is not going to work. It's even worse when you insult them for not finding the merit that you didn't find.

Once you demonstrate merit, then go about finding ways to make it work. Not the other way around.

There are famous cases in business history where a solution was found before anyone identified a problem - 3M's not-very-sticky adhesive that was eventually turned into the hugely profitable Post-it notes is an example - but it languished for nearly a decade before someone thought of a use for it. And 3M certainly didn't productize it before the problem was discovered.
Jul 01 2013
prev sibling next sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 7/1/2013 10:51 AM, deadalnix wrote:
 But if you want a very stupid one: I declared in the late 90s that a phone
 with a tactile screen on its whole surface was a stupid idea and that it
 would never work. If you look hard enough, I guess we all said the most
 stupid thing at some point. We've got to admit it and not repeat the mistake.

None of us have to look very hard at ourselves to find such, if we're being remotely honest.

I don't much care for the popular "gotcha" practice of digging up something someone did or said decades ago. It presumes that we are all born wise, and offers no hope for learning from our mistakes.

Sadly, the internet and the surveillance state are going to make life difficult for anyone trying to live down something stupid. Makes me glad I grew up before the internet. Makes me glad that most of the drivel I posted to Usenet back in the 80's has been hopefully lost :-)
Jul 01 2013
prev sibling next sibling parent =?UTF-8?B?QWxpIMOHZWhyZWxp?= <acehreli yahoo.com> writes:
On 07/01/2013 10:51 AM, deadalnix wrote:

 I declared in the late 90s that a phone
 with a tactile screen on its whole surface was a stupid idea and that it
 would never work.

I still think so. :D Ali
Jul 01 2013
prev sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 7/1/13 9:57 AM, JS wrote:
 I think there is big confusion in what I'm "suggesting"(it's not a
 proposal because I don't expect anyone to take the time to prove its
 validity... and you can't know how useful it could be if you don't have
 any way to test it out).

 It's two distinctly different concepts when you allow a "multi-variable"
 and one that can be "up-typed" by inference(the compiler automatically
 "up-types").

To me the basic notion was very clear from day one. Changing the type of a variable is equivalent with "unaliasing" the existing variable (i.e. destroy it and force it out of the symbol table) and defining an entirely different variable, with its own lifetime. It just so happens it has the same name. It's a reasonable feature to have -- a nice cheat that brings a statically-typed language closer to the look-and-feel of dynamic languages. Saves on names, which is more helpful than one might think. In D things like overloading and implicit conversions would probably make it too confusing to be useful.
 I'm more interested in a true counterexample where my concept(which I've
 not seen in any language before) results in an invalid context....

It's obvious to me that the concept is sound within reasonable use bounds.
 The problem is when such a "idea" is present you get people who are
 automatically against it for various irrational fears and they won't
 take any serious look at it to see if it has any merit... If you jump to
 the conclusion that something is useless without any real thought on it
 then it obviously is... but the same type of mentality has been used to
 "prove" just about anything was useless at one time or another. (and if
 that mentality ruled we'd still be using 640k of memory)

I think this is an unfair characterization. The discussion was pretty good and gave the notion a fair shake.
 I have a very basic question for you and would like a simple answer:

 In some programming languages, one can do the following type of code:

 var x; // x is some type of variable that holds data. It's type is not
 statically defined and can change at run time.
 x = 3; // x holds some type of number... usually an integer but the
 language may store all numbers as doubles or even strings.

 now, suppose we have a program that contains essentially the following:

 var x;
 x = 3;

 Is it possible that the compiler can optimize such code to find the
 least amount of data to represent x without issue?  Yes or no?

Yes, and in fact it's already done. Consider: if (expr) { int a; ... } else { int b; ... } In some C implementations, a and b have the same physical address. In some others, they have distinct addresses. This appears to not be related, but it is insofar as a and b have non-overlapping lifetimes.
  Is this a
 good thing? Yes or no?

It's marginally good - increases stack locality and makes it simpler for a register allocator. (In fact all register allocators do that already, otherwise they'd suck.)
 (I don't need and don't want any explanation)

Too late :o). Andrei
Jul 01 2013
next sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 7/1/2013 4:30 PM, Andrei Alexandrescu wrote:
 Yes, and in fact it's already done. Consider:

 if (expr)
 {
      int a;
      ...
 }
 else
 {
      int b;
      ...
 }

 In some C implementations, a and b have the same physical address. In some
 others, they have distinct addresses. This appears to not be related, but it is
 insofar as a and b have non-overlapping lifetimes.

What is happening with (modern) compilers is the "live range" of each variable is computed. A live range is nothing more than a bitmap across the instructions for a function, with a bit set meaning "the variable is in play at this point". The compiler then uses a "tetris" style algorithm to try to fit as many variables as possible into the limited register set, and to use as little stack space as possible. The usual algorithms do not use scoping to determine the live range, but look at actual usage. A variable that, for example, has no usage is considered 'dead' and is removed. The proposal here neither adds nor subtracts from this.
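The live-range idea Walter describes can be sketched in a few lines. This is a minimal toy model, not dmd's actual allocator: the instruction indices and register count are invented, and the "tetris" packing is done with a simple greedy scan over range start points.

```python
# Toy model of live ranges and greedy register packing: a live range is
# the span of instructions where a variable is in play, and variables
# with non-overlapping ranges may share the same register.

def live_ranges(uses):
    """uses maps each variable to the instruction indices where it
    appears; a variable with no uses is dead and gets no range."""
    return {v: (min(ix), max(ix)) for v, ix in uses.items() if ix}

def assign_registers(ranges, nregs):
    """Greedy "tetris"-style packing: walk ranges in order of start
    point and reuse a register once its occupant's range has ended."""
    alloc = {}
    busy_until = {}                       # register -> last busy instruction
    for var, (start, end) in sorted(ranges.items(), key=lambda kv: kv[1]):
        for reg in range(nregs):
            if busy_until.get(reg, -1) < start:
                alloc[var] = reg          # register is free again: reuse it
                busy_until[reg] = end
                break
        else:
            alloc[var] = "stack"          # no register fits: spill

    return alloc

uses = {"a": [0, 1, 2], "b": [3, 4], "c": [1, 5], "dead": []}
regs = assign_registers(live_ranges(uses), nregs=2)
# "dead" is removed outright; "a" and "b" have non-overlapping ranges
# and share a register, while "c" overlaps both and gets its own.
```

Note that scoping never enters into it: only actual first and last use do, which is Walter's point.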
Jul 01 2013
prev sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 7/1/13 6:29 PM, JS wrote:
 What would be nice is an experimental version of D where we could
 easily extend the language to try out such concepts to see if they truly
 are useful and how difficult to implement. e.g., I could attempt to add
 said "feature", it could be merged with the experimental compiler, those
 interested can download the compiler and test the feature out... all
 without negatively affecting D directly. If such features could be
 implemented dynamically then it would probably be pretty powerful.

I don't think such a feature would make it in D, even if the implementation cost was already sunken (i.e. an implementation was already done and one pull request away). Ascribing distinct objects to the same symbol is a very core feature that affects and is affected by everything else. We'd design a lot of D differently if that particular feature were desired, and now the fundamentals of the design are long frozen. For a very simple example, consider: auto a = 2.5; // fine, a is double ... a = 3; By the proposed rule a will become an entirely different variable of type int, and the previous double variable would disappear. But current rules dictate that the type stays double. So we'd either have an unthinkably massive breakage, or we'd patch the language with a million exceptions. Even so! If the feature were bringing amazing power, there may still be a case in its favor. But fundamentally it doesn't bring anything new - it's just alpha renaming; it doesn't enable doing anything that couldn't be done without it. Andrei
Jul 01 2013
next sibling parent =?UTF-8?B?QWxpIMOHZWhyZWxp?= <acehreli yahoo.com> writes:
On 07/01/2013 07:35 PM, JS wrote:

 On Tuesday, 2 July 2013 at 02:15:09 UTC, Andrei Alexandrescu wrote:

 auto a = 2.5; // fine, a is double
 ...
 a = 3;

No, not under what I am talking about. You can't downgrade a type, only upgrade it. a = 3, a is still a float. Using the concept I am talking about, your example does nothing new. but reverse the numbers: auto a = 3; a = 2.5; and a is now a float, and your logic then becomes correct EXCEPT a is expanded, which is safe. I really don't know how to make it any clearer but I'm not sure if anyone understands what I'm talking about ;/

I think I understand. I think I heard either on this or your other thread that function overloading may produce confusing results. Consider the following program: void foo(int i) {} void foo(double d) {} void main() { auto a = 3; foo(a); // Some time later somebody adds the following line: a = 2.5; } If the type of 'a' would suddenly be double from that point on, foo(a) would silently go to a different function. It may be that calling the 'double' overload is the right thing to do but it may as well be that it would be completely the wrong thing to do. The difference is, today the compiler warns me about the incompatible types. With the proposed feature, the semantics of the program might be different without any warning. Of course one may argue that every line must be added very carefully and the unit tests must be comprehensive, etc. Of course I agree but I am another person who does not see the benefit of this proposal. It is never a chore to modify the type of a variable when the compiler warns me about an incompatibility. Ali
Jul 01 2013
prev sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 7/1/13 7:35 PM, JS wrote:
 On Tuesday, 2 July 2013 at 02:15:09 UTC, Andrei Alexandrescu wrote:
 On 7/1/13 6:29 PM, JS wrote:
 What would be nice is an experimental version of D where we could
 easily extend the language to try out such concepts to see if they truly
 are useful and how difficult to implement. e.g., I could attempt to add
 said "feature", it could be merged with the experimental compiler, those
 interested can download the compiler and test the feature out... all
 without negatively affecting D directly. If such features could be
 implemented dynamically then it would probably be pretty powerful.

I don't think such a feature would make it in D, even if the implementation cost was already sunken (i.e. an implementation was already done and one pull request away). Ascribing distinct objects to the same symbol is a very core feature that affects and is affected by everything else. We'd design a lot of D differently if that particular feature were desired, and now the fundamentals of the design are long frozen. For a very simple example, consider: auto a = 2.5; // fine, a is double ... a = 3;

No, not under what I am talking about. You can't downgrade a type, only upgrade it. a = 3, a is still a float. Using the concept I am talking about, your example does nothing new. but reverse the numbers: auto a = 3; a = 2.5; and a is now a float, and your logic then becomes correct EXCEPT a is expanded, which is safe. I really don't know how to make it any clearer but I'm not sure if anyone understands what I'm talking about ;/

You can definitely assume you are being well understood. That's going to break a lot of code because of e.g. calls to overloaded functions in between changes of type. Just drop this. Not only it won't make it into D, it's also not particularly interesting. Andrei
Jul 01 2013
prev sibling next sibling parent "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Sat, 29 Jun 2013 17:01:54 -0400, Walter Bright  
<newshound2 digitalmars.com> wrote:

 On 6/29/2013 12:18 PM, Ary Borenszweig wrote:
 What you are asking is essentially what Crystal does for all variables  
 (and types):

 https://github.com/manastech/crystal/wiki/Introduction#type-inference

 Your example would be written like this:

 x = 3
 y = f()
 x = 3.9

 But since Crystal transforms your code to SSA
 (http://en.wikipedia.org/wiki/Static_single_assignment_form) you  
 actually have
 *two* "x" variables in your code. The first one is of type Int32, the  
 second of
 type Float64.

Sorry, but that seems like a solution in search of a problem. And besides, yuk. Imagine the bugs caused by "hey, it doesn't implicitly convert, so instead of letting the user know he goofed, let's just silently create a new variable!"

x is a variant that is compile-time optimized to be an int or a float. Where would the bug be? If x could possibly change types depending on runtime data, then x is given a type of a union between int or float. It would be somewhat like the compiler optimizing this: { Variant v = 1; v = 3.5; } to: { int v = 1; } { float v = 3.5; } because it sees that during the optimized scopes, v is only used as that specific type. It seems like the compiler is generating variants that can hold exactly the types that are used for that variable. Interesting concept. -Steve
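Steven's idea of a compile-time-specialized variant can be made concrete with a toy scope analyzer. This is a hypothetical sketch, not anything a real compiler does: the `ScopeAnalysis` class and the type-name strings are invented for illustration.

```python
# Sketch of the optimization Steven describes: a variant variable whose
# scope only ever assigns one concrete type can be lowered to that
# type; otherwise it must stay a union of the types actually seen.

class ScopeAnalysis:
    def __init__(self):
        self.assigned = {}                    # variable -> set of type names

    def assign(self, var, value):
        self.assigned.setdefault(var, set()).add(type(value).__name__)

    def lowered_type(self, var):
        types = self.assigned.get(var, set())
        if len(types) == 1:
            return next(iter(types))          # single type: safe to specialize
        return "variant(" + "|".join(sorted(types)) + ")"

s = ScopeAnalysis()
s.assign("v", 1)        # Variant v = 1;
s.assign("v", 3.5)      # v = 3.5;  -> v must be able to hold int or float
s.assign("w", 1)        # w only ever holds an int: lower it to int
```

The interesting property is exactly the one Steven notes: the declared type is a variant, but the generated storage can be exactly the set of types the scope uses.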
Jun 29 2013
prev sibling next sibling parent "JS" <js.mdnq gmail.com> writes:
On Saturday, 29 June 2013 at 19:18:13 UTC, Ary Borenszweig wrote:
 On 6/27/13 9:34 PM, JS wrote:
 Would it be possible for a language(specifically d) to have 
 the ability
 to automatically type a variable by looking at its use cases 
 without
 adding too much complexity? It seems to me that most compilers 
 already
 can infer type mismatchs which would allow them to handle 
 stuff like:

 main()
 {
    auto x;
    auto y;
    x = 3;   // x is an int, same as auto x = 3;
    y = f(); // y is the same type as what f() returns
    x = 3.9; // x is really a float, no mismatch with previous 
 type(int)
 }

 in this case x and y's type is inferred from future use. The 
 compiler
 essentially just lazily infers the variable type. Obviously 
 ambiguity
 will generate an error.

What you are asking is essentially what Crystal does for all variables (and types): https://github.com/manastech/crystal/wiki/Introduction#type-inference Your example would be written like this: x = 3 y = f() x = 3.9 But since Crystal transforms your code to SSA (http://en.wikipedia.org/wiki/Static_single_assignment_form) you actually have *two* "x" variables in your code. The first one is of type Int32, the second of type Float64. The above solves the problem mentioned by Steven Schveighoffer, where you didn't know what overloaded version you were calling: x = 3 f(x) # always calls f(Int32), because at run-time # x will always be an Int32 at this point x = 3.9 But to have this in a language you need some things: 1. Don't have a different syntax for declaring and updating variables 2. Transform your code to SSA (maybe more?) So this is not possible in D right now, and I don't think it will ever be because it requires a huge change to the whole language.

This is not what I am talking about, and it seems quite dangerous to have one variable name masquerade as multiple variables. I am simply talking about having the compiler enlarge the type if needed. (this is mainly for built-in types since the type hierarchy is explicitly known) e.g., auto x = 3; x = 3.0; // invalid, but there is really no reason. It's obvious that we want x to be a floating point... why not expand it to one at compile time? The worst thing in general is a performance hit. One can argue, and it has already been stated, that one doesn't know which overloaded function is called. This is true, but if one uses auto (or rather a more appropriate keyword), then the programmer knows that the largest type will be used. In general, it will not be a problem at all because the programmer will not intentionally treat a variable as a multi-type (which seems to be what Crystal is doing). What I am talking about allows us to do a few things easily: auto x; ... x = 3.0; // x's type is set to a double if we do not assign x a larger type compatible with double. auto x; ... x = 3; // x is set to an int type; we don't have to immediately assign to x. this is not very useful though. More importantly, we can have the compiler infer the type when we mix subtypes: auto x; // x is a string x = 3; // x is a string x = 3.0; // x is a string x = "" // x is a string but if we remove the last line we end up with auto x; // x is a double x = 3; // x is a double x = 3.0; // x is a double The importance is that the compiler is choosing the most appropriate storage for us. x is not a multi-variable like Crystal's nor a variant. It is simply an auto variable that looks at the entire scope rather than just its immediate assignment. If one prefers, { autoscope x; // x is defined as the largest type used } One problem is user defined types. 
Do we allow inheritance to be used: { autoscope x; x = new A; x = new B; } // x is of type B if B inherits A, else error this would be the same as auto x = (B)(new A);
Jun 30 2013
prev sibling next sibling parent Kenji Hara <k.hara.pg gmail.com> writes:

2013/7/1 JS <js.mdnq gmail.com>

 I am simply talking about having the compiler enlarge the type if needed.
 (this is mainly for built in types since the type hierarchy is explicitly
 known)

Just a simple matter, it would *drastically* increase compilation time. void foo() { auto elem; auto arr = [elem]; elem = 1; .... elem = 2.0; // typeof(elem) change should modify the result of typeof(arr) } Such type dependencies between multiple variables are common in the realistic program. When `elem = 2.0;` is found, compiler should run semantic analysis of the whole function body of foo _once again_, because the setting type of elem ignites the change of typeof(arr), and it would affect the code meaning. If another variable type would be modified, it also ignites the whole function body semantic again. After all, semantic analysis repetition would drastically increase. I can easily imagine that the compilation cost would not be worth the small benefits. Kenji Hara
Jun 30 2013
prev sibling next sibling parent "JS" <js.mdnq gmail.com> writes:
On Monday, 1 July 2013 at 01:08:49 UTC, Kenji Hara wrote:
 2013/7/1 JS <js.mdnq gmail.com>

 I am simply talking about having the compiler enlarge the type 
 if needed.
 (this is mainly for built in types since the type hierarchy is 
 explicitly
 known)

Just a simple matter, it would *drastically* increase compilation time. void foo() { auto elem; auto arr = [elem]; elem = 1; .... elem = 2.0; // typeof(elem) change should modify the result of typeof(arr) } Such type dependencies between multiple variables are common in the realistic program. When `elem = 2.0;` is found, compiler should run semantic analysis of the whole function body of foo _once again_, because the setting type of elem ignites the change of typeof(arr), and it would affect the code meaning. If another variable type would be modified, it also ignites the whole function body semantic again. After all, semantic analysis repetition would drastically increase. I can easily imagine that the compilation cost would not be worth the small benefits. Kenji Hara

No, this would be a brute-force approach. Only one "preprocessing pass" of (#lines) would be required. Since parsing statement by statement already takes place, it should be an insignificant cost. arr is of type typeof(elem)[]; when elem is known, arr is immediately known. One would have to create a dependency tree but this is relatively simple and in most cases the trees would be very small. The type of elem is known in one pass since we just have to scan statement by statement and update elem's type (using if (newtype > curtype) curtype = newtype). At the end of the scope elem's type is known and the dependency tree can be updated. The complexity of the algorithm would be small since each additional *autoscope* variable would not add much additional computation (just updating the type... we have to scan the scope anyway).
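The single-pass rule JS describes can be sketched directly. The widening order int < double < string follows the examples earlier in the thread and is an assumption of this sketch, not D's actual conversion rules.

```python
# Sketch of the one-pass "up-typing" scan described above: walk the
# scope's assignments once and keep the widest type seen, never
# narrowing (i.e. if (newtype > curtype) curtype = newtype).

WIDENING_ORDER = ["int", "double", "string"]     # assumed lattice

def infer_autoscope(assigned_types):
    """assigned_types: the types assigned to the variable, in scope order."""
    widest = -1
    for t in assigned_types:
        widest = max(widest, WIDENING_ORDER.index(t))
    if widest < 0:
        raise TypeError("never assigned: type cannot be inferred")
    return WIDENING_ORDER[widest]
```

The per-variable cost really is one comparison per assignment; the contested part of the thread is what happens when one variable's inferred type feeds another's, which this sketch does not model.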
Jun 30 2013
prev sibling next sibling parent "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Sun, 30 Jun 2013 21:56:21 -0400, Timon Gehr <timon.gehr gmx.ch> wrote:

 The described strategy can easily result in non-termination, and which  
 template instantiations it performs can be non-obvious.

 auto foo(T)(T arg){
      static if(is(T==int)) return 1.0;
      else return 1;
 }

 void main(){
      auto x;
      x = 1;
      x = foo(x);
 }

Ouch! That is better than Walter's case :) -Steve
Jun 30 2013
prev sibling next sibling parent "JS" <js.mdnq gmail.com> writes:
On Monday, 1 July 2013 at 01:56:22 UTC, Timon Gehr wrote:
 On 07/01/2013 03:08 AM, Kenji Hara wrote:
 2013/7/1 JS <js.mdnq gmail.com <mailto:js.mdnq gmail.com>>

    I am simply talking about having the compiler enlarge the 
 type if
    needed. (this is mainly for built in types since the type 
 hierarchy
    is explicitly known)


 Just a simple matter, it would *drastically* increase 
 compilation time.

 void foo()
 {
     auto elem;
     auto arr = [elem];

     elem = 1;
     ....
     elem = 2.0;
     // typeof(elem) change should modify the result of 
 typeof(arr)
 }

 Such type dependencies between multiple variables are common 
 in the
 realistic program.

 When `elem = 2.0;` is found, compiler should run semantic 
 analysis of
 the whole function body of foo _once again_, because the 
 setting type of
 elem ignites the change of typeof(arr), and it would affect 
 the code
 meaning.

 If another variable type would be modified, it also ignites 
 the whole
 function body semantic again.

 After all, semantic analysis repetition would drastically 
 increase.

 I can easily imagine that the compilation cost would not be 
 worth the
 small benefits.

 Kenji Hara

The described strategy can easily result in non-termination, and which template instantiations it performs can be non-obvious. auto foo(T)(T arg){ static if(is(T==int)) return 1.0; else return 1; } void main(){ auto x; x = 1; x = foo(x); }

Sorry, it only results in non-termination if you don't check all return types out of a function. It is a rather easy case to handle by just following all the return types and choosing the largest one. No big deal... any other tries?
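The disagreement here can be modeled with a toy (all names invented): Timon's template returns a different type depending on the argument type, so naively re-typing x from each call to `x = foo(x)` oscillates forever, while the fix JS sketches - union all reachable return types and keep the widest - does terminate.

```python
# Toy model of Timon's counterexample and JS's proposed fix. foo's
# return type flips between int and double with the argument type.

def foo_return_type(arg):
    return "double" if arg == "int" else "int"

def naive_infer(start, max_iters=10):
    """Re-type x from the call result until it settles (it never does)."""
    t = start
    for _ in range(max_iters):
        new = foo_return_type(t)
        if new == t:
            return t                 # settled
        t = new
    return None                      # oscillated: inference does not terminate

def join_infer(start):
    """JS's fix: collect every reachable return type, keep the widest."""
    order = ["int", "double"]
    reachable = {start}
    frontier = start
    for _ in order:                  # the reachable set is tiny here
        frontier = foo_return_type(frontier)
        reachable.add(frontier)
    return max(reachable, key=order.index)
```

The join version terminates on this example, but note Timon's later point: because D's type system is Turing complete, enumerating "every reachable return type" is not decidable in general, so the fix does not generalize.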
Jun 30 2013
prev sibling next sibling parent "JS" <js.mdnq gmail.com> writes:
On Monday, 1 July 2013 at 04:19:51 UTC, Timon Gehr wrote:
 On 07/01/2013 05:44 AM, JS wrote:
 On Monday, 1 July 2013 at 01:56:22 UTC, Timon Gehr wrote:
 ...
 The described strategy can easily result in non-termination, 
 and which
 template instantiations it performs can be non-obvious.

 auto foo(T)(T arg){
    static if(is(T==int)) return 1.0;
    else return 1;
 }

 void main(){
    auto x;
    x = 1;
    x = foo(x);
 }

Sorry,

That's fine.
 it only results in non-termination if you don't check all 
 return
 types out of a function.

Why is this relevant? I was specifically responding to the method lined out in the post I was answering. There have not been any other attempts to formalize the proposal so far.
 It is a rather easy case to handle by just
 following all the return types and choosing the largest one.

That neither handles the above case in a sensible way nor is it a solution for the general issue. (Hint: D's type system is Turing complete.)
 No big deal...  any other tries?

That's not how it goes. The proposed inference method has to be completely specified for all instances, not only for those instances that I can be bothered to provide to you as counterexamples.

well duh, but it is quite a simple mathematical problem and your counter-example is not one at all. For a statically typed language all types must be known at compile time... so you can't come up with any valid counter-example. Just because you come up with some convoluted example that seems to break the algorithm does not prove anything. Do you agree that a function's return type must be known at compile time in a statically typed language? If not then we have nothing more to discuss... (Just because you allow a function to be compile time polymorphic doesn't change anything because each type that a function can possibly return must be known)
Jun 30 2013
prev sibling next sibling parent "deadalnix" <deadalnix gmail.com> writes:
On Monday, 1 July 2013 at 06:38:20 UTC, JS wrote:
 well duh, but it is quite a simple mathematical problem and 
 your counter-example is not one at all.

 For a statically typed language all types must be known at 
 compile time... so you can't come up with any valid 
 counter-example. Just because you come up with some convoluted 
 example that seems to break the algorithm does not prove 
 anything.

 Do you agree that a function's return type must be known at 
 compile time in a statically typed language? If not then we 
 have nothing more to discuss... (Just because you allow a 
 function to be compile time polymorphic doesn't change anything 
 because each type that a function can possibly return must be 
 known)

As a compiler implementer, Timon is probably way more competent than you are on the question. You won't add anything interesting by assuming you know better. The type of problem he mentions is already present in many aspects of D and makes it really hard to compile in a consistent way across implementations. Adding new ones is a really bad idea. If you don't understand what the problem is, I suggest you study the question or ask questions rather than try to make a point.
Jun 30 2013
prev sibling next sibling parent "JS" <js.mdnq gmail.com> writes:
On Monday, 1 July 2013 at 06:51:53 UTC, deadalnix wrote:
 On Monday, 1 July 2013 at 06:38:20 UTC, JS wrote:
 well duh, but it is quite a simple mathematical problem and 
 your counter-example is not one at all.

 For a statically typed language all types must be known at 
 compile time... so you can't come up with any valid 
 counter-example. Just because you come up with some convoluted 
 example that seems to break the algorithm does not prove 
 anything.

 Do you agree that a function's return type must be known at 
 compile time in a statically typed language? If not then we 
 have nothing more to discuss... (Just because you allow a 
 function to be compile time polymorphic doesn't change 
 anything because each type that a function can possibly return 
 must be known)

As a compiler implementer, Timon is probably way more competent than you are on the question. You won't add anything interesting by assuming you know better. The type of problem he mentions is already present in many aspects of D and makes it really hard to compile in a consistent way across implementations. Adding new ones is a really bad idea. If you don't understand what the problem is, I suggest you study the question or ask questions rather than try to make a point.

You can't be as smart as you think or you would know that "proof by authority" is a fallacy.
Jul 01 2013
prev sibling next sibling parent "John Colvin" <john.loughran.colvin gmail.com> writes:
On Monday, 1 July 2013 at 09:31:04 UTC, JS wrote:
 On Monday, 1 July 2013 at 06:51:53 UTC, deadalnix wrote:
 On Monday, 1 July 2013 at 06:38:20 UTC, JS wrote:
 well duh, but it is quite a simple mathematical problem and 
 your counter-example is not one at all.

 For a statically typed language all types must be known at 
 compile time... so you can't come up with any valid 
 counter-example. Just because you come up with some 
 convoluted example that seems to break the algorithm does not 
 prove anything.

 Do you agree that a function's return type must be known at 
 compile time in a statically typed language? If not then we 
 have nothing more to discuss... (Just because you allow a 
 function to be compile time polymorphic doesn't change 
 anything because each type that a function can possibly 
 return must be known)

As a compiler implementer, Timon is probably way more competent than you are on the question. You'll get anything interesting to add by considering you know better. The type of problem he mention are already present in many aspect of D and makes it really hard to compile in a consistent way accross implementations. Adding new one is a really bad idea. If you don't understand what the problem is, I suggest you to study the question or ask questions rather than try to make a point.

You can't be as smart as you think or you would know that "proof by authority" is a fallacy.

Authority is not proof, but many years of experience provide a perspective that is worth serious consideration. Which is what deadalnix said.
Jul 01 2013
prev sibling next sibling parent "John Colvin" <john.loughran.colvin gmail.com> writes:
On Monday, 1 July 2013 at 16:33:56 UTC, Ary Borenszweig wrote:
 I'll give you an example:

 # var can be an Int or String
 def foo(var)
   var = var.to_s
   # do something with var, which is now guaranteed to be a 
 string
 end

 I can call it like this:

 foo(1)
 foo("hello")

 If I had to put types, I would end up doing of of these:

 1.

 void foo(int var) {
   foo(to!string(var))
 }

 void foo(string var) {
   // do something with var
 }

 2.

 void foo(T)(T var) {
   string myVar;
   static if (is(T == string)) {
     myVar = var;
   } else if (is(T == int)) {
     myVar = to!string(var)
   } else {
     static assert(false);
   }
   // do something with myVar
 }

Why not this? void foo(T)(T var) { auto myVar = var.to!string; //do something with myVar string }
Jul 01 2013
prev sibling next sibling parent "JS" <js.mdnq gmail.com> writes:
On Monday, 1 July 2013 at 16:15:04 UTC, Walter Bright wrote:
 On 7/1/2013 6:39 AM, Ary Borenszweig wrote:
 This is not what I am talking about and it seems quite 
 dangerous to have
 one variable name masquerade as multiple variables.

Why dangerous?

D already disallows: int x; { float x; } as an error-prone construct, so why should it allow: int x; float x; ?

I think there is big confusion in what I'm "suggesting"(it's not a proposal because I don't expect anyone to take the time to prove its validity... and you can't know how useful it could be if you don't have any way to test it out). It's two distinctly different concepts when you allow a "multi-variable" and one that can be "up-typed" by inference(the compiler automatically "up-types").
 I've been programming in Ruby for quite a time and never found 
 it
 to be a problem, but an advantage.

What advantage? Does Ruby have a shortage of names for variables? (Early versions of BASIC only allowed variable names with one letter, leading to some pretty awful workarounds.)
 Show me an example where this is dangerous (the pointer 
 example gave by Walter
 is not valid anymore since it has a fix).

I'm actually rather sure that one can come up with rule after rule to 'fix' each issue, but good luck with trying to fit all those rules into some sort of comprehensible framework. Consider what happened to C++ when they tried to fit new things into the overloading rules - the rules now span multiple pages of pretty arbitrary rules, and practically nobody understands the whole. What people do is randomly try things until it seems to do the right thing for them, and then they move on. And all this for what - nobody has come up with a significant reason to support this proposal. I see post after post in this thread trying to make it work, and nothing about what problem it solves.

I'm more interested in a true counterexample where my concept (which I've not seen in any language before) results in an invalid context... and not a naive example that uses a half-baked implementation of the algorithm (which I've already outlined). Just because someone comes up with an example and says "this will produce non-termination" doesn't mean it will, except in the most naive implementations.

The problem is that when such an idea is presented, you get people who are automatically against it out of various irrational fears, and they won't take any serious look at it to see if it has any merit... If you jump to the conclusion that something is useless without any real thought, then to you it obviously is... but the same mentality has been used to "prove" just about anything useless at one time or another. (And if that mentality ruled, we'd still be using 640k of memory.)

I have a very basic question for you and would like a simple answer. In some programming languages, one can write the following type of code:

var x; // x holds data; its type is not statically defined and can change at run time
x = 3; // x holds some type of number... usually an integer, but the language may
       // store all numbers as doubles or even strings

Now, suppose we have a program that contains essentially the following:

var x;
x = 3;

Is it possible for the compiler to optimize such code by finding the smallest representation of x that causes no issue? Yes or no? Is this a good thing? Yes or no? (I don't need and don't want any explanation.)

(The above example is at the heart of the matter... regardless of whether it is provably a valid semantic in D or easy to implement (since no one knows, and most don't care because they think it won't benefit them, just like how Bill Gates thought all anyone needed was 640k).)
Jul 01 2013
prev sibling next sibling parent "John Colvin" <john.loughran.colvin gmail.com> writes:
On Monday, 1 July 2013 at 16:46:57 UTC, Ary Borenszweig wrote:
 On 7/1/13 1:45 PM, John Colvin wrote:
 T)(T var)
 {
     auto myVar = var.to!string;
     //do something with myVar string
 }

Ah, that's also ok. But then you have to remember to use myVar instead of var.

Personally I like the explicit use of a new variable. If you're changing the type of a variable then you want it to be explicit. I spend far too many hours a month chasing down accidental type changes in Python. A "convenience" feature is only a feature if it helps *stop* you shooting yourself in the foot, not if it actively encourages it.

auto a;
// loads of code, with function calls to all sorts of unfamiliar libraries
// do something with a.

How do I know what type a is in order to work with it? I have to either read and understand all the code in between, try to write something generic, or put a pragma(msg, ...) in to show it to me. Either way I have to pray that nobody changes it.
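To make that last point concrete, a small sketch (the library call here is a hypothetical stand-in; pragma(msg, ...) prints the inferred type at compile time):

```d
import std.complex;

// Hypothetical stand-in for a call into some unfamiliar library.
auto someLibraryCall() { return complex(1.0, 1.0); }

void main()
{
    auto a = someLibraryCall();
    // Show the inferred type at compile time instead of reading the library source:
    pragma(msg, typeof(a).stringof);
    static assert(is(typeof(a) == Complex!double));
}
```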
Jul 01 2013
prev sibling next sibling parent "Adam D. Ruppe" <destructionator gmail.com> writes:
On Monday, 1 July 2013 at 16:46:57 UTC, Ary Borenszweig wrote:
 Ah, that's also ok. But then you have to remember to use myVar 
 instead of var.

I have wanted to remove a variable from scope before. I think it would be kinda cool if we could do something like __undefine(x), and subsequent uses of it would be an error. (It could also prohibit redeclaration of it if we wanted.)

D doesn't have that, but we can get reasonably close: use other functions when we change param types (just forward to the other overload), and use functions and/or scopes for local variables:

void foo()
{
    {
        int a;
        /* use a */
    }
    // a is now gone, so you don't accidentally use it later in the function
}
Jul 01 2013
prev sibling next sibling parent "deadalnix" <deadalnix gmail.com> writes:
On Monday, 1 July 2013 at 16:57:53 UTC, JS wrote:
 (the above example is at the heart of the matter... regardless 
 if it is probably a valid semantic in D or easily to 
 implement(since no one knows and most don't care because they 
 think it won't benefit them(just like how bill gates thought 
 all everyone needed was 640k)))

For the record, this quote is plain wrong: https://groups.google.com/forum/#!msg/alt.folklore.computers/mpjS-h4jpD8/9DW_VQVLzpkJ

But if you want a genuinely stupid one: I declared in the late 90s that a phone with a tactile screen over its whole surface was a stupid idea and would never work. If you look hard enough, I guess we have all said something very stupid at some point. We've got to admit it and not repeat the mistake.
Jul 01 2013
prev sibling next sibling parent "MattCoder" <mattcoder hotmail.com> writes:
On Monday, 1 July 2013 at 17:51:02 UTC, deadalnix wrote:
 On Monday, 1 July 2013 at 16:57:53 UTC, JS wrote:
 (the above example is at the heart of the matter... regardless 
 if it is probably a valid semantic in D or easily to 
 implement(since no one knows and most don't care because they 
 think it won't benefit them(just like how bill gates thought 
 all everyone needed was 640k)))

For the record, this quote is plain wrong : https://groups.google.com/forum/#!msg/alt.folklore.computers/mpjS-h4jpD8/9DW_VQVLzpkJ

I liked this answer: " QUESTION: I read in a newspaper that in 1981 you said, "640K should be enough for anybody." I always thought he was talking about his monthly bonus, not computer memory... " :) Matheus.
Jul 01 2013
prev sibling next sibling parent "JS" <js.mdnq gmail.com> writes:
On Monday, 1 July 2013 at 23:30:19 UTC, Andrei Alexandrescu wrote:
 On 7/1/13 9:57 AM, JS wrote:
 I think there is big confusion in what I'm "suggesting"(it's 
 not a
 proposal because I don't expect anyone to take the time to 
 prove its
 validity... and you can't know how useful it could be if you 
 don't have
 any way to test it out).

 It's two distinctly different concepts when you allow a 
 "multi-variable"
 and one that can be "up-typed" by inference(the compiler 
 automatically
 "up-types").

To me the basic notion was very clear from day one. Changing the type of a variable is equivalent with "unaliasing" the existing variable (i.e. destroy it and force it out of the symbol table) and defining an entirely different variable, with its own lifetime. It just so happens it has the same name. It's a reasonable feature to have -- a nice cheat that brings a statically-typed language closer to the look-and-feel of dynamic languages. Saves on names, which is more helpful than one might think. In D things like overloading and implicit conversions would probably make it too confusing to be useful.
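A rough D sketch of that reading, using explicit scopes to make the "same name, different variable" lifetimes visible (the scoping is mine, since today's D requires it):

```d
void main()
{
    {
        auto a = 2.5;   // first 'a': a double with its own lifetime
        static assert(is(typeof(a) == double));
    } // the double 'a' is destroyed and leaves the symbol table here

    {
        auto a = 3;     // an entirely different variable that reuses the name
        static assert(is(typeof(a) == int));
        assert(a == 3);
    }
}
```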
 I'm more interested in a true counterexample where my 
 concept(which I've
 not seen in any language before) results in an invalid 
 context....

It's obvious to me that the concept is sound within reasonable use bounds.
 The problem is when such a "idea" is present you get people 
 who are
 automatically against it for various irrational fears and they 
 won't
 take any serious look at it to see if it has any merit... If 
 you jump to
 the conclusion that something is useless without any real 
 thought on it
 then it obviously is... but the same type of mentality has 
 been used to
 "prove" just about anything was useless at one time or 
 another. (and if
 that mentality ruled we'd still be using 640k of memory)

I think this is an unfair characterization. The discussion was pretty good and gave the notion a fair shake.
 I have a very basic question for you and would like a simple 
 answer:

 In some programming languages, one can do the following type 
 of code:

 var x; // x is some type of variable that holds data. It's 
 type is not
 statically defined and can change at run time.
 x = 3; // x holds some type of number... usually an integer 
 but the
 language may store all numbers as doubles or even strings.

 now, suppose we have a program that contains essentially the 
 following:

 var x;
 x = 3;

 Is it possible that the compiler can optimize such code to 
 find the
 least amount of data to represent x without issue?  Yes or no?

Yes, and in fact it's already done. Consider:

if (expr)
{
    int a;
    ...
}
else
{
    int b;
    ...
}

In some C implementations, a and b have the same physical address. In some others, they have distinct addresses. This appears to not be related, but it is, insofar as a and b have non-overlapping lifetimes.
 Is this a
 good thing? Yes or no?

It's marginally good - increases stack locality and makes it simpler for a register allocator. (In fact all register allocators do that already, otherwise they'd suck.)
 (I don't need and don't want any explanation)

Too late :o). Andrei

To be honest, your reply seems to be the only one that attempts to discuss exactly what I asked. Nothing more, nothing less. I do realize there was some confusion between what Crystal does and what I'm talking about... I still think the two are being confused by some, and I'm not sure anyone quite gets exactly what I am talking about (which is not re-aliasing any variables, not using a sort of variant type (directly at least), and not having a multi-variable (e.g., Crystal)).

What would be nice is an experimental version of D where one could easily extend the language to try out such concepts, to see if they truly are useful and how difficult they are to implement. E.g., I could attempt to add said "feature", it could be merged into the experimental compiler, and those interested could download the compiler and test the feature out... all without negatively affecting D directly. If such features could be implemented dynamically then it would probably be pretty powerful.

The example I gave was sort of the reverse. Instead of expanding the type into a supertype, we are reducing it:

float x;
x = 3;

Here x could be stored as a byte, which would potentially be an increase in performance. Reducing the type can be pretty dangerous, though, unless it is verifiable. I'm somewhat convinced that expanding the type is almost always safe (at least in safe code), although not necessarily performant. IMO it makes auto more powerful in most cases, but only a test bed can really say how much.
Jul 01 2013
prev sibling next sibling parent "JS" <js.mdnq gmail.com> writes:
On Tuesday, 2 July 2013 at 02:15:09 UTC, Andrei Alexandrescu 
wrote:
 On 7/1/13 6:29 PM, JS wrote:
 Would would be nice is an experimental version of D where 
 would could
 easily extend the language to try out such concepts to see if 
 they truly
 are useful and how difficult to implement. e.g., I could 
 attempt to add
 said "feature", it could be merged with the experimental 
 compiler, those
 interested can download the compiler and test the feature 
 out... all
 without negatively affecting D directly. If such features 
 could be
 implemented dynamically then it would probably be pretty 
 powerful.

I don't think such a feature would make it in D, even if the implementation cost was already sunken (i.e. an implementation was already done and one pull request away). Ascribing distinct objects to the same symbol is a very core feature that affects and is affected by everything else. We'd design a lot of D differently if that particular feature were desired, and now the fundamentals of the design are long frozen. For a very simple example, consider: auto a = 2.5; // fine, a is double ... a = 3;

No, not under what I am talking about. You can't downgrade a type, only upgrade it. After a = 3, a is still a double. Using the concept I am talking about, your example does nothing new.

But reverse the numbers:

auto a = 3;
a = 2.5;

and a is now a double, and your logic then becomes correct, EXCEPT that a is expanded, which is safe. I really don't know how to make it any clearer, but I'm not sure anyone understands what I'm talking about ;/
 By the proposed rule a will become an entirely different 
 variable of type int, and the previous double variable would 
 disappear. But current rules dictate that the type stays 
 double. So we'd either have an unthinkably massive breakage, or 
 we'd patch the language with a million exceptions.

 Even so! If the feature were bringing amazing power, there may 
 still be a case in its favor. But fundamentally it doesn't 
 bring anything new - it's just alpha renaming; it doesn't 
 enable doing anything that couldn't be done without it.

Expanding a type is always valid because it just consumes more memory. A double can always masquerade as an int without issue, because one just wastes 4 bytes. An int can't masquerade as a double, because any function that uses it as a double will corrupt 4 bytes of memory. (I'm ignoring that a double and an int use different CPU instructions; this is irrelevant unless we are hacking stuff up.)

The simplest example I can give is:

auto x = 2;
x = 2.5;

x IS a double, regardless of the fact that auto x = 2; makes it look like an int BECAUSE of how auto is currently defined (which might be the source of the confusion). The reason is that the compiler looked at the whole scope, at all assignments to x, and was able to determine automatically that x needed to be a double.

I'll give one more way to look at this, a sort of in-between but necessary logical step. Currently, auto looks at the immediate assignment after its keyword to determine the type, correct? E.g.:

auto x = 3;

What if we allowed auto to look at the first assignment to x, not necessarily the immediate assignment? E.g.:

auto x;
x = 3; // should be identical to the above

or

auto x;
... // (no assignments to x)
x = 3;

All of these should be semantically equivalent, correct? To me, the last case is more powerful since it is more general. Of course, one could argue that it makes it more difficult to know the type of x, but I doubt this would be a huge issue.
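For contrast, a sketch of what today's rules do with the same code (only the declaration's initializer is consulted, so the order of the values matters):

```d
void main()
{
    auto x = 2;     // today: the initializer alone fixes the type
    static assert(is(typeof(x) == int));
    // x = 2.5;     // error: cannot implicitly convert 2.5 (double) to int

    auto y = 2.5;   // declaring with the widest value first works under current rules
    y = 3;          // fine: the int literal implicitly converts to double
    static assert(is(typeof(y) == double));
    assert(y == 3.0);
}
```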
Jul 01 2013
prev sibling next sibling parent "JS" <js.mdnq gmail.com> writes:
On Tuesday, 2 July 2013 at 03:17:58 UTC, Ali Çehreli wrote:
 On 07/01/2013 07:35 PM, JS wrote:

 On Tuesday, 2 July 2013 at 02:15:09 UTC, Andrei Alexandrescu

 auto a = 2.5; // fine, a is double
 ...
 a = 3;

No, not under what I am talking about. You can't downgrade a

 upgrade it. a = 3, a is still a float. Using the concept I am

 about, your example does nothing new.

 but reverse the numbers:

 auto a = 3;
 a = 2.5;

 and a is now a float, and your logic then becomes correct

 expanded, which is safe.

 I really don't know how to make it any clearer but I'm not

 anyone understands what I'm talking about ;/

I think I understand. I think I heard, either on this or your other thread, that function overloading may produce confusing results. Consider the following program:

void foo(int i) {}
void foo(double d) {}

void main()
{
    auto a = 3;
    foo(a);

    // Some time later somebody adds the following line:
    a = 2.5;
}

If the type of 'a' would suddenly be double from that point on, foo(a) would silently go to a different function. It may be that calling the 'double' overload is the right thing to do, but it may just as well be completely the wrong thing to do.

Yes, basically. If one coded for integers then decided to change it to doubles and was doing some weird stuff then it could completely change the semantics.
 The difference is, today the compiler warns me about the 
 incompatible types. With the proposed feature, the semantics of 
 the program might be different without any warning.

Yes, this is the "downside" I see. But there doesn't otherwise seem to be any inherent reason why it's a bad idea.
 Of course one may argue that every line must be added very 
 carefully and the unit tests must be comprehensive, etc. Of 
 course I agree but I am another person who does not see the 
 benefit of this proposal. It is never a chore to modify the 
 type of a variable when the compiler warns me about an 
 incompatibility.

No one says that the compiler couldn't still warn you. One could use a different keyword from the start, say *autoscope*. If you use that from the get-go then you know full well that strange things are possible (and D can already do strange things without such a "feature"). Although I am of a different mindset than you are, in that I like to have more control instead of less. You can't adapt to change if it never happens. Such a feature would probably, in 98% of cases, result in something beneficial. Again, after all, if you don't like it, don't use it... I do think it would simplify a few things, though.

Actually, as I mentioned before, there are some use cases where it does reduce code complexity. If one is doing something like this:

auto foo(double x)
{
    if (typeof(x) == int)
        return "";
    else
        return 2;
}

then they are asking for trouble. With

auto x;
x = foo(1);

either x = 2.5 or x = 3 results in x becoming a very different type. (And maybe this is essentially the objection people have... but note it's not because of x... auto x = foo(); does the exact same thing.)

I believe that having the best tool for the job is what is important, but the tools should be available to be used. (This is a general statement; I'm not talking about this "feature".) When you need a bazooka you need a bazooka... not having one really sucks... your pea shooter might do the job 99% of the time and that's fine, that's what should be used. But trying to take out a tank with a pea shooter isn't going to cut it.
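For comparison, the "one name whose type changes" behavior is already reachable today through std.variant, at the cost of moving the typing to run time (a sketch):

```d
import std.variant;

void main()
{
    Variant x;  // one name whose held type may change at run time
    x = 3;
    assert(x.type == typeid(int));

    x = 2.5;    // re-assignment changes the held type
    assert(x.type == typeid(double));
    assert(x.get!double == 2.5);
}
```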
Jul 01 2013
prev sibling next sibling parent "F i L" <witte2008 gmail.com> writes:
Steven Schveighoffer wrote:
 There are very good reasons not to do this, even if possible.  
 Especially if the type can change.

+1 This sort of inference can only lead to problems down the line, IMO.
Jul 01 2013
prev sibling parent "Ivan Kazmenko" <gassa mail.ru> writes:
On Saturday, 29 June 2013 at 03:20:27 UTC, JS wrote:
 I don't disagree with you and I'm not saying auto is not 
 useful. IMO though, auto is almost all convenience and very 
 little to do with solving errors.

 A very simple use case is:

 auto x = 0;
 ...
 x = complex(1, 1) + x;

 which obviously is an error.

Let me expand the example with this:

auto x = 0;
complex y;
...
x = complex(1, 1) + x; // ouch, I meant the next line!
y = complex(1, 1) + x;

Personally, I appreciate the strengths of static typing here, forcing me to choose which of the two behaviors I meant instead of silently picking one of them.

Ivan Kazmenko.
Jul 02 2013