
digitalmars.D - No more implicit conversion real->complex?!

reply Norbert Nemec <Norbert Nemec-online.de> writes:
I just noticed that as of D 0.150, implicit conversion from
real/imaginary to complex no longer works. I could not find
the message containing the suggestion by Don Clugston, so I'm not sure
about the rationale.

In any case: if this conversion no longer works implicitly, I
wonder whether I understand the rule for which conversions do work.
real->complex is possible without ambiguity or loss of information.
Why not make it implicit?

I think this is an important issue: in numerics, mixing of real and
complex values happens all the time, therefore it should be as simple as
possible.
Mar 20 2006
next sibling parent "Walter Bright" <newshound digitalmars.com> writes:
"Norbert Nemec" <Norbert Nemec-online.de> wrote in message 
news:dvn3vm$qpu$1 digitaldaemon.com...
I just notice that as of D 0.150, implicit conversion from
 real/imaginary to complex does not work any more. I could not make out
 the message containing the suggestion by Don Clugston, so I'm not sure
 about the rationale.

 In any case: if this conversion does not work implicitely any more, I
 wonder whether I understand the rule which conversions do? real->complex
 is possible without ambiguities or loss of information. Why not make it
 implicit?

 I think this is an important issue: in numerics, mixing of real and
 complex values happens all the time, therefore it should be as simple as
 possible.

The issue Don brought up was the problem of overload resolution between functions taking real args and those taking complex args.
Mar 20 2006
prev sibling parent reply Don Clugston <dac nospam.com.au> writes:
Norbert Nemec wrote:
 I just notice that as of D 0.150, implicit conversion from
 real/imaginary to complex does not work any more. I could not make out
 the message containing the suggestion by Don Clugston, so I'm not sure
 about the rationale.
 
 In any case: if this conversion does not work implicitely any more, I
 wonder whether I understand the rule which conversions do? real->complex
 is possible without ambiguities or loss of information. Why not make it
 implicit?

It's not 100% unambiguous; there are two possible conversions: 7.2 -> 7.2 + 0i and 7.2 -> 7.2 - 0i. OK, it's not a big deal. But the real problem is that with that implicit conversion in place, overload resolution is a real nuisance. Consider:

creal sin(creal c);
real sin(real x);

writefln( sin(3.2) );

Now, 3.2 is a double, so it tries to find sin(double). This fails, so it tries implicit conversions. Both sin(creal) and sin(real) are possible, so the call is ambiguous, and compilation will fail. Up to now, the only way of overcoming this was to supply separate functions for float, double, real, and creal arguments. This is clumsy, and becomes impractical once multiple arguments are used.
 I think this is an important issue: in numerics, mixing of real and
 complex values happens all the time, therefore it should be as simple as
 possible.

I agree. But the implicit conversions were actually making mixing of real and complex functions much more difficult. It would be good to have someone other than me seriously thinking about these issues, and gaining some experience with numerics in D.
Mar 20 2006
next sibling parent reply kris <foo bar.com> writes:
Don Clugston wrote:
 Norbert Nemec wrote:
 
 I just notice that as of D 0.150, implicit conversion from
 real/imaginary to complex does not work any more. I could not make out
 the message containing the suggestion by Don Clugston, so I'm not sure
 about the rationale.

 In any case: if this conversion does not work implicitely any more, I
 wonder whether I understand the rule which conversions do? real->complex
 is possible without ambiguities or loss of information. Why not make it
 implicit?

It's not 100% unambiguous, there are two possible conversions 7.2 -> 7.2 + 0i and 7.2 -> 7.2 - 0i. OK, it's not a big deal. But the real problem is that with that implicit conversion in place, overload resolution is a real nuisance. Consider creal sin(creal c); real sin(real x); writefln( sin(3.2) ); Now, 3.2 is a double, so it tries to find sin(double). This fails, so it tries implicit conversions. Both sin(creal) and sin(real) are possible, so it's ambiguous, and compilation will fail. Up to now, the only way of overcoming this was to supply seperate functions for float, double, real, and creal arguments. This is clumsy, and becomes impractical once multiple arguments are used.
 I think this is an important issue: in numerics, mixing of real and
 complex values happens all the time, therefore it should be as simple as
 possible.

I agree. But the implicit conversions were actually making mixing of real and complex functions much more difficult. It would be good to have someone other than me seriously thinking about these issues, and gaining some experience with numerics in D.

By this argument, if the overloaded types were char and long (instead of creal & real) then D should not allow implicit conversion there?
Mar 21 2006
next sibling parent reply "Regan Heath" <regan netwin.co.nz> writes:
On Tue, 21 Mar 2006 00:15:37 -0800, kris <foo bar.com> wrote:
 Don Clugston wrote:
 Norbert Nemec wrote:

 I just notice that as of D 0.150, implicit conversion from
 real/imaginary to complex does not work any more. I could not make out
 the message containing the suggestion by Don Clugston, so I'm not sure
 about the rationale.

 In any case: if this conversion does not work implicitely any more, I
 wonder whether I understand the rule which conversions do?  
 real->complex
 is possible without ambiguities or loss of information. Why not make it
 implicit?

7.2 -> 7.2 + 0i and 7.2 -> 7.2 - 0i. OK, it's not a big deal. But the real problem is that with that implicit conversion in place, overload resolution is a real nuisance. Consider creal sin(creal c); real sin(real x); writefln( sin(3.2) ); Now, 3.2 is a double, so it tries to find sin(double). This fails, so it tries implicit conversions. Both sin(creal) and sin(real) are possible, so it's ambiguous, and compilation will fail. Up to now, the only way of overcoming this was to supply seperate functions for float, double, real, and creal arguments. This is clumsy, and becomes impractical once multiple arguments are used.
 I think this is an important issue: in numerics, mixing of real and
 complex values happens all the time, therefore it should be as simple  
 as
 possible.

real and complex functions much more difficult. It would be good to have someone other than me seriously thinking about these issues, and gaining some experience with numerics in D.

By this argument, if the overloaded types were char and long (instead of creal & real) then D should not allow implicit conversion there?

You can disambiguate char and long by using an "IntegerSuffix", eg.

void foo(char c) {}
void foo(long c) {}

void main()
{
    foo(5L);
}

Is the same true for real and creal? (I've not done very much in the way of numerics and I've never used creal, so I honestly don't know)

Regan
Mar 21 2006
parent Don Clugston <dac nospam.com.au> writes:
Regan Heath wrote:
 On Tue, 21 Mar 2006 00:15:37 -0800, kris <foo bar.com> wrote:
 Don Clugston wrote:
 Norbert Nemec wrote:

 I just notice that as of D 0.150, implicit conversion from
 real/imaginary to complex does not work any more. I could not make out
 the message containing the suggestion by Don Clugston, so I'm not sure
 about the rationale.

 In any case: if this conversion does not work implicitely any more, I
 wonder whether I understand the rule which conversions do? 
 real->complex
 is possible without ambiguities or loss of information. Why not make it
 implicit?

7.2 -> 7.2 + 0i and 7.2 -> 7.2 - 0i. OK, it's not a big deal. But the real problem is that with that implicit conversion in place, overload resolution is a real nuisance. Consider creal sin(creal c); real sin(real x); writefln( sin(3.2) ); Now, 3.2 is a double, so it tries to find sin(double). This fails, so it tries implicit conversions. Both sin(creal) and sin(real) are possible, so it's ambiguous, and compilation will fail. Up to now, the only way of overcoming this was to supply seperate functions for float, double, real, and creal arguments. This is clumsy, and becomes impractical once multiple arguments are used.
 I think this is an important issue: in numerics, mixing of real and
 complex values happens all the time, therefore it should be as 
 simple as
 possible.

of real and complex functions much more difficult. It would be good to have someone other than me seriously thinking about these issues, and gaining some experience with numerics in D.

By this argument, if the overloaded types were char and long (instead of creal & real) then D should not allow implicit conversion there?

You can disambiguate char and long by using an "IntegerSuffix" eg. void foo(char c) {} void foo(long c) {} void main() { foo(5L); } Is the same true for real and creal?

Yes. But IMHO, it is ridiculous to expect user code to write sin(2L); instead of sin(2); just because std.math includes sin(creal). (It could be a high-school student who's never heard of complex numbers!)
Mar 21 2006
prev sibling parent reply Don Clugston <dac nospam.com.au> writes:
kris wrote:
 Don Clugston wrote:
 Norbert Nemec wrote:

 I just notice that as of D 0.150, implicit conversion from
 real/imaginary to complex does not work any more. I could not make out
 the message containing the suggestion by Don Clugston, so I'm not sure
 about the rationale.

 In any case: if this conversion does not work implicitely any more, I
 wonder whether I understand the rule which conversions do? real->complex
 is possible without ambiguities or loss of information. Why not make it
 implicit?

It's not 100% unambiguous, there are two possible conversions 7.2 -> 7.2 + 0i and 7.2 -> 7.2 - 0i. OK, it's not a big deal. But the real problem is that with that implicit conversion in place, overload resolution is a real nuisance. Consider creal sin(creal c); real sin(real x); writefln( sin(3.2) ); Now, 3.2 is a double, so it tries to find sin(double). This fails, so it tries implicit conversions. Both sin(creal) and sin(real) are possible, so it's ambiguous, and compilation will fail. Up to now, the only way of overcoming this was to supply seperate functions for float, double, real, and creal arguments. This is clumsy, and becomes impractical once multiple arguments are used.
 I think this is an important issue: in numerics, mixing of real and
 complex values happens all the time, therefore it should be as simple as
 possible.

I agree. But the implicit conversions were actually making mixing of real and complex functions much more difficult. It would be good to have someone other than me seriously thinking about these issues, and gaining some experience with numerics in D.

By this argument, if the overloaded types were char and long (instead of creal & real) then D should not allow implicit conversion there?

I can't think of many examples where you have overloads of both char and long. But it's _extremely_ common for complex functions to be overloads of real functions. Let's not forget that the purpose of implicit conversions is convenience. IMHO, real->creal fails to be convenient, given D's simple lookup rules.
Mar 21 2006
parent reply kris <foo bar.com> writes:
Don Clugston wrote:
 kris wrote:
 
 Don Clugston wrote:

 Norbert Nemec wrote:

 I just notice that as of D 0.150, implicit conversion from
 real/imaginary to complex does not work any more. I could not make out
 the message containing the suggestion by Don Clugston, so I'm not sure
 about the rationale.

 In any case: if this conversion does not work implicitely any more, I
 wonder whether I understand the rule which conversions do? 
 real->complex
 is possible without ambiguities or loss of information. Why not make it
 implicit?

It's not 100% unambiguous, there are two possible conversions 7.2 -> 7.2 + 0i and 7.2 -> 7.2 - 0i. OK, it's not a big deal. But the real problem is that with that implicit conversion in place, overload resolution is a real nuisance. Consider creal sin(creal c); real sin(real x); writefln( sin(3.2) ); Now, 3.2 is a double, so it tries to find sin(double). This fails, so it tries implicit conversions. Both sin(creal) and sin(real) are possible, so it's ambiguous, and compilation will fail. Up to now, the only way of overcoming this was to supply seperate functions for float, double, real, and creal arguments. This is clumsy, and becomes impractical once multiple arguments are used.
 I think this is an important issue: in numerics, mixing of real and
 complex values happens all the time, therefore it should be as 
 simple as
 possible.

I agree. But the implicit conversions were actually making mixing of real and complex functions much more difficult. It would be good to have someone other than me seriously thinking about these issues, and gaining some experience with numerics in D.

By this argument, if the overloaded types were char and long (instead of creal & real) then D should not allow implicit conversion there?

I can't think of many examples where you have overloads of both char and long. But it's _extremely_ common for complex functions to be overloads of real functions. Let's not forget that the purpose of implicit conversions is for convenience. IMHO, real->creal fails to be convenient, given the D's simple lookup rules.

Yes, Don, but isn't that a question of extent? You argue, reasonably, for a distinction between creal & real. Surely the same argument can be used to distinguish between a UTF8 char and a signed 64-bit integer? I mean, the latter two are of significantly different type, with quite distinct intent. Isn't it just as inconvenient to have those bumping into each other? Suffixes can conceivably be used to disambiguate the utf8/long case, yet surely that same approach could be applied to creal vs real?

Alternatively, one might argue that suffixes themselves are entirely inconvenient. They are certainly not clear to a 'novice' (what with all the talk of D as a first language), and can become confusing to an 'expert' too ~ especially, I imagine, when maintaining someone else's code?

I think what you pointed out here is that type coercion (as it stands) should probably only work for a few select types, where it actually makes sense. Walter has now changed the compiler to abolish type coercion for real/creal; should this trend not continue to other types, where it makes sense to do so? Otherwise, shouldn't the change have been to apply suffix-distinction in the cases you talk about?

Unfortunately, this opens up the whole type-selection concern regarding arguments of the literal variety, of any type, within D. A concern that keeps coming up :-)
Mar 21 2006
parent reply Sean Kelly <sean f4.ca> writes:
kris wrote:
 Don Clugston wrote:
 kris wrote:

 Don Clugston wrote:

 Norbert Nemec wrote:

 I just notice that as of D 0.150, implicit conversion from
 real/imaginary to complex does not work any more. I could not make out
 the message containing the suggestion by Don Clugston, so I'm not sure
 about the rationale.

 In any case: if this conversion does not work implicitely any more, I
 wonder whether I understand the rule which conversions do? 
 real->complex
 is possible without ambiguities or loss of information. Why not 
 make it
 implicit?

It's not 100% unambiguous, there are two possible conversions 7.2 -> 7.2 + 0i and 7.2 -> 7.2 - 0i. OK, it's not a big deal. But the real problem is that with that implicit conversion in place, overload resolution is a real nuisance. Consider creal sin(creal c); real sin(real x); writefln( sin(3.2) ); Now, 3.2 is a double, so it tries to find sin(double). This fails, so it tries implicit conversions. Both sin(creal) and sin(real) are possible, so it's ambiguous, and compilation will fail. Up to now, the only way of overcoming this was to supply seperate functions for float, double, real, and creal arguments. This is clumsy, and becomes impractical once multiple arguments are used.
 I think this is an important issue: in numerics, mixing of real and
 complex values happens all the time, therefore it should be as 
 simple as
 possible.

I agree. But the implicit conversions were actually making mixing of real and complex functions much more difficult. It would be good to have someone other than me seriously thinking about these issues, and gaining some experience with numerics in D.

By this argument, if the overloaded types were char and long (instead of creal & real) then D should not allow implicit conversion there?

I can't think of many examples where you have overloads of both char and long. But it's _extremely_ common for complex functions to be overloads of real functions. Let's not forget that the purpose of implicit conversions is for convenience. IMHO, real->creal fails to be convenient, given the D's simple lookup rules.

Yes, Don, but isn't that a question of extent? You argue, reasonably, for a distinction between creal & real. Surely the same argument can be used to distinguish between a UTF8 char and a signed 64-bit integer? I mean, the latter two are of significantly different type, with quite distinct intent. Isn't it just as inconvenient to have those bumping into each other?

Yes :-) However, there may be compatibility reasons to support the char conversion, as C does as well. Sean
Mar 21 2006
parent reply kris <foo bar.com> writes:
Sean Kelly wrote:
 kris wrote:
 
 Don Clugston wrote:

 kris wrote:

 Don Clugston wrote:

 Norbert Nemec wrote:

 I just notice that as of D 0.150, implicit conversion from
 real/imaginary to complex does not work any more. I could not make 
 out
 the message containing the suggestion by Don Clugston, so I'm not 
 sure
 about the rationale.

 In any case: if this conversion does not work implicitely any more, I
 wonder whether I understand the rule which conversions do? 
 real->complex
 is possible without ambiguities or loss of information. Why not 
 make it
 implicit?

It's not 100% unambiguous, there are two possible conversions 7.2 -> 7.2 + 0i and 7.2 -> 7.2 - 0i. OK, it's not a big deal. But the real problem is that with that implicit conversion in place, overload resolution is a real nuisance. Consider creal sin(creal c); real sin(real x); writefln( sin(3.2) ); Now, 3.2 is a double, so it tries to find sin(double). This fails, so it tries implicit conversions. Both sin(creal) and sin(real) are possible, so it's ambiguous, and compilation will fail. Up to now, the only way of overcoming this was to supply seperate functions for float, double, real, and creal arguments. This is clumsy, and becomes impractical once multiple arguments are used.
 I think this is an important issue: in numerics, mixing of real and
 complex values happens all the time, therefore it should be as 
 simple as
 possible.

I agree. But the implicit conversions were actually making mixing of real and complex functions much more difficult. It would be good to have someone other than me seriously thinking about these issues, and gaining some experience with numerics in D.

By this argument, if the overloaded types were char and long (instead of creal & real) then D should not allow implicit conversion there?

I can't think of many examples where you have overloads of both char and long. But it's _extremely_ common for complex functions to be overloads of real functions. Let's not forget that the purpose of implicit conversions is for convenience. IMHO, real->creal fails to be convenient, given the D's simple lookup rules.

Yes, Don, but isn't that a question of extent? You argue, reasonably, for a distinction between creal & real. Surely the same argument can be used to distinguish between a UTF8 char and a signed 64-bit integer? I mean, the latter two are of significantly different type, with quite distinct intent. Isn't it just as inconvenient to have those bumping into each other?

Yes :-) However, there may be compatibility reasons to support the char conversion, as C does as well. Sean

Aye. There's always some excuse for a lack of consistency :-)

Even then, one might argue that "compatibility" is actually there in name only. Why would anyone convert a C program to D? I've yet to see an extensive example of that; no doubt due to the extensive /incompatibility/ of D with .h files (in truth, I haven't seen any examples). Thus the nobility of "C compatibility" is perhaps just a bit thin?

Wouldn't it be nice to tidy some of this up while the opportunity presents itself?

- Kris
Mar 21 2006
next sibling parent reply Don Clugston <dac nospam.com.au> writes:
kris wrote:
 Sean Kelly wrote:
 kris wrote:

 Don Clugston wrote:

 kris wrote:

 Don Clugston wrote:

 Norbert Nemec wrote:

 I just notice that as of D 0.150, implicit conversion from
 real/imaginary to complex does not work any more. I could not 
 make out
 the message containing the suggestion by Don Clugston, so I'm not 
 sure
 about the rationale.

 In any case: if this conversion does not work implicitely any 
 more, I
 wonder whether I understand the rule which conversions do? 
 real->complex
 is possible without ambiguities or loss of information. Why not 
 make it
 implicit?

It's not 100% unambiguous, there are two possible conversions 7.2 -> 7.2 + 0i and 7.2 -> 7.2 - 0i. OK, it's not a big deal. But the real problem is that with that implicit conversion in place, overload resolution is a real nuisance. Consider creal sin(creal c); real sin(real x); writefln( sin(3.2) ); Now, 3.2 is a double, so it tries to find sin(double). This fails, so it tries implicit conversions. Both sin(creal) and sin(real) are possible, so it's ambiguous, and compilation will fail. Up to now, the only way of overcoming this was to supply seperate functions for float, double, real, and creal arguments. This is clumsy, and becomes impractical once multiple arguments are used.
 I think this is an important issue: in numerics, mixing of real and
 complex values happens all the time, therefore it should be as 
 simple as
 possible.

I agree. But the implicit conversions were actually making mixing of real and complex functions much more difficult. It would be good to have someone other than me seriously thinking about these issues, and gaining some experience with numerics in D.

By this argument, if the overloaded types were char and long (instead of creal & real) then D should not allow implicit conversion there?

I can't think of many examples where you have overloads of both char and long. But it's _extremely_ common for complex functions to be overloads of real functions. Let's not forget that the purpose of implicit conversions is for convenience. IMHO, real->creal fails to be convenient, given the D's simple lookup rules.

Yes, Don, but isn't that a question of extent? You argue, reasonably, for a distinction between creal & real. Surely the same argument can be used to distinguish between a UTF8 char and a signed 64-bit integer? I mean, the latter two are of significantly different type, with quite distinct intent. Isn't it just as inconvenient to have those bumping into each other?



 Yes :-)  However, there may be compatibility reasons to support the 
 char conversion, as C does as well.

 Sean

Aye. There's always some excuse for a lack of consistency :-) Even then, one might argue that "compatability" is actually there in name only. Why would anyone convert a C program to D? I've yet to see an extensive example of that; no doubt due to the extensive /incompatability/ of D with .h files (in truth, I haven't seen any examples) Thus the nobility of "C compatability" is perhaps just a bit thin? Wouldn't it be nice to tidy some of this up while the opportunity presents itself? - Kris

I think you're right. I think the "C compatibility" argument is relevant only for cases where changes would be very common and hard to track down. (FWIW, I've converted the Cephes math libraries from C to D. In every case, the tighter D language rules improved the code.)

I haven't done much work with char/wchar/dchar in D, so I don't have much idea of how troublesome the implicit conversions are. Undoubtedly you have the most experience here. If you're also finding them to be a nuisance rather than a convenience, we should reconsider them.

It does seem to me that implicit widening conversions are nearly always helpful, but those that change the semantics (wchar->short, real->creal, etc) seem much more likely to be annoying. I particularly dislike conversions that silently insert code.

The suggestion was made to have two levels of implicit conversion, roughly similar to the promotion rules, ie:

* exact match
* match with implicit widening conversions
* match with implicit semantic-changing conversions

Would this improve the situation for char/int conversions?
Mar 22 2006
next sibling parent Sean Kelly <sean f4.ca> writes:
Don Clugston wrote:
 kris wrote:
 Thus the nobility of "C compatability" is perhaps just a bit thin? 
 Wouldn't it be nice to tidy some of this up while the opportunity 
 presents itself?

I think you're right. I think the "C compatibility" argument is relevant only for cases where changes would be very common and hard to track down. (FWIW, I've converted the Cephes math libraries from C to D. In every case, the tighter D language rules improved the code.) I haven't done much work with char/wchar/dchar in D, so I don't have much idea of how troublesome the implicit conversions are. Undoubtedly you have the most experience here. If you're also finding them to be a nuisance rather than a convenience, we should reconsider them. It does seem to me that implicit widening conversions are nearly always helpful, but those that change the semantics (wchar->short, real->creal, etc) seem much more likely to be annoying. I particularly dislike conversions that silently insert code.

I think it might be ideal to allow implicit widening conversions between char types but to disallow promotion from char to integer types. One might argue that math with chars is routinely performed in C, but outside the ASCII character set I don't see that being particularly useful in D. I'd be willing to live with the need for an explicit cast to ubyte/ushort/uint in D, as it seems more meaningful.
 The suggestion was made to have two levels of implicit conversion, 
 roughly similar to the promotion rules, ie:
 
 * exact match
 * match with implicit widening conversions
 * match with implicit semantic-changing conversions.
 
 Would this improve the situation for char/int conversions?

I think it might. Sean
Mar 22 2006
prev sibling parent kris <foo bar.com> writes:
Don Clugston wrote:
 kris wrote:
 
 Sean Kelly wrote:

 kris wrote:

 Don Clugston wrote:

 kris wrote:

 Don Clugston wrote:

 Norbert Nemec wrote:

 I just notice that as of D 0.150, implicit conversion from
 real/imaginary to complex does not work any more. I could not 
 make out
 the message containing the suggestion by Don Clugston, so I'm 
 not sure
 about the rationale.

 In any case: if this conversion does not work implicitely any 
 more, I
 wonder whether I understand the rule which conversions do? 
 real->complex
 is possible without ambiguities or loss of information. Why not 
 make it
 implicit?

It's not 100% unambiguous, there are two possible conversions 7.2 -> 7.2 + 0i and 7.2 -> 7.2 - 0i. OK, it's not a big deal. But the real problem is that with that implicit conversion in place, overload resolution is a real nuisance. Consider creal sin(creal c); real sin(real x); writefln( sin(3.2) ); Now, 3.2 is a double, so it tries to find sin(double). This fails, so it tries implicit conversions. Both sin(creal) and sin(real) are possible, so it's ambiguous, and compilation will fail. Up to now, the only way of overcoming this was to supply seperate functions for float, double, real, and creal arguments. This is clumsy, and becomes impractical once multiple arguments are used.
 I think this is an important issue: in numerics, mixing of real and
 complex values happens all the time, therefore it should be as 
 simple as
 possible.

I agree. But the implicit conversions were actually making mixing of real and complex functions much more difficult. It would be good to have someone other than me seriously thinking about these issues, and gaining some experience with numerics in D.

By this argument, if the overloaded types were char and long (instead of creal & real) then D should not allow implicit conversion there?

I can't think of many examples where you have overloads of both char and long. But it's _extremely_ common for complex functions to be overloads of real functions. Let's not forget that the purpose of implicit conversions is for convenience. IMHO, real->creal fails to be convenient, given the D's simple lookup rules.

Yes, Don, but isn't that a question of extent? You argue, reasonably, for a distinction between creal & real. Surely the same argument can be used to distinguish between a UTF8 char and a signed 64-bit integer? I mean, the latter two are of significantly different type, with quite distinct intent. Isn't it just as inconvenient to have those bumping into each other?



 Yes :-)  However, there may be compatibility reasons to support the 
 char conversion, as C does as well.

 Sean

Aye. There's always some excuse for a lack of consistency :-) Even then, one might argue that "compatability" is actually there in name only. Why would anyone convert a C program to D? I've yet to see an extensive example of that; no doubt due to the extensive /incompatability/ of D with .h files (in truth, I haven't seen any examples) Thus the nobility of "C compatability" is perhaps just a bit thin? Wouldn't it be nice to tidy some of this up while the opportunity presents itself? - Kris

I think you're right. I think the "C compatibility" argument is relevant only for cases where changes would be very common and hard to track down. (FWIW, I've converted the Cephes math libraries from C to D. In every case, the tighter D language rules improved the code.) I haven't done much work with char/wchar/dchar in D, so I don't have much idea of how troublesome the implicit conversions are. Undoubtedly you have the most experience here. If you're also finding them to be a nuisance rather than a convenience, we should reconsider them.

It has been a nuisance, but (so far) less than the trouble caused by string literals (which is pretty awful). In retrospect, it's a shame they weren't called utf8, utf16, and utf32 from the start. Char would have little reason to exist at that point; only byte/ubyte.
 It does seem to me that implicit widening conversions are nearly always 
 helpful, but those ones that change the semantics (wchar->short, 
 real->creal, etc) seem much more likely to be annoying. I particularly 
 dislike conversions that silently insert code.

Agreed. Although "annoying" is perhaps a bit light-hearted for what happens in reality :-)
 
 The suggestion was made to have two levels of implicit conversion, 
 roughly similar to the promotion rules, ie:
 
 * exact match
 * match with implicit widening conversions
 * match with implicit semantic-changing conversions.
 
 Would this improve the situation for char/int conversions?

I wouldn't doubt it. Something similar would probably fix the problem with string literals also? (although, I believe there's an easier and more consistent way to resolve the latter).
Mar 22 2006
prev sibling parent reply "Walter Bright" <newshound digitalmars.com> writes:
"kris" <foo bar.com> wrote in message news:4420640A.7020208 bar.com...
 Even then, one might argue that "compatibility" is actually there in name 
 only. Why would anyone convert a C program to D? I've yet to see an 
 extensive example of that; no doubt due to the extensive /incompatibility/ 
 of D with .h files (in truth, I haven't seen any examples)

Take a look at std.md5, std.random, etc. For C++ to D, see std.regexp.
Mar 23 2006
parent kris <foo bar.com> writes:
Walter Bright wrote:
 "kris" <foo bar.com> wrote in message news:4420640A.7020208 bar.com...
 
Even then, one might argue that "compatibility" is actually there in name 
only. Why would anyone convert a C program to D? I've yet to see an 
extensive example of that; no doubt due to the extensive /incompatibility/ 
of D with .h files (in truth, I haven't seen any examples)

Take a look at std.md5, std.random, etc. For C++ to D, see std.regexp.

Thanks. Those do count as /any/ examples, but what I called an "extensive example" doesn't cover such things as md5 and random. Regexp is a better example, yet is still just one 'module', thus avoiding much of the need for numerous .h files. The latter is where the issue lies in what I was referring to (as stated above) ~ larger C projects such as, say, an XML parser or text editor are a completely different kettle of fish. The "compatibility" with C is a nice check-mark, but IMO the only real benefit is familiarity of syntax. For anything else, said "compatibility" is seriously limited; to the point of hubris vis-a-vis larger C projects. That's just fine though; realistically, there's precious little reason to do otherwise.
Mar 23 2006
prev sibling parent reply "Rioshin an'Harthen" <rharth75 hotmail.com> writes:
"Don Clugston" <dac nospam.com.au> wrote in message 
news:dvobhe$2cm6$1 digitaldaemon.com...
 Norbert Nemec wrote:
 I just notice that as of D 0.150, implicit conversion from
 real/imaginary to complex does not work any more. I could not make out
 the message containing the suggestion by Don Clugston, so I'm not sure
 about the rationale.

 In any case: if this conversion does not work implicitely any more, I
 wonder whether I understand the rule which conversions do? real->complex
 is possible without ambiguities or loss of information. Why not make it
 implicit?

It's not 100% unambiguous; there are two possible conversions, 7.2 -> 7.2 + 0i and 7.2 -> 7.2 - 0i. OK, it's not a big deal. But the real problem is that with that implicit conversion in place, overload resolution is a real nuisance. Consider

creal sin(creal c);
real sin(real x);

writefln( sin(3.2) );

Now, 3.2 is a double, so it tries to find sin(double). This fails, so it tries implicit conversions. Both sin(creal) and sin(real) are possible, so it's ambiguous, and compilation will fail. Up to now, the only way of overcoming this was to supply separate functions for float, double, real, and creal arguments. This is clumsy, and becomes impractical once multiple arguments are used.
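The two-step lookup described above (exact match first, then implicit conversions, with two viable conversions being an error) can be sketched outside the compiler. This is a hypothetical Python model, not DMD's actual internals; the conversion table is an illustrative subset with real->creal still allowed:

```python
# Hypothetical model of D's lookup: exact match first, then implicit
# conversions; two viable implicit conversions make the call ambiguous.

# Illustrative subset of implicit conversions, real->creal included.
IMPLICIT = {
    "float":  {"double", "real", "creal"},
    "double": {"real", "creal"},
    "real":   {"creal"},
}

def resolve(arg_type, overloads):
    """Return the parameter type of the matching overload, or raise."""
    if arg_type in overloads:
        return arg_type                          # exact match
    viable = [t for t in overloads if t in IMPLICIT.get(arg_type, ())]
    if len(viable) == 1:
        return viable[0]
    if viable:
        raise TypeError("ambiguous: " + " vs ".join(sorted(viable)))
    raise TypeError("no match")

# sin is overloaded on real and creal; the literal 3.2 is a double.
try:
    resolve("double", ["real", "creal"])
except TypeError as e:
    print(e)                                     # ambiguous: creal vs real
```

Deleting "creal" from IMPLICIT["double"] makes the call resolve to "real" - which is exactly the effect of dropping the implicit real->complex conversion in D 0.150.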

version(delurk) {

I've been thinking about this ever since the last discussion about this, and believe there might be a better solution to the problem at hand than disabling real -> creal implicit conversions.

Since the compiler knows the storage requirements of the different types, and if multiple implicit conversions are possible (e.g. the above mentioned sin( creal ) and sin( real )), why not make the compiler choose the one with the least storage requirement (i.e. changing the original number the least).

So if we have

creal sin( creal c );
real sin( real r );

writefln( sin( 3.2 ) );

as above, the 3.2 is according to specs a double, and we don't find double sin( double d ) anywhere, we try implicit conversions. Now, we have two options: creal sin( creal c ) and real sin( real r ). The storage requirement of creal is larger than that of real, so conversion to real changes the original double less than conversion to creal. Thus, the compiler chooses to convert it into real.

Naturally, we can help the compiler make the choice:

writefln( sin( cast( creal ) 3.2 ) );

would naturally pick the creal version (since 3.2 has been cast to it).

What are your thoughts about this? Could this work? And if this could, should this be added to the D specifications?

}
Mar 21 2006
parent reply Don Clugston <dac nospam.com.au> writes:
Rioshin an'Harthen wrote:
 "Don Clugston" <dac nospam.com.au> wrote in message 
 news:dvobhe$2cm6$1 digitaldaemon.com...
 Norbert Nemec wrote:
 I just notice that as of D 0.150, implicit conversion from
 real/imaginary to complex does not work any more. I could not make out
 the message containing the suggestion by Don Clugston, so I'm not sure
 about the rationale.

 In any case: if this conversion does not work implicitely any more, I
 wonder whether I understand the rule which conversions do? real->complex
 is possible without ambiguities or loss of information. Why not make it
 implicit?

It's not 100% unambiguous; there are two possible conversions, 7.2 -> 7.2 + 0i and 7.2 -> 7.2 - 0i. OK, it's not a big deal. But the real problem is that with that implicit conversion in place, overload resolution is a real nuisance. Consider

creal sin(creal c);
real sin(real x);

writefln( sin(3.2) );

Now, 3.2 is a double, so it tries to find sin(double). This fails, so it tries implicit conversions. Both sin(creal) and sin(real) are possible, so it's ambiguous, and compilation will fail. Up to now, the only way of overcoming this was to supply separate functions for float, double, real, and creal arguments. This is clumsy, and becomes impractical once multiple arguments are used.

version(delurk) {

I've been thinking about this ever since the last discussion about this, and believe there might be a better solution to the problem at hand than disabling real -> creal implicit conversions.

Since the compiler knows the storage requirements of the different types, and if multiple implicit conversions are possible (e.g. the above mentioned sin( creal ) and sin( real )), why not make the compiler choose the one with the least storage requirement (i.e. changing the original number the least).

So if we have

creal sin( creal c );
real sin( real r );

writefln( sin( 3.2 ) );

as above, the 3.2 is according to specs a double, and we don't find double sin( double d ) anywhere, we try implicit conversions. Now, we have two options: creal sin( creal c ) and real sin( real r ). The storage requirement of creal is larger than that of real, so conversion to real changes the original double less than conversion to creal. Thus, the compiler chooses to convert it into real.

Naturally, we can help the compiler make the choice:

writefln( sin( cast( creal ) 3.2 ) );

would naturally pick the creal version (since 3.2 has been cast to it).

What are your thoughts about this? Could this work? And if this could, should this be added to the D specifications?

}

This would mean the lookup rules become more complicated. I think Walter was very keen to keep them simple.
Mar 21 2006
parent reply "Rioshin an'Harthen" <rharth75 hotmail.com> writes:
"Don Clugston" <dac nospam.com.au> wrote in message 
news:dvoklp$2pmq$2 digitaldaemon.com...
 Rioshin an'Harthen wrote:
 "Don Clugston" <dac nospam.com.au> wrote in message 
 news:dvobhe$2cm6$1 digitaldaemon.com...

 I've been thinking about this ever since the last discussion about this, 
 and believe there might be a better solution to the problem at hand than 
 disabling real -> creal implicit conversions.

 Since the compiler knows the storage requirements of the different types, 
 and if multiple implicit conversions are possible (e.g. the above 
 mentioned sin( creal ) and sin( real )), why not make the compiler choose 
 the one with the least storage requirement (i.e. changing the original 
 number the least).

 So if we have

 creal sin( creal c );
 real sin( real r );

 writefln( sin( 3.2 ) );

 as above, the 3.2 is according to specs a double, and we don't find 
 double sin( double d ) anywhere, we try implicit conversions. Now, we 
 have two options: creal sin( creal c ) and real sin( real r ). The 
 storage requirement of creal is larger than that of real, so conversion 
 to real changes the original double less than conversion to creal. Thus, 
 the compiler chooses to convert it into real.

 Naturally, we can help the compiler make the choice:

 writefln( sin( cast( creal ) 3.2 ) );

 would naturally pick the creal version (since 3.2 has been cast to it).

 What are your thoughts about this? Could this work? And if this could, 
 should this be added to the D specifications?

This would mean the lookup rules become more complicated. I think Walter was very keen to keep them simple.

I doubt they'd become that much more complicated. Currently, the DMD compiler has to look up all the possible implicit conversions, and if there's more than one possible conversion, error out because it can't know which one. Now, since it knows the types in question - and the type of the value being passed to the function - it's not that much more to do, IMHO.

Basically, if it would error with an ambiguous implicit conversion, do a search for the minimum of the possible types that are larger than the current type. If this doesn't match any type, emit the error; otherwise select the type. A simple search during compile time is all that's required if we encounter more than one possible implicit conversion, to select the one that is the smallest of the possible ones. Approximately (in some kind of pseudo-code):

type tryImplicitConversion( type intype, type[] implicit_conversion )
{
    // illegal sentinel type, if we can't find any other -
    // its size is defined as maximum
    least_type = illegal_type;

    foreach( type in implicit_conversion )
    {
        if( intype.sizeof > type.sizeof )
            continue; // we're not interested in types that are *less*
                      // in size than our input type

        if( least_type.sizeof < type.sizeof )
            continue; // nor are we interested in larger types than necessary

        least_type = type; // ok, this is the smallest type we can
                           // convert to (we've found so far)
    }

    return least_type;
}
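For what it's worth, the pseudo-code above is easy to make runnable. Here's a hypothetical Python rendering (the byte sizes are illustrative x86 values, and None plays the role of illegal_type):

```python
# Smallest-sufficient-type search, following the pseudo-code above.
SIZEOF = {"float": 4, "double": 8, "real": 10,
          "cfloat": 8, "cdouble": 16, "creal": 20}

def try_implicit_conversion(intype, candidates):
    """Pick the smallest candidate type at least as large as intype."""
    least = None                         # stands in for illegal_type
    for t in candidates:
        if SIZEOF[intype] > SIZEOF[t]:
            continue                     # smaller than the input type
        if least is not None and SIZEOF[least] < SIZEOF[t]:
            continue                     # larger than necessary
        least = t                        # smallest convertible type so far
    return least

# The sin(3.2) case: a double prefers real (10 bytes) over creal (20).
print(try_implicit_conversion("double", ["creal", "real"]))   # real
```

Under this rule the ambiguity between sin(real) and sin(creal) disappears, since real is the smaller of the two sufficient types.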
Mar 21 2006
parent reply Don Clugston <dac nospam.com.au> writes:
Rioshin an'Harthen wrote:
 "Don Clugston" <dac nospam.com.au> wrote in message 
 news:dvoklp$2pmq$2 digitaldaemon.com...
 Rioshin an'Harthen wrote:
 "Don Clugston" <dac nospam.com.au> wrote in message 
 news:dvobhe$2cm6$1 digitaldaemon.com...

 I've been thinking about this ever since the last discussion about this, 
 and believe there might be a better solution to the problem at hand than 
 disabling real -> creal implicit conversions.

 Since the compiler knows the storage requirements of the different types, 
 and if multiple implicit conversions are possible (e.g. the above 
 mentioned sin( creal ) and sin( real )), why not make the compiler choose 
 the one with the least storage requirement (i.e. changing the original 
 number the least).

 So if we have

 creal sin( creal c );
 real sin( real r );

 writefln( sin( 3.2 ) );

 as above, the 3.2 is according to specs a double, and we don't find 
 double sin( double d ) anywhere, we try implicit conversions. Now, we 
 have two options: creal sin( creal c ) and real sin( real r ). The 
 storage requirement of creal is larger than that of real, so conversion 
 to real changes the original double less than conversion to creal. Thus, 
 the compiler chooses to convert it into real.

 Naturally, we can help the compiler make the choice:

 writefln( sin( cast( creal ) 3.2 ) );

 would naturally pick the creal version (since 3.2 has been cast to it).

 What are your thoughts about this? Could this work? And if this could, 
 should this be added to the D specifications?

was very keen to keep them simple.

I doubt they'd become that much more complicated. Currently, the DMD compiler has to look up all the possible implicit conversions, and if there's more than one possible conversion, error out because it can't know which one. Now, since it knows the types in question - and the type of the value being passed to the function - it's not that much more to do, IMHO. Basically, if it would error with an ambiguous implicit conversion, do a search for the minimum of the possible types that are larger than the current type. If this doesn't match any type, emit the error; otherwise select the type. A simple search during compile time is all that's required if we encounter more than one possible implicit conversion, to select the one that is the smallest of the possible ones.

(a) Your scheme would mean that float->cfloat (64 bits) is preferred over float->real (80 bits) on x86 CPUs.

(b) Since the size of real is not fixed, the result of the function lookup could depend on what CPU it's being compiled for!

(c) What if the function has more than one argument?

It might be better to just include a tie-break for the special cases of real types -> complex types, and imaginary -> complex. But I think case (c) is a serious problem anyway. Given

func( real, creal )  // #1
func( creal, real )  // #2
func( creal, creal ) // #3

should func(7.0, 5.0) match #1, #2, or #3?

<genuinequestion>
And if "none, it's still ambiguous", have we really solved the problem?
</genuinequestion>
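Objections (a) and (b) can both be demonstrated with a small model. This is a hypothetical sketch: the sizes are illustrative, and the PowerPC row assumes real aliases double there:

```python
# Under the smallest-sufficient-size rule, a float argument lands on
# cfloat (8 bytes) instead of real (10 bytes) on x86 -- objection (a) --
# and the winner changes with the platform's real size -- objection (b).

def smallest_fit(intype, candidates, sizeof):
    viable = [t for t in candidates if sizeof[t] >= sizeof[intype]]
    return min(viable, key=lambda t: sizeof[t]) if viable else None

x86 = {"float": 4, "real": 10, "cfloat": 8}   # 80-bit real
ppc = {"float": 4, "real": 8,  "cfloat": 8}   # real == double

print(smallest_fit("float", ["real", "cfloat"], x86))   # cfloat
print(smallest_fit("float", ["real", "cfloat"], ppc))   # real (size tie, list order)
```

The same call resolving differently on different targets is exactly the portability hazard objection (b) raises.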
 
 Approximately (in some kind of pseudo-code):
 
 type tryImplicitConversion( type intype, type[] implicit_conversion )
 {
     // illegal sentinel type, if we can't find any other -
     // its size is defined as maximum
     least_type = illegal_type;
 
     foreach( type in implicit_conversion )
     {
         if( intype.sizeof > type.sizeof )
             continue; // we're not interested in types that are *less*
                       // in size than our input type
 
         if( least_type.sizeof < type.sizeof )
             continue; // nor are we interested in larger types than necessary
 
         least_type = type; // ok, this is the smallest type we can
                            // convert to (we've found so far)
     }
 
     return least_type;
 }
 
 

Mar 21 2006
parent reply "Rioshin an'Harthen" <rharth75 hotmail.com> writes:
"Don Clugston" <dac nospam.com.au> wrote in message 
news:dvotc1$489$1 digitaldaemon.com...
 Rioshin an'Harthen wrote:
 I doubt they'd become that much more complicated. Currently, the DMD 
 compiler has to look up all the possible implicit conversions, and if 
 there's more than one possible conversion, error out because it can't 
 know which one.

 Now, since it knows the types in question - and the type of the value 
 being passed to the function, it's not that much more to do, IMHO. 
 Basically, if it would error with an ambiguous implicit conversion, do a 
 search of the minimum of the possible types that are larger than the 
 current type. If this doesn't match any type, do the error, otherwise 
 select the type. A simple search during compile time is all that's 
 required if we encounter more than one possible implicit conversion, to 
 select the one that is the smallest of the possible ones.

(a) your scheme would mean that float->cfloat (64 bits) is preferred over float->real (80 bits) on x86 CPUs.

Hmm... yes, a slight problem in my logic. Still fixable, though. Let's introduce a concept of "family" into this, with a family consisting of:

A: void
B: char, wchar, dchar
C: bool
D: byte, short, int, long, cent
E: ubyte, ushort, uint, ulong, ucent
F: float, double, real
G: ifloat, idouble, ireal
H: cfloat, cdouble, creal

etc. Now, allow implicit conversion upwards in a family, and between families only if impossible to convert inside the family.

This would fix this problem, but it would introduce a different level of complexity into it. It might be worth it, or then again it might not. It's for Walter to decide at a later point. I'd like to have implicit conversion between real and complex numbers - there's a ton of occasions I've used it, so I'm trying to voice some thoughts into the matter on how to preserve those.
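The family rule can be sketched the same way. A hypothetical model (family letters as above; sizes and tables are illustrative):

```python
# Family tie-break: prefer a widening conversion inside the argument's
# own family; fall back to other families only if none is available.
FAMILY = {"float": "F", "double": "F", "real": "F",
          "ifloat": "G", "idouble": "G", "ireal": "G",
          "cfloat": "H", "cdouble": "H", "creal": "H"}
SIZEOF = {"float": 4, "double": 8, "real": 10,
          "ifloat": 4, "idouble": 8, "ireal": 10,
          "cfloat": 8, "cdouble": 16, "creal": 20}

def family_convert(intype, candidates):
    viable = [t for t in candidates if SIZEOF[t] >= SIZEOF[intype]]
    same = [t for t in viable if FAMILY[t] == FAMILY[intype]]
    pool = same or viable            # in-family first, anything second
    return min(pool, key=lambda t: SIZEOF[t]) if pool else None

print(family_convert("float", ["cfloat", "real"]))   # real: family beats raw size
print(family_convert("double", ["creal"]))           # creal: no in-family option
```

Note how this repairs the float->cfloat-vs-real case: real wins despite being the larger type, because it stays in family F.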
 (b) since the size of real is not fixed, the result of the function lookup 
 could depend on what CPU it's being compiled for!

True, real is not fixed in size. But according to the D specification it is the "largest hardware implemented floating point size", and I take that to mean it can't be smaller than double. If a real and a double are the same size, there's no problem, and even less of one if real is larger.
 (c) What if the function has more than one argument?

 It might be better to just include a tie-break for the special cases of
 real types -> complex types, and imaginary -> complex. But I think case 
 (c) is a serious problem anyway.

 given
 func( real, creal ) // #1
 func( creal, real ) // #2
 func( creal, creal) // #3

 should func(7.0, 5.0)
 match #1, #2, or  #3 ?

Well, this is a problem, there's no doubt about it. As I take the example, the intention is that at least one of the arguments of the function has to be complex, and #1 and #2 are more like optimized versions of #3. This is still ambiguous. If we'd go by symmetry, having it match #3, then the question could be posed as: given

func( real, creal ) // #1
func( creal, real ) // #2

should func( 7.0, 5.0 ) match #1 or #2?

Still, I would go for the symmetrical - if any one parameter is implicitly converted, first try a version where as many as possible of the parameters are implicitly converted, unless a cast( ) has been used to explicitly mark a type. So I say (in this case) match #3 - I may be utterly wrong, but it's the feeling I have.
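One way to read that symmetry rule is "among viable overloads, prefer the one that converts the most parameters". A hypothetical sketch, where the converts predicate only knows real -> creal:

```python
# Symmetric tie-break: for func(7.0, 5.0) the all-creal overload #3
# converts both arguments and wins; with only #1 and #2 available the
# call stays ambiguous.

def pick(args, overloads, converts):
    def viable(sig):
        return all(a == p or converts(a, p) for a, p in zip(args, sig))
    def score(sig):
        return sum(a != p for a, p in zip(args, sig))
    cands = [s for s in overloads if viable(s)]
    if not cands:
        return None
    best = max(score(s) for s in cands)
    winners = [s for s in cands if score(s) == best]
    return winners[0] if len(winners) == 1 else None   # None = ambiguous

conv = lambda a, p: (a, p) == ("real", "creal")
sigs = [("real", "creal"), ("creal", "real"), ("creal", "creal")]
print(pick(("real", "real"), sigs, conv))        # ('creal', 'creal')
print(pick(("real", "real"), sigs[:2], conv))    # None
```

The second call shows the residual problem: dropping #3 leaves #1 and #2 tied, so the ambiguity only moves rather than vanishes.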
 <genuinequestion>
 And if "none, it's still ambiguous", have we really solved the problem?
 </genuinequestion>

No, we haven't. And probably we never will. But I think we'd be making some progress toward solving the problem, maybe making it easier for others in the long run to get it more right than we have.

<humourous>
Hmm, how about ditching float, double and real, as well as the imaginary versions? Only going for the complex types - now that'd be a way to solve this problem! ;)
</humourous>
Mar 21 2006
parent reply Don Clugston <dac nospam.com.au> writes:
Rioshin an'Harthen wrote:
 "Don Clugston" <dac nospam.com.au> wrote in message 
 news:dvotc1$489$1 digitaldaemon.com...
 Rioshin an'Harthen wrote:
 I doubt they'd become that much more complicated. Currently, the DMD 
 compiler has to look up all the possible implicit conversions, and if 
 there's more than one possible conversion, error out because it can't 
 know which one.

 Now, since it knows the types in question - and the type of the value 
 being passed to the function, it's not that much more to do, IMHO. 
 Basically, if it would error with an ambiguous implicit conversion, do a 
 search of the minimum of the possible types that are larger than the 
 current type. If this doesn't match any type, do the error, otherwise 
 select the type. A simple search during compile time is all that's 
 required if we encounter more than one possible implicit conversion, to 
 select the one that is the smallest of the possible ones.

float->real (80 bits) on x86 CPUs.

Hmm... yes, a slight problem in my logic. Still fixable, though. Let's introduce a concept of "family" into this, with a family consisting of:

A: void
B: char, wchar, dchar
C: bool
D: byte, short, int, long, cent
E: ubyte, ushort, uint, ulong, ucent
F: float, double, real
G: ifloat, idouble, ireal
H: cfloat, cdouble, creal

etc. Now, allow implicit conversion upwards in a family, and between families only if impossible to convert inside the family.

A name I was using instead of "family" was "archetype". I.e., archetype!(char) = dchar, archetype!(ifloat) = ireal. That would mean that a uint would prefer to be converted to a ulong rather than to an int. That might cause problems. Maybe.
 This would fix this problem, but it would introduce a different level of 
 complexity into it. It might be worth it, or then again it might not. It's 
 for Walter to decide at a later point. I'd like to have implicit conversion 
 between real and complex numbers - there's a ton of occasions I've used it, 
 so I'm trying to voice some thoughts into the matter on how to preserve 
 those.

How have you been using implicit conversion? Are you talking about in functions, or in expressions?

real r;
creal c;
c += r;
c = 2.0;

I think this could be OK. That is, assignment of a real to a creal could still be possible, without an implicit conversion. After all, there are no complex literals, so

creal c = 2 + 3i;

should be the same as

c = 2;
c += 3i;
 (b) since the size of real is not fixed, the result of the function lookup 
 could depend on what CPU it's being compiled for!

True, real is not fixed in size. But according to the D specifications it is the "largest hardware implemented floating point size", and I take it to mean it can't be less in size than double. If a real and a double is the same size, there's no problem, and even less of one if real is larger.

Yes, the only issue is that, for example, real is bigger than cfloat on x86, but the same size on PowerPC. And on a machine with 128-bit reals, a real could be the same size as a cdouble.
 
 (c) What if the function has more than one argument?

 It might be better to just include a tie-break for the special cases of
 real types -> complex types, and imaginary -> complex. But I think case 
 (c) is a serious problem anyway.

 given
 func( real, creal ) // #1
 func( creal, real ) // #2
 func( creal, creal) // #3

 should func(7.0, 5.0)
 match #1, #2, or  #3 ?

Well, this is a problem, there's no doubt about it. As I take the example, the intention is that at least one of the arguments of the function has to be complex, and #1 and #2 are more like optimized versions of #3. This is still ambiguous. If we'd go by symmetry, having it match #3, then the question could be posed as: given

func( real, creal ) // #1
func( creal, real ) // #2

should func( 7.0, 5.0 ) match #1 or #2?

Still, I would go for the symmetrical - if any one parameter is implicitly converted, first try a version where as many as possible of the parameters are implicitly converted, unless a cast( ) has been used to explicitly mark a type. So I say (in this case) match #3 - I may be utterly wrong, but it's the feeling I have.

So one possibility would be to change the lookup rules to be:

* an exact match
* OR an unambiguous match with implicit conversions, not including real->creal, ireal->creal (and possibly not including inter-family conversions)
* OR an unambiguous match with implicit conversions, *including* real->creal, ireal->creal (possibly including other inter-family conversions, like char->short)
* OR it does not match.

which is a little more complicated than the existing D rules, but not by much.
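Those tiers can be sketched directly. A hypothetical model, assuming illustrative conversion tables (WIDEN preserves semantics; SEMANTIC covers the real/imaginary -> complex conversions):

```python
# Three-tier lookup: exact, then widening-only, then widening plus the
# semantic-changing real->creal tier; ambiguity within a tier is fatal.
WIDEN = {("float", "double"), ("float", "real"), ("double", "real")}
SEMANTIC = {("float", "creal"), ("double", "creal"),
            ("real", "creal"), ("ireal", "creal")}

def lookup(arg, overloads):
    tiers = [
        lambda p: p == arg,                         # exact match
        lambda p: (arg, p) in WIDEN,                # widening only
        lambda p: (arg, p) in WIDEN | SEMANTIC,     # incl. real->creal
    ]
    for tier in tiers:
        viable = [p for p in overloads if tier(p)]
        if len(viable) == 1:
            return viable[0]
        if viable:
            return None                             # ambiguous in this tier
    return None                                     # no match at all

print(lookup("double", ["real", "creal"]))   # real: tier 2 settles it
print(lookup("double", ["creal"]))           # creal: only via tier 3
```

The sin(real)/sin(creal) pair is no longer ambiguous, because tier 2 sees only the real overload; yet a lone creal overload still accepts a real argument through tier 3.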
 No, we haven't. And probably we never will. But I think we'd be making some 
 progress into solving the problem, maybe making it easier for others in the 
 long run to be able to get it more right than we have.

In practice, it might cover 95% of the use cases.
 <humourous>
 Hmm, how about ditching float, double and real, as well as the imaginary 
 versions? Only going for the complex types - now that'd be a way to solve 
 this problem! ;)
 </humourous>

Or we could stick to ints. Microsoft dropped 80-bit reals, why not continue the trend and abolish floating point entirely.
Mar 21 2006
parent "Rioshin an'Harthen" <rharth75 hotmail.com> writes:
"Don Clugston" <dac nospam.com.au> wrote in message 
news:dvp8ie$k5c$1 digitaldaemon.com...
 Rioshin an'Harthen wrote:
 "Don Clugston" <dac nospam.com.au> wrote in message 
 news:dvotc1$489$1 digitaldaemon.com...
 (a) your scheme would mean that float->cfloat (64 bits) is preferred 
 over float->real (80 bits) on x86 CPUs.

Hmm... yes, a slight problem in my logic. Still fixable, though. Let's introduce a concept of "family" into this, with a family consisting of:

A: void
B: char, wchar, dchar
C: bool
D: byte, short, int, long, cent
E: ubyte, ushort, uint, ulong, ucent
F: float, double, real
G: ifloat, idouble, ireal
H: cfloat, cdouble, creal

etc. Now, allow implicit conversion upwards in a family, and between families only if impossible to convert inside the family.

A name I was using instead of "family" was "archetype". Ie, archetype!(char) = dchar, archetype!(ifloat) = ireal. That would mean that a uint would prefer to be converted to a ulong than to an int. That might cause problems. Maybe.

Well, I don't see it as a problem. I'm a member of the faction of firm believers in always requiring explicit casts for conversions between signed and unsigned types.
 This would fix this problem, but it would introduce a different level of 
 complexity into it. It might be worth it, or then again it might not. 
 It's for Walter to decide at a later point. I'd like to have implicit 
 conversion between real and complex numbers - there's a ton of occasions 
 I've used it, so I'm trying to voice some thoughts into the matter on how 
 to preserve those.

How have you been using implicit conversion? Are you talking about in functions, or in expressions?

real r;
creal c;
c += r;
c = 2.0;

I think this could be OK. That is, assignment of a real to a creal could still be possible, without an implicit conversion. After all, there are no complex literals, so

creal c = 2 + 3i;

should be the same as

c = 2;
c += 3i;

I've been using implicit casts to complex numbers in many situations - most of the time in expressions, but quite often in function calls, as well. Thus, I've been thinking of ways to make the implicit casts work.
 (b) since the size of real is not fixed, the result of the function 
 lookup could depend on what CPU it's being compiled for!

True, real is not fixed in size. But according to the D specification it is the "largest hardware implemented floating point size", and I take that to mean it can't be smaller than double. If a real and a double are the same size, there's no problem, and even less of one if real is larger.

Yes, the only issue is that, for example, real is bigger than cfloat on x86, but the same size on PowerPC. And on a machine with 128-bit reals, a real could be the same size as a cdouble.

I think this problem would go away if we take the "family" or archetype of a type into account in the cast, since we'd prefer to cast to any larger type having the same archetype (i.e. being in the same family). Only if that is not possible would we cast to a type outside the family.
 (c) What if the function has more than one argument?

 It might be better to just include a tie-break for the special cases of
 real types -> complex types, and imaginary -> complex. But I think case 
 (c) is a serious problem anyway.

 given
 func( real, creal ) // #1
 func( creal, real ) // #2
 func( creal, creal) // #3

 should func(7.0, 5.0)
 match #1, #2, or  #3 ?

Well, this is a problem, there's no doubt about it. As I take the example, the intention is that at least one of the arguments of the function has to be complex, and #1 and #2 are more like optimized versions of #3. This is still ambiguous. If we'd go by symmetry, having it match #3, then the question could be posed as: given

func( real, creal ) // #1
func( creal, real ) // #2

should func( 7.0, 5.0 ) match #1 or #2?

Still, I would go for the symmetrical - if any one parameter is implicitly converted, first try a version where as many as possible of the parameters are implicitly converted, unless a cast( ) has been used to explicitly mark a type. So I say (in this case) match #3 - I may be utterly wrong, but it's the feeling I have.

So one possibility would be to change the lookup rules to be:

* an exact match
* OR an unambiguous match with implicit conversions, not including real->creal, ireal->creal (and possibly not including inter-family conversions)
* OR an unambiguous match with implicit conversions, *including* real->creal, ireal->creal (possibly including other inter-family conversions, like char->short)
* OR it does not match.

which is a little more complicated than the existing D rules, but not by much.

This is sounding like what I was thinking.
 No, we haven't. And probably we never will. But I think we'd be making 
 some progress into solving the problem, maybe making it easier for others 
 in the long run to be able to get it more right than we have.

In practice, it might cover 95% of the use cases.

True, and I think 95% is good enough for most. In the remaining 5%, where the implicit cast is not good enough (the compiler replies with an error message), it's simply time to use an explicit cast.
 <humourous>
 Hmm, how about ditching float, double and real, as well as the imaginary 
 versions? Only going for the complex types - now that'd be a way to solve 
 this problem! ;)
 </humourous>

Or we could stick to ints. Microsoft dropped 80-bit reals, why not continue the trend and abolish floating point entirely.

I would hope we'd be smarter than Microsoft... :)
Mar 21 2006