
digitalmars.D - Feature request: Path append operators for strings

reply "TommiT" <tommitissari hotmail.com> writes:
How would you feel about adding the '/' binary operator and the 
'/=' assignment operator for strings, wstrings and dstrings? The 
operators would behave the same way as they do with 
boost::filesystem::path objects:

http://www.boost.org/doc/libs/1_54_0/libs/filesystem/doc/reference.html#path-appends

In short (and omitting some details) code such as:

string s = "C:\\Users" / "John";

...would be the same as:

string s = "C:\\Users" ~ std.path.dirSeparator ~ "John";
Jul 02 2013
next sibling parent "Simen Kjaeraas" <simen.kjaras gmail.com> writes:
On 2013-07-02, 21:46, TommiT wrote:

 How would you feel about adding the '/' binary operator and the '/='  
 assignment operator for strings, wstrings and dstrings? The operators  
 would behave the same way as they do with boost::filesystem::path  
 objects:

 http://www.boost.org/doc/libs/1_54_0/libs/filesystem/doc/reference.html#path-appends

 In short (and omitting some details) code such as:

 string s = "C:\\Users" / "John";

 ...would be the same as:

 string s = "C:\\Users" ~ std.path.dirSeparator ~ "John";
This would be much better done with a library type:

    auto s = Path("C:\\Users") / "John";

-- 
Simen
Jul 02 2013
prev sibling next sibling parent reply "Jonathan M Davis" <jmdavisProg gmx.com> writes:
On Tuesday, July 02, 2013 21:46:26 TommiT wrote:
 How would you feel about adding the '/' binary operator and the
 '/=' assignment operator for strings, wstrings and dstrings? The
 operators would behave the same way as they do with
 boost::filesystem::path objects:
 
 http://www.boost.org/doc/libs/1_54_0/libs/filesystem/doc/reference.html#path
 -appends
 
 In short (and omitting some details) code such as:
 
 string s = "C:\\Users" / "John";
 
 ...would be the same as:
 
 string s = "C:\\Users" ~ std.path.dirSeparator ~ "John";
That's what std.path.buildPath is for. - Jonathan M Davis
Jul 02 2013
parent "TommiT" <tommitissari hotmail.com> writes:
On Tuesday, 2 July 2013 at 19:56:20 UTC, Jonathan M Davis wrote:
 On Tuesday, July 02, 2013 21:46:26 TommiT wrote:
 How would you feel about adding the '/' binary operator and the
 '/=' assignment operator for strings, wstrings and dstrings? 
 The
 operators would behave the same way as they do with
 boost::filesystem::path objects:
 
 http://www.boost.org/doc/libs/1_54_0/libs/filesystem/doc/reference.html#path
 -appends
 
 In short (and omitting some details) code such as:
 
 string s = "C:\\Users" / "John";
 
 ...would be the same as:
 
 string s = "C:\\Users" ~ std.path.dirSeparator ~ "John";
That's what std.path.buildPath is for. - Jonathan M Davis
Oh, I hadn't noticed that function. I'm not sure about its behaviour of dropping path components that come before a rooted path component, though. Throwing an Exception would seem like better behaviour in that situation.
Jul 02 2013
prev sibling parent reply "monarch_dodra" <monarchdodra gmail.com> writes:
On Tuesday, 2 July 2013 at 19:46:34 UTC, TommiT wrote:
 How would you feel about adding the '/' binary operator and the 
 '/=' assignment operator for strings, wstrings and dstrings? 
 The operators would behave the same way as they do with 
 boost::filesystem::path objects:
There is a *massive* difference here. boost::filesystem adds the overload for *path* objects. It doesn't add a global operator for any indiscriminate string.
Jul 02 2013
parent reply "TommiT" <tommitissari hotmail.com> writes:
On Tuesday, 2 July 2013 at 20:31:14 UTC, monarch_dodra wrote:
 On Tuesday, 2 July 2013 at 19:46:34 UTC, TommiT wrote:
 How would you feel about adding the '/' binary operator and 
 the '/=' assignment operator for strings, wstrings and 
 dstrings? The operators would behave the same way as they do 
 with boost::filesystem::path objects:
There is a *massive* difference here. boost::filesystem adds the overload for *path* objects. It doesn't add a global operator for any indiscriminate string.
As far as I can tell, Phobos already uses strings or const(char)[] to represent paths all over the place. So, I figured, we can't add a separate Path type at this point, because that train has passed. Although I don't know whether that design would have been better anyway. A division operator for strings doesn't make any sense, and I doubt there will ever be some other meaning for '/' that would make more sense than "a directory separator" for strings in the context of programming.
Jul 02 2013
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 7/2/2013 1:47 PM, TommiT wrote:
 Division operator for strings doesn't make any sense,
That's why overloading / to do something completely unrelated to division is antithetical to writing understandable code. The classic example of this is the overloading of << and >> for stream operations in C++.
Jul 02 2013
next sibling parent "Araq" <rumpf_a gmx.de> writes:
On Tuesday, 2 July 2013 at 21:48:54 UTC, Walter Bright wrote:
 On 7/2/2013 1:47 PM, TommiT wrote:
 Division operator for strings doesn't make any sense,
That's why overloading / to do something completely unrelated to division is antithetical to writing understandable code. The classic example of this is the overloading of << and >> for stream operations in C++.
Before C came along, '<<' meant "much less" ...
Jul 02 2013
prev sibling next sibling parent reply "TommiT" <tommitissari hotmail.com> writes:
On Tuesday, 2 July 2013 at 21:48:54 UTC, Walter Bright wrote:
 On 7/2/2013 1:47 PM, TommiT wrote:
 Division operator for strings doesn't make any sense,
That's why overloading / to do something completely unrelated to division is antithetical to writing understandable code. The classic example of this is the overloading of << and >> for stream operations in C++.
I've never thought of it like that. At some point I remember writing a vector type which overloaded its binary * operator to mean dot product (or cross product, I can't remember). So, you can overload an operator, but you can't overload the meaning of an operator.
Jul 02 2013
next sibling parent reply "TommiT" <tommitissari hotmail.com> writes:
On Tuesday, 2 July 2013 at 22:28:24 UTC, TommiT wrote:
 I've never thought of it like that. [..]
Boost Filesystem overloads the meaning of / to mean "append to path". Boost Exception overloads << to mean "add this info to this exception". Boost Serialization overloads << and >> to mean serialize and deserialize, and & to mean either one of those. So no wonder I was under the impression that we're allowed to overload the meaning of operators.
Jul 02 2013
next sibling parent "Jonathan M Davis" <jmdavisProg gmx.com> writes:
On Wednesday, July 03, 2013 00:55:59 TommiT wrote:
 So no wonder I was under the impression that we're allowed to
 overload the meaning of operators.
Well, of course, you _can_ overload them to do different stuff. It's trivial to make most overloaded operators do something completely different from what they do normally. The argument against it is that doing so is bad practice, because it makes your code hard to understand. And for some operators (e.g. opCmp and opEquals), D actually implements the overloaded operator in a way that giving it an alternate meaning doesn't work. You _could_ do it with / though. It's just arguably bad practice to do so. But since you can get the same functionality out of a normal function without the confusion, it really doesn't make sense in general to overload operators to do something fundamentally different with a user-defined type than what they do with the built-in types. However, some people are really hung up on making everything terse or making it look like math or whatnot and insist on abusing operators by overloading them with completely different meanings. - Jonathan M Davis
Jul 02 2013
prev sibling parent "John Colvin" <john.loughran.colvin gmail.com> writes:
On Tuesday, 2 July 2013 at 22:56:00 UTC, TommiT wrote:
 On Tuesday, 2 July 2013 at 22:28:24 UTC, TommiT wrote:
 I've never thought of it like that. [..]
Boost Filesystem overloads the meaning of / to mean "append to path". Boost Exception overloads << to mean "add this info to this exception". Boost Serialization overloads << and >> to mean serialize and deserialize, and & to mean either one of those. So no wonder I was under the impression that we're allowed to overload the meaning of operators.
Such overloads make for code that's fast to write but hard to read, especially to outsiders. It's a tempting direction, but not a good one.
Jul 02 2013
prev sibling parent reply "Wyatt" <wyatt.epp gmail.com> writes:
On Tuesday, 2 July 2013 at 22:28:24 UTC, TommiT wrote:
 On Tuesday, 2 July 2013 at 21:48:54 UTC, Walter Bright wrote:
 On 7/2/2013 1:47 PM, TommiT wrote:
 Division operator for strings doesn't make any sense,
That's why overloading / to do something completely unrelated to division is antithetical to writing understandable code. The classic example of this is the overloading of << and >> for stream operations in C++.
I've never thought of it like that. At some point I remember writing a vector type which overloaded its binary * operator to mean dot product (or cross product, I can't remember). So, you can overload an operator, but you can't overload the meaning of an operator.
This is something I was discussing with a friend recently, and we agreed it would be cool if there were a set of operators with no definition until overloaded, so you could use e.g. (.) for dot product, (*) for cross product, (+) (or maybe [+]?) for matrix add, etc. instead of overloading things that already have specific, well-understood meanings. -Wyatt
Jul 03 2013
next sibling parent reply "TommiT" <tommitissari hotmail.com> writes:
On Wednesday, 3 July 2013 at 12:24:33 UTC, Wyatt wrote:
 On Tuesday, 2 July 2013 at 22:28:24 UTC, TommiT wrote:
 On Tuesday, 2 July 2013 at 21:48:54 UTC, Walter Bright wrote:
 On 7/2/2013 1:47 PM, TommiT wrote:
 Division operator for strings doesn't make any sense,
That's why overloading / to do something completely unrelated to division is antithetical to writing understandable code. The classic example of this is the overloading of << and >> for stream operations in C++.
I've never thought of it like that. At some point I remember writing a vector type which overloaded its binary * operator to mean dot product (or cross product, I can't remember). So, you can overload an operator, but you can't overload the meaning of an operator.
This is something I was discussing with a friend recently, and we agreed it would be cool if there were a set of operators with no definition until overloaded, so you could use e.g. (.) for dot product, (*) for cross product, (+) (or maybe [+]?) for matrix add, etc. instead of overloading things that already have specific, well-understood meanings. -Wyatt
I don't see why we couldn't add the actual Unicode ∙ and × characters to the language, make them operators, and give them the fixed meanings of dot product and cross product respectively. Wouldn't + be the correct operator to use for matrix addition? What happens when matrices are added is quite different from when real values are added, but the meaning of + is still addition for both of them.
Jul 03 2013
next sibling parent "Wyatt" <wyatt.epp gmail.com> writes:
On Wednesday, 3 July 2013 at 12:45:53 UTC, TommiT wrote:
 I don't see why we couldn't add the actual unicode ∙ and × 
 characters to the language, make them operators and give them 
 the fixed meaning of dot product and cross product respectively.

 Wouldn't + be the correct operator to use for matrix addition? 
 What happens when matrices are added is quite different from 
 when real values are added, but the meaning of + is still 
 addition for both of them.
That's also a possibility, I suppose, but the real thrust is it would allow you to have very clear (as in, visually offset by some sort of brace, in this example) operators that handle whatever weird transform you want for any convoluted data structure you care to define one for. That they can be entered with a standard 104-key keyboard without groping about for however people prefer to enter unicode characters is just icing. -Wyatt
Jul 03 2013
prev sibling next sibling parent reply "monarch_dodra" <monarchdodra gmail.com> writes:
On Wednesday, 3 July 2013 at 12:45:53 UTC, TommiT wrote:
 On Wednesday, 3 July 2013 at 12:24:33 UTC, Wyatt wrote:
 On Tuesday, 2 July 2013 at 22:28:24 UTC, TommiT wrote:
 On Tuesday, 2 July 2013 at 21:48:54 UTC, Walter Bright wrote:
 On 7/2/2013 1:47 PM, TommiT wrote:
 Division operator for strings doesn't make any sense,
That's why overloading / to do something completely unrelated to division is antithetical to writing understandable code. The classic example of this is the overloading of << and >> for stream operations in C++.
I've never thought of it like that. At some point I remember writing a vector type which overloaded its binary * operator to mean dot product (or cross product, I can't remember). So, you can overload an operator, but you can't overload the meaning of an operator.
This is something I was discussing with a friend recently, and we agreed it would be cool if there were a set of operators with no definition until overloaded, so you could use e.g. (.) for dot product, (*) for cross product, (+) (or maybe [+]?) for matrix add, etc. instead of overloading things that already have specific, well-understood meanings. -Wyatt
I don't see why we couldn't add the actual Unicode ∙ and × characters to the language, make them operators, and give them the fixed meanings of dot product and cross product respectively. Wouldn't + be the correct operator to use for matrix addition? What happens when matrices are added is quite different from when real values are added, but the meaning of + is still addition for both of them.
Technically, + is already 1D matrix addition (or should I say +=). You can toy around to make it work for N-dimensional matrices:

--------
import std.stdio;

void main()
{
    int[4][4] a = 1;
    int[4][4] b = 2;

    (*a.ptr).ptr[0 .. 16] += (*b.ptr).ptr[0 .. 16];

    writeln(a);
}
--------

Yeah... not optimal :/ This also discards static type information.
Jul 03 2013
parent reply "TommiT" <tommitissari hotmail.com> writes:
On Wednesday, 3 July 2013 at 13:24:41 UTC, monarch_dodra wrote:
 Technically, + is already 1D matrix addition [..]
Not 1D matrix, but rather, 1x1 matrix.
Jul 03 2013
next sibling parent "TommiT" <tommitissari hotmail.com> writes:
On Wednesday, 3 July 2013 at 14:03:20 UTC, TommiT wrote:
 On Wednesday, 3 July 2013 at 13:24:41 UTC, monarch_dodra wrote:
 Technically, + is already 1D matrix addition [..]
Not 1D matrix, but rather, 1x1 matrix.
Sorry, didn't realize you were talking about:

    sum[] = values[] + values[];
Jul 03 2013
prev sibling parent reply "John Colvin" <john.loughran.colvin gmail.com> writes:
On Wednesday, 3 July 2013 at 14:03:20 UTC, TommiT wrote:
 On Wednesday, 3 July 2013 at 13:24:41 UTC, monarch_dodra wrote:
 Technically, + is already 1D matrix addition [..]
Not 1D matrix, but rather, 1x1 matrix.
in conjunction with [] you have 1D addition, e.g.

    int[10] a = 1;
    int[10] b = 2;
    int[10] c;
    c[] = a[] + b[];  // array ops need a slice as the destination
    foreach(el_c; c) assert(el_c == 3);
Jul 03 2013
parent reply "w0rp" <devw0rp gmail.com> writes:
I am strongly against this kind of thing. Operator overloading is 
a very useful tool for providing obvious semantics to types. User 
defined data structures, like a matrix type, can be treated like 
first class citizens, just like built in primitive types, by 
having overloads for relevant operators.

Using an operator to implement something non-obvious is a crime 
to me. Plus, it's usually wrong, because like C++ streams, you'd 
have to have each binary relation take a reference to something 
(like an ostream) and return the reference again so you can chain 
the operators. Why chain several binary function calls together 
when you can have a single n-ary function call like 
std.path.buildPath?

Also to shamelessly self-plug, I made a garbage collected matrix 
type with a few overloads for fun recently. Maybe somebody will 
find a use for it. 
https://github.com/w0rp/dmatrix/blob/master/matrix.d
Jul 03 2013
parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Wed, Jul 03, 2013 at 11:48:21PM +0200, w0rp wrote:
 I am strongly against this kind of thing. Operator overloading is a
 very useful tool for providing obvious semantics to types. User
 defined data structures, like a matrix type, can be treated like
 first class citizens, just like built in primitive types, by having
 overloads for relevant operators.
Operator overloading was initially intended to support user-defined arithmetic types in a transparent way. However, the ability to attach Turing-complete semantics to an operator (for the purpose of implementing arithmetic) led to the temptation to assign *arbitrary* semantics to it. Which leads to (IMO) poor choices like operator<< and operator>> in C++'s iostream.
 Using an operator to implement something non-obvious is a crime to
 me. Plus, it's usually wrong, because like C++ streams, you'd have
 to have each binary relation take a reference to something (like an
 ostream) and return the reference again so you can chain the
 operators. Why chain several binary function calls together when you
 can have a single n-ary function call like std.path.buildPath?
+1. Especially given how nicely D has solved the problem of type-safe variadics.
 Also to shamelessly self-plug, I made a garbage collected matrix
 type with a few overloads for fun recently. Maybe somebody will find
 a use for it. https://github.com/w0rp/dmatrix/blob/master/matrix.d
I think this is a clear sign that we need *some* kind of linear algebra / multidimensional array support in Phobos. Denis Shelomovskij and myself have independently implemented generic n-dimensional array libraries, of which 2D arrays are a special case. Your matrix implementation is the 3rd instance that I know of. There are probably more. Though, matrix types and 2D arrays probably will want to overload opBinary!"*" differently (matrix multiplication vs. per-element multiplication, for example). In theory, though, a matrix type can just wrap around a 2D array and just override / provide its own opBinary!"*". T -- Talk is cheap. Whining is actually free. -- Lars Wirzenius
Jul 03 2013
prev sibling parent "Martin Primer" <megaboo zoo.com> writes:
On Wednesday, 3 July 2013 at 12:45:53 UTC, TommiT wrote:
 On Wednesday, 3 July 2013 at 12:24:33 UTC, Wyatt wrote:
 On Tuesday, 2 July 2013 at 22:28:24 UTC, TommiT wrote:
 On Tuesday, 2 July 2013 at 21:48:54 UTC, Walter Bright wrote:
 On 7/2/2013 1:47 PM, TommiT wrote:
 Division operator for strings doesn't make any sense,
That's why overloading / to do something completely unrelated to division is antithetical to writing understandable code. The classic example of this is the overloading of << and >> for stream operations in C++.
I've never thought of it like that. At some point I remember writing a vector type which overloaded its binary * operator to mean dot product (or cross product, I can't remember). So, you can overload an operator, but you can't overload the meaning of an operator.
This is something I was discussing with a friend recently, and we agreed it would be cool if there were a set of operators with no definition until overloaded, so you could use e.g. (.) for dot product, (*) for cross product, (+) (or maybe [+]?) for matrix add, etc. instead of overloading things that already have specific, well-understood meanings. -Wyatt
I don't see why we couldn't add the actual Unicode ∙ and × characters to the language, make them operators, and give them the fixed meanings of dot product and cross product respectively. Wouldn't + be the correct operator to use for matrix addition? What happens when matrices are added is quite different from when real values are added, but the meaning of + is still addition for both of them.
Time to bring back those old APL keyboards - we can have a lot more symbols then ;-) MP
Jul 03 2013
prev sibling parent "Wyatt" <wyatt.epp gmail.com> writes:
On Wednesday, 3 July 2013 at 12:24:33 UTC, Wyatt wrote:
 This is something I was discussing with a friend recently, and 
 we agreed it would be cool if there were a set of operators with 
 no definition until overloaded, so you could use e.g. (.) for 
 dot product, (*) for cross product, (+) (or maybe [+]?) for 
 matrix add, etc. instead of overloading things that already 
 have specific, well-understood meaning.
I'd like to clarify this a little with a concrete example I hit late yesterday. I have a sparse tree-like recursive struct with an array of children and a single leaf value. I thought it was fairly simple, but I quickly found the range of common operations I want to support exceeds the limits of orthogonal operations. Like my opOpAssign!("~") adds the children of the RHS as children of the LHS, while the opIndexAssign assigns a leaf value to a child of the LHS, and the opIndexOpAssign!("~") makes the entire RHS tree a child of the LHS. And I'm sure I'm not "done"; but I'm also VERY reluctant to go any further because it's getting ugly fast. (I think I may be able to _somewhat_ work around this with multiple overloads for different types. I haven't tried it, but I think that works?)

Having some way of differentiating the different semantic concepts (i.e. operating on trees vs. operating on leaf values) would be hugely useful for my ability to reason about the code easily. Not just that, having a way of offsetting them _visually_ would be useful for me to keep track of them and know, at a glance, that I'm doing something different; something that's not QUITE like e.g. a concatenation. (As I think I mentioned, I see this as a major factor in favour of some kind of bracing, if not parentheses.)

IMO, it's the sort of thing where almost any non-trivial data structure you manipulate frequently could stand to benefit. Unfortunately, conversely, I _also_ completely understand that adding more features to the language at this point is a fairly tall order. Worse, I think this would require some compiler/spec changes. Or maybe there's a third path I'm not seeing -- I don't know.

All that said, does anyone aside from myself and a few others have strong opinions on this?

-Wyatt
Jul 08 2013
prev sibling parent reply "monarch_dodra" <monarchdodra gmail.com> writes:
On Tuesday, 2 July 2013 at 21:48:54 UTC, Walter Bright wrote:
 On 7/2/2013 1:47 PM, TommiT wrote:
 Division operator for strings doesn't make any sense,
That's why overloading / to do something completely unrelated to division is antithetical to writing understandable code.
s/division/"The common agreed upon semantic"/
 The classic example of this is the overloading of << and >> for 
 stream operations in C++.
Or overloading ~ to mean "concat" ?
Jul 02 2013
next sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 7/2/2013 4:28 PM, monarch_dodra wrote:
 The classic example of this is the overloading of << and >> for stream
 operations in C++.
Or overloading ~ to mean "concat" ?
Binary ~ has no other meaning, so it is not "overloading" it to mean something else.
Jul 02 2013
prev sibling parent reply "TommiT" <tommitissari hotmail.com> writes:
On Tuesday, 2 July 2013 at 23:28:41 UTC, monarch_dodra wrote:
 On Tuesday, 2 July 2013 at 21:48:54 UTC, Walter Bright wrote:
 On 7/2/2013 1:47 PM, TommiT wrote:
 Division operator for strings doesn't make any sense,
That's why overloading / to do something completely unrelated to division is antithetical to writing understandable code.
s/division/"The common agreed upon semantic"/
 The classic example of this is the overloading of << and >> 
 for stream operations in C++.
Or overloading ~ to mean "concat" ?
It's rather C++'s std::string which overloads the meaning of + to mean "concatenation". I wonder if some other programming language has assigned some other symbol (than ~) to mean "concatenation". I guess math uses || for it.
Jul 05 2013
next sibling parent reply Paulo Pinto <pjmlp progtools.org> writes:
Am 05.07.2013 16:59, schrieb TommiT:
 On Tuesday, 2 July 2013 at 23:28:41 UTC, monarch_dodra wrote:
 On Tuesday, 2 July 2013 at 21:48:54 UTC, Walter Bright wrote:
 On 7/2/2013 1:47 PM, TommiT wrote:
 Division operator for strings doesn't make any sense,
That's why overloading / to do something completely unrelated to division is antithetical to writing understandable code.
s/division/"The common agreed upon semantic"/
 The classic example of this is the overloading of << and >> for
 stream operations in C++.
Or overloading ~ to mean "concat" ?
It's rather C++'s std::string which overloads the meaning of + to mean "concatenation". I wonder if some other programming language has assigned some other symbol (than ~) to mean "concatenation". I guess math uses || for it.
Visual Basic uses &
Perl and PHP use .
OCaml uses ^

Just off the top of my head; surely there are other examples.

-- 
Paulo
Jul 05 2013
next sibling parent "TommiT" <tommitissari hotmail.com> writes:
On Friday, 5 July 2013 at 15:04:44 UTC, Paulo Pinto wrote:
 Am 05.07.2013 16:59, schrieb TommiT:
 On Tuesday, 2 July 2013 at 23:28:41 UTC, monarch_dodra wrote:
 On Tuesday, 2 July 2013 at 21:48:54 UTC, Walter Bright wrote:
 On 7/2/2013 1:47 PM, TommiT wrote:
 Division operator for strings doesn't make any sense,
That's why overloading / to do something completely unrelated to division is antithetical to writing understandable code.
s/division/"The common agreed upon semantic"/
 The classic example of this is the overloading of << and >> 
 for
 stream operations in C++.
Or overloading ~ to mean "concat" ?
It's rather C++'s std::string which overloads the meaning of + to mean "concatenation". I wonder if some other programming language has assigned some other symbol (than ~) to mean "concatenation". I guess math uses || for it.
 Visual Basic uses &
 Perl and PHP use .
 OCaml uses ^

 Just off the top of my head; surely there are other examples.

 -- 
 Paulo
So it's a mess, basically.
Jul 05 2013
prev sibling parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Fri, Jul 05, 2013 at 05:04:46PM +0200, Paulo Pinto wrote:
 Am 05.07.2013 16:59, schrieb TommiT:
On Tuesday, 2 July 2013 at 23:28:41 UTC, monarch_dodra wrote:
On Tuesday, 2 July 2013 at 21:48:54 UTC, Walter Bright wrote:
On 7/2/2013 1:47 PM, TommiT wrote:
Division operator for strings doesn't make any sense,
That's why overloading / to do something completely unrelated to division is antithetical to writing understandable code.
s/division/"The common agreed upon semantic"/
The classic example of this is the overloading of << and >> for
stream operations in C++.
Or overloading ~ to mean "concat" ?
It's rather C++'s std::string which overloads the meaning of + to mean "concatenation". I wonder if some other programming language has assigned some other symbol (than ~) to mean "concatenation". I guess math uses || for it.
Visual Basic uses &
Perl and PHP use .
OCaml uses ^

Just off the top of my head; surely there are other examples.
[...] Python uses +. Arguably, C uses blank (two string literals side-by-side are automatically concatenated), but that's a hack, and an incomplete one at that. :-P T -- Lawyer: (n.) An innocence-vending machine, the effectiveness of which depends on how much money is inserted.
Jul 05 2013
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 7/5/2013 9:17 AM, H. S. Teoh wrote:
 Python uses +.
There's much historical precedent for + meaning concatenation, and much historical experience with the resulting ambiguity. The famous example is:

    "123" + 4

In D, the canonical problem is:

    int[] array;

    array + 4

Does that mean append 4 to array, or add 4 to each element of array? What if you want to create a user defined type that supports both addition and concatenation?

Use + for addition, ~ for concatenation, and all these problems go away.
Jul 05 2013
parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Fri, Jul 05, 2013 at 10:44:43AM -0700, Walter Bright wrote:
 On 7/5/2013 9:17 AM, H. S. Teoh wrote:
Python uses +.
There's much historical precedent for + meaning concatenation, and much historical experience with the resulting ambiguity.
Which leads to some nasty situations in Javascript, where sometimes what you think is an int, is actually a string, and you wonder why your calculation is producing strange results. Or vice versa, when you're trying to concatenate strings, and get a strange large number instead.
 The famous example is:
 
     "123" + 4
 
 ? In D, the canonical problem is:
 
     int[] array;
 
     array + 4
 
 Does that mean append 4 to array, or add 4 to each element of array?
 What if you want to create a user defined type that supports both
 addition and concatenation?
 
 Use + for addition, ~ for concatenation, and all these problems go
 away.
It doesn't necessarily have to be ~, as long as it's something other than + (or any other numerical binary operator). Perl uses '.', but in D's case, that would be a bad idea, since you'd have ambiguity in:

    auto x = mod1.x . mod2.y; // concatenation or long module path name?

It's not a problem in Perl, because Perl uses :: as module separator, like C++.

Your example is somewhat faulty, though; adding 4 to each element of the array would have to be written "array[] + 4", wouldn't it? You can't make the [] optional, because if you have an array of arrays, then you're in trouble:

    int[][] aoa;
    aoa ~ [1]; // append to outer array, or each inner array?

While it's possible to use type-matching to decide, it seems like a bug waiting to happen. Much better if array <op> x always means apply <op> to the entire array, and array[] <op> x to mean apply <op> to each array element.

T

-- 
People tell me I'm stubborn, but I refuse to accept it!
Jul 05 2013
parent "Wyatt" <wyatt.epp gmail.com> writes:
On Friday, 5 July 2013 at 18:18:14 UTC, H. S. Teoh wrote:
 It doesn't necessarily have to be ~, as long as it's something 
 other
 than + (or any other numerical binary operator). Perl uses '.', 
 but in
 D's case, that would be a bad idea, since you'd have ambiguity 
 in:
Perl is my day job and I've come to strongly dislike the period for concatenation. IMO, that the tilde is nice and visible is a strong UX argument in its favour. Periods get used at the end of every sentence. Full stop. :P -Wyatt
Jul 05 2013
prev sibling next sibling parent "Peter Alexander" <peter.alexander.au gmail.com> writes:
On Friday, 5 July 2013 at 14:59:39 UTC, TommiT wrote:
 It's rather C++'s std::string which overloads the meaning of + 
 to mean "concatenation". I wonder if some other programming 
 language has assigned some other symbol (than ~) to mean 
 "concatenation". I guess math uses || for it.
|| is used for a different kind of concatenation [1]

For strings, math uses the middle dot, (a⋅b), or even just (ab), like with multiplication [2]

[1]: https://en.wikipedia.org/wiki/Concatenation_(mathematics)
[2]: https://en.wikipedia.org/wiki/String_operations#Strings_and_languages
Jul 05 2013
prev sibling parent reply Jonathan M Davis <jmdavisProg gmx.com> writes:
On Friday, July 05, 2013 16:59:38 TommiT wrote:
 On Tuesday, 2 July 2013 at 23:28:41 UTC, monarch_dodra wrote:
 On Tuesday, 2 July 2013 at 21:48:54 UTC, Walter Bright wrote:
 On 7/2/2013 1:47 PM, TommiT wrote:
 Division operator for strings doesn't make any sense,
That's why overloading / to do something completely unrelated to division is antithetical to writing understandable code.
s/division/"The common agreed upon semantic"/
 The classic example of this is the overloading of << and >>
 for stream operations in C++.
Or overloading ~ to mean "concat" ?
It's rather C++'s std::string which overloads the meaning of + to mean "concatenation". I wonder if some other programming language has assigned some other symbol (than ~) to mean "concatenation". I guess math uses || for it.
Most languages I've used use + for concatenating strings, so it was definitely surprising to me that D didn't. I have no problem with the fact that it has a specific operator for concatenation (and there are some good reasons for it), but + seems to be pretty standard across languages from what I've seen. I've certainly never seen another language use ~ for that purpose, but at the moment, I can't remember whether I've ever seen another programming language use ~ for _any_ purpose. - Jonathan M Davis
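One of those good reasons can be sketched in a few lines: because D keeps + strictly arithmetic, addition and concatenation can never be confused (a minimal illustration, not an argument from the spec):

```d
import std.stdio;
import std.conv : to;

void main()
{
    writeln(1 + 2);              // 3: + is always arithmetic
    writeln("1" ~ "2");          // 12: ~ is always concatenation
    // "1" + "2" does not compile in D, so the JavaScript-style
    // surprise where 1 + "2" silently yields "12" cannot happen;
    // mixing the two requires an explicit conversion:
    writeln(1.to!string ~ "2");  // 12
}
```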
Jul 05 2013
parent reply "Namespace" <rswhite4 googlemail.com> writes:
 Most languages I've used use + for concatenating strings, so it 
 was definitely
 surprising to me that D didn't. I have no problem with the fact 
 that it has a
 specific operator for concatenation (and there are some good 
 reasons for it),
 but + seems to be pretty standard across languages from what 
 I've seen. I've
 certainly never seen another language use ~ for that purpose, 
 but at the
 moment, I can't remember whether I've ever seen another 
 programming language
 use ~ for _any_ purpose.

 - Jonathan M Davis
logical not in e.g. Java?
Jul 05 2013
parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 07/05/2013 09:43 PM, Namespace wrote:
 Most languages I've used use + for concatenating strings, so it was
 definitely
 surprising to me that D didn't. I have no problem with the fact that
 it has a
 specific operator for concatenation (and there are some good reasons
 for it),
 but + seems to be pretty standard across languages from what I've
 seen. I've
 certainly never seen another language use ~ for that purpose, but at the
 moment, I can't remember whether I've ever seen another programming
 language
 use ~ for _any_ purpose.

 - Jonathan M Davis
logical not in e.g. Java?
Unary ~ is bitwise not in Java and D, and he is referring to binary usage.
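For reference, the two forms side by side in D (Java has only the unary one):

```d
import std.stdio;

void main()
{
    writeln(~0u);            // 4294967295: unary ~ is bitwise complement
    writeln("foo" ~ "bar");  // foobar: binary ~ is concatenation
    writeln([1, 2] ~ [3]);   // [1, 2, 3]: arrays concatenate too
}
```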
Jul 05 2013
parent reply "Namespace" <rswhite4 googlemail.com> writes:
 Unary ~ is bitwise not in Java and D, and he is referring to 
 binary usage.
 [...] use ~ for _any_ purpose.
I'd expect that *any* really means *any* and does not refer only to the binary usage.
Jul 05 2013
next sibling parent Jonathan M Davis <jmdavisProg gmx.com> writes:
On Friday, July 05, 2013 22:09:53 Namespace wrote:
 Unary ~ is bitwise not in Java and D, and he is referring to
 binary usage.
 
 [...] use ~ for _any_ purpose.
I'd expected that *any* really means *any* and do not refer to binary.
I did mean any, not just binary. I thought that there might be a case of it being used as a unary operator somewhere in at least one language I'd used, but I couldn't think of any when I posted (probably due to a combination of just having gotten up and the fact that I use bitwise operations very rarely). - Jonathan M Davis
Jul 05 2013
prev sibling parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 07/05/2013 10:09 PM, Namespace wrote:
 Unary ~ is bitwise not in Java and D, and he is referring to binary
 usage.
 [...] use ~ for _any_ purpose.
I'd expected that *any* really means *any* and do not refer to binary.
Yes. Neither do 'use', 'for' and 'purpose'. Establishing that it is likely that ~ is referring to binary requires some more context (eg. it is likely that the two usages of ~ in his post refer to the same thing), common sense or the assumption that Jonathan probably knows about the unary usage. (Parsing natural language is quite hard though, so I could be wrong.)
Jul 05 2013
next sibling parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 07/05/2013 10:34 PM, Timon Gehr wrote:
 On 07/05/2013 10:09 PM, Namespace wrote:
 Unary ~ is bitwise not in Java and D, and he is referring to binary
 usage.
 [...] use ~ for _any_ purpose.
I'd expected that *any* really means *any* and do not refer to binary.
Yes. Neither do 'use', 'for' and 'purpose'. Establishing that it is likely that ~ is referring to binary requires some more context (eg. it is likely that the two usages of ~ in his post refer to the same thing), common sense or the assumption that Jonathan probably knows about the unary usage. (Parsing natural language is quite hard though, so I could be wrong.)
Turns out I was wrong. :o)
Jul 05 2013
parent reply Jonathan M Davis <jmdavisProg gmx.com> writes:
On Friday, July 05, 2013 22:34:57 Timon Gehr wrote:
 On 07/05/2013 10:34 PM, Timon Gehr wrote:
 On 07/05/2013 10:09 PM, Namespace wrote:
 Unary ~ is bitwise not in Java and D, and he is referring to binary
 usage.
 
 [...] use ~ for _any_ purpose.
I'd expected that *any* really means *any* and do not refer to binary.
Yes. Neither do 'use', 'for' and 'purpose'. Establishing that it is likely that ~ is referring to binary requires some more context (eg. it is likely that the two usages of ~ in his post refer to the same thing), common sense or the assumption that Jonathan probably knows about the unary usage. (Parsing natural language is quite hard though, so I could be wrong.)
Turns out I was wrong. :o)
Yeah, well. It doesn't hurt my feelings any if you're erring on the side of thinking that I know what I'm talking about. :) And I'm certain that I've seen the unary usage of ~ before. I just couldn't think of it when I posted today. I really need more sleep... - Jonathan M Davis
Jul 05 2013
parent Walter Bright <newshound2 digitalmars.com> writes:
On 7/5/2013 1:39 PM, Jonathan M Davis wrote:
 And I'm certain that I've seen the unary usage of ~ before. I just couldn't
 think of it when I posted today. I really need more sleep...
Or more coffee!
Jul 06 2013
prev sibling parent reply "Namespace" <rswhite4 googlemail.com> writes:
On Friday, 5 July 2013 at 20:34:26 UTC, Timon Gehr wrote:
 On 07/05/2013 10:09 PM, Namespace wrote:
 Unary ~ is bitwise not in Java and D, and he is referring to 
 binary
 usage.
 [...] use ~ for _any_ purpose.
I'd expected that *any* really means *any* and do not refer to binary.
Yes. Neither do 'use', 'for' and 'purpose'. Establishing that it is likely that ~ is referring to binary requires some more context (eg. it is likely that the two usages of ~ in his post refer to the same thing), common sense or the assumption that Jonathan probably knows about the unary usage. (Parsing natural language is quite hard though, so I could be wrong.)
Spoken like a true human Compiler. :)
Jul 05 2013
next sibling parent reply Jonathan M Davis <jmdavisProg gmx.com> writes:
On Friday, July 05, 2013 22:46:59 Namespace wrote:
 On Friday, 5 July 2013 at 20:34:26 UTC, Timon Gehr wrote:
 On 07/05/2013 10:09 PM, Namespace wrote:
 Unary ~ is bitwise not in Java and D, and he is referring to
 binary
 usage.
 
 [...] use ~ for _any_ purpose.
I'd expected that *any* really means *any* and do not refer to binary.
Yes. Neither do 'use', 'for' and 'purpose'. Establishing that it is likely that ~ is referring to binary requires some more context (eg. it is likely that the two usages of ~ in his post refer to the same thing), common sense or the assumption that Jonathan probably knows about the unary usage. (Parsing natural language is quite hard though, so I could be wrong.)
Spoken like a true human Compiler. :)
LOL. Natural language is even more ambiguous than HTML, and we know how bad that can get. Every person is emitting and receiving slightly different versions of whatever natural language they're communicating in, and it's that much worse when it's pure text without body language. And that's with a _human_ deciphering it. It's a miracle that computers ever get much of anywhere with it. - Jonathan M Davis
Jul 05 2013
parent "monarch_dodra" <monarchdodra gmail.com> writes:
On Friday, 5 July 2013 at 22:30:20 UTC, Jonathan M Davis wrote:
 LOL. Natural language is even more ambiguous than HTML, and we 
 know how bad
 that can get. Every person is emitting and receiving slightly 
 different
 versions of whatever natural language they're communicated in, 
 and it's that
 much worse when it's pure text without body language. And 
 that's with a
 _human_ deciphering it. It's a miracle that computers ever get 
 much of
 anywhere with it.

 - Jonathan M Davis
Computers nothing. Humans have problems getting anywhere with it...
Jul 06 2013
prev sibling parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Fri, Jul 05, 2013 at 03:30:07PM -0700, Jonathan M Davis wrote:
 On Friday, July 05, 2013 22:46:59 Namespace wrote:
 On Friday, 5 July 2013 at 20:34:26 UTC, Timon Gehr wrote:
 On 07/05/2013 10:09 PM, Namespace wrote:
 Unary ~ is bitwise not in Java and D, and he is referring to
 binary usage.
 
 [...] use ~ for _any_ purpose.
I'd expected that *any* really means *any* and do not refer to binary.
Yes. Neither do 'use', 'for' and 'purpose'. Establishing that it is likely that ~ is referring to binary requires some more context (eg. it is likely that the two usages of ~ in his post refer to the same thing), common sense or the assumption that Jonathan probably knows about the unary usage. (Parsing natural language is quite hard though, so I could be wrong.)
Spoken like a true human Compiler. :)
LOL. Natural language is even more ambiguous than HTML, and we know how bad that can get. Every person is emitting and receiving slightly different versions of whatever natural language they're communicated in, and it's that much worse when it's pure text without body language. And that's with a _human_ deciphering it. It's a miracle that computers ever get much of anywhere with it.
[...] Yeah, no kidding. Automated translation, which requires computer parsing of natural languages, is egregiously bad, mainly because it's so hard! Not only does every individual have a slightly different version of the language, but often a lot of information is inferred from context and cultural background, and context-sensitive parsing is a hard problem, and cultural background is nigh impossible to teach a machine. For example, consider the sentence "he's such an office Romeo!". It's relatively easy to parse -- no convoluted nested subordinate clauses or anything tricky like that. But it's extremely difficult for a machine to *interpret*, because to fully understand what "office Romeo" refers to, requires a cultural background of Shakespeare, the fact that he wrote a play in which there was a character named Romeo, what the role of that character is, what that implies about his personality, how that implication about his personality translates into an office context, and what it might mean when applied to someone other than said character. How to even remotely model such a thought process in a machine is an extremely hard problem indeed! HTML is the role model of unambiguity by comparison! T -- Arise, you prisoners of Windows Arise, you slaves of Redmond, Wash, The day and hour soon are coming When all the IT folks say "Gosh!" It isn't from a clever lawsuit That Windowsland will finally fall, But thousands writing open source code Like mice who nibble through a wall. -- The Linux-nationale by Greg Baker
Jul 05 2013
next sibling parent "John Colvin" <john.loughran.colvin gmail.com> writes:
On Friday, 5 July 2013 at 22:49:40 UTC, H. S. Teoh wrote:
 How to even remotely model such a thought process in a machine 
 is an
 extremely hard problem indeed!
I would posit (being a machine learning guy myself to some extent, although not natural language) that it's only an interesting problem up to a point. We have humans for understanding humans! The really interesting thing is when the computer can do something that is actually impossible for humans. The counterargument is of course that although a human can understand one human very well, they're not so good at understanding a million humans a second, even very crudely (e.g. Google search).
Jul 05 2013
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 7/5/2013 3:48 PM, H. S. Teoh wrote:
 For example, consider the sentence "he's such an office Romeo!". It's
 relatively easy to parse -- no convoluted nested subordinate clauses or
 anything tricky like that. But it's extremely difficult for a machine to
 *interpret*, because to fully understand what "office Romeo" refers to,
 requires a cultural background of Shakespeare, the fact that he wrote a
 play in which there was a character named Romeo, what the role of that
 character is, what that implies about his personality, how that
 implication about his personality translates into an office context, and
 what it might mean when applied to someone other than said character.
 How to even remotely model such a thought process in a machine is an
 extremely hard problem indeed!
Human speech is also littered with sarcasm, meaning reversal (that's one nasty car!), meaning based on who you are, your social status, age, etc., meaning based on who the recipient is, social status, age, etc. Etc. I can see machine translation that is based on statistical correlation with a sufficiently large corpus of human translations, but I don't see much hope for actual understanding of non-literal speech in the foreseeable future, and I'm actually rather glad of that.
Jul 06 2013
parent reply "TommiT" <tommitissari hotmail.com> writes:
On Saturday, 6 July 2013 at 22:25:59 UTC, Walter Bright wrote:
 On 7/5/2013 3:48 PM, H. S. Teoh wrote:
 For example, consider the sentence "he's such an office 
 Romeo!". It's
 relatively easy to parse -- no convoluted nested subordinate 
 clauses or
 anything tricky like that. But it's extremely difficult for a 
 machine to
 *interpret*, because to fully understand what "office Romeo" 
 refers to,
 requires a cultural background of Shakespeare, the fact that 
 he wrote a
 play in which there was a character named Romeo, what the role 
 of that
 character is, what that implies about his personality, how that
 implication about his personality translates into an office 
 context, and
 what it might mean when applied to someone other than said 
 character.
 How to even remotely model such a thought process in a machine 
 is an
 extremely hard problem indeed!
Human speech is also littered with sarcasm, meaning reversal (that's one nasty car!), meaning based on who you are, your social status, age, etc., meaning based on who the recipient is, social status, age, etc. Etc. I can see machine translation that is based on statistical correlation with a sufficiently large corpus of human translations, but I don't see much hope for actual understanding of non-literal speech in the foreseeable future, and I'm actually rather glad of that.
You haven't read Ray Kurzweil's latest books then or you just don't think he's right?
Jul 06 2013
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 7/6/2013 11:11 PM, TommiT wrote:
 I can see machine translation that is based on statistical correlation with a
 sufficiently large corpus of human translations, but I don't see much hope for
 actual understanding of non-literal speech in the foreseeable future, and I'm
 actually rather glad of that.
You haven't read Ray Kurzweil's latest books then or you just don't think he's right?
Spend a little quality time with Siri. I did, and discovered it was hardly any better than Eliza, which is a few lines of BASIC written in the 1970s.
Jul 07 2013
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 7/7/13 1:26 AM, Walter Bright wrote:
 On 7/6/2013 11:11 PM, TommiT wrote:
 I can see machine translation that is based on statistical
 correlation with a
 sufficiently large corpus of human translations, but I don't see much
 hope for
 actual understanding of non-literal speech in the foreseeable future,
 and I'm
 actually rather glad of that.
You haven't read Ray Kurzweil's latest books then or you just don't think he's right?
Spend a little quality time with Siri. I did, and discovered it was hardly any better than Eliza, which is a few lines of BASIC written in the 1970's.
Ow come on. Andrei
Jul 07 2013
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 7/7/2013 1:30 AM, Andrei Alexandrescu wrote:
 On 7/7/13 1:26 AM, Walter Bright wrote:
 On 7/6/2013 11:11 PM, TommiT wrote:
 I can see machine translation that is based on statistical
 correlation with a
 sufficiently large corpus of human translations, but I don't see much
 hope for
 actual understanding of non-literal speech in the foreseeable future,
 and I'm
 actually rather glad of that.
You haven't read Ray Kurzweil's latest books then or you just don't think he's right?
Spend a little quality time with Siri. I did, and discovered it was hardly any better than Eliza, which is a few lines of BASIC written in the 1970's.
Ow come on.
All Siri does is recognize a set of stock patterns, just like Eliza. Step out of that, even slightly, and it reverts to a default, again, just like Eliza. Of course, Siri had a much larger set of patterns it recognized, but with a bit of experimentation you quickly figure out what those stock patterns are. There's nothing resembling human understanding there.
Jul 07 2013
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 7/7/13 3:07 AM, Walter Bright wrote:
 On 7/7/2013 1:30 AM, Andrei Alexandrescu wrote:
 On 7/7/13 1:26 AM, Walter Bright wrote:
 On 7/6/2013 11:11 PM, TommiT wrote:
 I can see machine translation that is based on statistical
 correlation with a
 sufficiently large corpus of human translations, but I don't see much
 hope for
 actual understanding of non-literal speech in the foreseeable future,
 and I'm
 actually rather glad of that.
You haven't read Ray Kurzweil's latest books then or you just don't think he's right?
Spend a little quality time with Siri. I did, and discovered it was hardly any better than Eliza, which is a few lines of BASIC written in the 1970's.
Ow come on.
All Siri does is recognize a set of stock patterns, just like Eliza. Step out of that, even slightly, and it reverts to a default, again, just like Eliza. Of course, Siri had a much larger set of patterns it recognized, but with a bit of experimentation you quickly figure out what those stock patterns are. There's nothing resembling human understanding there.
But that applies to humans, too - they just have a much larger set of patterns they recognize. But they don't overlap perfectly for all humans. Try to ask your mailman whether a hash table is better than a singly-linked list for a symbol table. Andrei
Jul 07 2013
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 7/7/2013 8:38 AM, Andrei Alexandrescu wrote:
 All Siri does is recognize a set of stock patterns, just like Eliza.
 Step out of that, even slightly, and it reverts to a default, again,
 just like Eliza.

 Of course, Siri had a much larger set of patterns it recognized, but
 with a bit of experimentation you quickly figure out what those stock
 patterns are. There's nothing resembling human understanding there.
But that applies to humans, too - they just have a much larger set of patterns they recognize.
I don't buy that. Humans don't process data like computers do.
 But they don't overlap perfectly for all humans. Try to ask your
 mailman whether a hash table is better than a singly-linked list for a symbol
 table.
A mailman can (will) also do things like pretend to know, make up a plausible answer, ask clarifying questions, figure it out, etc. Computers don't, for example, figure it out. They do not reason. Regex is not a thought process.
Jul 07 2013
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 7/7/13 1:35 PM, Walter Bright wrote:
 A mailman can (will) also do things like pretend to know, make up a
 plausible answer, ask clarifying questions, figure it out, etc.
Siri can also reply by doing a google search and reading the result.
 Computers don't, for example, figure it out. They do not reason. Regex
 is not a thought process.
This started with you claiming that Siri is just Eliza with more memory. That's inaccurate to say the least. Andrei
Jul 07 2013
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 7/7/2013 2:11 PM, Andrei Alexandrescu wrote:
 On 7/7/13 1:35 PM, Walter Bright wrote:
 A mailman can (will) also do things like pretend to know, make up a
 plausible answer, ask clarifying questions, figure it out, etc.
Siri can also reply by doing a google search and reading the result.
Right, that's what it does when it doesn't match the pattern. There's no understanding at all.
 Computers don't, for example, figure it out. They do not reason. Regex
 is not a thought process.
This started with you claiming that Siri is just Eliza with more memory. That's inaccurate to say the least.
I argue it is dead on. I don't see a fundamental difference. Siri matches your statement against a set of canned patterns (just like Eliza) and gives a canned answer. Failing that, it feeds it to a search engine (Eliza, of course, had no search engine, so it just gave a canned default response). Back in college, I wrote a Zork-style game, and spent some time programming recognition of various patterns, enough to see what's happening behind the curtain with Siri. If you're not familiar with how these things work, it can superficially appear to be magical at "understanding" you, but nothing of the sort is happening. I'm sure Apple collects statements sent to Siri, looks at them, and regularly adds more patterns. But it's just that - more patterns. (Ask Siri to open the pod bay doors, for example.) I think Siri does a mahvelous job of voice recognition - but that's not what we're talking about.
Jul 07 2013
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 7/7/13 2:44 PM, Walter Bright wrote:
 This started with you claiming that Siri is just Eliza with more
 memory. That's
 inaccurate to say the least.
I argue it is dead on. I don't see a fundamental difference.
Consider someone at a 1970s level of compiler technology coming to you and telling you in all seriousness: "Yeah, I tried your D language. A few more keywords and tricks. Compiler supports lines over 80 columns. Other than that, it has nothing over Fortran77." Knowing the wealth of research and development in programming languages since then, you'd know that that's just an ignorant statement and would not even take the time to get offended. Similarly, it would be an ignorant thing to say that Siri is just a larger Eliza. There is a world of difference between Eliza's and Siri's approaches. In fact the difference is even larger than between 1970s compilers and today's ones. For a simple example, in the 1990s NLP definitively shifted from rule-based models to statistical models. I don't know of a similarly large change in programming language technology. Andrei
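The rule-based/statistical distinction can be made concrete with a toy example: instead of matching hand-written patterns, a statistical model just counts what it has seen in a corpus (a minimal sketch with a made-up two-sentence corpus, nothing like a production model):

```d
import std.array : split;
import std.stdio;

void main()
{
    // Count word bigrams in a tiny corpus; to the model, frequent
    // pairs are simply "more familiar" -- no hand-written rules.
    auto words = "the cat sat on the mat the cat ate".split();
    size_t[string] bigrams;
    foreach (i; 0 .. words.length - 1)
        bigrams[words[i] ~ " " ~ words[i + 1]]++;
    writeln(bigrams.get("the cat", 0));  // 2: seen twice
    writeln(bigrams.get("cat sat", 0));  // 1: seen once
    writeln(bigrams.get("cat the", 0));  // 0: never seen
}
```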
Jul 07 2013
next sibling parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Sun, Jul 07, 2013 at 04:03:39PM -0700, Andrei Alexandrescu wrote:
 On 7/7/13 2:44 PM, Walter Bright wrote:
This started with you claiming that Siri is just Eliza with more
memory. That's
inaccurate to say the least.
I argue it is dead on. I don't see a fundamental difference.
Consider someone at a 1970s level of compiler technology coming to you and telling you in all seriousness: "Yeah, I tried your D language. A few more keywords and tricks. Compiler supports lines over 80 columns. Other than that, it has nothing over Fortran77." Knowing the wealth of research and development in programming languages since then, you'd know that that's just an ignorant statement and would not even take the time to get offended. Similarly, it would be an ignorant thing to say that Siri is just a larger Eliza. There is a world of difference between Eliza's and Siri's approaches. In fact the difference is even larger than between 1970s compilers and today's ones. For a simple example, in the 1990s NLP has definitely departed from rule-based models to statistical models. I don't know of a similarly large change in programming language technology.
[...] I look forward to the day programs will be written by statistical models. Random failure FTW! :-P Oh wait, it's already been done: http://p-nand-q.com/humor/programming_languages/java2k.html :-P T -- Doubtless it is a good thing to have an open mind, but a truly open mind should be open at both ends, like the food-pipe, with the capacity for excretion as well as absorption. -- Northrop Frye
Jul 07 2013
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 7/7/2013 4:03 PM, Andrei Alexandrescu wrote:
 Similarly, it would be an ignorant thing to say that Siri is just a larger
 Eliza. There is a world of difference between Eliza's and Siri's approaches. In
 fact the difference is even larger than between 1970s compilers and today's
 ones.
I don't know how Siri is implemented. If it is using modern approaches, I'd love to sit down with you sometime and learn about it.
Jul 07 2013
next sibling parent reply Timothee Cour <thelastmammoth gmail.com> writes:
On Sun, Jul 7, 2013 at 6:11 PM, Walter Bright <newshound2 digitalmars.com>wrote:

 On 7/7/2013 4:03 PM, Andrei Alexandrescu wrote:

 Similarly, it would be an ignorant thing to say that Siri is just a larger
 Eliza. There is a world of difference between Eliza's and Siri's
 approaches. In
 fact the difference is even larger than between 1970s compilers and
 today's
 ones.
I don't know how Siri is implemented. If it is using modern approaches, I'd love to sit down with you sometime and learn about it.
Can't speak for Siri, but the deep learning architecture used in Google Now has little to do with Eliza's, and neither does the recognition accuracy. Try it if you haven't!
Jul 07 2013
parent Walter Bright <newshound2 digitalmars.com> writes:
On 7/7/2013 7:42 PM, Timothee Cour wrote:
 Can't speak for Siri, but the deep learning architecture used in google now has
 little to do with Eliza. Nor is the recognition accuracy. Try it if you
haven't!
Can you give some examples demonstrating this?
Jul 07 2013
prev sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 7/7/13 6:11 PM, Walter Bright wrote:
 On 7/7/2013 4:03 PM, Andrei Alexandrescu wrote:
 Similarly, it would be an ignorant thing to say that Siri is just a
 larger
 Eliza. There is a world of difference between Eliza's and Siri's
 approaches. In
 fact the difference is even larger than between 1970s compilers and
 today's
 ones.
I don't know how Siri is implemented. If it is using modern approaches, I'd love to sit down with you sometime and learn about it.
Zat's the spirit! Andrei
Jul 07 2013
prev sibling parent reply "Tommi" <tommitissari hotmail.com> writes:
On Sunday, 7 July 2013 at 20:35:49 UTC, Walter Bright wrote:
 On 7/7/2013 8:38 AM, Andrei Alexandrescu wrote:
 All Siri does is recognize a set of stock patterns, just like 
 Eliza. Step out of that, even slightly, and it reverts to a 
 default, again, just like Eliza.

 Of course, Siri had a much larger set of patterns it 
 recognized, but with a bit of experimentation you
 quickly figure out what those stock patterns are.
 There's nothing resembling human understanding there.
But that applies to humans, too - they just have a much larger set of patterns they recognize.
I don't buy that. Humans don't process data like computers do.
Humans don't and _can't_ process data like computers do, but computers _can_ process data like humans do. The human brain does its computation in a highly parallel manner, but its signals run much slower than a computer's. What the human brain does is a very specific process, optimized for survival on planet Earth. But computers are generic computation devices. They can model any computational process, including the ones the human brain uses (at least once we get some more cores in our computers). Disclaimer: I'm basically just paraphrasing stuff I read from "The Singularity Is Near" and "How to Create a Mind".
Jul 08 2013
next sibling parent reply "John Colvin" <john.loughran.colvin gmail.com> writes:
On Monday, 8 July 2013 at 09:02:44 UTC, Tommi wrote:
 On Sunday, 7 July 2013 at 20:35:49 UTC, Walter Bright wrote:
 On 7/7/2013 8:38 AM, Andrei Alexandrescu wrote:
 All Siri does is recognize a set of stock patterns, just 
 like Eliza. Step out of that, even slightly, and it reverts 
 to a default, again, just like Eliza.

 Of course, Siri had a much larger set of patterns it 
 recognized, but with a bit of experimentation you
 quickly figure out what those stock patterns are.
 There's nothing resembling human understanding there.
But that applies to humans, too - they just have a much larger set of patterns they recognize.
I don't buy that. Humans don't process data like computers do.
Humans don't and _can't_ process data like computers do, but computers _can_ process data like humans do. Human brain does it's computation in a highly parallel manner, but signals run much slower than they do in computers. What human brain does is a very specific process, optimized for survival on planet Earth. But computers are generic computation devices. They can model any computational processes, including the ones that human brain uses (at least once we get some more cores in our computers). Disclaimer: I'm basically just paraphrasing stuff I read from "The Singularity Is Near" and "How to Create a Mind".
The human mind being so particularly powerful at some tasks is a product of both its architecture *and* its training. The importance of physical learning in artificial intelligence is getting some good recognition these days. For me, the most interesting question in all of this is "What is intelligence?". While that might seem the preserve of philosophers, I believe that computers have the ability to (and already do) demonstrate new and diverse types of intelligence, entirely unlike human intelligence but nonetheless highly effective.
Jul 08 2013
parent "Tommi" <tommitissari hotmail.com> writes:
On Monday, 8 July 2013 at 10:48:05 UTC, John Colvin wrote:
 For me, the most interesting question in all of this is "What 
 is intelligence?". While that might seem the preserve of 
 philosophers, I believe that computers have the ability to (and 
 already do) demonstrate new and diverse types of intelligence, 
 entirely unlike human intelligence but nonetheless highly 
 effective.
A quite fitting quote from "How to Create a Mind", I think: "American philosopher John Searle (born in 1932) argued recently that Watson is not capable of thinking. Citing his “Chinese room” thought experiment (which I will discuss further in chapter 11), he states that Watson is only manipulating symbols and does not understand the meaning of those symbols. Actually, Searle is not describing Watson accurately, since its understanding of language is based on hierarchical statistical processes—not the manipulation of symbols. The only way that Searle’s characterization would be accurate is if we considered every step in Watson’s self-organizing processes to be “the manipulation of symbols.” But if that were the case, then the human brain would not be judged capable of thinking either. It is amusing and ironic when observers criticize Watson for just doing statistical analysis of language as opposed to possessing the “true” understanding of language that humans have. Hierarchical statistical analysis is exactly what the human brain is doing when it is resolving multiple hypotheses based on statistical inference (and indeed at every level of the neocortical hierarchy). Both Watson and the human brain learn and respond based on a similar approach to hierarchical understanding. In many respects Watson’s knowledge is far more extensive than a human’s; no human can claim to have mastered all of Wikipedia, which is only part of Watson’s knowledge base. Conversely, a human can today master more conceptual levels than Watson, but that is certainly not a permanent gap."
Jul 08 2013
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 7/8/2013 2:02 AM, Tommi wrote:
 I don't buy that. Humans don't process data like computers do.
Humans don't and _can't_ process data like computers do, but computers _can_ process data like humans do. Human brain does it's computation in a highly parallel manner, but signals run much slower than they do in computers. What human brain does is a very specific process, optimized for survival on planet Earth. But computers are generic computation devices. They can model any computational processes, including the ones that human brain uses (at least once we get some more cores in our computers).
Except that we have no idea how brains actually work. Are fruit flies self-aware? Probably not. Are dogs? Definitely. So at what point between fruit flies and dogs does self-awareness start? We have no idea. None at all.
Jul 08 2013
next sibling parent "Dicebot" <public dicebot.lv> writes:
On Monday, 8 July 2013 at 12:04:14 UTC, Walter Bright wrote:
 Except that we have no idea how brains actually work.

 Are fruit flies self-aware? Probably not. Are dogs? Definitely. 
 So at what point between fruit flies and dogs does 
 self-awareness start?

 We have no idea. None at all.
+1 Underestimating the complexity of human thinking is a tempting mistake for a programmer to make :) Well, to be honest, there are _some_ ideas, but those are more guesses than precise knowledge.
Jul 08 2013
prev sibling next sibling parent "Tommi" <tommitissari hotmail.com> writes:
On Monday, 8 July 2013 at 12:04:14 UTC, Walter Bright wrote:
 On 7/8/2013 2:02 AM, Tommi wrote:
 I don't buy that. Humans don't process data like computers do.
Humans don't and _can't_ process data like computers do, but computers _can_ process data like humans do. The human brain does its computation in a highly parallel manner, but its signals run much slower than they do in computers. What the human brain does is a very specific process, optimized for survival on planet Earth. But computers are generic computation devices. They can model any computational process, including the ones that the human brain uses (at least once we get some more cores in our computers).
Except that we have no idea how brains actually work.
"How to Create a Mind" makes a pretty convincing argument to the contrary. It's true that we don't have the full picture of how brains work. But both the temporal and spatial resolution of that picture are increasing rapidly with better brain scanners.
 Are fruit flies self-aware? Probably not. Are dogs? Definitely. 
 So at what point between fruit flies and dogs does 
 self-awareness start?

 We have no idea. None at all.
"How to Create a Mind" talks plenty about consciousness as well. My personal guess is that consciousness is not a binary property. I feel I should get some royalties for plugging that book like this.
Jul 08 2013
prev sibling next sibling parent reply "John Colvin" <john.loughran.colvin gmail.com> writes:
On Monday, 8 July 2013 at 12:04:14 UTC, Walter Bright wrote:
 On 7/8/2013 2:02 AM, Tommi wrote:
 I don't buy that. Humans don't process data like computers do.
Humans don't and _can't_ process data like computers do, but computers _can_ process data like humans do. The human brain does its computation in a highly parallel manner, but its signals run much slower than they do in computers. What the human brain does is a very specific process, optimized for survival on planet Earth. But computers are generic computation devices. They can model any computational process, including the ones that the human brain uses (at least once we get some more cores in our computers).
Except that we have no idea how brains actually work. Are fruit flies self-aware? Probably not. Are dogs? Definitely. So at what point between fruit flies and dogs does self-awareness start? We have no idea. None at all.
Problem A) Understanding how the human brain processes certain types of information. Problem B) Making a decision about what constitutes self-awareness and where to draw the line. Those are not equivalent problems in the slightest. Ugh, the consciousness guys give the whole field of neurobiology a bad name. Everyone goes "oh, neuroscience, that's cool, but YOU DON'T UNDERSTAND LOVE AND CONSCIOUSNESS LALALALALALA" because that's the only side the science media ever talk about.
Jul 08 2013
next sibling parent reply "Dicebot" <public dicebot.lv> writes:
On Monday, 8 July 2013 at 13:05:55 UTC, John Colvin wrote:
 ..
 Problem A) Understanding how the human brain processes certain 
 types of information.

 Problem B) Making a decision about what constitutes 
 self-awareness and where to draw the line.

 Those are not equivalent problems in the slightest.
Well, the second one is not really a scientific problem; it is a philosophical one. Self-awareness is a very vague term with a lot of space for personal interpretation. I don't even think it is worth speaking about.
Jul 08 2013
next sibling parent reply "Dicebot" <public dicebot.lv> writes:
On Monday, 8 July 2013 at 13:31:41 UTC, Dicebot wrote:
 ...
And, yeah, the very point I wanted to mention - while the concept of self-awareness is useless on its own, it is quite interesting in the scope of the first problem - "how does a human brain reason about someone's self-awareness" :)
Jul 08 2013
next sibling parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Mon, Jul 08, 2013 at 03:34:43PM +0200, Dicebot wrote:
 On Monday, 8 July 2013 at 13:31:41 UTC, Dicebot wrote:
...
And, yeah, the very point I wanted to mention - while the concept of self-awareness is useless on its own, it is quite interesting in the scope of the first problem - "how does a human brain reason about someone's self-awareness" :)
I love you guys. A thread about the merits of adding path append operators to strings turns into a discussion about self-awareness. Brilliant. ;-) T -- It only takes one twig to burn down a forest.
Jul 08 2013
parent "Dicebot" <public dicebot.lv> writes:
On Monday, 8 July 2013 at 14:28:33 UTC, H. S. Teoh wrote:
 I love you guys. A thread about the merits of adding path append
 operators to strings turns into a discussion about 
 self-awareness.
 Brilliant. ;-)
I don't care about path append operators, but the tricks of human consciousness are an important hobby interest of mine :) And I doubt I can easily find any better community out there to speak about it with, judging by friendliness and intelligence level :P
Jul 08 2013
prev sibling parent "deadalnix" <deadalnix gmail.com> writes:
On Monday, 8 July 2013 at 13:34:44 UTC, Dicebot wrote:
 On Monday, 8 July 2013 at 13:31:41 UTC, Dicebot wrote:
 ...
And, yeah, the very point I wanted to mention - while the concept of self-awareness is useless on its own, it is quite interesting in the scope of the first problem - "how does a human brain reason about someone's self-awareness" :)
Without compile time reflection !
Jul 08 2013
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 7/8/2013 6:31 AM, Dicebot wrote:
 Well, second one is not really a scientific problem, it is a philosophical one.
 Self-awareness is a very vague term with a lot of space for personal
 interpretation. I don't even think it is worth speaking about.
If you consider that our brains evolved, and self-awareness was a result of evolution, then self-awareness presumably offers some sort of survival benefit. Following that line of reasoning, self-awareness becomes a real phenomenon with a scientific basis.
Jul 08 2013
next sibling parent reply "Dicebot" <public dicebot.lv> writes:
On Monday, 8 July 2013 at 18:37:30 UTC, Walter Bright wrote:
 If you consider that our brains evolved, and self-awareness was 
 a result of evolution, then self-awareness presumably offers 
 some sort of survival benefit.

 Following that line of reasoning, self-awareness becomes a real 
 phenomenon with a scientific basis.
I do not consider self-awareness a result of evolution. I am not even sure it actually exists. It is a very abstract term with no clear meaning; using it just obfuscates the idea.
Jul 08 2013
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 7/8/2013 11:54 AM, Dicebot wrote:
 On Monday, 8 July 2013 at 18:37:30 UTC, Walter Bright wrote:
 If you consider that our brains evolved, and self-awareness was a result of
 evolution, then self-awareness presumably offers some sort of survival benefit.

 Following that line of reasoning, self-awareness becomes a real phenomenon
 with a scientific basis.
I do not consider self-awareness a result of evolution. I am not even sure it actually exists. It is a very abstract term with no clear meaning; using it just obfuscates the idea.
Just because we have difficulty defining something is not a reason to dismiss it as irrelevant or non-existent. I'm sure you're self-aware, as I'm sure Siri and Watson are not. It's like somewhere between a fertilized egg and your current form you became a person. Just because we are unable to definitively point to the moment when you became one, doesn't mean you didn't become one, nor does it mean that personhood is not a very useful and meaningful construct.
Jul 08 2013
next sibling parent reply "Tommi" <tommitissari hotmail.com> writes:
On Monday, 8 July 2013 at 21:46:24 UTC, Walter Bright wrote:
 I'm sure you're self-aware, as I'm sure Siri and Watson are not.
But there is no way for you to prove to me that you are self-aware. It could be that you are simply programmed to appear self-aware; think of an infinite loop containing a massive switch statement, where each case represents a different situation in life, and the function executed in each case represents what you did in that situation. As long as we can't test whether an entity is self-aware or not, for our purposes it kind of doesn't matter whether it is or not. If we are ever able to define what consciousness is (and I'm quite sure we will), I suspect it's going to be some kind of continuous feedback loop from sensory data to the brain, and from the brain to the muscles and through them back to sensory data again. Consciousness would be kind of your ability to predict what kind of sensory data would be likely to be produced if you sent a certain set of signals to your muscles. I like this guy's take on consciousness: http://www.youtube.com/watch?v=3jBUtKYRxnA
Jul 08 2013
parent "Tommi" <tommitissari hotmail.com> writes:
On Tuesday, 9 July 2013 at 06:07:12 UTC, Tommi wrote:
 Consciousness would be kind of your ability to predict what 
 kind of sensory data would be likely to be produced if you sent 
 a certain set of signals to your muscles.
...and the better you are at predicting those very-near-future sensory signals, the more you feel that you're conscious.
Jul 08 2013
prev sibling next sibling parent "deadalnix" <deadalnix gmail.com> writes:
On Monday, 8 July 2013 at 21:46:24 UTC, Walter Bright wrote:
 Just because we have difficulty defining something is not a 
 reason to dismiss it as irrelevant or non-existent.

 I'm sure you're self-aware, as I'm sure Siri and Watson are not.
It is proven that at least 70% of what we perceive as being our decisions are in fact backward rationalization. I don't think the idea that it may be 100% is that absurd.
Jul 09 2013
prev sibling parent reply "Dicebot" <public dicebot.lv> writes:
On Monday, 8 July 2013 at 21:46:24 UTC, Walter Bright wrote:
 Just because we have difficulty defining something is not a 
 reason to dismiss it as irrelevant or non-existent.
Sure, but there is an important difference between "dismissing" and "dismissing as a relevant scientific term to discuss". Speaking about the possible self-awareness of computers is perfectly fine for a forum discussion but not acceptable for a scientific one. One needs common, well-defined terms to make progress.
 I'm sure you're self-aware, as I'm sure Siri and Watson are not.
I'll take it as a compliment :) But that is exactly what I am talking about - the question of whether you consider someone self-aware is extremely interesting from the psychological point of view (probably even social psychology). For AI research the important question is what properties a self-aware being has. Those are related but different. In the former case the exact meaning of self-awareness is not important, as you primarily study the person who makes the statement, not the statement itself. In other words, it is not important what one means by "self-aware" but what thinking processes result in such a tag. The latter relies on the research done in the previous step to define the properties of a "self-aware" state that a target AI needs to meet to be recognized as such by a wide variety of people. And, of course, as this relies on a common consensus, such a concept is naturally very volatile. That is the main idea behind the Turing test as far as I understand it.
 ... nor does it mean that personhood is not a very useful and 
 meaningful construct.
Even worse, now you use "personhood" as a replacement for self-awareness! :) It is a very dangerous mistake to use common words when speaking about consciousness and thinking - too much self-reflection involved.
Jul 09 2013
parent "John Colvin" <john.loughran.colvin gmail.com> writes:
On Tuesday, 9 July 2013 at 10:38:11 UTC, Dicebot wrote:
 ... nor does it mean that personhood is not a very useful and 
 meaningful construct.
Even worse, now you use "personhood" as a replacement for self-awareness! :) It is a very dangerous mistake to use common words when speaking about consciousness and thinking - too much self-reflection involved.
You're looking at it in the wrong context. Walter was talking about personhood as an analogy, not at all conflating it with self-awareness. I agree 100% about the language point. By and large our languages (and language abilities) have evolved to identify and communicate day-to-day opportunities and risks. They are very specialised DSLs running on very specialised hardware, not well suited to performing complex runtime introspection or large-scale formal logic :p Interestingly, it doesn't take a huge change in design to unleash some very different abilities, e.g. autistic savants.
Jul 09 2013
prev sibling parent "deadalnix" <deadalnix gmail.com> writes:
On Monday, 8 July 2013 at 18:37:30 UTC, Walter Bright wrote:
 On 7/8/2013 6:31 AM, Dicebot wrote:
 Well, second one is not really a scientific problem, it is a 
 philosophical one.
 Self-awareness is a very vague term with a lot of space for 
 personal
 interpretation. I don't even think it is worth speaking about.
If you consider that our brains evolved, and self-awareness was a result of evolution, then self-awareness presumably offers some sort of survival benefit.
Not necessarily. If the change is neutral, it can still develop in some species. Arguably, as our brains consume 20% of our energy, it is highly likely that there is a benefit, so you still have a point.
 Following that line of reasoning, self-awareness becomes a real 
 phenomenon with a scientific basis.
How is it defined in science? The concept seems hard to define properly to me.
Jul 09 2013
prev sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 7/8/2013 6:05 AM, John Colvin wrote:
 Problem A) Understanding how the human brain processes certain types of
 information.

 Problem B) Making a decision about what constitutes self-awareness and where to
 draw the line.

 Those are not equivalent problems in the slightest.
I'm not at all sure about the validity of your last statement. It presumes, for example, that the brain is two separate things with a line dividing them.
Jul 08 2013
prev sibling parent "bearophile" <bearophileHUGS lycos.com> writes:
Walter Bright:

 Except that we have no idea how brains actually work.

 Are fruit flies self-aware? Probably not. Are dogs? Definitely. 
 So at what point between fruit flies and dogs does 
 self-awareness start?

 We have no idea. None at all.
There are many things that are not yet known in neurobiology and in the higher organizational patterns of the brain, both in its computational structure and in the dynamic interactions between its parts. But we are not totally ignorant. Neurobiology and the other brain sciences have discovered many things. This old guy http://en.wikipedia.org/wiki/Gerald_Edelman has proposed several theories (http://en.wikipedia.org/wiki/Neural_Darwinism ), done simulations; and generally all kinds of researchers are increasing our knowledge of such topics every day, so currently we are not fully in the dark as you say. The differences between the brains of different animals are slowly getting understood, including the difference between the consciousness of dogs, the self-consciousness of humans, the simpler brains of reptiles, and the cabled aggregates of bodies inside tiny insect brains (as often happens in the biological sciences, what we discover is that even the 'simplest' brains are quite a bit more complex than previously believed. Today we know how a fruit fly learns and remembers scents, how its tiny brain copes with the needs of a complex body able to fly in a very complex environment, etc). Bye, bearophile
Jul 08 2013
prev sibling parent reply "John Colvin" <john.loughran.colvin gmail.com> writes:
On Sunday, 7 July 2013 at 08:26:03 UTC, Walter Bright wrote:
 On 7/6/2013 11:11 PM, TommiT wrote:
 I can see machine translation that is based on statistical 
 correlation with a
 sufficiently large corpus of human translations, but I don't 
 see much hope for
 actual understanding of non-literal speech in the foreseeable 
 future, and I'm
 actually rather glad of that.
You haven't read Ray Kurzweil's latest books then or you just don't think he's right?
Spend a little quality time with Siri. I did, and discovered it was hardly any better than Eliza, which is a few lines of BASIC written in the 1970's.
One word: Watson.
Jul 07 2013
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 7/7/2013 2:16 AM, John Colvin wrote:
 On Sunday, 7 July 2013 at 08:26:03 UTC, Walter Bright wrote:
 On 7/6/2013 11:11 PM, TommiT wrote:
 I can see machine translation that is based on statistical correlation with a
 sufficiently large corpus of human translations, but I don't see much hope for
 actual understanding of non-literal speech in the foreseeable future, and I'm
 actually rather glad of that.
You haven't read Ray Kurzweil's latest books then or you just don't think he's right?
Spend a little quality time with Siri. I did, and discovered it was hardly any better than Eliza, which is a few lines of BASIC written in the 1970's.
One word: Watson.
Ask Watson what its favorite color is. Oh well.
Jul 07 2013
next sibling parent "TommiT" <tommitissari hotmail.com> writes:
On Sunday, 7 July 2013 at 10:07:51 UTC, Walter Bright wrote:
 On 7/7/2013 2:16 AM, John Colvin wrote:
 One word: Watson.
Ask Watson what its favorite color is. Oh well.
That would require self-awareness. But self-awareness is not a requirement for understanding natural language, as long as the speaker doesn't refer to the entity doing the understanding.
Jul 07 2013
prev sibling next sibling parent reply "John Colvin" <john.loughran.colvin gmail.com> writes:
On Sunday, 7 July 2013 at 10:07:51 UTC, Walter Bright wrote:
 On 7/7/2013 2:16 AM, John Colvin wrote:
 On Sunday, 7 July 2013 at 08:26:03 UTC, Walter Bright wrote:
 On 7/6/2013 11:11 PM, TommiT wrote:
 I can see machine translation that is based on statistical 
 correlation with a
 sufficiently large corpus of human translations, but I 
 don't see much hope for
 actual understanding of non-literal speech in the 
 foreseeable future, and I'm
 actually rather glad of that.
You haven't read Ray Kurzweil's latest books then or you just don't think he's right?
Spend a little quality time with Siri. I did, and discovered it was hardly any better than Eliza, which is a few lines of BASIC written in the 1970's.
One word: Watson.
Ask Watson what its favorite color is. Oh well.
That's asking for an awful lot more than good natural language processing.
Jul 07 2013
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 7/7/2013 5:41 AM, John Colvin wrote:
 On Sunday, 7 July 2013 at 10:07:51 UTC, Walter Bright wrote:
 Ask Watson what its favorite color is.

 Oh well.
That's asking for an awful lot more than good natural language processing.
Is it? Yes, that's a serious question. I don't presume that human language is something independent of our self-awareness.
Jul 07 2013
next sibling parent "John Colvin" <john.loughran.colvin gmail.com> writes:
On Sunday, 7 July 2013 at 20:38:31 UTC, Walter Bright wrote:
 On 7/7/2013 5:41 AM, John Colvin wrote:
 On Sunday, 7 July 2013 at 10:07:51 UTC, Walter Bright wrote:
 Ask Watson what its favorite color is.

 Oh well.
That's asking for an awful lot more than good natural language processing.
Is it? Yes, that's a serious question. I don't presume that human language is something independent from our self-awareness.
Fair point. There is, however, a reasonable subset of language (note: not explicitly a subset of words, phrases, or grammar) that can be interpreted without said self-awareness.
Jul 07 2013
prev sibling parent Jonathan M Davis <jmdavisProg gmx.com> writes:
On Sunday, July 07, 2013 13:38:33 Walter Bright wrote:
 On 7/7/2013 5:41 AM, John Colvin wrote:
 On Sunday, 7 July 2013 at 10:07:51 UTC, Walter Bright wrote:
 Ask Watson what its favorite color is.
 
 Oh well.
That's asking for an awful lot more than good natural language processing.
Is it? Yes, that's a serious question. I don't presume that human language is something independent from our self-awareness.
Well, it _is_ considered to be an AI-complete problem. http://en.wikipedia.org/wiki/AI-complete - Jonathan M Davis
Jul 07 2013
prev sibling parent reply "Adam D. Ruppe" <destructionator gmail.com> writes:
On Sunday, 7 July 2013 at 10:07:51 UTC, Walter Bright wrote:
 Ask Watson what its favorite color is.
Ask /me/ what my favorite color is. I always hate questions like that because, and this might sound silly, if I pick one I think the others will feel left out, and I feel bad about that. Maybe this is an effect of me being picked last in gym for all those years in school. I'm not even that bad at sports! Anyway, the worst was when a friend would take me with her to the mall to shop, something she did a lot. Which shoes do I like better? idk, I might find one style weird, but who am /I/ to judge something for being weird? I don't think I'd want a computer that is too much like us!
Jul 07 2013
parent Walter Bright <newshound2 digitalmars.com> writes:
On 7/7/2013 2:05 PM, Adam D. Ruppe wrote:
 On Sunday, 7 July 2013 at 10:07:51 UTC, Walter Bright wrote:
 Ask Watson what its favorite color is.
Ask /me/ what my favorite color is. I always hate questions like that because, and this might sound silly, but it bothers me because if I pick one, I think the others will feel left out, and I feel bad about that. Maybe this is an effect of me being picked last in gym for all those years in school. I'm not even that bad at sports! Anyway, the worst was when a friend would take me with her to the mall to shop, something she did a lot. Which shoes do I like better? idk, I might find one style weird, but who am /I/ to judge something for being weird? I don't think I'd want a computer that is too much like us!
Exactly. Can you see Watson generating a response like yours? I don't, either. The top hit on google for that question is: http://www.youtube.com/watch?v=pWS8Mg-JWSg The people typing that into google are probably looking for that clip, they are not asking google what google's favorite color is. Google, of course, is programmed to be a search engine, not process natural language for anything other than search. If I was asked that question, the context would matter. If it was at a barbeque with the beer flowing, I'd answer "blue, no ye- .. aaaaahhhhhhggggg!" If an architect working for me asked, I'd give a serious answer, and of course even that answer would depend on the context - I'd pick different colors for the kitchen walls than the bedroom floor. Good luck with Watson or Siri on such.
Jul 07 2013
prev sibling parent reply Artur Skawina <art.08.09 gmail.com> writes:
On 07/02/13 22:47, TommiT wrote:
  Division operator for strings doesn't make any sense, and I doubt there will
ever be some other meaning for '/' that would make more sense than "a directory
separator" for strings in the context of programming.
Umm,
 $ /usr/bin/pike
 Pike v7.8 release 537 running Hilfe v3.5 (Incremental Pike Frontend)
 "/a/b//c" / "/";
(1) Result: ({ /* 5 elements */ "", "a", "b", "", "c" })
That's the only sane use of the division operator on string types; anything else would be extremely confusing. And this still does not mean that it would be a good idea in D. Typing out "splitter()" is not /that/ hard. artur
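[Editor's note: for readers unfamiliar with Pike, its `/` on strings splits on the divider and keeps the empty fields produced by leading or doubled separators. Python's `str.split` with an explicit separator has the same semantics, so the Pike session above can be reproduced as follows (Python is used here purely for illustration; the D equivalent mentioned above is `splitter()`):]

```python
# Pike's  "/a/b//c" / "/"  keeps empty fields wherever the string
# starts with a separator or two separators are adjacent.
# str.split with an explicit separator behaves the same way.
parts = "/a/b//c".split("/")
print(parts)  # ['', 'a', 'b', '', 'c'] -- the same 5 elements as the Pike session
```

If memory serves, D's `std.algorithm.splitter` also yields those empty elements between consecutive separators, which is what makes it a faithful replacement for this use of `/`.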
Jul 02 2013
parent "TommiT" <tommitissari hotmail.com> writes:
On Tuesday, 2 July 2013 at 23:08:37 UTC, Artur Skawina wrote:
 On 07/02/13 22:47, TommiT wrote:
  Division operator for strings doesn't make any sense, and I 
 doubt there will ever be some other meaning for '/' that would 
 make more sense than "a directory separator" for strings in 
 the context of programming.
Umm,
 $ /usr/bin/pike
 Pike v7.8 release 537 running Hilfe v3.5 (Incremental Pike 
 Frontend)
 "/a/b//c" / "/";
(1) Result: ({ /* 5 elements */ "", "a", "b", "", "c" })
That's the only sane use of the division operator on string types; anything else would be extremely confusing. And this still does not mean that it would be a good idea in D. Typing out "splitter()" is not /that/ hard. artur
Perhaps an even more logical meaning for the / operator on strings would be to divide the string into N equal-sized parts (plus a potential remainder): "abcdefg" / 3 result: ["ab", "cd", "ef", "g"] But your "divide this string using this divider character" is pretty logical too (once you know it).
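[Editor's note: a minimal sketch of this chunking reading of `/`, in Python purely for illustration. `chunked` is a hypothetical helper; note that the output shown in the example above actually corresponds to a fixed chunk size of 2, not to exactly 3 parts.]

```python
def chunked(s, size):
    # Hypothetical helper: split s into consecutive pieces of `size`
    # characters; the final piece keeps whatever remainder is left.
    return [s[i:i + size] for i in range(0, len(s), size)]

print(chunked("abcdefg", 2))  # ['ab', 'cd', 'ef', 'g']
```

(In present-day D, if I recall correctly, `std.range.chunks` provides exactly this fixed-chunk-size behavior for ranges, strings included.)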
Jul 02 2013