
digitalmars.D - Breaking backwards compatibility

reply Walter Bright <newshound2 digitalmars.com> writes:
This statement is from Linus Torvalds about breaking binary compatibility:

https://lkml.org/lkml/2012/3/8/495

While I don't think we need to worry so much at the moment about breaking
binary 
compatibility with new D releases, we do have a big problem with breaking
source 
code compatibility.

This is why we need to have a VERY high bar for breaking changes.
Mar 09 2012
next sibling parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 03/09/2012 11:32 PM, Walter Bright wrote:
 This statement is from Linus Torvalds about breaking binary compatibility:

 https://lkml.org/lkml/2012/3/8/495

 While I don't think we need to worry so much at the moment about
 breaking binary compatibility with new D releases, we do have a big
 problem with breaking source code compatibility.

 This is why we need to have a VERY high bar for breaking changes.

Most bug fixes are breaking changes. I don't think we are there yet.
Mar 09 2012
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 3/9/2012 2:41 PM, Timon Gehr wrote:
 On 03/09/2012 11:32 PM, Walter Bright wrote:
 This statement is from Linus Torvalds about breaking binary compatibility:

 https://lkml.org/lkml/2012/3/8/495

 While I don't think we need to worry so much at the moment about
 breaking binary compatibility with new D releases, we do have a big
 problem with breaking source code compatibility.

 This is why we need to have a VERY high bar for breaking changes.

Most bug fixes are breaking changes. I don't think we are there yet.

There have been some gratuitous ones, in my not-so-humble opinion. Those need to stop.
Mar 09 2012
parent Walter Bright <newshound2 digitalmars.com> writes:
On 3/9/2012 2:56 PM, Jonathan M Davis wrote:
 Do you have any specific ones in mind? There were a number of them to try and
 make it so that names were more consistent with regards to camelcasing and the
 like, but those changes have largely stopped (or at least are well into the
 deprecation process if they haven't been completed yet).

I've voiced my objections to changing those names before. I know it's a done deal now.
Mar 09 2012
prev sibling next sibling parent "Jonathan M Davis" <jmdavisProg gmx.com> writes:
On Friday, March 09, 2012 14:44:05 Walter Bright wrote:
 On 3/9/2012 2:41 PM, Timon Gehr wrote:
 On 03/09/2012 11:32 PM, Walter Bright wrote:
 This statement is from Linus Torvalds about breaking binary
 compatibility:
 
 https://lkml.org/lkml/2012/3/8/495
 
 While I don't think we need to worry so much at the moment about
 breaking binary compatibility with new D releases, we do have a big
 problem with breaking source code compatibility.
 
 This is why we need to have a VERY high bar for breaking changes.

Most bug fixes are breaking changes. I don't think we are there yet.

There have been some gratuitous ones, in my not-so-humble opinion. Those need to stop.

Do you have any specific ones in mind? There were a number of them to try and make it so that names were more consistent with regards to camelcasing and the like, but those changes have largely stopped (or at least are well into the deprecation process if they haven't been completed yet). The only stuff along those lines that I'm aware of at the moment is the discussion on making some changes to some of the function names in core.time and std.datetime, because some people don't like some of them. And no such changes have been made yet (though some people are still looking to make some of them).

- Jonathan M Davis
Mar 09 2012
prev sibling parent reply "David Nadlinger" <see klickverbot.at> writes:
On Friday, 9 March 2012 at 22:41:22 UTC, Timon Gehr wrote:
 Most bug fixes are breaking changes. I don't think we are there 
 yet.

In my opinion, this is a very interesting and important observation. Due to the powerful meta-programming and reflection capabilities, most of the time the question is not whether a change is backward compatible or not, but rather how _likely_ it is to break code. There isn't really a good way to avoid that, even more so if your language allows testing whether a given piece of code compiles or not.

A related problem is that we still don't quite have an appropriate language spec, so you can never be sure if your code is really »correct« or if you are relying on DMD implementation details. I'm sure everybody who has had their meta-programming-heavy code break due to a seemingly unrelated DMD bugfix knows what I'm trying to say…

David
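The point about compile-time introspection can be made concrete with a small sketch (all names here are hypothetical, not from any real library): D code can branch on whether some other code compiles, so a compiler or library fix that flips that answer silently changes which path runs instead of producing an error.

```d
import std.stdio;

struct S { int x; }

// Branch at compile time on whether an expression compiles at all.
// If a compiler bugfix ever changes the answer, this program silently
// switches implementations instead of failing loudly.
enum canAssign = __traits(compiles, { S a, b; a = b; });

string strategy()
{
    static if (canAssign)
        return "plain assignment";
    else
        return "element-wise workaround";
}

void main()
{
    writeln(strategy());
}
```

This is why "is it backward compatible?" degenerates into "how likely is it to break?": almost any observable change in what compiles can be observed by user code.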
Mar 11 2012
parent deadalnix <deadalnix gmail.com> writes:
Le 11/03/2012 22:59, David Nadlinger a écrit :
 On Friday, 9 March 2012 at 22:41:22 UTC, Timon Gehr wrote:
 Most bug fixes are breaking changes. I don't think we are there yet.

In my opinion, this is a very interesting and important observation. Due to the powerful meta-programming and reflection capabilities, most of the time the question is not whether a change is backward compatible or not, but rather how _likely_ it is to break code. There isn't really a good way to avoid that, even more so if your language allows testing whether a given piece of code compiles or not. A related problem is that we still don't quite have an appropriate language spec, so you can never be sure if your code is really »correct« or if you are relying on DMD implementation details. I'm sure everybody who has had their meta-programming-heavy code break due to a seemingly unrelated DMD bugfix knows what I'm trying to say… David

D is very tied to DMD's implementation. This may not be good, but it is too soon to maintain several compilers.
Mar 11 2012
prev sibling next sibling parent reply Alex Rønne Petersen <xtzgzorex gmail.com> writes:
On 09-03-2012 23:32, Walter Bright wrote:
 This statement is from Linus Torvalds about breaking binary compatibility:

 https://lkml.org/lkml/2012/3/8/495

 While I don't think we need to worry so much at the moment about
 breaking binary compatibility with new D releases, we do have a big
 problem with breaking source code compatibility.

 This is why we need to have a VERY high bar for breaking changes.

If we want to start being able to avoid breaking changes, we *really* need to finally deprecate the stuff that's been slated for deprecation for ages... -- - Alex
Mar 09 2012
next sibling parent Alex Rønne Petersen <xtzgzorex gmail.com> writes:
On 10-03-2012 00:14, H. S. Teoh wrote:
 On Fri, Mar 09, 2012 at 11:46:24PM +0100, Alex Rønne Petersen wrote:
 On 09-03-2012 23:32, Walter Bright wrote:
 This statement is from Linus Torvalds about breaking binary compatibility:

 https://lkml.org/lkml/2012/3/8/495

 While I don't think we need to worry so much at the moment about
 breaking binary compatibility with new D releases, we do have a big
 problem with breaking source code compatibility.

 This is why we need to have a VERY high bar for breaking changes.

If we want to start being able to avoid breaking changes, we *really* need to finally deprecate the stuff that's been slated for deprecation for ages...

Does that include std.stdio and std.stream? When are we expecting std.io to be ready? IMHO, this is one major change that needs to happen sooner rather than later. The current lack of interoperability between std.stdio and std.stream is a big detraction from Phobos' overall quality. T

Well, I was mostly thinking language-wise. But yeah, those too. -- - Alex
Mar 09 2012
prev sibling next sibling parent reply "Jonathan M Davis" <jmdavisProg gmx.com> writes:
On Friday, March 09, 2012 15:14:39 H. S. Teoh wrote:
 On Fri, Mar 09, 2012 at 11:46:24PM +0100, Alex Rønne Petersen wrote:
 On 09-03-2012 23:32, Walter Bright wrote:
This statement is from Linus Torvalds about breaking binary
compatibility:

https://lkml.org/lkml/2012/3/8/495

While I don't think we need to worry so much at the moment about
breaking binary compatibility with new D releases, we do have a big
problem with breaking source code compatibility.

This is why we need to have a VERY high bar for breaking changes.

If we want to start being able to avoid breaking changes, we *really* need to finally deprecate the stuff that's been slated for deprecation for ages...

[...] Does that include std.stdio and std.stream? When are we expecting std.io to be ready? IMHO, this is one major change that needs to happen sooner rather than later. The current lack of interoperability between std.stdio and std.stream is a big detraction from Phobos' overall quality.

Note that he didn't say that we should _never_ make breaking changes but rather that we need to have a very high bar for making such changes. In particular, it's stuff like renaming functions without changing functionality that he's against. If a module really needs a rewrite, then it'll get a rewrite, but we also need to do our best to avoid the need. And in the long run, it will hopefully be incredibly rare that we'll consider replacing modules. But there _are_ a few modules which are going to be replaced or rewritten. It's just that those are the modules that really need it and therefore meet the high bar required to make such changes. - Jonathan M Davis
Mar 09 2012
parent deadalnix <deadalnix gmail.com> writes:
Le 10/03/2012 00:24, Jonathan M Davis a écrit :
 On Friday, March 09, 2012 15:14:39 H. S. Teoh wrote:
 On Fri, Mar 09, 2012 at 11:46:24PM +0100, Alex Rønne Petersen wrote:
 On 09-03-2012 23:32, Walter Bright wrote:
 This statement is from Linus Torvalds about breaking binary
 compatibility:

 https://lkml.org/lkml/2012/3/8/495

 While I don't think we need to worry so much at the moment about
 breaking binary compatibility with new D releases, we do have a big
 problem with breaking source code compatibility.

 This is why we need to have a VERY high bar for breaking changes.

If we want to start being able to avoid breaking changes, we *really* need to finally deprecate the stuff that's been slated for deprecation for ages...

[...] Does that include std.stdio and std.stream? When are we expecting std.io to be ready? IMHO, this is one major change that needs to happen sooner rather than later. The current lack of interoperability between std.stdio and std.stream is a big detraction from Phobos' overall quality.

Note that he didn't say that we should _never_ make breaking changes but rather that we need to have a very high bar for making such changes. In particular, it's stuff like renaming functions without changing functionality that he's against.

On that note, having a consistent naming convention is an issue of first importance. In the topic in question I argued for a particular convention, but let's look at the larger picture and how it relates to this discussion. Whatever the naming convention is, it is mandatory to have one. So at some point, renaming is going to happen, or Phobos will become completely opaque as it grows. Renaming just one function isn't important enough to justify breaking compatibility. Having a consistent naming convention is.
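As a sketch of how such a convention change can be rolled out without breaking code outright (hypothetical names, not an actual Phobos rename), D lets the old spelling live on as a deprecated alias during the migration period:

```d
import std.stdio;

// New, convention-following name.
string formatGreeting(string name)
{
    return "hello, " ~ name;
}

// Old spelling kept alive through the deprecation period; callers get a
// deprecation message at compile time instead of an immediate break.
deprecated("use formatGreeting instead")
alias format_greeting = formatGreeting;

void main()
{
    // Old code still compiles (with a message), giving users time to migrate.
    writeln(format_greeting("world"));
}
```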
Mar 11 2012
prev sibling next sibling parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Fri, 09 Mar 2012 18:14:39 -0500, H. S. Teoh <hsteoh quickfur.ath.cx> wrote:

 On Fri, Mar 09, 2012 at 11:46:24PM +0100, Alex Rønne Petersen wrote:

 On 09-03-2012 23:32, Walter Bright wrote:
This statement is from Linus Torvalds about breaking binary compatibility:
https://lkml.org/lkml/2012/3/8/495

While I don't think we need to worry so much at the moment about
breaking binary compatibility with new D releases, we do have a big
problem with breaking source code compatibility.

This is why we need to have a VERY high bar for breaking changes.

If we want to start being able to avoid breaking changes, we *really* need to finally deprecate the stuff that's been slated for deprecation for ages...

Does that include std.stdio and std.stream? When are we expecting std.io to be ready?

Sadly, I have no guarantees for when it will be ready. The rewrite is mostly in place; what I'm struggling with is how to make it backwards compatible with std.stdio. Specifically, I think I need to rewrite std.typecons.RefCounted to be more flexible.
 IMHO, this is one major change that needs to happen sooner rather than later. The current lack of interoperability between std.stdio and std.stream is a big detraction from Phobos' overall quality.

I agree. As I watch other modules which would benefit from the rewrite get more attention, I cringe, hoping that it doesn't introduce something that would necessitate a complete rewrite with the new system (thereby making my case for rewriting std.stdio weaker).

It's my number 1 priority for D. The problem is that D is not my number 1 priority right now :(

If you want to take a look so far (I haven't compiled in a long time since starting the migration to backwards compatibility): https://github.com/schveiguy/phobos/blob/new-io2/std/io.d

I also have a trello card for it...

-Steve
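For readers unfamiliar with the module mentioned above, here is a minimal sketch of the ref-counting idea behind std.typecons.RefCounted (illustrative code only, not the Phobos implementation): copies share one heap-allocated payload, and a count tracks how many owners remain.

```d
// Illustrative only: a stripped-down ref-counted wrapper in the spirit
// of std.typecons.RefCounted, not its actual implementation.
struct MyRefCounted(T)
{
    private static struct Impl { T payload; size_t count; }
    private Impl* impl;

    this(T value) { impl = new Impl(value, 1); }

    this(this)                      // postblit: each copy bumps the count
    {
        if (impl) ++impl.count;
    }

    ~this()                         // last owner drops the payload
    {
        if (impl && --impl.count == 0)
            impl = null;            // let the GC reclaim it; real code may free eagerly
    }

    ref T get() { return impl.payload; }
}

void main()
{
    auto a = MyRefCounted!int(41);
    auto b = a;          // a and b now share one payload
    b.get() = 42;
    assert(a.get() == 42);
}
```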
Mar 09 2012
next sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 3/9/2012 4:02 PM, Andrej Mitrovic wrote:
 Linus would probably hate D just as much as he hates C++. :p

It's clear to me from Linus' postings that he would not be interested in D. And that's ok. He's doing what works for him, and it's hard to argue with his success at it.
Mar 09 2012
prev sibling parent reply Alex Rønne Petersen <xtzgzorex gmail.com> writes:
On 10-03-2012 01:15, H. S. Teoh wrote:
 On Sat, Mar 10, 2012 at 01:02:35AM +0100, Andrej Mitrovic wrote:
 Linus would probably hate D just as much as he hates C++. :p

Yeah... I can just imagine his eye scanning the description of D and stopping right at the word "GC", and immediately writing a flaming vitriolic post to LKML about how a GC is the absolutely worst thing one could possibly conceive of putting inside a kernel, and that any kernel developer caught flirting with the idea of using D ought to have all kernel patches ignored from that point on. :-) T

In all fairness, a stop-the-world GC in a kernel probably *is* a horrible idea. -- - Alex
Mar 10 2012
parent reply Alex Rønne Petersen <xtzgzorex gmail.com> writes:
On 10-03-2012 10:09, so wrote:
 On Saturday, 10 March 2012 at 08:53:23 UTC, Alex Rønne Petersen wrote:

 In all fairness, a stop-the-world GC in a kernel probably *is* a
 horrible idea.

For us (desktop users), it would not make much of a difference now, would it?

Linux was never intended to be a pure desktop kernel. It's used widely in server and embedded machines. -- - Alex
Mar 10 2012
next sibling parent Alex Rønne Petersen <xtzgzorex gmail.com> writes:
On 10-03-2012 16:28, H. S. Teoh wrote:
 On Sat, Mar 10, 2012 at 04:23:43PM +0100, Adam D. Ruppe wrote:
 On Saturday, 10 March 2012 at 15:19:15 UTC, H. S. Teoh wrote:
 Since when is mouse movement a stop-the-world event on Linux?

It's a hardware interrupt. They all work that way. You have to give a lot of care to handling them very quickly and not letting them stack up (lest the whole system freeze).

Sure, but I've never seen a problem with that. T

A GC world stop is much more severe than a simple interrupt... This comparison is completely insane. -- - Alex
Mar 10 2012
prev sibling next sibling parent reply "Jérôme M. Berger" <jeberger free.fr> writes:

Adam D. Ruppe wrote:
 On Saturday, 10 March 2012 at 15:19:15 UTC, H. S. Teoh wrote:
 Since when is mouse movement a stop-the-world event on Linux?

It's a hardware interrupt. They all work that way. You have to give a lot of care to handling them very quickly and not letting them stack up (lest the whole system freeze).

So? It's not stop-the-world. While one core is handling the interrupt, the other(s) is (are) still running. A stop-the-world GC would need to block all threads on all cores while running.

Jerome

PS: This is not restricted to Linux. Windows, Mac OS X and the *BSDs have the same behaviour.

--
mailto:jeberger free.fr
http://jeberger.free.fr
Jabber: jeberger jabber.fr
Mar 10 2012
parent reply Alex Rønne Petersen <xtzgzorex gmail.com> writes:
On 10-03-2012 18:58, H. S. Teoh wrote:
 On Sat, Mar 10, 2012 at 06:49:02PM +0100, so wrote:
 On Saturday, 10 March 2012 at 16:22:41 UTC, H. S. Teoh wrote:

 As for Win95 being unable to keep up with mouse movement... well, to
 be honest I hated Win95 so much that 90% of the time I was in the DOS
 prompt anyway, so I didn't even notice this. If it were truly a
 problem, it's probably a sign of poor hardware interrupt handling
 (interrupt handler is taking too long to process events). But I
 haven't seen this myself either.

The design of input handling, the theoretical part, is irrelevant. I was solely talking about how they do it in practice. OSs are simply unresponsive, and on Linux it is more severe. If I am having this issue in practice, it doesn't matter whether it was the GC lock or another failure to handle input.

Then you must be running a very different Linux from the one I use. In my experience, it's Windows that's an order of magnitude less responsive due to constant HD thrashing (esp. on bootup, and then periodically thereafter) and too much eye-candy.

This. On the other hand, OS X has all the eye candy and is still extremely responsive. ;)
 (Then again, I don't use graphics-heavy UIs... on Linux you can turn
 most of it off, and I do, but on Windows you have no choice. So perhaps
 it's more a measure of how I configured my system than anything else. I
 tried doing this in Windows once, and let's just say that I'll never,
 ever, even _dream_ of attempting it again, it was that painful.)


 T

-- - Alex
Mar 10 2012
next sibling parent reply Alex Rønne Petersen <xtzgzorex gmail.com> writes:
On 10-03-2012 19:54, H. S. Teoh wrote:
 On Sat, Mar 10, 2012 at 07:44:10PM +0100, Alex Rønne Petersen wrote:
 On 10-03-2012 18:58, H. S. Teoh wrote:
 On Sat, Mar 10, 2012 at 06:49:02PM +0100, so wrote:


 The design of input handling, the theoretical part, is irrelevant. I was
 solely talking about how they do it in practice. OSs are simply
 unresponsive, and on Linux it is more severe. If I am having this
 issue in practice, it doesn't matter whether it was the GC lock or
 another failure to handle input.

Then you must be running a very different Linux from the one I use. In my experience, it's Windows that's an order of magnitude less responsive due to constant HD thrashing (esp. on bootup, and then periodically thereafter) and too much eye-candy.

This. On the other hand, OS X has all the eye candy and is still extremely responsive. ;)

But if I wanted eye candy, I'd be using Windows in the first place. :-) T

Personally I'm all for OS X; it's a good UI on top of a Unix shell - what's not to love? But I don't intend to start an OS war or anything here... :P -- - Alex
Mar 10 2012
parent "Nick Sabalausky" <a a.a> writes:
"Alex Rnne Petersen" <xtzgzorex gmail.com> wrote in message 
news:jjg8e8$46e$1 digitalmars.com...
 Personally I'm all for OS X; it's a good UI

Compared to CDE, yes.
 on top of a Unix shell - what's not to love?

 But I don't intend to start an OS war or anything here... :P

Oh, it's ON! j/k ;)
Mar 10 2012
prev sibling parent reply "Nick Sabalausky" <a a.a> writes:
"Alex Rnne Petersen" <xtzgzorex gmail.com> wrote in message 
news:jjg7dq$24q$1 digitalmars.com...
 On 10-03-2012 18:58, H. S. Teoh wrote:
 Then you must be running a very different Linux from the one I use. In
 my experience, it's Windows that's an order of magnitude less responsive
 due to constant HD thrashing (esp. on bootup, and then periodically
 thereafter) and too much eye-candy.

This. On the other hand, OS X has all the eye candy and is still extremely responsive. ;)

That's because they cram [their] hardware upgrades down your throat every couple years.
Mar 10 2012
next sibling parent reply Alex Rønne Petersen <xtzgzorex gmail.com> writes:
On 10-03-2012 20:23, Nick Sabalausky wrote:
 "Alex Rnne Petersen"<xtzgzorex gmail.com>  wrote in message
 news:jjg7dq$24q$1 digitalmars.com...
 On 10-03-2012 18:58, H. S. Teoh wrote:
 Then you must be running a very different Linux from the one I use. In
 my experience, it's Windows that's an order of magnitude less responsive
 due to constant HD thrashing (esp. on bootup, and then periodically
 thereafter) and too much eye-candy.

This. On the other hand, OS X has all the eye candy and is still extremely responsive. ;)

That's because they cram [their] hardware upgrades down your throat every couple years.

No one forces you to upgrade. -- - Alex
Mar 10 2012
parent "Nick Sabalausky" <a a.a> writes:
"Alex Rnne Petersen" <xtzgzorex gmail.com> wrote in message 
news:jjgb5l$94f$1 digitalmars.com...
 On 10-03-2012 20:23, Nick Sabalausky wrote:
 "Alex Rnne Petersen"<xtzgzorex gmail.com>  wrote in message
 news:jjg7dq$24q$1 digitalmars.com...
 On 10-03-2012 18:58, H. S. Teoh wrote:
 Then you must be running a very different Linux from the one I use. In
 my experience, it's Windows that's an order of magnitude less 
 responsive
 due to constant HD thrashing (esp. on bootup, and then periodically
 thereafter) and too much eye-candy.

This. On the other hand, OS X has all the eye candy and is still extremely responsive. ;)

That's because they cram [their] hardware upgrades down your throat every couple years.

No one forces you to upgrade.

That's true. They just say "You *could* stick with your ancient two-year-old machine...You'll be shit out of luck when you need to install anything...but yea, we'll *cough* 'let' *cough* you do it...hee hee hee."
Mar 10 2012
prev sibling next sibling parent Jonathan M Davis <jmdavisProg gmx.com> writes:
On Saturday, March 10, 2012 20:48:05 Alex Rønne Petersen wrote:
 No one forces you to upgrade.

What, you've never had the Apple police come to your door and force a new computer on you at gunpoint? ;) - Jonathan M Davis
Mar 10 2012
prev sibling next sibling parent "Nick Sabalausky" <a a.a> writes:
"Derek" <ddparnell bigpond.com> wrote in message 
news:op.wazmllu534mv3i red-beast...
 Ugh. If the authors of a GUI program can't be bothered to put an
 option in their own options menus, then that option may as well not
 exist. Why can't they learn that? I searched every inch of Opera's
 options screens and never found *any* mention or reference to any
 "Disable AutoUpdate" or "opera:config". What the fuck did they expect?
 Clairvoyance? Omniscience?


I found it in a minute. First I tried opera help and it directed me to details about auto-update, which showed how to disable it. It is in the normal UI place for such stuff. Tools -> Preferences -> Advanced -> Security -> Auto-Update.

They stuck it under "Security"? No wonder I couldn't find it. That's like putting "blue" under "shapes". :/
Mar 10 2012
prev sibling next sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 3/10/2012 1:20 PM, H. S. Teoh wrote:
 It's no fun at all if you had to wait 2 hours just to
 find out you screwed up some parameters in your test render. Imagine if
 you had to wait 2 hours to know the result of every 1 line code change.

2 hours? Man, you got good service. When I submitted my punched card decks, I'd be lucky to get a result the next day! (Yes, I did learn to program using punch cards. And to be fair, the programs were trivial compared with the behemoths we write today.)
Mar 10 2012
prev sibling parent "Nick Sabalausky" <a a.a> writes:
"so" <so so.so> wrote in message 
news:pzghdzojddybajuguxwa forum.dlang.org...
 On Saturday, 10 March 2012 at 19:54:13 UTC, Jonathan M Davis wrote:

 LOL. I'm the complete opposite. I seem to end up upgrading my computer 
 every 2
 or 3 years. I wouldn't be able to stand being on an older computer that 
 long.
 I'm constantly annoyed by how slow my computer is no matter how new it 
 is.

No matter how much hardware you throw at it, somehow it gets slower and slower. New hardware can't keep up with the (ever increasing) amount of badly written software. http://www.agner.org/optimize/blog/read.php?i=9

That is a *FANTASTIC* article. Completely agree, and it's very well-written.

That's actually one of the reasons I like to *not* use higher-end hardware. Every programmer in the world, no exceptions, has a natural tendency to target the hardware they're developing on. If you're developing on high-end hardware, your software is likely to end up requiring high-end hardware even without your noticing. If you're developing on lower-end hardware, your software is going to run well on fucking *everything*. Similar thing for server software: if you're developing on a low-end local machine, it's going to run that much better under heavier loads.

I think it's a shame that companies hand out high-end hardware to their developers like it was candy. There's no doubt in my mind that's significantly contributed to the amount of bloatware out there.
Mar 11 2012
prev sibling next sibling parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Sat, Mar 10, 2012 at 07:44:10PM +0100, Alex Rønne Petersen wrote:
 On 10-03-2012 18:58, H. S. Teoh wrote:
On Sat, Mar 10, 2012 at 06:49:02PM +0100, so wrote:


The design of input handling, the theoretical part, is irrelevant. I was
solely talking about how they do it in practice. OSs are simply
unresponsive, and on Linux it is more severe. If I am having this
issue in practice, it doesn't matter whether it was the GC lock or
another failure to handle input.

Then you must be running a very different Linux from the one I use. In my experience, it's Windows that's an order of magnitude less responsive due to constant HD thrashing (esp. on bootup, and then periodically thereafter) and too much eye-candy.

This. On the other hand, OS X has all the eye candy and is still extremely responsive. ;)

But if I wanted eye candy, I'd be using Windows in the first place. :-) T -- They pretend to pay us, and we pretend to work. -- Russian saying
Mar 10 2012
prev sibling parent "Tove" <tove fransson.se> writes:
On Saturday, 10 March 2012 at 19:01:29 UTC, Alex Rønne Petersen 
wrote:
 Personally I'm all for OS X; it's a good UI on top of a Unix 
 shell - what's not to love?

 But I don't intend to start an OS war or anything here... :P

On "paper"(based on features) OS X has been my first OS of choice since the day it was launched... yet I never once tried it, as there are no sane hardware options. :( Since I require a Discrete Graphics Card, "Mac Pro" is the only choice available, but it's a workstation class computer, however considering I don't have any mission critical requirements for my home computer... the 100% price premium is not justified.
Mar 10 2012
prev sibling next sibling parent Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
Linus would probably hate D just as much as he hates C++. :p
Mar 09 2012
prev sibling next sibling parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Sat, Mar 10, 2012 at 01:02:35AM +0100, Andrej Mitrovic wrote:
 Linus would probably hate D just as much as he hates C++. :p

Yeah... I can just imagine his eye scanning the description of D and stopping right at the word "GC", and immediately writing a flaming vitriolic post to LKML about how a GC is the absolutely worst thing one could possibly conceive of putting inside a kernel, and that any kernel developer caught flirting with the idea of using D ought to have all kernel patches ignored from that point on. :-) T -- Amateurs built the Ark; professionals built the Titanic.
Mar 09 2012
prev sibling next sibling parent "so" <so so.so> writes:
On Saturday, 10 March 2012 at 00:02:44 UTC, Andrej Mitrovic wrote:
 Linus would probably hate D just as much as he hates C++. :p

Rather than using one's influence to make a better language (C), it is much easier to bitch about attempts made by others.
Mar 09 2012
prev sibling next sibling parent reply "so" <so so.so> writes:
On Saturday, 10 March 2012 at 08:53:23 UTC, Alex Rønne Petersen 
wrote:

 In all fairness, a stop-the-world GC in a kernel probably *is* 
 a horrible idea.

For us (desktop users), it would not make much of a difference now, would it?
Mar 10 2012
parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Sat, Mar 10, 2012 at 07:23:11PM +0100, so wrote:
 On Saturday, 10 March 2012 at 17:51:28 UTC, H. S. Teoh wrote:

Then again, I never believed in the desktop metaphor, and have never
seriously used Gnome or KDE or any of that fluffy stuff. I was on
VTWM until I decided ratpoison (a mouseless WM) better suited the way
I worked.

I am also using light window managers; most of the time only tmux and gvim are running. I tried many WMs, but if you use one frequently and don't like falling back to Windows and such, you need a WM that works seamlessly with GUIs. Gimp is one. (You might not believe in the desktop, but how would you use a program like Gimp?) Most of the tiling WMs suck at handling that kind of thing. I'm using xmonad now; at least it has a little better support.

I don't use tiling WMs. And frankly, Gimp's multi-window interface (or OpenOffice, I mean, LibreOffice, for that matter) is very annoying. That's why I don't use gimp very much. I just use command-line imagemagick tools to do stuff. And when I need to generate complex images, I use povray. :-P (Or write my own image generating algos.) But I don't do much fancy stuff with images anyway, otherwise I would've figured out a way to make gimp work nicely.

But on the point of WMs, the only *real* GUI app that I use regularly is the browser. (And Skype, only because the people I want to talk to are on the other side of the world and they only have Skype. But this is only once a week as opposed to every day.) I pull up OpenOffice / LibreOffice every now and then, under protest, when it's *absolutely* necessary. Pretty much everything else I do in the terminal. So I don't really use any "desktop" features at all anyway. That's why I like ratpoison: maximize everything, no overlapping/tiling windows, and keyboard controls for everything.

T

--
Real Programmers use "cat > a.out".
Mar 10 2012
prev sibling parent "Kagamin" <spam here.lot> writes:
On Saturday, 10 March 2012 at 08:53:23 UTC, Alex Rønne Petersen 
wrote:
 In all fairness, a stop-the-world GC in a kernel probably *is* 
 a horrible idea.

Doesn't kernel always work in a stop-the-world mode?
Apr 03 2012
prev sibling next sibling parent reply "Nick Sabalausky" <a a.a> writes:
"Walter Bright" <newshound2 digitalmars.com> wrote in message 
news:jje0er$24mb$1 digitalmars.com...
 This statement is from Linus Torvalds about breaking binary compatibility:

 https://lkml.org/lkml/2012/3/8/495

 While I don't think we need to worry so much at the moment about breaking 
 binary compatibility with new D releases, we do have a big problem with 
 breaking source code compatibility.

 This is why we need to have a VERY high bar for breaking changes.

Freezing things against breaking changes is all fine and good, but NOT before they've reached a point where they're good enough to be frozen. Premature freezing is how you create cruft and other such shit. Let's not go jumping any guns here.
Mar 09 2012
next sibling parent reply "Nick Sabalausky" <a a.a> writes:
"Nick Sabalausky" <a a.a> wrote in message 
news:jje2cg$27tg$1 digitalmars.com...
 "Walter Bright" <newshound2 digitalmars.com> wrote in message 
 news:jje0er$24mb$1 digitalmars.com...
 This statement is from Linus Torvalds about breaking binary 
 compatibility:

 https://lkml.org/lkml/2012/3/8/495

 While I don't think we need to worry so much at the moment about breaking 
 binary compatibility with new D releases, we do have a big problem with 
 breaking source code compatibility.

 This is why we need to have a VERY high bar for breaking changes.

Freezing things against breaking changes is all fine and good, but NOT before they've reached a point where they're good enough to be frozen. Premature freezing is how you create cruft and other such shit. Let's not go jumping any guns here.

Keep in mind, too, that Linux has decades of legacy and millions of users. That's a *very* different situation from Phobos. Apples and oranges.
Mar 09 2012
next sibling parent Timon Gehr <timon.gehr gmx.ch> writes:
On 03/10/2012 12:09 AM, Nick Sabalausky wrote:
 "Nick Sabalausky"<a a.a>  wrote in message
 news:jje2cg$27tg$1 digitalmars.com...
 "Walter Bright"<newshound2 digitalmars.com>  wrote in message
 news:jje0er$24mb$1 digitalmars.com...
 This statement is from Linus Torvalds about breaking binary
 compatibility:

 https://lkml.org/lkml/2012/3/8/495

 While I don't think we need to worry so much at the moment about breaking
 binary compatibility with new D releases, we do have a big problem with
 breaking source code compatibility.

 This is why we need to have a VERY high bar for breaking changes.

Freezing things against breaking changes is all fine and good, but NOT before they've reached a point where they're good enough to be frozen. Premature freezing is how you create cruft and other such shit. Let's not go jumping any guns here.


+1.
 Keep in mind, too, that Linux has decades of legacy and millions of users.
 That's a *very* different situation from Phobos. Apples and oranges.

+1.
Mar 09 2012
prev sibling next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 3/9/2012 3:09 PM, Nick Sabalausky wrote:
 Keep in mind, too, that Linux has decades of legacy and millions of users.
 That's a *very* different situation from Phobos. Apples and oranges.

Linux has had a habit of not breaking existing code from decades ago. I think that is one reason why it has millions of users. Remember, every time you break existing code you reset your user base back to zero. I'm *still* regularly annoyed by the writefln => writeln change from D1 to D2, and I agreed to that change. Grrrr.
Mar 09 2012
next sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 3/9/12 8:40 PM, Adam D. Ruppe wrote:
 (For example, take dmd out of the box on CentOS. Won't
 work.)

Why? I have CentOS at work and it seems to work. Andrei
Mar 09 2012
prev sibling next sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 3/9/12 8:46 PM, Adam D. Ruppe wrote:
 We might have a stable language, but if the library doesn't
 do the same, we'll never be Windows.

I hear ya. Andrei
Mar 09 2012
prev sibling next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 3/9/2012 8:40 PM, Adam D. Ruppe wrote:
 On Windows though, even if you relied on bugs twenty
 years ago, they bend over backward to keep your app
 functioning. It is really an amazing feat they've
 accomplished, both from technical and business
 perspectives, in doing this while still moving
 forward.

I agree that Windows does a better job of it than Linux. MS really does pour enormous effort into backwards compatibility. You could legitimately call it heroic - and it has paid off for MS.
Mar 09 2012
next sibling parent Paulo Pinto <pjmlp progtools.org> writes:
Am 10.03.2012 06:47, schrieb Walter Bright:
 On 3/9/2012 8:40 PM, Adam D. Ruppe wrote:
 On Windows though, even if you relied on bugs twenty
 years ago, they bend over backward to keep your app
 functioning. It is really an amazing feat they've
 accomplished, both from technical and business
 perspectives, in doing this while still moving
 forward.

I agree that Windows does a better job of it than Linux. MS really does pour enormous effort into backwards compatibility. You could legitimately call it heroic - and it has paid off for MS.

For those that don't know it, this is a great blog about Microsoft's efforts in backwards compatibility:

http://blogs.msdn.com/b/oldnewthing/

I must say I used to be a bit anti-Microsoft while I was at university and discovered UNIX. But later on, I got to work for a couple of Fortune 500 companies and got to understand why companies the size of Microsoft are as they are.

-- Paulo
Mar 10 2012
prev sibling parent deadalnix <deadalnix gmail.com> writes:
Le 10/03/2012 06:47, Walter Bright a écrit :
 On 3/9/2012 8:40 PM, Adam D. Ruppe wrote:
 On Windows though, even if you relied on bugs twenty
 years ago, they bend over backward to keep your app
 functioning. It is really an amazing feat they've
 accomplished, both from technical and business
 perspectives, in doing this while still moving
 forward.

I agree that Windows does a better job of it than Linux. MS really does pour enormous effort into backwards compatibility. You could legitimately call it heroic - and it has paid off for MS.

Microsoft being incompatible with mostly everything else, they sure can't afford not to be compatible with themselves. This is of strategic importance for Microsoft. I don't think this is as important for us as it is for Microsoft (even if it still matters).
Mar 11 2012
prev sibling next sibling parent reply "Nick Sabalausky" <a a.a> writes:
"Walter Bright" <newshound2 digitalmars.com> wrote in message 
news:jjelk7$7fm$1 digitalmars.com...
 On 3/9/2012 3:09 PM, Nick Sabalausky wrote:
 Keep in mind, too, that Linux has decades of legacy and millions of 
 users.
 That's a *very* different situation from Phobos. Apples and oranges.

Linux has had a habit of not breaking existing code from decades ago. I think that is one reason why it has millions of users. Remember, every time you break existing code you reset your user base back to zero. I'm *still* regularly annoyed by the writefln => writeln change in D1 to D2, and I agreed to that change. Grrrr.

Are you kidding me? I'm *thrilled* with how much of an improvement writeln is *every time I use it*. Seriously, how the hell did writeln ever hurt *anyone*? We're bitching about trivialities here.
Mar 09 2012
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 3/9/2012 10:43 PM, Nick Sabalausky wrote:
 "Walter Bright"<newshound2 digitalmars.com>  wrote in message
 I'm *still* regularly annoyed by the writefln =>  writeln change in D1 to
 D2, and I agreed to that change. Grrrr.

Are you kidding me? I'm *thrilled* with how much of an improvement writeln is *every time I use it*. Seriously, how the hell did writeln ever hurt *anyone*? We're bitching about trivialities here.

I'm not complaining about the functionality improvement - I think that's great. I'm talking about the name change. It's far and away the most common thing I have to edit when moving code from D1 <=> D2.
Mar 10 2012
next sibling parent reply "Nick Sabalausky" <a a.a> writes:
"Walter Bright" <newshound2 digitalmars.com> wrote in message 
news:jjg7s4$24p$2 digitalmars.com...
 On 3/9/2012 10:43 PM, Nick Sabalausky wrote:
 "Walter Bright"<newshound2 digitalmars.com>  wrote in message
 I'm *still* regularly annoyed by the writefln =>  writeln change in D1 
 to
 D2, and I agreed to that change. Grrrr.

Are you kidding me? I'm *thrilled* with how much of an improvement writeln is *every time I use it*. Seriously, how the hell did writeln ever hurt *anyone*? We're bitching about trivialities here.

I'm not complaining about the functionality improvement - I think that's great. I'm talking about the name change. It's far and away the most common thing I have to edit when moving code from D1 <=> D2.

I still like the name better. Do we really need an alphabet soup appended to "write" just to spit out one string? It's really not a name change at all though: It's a new function. writefln is still there with the same old functionality (which is good, it *is* a good function). It's just that writeln has been added and just happens to be better in every way for the majority of use-cases.
Mar 10 2012
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 3/10/2012 11:31 AM, Nick Sabalausky wrote:
 I still like the name better. Do we really need an alphabet soup appended to
 "write" just to spit out one string?

It's not about whether it was a better name. It was about having to constantly edit code.
Mar 10 2012
next sibling parent reply "Lars T. Kyllingstad" <public kyllingen.net> writes:
On 10/03/12 20:38, Walter Bright wrote:
 On 3/10/2012 11:31 AM, Nick Sabalausky wrote:
 I still like the name better. Do we really need an alphabet soup
 appended to
 "write" just to spit out one string?

It's not about whether it was a better name. It was about having to constantly edit code.

But... writefln is still there. Is it incompatible with the D1 one in some way? -Lars
Mar 10 2012
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 3/10/2012 2:44 PM, Lars T. Kyllingstad wrote:
 But... writefln is still there. Is it incompatible with the D1 one in some way?

Try writefln(3);
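A minimal sketch of the incompatibility, assuming D2's std.stdio (illustrative only: writeln accepts arbitrary arguments, while D2's writefln requires a format string as its first argument, unlike D1's):

```d
import std.stdio;

void main()
{
    writeln(3);         // fine in D2: prints the argument directly
    writefln("%s", 3);  // fine: explicit format string first
    // writefln(3);     // rejected in D2: the first argument must be a format string
}
```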
Mar 11 2012
parent reply "Nick Sabalausky" <a a.a> writes:
"Walter Bright" <newshound2 digitalmars.com> wrote in message 
news:jjho0a$2qdu$1 digitalmars.com...
 On 3/10/2012 2:44 PM, Lars T. Kyllingstad wrote:
 But... writefln is still there. Is it incompatible with the D1 one in 
 some way?

Try writefln(3);

Having been accustomed to C's printf, I don't think it would have ever even occurred to me to try that. (FWIW/YMMV)
Mar 11 2012
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 3/11/2012 4:20 AM, Matej Nanut wrote:
 I've been using an EeePC for everything for the past 2.5 years and
 until now, I could cope.

You're right that there's a downside to providing your developers the hottest machines available - their code tends to be a dog on the machines your customer has. I have an EeePC, but I find I could not cope with the tiny screen and tiny keyboard :-)
Mar 11 2012
next sibling parent "Nick Sabalausky" <a a.a> writes:
"Walter Bright" <newshound2 digitalmars.com> wrote in message 
news:jjj0je$2bse$2 digitalmars.com...
 On 3/11/2012 4:20 AM, Matej Nanut wrote:
 I've been using an EeePC for everything for the past 2.5 years and
 until now, I could cope.

You're right that there's a downside to providing your developers the hottest machines available - their code tends to be a dog on the machines your customer has. I have an EeePC, but I find I could not cope with the tiny screen and tiny keyboard :-)

I've often found the keys on a normal-sized keyboard to be a bit small. (I have somewhat large hands - I think the original XBox1 "Duke" controller is the only good-sized game controller.) I spent some time trying to find a keyboard with slightly larger than normal keys, but couldn't come up with anything. OTOH, I may have just been being sloppy with my keypresses: I'm not hitting multiple keys anymore like I had been for a while.
Mar 11 2012
prev sibling parent reply deadalnix <deadalnix gmail.com> writes:
Le 11/03/2012 21:06, Walter Bright a écrit :
 On 3/11/2012 4:20 AM, Matej Nanut wrote:
 I've been using an EeePC for everything for the past 2.5 years and
 until now, I could cope.

You're right that there's a downside to providing your developers the hottest machines available - their code tends to be a dog on the machines your customer has.

I think a better solution is to include expected performance in the user stories and add it to the test suite. Devs can enjoy a powerful machine without the risk of shipping a resource monster as the final executable.
Mar 11 2012
parent "Nick Sabalausky" <a a.a> writes:
"H. S. Teoh" <hsteoh quickfur.ath.cx> wrote in message 
news:mailman.517.1331521772.4860.digitalmars-d puremagic.com...
 On Sun, Mar 11, 2012 at 11:38:12PM +0100, deadalnix wrote:
 I think a better solution is to include expected performance in the
 user stories and add it to the test suite. Devs can enjoy a powerful
 machine without the risk of shipping a resource monster as the final
 executable.

Even better, have some way of running your program with artificially reduced speed & resources, so that you can (sortof) see how your program degrades with lower-powered systems. Perhaps run the program inside a VM or emulator?

I don't think such things would ever truly work, except maybe in isolated cases. It's an issue of dogfooding. But then these "eat your cake and then still have it" strategies ultimately mean that you're *not* actually doing the dogfooding, just kinda pretending to. Instead, you'd be eating steak seven days a week, occasionally do a half-bite of dogfooding, and immediately wash it down with...I dunno, name some fancy expensive drink, I don't know my wines ;)
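For what it's worth, the reduced-resources idea from the quoted post can be crudely approximated with stock tools; a sketch, assuming Linux/bash, where ./myapp is a hypothetical stand-in for the program under test:

```shell
# Cap this shell's virtual memory so the app sees a "small" machine.
ulimit -v $((256 * 1024))   # 256 MiB cap (ulimit -v takes KiB)
ulimit -v                   # prints the active cap: 262144

# Optionally pin to one core at low priority to fake a slow CPU (Linux):
# taskset -c 0 nice -n 19 ./myapp
```

This only degrades memory and CPU share, not I/O latency or cache size, so it is a rough approximation at best.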
Mar 11 2012
prev sibling parent "Nick Sabalausky" <a a.a> writes:
"Simen Kjærås" <simen.kjaras gmail.com> wrote in message 
news:op.wa28iobk0gpyof biotronic.lan...
 On Sun, 11 Mar 2012 21:07:06 +0100, Walter Bright 
 <newshound2 digitalmars.com> wrote:

 On 3/11/2012 12:32 PM, Nick Sabalausky wrote:
 I'm convinced that colleges in general produce very bad programmers. The
 good programmers who have degrees, for the most part (I'm sure there are
 rare exceptions), are the ones who learned on their own, not in a 
 classroom.

Often the best programmers seem to have physics degrees!

Eugh. Physicist programmers tend to use one-letter variable names in my experience. Makes for... interesting reading of their code.

D is great for physics programming. Now you can have much, much more than 26 variables :)
Mar 12 2012
prev sibling next sibling parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Mon, Mar 12, 2012 at 03:15:32PM -0400, Nick Sabalausky wrote:
 "H. S. Teoh" <hsteoh quickfur.ath.cx> wrote in message 
 news:mailman.531.1331533449.4860.digitalmars-d puremagic.com...

 (And before you shoot me down with "infinite quantities are not
 practical in programming", I'd like to say that certain non-finite
 arithmetic systems actually have real-life consequences in finite
 computations. Look up "Hydra game" sometime. Or "Goldstein
 sequences" if you're into that sorta thing.)


Argh. Epic fail on my part, it's *Goodstein* sequence, not Goldstein.
 Yea, I don't doubt that. While no game programmer, for example, would
 be caught dead having their code crunching calculus computations,
 there are some computations done in games that are obtained in the
 first place by doing some calculus (mostly physics, IIRC). Not exactly
 the same thing, but I get that applicability of theory isn't limited to
 what the computer is actually calculating.

I think the bottom line is that a lot of this stuff needs someone who can explain and teach it in an engaging, interesting way. It's not that the subject matter itself is boring or stupid, but that the teacher failed at his job and so his students find the subject boring and stupid. [...]
 Our course descriptions didn't have much fine print. Just one short
 vaguely-worded paragraph. I probably could have asked around and
 gotten a syllubus from previous semesters, but I didn't learn advanced
 student tricks like that until a few years into college. ;) Plus,
 that's other concerns, like scheduling and requirements. I found that
 a lot of my course selections had to be dictated more by scheduling
 and availability than much anything else.

I guess I was lucky then. There were a couple o' useless mandatory courses I had to take, but for the most part, I got to choose what I wanted. (And then my geeky side took over and I filled up most of my electives with math courses... sigh...) T -- Some days you win; most days you lose.
Mar 12 2012
prev sibling next sibling parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Tue, Mar 13, 2012 at 04:10:20AM +0100, Simen Kjærås wrote:
 On Tue, 13 Mar 2012 03:50:49 +0100, Nick Sabalausky <a a.a> wrote:

D is great for physics programming. Now you can have much, much more
than 26 variables :)

True, though mostly, you'd just change to using Greek letters, right?

And Russian. And extended Latin. And Chinese (try exhausting that one!). And a whole bunch of other stuff that you may not have known even existed.
 Finally we can use θ for angles, alias ulong ℕ...

+1. Come to think of it, I wonder if it's possible to write a large D program using only 1-letter identifiers. After all, Unicode has enough alphabetic characters that you could go for a long, long time before you exhausted them all. (The CJK block will be especially resilient to exhaustion.) :-) Worse yet, if you don't have fonts installed for some of the Unicode blocks, you'd just end up with functions and variables that have invisible names (or they all look like a black splotch). So it'll be a bunch of code that reads like black splotch = black splotch ( black splotch ) + black splotch. Ah, the hilarity that will ensue... T -- It's bad luck to be superstitious. -- YHL
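As a quick runnable check of the idea (Python 3 stands in here, since it shares D's acceptance of Unicode letters in identifiers; the variable names are mine, purely illustrative):

```python
import math

# θ as a variable name, exactly the physicists' convention mentioned above.
θ = math.pi / 4
assert abs(math.sin(θ) - math.cos(θ)) < 1e-12  # sin and cos agree at 45 degrees

# A CJK character works too, so the supply of "one-letter" names is vast.
数 = 42
assert 数 == 42
```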
Mar 12 2012
prev sibling parent Simen Kjærås <simen.kjaras gmail.com> writes:
On Tue, 13 Mar 2012 06:45:12 +0100, H. S. Teoh <hsteoh quickfur.ath.cx> wrote:

 On Tue, Mar 13, 2012 at 04:10:20AM +0100, Simen Kjærås wrote:
 On Tue, 13 Mar 2012 03:50:49 +0100, Nick Sabalausky <a a.a> wrote:

D is great for physics programming. Now you can have much, much more
than 26 variables :)

True, though mostly, you'd just change to using Greek letters, right?

 And Russian. And extended Latin. And Chinese (try exhausting that one!).
 And a whole bunch of other stuff that you may not have known even
 existed.

I know Unicode covers a lot more than just Greek. I didn't know the usage of Chinese was very common among physicists, though. :p

 Finally we can use θ for angles, alias ulong ℕ...

 +1. Come to think of it, I wonder if it's possible to write a large D
 program using only 1-letter identifiers. After all, Unicode has enough
 alphabetic characters that you could go for a long, long time before you
 exhausted them all. (The CJK block will be especially resilient to
 exhaustion.) :-)

63,207[1] designated characters thus far[2]. Add in module names and other 'namespaces', and I'd say that should be no problem at all. As long as your head doesn't explode, that is.

[1] http://unicode.org/alloc/CurrentAllocation.html
[2] Yeah, not all of those are valid identifiers.
Mar 13 2012
prev sibling next sibling parent reply deadalnix <deadalnix gmail.com> writes:
Le 10/03/2012 20:38, Walter Bright a écrit :
 On 3/10/2012 11:31 AM, Nick Sabalausky wrote:
 I still like the name better. Do we really need an alphabet soup
 appended to
 "write" just to spit out one string?

It's not about whether it was a better name. It was about having to constantly edit code.

I do think the name itself isn't the problem. The problem is about consistency, and it will persist as long as we don't agree on a naming guideline for Phobos.

Changing a name just for the sake of changing it isn't worth the cost, unless the original name is horribly misleading - a rare case. But getting the naming convention consistent is of much greater importance, and justifies breaking code.
Mar 11 2012
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 3/11/2012 8:34 AM, deadalnix wrote:
 I do think better name isn't the problem. The problem is about consistency, and
 will persist as long as we don't agree on a guideline on that in phobos.

 Changing a name just for changing it doesn't worth the cost, unless the
original
 name is horribly misleading - rare case. But getting the naming convention
 consistent is of much greater importance, and justify breaking code.

Frankly, I think naming conventions are overrated. The problem is that, as the sec vs seconds debate shows, there is not a correct answer. It becomes a bikeshed issue. There are a lot of considerations for a name, usually conflicting with each other. To set rules in concrete and follow them no matter what is a formula for silly results. I'm not suggesting no naming convention. Naming conventions are good. But they don't trump everything else in importance, not even close. And sometimes, a name change can be a huge win - the invariant=>immutable one is an example. But I think that's an exceptional case, not a rule.
Mar 11 2012
next sibling parent reply bearophile <bearophileHUGS lycos.com> writes:
Walter:

 And sometimes, a name change can be a huge win - the invariant=>immutable one
is 
 an example. But I think that's an exceptional case, not a rule.

I was among the ones that have asked for that name change. But "immutable" is a quite long word. Now I think the "val" used by Scala is better, it uses less space for something I use often enough. "imm" is another option, but it looks less nice :-) Bye, bearophile
Mar 11 2012
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 3/11/2012 2:51 PM, bearophile wrote:
 And sometimes, a name change can be a huge win - the invariant=>immutable
 one is an example. But I think that's an exceptional case, not a rule.

I was among the ones that have asked for that name change. But "immutable" is a quite long word. Now I think the "val" used by Scala is better, it uses less space for something I use often enough. "imm" is another option, but it looks less nice :-)

The reason we went with "immutable" is for any other name, I'd be constantly explaining: "xyzzy" means immutable And I did just that for "invariant". Over and over and over. People immediately get what "immutable" means, like for no other name. So consider "immutable" a labor saving device for me.
Mar 11 2012
parent reply deadalnix <deadalnix gmail.com> writes:
Le 11/03/2012 23:12, Walter Bright a écrit :
 On 3/11/2012 2:51 PM, bearophile wrote:
 And sometimes, a name change can be a huge win - the
 invariant=>immutable
 one is an example. But I think that's an exceptional case, not a rule.

I was among the ones that have asked for that name change. But "immutable" is a quite long word. Now I think the "val" used by Scala is better, it uses less space for something I use often enough. "imm" is another option, but it looks less nice :-)

The reason we went with "immutable" is for any other name, I'd be constantly explaining: "xyzzy" means immutable And I did just that for "invariant". Over and over and over. People immediately get what "immutable" means, like for no other name. So consider "immutable" a labor saving device for me.

We have the same phenomenon with dur and the return-type qualifier (i.e.: why doesn't const int* fun() compile the way people expect? Because const qualifies the function, not the return type). Both are recurring questions and so should be as important as immutable. But both are major breaking changes.
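A hedged illustration of that const point (struct and function names are mine, purely for demonstration): in D, a leading const is a storage class on the function, so it applies to the hidden this parameter, and you need parentheses to qualify the return type itself.

```d
struct S
{
    int x;

    // 'const' here qualifies the function (its hidden 'this'), NOT the return type:
    const int* f() { return null; }   // same as: int* f() const

    // The parenthesized form qualifies the return type instead:
    const(int)* g() { return &x; }    // mutable pointer to const int
}
```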
Mar 11 2012
parent "Nick Sabalausky" <a a.a> writes:
"deadalnix" <deadalnix gmail.com> wrote in message 
news:jjj907$2t00$1 digitalmars.com...
Le 11/03/2012 23:12, Walter Bright a écrit :
 And I did just that for "invariant". Over and over and over. People
 immediately get what "immutable" means, like for no other name. So
 consider "immutable" a labor saving device for me.

 We have the same phenomenon with dur and the return-type qualifier (i.e.: why doesn't const int* fun() compile the way people expect? Because const qualifies the function, not the return type). Both are recurring questions and so should be as important as immutable. But both are major breaking changes.

I wouldn't call dur->duration a *major* breaking change. First of all, you get a clear compile-time error, not silently changed semantics. Secondly, it's a simple search/replace: s/dur!/duration!/ (Not that I normally do search/replaces *completely* blind and unattended, but it's still trivial.)
Mar 11 2012
prev sibling parent deadalnix <deadalnix gmail.com> writes:
Le 11/03/2012 21:16, Walter Bright a écrit :
 On 3/11/2012 8:34 AM, deadalnix wrote:
 I do think better name isn't the problem. The problem is about
 consistency, and
 will persist as long as we don't agree on a guideline on that in phobos.

 Changing a name just for changing it doesn't worth the cost, unless
 the original
 name is horribly misleading - rare case. But getting the naming
 convention
 consistent is of much greater importance, and justify breaking code.

Frankly, I think naming conventions are overrated. The problem is that, as the sec vs seconds debate shows, there is not a correct answer. It becomes a bikeshed issue. There are a lot of considerations for a name, usually conflicting with each other. To set rules in concrete and follow them no matter what is a formula for silly results.

I think this example is very good. The seconds/secs case shows us the importance of getting consistent. Here the problem comes from the fact that some names were abbreviated (msecs, usecs, . . .) and some others weren't (minutes, hours). Now we end up with the tricky case of seconds, because it sits between these 2 worlds and 2 naming conventions. And as we have seen, it confuses people, and there is no good solution now (either duplication of the value, which is never good, or an arbitrary choice of one naming convention). This typically shows us what problems a bad naming convention causes, and how difficult it is to solve afterward, because fixing it breaks compatibility.
 I'm not suggesting no naming convention. Naming conventions are good.
 But they don't trump everything else in importance, not even close.

I have to disagree. They are really important in a large codebase (let's say > 100,000 lines of code). Otherwise people tend not to find modules they can reuse except by knowing the whole codebase. This has the indirect effect of causing needless duplication - with all its known drawbacks - and makes the project more dependent on documentation: more documentation has to be produced, which means an overhead in workload and more trouble when documentation is lacking, outdated, or buggy.
 And sometimes, a name change can be a huge win - the
 invariant=>immutable one is an example. But I think that's an
 exceptional case, not a rule.

IMO, the name in itself isn't that important. The important thing is that things get named in a predictable and simple way.
Mar 11 2012
prev sibling parent Jonathan M Davis <jmdavisProg gmx.com> writes:
On Sunday, March 11, 2012 13:16:44 Walter Bright wrote:
 On 3/11/2012 8:34 AM, deadalnix wrote:
 I do think better name isn't the problem. The problem is about
 consistency, and will persist as long as we don't agree on a guideline on
 that in phobos.
 
 Changing a name just for changing it doesn't worth the cost, unless the
 original name is horribly misleading - rare case. But getting the naming
 convention consistent is of much greater importance, and justify breaking
 code.

Frankly, I think naming conventions are overrated. The problem is that, as the sec vs seconds debate shows, there is not a correct answer. It becomes a bikeshed issue. There are a lot of considerations for a name, usually conflicting with each other. To set rules in concrete and follow them no matter what is a formula for silly results. I'm not suggesting no naming convention. Naming conventions are good. But they don't trump everything else in importance, not even close. And sometimes, a name change can be a huge win - the invariant=>immutable one is an example. But I think that's an exceptional case, not a rule.

Another issue is the question of what you consider consistency to be. On the whole, I really don't see much inconsistent about Phobos' naming at this point. The main problem that it's had is that it hasn't always used camelcasing historically, and a number of functions weren't camelcased like the rest and like we'd decided we wanted our function names and enum values to generally be. That's mostly been fixed. And yet there are still some people complaining about consistency. And to boot, other than the issue of secs vs seconds, I don't know what they even think is inconsistent.

Some very basic conventions (such as the normal casing rules for various types of symbols) are certainly helpful, but you _are_ going to have corner cases, and if we tried to have strict and complicated rules on naming, it would just be frustrating all around. Not to mention, there's always _someone_ who comes up with a reason why they think that a particular name is bad or doesn't fit with the rest.

So, I think that Phobos is mostly consistent with its naming at this point, and aside from some people not liking the name of a function or two, I really don't see what there is to complain about.

- Jonathan M Davis
Mar 11 2012
prev sibling parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Mon, Mar 12, 2012 at 01:48:46AM -0400, Nick Sabalausky wrote:
 "H. S. Teoh" <hsteoh quickfur.ath.cx> wrote in message 
 news:mailman.517.1331521772.4860.digitalmars-d puremagic.com...
 On Sun, Mar 11, 2012 at 11:38:12PM +0100, deadalnix wrote:
 I think a better solution is to include expected performance in the
 user stories and add it to the test suite. Devs can enjoy a powerful
 machine without the risk of shipping a resource monster as the
 final executable.

Even better, have some way of running your program with artificially reduced speed & resources, so that you can (sortof) see how your program degrades with lower-powered systems. Perhaps run the program inside a VM or emulator?

I don't think such things would ever truly work, except maybe in isolated cases. It's an issue of dogfooding. But then these "eat your cake and then still have it" strategies ultimately mean that you're *not* actually doing the dogfooding, just kinda pretending to. Instead, you'd be eating steak seven days a week, occasionally do a half-bite of dogfooding, and immediately wash it down with...I dunno, name some fancy expensive drink, I don't know my wines ;)

Nah, it's like ordering extra large triple steak burger with double-extra cheese, extra bacon, sausage on the side, extra large french fries swimming in grease, and _diet_ coke to go with it. T -- Blunt statements really don't have a point.
Mar 11 2012
prev sibling next sibling parent Jonathan M Davis <jmdavisProg gmx.com> writes:
On Saturday, March 10, 2012 11:56:03 H. S. Teoh wrote:
 On Sat, Mar 10, 2012 at 02:31:53PM -0500, Nick Sabalausky wrote:
 [...]
 
 writefln is still there with the same old functionality (which is
 good, it *is* a good function). It's just that writeln has been added
 and just happens to be better in every way for the majority of
 use-cases.

[...] Strange, I still find myself using writef/writefln very frequently. When you want formatting in your output, printf specs are just sooo convenient. But perhaps it's just a symptom of my having just emerged from the C/C++ world. :-)

It's a question of what you're printing out. Is it more typical to write a string out without needing to construct it from some set of arguments, or is it more common to have to print a string that you've constructed from a set of arguments? It all depends on your code. There's no question that writef and writefln are useful. It's just a matter of what _your_ use cases are which determines whether you use writeln or writefln more. - Jonathan M Davis
Mar 10 2012
prev sibling parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Mon, Mar 12, 2012 at 01:36:06AM -0400, Nick Sabalausky wrote:
 "H. S. Teoh" <hsteoh quickfur.ath.cx> wrote in message 
 news:mailman.510.1331520028.4860.digitalmars-d puremagic.com...

 Personally, I found discrete math to be the easiest class I took since
 kindergarten (*Both* of the times they made me take discrete math.
 Ugh. God that got boring.) It was almost entirely the sorts of things
 that any average coder already understands intuitively. Like
 DeMorgan's: I hadn't known the name "DeMorgan", but just from growing
 up writing "if" statements I had already grokked how it worked and how
 to use it. No doubt in my mind that *all* of us here have grokked it
 (even any of us who might not know it by name) *and* many of the
 coworkers I've had who I'd normally classify as "incompetent VB-loving
 imbiciles".

It's not that I didn't already know most of the stuff intuitively, I found that, in retrospect, having to learn it formally helped to solidify my mental grasp of it, and to be able to analyse it abstractly without being tied to intuition. This later developed into the ability to reason about other stuff in the same way, so you could *derive* new stuff yourself in similar ways.
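The De Morgan identity both posts refer to is the familiar if-condition rewrite; a trivial exhaustive check over the booleans (Python used here just to keep it runnable):

```python
# De Morgan's laws: negating a compound condition flips the connective.
for a in (False, True):
    for b in (False, True):
        assert (not (a and b)) == ((not a) or (not b))
        assert (not (a or b)) == ((not a) and (not b))
```

This is exactly the intuition behind refactoring `if (!(x && y))` into `if (!x || !y)`.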
 Then there was the Pigeonhole principle, which was basically just obvious
 corollaries to preschool-level spatial relations. Etc.  All pretty
 much BASIC-level stuff.

Oh reeeeaally?! Just wait till you learn how the pigeonhole principle allows you to do arithmetic with infinite quantities... ;-) (And before you shoot me down with "infinite quantities are not practical in programming", I'd like to say that certain non-finite arithmetic systems actually have real-life consequences in finite computations. Look up "Hydra game" sometime. Or "Goodstein sequences" if you're into that sorta thing.) [...]
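For the curious, the Goodstein sequences mentioned here (the arithmetic behind the hydra game; "Goldstein" is a common misspelling) can be sketched in a few lines. This is an illustrative Python sketch; the helper names `bump_base` and `goodstein` are made up for this post, not from any library:

```python
def bump_base(n, b):
    """Write n in hereditary base-b notation, then replace every
    occurrence of b (in the base AND in the exponents) with b + 1."""
    result, e = 0, 0
    while n > 0:
        n, digit = divmod(n, b)
        if digit:
            result += digit * (b + 1) ** bump_base(e, b)
        e += 1
    return result

def goodstein(n, terms):
    """First `terms` values of the Goodstein sequence starting at n:
    bump the base, then subtract 1, starting from base 2."""
    seq, base = [n], 2
    while len(seq) < terms and n > 0:
        n = bump_base(n, base) - 1
        base += 1
        seq.append(n)
    return seq

# Tiny cases terminate quickly; starting at 4 the values already explode.
print(goodstein(3, 6))   # [3, 3, 3, 2, 1, 0]
print(goodstein(4, 4))   # [4, 26, 41, 60]
```

Every such sequence provably reaches 0 eventually, but the proof requires transfinite induction (it is not provable in Peano arithmetic), which is exactly the "real-life consequence of non-finite arithmetic" being alluded to.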
 However, I also found that most big-name colleges are geared toward
 producing researchers rather than programmers in the industry.

The colleges I've seen seemed to have an identity crisis in that regard: Sometimes they acted like their role was teaching theory, sometimes they acted like their role was job training/placement, and all the time they were incompetent at both.

In my experience, I found that the quality of a course depends a LOT on the attitude and teaching ability of the professor. I've had courses which were like mind-openers every other class, where you just go "wow, *that* is one heck of a cool algorithm!". Unfortunately, (1) most professors can't teach; (2) they're not *paid* to teach (they're paid to do research), so they regard it as a tedious chore imposed upon them that takes away their time for research. This makes them hate teaching, and so most courses suck. [...]
 I once made the mistake of signing up for a class that claimed to be
 part of the CS department and was titled "Optimization Techniques". I
 thought it was obvious what it was and that it would be a great class
 for me to take.  Turned out to be a class that, realistically,
 belonged in the Math dept and had nothing to do with efficient
 software, even in theory. Wasn't even in the ballpark of Big-O, etc.
 It was linear algebra with large numbers of variables.

Ahhhhahahahahaha... must've been high-dimensional polytope optimization stuff, I'll bet. That stuff *does* have its uses... but yeah, that was a really dumb course title. Another dumb course title that I've encountered was along the lines of "computational theory" where 95% of the course talks about *uncomputable* problems. You'd think they would've named it "*un*computational theory". :-P
 I'm sure it would be great material for the right person, but it
 wasn't remotely what I expected given the name and department of the
 course.  (Actually, similar thing with my High School class of
 "Business Law" - Turned out to have *nothing* to do with business
 whatsoever. Never understood why they didn't just call the class "Law"
 or "Civic Law".) Kinda felt "baited and switched" both times.

That's why I always took the effort to read course descriptions VERY carefully before signing up. It's like the fine print in contracts. You skip over it at your own peril. (Though, that didn't stop me from taking "Number Theory". Or "Set Theory". Both of which went wayyyyyy over my head for the most part.) T -- 2+2=4. 2*2=4. 2^2=4. Therefore, +, *, and ^ are the same operation.
Mar 11 2012
parent "Nick Sabalausky" <a a.a> writes:
"H. S. Teoh" <hsteoh quickfur.ath.cx> wrote in message 
news:mailman.531.1331533449.4860.digitalmars-d puremagic.com...
 On Mon, Mar 12, 2012 at 01:36:06AM -0400, Nick Sabalausky wrote:
 "H. S. Teoh" <hsteoh quickfur.ath.cx> wrote in message
 news:mailman.510.1331520028.4860.digitalmars-d puremagic.com...

 Personally, I found discrete math to be the easiest class I took since
 kindergarten (*Both* of the times they made me take discrete math.
 Ugh. God that got boring.) It was almost entirely the sorts of things
 that any average coder already understands intuitively. Like
 DeMorgan's: I hadn't known the name "DeMorgan", but just from growing
 up writing "if" statements I had already grokked how it worked and how
 to use it. No doubt in my mind that *all* of us here have grokked it
 (even any of us who might not know it by name) *and* many of the
 coworkers I've had who I'd normally classify as "incompetent VB-loving
 imbeciles".

It's not that I didn't already know most of the stuff intuitively,

Didn't mean to imply that you didn't, of course.
 Then there was the Pigeonhole principle, which was basically just obvious
 corollaries to preschool-level spatial relations. Etc.  All pretty
 much BASIC-level stuff.

Oh reeeeaally?! Just wait till you learn how the pigeonhole principle allows you to do arithmetic with infinite quantities... ;-)

Well, the discrete math courses offered at the places I went to didn't take things that far. Just explained the principle itself.
 (And before you shoot me down with "infinite quantities are not
 practical in programming", I'd like to say that certain non-finite
 arithmetic systems actually have real-life consequences in finite
 computations. Look up "Hydra game" sometime. Or "Goodstein sequences" if
 you're into that sorta thing.)

Yea, I don't doubt that. While no game programmer, for example, would be caught dead having their code crunching calculus computations, there are some computations done in games that are obtained in the first place by doing some calculus (mostly physics, IIRC). Not exactly the same thing, but I get that the applicability of theory isn't limited to what the computer is actually calculating.
 [...]
 However, I also found that most big-name colleges are geared toward
 producing researchers rather than programmers in the industry.

The colleges I've seen seemed to have an identity crisis in that regard: Sometimes they acted like their role was teaching theory, sometimes they acted like their role was job training/placement, and all the time they were incompetent at both.

In my experience, I found that the quality of a course depends a LOT on the attitude and teaching ability of the professor. I've had courses which were like mind-openers every other class, where you just go "wow, *that* is one heck of a cool algorithm!".

Yea, I *have* had some good instructors. Not many. But some.
 Unfortunately, (1) most professors can't teach; (2) they're not *paid*
 to teach (they're paid to do research), so they regard it as a tedious
 chore imposed upon them that takes away their time for research. This
 makes them hate teaching, and so most courses suck.

#1 I definitely agree with. #2 I don't doubt for at least some colleges, although I'm uncertain how applicable it is to public party schools like BGSU. There didn't seem to be much research going on there as far as I could tell, though I could be wrong.
 [...]
 I once made the mistake of signing up for a class that claimed to be
 part of the CS department and was titled "Optimization Techniques". I
 thought it was obvious what it was and that it would be a great class
 for me to take.  Turned out to be a class that, realistically,
 belonged in the Math dept and had nothing to do with efficient
 software, even in theory. Wasn't even in the ballpark of Big-O, etc.
 It was linear algebra with large numbers of variables.

Ahhhhahahahahaha... must've been high-dimensional polytope optimization stuff, I'll bet.

Sounds about right. I think the term "linear programming" was tossed around a bit, which I do remember from high school to be an application of linear algebra rather than software.
 That stuff *does* have its uses...

Yea, I never doubted that. Just not what I expected. Really caught me off guard.
 but yeah, that was a
 really dumb course title.

 Another dumb course title that I've encountered was along the lines of
 "computational theory" where 95% of the course talks about
 *uncomputable* problems. You'd think they would've named it
 "*un*computational theory". :-P

Yea that is kinda funny.
 I'm sure it would be great material for the right person, but it
 wasn't remotely what I expected given the name and department of the
 course.  (Actually, similar thing with my High School class of
 "Business Law" - Turned out to have *nothing* to do with business
 whatsoever. Never understood why they didn't just call the class "Law"
 or "Civic Law".) Kinda felt "baited and switched" both times.

That's why I always took the effort to read course descriptions VERY carefully before signing up. It's like the fine print in contracts. You skip over it at your own peril.

Our course descriptions didn't have much fine print. Just one short vaguely-worded paragraph. I probably could have asked around and gotten a syllabus from previous semesters, but I didn't learn advanced student tricks like that until a few years into college. ;) Plus, there were other concerns, like scheduling and requirements. I found that a lot of my course selections had to be dictated more by scheduling and availability than much anything else.
Mar 12 2012
prev sibling next sibling parent reply bearophile <bearophileHUGS lycos.com> writes:
Walter:

 I'm talking about the name change. It's far and away the most common thing I 
 have to edit when moving code from D1 <=> D2.

We need good/better ways to manage change and make it faster and less painful, instead of refusing almost all change right now. Things like more fine-grained deprecation abilities, smarter error messages in libraries that suggest how to fix the code, tools that update the code (py2to3 or the Go language tool to update the programs), things like the strange "future" built-in Python package, and so on. Bye, bearophile
Mar 10 2012
next sibling parent reply "Nick Sabalausky" <a a.a> writes:
"bearophile" <bearophileHUGS lycos.com> wrote in message 
news:jjgi8v$p1s$1 digitalmars.com...
 Walter:

 I'm talking about the name change. It's far and away the most common 
 thing I
 have to edit when moving code from D1 <=> D2.

We need good/better ways to manage change and make it faster and less painful, instead of refusing almost all change right now. Things like more fine-grained deprecation abilities, smarter error messages in libraries that suggest how to fix the code, tools that update the code (py2to3 or the Go language tool to update the programs), things like the strange "future" built-in Python package, and so on.

+1
Mar 10 2012
parent reply "Nick Sabalausky" <a a.a> writes:
"Nick Sabalausky" <a a.a> wrote in message 
news:jjgkb8$s1f$1 digitalmars.com...
 "bearophile" <bearophileHUGS lycos.com> wrote in message 
 news:jjgi8v$p1s$1 digitalmars.com...
 Walter:

 I'm talking about the name change. It's far and away the most common 
 thing I
 have to edit when moving code from D1 <=> D2.

We need good/better ways to manage change and make it faster and less painful, instead of refusing almost all change right now. Things like more fine-grained deprecation abilities, smarter error messages in libraries that suggest how to fix the code, tools that update the code (py2to3 or the Go language tool to update the programs), things like the strange "future" built-in Python package, and so on.

+1

To elaborate on that, stagnation with things that should have been *just fixed* is one of the reasons I got fed up with C++ and went looking for an alternative (and found D). I hate to see D starting to jump into the same boat so soon.
Mar 10 2012
parent "Nick Sabalausky" <a a.a> writes:
"Nick Sabalausky" <a a.a> wrote in message 
news:jjgkf6$s6h$1 digitalmars.com...
 "Nick Sabalausky" <a a.a> wrote in message 
 news:jjgkb8$s1f$1 digitalmars.com...
 "bearophile" <bearophileHUGS lycos.com> wrote in message 
 news:jjgi8v$p1s$1 digitalmars.com...
 Walter:

 I'm talking about the name change. It's far and away the most common 
 thing I
 have to edit when moving code from D1 <=> D2.

We need good/better ways to manage change and make it faster and less painful, instead of refusing almost all change right now. Things like more fine-grained deprecation abilities, smarter error messages in libraries that suggest how to fix the code, tools that update the code (py2to3 or the Go language tool to update the programs), things like the strange "future" built-in Python package, and so on.

+1

To elaborate on that, stagnation with things that should have been *just fixed* is one of the reasons I got fed up with C++ and went looking for an alternative (and found D). I hate to see D starting to jump into the same boat so soon.

And seriously, name changes are one hell of a *trivial* breaking change. Some people here make it sound like name changes are akin to...I dunno...banning templates or something. I happily switched from C++ to D. *That's* a "breaking change".
Mar 10 2012
prev sibling parent Sean Cavanaugh <WorksOnMyMachine gmail.com> writes:
On 3/10/2012 3:49 PM, bearophile wrote:
 Walter:

 I'm talking about the name change. It's far and away the most common thing I
 have to edit when moving code from D1<=>  D2.

We need good/better ways to manage change and make it faster and less painful, instead of refusing almost all change right now. Things like more fine-grained deprecation abilities, smarter error messages in libraries that suggest how to fix the code, tools that update the code (py2to3 or the Go language tool to update the programs), things like the strange "future" built-in Python package, and so on. Bye, bearophile

I would think if a language designed in a migration path that worked, it would allow things to be more fluid (on the library and language side). This is one thing I believe has really hurt C++: since the 'good words' like null couldn't be used, they had to settle for dumber names like nullptr_t. From what I gather D's way 'out' is abuse of I would much rather the language be able to expand its turf at the expense of the existing codebases, as long as there was a way to _migrate the code cleanly_.

I envision something like this would work: in addition to the 'module mypackage.mymodule' statement at the top of each file, there should be a version number of D of some sort that the code was last built against. A very high-level language revision, like D1 or D2. Newer compilers would maintain the previous front-end's ability to parse these older files, purely for the sake of outputting TODO-like messages for how to upgrade the codebase to a newer version.

A simple example right now, from D1 to D2, would be that the way floating point literals are parsed is no longer compatible. The UFCS changes could silently break existing code in theory and probably should be pointed out in some way before upgrading code from D1 to D2.
Mar 10 2012
prev sibling parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Sun, Mar 11, 2012 at 11:38:12PM +0100, deadalnix wrote:
 On 11/03/2012 21:06, Walter Bright wrote:
On 3/11/2012 4:20 AM, Matej Nanut wrote:
I've been using an EeePC for everything for the past 2.5 years and
until now, I could cope.

You're right that there's a downside to providing your developers the hottest machines available - their code tends to be a dog on the machines your customer has.

I think a better solution is to include expected performance in the user stories and add it to the testing suite. Devs can enjoy a powerful machine without the risk of shipping a resource monster as the final executable.

Even better, have some way of running your program with artificially reduced speed & resources, so that you can (sort of) see how your program degrades with lower-powered systems. Perhaps run the program inside a VM or emulator? T -- It's bad luck to be superstitious. -- YHL
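One crude way to get the "artificially reduced resources" part without a full VM or emulator, sketched here in Python for Unix-like systems (the helper name and the limit values are arbitrary illustrations, not an established tool):

```python
import resource
import subprocess
import sys

def run_constrained(cmd, max_mem_mb=512, max_cpu_sec=10):
    """Run `cmd` in a child process whose address space and CPU time
    are capped, to preview behavior on a weaker machine (Unix only)."""
    def set_limits():
        mem = max_mem_mb * 1024 * 1024
        resource.setrlimit(resource.RLIMIT_AS, (mem, mem))
        resource.setrlimit(resource.RLIMIT_CPU, (max_cpu_sec, max_cpu_sec))
    return subprocess.run(cmd, preexec_fn=set_limits).returncode

# A small program runs fine under the cap; a memory hog dies with a
# nonzero exit status instead of thrashing the developer's machine.
ok = run_constrained([sys.executable, "-c", "print('fits')"])
hog = run_constrained([sys.executable, "-c", "x = bytearray(2 * 10**9)"],
                      max_mem_mb=256)
```

Real emulation (QEMU with a throttled CPU, or a container/cgroup with memory limits) is closer to the VM idea above; this is just the cheapest approximation.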
Mar 11 2012
prev sibling next sibling parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Sun, Mar 11, 2012 at 03:32:39PM -0400, Nick Sabalausky wrote:
[...]
 I'm convinced that colleges in general produce very bad programmers.
 The good programmers who have degrees, for the most part (I'm sure
 there are rare exceptions), are the ones who learned on their own, not
 in a classroom.  It's sad that society brainwashes people into
 believing the opposite.

I have a master's degree in computer science. About 90% (perhaps 95%) of what I do at my day job is stuff I learned on my own outside the classroom. That is not to say the classroom is completely worthless, mind you; courses like discrete maths and programming logic did train me to think logically and rigorously, an indispensable requirement in the field. However, I also found that most big-name colleges are geared toward producing researchers rather than programmers in the industry. Now I don't know if this applies in general, but the curriculum I was in was so geared towards CS research rather than doing real industry work (i.e., write actual programs!) that we spent more time studying uncomputable problems than computable ones. OK, so knowing what isn't computable is important so that you don't waste time trying to solve the halting problem, for example. But when *most* (all?) of your time is spent contemplating the uncomputable, wouldn't you say that you're a bit too high up in that ivory tower? I mean, this is *computer science*, not *uncomputable science* we're talking about. Case in point. One of the courses I took as a grad student was taught by none other than Professor Cook himself (y'know the guy behind Cook's Theorem). He was a pretty cool guy, and I respect him for what he does. But the course material was... I don't remember what the official course title was, but we spent the entire term proving stuff about proofs. Let me say that again. I'm not just talking about spending the entire semester proving math theorems (which is already questionable enough in a course that's listed as a *computer science* course). I'm talking about spending the entire semester proving things *about* math proofs. IOW, we were dealing with *meta-proofs*. And most of the "proofs" we proved things about involved *proofs of infinite length*. Yeah. 
I spent the entire course repeatedly wondering if I had misread the course calendar and gone to the wrong class, and, when I had ruled that out, what any of this meta-proof stuff had to do with programming. T -- Recently, our IT department hired a bug-fix engineer. He used to work for Volkswagen.
Mar 11 2012
parent "Nick Sabalausky" <a a.a> writes:
"H. S. Teoh" <hsteoh quickfur.ath.cx> wrote in message 
news:mailman.510.1331520028.4860.digitalmars-d puremagic.com...
 That is not to say the classroom is completely worthless,
 mind you;

I'd say that, and I often have ;) And I forever will.
 courses like discrete maths

Personally, I found discrete math to be the easiest class I took since kindergarten (*Both* of the times they made me take discrete math. Ugh. God that got boring.) It was almost entirely the sorts of things that any average coder already understands intuitively. Like DeMorgan's: I hadn't known the name "DeMorgan", but just from growing up writing "if" statements I had already grokked how it worked and how to use it. No doubt in my mind that *all* of us here have grokked it (even any of us who might not know it by name) *and* many of the coworkers I've had who I'd normally classify as "incompetent VB-loving imbeciles". Then there was the Pigeonhole principle, which was basically just obvious corollaries to preschool-level spatial relations. Etc. All pretty much BASIC-level stuff.
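For anyone who knows the rewrite but not the name: De Morgan's laws are exactly the two "if"-condition identities below, checked exhaustively in a minimal Python sketch (the loop variables are just placeholder booleans):

```python
from itertools import product

# Check both De Morgan identities over every truth assignment.
for a, b in product([False, True], repeat=2):
    # not (a and b)  is the same test as  (not a) or (not b)
    assert (not (a and b)) == ((not a) or (not b))
    # not (a or b)   is the same test as  (not a) and (not b)
    assert (not (a or b)) == ((not a) and (not b))

print("De Morgan's laws hold for all four truth assignments")
```

Which is the formal statement of the everyday refactoring of `if not (x > 0 and y > 0):` into `if x <= 0 or y <= 0:`.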
 and programming logic did train me
 to think logically and rigorously, an indispensible requirement in the
 field.

 However, I also found that most big-name colleges are geared toward
 producing researchers rather than programmers in the industry.

The colleges I've seen seemed to have an identity crisis in that regard: Sometimes they acted like their role was teaching theory, sometimes they acted like their role was job training/placement, and all the time they were incompetent at both.
 Case in point. One of the courses I took as a grad student was taught by
 none less than Professor Cook himself (y'know the guy behind Cook's
 Theorem). He was a pretty cool guy, and I respect him for what he does.
 But the course material was... I don't remember what the official course
 title was, but we spent the entire term proving stuff about proofs.  Let
 me say that again.  I'm not just talking about spending the entire
 semester proving math theorems (which is already questionable enough in
 a course that's listed as a *computer science* course). I'm talking
 about spending the entire semester proving things *about* math proofs.
 IOW, we were dealing with *meta-proofs*.  And most of the "proofs" we
 proved things about involved *proofs of infinite length*.

 Yeah.

 I spent the entire course repeatedly wondering if I had misread the
 course calendar and gone to the wrong class, and, when I had ruled that
 out, what any of this meta-proof stuff had to do with programming.

I once made the mistake of signing up for a class that claimed to be part of the CS department and was titled "Optimization Techniques". I thought it was obvious what it was and that it would be a great class for me to take. Turned out to be a class that, realistically, belonged in the Math dept and had nothing to do with efficient software, even in theory. Wasn't even in the ballpark of Big-O, etc. It was linear algebra with large numbers of variables. I'm sure it would be great material for the right person, but it wasn't remotely what I expected given the name and department of the course. (Actually, similar thing with my High School class of "Business Law" - Turned out to have *nothing* to do with business whatsoever. Never understood why they didn't just call the class "Law" or "Civic Law".) Kinda felt "baited and switched" both times.
Mar 11 2012
prev sibling parent James Miller <james aatch.net> writes:
On 12 March 2012 15:42, H. S. Teoh <hsteoh quickfur.ath.cx> wrote:
 On Sun, Mar 11, 2012 at 03:32:39PM -0400, Nick Sabalausky wrote:
 [...]
 I'm convinced that colleges in general produce very bad programmers.
 The good programmers who have degrees, for the most part (I'm sure
 there are rare exceptions), are the ones who learned on their own, not
 in a classroom. It's sad that society brainwashes people into
 believing the opposite.

I have a master's degree in computer science. About 90% (perhaps 95%) of what I do at my day job is stuff I learned on my own outside the classroom. That is not to say the classroom is completely worthless, mind you; courses like discrete maths and programming logic did train me to think logically and rigorously, an indispensable requirement in the field. However, I also found that most big-name colleges are geared toward producing researchers rather than programmers in the industry. Now I don't know if this applies in general, but the curriculum I was in was so geared towards CS research rather than doing real industry work (i.e., write actual programs!) that we spent more time studying uncomputable problems than computable ones. OK, so knowing what isn't computable is important so that you don't waste time trying to solve the halting problem, for example. But when *most* (all?) of your time is spent contemplating the uncomputable, wouldn't you say that you're a bit too high up in that ivory tower? I mean, this is *computer science*, not *uncomputable science* we're talking about. Case in point. One of the courses I took as a grad student was taught by none other than Professor Cook himself (y'know the guy behind Cook's Theorem). He was a pretty cool guy, and I respect him for what he does. But the course material was... I don't remember what the official course title was, but we spent the entire term proving stuff about proofs. Let
 me say that again. I'm not just talking about spending the entire
 semester proving math theorems (which is already questionable enough in
 a course that's listed as a *computer science* course). I'm talking
 about spending the entire semester proving things *about* math proofs.
 IOW, we were dealing with *meta-proofs*. And most of the "proofs" we
 proved things about involved *proofs of infinite length*.

 Yeah.

 I spent the entire course repeatedly wondering if I had misread the
 course calendar and gone to the wrong class, and, when I had ruled that
 out, what any of this meta-proof stuff had to do with programming.


 T

 --
 Recently, our IT department hired a bug-fix engineer. He used to work
 for Volkswagen.

I'm entirely self-taught, and currently taking a break from university (too much debt, not enough time, too much stress). I rarely use stuff that I haven't taught myself. I realize now that trying to teach people how to program is very, very hard, however, since I always think about how to teach stuff I know. Ideally you'd learn everything at once and spend the next 2 years re-arranging it in your brain, but unfortunately people don't work like that... -- James Miller
Mar 11 2012
prev sibling next sibling parent reply "Nick Sabalausky" <a a.a> writes:
"Adam D. Ruppe" <destructionator gmail.com> wrote in message 
news:wirsowklisbhbkbujewr forum.dlang.org...
 On Saturday, 10 March 2012 at 04:40:11 UTC, Adam D. Ruppe wrote:
 Yeah, the kernel is decent about it, but the rest of the
 system sure as hell isn't.

Let me tie this into D. A couple weeks ago, I revived one of my work D projects - about 30,000 lines of code - that was dormant for about a year. The language worked fine. The library was a bit more of a pain. std.date's deprecation still makes me mad. And the move of std.string.replace over to std.array meant not one of the modules compiled without a change. (Really easy change: "import std.string : replace;"; why that works and "import std.string;" doesn't, I'm not sure. I'm probably relying on a bug here!) But still, the D language manages to move forward without much breakage. dmd pretty much gets better each release. Phobos has some breakage though. Not really bad; updating this code went quickly. I think I spent half an hour on it. But there were some minor changes needed. We might have a stable language, but if the library doesn't do the same, we'll never be Windows.

If we start freezing things now, we're going to be Windows 9x.
Mar 09 2012
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 3/9/2012 10:44 PM, Nick Sabalausky wrote:
 If we start freezing things now, we're going to be Windows 9x.

Win9x was incredibly successful (!) We can only hope to be so successful.
Mar 10 2012
next sibling parent reply "Nick Sabalausky" <a a.a> writes:
"Adam D. Ruppe" <destructionator gmail.com> wrote in message 
news:tfdzpwcijnavdalmnzit forum.dlang.org...
 On Saturday, 10 March 2012 at 18:57:10 UTC, H. S. Teoh wrote:
 It can hardly be called a success technology-wise.

It is significantly ahead of its competition at the time.

And it was a big advancement over 3.1. Pre-emptive multitasking anyone?
Mar 10 2012
parent Paulo Pinto <pjmlp progtools.org> writes:
On 10.03.2012 20:52, H. S. Teoh wrote:
 On Sat, Mar 10, 2012 at 02:27:20PM -0500, Nick Sabalausky wrote:
 "Adam D. Ruppe"<destructionator gmail.com>  wrote in message
 news:tfdzpwcijnavdalmnzit forum.dlang.org...
 On Saturday, 10 March 2012 at 18:57:10 UTC, H. S. Teoh wrote:
 It can hardly be called a success technology-wise.

It is significantly ahead of its competition at the time.

And it was a big advancement over 3.1. Pre-emptive multitasking anyone?

I thought the Unix world had that years before Windows. But not in the consumer PC market, I suppose. But 3.1 was such a sad mess that just about *anything* would be an improvement on it. T

Sure it had pre-emptive multitasking. On the other hand, in those days, UNIX had very nice GUIs called Motif, NEWS and NeXTSTEP. Personally I think the only nice one was the one from NeXTSTEP. Each version cost a few dollars more than what most households would be willing to pay. And let's not forget that long before gcc became famous, you would have to pay extra for the developer tools. Which in some cases did cost as much as the OS itself. And most importantly, there were almost no games, compared with what the home markets had access to. Yes I am aware of the dirty tricks Microsoft played on IBM, but taking into consideration the way IBM managed OS/2, those tricks weren't actually needed. So in the end, for the people using PC compatibles, the only game in town was Windows 9x. -- Paulo
Mar 10 2012
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 3/10/2012 10:58 AM, H. S. Teoh wrote:
 Win9x's success is mainly attributable to Microsoft's superior marketing
 strategies. It can hardly be called a success technology-wise.

Oh, I disagree with that. Certainly, Win9x was a compromise, but it nailed being a transition operating system from 16 to 32 bit, and it nailed making Windows an attractive target for game developers.
Mar 10 2012
parent reply deadalnix <deadalnix gmail.com> writes:
On 10/03/2012 20:37, Walter Bright wrote:
 On 3/10/2012 10:58 AM, H. S. Teoh wrote:
 Win9x's success is mainly attributable to Microsoft's superior marketing
 strategies. It can hardly be called a success technology-wise.

Oh, I disagree with that. Certainly, Win9x was a compromise, but it nailed being a transition operating system from 16 to 32 bit, and it nailed making Windows an attractive target for game developers.

Windows 3.1 had patches provided by Microsoft to handle 32 bits. But this is quite offtopic. Win9x was good back then. Now it is crap. When doing something new (like D) you don't only need to provide something as good as what existed before. Actually, providing better isn't enough either. You need to provide enough to compensate for the cost of the change, and additionally communication/marketing must convince users to switch.
Mar 11 2012
parent Walter Bright <newshound2 digitalmars.com> writes:
On 3/11/2012 8:29 AM, deadalnix wrote:
 Win9x was good back then. Now it is crap.

Sure, but it's only fair to assess it in the context of its times, not current times.
 When doing something new (like D) you don't only need to provide something as
 good as what existed before. Actually, providing better isn't enough either.
You
 need to provide enough to compensate the cost of the change, and additionally
 communication/marketing must convince user to switch.

Yup.
Mar 11 2012
prev sibling next sibling parent Jeff Nowakowski <jeff dilacero.org> writes:
On 03/09/2012 11:40 PM, Adam D. Ruppe wrote:
 On Windows though, even if you relied on bugs twenty
 years ago, they bend over backward to keep your app
 functioning.

They stopped doing that a long time ago. There's a well-known blog article about this: http://www.joelonsoftware.com/articles/APIWar.html Some apps and hardware had trouble running on XP, and Vista took this to all new levels -- one of the reasons it got so much bad press.
Mar 10 2012
prev sibling parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Sun, Mar 11, 2012 at 12:20:43PM +0100, Matej Nanut wrote:
[...]
 This has also been one of the reasons I became interested in languages
 like C and D. Believe it or not, in our university, you don't ever get
 to see C officially if you don't learn it yourself. I consider this
 pathetic. The official and only taught language is Java.

Ugh.
 Which I grant them is at least cross-platform, but I believe that
 every university-educated programmer must know C.

+1. Java is a not-bad language. In fact, as a language it has quite a few good points. However, one thing I could never stand about Java culture is what I call the bandwagon-jumping attitude. It's this obsessive belief that Java is the best thing invented since coffee (har har) and that it's the panacea to solve all programming problems, cure world hunger, and solve world peace, and that whoever doesn't use Java must therefore be some inferior antiquated dinosaur from the last ice age. Every new trend that comes out must be blindly adopted without any question, because obviously new == good, and therefore whatever diseased fancy some self-appointed genius dreamed up one night must be adopted without regard for whether it actually adds value. C is therefore a fossilized relic from bygone times and nobody uses it anymore, and we've never heard of what on earth an assembler is, neither do we care, since the JVM is obviously superior anyway. As the saying goes, if you don't know history, you'll end up repeating it.
 I am convinced that my university produces bad programmers and as such
 don't find it surprising that new written programs are terribly slow,
 if they even work at all.

Obligatory quote: If Java had true garbage collection, most programs would delete themselves upon execution. -- Robert Sewell :-) T -- Question authority. Don't ask why, just do it.
Mar 11 2012
parent "Nick Sabalausky" <a a.a> writes:
"H. S. Teoh" <hsteoh quickfur.ath.cx> wrote in message 
news:mailman.479.1331479176.4860.digitalmars-d puremagic.com...
 Java is a not-bad language. In fact, as a language it has quite a few
 good points.

+1 Java's actually the reason I got fed up with C++'s *cough*module system*cough* and header files. The lack of "->" was nice too.
 However, one thing I could never stand about Java culture
 is what I call the bandwagon-jumping attitude. [...]

+1, or two, or three
 Obligatory quote:

 If Java had true garbage collection, most programs would delete
 themselves upon execution. -- Robert Sewell

 :-)

Hah! Fantastic.
 -- 
 Question authority. Don't ask why, just do it.

Fantastic, too. :)
Mar 11 2012
prev sibling next sibling parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Sat, Mar 10, 2012 at 02:27:20PM -0500, Nick Sabalausky wrote:
 "Adam D. Ruppe" <destructionator gmail.com> wrote in message 
 news:tfdzpwcijnavdalmnzit forum.dlang.org...
 On Saturday, 10 March 2012 at 18:57:10 UTC, H. S. Teoh wrote:
 It can hardly be called a success technology-wise.

It is significantly ahead of its competition at the time.

And it was a big advancement over 3.1. Pre-emptive multitasking anyone?

I thought the Unix world has had that years before Windows. But not in the consumer PC market, I suppose. But 3.1 was such a sad mess that just about *anything* would be an improvement on it. T -- ASCII stupid question, getty stupid ANSI.
Mar 10 2012
parent "Nick Sabalausky" <a a.a> writes:
"H. S. Teoh" <hsteoh quickfur.ath.cx> wrote in message 
news:mailman.427.1331409078.4860.digitalmars-d puremagic.com...
 On Sat, Mar 10, 2012 at 02:27:20PM -0500, Nick Sabalausky wrote:
 "Adam D. Ruppe" <destructionator gmail.com> wrote in message
 news:tfdzpwcijnavdalmnzit forum.dlang.org...
 On Saturday, 10 March 2012 at 18:57:10 UTC, H. S. Teoh wrote:
 It can hardly be called a success technology-wise.

It is significantly ahead of its competition at the time.

And it was a big advancement over 3.1. Pre-emptive multitasking anyone?

I thought the Unix world has had that years before Windows.

I just meant versus 3.1. I wouldn't know about Unix.
 But not in the consumer PC market, I suppose.

I'm not sure I'd say there was a consumer-level Unix at all back then.
Mar 10 2012
prev sibling parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Sun, Mar 11, 2012 at 11:31:33PM +0100, deadalnix wrote:
On 11/03/2012 21:16, Walter Bright wrote:

I'm not suggesting no naming convention. Naming conventions are good.
But they don't trump everything else in importance, not even close.

I have to disagree. They are really important in a large codebase (say, > 100,000 lines of code). Otherwise people tend not to find modules they could reuse except by knowing the whole codebase. This indirectly causes needless duplication - with all its known drawbacks - and makes the project more dependent on documentation: more documentation has to be produced, which means an overhead in workload and more trouble when the documentation is lacking, outdated, or buggy.

I used to be very much against verbose naming conventions. Almost all naming conventions are verbose, and I hate verbosity with a passion (I still do). But after having been forced to use naming conventions in some code at work, I'm starting to see some value to them, especially in very large projects where there's so much code that without some sort of convention, it quickly becomes impossible to find what you want.

Due to lack of convention, one team tries writing code with what they feel is a consistent naming, then another team comes along and does the same, and they run into naming conflicts, so they rename stuff haphazardly. Repeat this over a few iterations of the product, and you end up with 50 modules all with inconsistent naming due to historical conflicts. Code readability thus drops dramatically, and people coming on board later on can't understand what the code was supposed to do because it's all so obtuse (and they don't have the time/energy to wade through 5 million lines of code to understand every little nuance). This, plus the time pressure of impending deadlines, causes them to resort to copy-n-pasting, second-guessing what a piece of code does without bothering to check their assumptions (since it's so obtuse that looking up *one* thing would cost them hours just to even begin to understand it), and all sorts of bad code enters the system.

Compare this with a project that has naming conventions from the get-go. Name clashes are essentially non-existent if the naming convention is consistent, and if you're looking for a function in module X to do Y, the naming convention almost already spells out the name for you. It makes things very easy to find, and once you understand the convention, it makes code very easy to read (most guesses at what the code means are actually correct -- in a large project, you *know* people are just going to write what they think is right rather than spend time reading code to understand what it actually does). So people are more likely to actually write correct code, which means people who come on board after them are more likely to understand the code and not do stupid things. [...]
 IMO, the name in itself isn't that important. The important thing is
 that thing get named in a predictable and simple way.

+1. T -- First Rule of History: History doesn't repeat itself -- historians merely repeat each other.
Mar 11 2012
prev sibling next sibling parent "Adam D. Ruppe" <destructionator gmail.com> writes:
On Saturday, 10 March 2012 at 04:34:15 UTC, Walter Bright wrote:
 Linux has had a habit of not breaking existing code from 
 decades ago. I think that is one reason why it has millions of 
 users.

If you want to see someone who takes compatibility seriously (and takes it all the way to the bank), look at Microsoft Windows.

I don't like developing Linux apps much, and neither do a lot of professionals, because its binary compatibility is a joke. Yeah, the kernel is decent about it, but the rest of the system sure as hell isn't. You're lucky if you can take a Linux binary and use it next month, and certainly not ten years from now. (For example, take dmd out of the box on CentOS. Won't work.)

On Windows though, even if you relied on bugs twenty years ago, they bend over backward to keep your app functioning. It is really an amazing feat they've accomplished, from both technical and business perspectives, in doing this while still moving forward.
Mar 09 2012
prev sibling next sibling parent "Adam D. Ruppe" <destructionator gmail.com> writes:
On Saturday, 10 March 2012 at 04:40:11 UTC, Adam D. Ruppe wrote:
 Yeah, the kernel is decent about it, but the rest of the
 system sure as hell isn't.

Let me tie this into D. A couple weeks ago, I revived one of my work D projects - about 30,000 lines of code - that had been dormant for about a year. The language worked fine. The library was a bit more of a pain.

std.date's deprecation still makes me mad. And the move of std.string.replace over to std.array meant not one of the modules compiled without a change. (Really easy change: "import std.string : replace;" - why that works and plain "import std.string;" doesn't, I'm not sure. I'm probably relying on a bug here!)

But still, the D language manages to move forward without much breakage. dmd pretty much gets better each release. Phobos has some breakage though. Not really bad; updating this code went quickly. I think I spent half an hour on it. But there were some minor changes needed. We might have a stable language, but if the library doesn't do the same, we'll never be Windows.
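A minimal sketch of the ambiguity being described (hedged: this is illustrative code, not the original project, and the resolution rule is my understanding of D2's import lookup at the time): with whole-module imports, both std.array and std.string can supply a replace symbol, making the unqualified name ambiguous, while a selective import introduces a scope-local alias that takes precedence.

```d
import std.array;            // after the move, replace() lives here
import std.string : replace; // selective import: binds 'replace' as an
                             // alias in this module, so the call below
                             // is not ambiguous
import std.stdio;

// With a plain "import std.string;" instead of the selective form
// above, the unqualified call may be rejected as ambiguous between
// the two modules.
void main()
{
    auto r = replace("foo,bar", ",", ";"); // replace "," with ";"
    writeln(r); // foo;bar
}
```

This matches the workaround quoted in the post: the selective import compiles where the whole-module import does not.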
Mar 09 2012
prev sibling next sibling parent "Adam D. Ruppe" <destructionator gmail.com> writes:
On Saturday, 10 March 2012 at 05:15:32 UTC, Andrei Alexandrescu 
wrote:
 Why? I have CentOS at work and it seems to work.

My work server is CentOS 5.6 (32 bit) -- maybe it is this specific version -- but the fresh dmd always gives: 1) a libc version mismatch. I fix this by recompiling from source. 2) a linker error about warn-mismatch, or something like that, not being a valid option. Editing dmd.conf takes care of this one.
Mar 09 2012
prev sibling next sibling parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Sat, Mar 10, 2012 at 01:44:59AM -0500, Nick Sabalausky wrote:
 "Adam D. Ruppe" <destructionator gmail.com> wrote in message 
 news:wirsowklisbhbkbujewr forum.dlang.org...

 We might have a stable language, but if the library doesn't do the
 same, we'll never be Windows.


Really? D is a stable language as of this moment? Interesting.
 If we start freezing things now, we're going to be Windows 9x.

You mean Windows 3.1. T -- Without outlines, life would be pointless.
Mar 09 2012
parent reply "Nick Sabalausky" <a a.a> writes:
"H. S. Teoh" <hsteoh quickfur.ath.cx> wrote in message 
news:mailman.398.1331362435.4860.digitalmars-d puremagic.com...
 On Sat, Mar 10, 2012 at 01:44:59AM -0500, Nick Sabalausky wrote:
 "Adam D. Ruppe" <destructionator gmail.com> wrote in message
 news:wirsowklisbhbkbujewr forum.dlang.org...

 We might have a stable language, but if the library doesn't do the
 same, we'll never be Windows.


Really? D is a stable language as of this moment? Interesting.
 If we start freezing things now, we're going to be Windows 9x.

You mean Windows 3.1.

I was pretty happy with 3.1. It's terrible in retrospect, but viewed in the context of the early 90's, I don't think it was bad at all. Maybe not as stable as the Unix of the time (I wouldn't know), but it was usable by mere mortals and was still a lot more robust than WinMe. But then around the time of 98SE and Me, the 9x line just wasn't up to even the current expectations of the time. (And maybe my memory's foggy, but I seem to remember having more troubles with 98 than I did with 3.1. And Me was definitely much worse.)
Mar 09 2012
parent sclytrack <sclytrack hotmail.com> writes:
On 03/10/2012 08:31 AM, Nick Sabalausky wrote:
 I was pretty happy with 3.1. It's terrible in retrospect, but viewed in the
 context of the early 90's, I don't think it was bad at all. Maybe not as
 stable as the Unix of the time (I wouldn't know), but it was usable by mere
 mortals and was still a lot more robust than WinMe.

 But then around the time of 98SE and Me, the 9x line just wasn't up to even
 the current expectations of the time. (And maybe my memory's foggy, but I
 seem to remember having more troubles with 98 than I did with 3.1. And Me
 was definitely much worse.)

My grandfather still uses 3.11. I gave him my old computer (Athlon) with Windows 7. But no, after a week he wanted his old system back.
Mar 10 2012
prev sibling next sibling parent "so" <so so.so> writes:
 My grandfather still uses 3.11

 I gave him my old computer (Athlon) with Windows 7. But no 
 after a week he wanted his old system back.

"my old Windows 7" -- which was the newest/bestest thing a few days/weeks/months/years ago. It must be hard to keep up; they release OSs faster than the time it takes to install or even boot a system! :)
Mar 10 2012
prev sibling next sibling parent "Adam D. Ruppe" <destructionator gmail.com> writes:
On Saturday, 10 March 2012 at 06:53:49 UTC, H. S. Teoh wrote:
 Really? D is a stable language as of this moment? Interesting.

Yeah, it is. It removes restrictions or sometimes adds stuff, but over this last year, not much of the language has actually broken. I have a lot of D code, some of it rather fancy, and I can't actually think of a place where a *language* change broke it. Library changes break it almost every other release, but language changes tend to be OK. There are regressions every so often, but they aren't bad.
 You mean Windows 3.1.

You're insane! D rox my sox right off my cox and has for a long time, and it has been getting pretty consistently better.
Mar 10 2012
prev sibling next sibling parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Sat, Mar 10, 2012 at 03:34:48PM +0100, Adam D. Ruppe wrote:
 On Saturday, 10 March 2012 at 06:53:49 UTC, H. S. Teoh wrote:
Really? D is a stable language as of this moment? Interesting.

Yeah, it is. It removes restrictions or sometimes adds stuff, but over this last year, not much of the language has actually broken. I have a lot of D code, some of it rather fancy, and I can't actually think of a place where a *language* change broke it.

Well, the slated property enforcement *will* break a lot of stuff...
 Library changes break it almost every other release, but language
 changes tend to be ok. There's regressions every so often, but they
 aren't bad.

OK.
You mean Windows 3.1.

You're insane! D rox my sox right off my cox and has for a long time, and it has been getting pretty consistently better.

You're right, D *does* rock in spite of its current shortcomings. It's easy to see only the flaws when you're just focused on fixing problems, but it's true that when I take a step back and compare it with, say, C/C++, there's simply no comparison. Even with all its warts, D is unquestionably superior in pretty much every way. T -- "The number you have dialed is imaginary. Please rotate your phone 90 degrees and try again."
Mar 10 2012
prev sibling next sibling parent Jacob Carlborg <doob me.com> writes:
On 2012-03-10 00:04, Nick Sabalausky wrote:
 Freezing things against breaking changes is all fine and good, but NOT
 before they're reached a point where they're good enough to be frozen.
 Premature freezing is how you create cruft and other such shit. Let's not go
 jumping any guns here.

I completely agree. -- /Jacob Carlborg
Mar 10 2012
prev sibling next sibling parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Sat, Mar 10, 2012 at 10:47:54AM -0800, Walter Bright wrote:
 On 3/9/2012 10:44 PM, Nick Sabalausky wrote:
If we start freezing things now, we're going to be Windows 9x.

Win9x was incredibly successful (!) We can only hope to be so successful.

Win9x's success is mainly attributable to Microsoft's superior marketing strategies. It can hardly be called a success technology-wise. T -- I am not young enough to know everything. -- Oscar Wilde
Mar 10 2012
prev sibling next sibling parent "Adam D. Ruppe" <destructionator gmail.com> writes:
On Saturday, 10 March 2012 at 18:57:10 UTC, H. S. Teoh wrote:
 It can hardly be called a success technology-wise.

It is significantly ahead of its competition at the time.
Mar 10 2012
prev sibling next sibling parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Sat, Mar 10, 2012 at 02:31:53PM -0500, Nick Sabalausky wrote:
[...]
 writefln is still there with the same old functionality (which is
 good, it *is* a good function). It's just that writeln has been added
 and just happens to be better in every way for the majority of
 use-cases.

Strange, I still find myself using writef/writefln very frequently. When you want formatting in your output, printf specs are just sooo convenient. But perhaps it's just a symptom of my having just emerged from the C/C++ world. :-) T -- First Rule of History: History doesn't repeat itself -- historians merely repeat each other.
Mar 10 2012
parent "Nick Sabalausky" <a a.a> writes:
"H. S. Teoh" <hsteoh quickfur.ath.cx> wrote in message 
news:mailman.429.1331409266.4860.digitalmars-d puremagic.com...
 On Sat, Mar 10, 2012 at 02:31:53PM -0500, Nick Sabalausky wrote:
 [...]
 writefln is still there with the same old functionality (which is
 good, it *is* a good function). It's just that writeln has been added
 and just happens to be better in every way for the majority of
 use-cases.

Strange, I still find myself using writef/writefln very frequently. When you want formatting in your output, printf specs are just sooo convenient. But perhaps it's just a symptom of my having just emerged from the C/C++ world. :-)

They are nice, but I've found that in most of my cases, the non-formatted version is all I usually need. It's great though that the formatted ones are there for the cases where I do need them.
Mar 10 2012
prev sibling next sibling parent reply Matej Nanut <matejnanut gmail.com> writes:
I find the point on developing on a slower computer very interesting,
and here's my story.

I've been using an EeePC for everything for the past 2.5 years and
until now, I could cope. I'm getting a new laptop this week because I
direly need it at the faculty (some robotics/image processing/computer
vision — no way to run these on an EeePC realtime).

However, I could notice an interesting trend between my colleagues'
programs and mine. For example, solving the 15-game with heuristics
took ~0.01 secs on the Eee, and comparing to others' programs theirs
took several seconds and found worse solutions (not all of them of
course, but most). When doing some local search optimisation, the
difference was seconds-to-HOURS. I guess someone was really sloppy,
but still.

This has also been one of the reasons I became interested in languages
like C and D. Believe it or not, in our university, you don't ever get
to see C officially if you don't learn it yourself. I consider this
pathetic. The official and only taught language is Java. Which I grant
them is at least cross-platform, but I believe that every
university-educated programmer must know C.

I am convinced that my university produces bad programmers and as such
don't find it surprising that new written programs are terribly slow,
if they even work at all.

Matej
Mar 11 2012
parent reply "Nick Sabalausky" <a a.a> writes:
"Matej Nanut" <matejnanut gmail.com> wrote in message 
news:mailman.471.1331466712.4860.digitalmars-d puremagic.com...
I'm getting a new laptop this week because I
direly need it at the faculty (some robotics/image processing/computer
vision

Neat stuff!!
However, I could notice an interesting trend between my colleagues'
programs and mine. For example, solving the 15-game with heuristics
took ~0.01 secs on the Eee, and comparing to others' programs theirs
took several seconds and found worse solutions (not all of them of
course, but most). When doing some local search optimisation, the
difference was seconds-to-HOURS. I guess someone was really sloppy,
but still.

Yup. Not surprised.
This has also been one of the reasons I became interested in languages
like C and D. Believe it or not, in our university, you don't ever get
to see C officially if you don't learn it yourself. I consider this
pathetic. The official and only taught language is Java. Which I grant
them is at least cross-platform, but I believe that every
university-educated programmer must know C.

Yea, Java has been pretty much the de facto standard College Computer Science language since I left college (close to 10 years ago now... god, it just seems so *wrong* that it's been that long...)
I am convinced that my university produces bad programmers and as such
don't find it surprising that new written programs are terribly slow,
if they even work at all.

I'm convinced that colleges in general produce very bad programmers. The good programmers who have degrees, for the most part (I'm sure there are rare exceptions), are the ones who learned on their own, not in a classroom. It's sad that society brainwashes people into believing the opposite.
Mar 11 2012
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 3/11/2012 12:32 PM, Nick Sabalausky wrote:
 I'm convinced that colleges in general produce very bad programmers. The
 good programmers who have degrees, for the most part (I'm sure there are
 rare exceptions), are the ones who learned on their own, not in a classroom.

Often the best programmers seem to have physics degrees!
Mar 11 2012
parent reply deadalnix <deadalnix gmail.com> writes:
On 11/03/2012 21:07, Walter Bright wrote:
 On 3/11/2012 12:32 PM, Nick Sabalausky wrote:
 I'm convinced that colleges in general produce very bad programmers. The
 good programmers who have degrees, for the most part (I'm sure there are
 rare exceptions), are the ones who learned on their own, not in a
 classroom.

Often the best programmers seem to have physics degrees!

I saw you coming ^^.
Mar 11 2012
parent Walter Bright <newshound2 digitalmars.com> writes:
On 3/11/2012 3:36 PM, deadalnix wrote:
On 11/03/2012 21:07, Walter Bright wrote:
 On 3/11/2012 12:32 PM, Nick Sabalausky wrote:
 I'm convinced that colleges in general produce very bad programmers. The
 good programmers who have degrees, for the most part (I'm sure there are
 rare exceptions), are the ones who learned on their own, not in a
 classroom.

Often the best programmers seem to have physics degrees!

I saw you coming ^^.

Whether or not anyone considers me a good programmer, I don't have a degree in physics. Frankly, I wasn't good enough to consider it.
Mar 11 2012
prev sibling next sibling parent "Martin Nowak" <dawg dawgfoto.de> writes:
On Sun, 11 Mar 2012 12:20:43 +0100, Matej Nanut <matejnanut gmail.com> wrote:

 I find the point on developing on a slower computer very interesting,
 and here's my story.

 I've been using an EeePC for everything for the past 2.5 years and
 until now, I could cope. I'm getting a new laptop this week because I
 direly need it at the faculty (some robotics/image processing/computer
 vision — no way to run these on an EeePC realtime).

While being about 10x faster at compilation, it led to more trial-and-error compilation. Turnaround times with dmd are a little too short.
Mar 11 2012
prev sibling next sibling parent =?utf-8?Q?Simen_Kj=C3=A6r=C3=A5s?= <simen.kjaras gmail.com> writes:
On Sun, 11 Mar 2012 21:07:06 +0100, Walter Bright  
<newshound2 digitalmars.com> wrote:

 On 3/11/2012 12:32 PM, Nick Sabalausky wrote:
 I'm convinced that colleges in general produce very bad programmers. The
 good programmers who have degrees, for the most part (I'm sure there are
 rare exceptions), are the ones who learned on their own, not in a  
 classroom.

Often the best programmers seem to have physics degrees!

Eugh. Physicist programmers tend to use one-letter variable names in my experience. Makes for... interesting reading of their code.
Mar 12 2012
prev sibling next sibling parent =?utf-8?Q?Simen_Kj=C3=A6r=C3=A5s?= <simen.kjaras gmail.com> writes:
On Tue, 13 Mar 2012 03:50:49 +0100, Nick Sabalausky <a a.a> wrote:

"Simen Kjærås" <simen.kjaras gmail.com> wrote in message
news:op.wa28iobk0gpyof biotronic.lan...
 On Sun, 11 Mar 2012 21:07:06 +0100, Walter Bright
 <newshound2 digitalmars.com> wrote:

 On 3/11/2012 12:32 PM, Nick Sabalausky wrote:
 I'm convinced that colleges in general produce very bad programmers. The
 good programmers who have degrees, for the most part (I'm sure there are
 rare exceptions), are the ones who learned on their own, not in a
 classroom.

 Often the best programmers seem to have physics degrees!

 Eugh. Physicist programmers tend to use one-letter variable names in my
 experience. Makes for... interesting reading of their code.

 D is great for physics programming. Now you can have much, much more than 26
 variables :)

True, though mostly, you'd just change to using greek letters, right?
Finally we can use θ for angles, alias ulong ℕ...
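As a hedged aside, the joke more or less compiles: D allows many Unicode letters and letterlike symbols in identifiers, though whether a particular symbol such as U+2115 is accepted depends on the compiler's identifier tables. A sketch:

```d
import std.math, std.stdio;

alias ulong ℕ;  // double-struck N (U+2115) as a type name

void main()
{
    double θ = PI / 4;  // Greek theta for an angle, in radians
    ℕ n = 26;
    writefln("sin(θ) = %s, n = %s", sin(θ), n);
}
```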
prev sibling next sibling parent James Miller <james aatch.net> writes:
On 13 March 2012 16:10, Simen Kjærås <simen.kjaras gmail.com> wrote:
 On Tue, 13 Mar 2012 03:50:49 +0100, Nick Sabalausky <a a.a> wrote:

 "Simen Kjærås" <simen.kjaras gmail.com> wrote in message
 news:op.wa28iobk0gpyof biotronic.lan...
 On Sun, 11 Mar 2012 21:07:06 +0100, Walter Bright
 <newshound2 digitalmars.com> wrote:

 On 3/11/2012 12:32 PM, Nick Sabalausky wrote:
 I'm convinced that colleges in general produce very bad programmers.
 The
 good programmers who have degrees, for the most part (I'm sure there
 are
 rare exceptions), are the ones who learned on their own, not in a
 classroom.

 Often the best programmers seem to have physics degrees!

 Eugh. Physicist programmers tend to use one-letter variable names in my
 experience. Makes for... interesting reading of their code.

 D is great for physics programming. Now you can have much, much more than
 26
 variables :)

 True, though mostly, you'd just change to using greek letters, right?
 Finally we can use θ for angles, alias ulong ℕ...

That might actually make it /more/ readable in some cases. -- James Miller
Mar 12 2012
prev sibling parent Boris Wang <kona.ming gmail.com> writes:

The ABI of Linux is good enough; it's based on a mature OS: UNIX.

Forget the name of D; the name is not important.

There is no need for a replacement for C in the OS area, because C is the high-level language that best matches the current CPU architecture. Why is C++ so complex? Because the CPU architecture is not object oriented.

Energy saving, high performance, effective development: focusing on these areas is the market for D, a "half" system programming language. And the big point for growth: being able to call C/C++ binaries from D source.

Sorry about digressing from the subject.


2012/3/10 Walter Bright <newshound2 digitalmars.com>

 On 3/9/2012 3:09 PM, Nick Sabalausky wrote:

 Keep in mind, too, that Linux has decades of legacy and millions of users.
 That's a *very* different situation from Phobos. Apples and oranges.

Linux has had a habit of not breaking existing code from decades ago. I think that is one reason why it has millions of users. Remember, every time you break existing code you reset your user base back to zero. I'm *still* regularly annoyed by the writefln => writeln change in D1 to D2, and I agreed to that change. Grrrr.

Apr 01 2012
prev sibling next sibling parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Fri, Mar 09, 2012 at 11:46:24PM +0100, Alex Rønne Petersen wrote:
 On 09-03-2012 23:32, Walter Bright wrote:
This statement is from Linus Torvalds about breaking binary compatibility:

https://lkml.org/lkml/2012/3/8/495

While I don't think we need to worry so much at the moment about
breaking binary compatibility with new D releases, we do have a big
problem with breaking source code compatibility.

This is why we need to have a VERY high bar for breaking changes.

If we want to start being able to avoid breaking changes, we *really* need to finally deprecate the stuff that's been slated for deprecation for ages...

Does that include std.stdio and std.stream? When are we expecting std.io to be ready? IMHO, this is one major change that needs to happen sooner rather than later. The current lack of interoperability between std.stdio and std.stream is a big detraction from Phobos' overall quality. T -- Жил-был король когда-то, при нём блоха жила.
Mar 09 2012
prev sibling next sibling parent reply bearophile <bearophileHUGS lycos.com> writes:
Walter:

 While I don't think we need to worry so much at the moment about breaking
binary 
 compatibility with new D releases, we do have a big problem with breaking
source 
 code compatibility.
 
 This is why we need to have a VERY high bar for breaking changes.

D will naturally progressively slow down the rhythm of its new breaking changes, but even very old languages introduce some breaking changes (see some of the changes in C++11), and D is not yet mature enough to slow them sharply down now. In D there are still some unfinished parts, and finishing them will probably break something. Bug fixes are now breaking things in every release, and I don't want to see an arbitrary stop to those improvements now.

If you stop breaking changes, we'll not be able to fix a nasty situation like "Some untidy attributes": http://d.puremagic.com/issues/show_bug.cgi?id=3934 In bug 3934 there is a lot of stuff that I'd love to break in the code of the silly programmers that have used it.

Another small breaking change, one that I have argued about for years: http://d.puremagic.com/issues/show_bug.cgi?id=7444

Or this one, "Default arguments of out and ref arguments": http://d.puremagic.com/issues/show_bug.cgi?id=5850 If today people write code with default arguments for ref/out arguments, forbidding those default arguments will break their code.

"A bug-prone situation with AAs": http://d.puremagic.com/issues/show_bug.cgi?id=3825

Currently string literal concatenation is accepted like this: auto s = "foo" "bar"; But you have agreed to disallow this statically (and require a ~), after a request of mine. If today people write such code, this will be a breaking change for their code.

Do you want me to list some more bugs that are small breaking changes? I am willing to discuss each one of them. Stopping all those improvements right now, at once, is exceptionally unwise.
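A hedged sketch (not code from the bug report) of why implicit string literal concatenation is bug-prone: a missing comma in an array literal silently fuses two elements, whereas an explicit ~ keeps the intent visible.

```d
import std.stdio;

void main()
{
    auto colors = [
        "red",
        "green"  // missing comma: "green" "blue" concatenates...
        "blue",  // ...into the single element "greenblue"
    ];
    writeln(colors.length); // 2, not the intended 3

    auto s = "foo" ~ "bar"; // explicit concatenation, always unambiguous
    writeln(s);             // foobar
}
```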
Mar 09 2012
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 3/9/2012 3:14 PM, bearophile wrote:
 D will naturally progressively slow down the rhythm of its new breaking
 changes, but even very old languages introduce some breaking changes (see
 some of the changes in C++11),

What breaking changes are there in C++11, other than dumping export?
Mar 09 2012
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 3/9/12 8:35 PM, Walter Bright wrote:
 On 3/9/2012 3:14 PM, bearophile wrote:
 D will naturally progressively slow down the rhythm of its new breaking
 changes, but even very old languages introduce some breaking changes (see
 some of the changes in C++11),

What breaking changes are there in C++11, other than dumping export?

Deprecating exception specifications :o). Andrei
Mar 09 2012
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 3/9/2012 9:14 PM, Andrei Alexandrescu wrote:
 On 3/9/12 8:35 PM, Walter Bright wrote:
 On 3/9/2012 3:14 PM, bearophile wrote:
 D will naturally progressively slow down the rhythm of its new breaking
 changes, but even very old languages introduce some breaking changes (see
 some of the changes in C++11),

What breaking changes are there in C++11, other than dumping export?

Deprecating exception specifications :o).

I don't think that broke any existing code, because there wasn't any :-)

Consider that I and some others agitated for dumping trigraphs. A couple of people vociferously claimed that their entire code base depended on them, so they stayed in. Never mind that that code base could have been easily accommodated by writing a literally trivial filter.

But now, to support raw string literals, C++11 has mucked up trigraphs. It's no longer possible to deprecate them without writing a filter that is pretty much a full-blown C++ compiler itself.
Mar 09 2012
parent reply "Nick Sabalausky" <a a.a> writes:
"Walter Bright" <newshound2 digitalmars.com> wrote in message 
news:jjeqak$f3i$1 digitalmars.com...
 On 3/9/2012 9:14 PM, Andrei Alexandrescu wrote:
 On 3/9/12 8:35 PM, Walter Bright wrote:
 On 3/9/2012 3:14 PM, bearophile wrote:
 D will naturally progressively slow down the rhythm of its new breaking
 changes, but even very old languages introduce some breaking changes 
 (see
 some of the changes in C++11),

What breaking changes are there in C++11, other than dumping export?

Deprecating exception specifications :o).

I don't think that broke any existing code, because there wasn't any :-)

Consider that I and some others agitated for dumping trigraphs. A couple of people vociferously claimed that their entire code base depended on them, so they stayed in. Never mind that that code base could have been easily accommodated by writing a literally trivial filter.

But now, to support raw string literals, C++11 has mucked up trigraphs. It's no longer possible to deprecate them without writing a filter that is pretty much a full-blown C++ compiler itself.

So making improvements that involve trivially-handled breaking changes is good for C++ but bad for D?
Mar 09 2012
parent Walter Bright <newshound2 digitalmars.com> writes:
On 3/9/2012 10:48 PM, Nick Sabalausky wrote:
 So making improvements that involve trivially-handled breaking changes is
 good for C++ but bad for D?

It's always a judgment call.
Mar 10 2012
prev sibling parent Jonathan M Davis <jmdavisProg gmx.com> writes:
On Friday, March 09, 2012 21:54:28 Walter Bright wrote:
 Deprecating exception specifications :o).

I don't think that broke any existing code, because there wasn't any :-)

Sadly, there's code that will break because of that where _I_ work. For a while, they were recommended/required, because some of the people dictating coding standards didn't understand them properly. Fortunately, much of the newer code doesn't use them, but there's still plenty that does. I think that it stems primarily from Java programmers expecting them to work as checked exceptions and not realizing how they really work.

The only thing that will likely stop code from breaking where I work, due to the deprecation of exception specifications, is the fact that it'll be years before we use C++11, and it'll probably only be newer projects that get compiled that way when it _does_ finally happen.

- Jonathan M Davis
Mar 09 2012
prev sibling next sibling parent =?ISO-8859-1?Q?Alex_R=F8nne_Petersen?= <xtzgzorex gmail.com> writes:
On 10-03-2012 05:35, Walter Bright wrote:
 On 3/9/2012 3:14 PM, bearophile wrote:
 D will naturally progressively slow down the rhythm of its new breaking
 changes, but even very old languages introduce some breaking changes (see
 some of the changes in C++11),

What breaking changes are there in C++11, other than dumping export?

- auto as a storage specifier.
- auto_ptr.
- Exception specifications.
- std::unary/binary_function.

--
- Alex
Mar 10 2012
prev sibling next sibling parent bearophile <bearophileHUGS lycos.com> writes:
Walter:

 What breaking changes are there in C++11, other than dumping export?

The end of the talk about Clang at GoingNative 2012 (just before the questions) has shown one breaking change of C++:

#include <iostream>

struct S { int n; };
struct X { X(int) {} };

void f(void*) { std::cerr << "Pointer!\n"; }
void f(X)     { std::cerr << "X!\n"; }

int main() { f(S().n); }

% clang++ -std=c++11 -g -o cxx11-4 cxx11-4.cpp
% ./cxx11-4
Pointer!
% clang++ -std=c++98 -g -o cxx11-4 cxx11-4.cpp
% ./cxx11-4
X!

But in the end I don't care about C++11 here; my post was about D and several bug reports that are able to break user code once fixed or improved. If you raise the bar too much for breaking changes now, most of those things will never be fixed or done. And this is _not_ acceptable for me.

Bye,
bearophile
Mar 10 2012
prev sibling parent Timon Gehr <timon.gehr gmx.ch> writes:
On 03/10/2012 05:35 AM, Walter Bright wrote:
 On 3/9/2012 3:14 PM, bearophile wrote:
 D will naturally progressively slow down the rhythm of its new breaking
 changes, but even very old languages introduce some breaking changes (see
 some of the changes in C++11),

What breaking changes are there in C++11, other than dumping export?

I am only aware of those, but I don't use C++ a lot, so there might be more:

- introduction of constexpr messed with the overload resolution rules.
- '>>' can no longer be used inside a template argument expression.
Mar 10 2012
prev sibling next sibling parent reply "Adam D. Ruppe" <destructionator gmail.com> writes:
On Friday, 9 March 2012 at 22:32:59 UTC, Walter Bright wrote:
 This is why we need to have a VERY high bar for breaking 
 changes.

Please remember this if someone proposes enforcing property by default. The -property switch is a big mistake that breaks a lot of code.
Mar 09 2012
parent Walter Bright <newshound2 digitalmars.com> writes:
On 3/9/2012 4:12 PM, Adam D. Ruppe wrote:
 The -property switch is a big mistake that breaks a
 lot of code.

It was done as a switch to see how much it would break and if that was worth it.
Mar 09 2012
prev sibling next sibling parent "David Nadlinger" <see klickverbot.at> writes:
On Friday, 9 March 2012 at 22:32:59 UTC, Walter Bright wrote:
 This is why we need to have a VERY high bar for breaking 
 changes.

Oh, and how do you intend to accomplish that with things like bug 314 still being open (and code relying on the buggy behavior), and not even the language being finalized (think: shared)? David
Mar 09 2012
prev sibling next sibling parent "Jonathan M Davis" <jmdavisProg gmx.com> writes:
On Saturday, March 10, 2012 01:12:34 Adam D. Ruppe wrote:
 On Friday, 9 March 2012 at 22:32:59 UTC, Walter Bright wrote:
 This is why we need to have a VERY high bar for breaking
 changes.

Please remember this if someone proposes enforcing property by default. The -property switch is a big mistake that breaks a lot of code.

As I understand it, it's like override, it's being phased in, and it _will_ become the normal behavior. There are also a number of things in the language that are supposed to be deprecated but have been sitting around for ages without actually being deprecated (e.g. delete). Walter's sentiments may be good, but there are a number of things which are still in a transitional phase and _will_ end up breaking code. Of course, I'd argue that stuff like deprecating/removing delete and fully enabling override and property _do_ pass the very high bar. I think that his biggest complaint is minor changes (such as changing function names) rather than the large changes that are still being made. - Jonathan M Davis
Mar 09 2012
prev sibling next sibling parent "Jonathan M Davis" <jmdavisProg gmx.com> writes:
On Friday, March 09, 2012 16:15:13 H. S. Teoh wrote:
 On Sat, Mar 10, 2012 at 01:02:35AM +0100, Andrej Mitrovic wrote:
 Linus would probably hate D just as much as he hates C++. :p

Yeah... I can just imagine his eye scanning the description of D and stopping right at the word "GC", and immediately writing a flaming vitriolic post to LKML about how a GC is the absolutely worst thing one could possibly conceive of putting inside a kernel, and that any kernel developer caught flirting with the idea of using D ought to have all kernel patches ignored from that point on.

He hates function overloading! I question that he'd ever be happy with anything other than C. And he's so specialized in what he works on that I think that a number of his opinions are completely inapplicable to the majority of programmers. Some of what he says is very valuable, but he's a very opinionated person whose opinions often don't line up with the rest of the programming world. If you'll notice, Walter sometimes has similar issues simply due to the kind of stuff he works on (e.g. thinking that the fact that you could run your program in a debugger to see a segfault was enough (rather than getting some sort of stacktrace with a segfault), which works great for compilers, but works horribly for programs that run for weeks at a time). We all have our biases based on what we've worked on. Linus just so happens to be very famous and _very_ specialized in the type of stuff that he works on. - Jonathan M Davis
Mar 09 2012
prev sibling next sibling parent "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Fri, 09 Mar 2012 19:12:34 -0500, Adam D. Ruppe  
<destructionator gmail.com> wrote:

 On Friday, 9 March 2012 at 22:32:59 UTC, Walter Bright wrote:
 This is why we need to have a VERY high bar for breaking changes.

Please remember this if someone proposes enforcing property by default.

Clears the bar with room to spare IMO.

Not to mention it's not just a proposal, but in print in TDPL.

(dons flame war proof suit)

-Steve
Mar 09 2012
prev sibling next sibling parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Fri, Mar 09, 2012 at 07:26:48PM -0500, Steven Schveighoffer wrote:
 On Fri, 09 Mar 2012 19:12:34 -0500, Adam D. Ruppe
 <destructionator gmail.com> wrote:
 
On Friday, 9 March 2012 at 22:32:59 UTC, Walter Bright wrote:
This is why we need to have a VERY high bar for breaking changes.

Please remember this if someone proposes enforcing property by default.


I propose enabling property by default.
 Clears the bar with room to spare IMO.
 
 Not to mention it's not just a proposal, but in print in TDPL .
 
 (dons flame war proof suit)

I don't see what's there to flamewar about. AFAIK property enforcement is going to happen sooner or later.

T

--
There is no gravity. The earth sucks.
Mar 09 2012
prev sibling next sibling parent "Adam D. Ruppe" <destructionator gmail.com> writes:
On Saturday, 10 March 2012 at 00:25:23 UTC, Jonathan M Davis 
wrote:
 As I understand it, it's like override, it's being phased in, 
 and it _will_ become the normal behavior.

A planned or inevitable big mistake that will break piles of code in a painful way is still a big mistake that will break piles of code in a painful way.
 and  property _do_ pass the very high bar.

If the height of the bar is based on how much code it breaks, sure. If it based on the benefit it actually brings us, no, absolutely not. property is the biggest name bikeshed of them all.
Mar 09 2012
prev sibling next sibling parent "Jonathan M Davis" <jmdavisProg gmx.com> writes:
On Friday, March 09, 2012 16:40:37 H. S. Teoh wrote:
 On Fri, Mar 09, 2012 at 07:26:48PM -0500, Steven Schveighoffer wrote:
 On Fri, 09 Mar 2012 19:12:34 -0500, Adam D. Ruppe
 
 <destructionator gmail.com> wrote:
On Friday, 9 March 2012 at 22:32:59 UTC, Walter Bright wrote:
This is why we need to have a VERY high bar for breaking changes.

Please remember this if someone proposes enforcing property by default.


I propose enabling property by default.
 Clears the bar with room to spare IMO.
 
 Not to mention it's not just a proposal, but in print in TDPL .
 
 (dons flame war proof suit)

[...] I don't see what's there to flamewar about. AFAIK property enforcement is going to happen sooner or later.

Yes. The problem is that some people don't like property and don't want it enforced, whereas others think it's a big improvement. From the sounds of it, Adam thinks that it's bad, whereas Steven thinks that it's good. Personally, I'm _definitely_ in favor of property enforcement. Regardless, it's in TDPL, and the current plan is to enforce it. It just hasn't reached that point yet, since we're still in the transition stage from not having property at all to having property and having it enforced. - Jonathan M Davis
Mar 09 2012
prev sibling next sibling parent "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Fri, 09 Mar 2012 19:40:37 -0500, H. S. Teoh <hsteoh quickfur.ath.cx>  
wrote:

 On Fri, Mar 09, 2012 at 07:26:48PM -0500, Steven Schveighoffer wrote:

 (dons flame war proof suit)

I don't see what's there to flamewar about.

You're new... :) -Steve
Mar 09 2012
prev sibling next sibling parent "Adam D. Ruppe" <destructionator gmail.com> writes:
On Saturday, 10 March 2012 at 00:48:56 UTC, Jonathan M Davis 
wrote:
 From the  sounds of it, Adam thinks that it's bad

Indeed. I have an advantage here though: it is an objective fact that -property breaks a lot of existing D code. We can (and have) argue(d) all day about what, if any, improvements strict enforcement actually brings.
Mar 09 2012
prev sibling next sibling parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Fri, Mar 09, 2012 at 07:51:12PM -0500, Steven Schveighoffer wrote:
 On Fri, 09 Mar 2012 19:40:37 -0500, H. S. Teoh
 <hsteoh quickfur.ath.cx> wrote:
 
On Fri, Mar 09, 2012 at 07:26:48PM -0500, Steven Schveighoffer wrote:

(dons flame war proof suit)

I don't see what's there to flamewar about.

You're new... :)

Not nearly new enough to not be partially responsible for that huge gigantic thread about exceptions. :-) (Which I had no trouble following, btw, even though it's clear that a lot of people were totally lost after the first 100 posts or so. Mutt is just *that* cool. OTOH, when the thread nesting depth exceeded even mutt's ability to display the thread lines (because it was overflowing my terminal width), the thread started to fizzle out. I'm guessing that's when the rest of us mutt users bailed out. :-P) T -- "Outlook not so good." That magic 8-ball knows everything! I'll ask about Exchange Server next. -- (Stolen from the net)
Mar 09 2012
prev sibling next sibling parent "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Fri, 09 Mar 2012 20:04:49 -0500, H. S. Teoh <hsteoh quickfur.ath.cx>  
wrote:

 On Fri, Mar 09, 2012 at 07:51:12PM -0500, Steven Schveighoffer wrote:
 On Fri, 09 Mar 2012 19:40:37 -0500, H. S. Teoh
 <hsteoh quickfur.ath.cx> wrote:

On Fri, Mar 09, 2012 at 07:26:48PM -0500, Steven Schveighoffer wrote:

(dons flame war proof suit)

I don't see what's there to flamewar about.

You're new... :)

Not nearly new enough to not be partially responsible for that huge gigantic thread about exceptions. :-) (Which I had no trouble following, btw, even though it's clear that a lot of people were totally lost after the first 100 posts or so. Mutt is just *that* cool. OTOH, when the thread nesting depth exceeded even mutt's ability to display the thread lines (because it was overflowing my terminal width), the thread started to fizzle out. I'm guessing that's when the rest of us mutt users bailed out. :-P)

No, what I meant is that property is a sore subject that invariably starts a time-consuming, never-winning flame war any time someone mentions how it should or shouldn't be mandatory.

-Steve
prev sibling next sibling parent "Martin Nowak" <dawg dawgfoto.de> writes:
On Fri, 09 Mar 2012 23:32:58 +0100, Walter Bright  
<newshound2 digitalmars.com> wrote:

 This statement is from Linus Torvalds about breaking binary  
 compatibility:

 https://lkml.org/lkml/2012/3/8/495

 While I don't think we need to worry so much at the moment about  
 breaking binary compatibility with new D releases, we do have a big  
 problem with breaking source code compatibility.

 This is why we need to have a VERY high bar for breaking changes.

Definitely. I think we should gather bugs that require breaking changes and prioritize them.

IMHO it's also the main reason to continue fixing imports and protection. By now people mostly still appreciate those fixes, but the pain they cause will only ever grow.

ABI: http://d.puremagic.com/issues/show_bug.cgi?id=7469
Mar 09 2012
prev sibling next sibling parent "ludi" <my email.com> writes:
On Friday, 9 March 2012 at 22:32:59 UTC, Walter Bright wrote:
 This statement is from Linus Torvalds about breaking binary 
 compatibility:

 https://lkml.org/lkml/2012/3/8/495

 While I don't think we need to worry so much at the moment 
 about breaking binary compatibility with new D releases, we do 
 have a big problem with breaking source code compatibility.

 This is why we need to have a VERY high bar for breaking 
 changes.

I don't think it is that straightforward to conclude anything about source compatibility in a programming language from this argument about binary compatibility in a kernel.
Mar 09 2012
prev sibling next sibling parent Mantis <mail.mantis.88 gmail.com> writes:
10.03.2012 3:01, Adam D. Ruppe writes:
 On Saturday, 10 March 2012 at 00:48:56 UTC, Jonathan M Davis wrote:
 From the  sounds of it, Adam thinks that it's bad

Indeed. I have an advantage here though: it is an objective fact that -property breaks a lot of existing D code. We can (and have) argue(d) all day about what, if any, improvements strict enforcement actually brings.

Arguments are same as before, I believe:

alias int delegate() dg_t;

@property dg_t foo() { return { return 42; }; }

int main() {
    auto a = foo();
    return 0;
}

What should be the type of a?
Mar 09 2012
prev sibling next sibling parent "Adam D. Ruppe" <destructionator gmail.com> writes:
On Saturday, 10 March 2012 at 03:33:42 UTC, Mantis wrote:
 What should be the type of a?

int. I'm for using property to disambiguate in any case. That's a clear benefit. I'm against the strict enforcement where it forces you to choose parens or no parens at the declaration site (what -property does). That's just pointless bikeshedding that breaks perfectly good code.
Mar 09 2012
prev sibling next sibling parent reply Gour <gour atmarama.net> writes:

On Fri, 09 Mar 2012 14:32:58 -0800
Walter Bright <newshound2 digitalmars.com> wrote:

Dear Walter,

 This is why we need to have a VERY high bar for breaking changes.

I agree with your statement above, but, personally, not having legacy D code to take care of, I'm leaving that to the more expert users here to discuss. I'm more concerned with another thing, and that is a roadmap.

It would be nice if D had some kind of roadmap with several milestones, so that users can have some rough idea (it's not required that the milestones be carved in stone) of when to expect that some things will be fixed and/or new features added/implemented.

Sincerely,
Gour

--
As the embodied soul continuously passes, in this body, from boyhood to youth to old age, the soul similarly passes into another body at death. A sober person is not bewildered by such a change.

http://atmarama.net | Hlapicina (Croatia) | GPG: 52B5C810
Mar 10 2012
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 3/10/2012 12:06 AM, Gour wrote:
 It would be nice if D would have some kind of roadmap with several
 milestones so that users can have some rough (it's not required that
 milestones are carved in stone) idea when to expect that some things
 will be fixed and/or new features added/implemented.

Right now the priority is eliminating high priority bugs from bugzilla, not implementing new features.
Mar 10 2012
parent Walter Bright <newshound2 digitalmars.com> writes:
On 3/10/2012 11:02 AM, H. S. Teoh wrote:
 Speaking of which, how's our progress on that front? What are the major
 roadblocks still facing us?

http://d.puremagic.com/issues/buglist.cgi?query_format=advanced&bug_severity=regression&bug_status=NEW&bug_status=ASSIGNED&bug_status=REOPENED
Mar 10 2012
prev sibling next sibling parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Sat, Mar 10, 2012 at 10:53:22AM -0800, Walter Bright wrote:
 On 3/10/2012 12:06 AM, Gour wrote:
It would be nice if D would have some kind of roadmap with several
milestones so that users can have some rough (it's not required that
milestones are carved in stone) idea when to expect that some things
will be fixed and/or new features added/implemented.

Right now the priority is eliminating high priority bugs from bugzilla, not implementing new features.

Speaking of which, how's our progress on that front? What are the major roadblocks still facing us? T -- He who does not appreciate the beauty of language is not worthy to bemoan its flaws.
Mar 10 2012
prev sibling next sibling parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Sat, Mar 10, 2012 at 11:39:54AM -0800, Walter Bright wrote:
 On 3/10/2012 11:02 AM, H. S. Teoh wrote:
Speaking of which, how's our progress on that front? What are the
major roadblocks still facing us?

http://d.puremagic.com/issues/buglist.cgi?query_format=advanced&bug_severity=regression&bug_status=NEW&bug_status=ASSIGNED&bug_status=REOPENED

Looks quite promising to me. Can we expect dmd 2.060 Real Soon Now(tm)? :-) T -- "Uhh, I'm still not here." -- KD, while "away" on ICQ.
Mar 10 2012
parent "Nick Sabalausky" <a a.a> writes:
"H. S. Teoh" <hsteoh quickfur.ath.cx> wrote in message 
news:mailman.431.1331409456.4860.digitalmars-d puremagic.com...
 On Sat, Mar 10, 2012 at 11:39:54AM -0800, Walter Bright wrote:
 On 3/10/2012 11:02 AM, H. S. Teoh wrote:
Speaking of which, how's our progress on that front? What are the
major roadblocks still facing us?

http://d.puremagic.com/issues/buglist.cgi?query_format=advanced&bug_severity=regression&bug_status=NEW&bug_status=ASSIGNED&bug_status=REOPENED

Looks quite promising to me. Can we expect dmd 2.060 Real Soon Now(tm)? :-)

No. Unfortunately, 2.059 will have to come first. ;)
Mar 10 2012
prev sibling parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Sat, Mar 10, 2012 at 02:59:28PM -0500, Nick Sabalausky wrote:
 "H. S. Teoh" <hsteoh quickfur.ath.cx> wrote in message 
 news:mailman.431.1331409456.4860.digitalmars-d puremagic.com...
 On Sat, Mar 10, 2012 at 11:39:54AM -0800, Walter Bright wrote:
 On 3/10/2012 11:02 AM, H. S. Teoh wrote:
Speaking of which, how's our progress on that front? What are the
major roadblocks still facing us?

http://d.puremagic.com/issues/buglist.cgi?query_format=advanced&bug_severity=regression&bug_status=NEW&bug_status=ASSIGNED&bug_status=REOPENED

Looks quite promising to me. Can we expect dmd 2.060 Real Soon Now(tm)? :-)

No. Unfortnately, 2.059 will have to come first. ;)

Argh! I didn't realize dmd bumped its version in git immediately after a release, rather than before. At my day job, we do it the other way round (make a bunch of changes, test it, then bump the version once we decide it's ready to ship). T -- Always remember that you are unique. Just like everybody else. -- despair.com
Mar 10 2012
parent "Nick Sabalausky" <a a.a> writes:
"H. S. Teoh" <hsteoh quickfur.ath.cx> wrote in message 
news:mailman.435.1331411268.4860.digitalmars-d puremagic.com...
 On Sat, Mar 10, 2012 at 02:59:28PM -0500, Nick Sabalausky wrote:
 "H. S. Teoh" <hsteoh quickfur.ath.cx> wrote in message
 news:mailman.431.1331409456.4860.digitalmars-d puremagic.com...
 On Sat, Mar 10, 2012 at 11:39:54AM -0800, Walter Bright wrote:
 On 3/10/2012 11:02 AM, H. S. Teoh wrote:
Speaking of which, how's our progress on that front? What are the
major roadblocks still facing us?

http://d.puremagic.com/issues/buglist.cgi?query_format=advanced&bug_severity=regression&bug_status=NEW&bug_status=ASSIGNED&bug_status=REOPENED

Looks quite promising to me. Can we expect dmd 2.060 Real Soon Now(tm)? :-)

No. Unfortnately, 2.059 will have to come first. ;)

Argh! I didn't realize dmd bumped its version in git immediately after a release, rather than before. At my day job, we do it the other way round (make a bunch of changes, test it, then bump the version once we decide it's ready to ship).

I honestly don't like it either way. For my stuff, I bump it just before *and* just after a release.

If you see something of mine with a version like "vX.Y", then it's a release version. If it's "vX.Y.1", then it's a development snapshot that could be anywhere between the next and previous "vX.Y". For instance, v0.5.1 would be a dev snapshot that could be anywhere between the v0.5 and v0.6 releases.

Once I reach v1.0, then whenever I need to do a "vX.Y.Z" release, the 'Z' part will always be an even number for releases and odd for dev snapshots (unless I decide to just add an extra fourth number instead). (Prior to v1.0, I don't think there's much point in bothering with a full "vX.Y.Z": just bump the Y since, by definition, you can always expect breaking changes prior to v1.0.)

I think it's terrible for dev and release versions to share the same version number.
Mar 10 2012
prev sibling next sibling parent reply Jonathan M Davis <jmdavisProg gmx.com> writes:
On Saturday, March 10, 2012 10:12:44 Alex Rønne Petersen wrote:
 On 10-03-2012 10:09, so wrote:
 On Saturday, 10 March 2012 at 08:53:23 UTC, Alex Rønne Petersen wrote:

 In all fairness, a stop-the-world GC in a kernel probably *is* a
 horrible idea.

 For us (desktop users), it would not differ much would it now?

 Linux was never intended to be a pure desktop kernel. It's used widely
 in server and embedded machines.

And actually, when _responsiveness_ is one of the key features that a desktop OS requires, a stop-the-world GC in a desktop would probably be _worse_ than one in a server.

- Jonathan M Davis
Mar 10 2012
next sibling parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Sat, Mar 10, 2012 at 02:23:15PM -0500, Nick Sabalausky wrote:
 "Alex Rnne Petersen" <xtzgzorex gmail.com> wrote in message 
 news:jjg7dq$24q$1 digitalmars.com...
 On 10-03-2012 18:58, H. S. Teoh wrote:
 Then you must be running a very different Linux from the one I use.
 In my experience, it's Windows that's an order of magnitude less
 responsive due to constant HD thrashing (esp. on bootup, and then
 periodically thereafter) and too much eye-candy.

This. On the other hand, OS X has all the eye candy and is still extremely responsive. ;)

That's because they cram [their] hardware upgrades down your throat every couple years.

Yikes. That would *not* sit well with me. Before my last upgrade, my PC was at least 10 years old. (And the upgrade before that was at least 5 years prior.) Last year I finally replaced my 10 y.o. PC with a brand new AMD hexacore system. The plan being to not upgrade for at least the next 10 years, preferably more. :-) (Maybe by then, Intel's currently-experimental 80-core system would be out in the consumer market, and I'll be a really happy geek sitting in the corner watching 1000 instances of povray cranking out images at lightning speed like there's no tomorrow.) T -- "Outlook not so good." That magic 8-ball knows everything! I'll ask about Exchange Server next. -- (Stolen from the net)
Mar 10 2012
prev sibling next sibling parent reply Jonathan M Davis <jmdavisProg gmx.com> writes:
On Saturday, March 10, 2012 11:49:22 H. S. Teoh wrote:
 Yikes. That would *not* sit well with me. Before my last upgrade, my PC
 was at least 10 years old. (And the upgrade before that was at least 5
 years prior.) Last year I finally replaced my 10 y.o. PC with a brand
 new AMD hexacore system. The plan being to not upgrade for at least the
 next 10 years, preferably more. :-)

LOL. I'm the complete opposite. I seem to end up upgrading my computer every 2 or 3 years. I wouldn't be able to stand being on an older computer that long. I'm constantly annoyed by how slow my computer is no matter how new it is. Of course, I do tend to stress my machine quite a lot by having a ton of stuff open all the time and doing CPU-intensive stuff like transcoding video, and how you use your computer is a definite factor in how much value there is in upgrading. - Jonathan M Davis
Mar 10 2012
parent "Nick Sabalausky" <a a.a> writes:
"Jonathan M Davis" <jmdavisProg gmx.com> wrote in message 
news:mailman.428.1331409260.4860.digitalmars-d puremagic.com...
 On Saturday, March 10, 2012 11:49:22 H. S. Teoh wrote:
 Yikes. That would *not* sit well with me. Before my last upgrade, my PC
 was at least 10 years old. (And the upgrade before that was at least 5
 years prior.) Last year I finally replaced my 10 y.o. PC with a brand
 new AMD hexacore system. The plan being to not upgrade for at least the
 next 10 years, preferably more. :-)

LOL. I'm the complete opposite. I seem to end up upgrading my computer every 2 or 3 years. I wouldn't be able to stand being on an older computer that long. I'm constantly annoyed by how slow my computer is no matter how new it is. Of course, I do tend to stress my machine quite a lot by having a ton of stuff open all the time and doing CPU-intensive stuff like transcoding video, and how you use your computer is a definite factor in how much value there is in upgrading.

With the exception of notably-expensive things like video processing, ever since CPUs hit the GHz mark (and arguably for some time before that), there has been *no* reason to blame slowness on anything other than shitty software. My Apple IIc literally had more responsive text entry than at least half of the textarea boxes on the modern web. Slowness is *not* a hardware issue anymore, and hasn't been for a long time. You know what *really* happens when you upgrade to a computer that's, say, twice as fast with twice as much memory? About 90% of the so-called "programmers" out there decide "Hey, now I can get away with my software being twice as slow and eat up twice as much memory! And it's all on *my user's* dime!" You're literally paying for programmer laziness. I just stick with software that isn't bloated. I get just as much speed, but without all that cost. (Again, there are obviously exceptions, like video processing, DNA processing, etc.)
Mar 10 2012
prev sibling next sibling parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Sat, Mar 10, 2012 at 11:52:47AM -0800, Jonathan M Davis wrote:
 On Saturday, March 10, 2012 11:49:22 H. S. Teoh wrote:
 Yikes. That would *not* sit well with me. Before my last upgrade, my
 PC was at least 10 years old. (And the upgrade before that was at
 least 5 years prior.) Last year I finally replaced my 10 y.o. PC
 with a brand new AMD hexacore system. The plan being to not upgrade
 for at least the next 10 years, preferably more. :-)

LOL. I'm the complete opposite. I seem to end up upgrading my computer every 2 or 3 years. I wouldn't be able to stand being on an older computer that long. I'm constantly annoyed by how slow my computer is no matter how new it is. Of course, I do tend to stress my machine quite a lot by having a ton of stuff open all the time and doing CPU-intensive stuff like transcoding video, and how you use your computer is a definite factor in how much value there is in upgrading.

True. But I found Linux far more superior in terms of being usable on very old hardware. I can't imagine the pain of trying to run Windows 7 on, say, a 5 y.o. PC (if it will even let you install it on something that old!). I used to run CPU-intensive stuff too, by using 'at' to schedule it to run overnight. :-) Although, I have to admit the reason for my last upgrade was because I was doing lots of povray rendering, and it was getting a bit too slow for my tastes. It's no fun at all if you had to wait 2 hours just to find out you screwed up some parameters in your test render. Imagine if you had to wait 2 hours to know the result of every 1 line code change. T -- Lottery: tax on the stupid. -- Slashdotter
Mar 10 2012
parent reply "Nick Sabalausky" <a a.a> writes:
"H. S. Teoh" <hsteoh quickfur.ath.cx> wrote in message 
news:mailman.437.1331414346.4860.digitalmars-d puremagic.com...
 True. But I found Linux far superior in terms of being usable on
 very old hardware.

There have been exceptions to that: About 10-12 years ago, GNOME (or at least Nautilus) and KDE were *insanely* bloated, to the point of making Win2k/XP seem ultra-lean.
Mar 10 2012
parent Sean Cavanaugh <WorksOnMyMachine gmail.com> writes:
On 3/10/2012 4:22 PM, Nick Sabalausky wrote:
 "H. S. Teoh"<hsteoh quickfur.ath.cx>  wrote in message
 news:mailman.437.1331414346.4860.digitalmars-d puremagic.com...
 True. But I found Linux far superior in terms of being usable on
 very old hardware.

There have been exceptions to that: About 10-12 years ago, GNOME (or at least Nautilus) and KDE were *insanely* bloated, to the point of making Win2k/XP seem ultra-lean.

Both the KDE and GNOME UIs, and apps using these UIs, still feel very sluggish to me on a modern machine. I don't get this feeling at all on Win7, even on a much slower machine (my laptop, for instance).
Mar 10 2012
prev sibling next sibling parent reply Jonathan M Davis <jmdavisProg gmx.com> writes:
On Saturday, March 10, 2012 16:08:28 Nick Sabalausky wrote:
 With the exception of notably-expensive things like video processing, ever
 since CPUs hit the GHz mark (and arguably for some time before that), there
 has been *no* reason to blame slowness on anything other than shitty
 software.
 
 My Apple IIc literally had more responsive text entry than at least half of
 the textarea boxes on the modern web. Slowness is *not* a hardware issue
 anymore, and hasn't been for a long time.
 
 You know what *really* happens when you upgrade to a computer that's, say,
 twice as fast with twice as much memory? About 90% of the so-called
 "programmers" out there decide "Hey, now I can get away with my software
 being twice as slow and eat up twice as much memory! And it's all on *my
 user's* dime!" You're literally paying for programmer laziness.
 
 I just stick with software that isn't bloated. I get just as much speed, but
 without all that cost.

Yeah. CPU is not the issue. I/O and/or memory tends to be the bottleneck for most stuff - at least for me. Getting a faster CPU wouldn't make my computer any more responsive.
 (Again, there are obviously exceptions, like video processing, DNA
 processing, etc.)

I do plenty of that sort of thing though, so CPU really does matter quite a bit to me, even if it doesn't affect my normal computing much. When transcoding video, CPU speed makes a _huge_ difference. - Jonathan M Davis
Mar 10 2012
parent "Nick Sabalausky" <a a.a> writes:
"Jonathan M Davis" <jmdavisProg gmx.com> wrote in message 
news:mailman.438.1331414665.4860.digitalmars-d puremagic.com...
 On Saturday, March 10, 2012 16:08:28 Nick Sabalausky wrote:
 With the exception of notably-expensive things like video processing, 
 ever
 since CPUs hit the GHz mark (and arguably for some time before that), 
 there
 has been *no* reason to blame slowness on anything other than shitty
 software.

 My Apple IIc literally had more responsive text entry than at least half 
 of
 the textarea boxes on the modern web. Slowness is *not* a hardware issue
 anymore, and hasn't been for a long time.

 You know what *really* happens when you upgrade to a computer that's, 
 say,
 twice as fast with twice as much memory? About 90% of the so-called
 "programmers" out there decide "Hey, now I can get away with my software
 being twice as slow and eat up twice as much memory! And it's all on *my
 user's* dime!" You're literally paying for programmer laziness.

 I just stick with software that isn't bloated. I get just as much speed, 
 but
 without all that cost.

Yeah. CPU is not the issue. I/O and/or memory tends to be the bottleneck for most stuff - at least for me. Getting a faster CPU wouldn't make my computer any more responsive.

Well, all those busses, I/O devices, etc, are still a lot faster than they were back in, say, the 486 or Pentium 1 days, and things were plenty responsive then, too. But, I do agree, like you say, it *does* depend on what you're doing. If you're doing a lot of video as you say, then I completely understand.
Mar 10 2012
prev sibling next sibling parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Sat, Mar 10, 2012 at 04:08:28PM -0500, Nick Sabalausky wrote:
[...]
 My Apple IIc literally had more responsive text entry than at least
 half of the textarea boxes on the modern web. Slowness is *not* a
 hardware issue anymore, and hasn't been for a long time.

Ugh. You remind me of the early releases of Mozilla, where loading the *UI* would slow my machine down to a crawl (if not to a literal stop). Never mind actually browsing. I stuck with Netscape 4 for as long as I could get away with, and then switched to Opera because it could do everything Mozilla did at 10 times the speed.

Sad to say, recent versions of Opera (and Firefox) have become massive memory and disk hogs. I still mainly use Opera because I like the interface better, but sometimes I have the misfortune of needing Firefox for some newfangled Javascript nonsense that the GUI team at my day job were arm-twisted into implementing by the PTBs. Running *both* Firefox and Opera simultaneously, with some heavy-duty Javascript going on in Firefox, routinely soaks up all RAM, hogs the disk at 99% usage, and renders the PC essentially unusable. Exiting one (or preferably both) of them immediately solves the problem.

And people keep talking about web apps and the browser as a "platform". Sigh.
 You know what *really* happens when you upgrade to a computer that's,
 say, twice as fast with twice as much memory? About 90% of the
 so-called "programmers" out there decide "Hey, now I can get away with
 my software being twice as slow and eat up twice as much memory! And
 it's all on *my user's* dime!" You're literally paying for programmer
 laziness.

Or worse, "Hey look! We can add goofy animations to every aspect of our UI to hog all CPU and memory, because users love eye-candy and will be induced to upgrade! We get a kickback from our hardware manufacturers and we sell more software without actually adding any new features! It's a win-win situation!"
 I just stick with software that isn't bloated. I get just as much
 speed, but without all that cost.

I'm constantly amazed by the amount of CPU and memory needed to run a *word processor*. I mean, really?! All of that just for pushing some characters around? And I thought word-processing had been a solved problem since the days of CP/M. Silly me.
 (Again, there are obviously exceptions, like video processing, DNA 
 processing, etc.)

And povray rendering. :-) Or computing the convex hull of high-dimensional polytopes. Or solving the travelling salesman problem. Or inverting very large matrices. Y'know, actual, *hard* problems. As opposed to fiddling with some pixels and pushing some characters around. T -- Tech-savvy: euphemism for nerdy.
Mar 10 2012
parent "Nick Sabalausky" <a a.a> writes:
"H. S. Teoh" <hsteoh quickfur.ath.cx> wrote in message 
news:mailman.439.1331415624.4860.digitalmars-d puremagic.com...
 On Sat, Mar 10, 2012 at 04:08:28PM -0500, Nick Sabalausky wrote:
 [...]
 My Apple IIc literally had more responsive text entry than at least
 half of the textarea boxes on the modern web. Slowness is *not* a
 hardware issue anymore, and hasn't been for a long time.

Ugh. You remind me of the early releases of Mozilla, where loading the *UI* would slow my machine down to a crawl (if not to a literal stop). Needless to say actually browsing. I stuck with Netscape 4 for as long as I could get away with, and then switched to Opera because it could do everything Mozilla did at 10 times the speed. Sad to say, recent versions of Opera (and Firefox) have become massive memory and disk hogs. I still mainly use Opera because I like the interface better,

I couldn't believe that Opera actually *removed* the native "skin" (even what a joke it was in the first place) in the latest versions. That's why my Opera installation is staying put at v10.62.

Which reminds me, I still need to figure out what domain it contacts to check whether or not to incessantly nag me about *cough* "upgrading" *cough*, so I can ban the damn thing via my hosts file.
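For what it's worth, the hosts-file ban itself is just a matter of pointing the offending domain at loopback; a sketch (the domain below is a placeholder, since whatever Opera actually phones home to would still have to be confirmed with a packet sniffer or proxy log first):

```
# /etc/hosts on Linux, or C:\Windows\System32\drivers\etc\hosts on Windows.
# Pointing the updater's host at loopback makes the version check fail
# silently, so the nag dialog never has anything to nag about.
127.0.0.1   update-check.example.com    # placeholder -- not Opera's real domain
```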
 And people keep talking about web apps and the browser as a "platform".
 Sigh.

Yea. There's even an entire company dedicated to pushing that moronic agenda (*and* tracking you like Big Brother). They're called "Microsoft Mark 2"...erm...wait...I mean "Google".
 You know what *really* happens when you upgrade to a computer that's,
 say, twice as fast with twice as much memory? About 90% of the
 so-called "programmers" out there decide "Hey, now I can get away with
 my software being twice as slow and eat up twice as much memory! And
 it's all on *my user's* dime!" You're literally paying for programmer
 laziness.

Or worse, "Hey look! We can add goofy animations to every aspect of our UI to hog all CPU and memory, because users love eye-candy and will be induced to upgrade!

Yes, seriously! "And let's not bother to allow anyone to disable the moronic UI changes even though we (*cough* Mozilla) *claim* to care about being super-configurable."
 We get a kickback from our hardware manufacturers
 and we sell more software without actually adding any new features! It's
 a win-win situation!"

That's one of the reasons I despise the modern-day Epic and Valve: *Complete* graphics whores (not to mention Microsoft sluts, particularly in Epic's case), and I don't believe for a second that what you've described isn't the exact nature of...what does Epic call it? Some sort of "Alliance" with NVIDIA and ATI that Epic was so *publicly* proud of. Fuck Cliffy, Sweeney, "Fat Fuck" Newell, et al. Shit, and Epic actually used to be pretty good back in their "Megagames" days.

Portal's great (honestly I hate myself for how much I *like* it ;) ), but seriously, it would be *so* much better with Wii controls instead of that dual-analog bullshit. But unlike modern game devs I'm not a graphics whore, so I don't give two shits about the tradeoff in visual quality ('Course that's still no free ride for the lazy crapjobs that were done with the Wii ports of Splinter Cell 4 and FarCry - it may not be a 360/PS3, but it sure as hell is no N64 or even any sub-XBox1 machine, as the "modern" gamedevs would have me believe).
 I just stick with software that isn't bloated. I get just as much
 speed, but without all that cost.

I'm constantly amazed by the amount of CPU and memory needed to run a *word processor*. I mean, really?! All of that just for pushing some characters around? And I thought word-processing has been solved since the days of CP/M. Silly me.

Ditto.
 (Again, there are obviously exceptions, like video processing, DNA
 processing, etc.)

And povray rendering. :-) Or computing the convex hull of high-dimensional polytopes. Or solving the travelling salesman problem. Or inverting very large matrices. Y'know, actual, *hard* problems. As opposed to fiddling with some pixels and pushing some characters around.

Yup. Although I like to count non-realtime 3D rendering under the "video processing" category even though the details are very, very different from transcoding or AfterEffects and such.
 Or solving the travelling salesman problem.

That's already been solved. Haven't you heard of eCommerce? j/k ;)
Mar 10 2012
prev sibling next sibling parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Sat, Mar 10, 2012 at 05:22:07PM -0500, Nick Sabalausky wrote:
 "H. S. Teoh" <hsteoh quickfur.ath.cx> wrote in message 
 news:mailman.437.1331414346.4860.digitalmars-d puremagic.com...
 True. But I found Linux far superior in terms of being usable
 on very old hardware.

There have been exceptions to that: About 10-12 years ago, GNOME (or at least Nautlus) and KDE were *insanely* bloated to the pount of making Win2k/XP seem ultra-lean.

Good thing I didn't use them then. :-) But if I were installing Linux on ancient hardware, I wouldn't even dream of installing KDE or GNOME. I mean, X11 itself is already a resource hog, never mind something built on top of it. I only installed X11 'cos I had to use a GUI browser. (I would've stuck with Lynx if it had been able to render tables the way elinks can today.)

T

-- 
"I'm not childish; I'm just in touch with the child within!" - RL
Mar 10 2012
prev sibling next sibling parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Sat, Mar 10, 2012 at 05:16:15PM -0500, Nick Sabalausky wrote:
 "H. S. Teoh" <hsteoh quickfur.ath.cx> wrote in message 
 news:mailman.439.1331415624.4860.digitalmars-d puremagic.com...

 Sad to say, recent versions of Opera (and Firefox) have become
 massive memory and disk hogs. I still mainly use Opera because I
 like the interface better,

I couldn't believe that Opera actually *removed* the native "skin" (even what a joke it was in the first place) in the latest versions. That's why my Opera installation is staying put at v10.62.

While I noticed that Opera's UI seems to be undergoing all sorts of overhauls recently, I didn't bother to find out what changed/didn't change. My custom setup disables most of the toolbars anyway, so I don't really notice. :-)
 Which reminds me, I still need to figure out what domain it contacts
 to check whether or not to incessantly nag me about *cough*
 "upgrading" *cough*, so I can ban the damn thing via my hosts file.

Umm... you *could* just point Opera at opera:config, then search for "Disable Opera Package AutoUpdate", y'know...
 And people keep talking about web apps and the browser as a
 "platform".  Sigh.

Yea. There's even an entire company dedicated to pushing that moronic agenda (*and* tracking you like Big Brother). They're called "Microsoft Mark 2"...erm...wait...I mean "Google".

lol... [...]
 We get a kickback from our hardware manufacturers and we sell more
 software without actually adding any new features! It's a win-win
 situation!"

That's one of the reasons I despise the modern-day Epic and Valve: *Complete* graphics whores (not to mention Microsoft sluts, particularly in Epic's case), and I don't believe for a second that what you've described isn't the exact nature of...what does Epic call it? Some sort of "Alliance" with NVIDIA and ATI that Epic was so *publicly* proud of. Fuck Cliffy, Sweeney, "Fat Fuck" Newell, et al. Shit, and Epic actually used to be pretty good back in their "Megagames" days.

I root for indie games. That's where the real creativity's at. Creativity has died in big-budget games years ago.
 Or solving the travelling salesman problem.

That's aready been solved. Haven't you heard of eCommerce? j/k ;)

lol! T -- Public parking: euphemism for paid parking. -- Flora
Mar 10 2012
parent reply "Nick Sabalausky" <a a.a> writes:
"H. S. Teoh" <hsteoh quickfur.ath.cx> wrote in message 
news:mailman.446.1331424217.4860.digitalmars-d puremagic.com...
 On Sat, Mar 10, 2012 at 05:16:15PM -0500, Nick Sabalausky wrote:
 Which reminds me, I still need to figure out what domain it contacts
 to check whether or not to incessantly nag me about *cough*
 "upgrading" *cough*, so I can ban the damn thing via my hosts file.

Umm... you *could* just point Opera at opera:config, then search for "Disable Opera Package AutoUpdate", y'know...

Ugh. If the authors of a GUI program can't be bothered to put an option in their own options menus, then that option may as well not exist. Why can't they learn that? I searched every inch of Opera's options screens and never found *any* mention or reference to any "Disable AutoUpdate" or "opera:config". What the fuck did they expect? Clairvoyance? Omniscience? Thanks for the tip, though.
 And people keep talking about web apps and the browser as a
 "platform".  Sigh.

Yea. There's even an entire company dedicated to pushing that moronic agenda (*and* tracking you like Big Brother). They're called "Microsoft Mark 2"...erm...wait...I mean "Google".

lol...

Heh :) I really do see modern Google as "the new microsoft" though, but just with less respect for personal privacy. (Heck, aren't half their employees former MS employees anyway?) I don't care how much they chant "Don't be evil", it's actions that count, not mantras. Hell, that's what happened to MS and Apple, too. *They* used to be the "Google" to IBM's "evil", and then they themselves became the new IBMs. That famous Apple II commercial is so depressingly ironic these days. Success changes corporations.
 [...]
 We get a kickback from our hardware manufacturers and we sell more
 software without actually adding any new features! It's a win-win
 situation!"

That's one of the reasons I despise the modern-day Epic and Valve: *Complete* graphics whores (not to mention Microsoft sluts, particularly in Epic's case), and I don't believe for a second that what you've described isn't the exact nature of...what does Epic call it? Some sort of "Alliance" with NVIDIA and ATI that Epic was so *publicly* proud of. Fuck Cliffy, Sweeney, "Fat Fuck" Newell, et al. Shit, and Epic actually used to be pretty good back in their "Megagames" days.

I root for indie games. That's where the real creativity's at. Creativity has died in big-budget games years ago.

Absolutely. And it's not just from the gamer's side, but from the developer's side too. I grew up wanting to join an id or an Apogee, Sierra, Sega, etc., but when I got to college twelve years ago, I looked at the state of the industry and decided "If I'm going to do games, it's going to be as an indie."

I hate the web dev I do, yet I still vastly prefer it to joining an EA or "yet another group of 'developers' who are really just trying to get into Pixar" or any of the smaller houses that Bobby Kotick and Activision are holding by the balls.
Mar 10 2012
parent "Nick Sabalausky" <a a.a> writes:
"Nick Sabalausky" <a a.a> wrote in message 
news:jjgtjn$1aqm$1 digitalmars.com...
 "H. S. Teoh" <hsteoh quickfur.ath.cx> wrote in message 
 news:mailman.446.1331424217.4860.digitalmars-d puremagic.com...
 I root for indie games. That's where the real creativity's at.
 Creativity has died in big-budget games years ago.

Absolutely. [...]

There are some rare but notable exceptions, though: The first three Splinter Cells are among my favorite games ever. As much as I hate Valve, I have to admit, both Portal games are phenomenal (although Portal itself is ultimately rooted in indie-land via Narbacular Drop). And Japan can still be relied on as much as ever to produce some very good games: MegaMan 9, Kororinpa, No More Heroes, Resident Wiivil 4, probably half the games Atlus publishes like 3D Dot Game Heroes, etc. (Although I guess many of those aren't really "big-budget". But they're not strictly indie either.)
Mar 10 2012
prev sibling next sibling parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Sat, Mar 10, 2012 at 08:01:11PM -0500, Nick Sabalausky wrote:
 "H. S. Teoh" <hsteoh quickfur.ath.cx> wrote in message 
 news:mailman.446.1331424217.4860.digitalmars-d puremagic.com...
 On Sat, Mar 10, 2012 at 05:16:15PM -0500, Nick Sabalausky wrote:
 Which reminds me, I still need to figure out what domain it
 contacts to check whether or not to incessantly nag me about
 *cough* "upgrading" *cough*, so I can ban the damn thing via my
 hosts file.

Umm... you *could* just point Opera at opera:config, then search for "Disable Opera Package AutoUpdate", y'know...

Ugh. If the authors of a GUI program can't be bothered to put an option in their own options menus, then that option may as well not exist. Why can't they learn that? I searched every inch of Opera's options screens and never found *any* mention or reference to any "Disable AutoUpdate" or "opera:config". What the fuck did they expect? Clairvoyance? Omniscience?

Yay! I'm clairvoyant! :-P

Seriously though, I suspect the reason for opera:config is to hide "dangerous" options from the "end users", but keep them available to geeks like you & me who like to tweak stuff most people don't even know exists. I can just imagine somebody filing an Opera bug that auto-update stopped working, when they were the ones who turned it off themselves. Can't say I agree with this approach, but that's the way things are, sad to say.

[...]
 Heh :) I really do see modern Google as "the new microsoft" though,
 but just with less respect for personal privacy. (Heck, aren't half
 their employees former MS employees anyway?) I don't care how much
 they chant "Don't be evil", it's actions that count, not mantras.
 
 Hell, that's what happened to MS and Apple, too. *They* used to be the
 "Google" to IBM's "evil", and then they themselves became the new
 IBMs. That famous Apple II commercial is so depressingly ironic these
 days. Success changes corporations.

Here's a quote for you: "Perhaps the most widespread illusion is that if we were in power we would behave very differently from those who now hold it---when, in truth, in order to get power we would have to become very much like them." -- Unknown T -- Recently, our IT department hired a bug-fix engineer. He used to work for Volkswagen.
Mar 10 2012
parent "Nick Sabalausky" <a a.a> writes:
"H. S. Teoh" <hsteoh quickfur.ath.cx> wrote in message 
news:mailman.449.1331429962.4860.digitalmars-d puremagic.com...
 Seriously though, I suspect the reason for opera:config is to hide
 "dangerous" options for the "end users", but keep it available to geeks
 like you & me who like to tweak stuff most people don't even know
 exists.

Yea, that's what buttons labeled "Advanced Options" are for. :/
Mar 10 2012
prev sibling next sibling parent Derek <ddparnell bigpond.com> writes:
 Ugh. If the authors of a GUI program can't be bothered to put an
 option in their own options menus, then that option may as well not
 exist. Why can't they learn that? I searched every inch of Opera's
 options screens and never found *any* mention or reference to any
 "Disable AutoUpdate" or "opera:config". What the fuck did they expect?
 Clairvoyance? Omniscience?


I found it in a minute. First I tried opera help and it directed me to details about auto-update, which showed how to disable it. It is in the normal UI place for such stuff. Tools -> Preferences -> Advanced -> Security -> Auto-Update. -- Derek Parnell Melbourne, Australia
Mar 10 2012
prev sibling next sibling parent "so" <so so.so> writes:
On Saturday, 10 March 2012 at 19:54:13 UTC, Jonathan M Davis 
wrote:

 LOL. I'm the complete opposite. I seem to end up upgrading my 
 computer every 2
 or 3 years. I wouldn't be able to stand being on an older 
 computer that long.
 I'm constantly annoyed by how slow my computer is no matter how 
 new it is.

No matter how much hardware you throw at it, somehow it gets slower and slower. New hardware can't keep up with the (ever-increasing) volume of badly written software. http://www.agner.org/optimize/blog/read.php?i=9
Mar 10 2012
prev sibling next sibling parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Sat, Mar 10, 2012 at 10:41:48PM -0800, Walter Bright wrote:
 On 3/10/2012 1:20 PM, H. S. Teoh wrote:
It's no fun at all if you had to wait 2 hours just to find out you
screwed up some parameters in your test render. Imagine if you had to
wait 2 hours to know the result of every 1 line code change.

2 hours? Man, you got good service. When I submitted my punched card decks, I'd be lucky to get a result the next day! (Yes, I did learn to program using punch cards. And to be fair, the programs were trivial compared with the behemoths we write today.)

And also today, the complexity of the compile/link process can lead to dainbramaged makefiles that sometimes fail to recompile a changed source, and the linker picks up leftover junk .o's from who knows how many weeks ago, causing heisenbugs that don't exist in the source code but persistently show up in the binary until you rm -rf the entire source tree, checkout a fresh copy from the repos, reapply your changes, and rebuild the whole thing from scratch. (And that's assuming that in the meantime somebody didn't check in something that doesn't compile, or that introduces new and ingenious ways of breaking the system.)

So perhaps the turnaround time has improved, but the frustration level has also increased. :-)

T

-- 
The best way to destroy a cause is to defend it poorly.
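The classic way this happens is a hand-written rule that lists only the .c file as a prerequisite, so edits to a header never trigger a recompile and the stale .o survives to confuse the linker; a minimal sketch (file names hypothetical) of the trap and the usual GCC/Clang-era fix:

```make
# The trap: foo.o really depends on foo.h as well, but make is never
# told, so editing the header leaves a stale foo.o for the linker:
#
#     foo.o: foo.c
#             cc -c foo.c
#
# The usual fix: have the compiler emit the header dependencies itself
# and pull them back into the makefile on the next run:
foo.o: foo.c
	cc -MMD -MP -c foo.c
-include foo.d    # contains "foo.o: foo.c foo.h", generated by -MMD
```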
Mar 10 2012
parent "Nick Sabalausky" <a a.a> writes:
"H. S. Teoh" <hsteoh quickfur.ath.cx> wrote in message 
news:mailman.456.1331450402.4860.digitalmars-d puremagic.com...
 On Sat, Mar 10, 2012 at 10:41:48PM -0800, Walter Bright wrote:
 On 3/10/2012 1:20 PM, H. S. Teoh wrote:
It's no fun at all if you had to wait 2 hours just to find out you
screwed up some parameters in your test render. Imagine if you had to
wait 2 hours to know the result of every 1 line code change.

2 hours? Man, you got good service. When I submitted my punched card decks, I'd be lucky to get a result the next day! (Yes, I did learn to program using punch cards. And to be fair, the programs were trivial compared with the behemoths we write today.)

And also today, the complexity of the compile/link process can lead to dainbramaged makefiles that sometimes fail to recompile a changed source, and the linker picks up leftover junk .o's from who knows how many weeks ago, causing heisenbugs that don't exist in the source code but persistently show up in the binary until you rm -rf the entire source tree, checkout a fresh copy from the repos, reapply your changes, and rebuild the whole thing from scratch. (And that's assuming that in the meantime somebody didn't check in something that doesn't compile, or that introduces new and ingenious ways of breaking the system.)

*cough*DMD*cough*
Mar 11 2012
prev sibling next sibling parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Sun, Mar 11, 2012 at 04:12:12AM -0400, Nick Sabalausky wrote:
 "so" <so so.so> wrote in message 
 news:pzghdzojddybajuguxwa forum.dlang.org...

 No matter how much hardware you throw at it, somehow it gets slower
 and slower.  New hardware can't keep up with (ever increasing)
 writing bad software.

 http://www.agner.org/optimize/blog/read.php?i=9

That is a *FANTASTIC* article. Completely agree, and it's very well-written.

I really liked the point about GUIs. Many resources are used for graphical elements that only have aesthetic value and tend to distract the user from actual work. IOW, you're wasting CPU, RAM, and disk time (which comes from spending lots of hard-earned cash for that expensive hardware upgrade) just for some silly eye-candy that has no value whatsoever except to distract from the task at hand, that is, to accomplish what you set out to do in the first place.

That's why I use ratpoison as my WM. Who needs title bars with fancy colored icons, gradient shading, and *shadows*?! I mean, c'mon. You're trying to get work done, not admire how clever the UI designers were and how cool a color gradient is. If I wanted to admire eye-candy, I'd be playing computer games, not working. (That said, though, I did at one point have a Compiz installation for the sole purpose of showing off Linux to clueless people. :-P)

Then the points about background processes, auto-updates, and boot-up times. These are things about Windows that consistently drive me up the wall. Background processes are all nice and good as long as they are (1) necessary, and (2) don't do stupid things like hog your CPU or thrash your disk every 12 seconds. But the way Windows works, every time you install something, it insists on starting up at boot-time, incessantly checking for auto-updates every 12 seconds, downloading crap from online without your knowledge, and THEN popping up those intrusive, distracting, and utterly annoying "Plz Update Meeee!" dialogs. Ugh. Every time I see one of those dialogs I have this urge to delete the app and expunge all traces of it from the system with extreme prejudice.

At least on Linux you can turn off this crap and/or otherwise prevent it from doing stupid things. But on Windows you have no choice. Attempting to disable stuff usually breaks said apps, or affects system usability in some way.
 That's actually one of the reasons I like to *not* use higher-end
 hardware.  Every programmer in the world, no exceptions, has a natural
 tendency to target the hardware they're developing on. If you're
 developing on high-end hardware, your software is likely to end up
 requiring high-end hardware even without your noticing. If you're
 developing on lower-end hardware, your software is going to run well
 on fucking *everything*.

True. I suppose it's a good thing at my day job that we don't get free upgrades. Whatever was current when we first got the job is whatever we have today. It does have a certain value to it, in that we notice how idiotically long it takes to compile the software we're working on, and how much CPU and RAM a particular ludicrously-long linker command-line eats up at a certain point in the build (which, not too surprisingly, is the linking of the GUI component). It does provide a disincentive against doing more stupid things to make this worse.

Now if only everyone (particularly the people working on the GUI component :-P) had 5-year-old development machines, perhaps that ludicrously-long linker command would never have existed in the first place. Well, I can dream. :-)

T

-- 
Ignorance is bliss... but only until you suffer the consequences!
Mar 11 2012
parent "Nick Sabalausky" <a a.a> writes:
"H. S. Teoh" <hsteoh quickfur.ath.cx> wrote in message 
news:mailman.478.1331478431.4860.digitalmars-d puremagic.com...
 On Sun, Mar 11, 2012 at 04:12:12AM -0400, Nick Sabalausky wrote:
 "so" <so so.so> wrote in message
 news:pzghdzojddybajuguxwa forum.dlang.org...

 No matter how much hardware you throw at it, somehow it gets slower
 and slower.  New hardware can't keep up with (ever increasing)
 writing bad software.

 http://www.agner.org/optimize/blog/read.php?i=9

That is a *FANTASTIC* article. Completely agree, and it's very well-written.

I really liked the point about GUIs. Many resources are used for graphical elements that only have aesthetic value AND tends to distract the user from actual work. IOW, you're wasting CPU, RAM, and disk time (which comes from spending lots of hard-earned cash for that expensive hardware upgrade) just for some silly eye-candy that has no value whatsoever except to distract from the task at hand, that is, to accomplish what you set out to do in the first place. That's why I use ratpoison as my WM. Who needs title bars with fancy colored icons, gradient shading, and *shadows*?! I mean, c'mon.

Actually, I rather like my black windows with "dark-blue to light-purple" gradient title bars. I've been using that scheme for years and I don't think I'll ever change it: http://www.semitwist.com/download/img/shots/myColorScheme.png
 You're
 trying to get work done, not admire how clever the UI designers were and
 how cool a color gradient is. If I wanted to admire eye-candy, I'd be
 playing computer games, not working. (That said, though, I did at one
 point have a Compiz installation for the sole purpose of showing off
 Linux to clueless people. :-P)

Before I upgraded my Linux box to Kubuntu 10.04, it was Ubuntu...umm...something before 10.04, and although I'm not normally a UI-eye-candy guy, I fell in love with the window physics effect while dragging windows. And it was properly hardware-accelerated, so it worked very fast even on an old 32-bit single-core. My brother, who had recently gotten a Mac laptop (although he's now become the third member of my family who's gotten fed up with Apple) saw it and exclaimed "I want jelly windows!" But then in Kubuntu 10.04, the effect no longer works (or maybe it just doesn't work with hardware acceleration, I don't remember now), so I had to give it up :(

'Course, I'm more than ready to give up KDE itself now. Move to something like Trinity or LXDE or XFCE. And Debian 6. Canonical just keeps getting crazier and crazier. I don't want their new Linux-based iOS of an operating system.

OTOH, Debian's "versioning" system is irritatingly moronic. Squeeze, Wheezy, wtf? They don't even have any natural ordering, for god's sake! At least Ubuntu's moronic names have *that* much going for them! I don't care what Pixar character my OS is pretending to be, and I don't *want* to care.
 Then the points about background processes, auto-updates, and boot-up
 times. These are things about Windows that consistently drive me up the
 wall. Background processes are all nice and good as long as they are (1)
 necessary, and (2) don't do stupid things like hog your CPU or thrash
 your disk every 12 seconds. But the way Windows works, every time you
 install something, it insists on starting up at boot-time, incessantly
 checking for auto-updates every 12 seconds, downloading crap from online
 without your knowledge, and THEN pop up those intrusive, distracting,
 and utterly annoying "Plz Update Meeee!" dialogs. Ugh. Everytime I see
 one of those dialogs I have this urge to delete the app and expunge all
 traces of it from the system with extreme prejudice.

 At least on Linux you can turn off this crap and/or otherwise prevent it
 from doing stupid things. But on Windows you have no choice. Attempting
 to disable stuff usually breaks said apps, or affects system usability
 in some way.

I just avoid those programs (and immediately disable the upgrade nag screens). For example, I will *not* allow Safari or Chrome to even *touch* my computer. (When I want to test a page in Chrome, I use SRWare Iron instead. If SRWare Iron ever goes away, then Chrome users will be on their own when viewing my pages.)
 -- 
 Ignorance is bliss... but only until you suffer the consequences!

So very true :)
Mar 11 2012
prev sibling next sibling parent "so" <so so.so> writes:
On Sunday, 11 March 2012 at 19:22:04 UTC, Nick Sabalausky wrote:

 I just avoid those programs (and immediately disable the 
 upgrade nag
 screens). For example, I will *not* allow Safari or Chrome to 
 even *touch*
 my computer. (When I want to test a page in Chrome, I use 
 SRWare Iron
 instead. If SRWare Iron ever goes away, then Chrome users will 
 be on their
 own when viewing my pages.)

http://code.google.com/p/smoothgestures-chromium/issues/detail?id=498
Mar 11 2012
prev sibling next sibling parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Sun, Mar 11, 2012 at 03:20:34PM -0400, Nick Sabalausky wrote:
[...]
 'Course, I'm more than ready to give up KDE itself now. Move to
 something like Trinity or LXDE or XFCE.

Way ahead of you there. ;-) I'm already using a mouseless WM, and thinking of replacing ratpoison with something even more radical. Something that not only doesn't need the mouse, but *eradicates* all need for the mouse on virtually all applications. Something that maps keystrokes to geometry, so that you point using your keyboard. The screen would divide into regions mapped to certain keys, and certain key sequences would subdivide regions, so you can virtually point at anything just by hitting 3-4 keys.

Furthermore, due to X11's root window allowing the WM to scan pixels, the region subdivisions can auto-snap to high-contrast boundaries, so you're actually subdividing based on visually distinct regions like text lines or buttons, etc., rather than just a blind coordinates subdivision (which will require unreasonable amounts of keystrokes to point accurately).

(Though at the rate I'm going, I don't know when I'm ever going to have the time to actually sit down and write a WM. So maybe this is just a really wild impractical pipe dream. :-P)

The mouse still has its place, of course, for when you *actually* need it, like drawing free-hand curves and stuff like that.
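That region-subdivision scheme is easy to prototype outside of a WM. Here's a minimal sketch; the 3x3 key layout, the `point` helper, and the hard-coded 1920x1080 screen are all illustrative assumptions, and a real implementation would also snap region edges to high-contrast boundaries as described above.

```python
# Keyboard-driven pointing by recursive screen subdivision (sketch).
# Each keystroke picks one cell of a 3x3 grid laid over the current
# region, shrinking it 9x; a few keystrokes pin down any pixel.

KEYS = "qweasdzxc"  # row-major 3x3 layout: q=top-left ... c=bottom-right

def point(keys, width=1920, height=1080):
    """Return the (x, y) centre of the region selected by `keys`."""
    x0, y0, w, h = 0.0, 0.0, float(width), float(height)
    for k in keys:
        row, col = divmod(KEYS.index(k), 3)
        w /= 3          # each keystroke splits both axes in three
        h /= 3
        x0 += col * w   # move the origin into the chosen cell
        y0 += row * h
    return round(x0 + w / 2), round(y0 + h / 2)

# Three keystrokes already narrow 1920x1080 down to a ~71x40 region.
print(point("qsd"))  # (391, 180)
```

With auto-snapping to visually distinct regions, far fewer keystrokes than this blind worst case would usually be needed.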
 And Debian 6. Canonincal just keeps getting crazier and crazier. I
 don't want their new Linux-based iOS of an operating system. OTOH,
 Debian's "versioning" system is irritationly moroninc.  Squeeze,
 wheeze, wtf? They don't even have any natural ordering for god's sake!
 At least Ubuntu's moronic names have *that* much going for them! I
 don't care what Pixar character my OS is pretending to be, and I don't
 *want* to care.

I'm a Debian developer, actually. Though I've been so busy with other stuff that I haven't really done anything worth mentioning for the last long while.

To me, Debian only ever has 3 versions: oldstable, stable, and unstable. Every Debian "release" is really just a rollover of unstable into stable, and stable into oldstable. I don't even remember those silly Pixar names or their correlation with actual version numbers (actually, I find the version numbers quite meaningless).

"Unstable" is really a misnomer... it's generally a lot more stable than, say, your typical Windows XP installation. (But YMMV... remember I don't use a real desktop environment or any of the stuff that "common people" use.) I see it more as "current release" than "unstable", actually. The *real* unstable is "experimental", which only fools who like crashing their system every other month would run. Stable is for those freaks who run mission-critical servers, whose life depends on the servers being almost impossible to crash. So really, for PCs, most people just run "unstable".

So in my mind, I never even think about "Debian 6" or "Debian 5" or whatever arbitrary number they want to call it. I run "unstable" at home and "stable" on a remote server, and it's as simple as that. T -- If it tastes good, it's probably bad for you.
Mar 11 2012
prev sibling next sibling parent "Adam D. Ruppe" <destructionator gmail.com> writes:
On Monday, 12 March 2012 at 01:28:39 UTC, H. S. Teoh wrote:
 Something that not only doesn't need the mouse, but 
 *eradicates* all
 need for the mouse on virtually all applications.

It isn't what you described, but in X11, if you hit shift+numlock, it toggles a mode that lets you move the cursor and click by using the numpad keys.
Mar 11 2012
prev sibling next sibling parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Mon, Mar 12, 2012 at 02:31:44AM +0100, Adam D. Ruppe wrote:
 On Monday, 12 March 2012 at 01:28:39 UTC, H. S. Teoh wrote:
Something that not only doesn't need the mouse, but *eradicates* all
need for the mouse on virtually all applications.

It isn't what you described, but in X11, if you hit shift+numlock, it toggles a mode that lets you move the cursor and click by using the numpad keys.

There is that, but that's still just a mouse in disguise. In fact, it's worse than a mouse, 'cos now you're pushing the mouse with buttons instead of just sweeping it across the mouse pad with your hand. I'm talking about a sort of quadtree-type spatial navigation where you zero in on the target position in logarithmic leaps, rather than a "linear" cursor displacement. T -- INTEL = Only half of "intelligence".
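The "logarithmic leaps" point can be made concrete with a little arithmetic (the function name and numbers here are purely illustrative): if each keystroke splits the current region k ways along an axis, reaching single-pixel precision on an n-pixel axis takes ceil(log_k n) presses.

```python
import math

def keystrokes_needed(pixels, splits_per_axis):
    """Keystrokes to reach single-pixel precision on one axis when each
    keystroke divides the axis by `splits_per_axis` (3 for a 3x3 grid)."""
    return math.ceil(math.log(pixels, splits_per_axis))

# A 3x3 grid pins down any pixel of a 1920x1080 screen in 7 presses
# (3^7 = 2187 >= 1920), versus up to ~1920 taps of linear cursor motion.
print(keystrokes_needed(1920, 3))  # 7
print(keystrokes_needed(1080, 3))  # 7
```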
Mar 11 2012
prev sibling next sibling parent Marco Leise <Marco.Leise gmx.de> writes:
Am Sun, 11 Mar 2012 04:12:12 -0400
schrieb "Nick Sabalausky" <a a.a>:

 I think it's a shame that companies hand out high-end hardware to their 
 developers like it was candy. There's no doubt in my mind that's 
 significantly contributed to the amount of bloatware out there.

But what if the developers themselves use bloated software, like Eclipse, or have slow compilation processes, like big C++ programs? Then the high-end hardware is a net productivity increase. But yeah, I sometimes think about keeping some old notebook around to test on it - not to use it for development.

Actually, sometimes you may want to debug your code with a very large data set. So you end up on the other side of the extreme: your computer has too little RAM to run some real-world application of your software.

As for the article: the situation with automatic updates was worse than it is now - Adobe, Apple and the others have learned and added the option to disable most of the background processing. The developments in the web sector are interesting under that aspect. High quality videos and several scripting/VM languages make most older computers useless for tabbed browsing :D -- Marco
Mar 12 2012
prev sibling parent reply Marco Leise <Marco.Leise gmx.de> writes:
 I searched every inch of Opera's options screens and never 
 found *any* mention or reference to any "Disable AutoUpdate"

"Derek" <ddparnell bigpond.com> wrote in message news:op.wazmllu534mv3i red-beast...
 I found it in a minute. First I tried opera help and it directed me to 
 details about auto-update, which showed how to disable it. It is in the 
 normal UI place for such stuff.

   Tools -> Preferences -> Advanced -> Security -> Auto-Update.

Am Sat, 10 Mar 2012 23:44:20 -0500 schrieb "Nick Sabalausky" <a a.a>:
 They stuck it under "Security"? No wonder I couldn't find it. That's like 
 putting "blue" under "shapes". :/

So much for every inch ...and false accusations. You made my day! ;) -- Marco
Mar 12 2012
parent "Nick Sabalausky" <a a.a> writes:
"Marco Leise" <Marco.Leise gmx.de> wrote in message 
news:20120312124959.2ef8eb86 marco-leise.homedns.org...
 I searched every inch of Opera's options screens and never
 found *any* mention or reference to any "Disable AutoUpdate"

"Derek" <ddparnell bigpond.com> wrote in message news:op.wazmllu534mv3i red-beast...
 I found it in a minute. First I tried opera help and it directed me to
 details about auto-update, which showed how to disable it. It is in the
 normal UI place for such stuff.

   Tools -> Preferences -> Advanced -> Security -> Auto-Update.

Am Sat, 10 Mar 2012 23:44:20 -0500 schrieb "Nick Sabalausky" <a a.a>:
 They stuck it under "Security"? No wonder I couldn't find it. That's like
 putting "blue" under "shapes". :/

So much for every inch ...and false accusations. You made my day! ;)

Yup. You've got me there! (I had thought that I had, but I'm not sure if that works for or against me ;) )
Mar 12 2012
prev sibling next sibling parent "so" <so so.so> writes:
On Saturday, 10 March 2012 at 09:17:37 UTC, Jonathan M Davis 
wrote:

 And actually, when _responsiveness_ is one of the key features 
 that a desktop
 OS requires, a stop-the-world GC in a desktop would probably be 
 _worse_ than
 one in a server.

My point is, every operation, even a mouse movement, is already a stop-the-world event for all the "modern" operating systems I have encountered, and Linux manages to take this to ridiculous levels.
Mar 10 2012
prev sibling next sibling parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Sat, Mar 10, 2012 at 10:30:46AM +0100, so wrote:
 On Saturday, 10 March 2012 at 09:17:37 UTC, Jonathan M Davis wrote:
 
And actually, when _responsiveness_ is one of the key features that a
desktop OS requires, a stop-the-world GC in a desktop would probably
be _worse_ than one in a server.

My point is, every operation, even a mouse movement is already a stop-the-world event for all the "modern" operating systems i have encountered and Linux manages to take this to ridiculous levels.

Huh??! Since when is mouse movement a stop-the-world event on Linux? I've been using Linux for the past 15 years and have never seen such a thing. T -- Some ideas are so stupid that only intellectuals could believe them. -- George Orwell
Mar 10 2012
prev sibling next sibling parent "Adam D. Ruppe" <destructionator gmail.com> writes:
On Saturday, 10 March 2012 at 15:19:15 UTC, H. S. Teoh wrote:
 Since when is mouse movement a stop-the-world event on Linux?

It's a hardware interrupt. They all work that way. You have to give a lot of care to handling them very quickly and not letting them stack up (lest the whole system freeze).
Mar 10 2012
prev sibling next sibling parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Sat, Mar 10, 2012 at 04:23:43PM +0100, Adam D. Ruppe wrote:
 On Saturday, 10 March 2012 at 15:19:15 UTC, H. S. Teoh wrote:
Since when is mouse movement a stop-the-world event on Linux?

It's a hardware interrupt. They all work that way. You have to give a lot of care to handling them very quickly and not letting them stack up (lest the whole system freeze).

Sure, but I've never seen a problem with that. T -- This is a tpyo.
Mar 10 2012
prev sibling next sibling parent "Adam D. Ruppe" <destructionator gmail.com> writes:
On Saturday, 10 March 2012 at 15:27:00 UTC, H. S. Teoh wrote:
 Sure, but I've never seen a problem with that.

I used to, back on Windows 95. You could work that computer hard enough that the poor thing just couldn't keep up.
Mar 10 2012
prev sibling next sibling parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Sat, Mar 10, 2012 at 04:35:26PM +0100, "Jérôme M. Berger" wrote:
 Adam D. Ruppe wrote:
 On Saturday, 10 March 2012 at 15:19:15 UTC, H. S. Teoh wrote:
 Since when is mouse movement a stop-the-world event on Linux?

It's a hardware interrupt. They all work that way. You have to give a lot of care to handling them very quickly and not letting them stack up (lest the whole system freeze).

So? It's not stop-the-world. While one core is handling the interrupt, the other(s) is(are) still running. A stop-the-world GC would need to block all threads on all cores while running.

Jerome

PS: This is nothing restricted to Linux. Windows, MacOS X and the *BSDs have the same behaviour.

OK, clearly I wasn't understanding what the OP was talking about. It *seemed* to imply that Linux had stop-the-world problems with mouse movement, but this isn't the case.

A hardware interrupt is a hardware interrupt. Whatever OS you're using, it's got to stop to handle this somehow. I don't see how else you can do this. When the hardware needs to signal the OS about something, it's gotta do it somehow. And hardware often requires top-priority stop-the-world handling, because it may not be able to wait a few milliseconds before it's handled. It's not like software that generally can afford to wait for a period of time.

As for Win95 being unable to keep up with mouse movement... well, to be honest I hated Win95 so much that 90% of the time I was in the DOS prompt anyway, so I didn't even notice this. If it were truly a problem, it's probably a sign of poor hardware interrupt handling (interrupt handler is taking too long to process events). But I haven't seen this myself either. T -- Lottery: tax on the stupid. -- Slashdotter
Mar 10 2012
prev sibling next sibling parent "so" <so so.so> writes:
On Saturday, 10 March 2012 at 15:27:00 UTC, H. S. Teoh wrote:
 On Sat, Mar 10, 2012 at 04:23:43PM +0100, Adam D. Ruppe wrote:
 On Saturday, 10 March 2012 at 15:19:15 UTC, H. S. Teoh wrote:
Since when is mouse movement a stop-the-world event on Linux?

It's a hardware interrupt. They all work that way. You have to give a lot of care to handling them very quickly and not letting them stack up (lest the whole system freeze).

Sure, but I've never seen a problem with that.

Neither have the OS developers, especially when they are on 999kTB of RAM and 1-billion-core processors.
Mar 10 2012
prev sibling next sibling parent "so" <so so.so> writes:
On Saturday, 10 March 2012 at 16:22:41 UTC, H. S. Teoh wrote:

 OK, clearly I wasn't understanding what the OP was talking 
 about. It
 *seemed* to imply that Linux had stop-the-world problems with 
 mouse
 movement, but this isn't the case.

 A hardware interrupt is a hardware interrupt. Whatever OS 
 you're using,
 it's got to stop to handle this somehow. I don't see how else 
 you can do
 this. When the hardware needs to signal the OS about something, 
 it's
 gotta do it somehow. And hardware often requires top-priority
 stop-the-world handling, because it may not be able to wait a 
 few
 milliseconds before it's handled. It's not like software that 
 generally
 can afford to wait for a period of time.

 As for Win95 being unable to keep up with mouse movement... 
 well, to be
 honest I hated Win95 so much that 90% of the time I was in the 
 DOS
 prompt anyway, so I didn't even notice this. If it were truly a 
 problem,
 it's probably a sign of poor hardware interrupt handling 
 (interrupt
 handler is taking too long to process events). But I haven't 
 seen this
 myself either.

Design of input handling, the theoretical part, is irrelevant. I was solely talking about how they do it in practice. OSs are simply unresponsive, and in Linux it is more severe. If I am having this issue in practice, it doesn't matter whether it was the GC lock or another failure to handle input.
Mar 10 2012
prev sibling next sibling parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Sat, Mar 10, 2012 at 06:41:24PM +0100, so wrote:
 On Saturday, 10 March 2012 at 15:27:00 UTC, H. S. Teoh wrote:
On Sat, Mar 10, 2012 at 04:23:43PM +0100, Adam D. Ruppe wrote:
On Saturday, 10 March 2012 at 15:19:15 UTC, H. S. Teoh wrote:
Since when is mouse movement a stop-the-world event on Linux?

It's a hardware interrupt. They all work that way. You have to give a lot of care to handling them very quickly and not letting them stack up (lest the whole system freeze).

Sure, but I've never seen a problem with that.

Neither the OS developers, especially when they are on 999kTB ram and 1billion core processors.

Um... before my recent upgrade (about a year ago), I had been using a 500MB (or was it 100MB?) RAM machine running a 10-year-old processor. And before *that*, it was a 64MB (or 32MB?) machine running a 15-year-old processor...

Then again, I never believed in the desktop metaphor, and have never seriously used Gnome or KDE or any of that fluffy stuff. I was on VTWM until I decided ratpoison (a mouseless WM) better suited the way I worked. T -- Who told you to swim in Crocodile Lake without life insurance??
Mar 10 2012
prev sibling next sibling parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Sat, Mar 10, 2012 at 06:49:02PM +0100, so wrote:
 On Saturday, 10 March 2012 at 16:22:41 UTC, H. S. Teoh wrote:

As for Win95 being unable to keep up with mouse movement... well, to
be honest I hated Win95 so much that 90% of the time I was in the DOS
prompt anyway, so I didn't even notice this. If it were truly a
problem, it's probably a sign of poor hardware interrupt handling
(interrupt handler is taking too long to process events). But I
haven't seen this myself either.

Design of input handling, the theoretical part, is irrelevant. I was solely talking about how they do it in practice. OSs are simply unresponsive, and in Linux it is more severe. If I am having this issue in practice, it doesn't matter whether it was the GC lock or another failure to handle input.

Then you must be running a very different Linux from the one I use. In my experience, it's Windows that's an order of magnitude less responsive due to constant HD thrashing (esp. on bootup, and then periodically thereafter) and too much eye-candy.

(Then again, I don't use graphics-heavy UIs... on Linux you can turn most of it off, and I do, but on Windows you have no choice. So perhaps it's more a measure of how I configured my system than anything else. I tried doing this in Windows once, and let's just say that I'll never, ever, even _dream_ of attempting it again, it was that painful.) T -- I'm still trying to find a pun for "punishment"...
Mar 10 2012
parent bearophile <bearophileHUGS lycos.com> writes:
H. S. Teoh:

 (Then again, I don't use graphics-heavy UIs... on Linux you can turn
 most of it off, and I do, but on Windows you have no choice.

In Windows there is a very very easy way to disable all eye candy and most UI sugar, to produce a snappy graphics interface even on low powered laptops, that looks like Windows95 :-) Bye, bearophile
Mar 10 2012
prev sibling next sibling parent "so" <so so.so> writes:
On Saturday, 10 March 2012 at 17:51:28 UTC, H. S. Teoh wrote:

 Um... before my recent upgrade (about a year ago), I had been 
 using a
 500MB (or was it 100MB?) RAM machine running a 10-year-old 
 processor.
 And before *that*, it was a 64MB (or 32MB?) machine running a
 15-year-old processor...

 Then again, I never believed in the desktop metaphor, and have 
 never
 seriously used Gnome or KDE or any of that fluffy stuff. I was 
 on VTWM
 until I decided ratpoison (a mouseless WM) better suited the 
 way I
 worked.

I am also using light window managers. Most of the time only tmux and gvim are running. I tried many WMs, but if you are using one frequently and don't like falling back to Windows and such, you need a WM that works seamlessly with GUIs. Gimp is one. (You might not believe in the desktop, but how would you use a program like Gimp?) Now most of the tiling WMs suck at handling that kind of thing. Using xmonad now; at least it has a little better support.
Mar 10 2012
prev sibling parent reply deadalnix <deadalnix gmail.com> writes:
On 09/03/2012 23:32, Walter Bright wrote:
 This statement is from Linus Torvalds about breaking binary compatibility:

 https://lkml.org/lkml/2012/3/8/495

 While I don't think we need to worry so much at the moment about
 breaking binary compatibility with new D releases, we do have a big
 problem with breaking source code compatibility.

 This is why we need to have a VERY high bar for breaking changes.

I think Linus is mostly right. But I don't think this is a reason not to put the bar very high, especially with the current state of D. This is more about providing a nice, and long, transition process. The way property evolved is a good example of what we should do.

An example of managing change (even breaking change) is PHP. PHP has improved so much, if you consider what changed between v4 and v5, and then v5 and v5.3, that it is factual proof that breaking changes can be done to great benefit.
Mar 11 2012
parent reply "Nick Sabalausky" <a a.a> writes:
"deadalnix" <deadalnix gmail.com> wrote in message 
news:jjif2l$1cdi$1 digitalmars.com...
 On 09/03/2012 23:32, Walter Bright wrote:
 This statement is from Linus Torvalds about breaking binary 
 compatibility:

 https://lkml.org/lkml/2012/3/8/495

 While I don't think we need to worry so much at the moment about
 breaking binary compatibility with new D releases, we do have a big
 problem with breaking source code compatibility.

 This is why we need to have a VERY high bar for breaking changes.

I think Linus is mostly right. But I don't think this is a reason not put the bar very high. Especially with the current state of D. This is more about providing a nice, and long, transition process. The way property evolve is a good example of what we should do. An example of management the change (even breaking change) is PHP. PHP has so much improved if you consider what have changed between v4 and v5, and then v5 and v5.3, that it is a factual proof that breaking change can be done to great benefit.

PHP could have handled the changes MUCH better than it did, though.
Mar 11 2012
parent reply "Nick Sabalausky" <a a.a> writes:
"Nick Sabalausky" <a a.a> wrote in message 
news:jjiv40$2aba$1 digitalmars.com...
 "deadalnix" <deadalnix gmail.com> wrote in message 
 news:jjif2l$1cdi$1 digitalmars.com...
 An example of management the change (even breaking change) is PHP. PHP 
 has so much improved if you consider what have changed between v4 and v5, 
 and then v5 and v5.3, that it is a factual proof that breaking change can 
 be done to great benefit.

PHP could have handled the changes MUCH better than it did, though.

Ermm, the transition, I mean.
Mar 11 2012
parent reply deadalnix <deadalnix gmail.com> writes:
On 11/03/2012 20:39, Nick Sabalausky wrote:
 "Nick Sabalausky"<a a.a>  wrote in message
 news:jjiv40$2aba$1 digitalmars.com...
 "deadalnix"<deadalnix gmail.com>  wrote in message
 news:jjif2l$1cdi$1 digitalmars.com...
 An example of management the change (even breaking change) is PHP. PHP
 has so much improved if you consider what have changed between v4 and v5,
 and then v5 and v5.3, that it is a factual proof that breaking change can
 be done to great benefit.

PHP could have handled the changes MUCH better than it did, though.

Ermm, the transition, I mean.

The point wasn't to say that what PHP did is perfect, but that they led change and were successful at it. This clearly shows that it is possible; they DID it.
Mar 11 2012
parent "Nick Sabalausky" <a a.a> writes:
"deadalnix" <deadalnix gmail.com> wrote in message 
news:jjj9bm$2t00$4 digitalmars.com...
 On 11/03/2012 20:39, Nick Sabalausky wrote:
 "Nick Sabalausky"<a a.a>  wrote in message
 news:jjiv40$2aba$1 digitalmars.com...
 "deadalnix"<deadalnix gmail.com>  wrote in message
 news:jjif2l$1cdi$1 digitalmars.com...
 An example of management the change (even breaking change) is PHP. PHP
 has so much improved if you consider what have changed between v4 and 
 v5,
 and then v5 and v5.3, that it is a factual proof that breaking change 
 can
 be done to great benefit.

PHP could have handled the changes MUCH better than it did, though.

Ermm, the transition, I mean.

The point wasn't to say that what PHP did is perfect. But that they lead change and are successful at it. This clearly show that this is possible, they DID it.

Right, I agree.
Mar 11 2012